It is common for many accounts to share the same code - at the database
level, code is stored by hash, meaning only one copy exists per unique
program, but when loaded in memory, a copy is made for each account.
Further, every time we execute the code, it must be scanned for invalid
jump destinations, which slows down EVM execution.
Finally, the `extcodesize` call causes code to be loaded even if only the
size is needed.

This PR improves on all these points by introducing a shared
`CodeBytesRef` type whose code section is immutable and which can be
shared between accounts. Further, a dedicated `len` API call is added so
that the `EXTCODESIZE` opcode can operate without polluting the GC and
code cache when only the size is requested - in this case rocksdb will
cache the code itself in its row cache, so looking up the code itself
remains fast when the length is asked for first.

With 16k code entries, there's a 90% hit rate, which rises to 99% during
the 2.3M attack - the cache significantly lowers memory consumption and
execution time not only during this event but across the board.
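A minimal sketch of the idea (only `CodeBytesRef` and `len` are named
above; the field names, the `analyseJumpDests` helper and `init` are
hypothetical stand-ins, not the actual implementation):

```nim
type
  CodeBytesRef* = ref object
    ## Shared, immutable code object: one instance per unique program,
    ## referenced by every account that uses that code.
    bytes: seq[byte]
    validJumpDests: seq[bool]   # computed once, reused on every execution

proc analyseJumpDests(code: openArray[byte]): seq[bool] =
  ## Mark valid JUMPDEST (0x5b) positions, skipping PUSH immediates, so
  ## execution does not have to rescan the code every time it runs.
  result = newSeq[bool](code.len)
  var i = 0
  while i < code.len:
    let op = code[i]
    if op == 0x5b'u8:
      result[i] = true
    elif op >= 0x60'u8 and op <= 0x7f'u8:   # PUSH1..PUSH32
      i += int(op) - 0x5f                   # skip the immediate bytes
    inc i

proc init*(T: type CodeBytesRef, code: seq[byte]): T =
  T(bytes: code, validJumpDests: analyseJumpDests(code))

proc len*(c: CodeBytesRef): int =
  ## Size-only query for the EXTCODESIZE path: no copy, no re-analysis.
  c.bytes.len
```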
* Disable `TransactionID` related functions from `state_db.nim`
why:
Functions `getCommittedStorage()` and `updateOriginalRoot()` from
the `state_db` module are not used anywhere. Emulating the legacy
`TransactionID` type functionality is administratively expensive to
provide with `Aristo` (the legacy DB version is only partially
implemented, anyway).
As there is no other place where `TransactionID`s are used, they will
not be provided by the `Aristo` variant of the `CoreDb`. For the
legacy DB API, nothing will change.
* Fix copyright headers in source code
* Get rid of compiler warning
* Update Aristo code, remove unused `merge()` variant, export `hashify()`
why:
Adapt to upcoming `CoreDb` wrapper
* Remove synced tx feature from `Aristo`
why:
+ This feature allowed synchronising transaction methods like begin,
commit, and rollback for a group of descriptors.
+ The feature is over-engineered and not needed for `CoreDb`, nor is it
complete (some convergence features are missing).
* Add debugging helpers to `Kvt`
also:
Update the database iterator, adding a count variable to the yield
arguments, similar to `Aristo` (see the sketch below).
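A minimal sketch of the counted-yield pattern (the `walkPairs` name and
the use of a plain `Table` are made-up stand-ins, not the `Kvt` API):

```nim
import std/tables

# Hypothetical key-value walker in the spirit described above: the
# iterator yields a running count alongside each (key, value) pair.
iterator walkPairs(tab: Table[string, string]): (int, string, string) =
  var n = 0
  for key, val in tab.pairs:
    yield (n, key, val)
    inc n

when isMainModule:
  let tab = {"a": "1", "b": "2"}.toTable
  for n, key, val in walkPairs(tab):
    echo n, ": ", key, " -> ", val
```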
* Provide optional destructors for `CoreDb` API
why:
For the upcoming Aristo wrapper, this allows controlling when certain
smart destruction and update steps can take place. The auto destructor
works fine in general when the storage/cache strategy is known and
acceptable at the time the descriptor is created (see the sketch below).
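A rough sketch of the idea with purely illustrative names (this is not
the `CoreDb` signature): the descriptor is normally cleaned up
automatically, but an explicit destructor lets the caller decide when
resources are released.

```nim
type
  DbDesc = ref object
    cache: seq[byte]   # whatever the descriptor keeps around
    open: bool

proc newDbDesc(): DbDesc =
  DbDesc(cache: newSeq[byte](1024), open: true)

proc forget*(d: DbDesc) =
  ## Optional, explicit destructor: release resources now rather than
  ## leaving it to automatic clean-up.
  if d.open:
    d.cache.setLen(0)
    d.open = false

when isMainModule:
  let d = newDbDesc()
  # ... use the descriptor ...
  d.forget()   # the caller decides when destruction takes place
```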
* Add update option for `CoreDb` API function `hash()`
why:
The hash function is typically used to get the state root of the MPT.
Due to lazy hashing, this might not be available on the `Aristo` DB,
so the `update` option requests re-hashing of the current state changes
if needed (see the sketch below).
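A self-contained sketch of the behaviour (the `MptDb` type and its
fields are hypothetical; `hashify()` is the Aristo call mentioned
earlier):

```nim
type
  MptDb = ref object
    dirty: bool          # pending changes not yet folded into the root
    stateRoot: string    # stand-in for the real root-hash type

proc hashify(db: MptDb) =
  ## Stand-in for Aristo's `hashify()`: re-hash the current state changes.
  db.dirty = false
  db.stateRoot = "recomputed-root"

proc hash*(db: MptDb, update = false): string =
  ## With `update = true`, make sure lazy hashing has caught up before
  ## returning the state root; otherwise return whatever is cached.
  if update and db.dirty:
    db.hashify()
  db.stateRoot
```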
* Update API tracking log mode: `info` => `debug`
* Use shared `Kvt` descriptor in new Ledger API
why:
No need to create a new descriptor all the time
* move fork constants, rename errors
* Move vm/utils to vm/interpreter/utils
* initial opcode refactoring
* Add refactored Comparison & Bitwise Logic Operations
* Add sha3 and address, simplify macro, support pop 0
* balance, origin, caller, callValue
* fix copy opcodes' gas costs, add callDataLoad/Size/Copy, CodeSize/Copy and gas price opcode
* Update with 30s, 40s, 50s opcodes + impl of balance + stack improvement
* add push, dup, swap, log, create and call operations
* finish opcode implementation
* Add the new dispatching logic
* Pass the opcode test
* Make test_vm_json compile
* halt execution without exceptions for Return, Revert, selfdestruct (fix #62)
* Properly catch and recover from EVM exceptions (stack underflow ...)
* Fix byte op
* Fix jump regressions
* Update for latest devel, don't import old dispatch code as quasiBoolean macro is broken by latest devel
* Fix sha3 regression on empty memory slice and until end of range slice
* Fix padding / range error on expXY_success (gas computation left)
* update logging procs
* Add tracing - expXY_success is not a regression, sload stub was accidentally passing the test
* Reuse the same stub as OO implementation
* Delete previous opcode implementation
* Delete object oriented fork code
* Delete exceptions that were used as control flows
* delete base.nim 🔥, yet another OO remnant
* Delete opcode table
* Enable computed gotos and compile-time gas fees
* Revert const gasCosts -> generates SIGSEGV
* inline push, swap and dup opcodes
* loggers are now templates again, why does this pass the new tests?
* Trigger CI rebuild after rocksdb fix https://github.com/status-im/nim-rocksdb/pull/5
* Address review comment on "push" + VMTests in debug mode (not release)
* Address review comment: don't tag fork by default, make opcode impl grepable
* Static compilation fixes after rebasing
* fix the initialization of the VM database
* add a missing import
* Deactivate balance and sload test following #59
* Reactivate stack check (deactivated in #59, necessary to pass tests)
* Merge remaining opcodes implementation from #59
* Merge callDataLoad and codeCopy fixes, todo simplify see #67
* Move and cleanup interpreter files - prepare for redesign of VM
* fix call comment about recursive dependencies
* memory: use a template again and avoid (?) a cstring -> string conversion
* Fix stack test regression
* Fix recursive dependency on logging_ops, test_vm_json compiles but regression :/
* Fix signextend regression
* Fix 3 signed tests and the sha3 test