With the introduction of layered frames, each database lookup may result in hundreds of table lookups as the frame stack is traversed. This change restores performance by introducing snapshots that limit the lookup depth, at the expense of slightly increased memory usage.

The snapshot contains the cumulative changes of all ancestors and the frame itself, allowing the lookup recursion to stop as soon as a snapshot is encountered. The number of snapshots to keep in memory is a tradeoff between lookup performance and memory usage - this change starts with a simple strategy of keeping snapshots for head frames (approximately).

The snapshot is created during checkpointing, ie after block validation, to make sure that it's cheap to start verifying blocks - parent snapshots are moved to the descendant as part of checkpointing, which effectively means that head frames hold snapshots in most cases. The outcome of this tradeoff is that applying a block to a known head is fast while creating a new branch of history remains expensive. Another consequence is that when persisting changes to disk, we must re-traverse the stack of changes to build a cumulative set of changes to be persisted.

A future strategy might be to keep additional "keyframes" along the way, eg one per epoch - this would bound the "branch creation" cost to a constant factor, but the memory overhead should be considered first. Another strategy might be to avoid keeping snapshots for non-canonical branches, especially as they become older and thus less likely to be branched from.

* `level` is updated to work like a temporary serial number so that a frame maintains its relative position in the sort order as frames are persisted
* a `snapshot` is added to some TxFrame instances - the snapshot collects all ancestor changes up to and including the given frame, and `level` is used as a marker to prune the snapshot of changes that have already been persisted
* stack traversals for the purpose of lookup stop when they encounter a snapshot - this bounds the lookup depth to the first encountered snapshot (see the sketches below)

After this PR, sync performance lands at about 2-3 blocks per second (a ~10x improvement) - this is quite reasonable when compared with block import, which skips the expensive state root verification and thus achieves ~20 blk/s on the same hardware. Additional work to bring live syncing performance in line with disk-based block import would focus on reducing the cost of state root verification.
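To make the bounded lookup concrete, here is a minimal Nim sketch - not the actual aristo code. `TxFrame`, `level`, and `snapshot` come from the description above; `Key`, `Value`, `layer`, and `lookup` are hypothetical stand-ins.

```nim
import std/[tables, options]

type
  Key = string    # stand-in for the real key type
  Value = string  # stand-in for the real value type

  TxFrame = ref object
    parent: TxFrame           # nil for the base frame
    level: int                # temporary serial number (see above)
    layer: Table[Key, Value]  # changes made in this frame only
    # Cumulative changes of all ancestors and this frame, if present:
    snapshot: Option[Table[Key, Value]]

proc lookup(frame: TxFrame, key: Key): Option[Value] =
  ## Walk the frame stack from newest to oldest; a snapshot bounds
  ## the traversal depth.
  var f = frame
  while f != nil:
    if key in f.layer:
      return some(f.layer[key])
    if f.snapshot.isSome:
      let snap = f.snapshot.get()
      if key in snap:
        return some(snap[key])
      # The snapshot already covers every ancestor change, so the
      # traversal can stop here (the real code would fall through
      # to the database at this point).
      return none(Value)
    f = f.parent
  none(Value)
```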
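Checkpointing, continuing the same sketch (and reusing its types): the parent's snapshot is moved to the descendant when one exists, so in the common case only head frames hold snapshots and the expensive rebuild only happens when branching off into a new line of history. `checkpoint` is a hypothetical name.

```nim
proc checkpoint(frame: TxFrame) =
  ## Give the new head frame a cumulative snapshot after block validation.
  var snap: Table[Key, Value]
  if frame.parent != nil and frame.parent.snapshot.isSome:
    # Common case: take over (and clear) the parent's snapshot, so that
    # snapshots effectively live at the head frames.
    snap = frame.parent.snapshot.get()
    frame.parent.snapshot = none(Table[Key, Value])
  else:
    # Branching off mid-history: rebuild the cumulative state, stopping
    # at the first ancestor snapshot if there is one - this is the case
    # that remains expensive.
    var stack: seq[TxFrame]
    var f = frame.parent
    while f != nil:
      if f.snapshot.isSome:
        snap = f.snapshot.get()  # covers f and all of f's ancestors
        break
      stack.add f
      f = f.parent
    for i in countdown(stack.high, 0):  # apply oldest first
      for k, v in stack[i].layer:
        snap[k] = v
  # Layer this frame's own changes on top of the ancestor state.
  for k, v in frame.layer:
    snap[k] = v
  frame.snapshot = some(snap)
```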
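Persisting, again continuing the sketch: because snapshots live at the heads, writing to disk re-traverses the stack to rebuild the cumulative change set. In the real code, `level` additionally marks already-persisted changes so they can be pruned from surviving snapshots; that bookkeeping is omitted here, and `db` is a hypothetical stand-in for the on-disk store.

```nim
proc persist(db: var Table[Key, Value], head: TxFrame) =
  ## Collect the cumulative changes from base to head and write them
  ## out in one batch.
  var stack: seq[TxFrame]
  var f = head
  while f != nil:
    stack.add f
    f = f.parent
  for i in countdown(stack.high, 0):  # apply oldest first
    for k, v in stack[i].layer:
      db[k] = v
```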