nimbus-eth1/nimbus/db/opts.nim

# Nimbus
# Copyright (c) 2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

{.push raises: [].}

import results
export results

const
  # https://github.com/facebook/rocksdb/wiki/Setup-Options-and-Basic-Tuning
  defaultMaxOpenFiles* = 2048
  defaultWriteBufferSize* = 64 * 1024 * 1024
  defaultRowCacheSize* = 0
    ## The row cache is disabled by default as the rdb lru caches do a better
    ## job at a similar abstraction level - ie they work at the same granularity
    ## as the rocksdb row cache but with less overhead
  defaultBlockCacheSize* = 1024 * 1024 * 1024 * 5 div 2
    ## The block cache is used to cache indices, ribbon filters and
    ## decompressed data, roughly in that priority order. At the time of writing
    ## we have about 2 giga-entries in the MPT - with the ribbon filter
    ## using about 8 bits per entry, we need 2gb of space just for the filters.
    ##
    ## When the filters don't fit in memory, random access patterns such as
    ## MPT root computations suffer because of filter evictions and subsequent
    ## re-reads from file.
    ##
    ## A bit of space on top of the filter is left for data block caching
  defaultRdbVtxCacheSize* = 512 * 1024 * 1024
    ## Cache of branches and leaves in the state MPTs (world and account)
  defaultRdbKeyCacheSize* = 256 * 1024 * 1024
    ## Hashes of the above
  defaultRdbBranchCacheSize* = 1024 * 1024 * 1024
    ## Cache of branch nodes in the state MPTs (world and account)
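
# Illustrative sanity check (a sketch, not upstream code) of the arithmetic in
# the block cache comment above: ~2 giga-entries at ~8 bits per entry is ~2gb
# of ribbon filters, so the 2.5gb default leaves roughly 0.5gb of headroom for
# index and data blocks.
static:
  let filterBytes = 2_000_000_000 * 8 div 8 # entries * bits-per-entry, as bytes
  doAssert defaultBlockCacheSize > filterBytes,
    "block cache should fit the ribbon filters with room to spare"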

type DbOptions* = object # Options that are transported to the database layer
  maxOpenFiles*: int
  writeBufferSize*: int
  rowCacheSize*: int
  blockCacheSize*: int
  rdbVtxCacheSize*: int
  rdbKeyCacheSize*: int
  rdbBranchCacheSize*: int
  rdbPrintStats*: bool

func init*(
    T: type DbOptions,
    maxOpenFiles = defaultMaxOpenFiles,
    writeBufferSize = defaultWriteBufferSize,
    rowCacheSize = defaultRowCacheSize,
    blockCacheSize = defaultBlockCacheSize,
    rdbVtxCacheSize = defaultRdbVtxCacheSize,
    rdbKeyCacheSize = defaultRdbKeyCacheSize,
    rdbBranchCacheSize = defaultRdbBranchCacheSize,
    rdbPrintStats = false,
): T =
  T(
    maxOpenFiles: maxOpenFiles,
    writeBufferSize: writeBufferSize,
    rowCacheSize: rowCacheSize,
    blockCacheSize: blockCacheSize,
    rdbVtxCacheSize: rdbVtxCacheSize,
    rdbKeyCacheSize: rdbKeyCacheSize,
    rdbBranchCacheSize: rdbBranchCacheSize,
    rdbPrintStats: rdbPrintStats,
  )
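
# Minimal usage sketch (hypothetical caller): every field defaults to the
# corresponding constant above and can be overridden individually.
when isMainModule:
  let opts = DbOptions.init(maxOpenFiles = 4096)
  doAssert opts.maxOpenFiles == 4096
  doAssert opts.writeBufferSize == defaultWriteBufferSize
  doAssert not opts.rdbPrintStats
  echo "block cache: ", opts.blockCacheSize div (1024 * 1024), " MiB"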