When bumping a submodule to a commit more recent than the configured
`branch`, the lint error message is currently confusing:
```
fatal: error processing shallow info: 4
Submodule 'vendor/nim-chronos': Failed to fetch 'master':
```
This happens when the selected commit is more recent than the latest
one on the `branch`. Comparing the commit dates allows emitting a
clearer message.
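A minimal sketch of the date comparison, via `git show -s --format=%ct`
from Python (the helper names and message wording are illustrative, not
the actual lint script):
```
import subprocess

def commit_timestamp(repo: str, rev: str) -> int:
    """Committer date of `rev` as a unix timestamp."""
    out = subprocess.run(
        ["git", "-C", repo, "show", "-s", "--format=%ct", rev],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())

def bump_error(repo: str, bumped: str, branch: str) -> str:
    # Hypothetical lint helper: if the bumped commit is newer than the
    # branch tip, say so instead of surfacing the raw fetch failure.
    if commit_timestamp(repo, bumped) > commit_timestamp(repo, f"origin/{branch}"):
        return f"commit {bumped} is more recent than the tip of '{branch}'"
    return f"commit {bumped} is not on branch '{branch}'"
```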
The CI lint check failed when bumping to a commit outside the default
shallow range. Deepen the checkout through the bumped commit's date to
ensure history is available for the ancestry check.
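One way to do the deepening (a sketch reusing the `commit_timestamp`
helper above, and assuming the bumped commit object is already
available locally):
```
import datetime
import subprocess

def deepen_through(repo: str, bumped_ts: int, branch: str) -> None:
    """Deepen a shallow clone so history back to the bumped commit's
    date is available for the ancestry check."""
    since = datetime.datetime.fromtimestamp(bumped_ts, tz=datetime.timezone.utc)
    # --shallow-since extends the shallow boundary to the given date;
    # a day of slack could be added for commits right on the boundary.
    subprocess.run(
        ["git", "-C", repo, "fetch",
         f"--shallow-since={since:%Y-%m-%d}", "origin", branch],
        check=True,
    )
```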
Running the lint checks separately allows the tests to verify code
correctness even when targeting non-master branches or when copyright
headers are outdated.
Currently, CI only tests against the status `version-1-6` branch.
Update it to test against the commit selected by the submodule lock,
as well as against the latest upstream `version-1-6`.
It sometimes happens that a submodule is bumped to a PR commit instead
of one on the corresponding canonical branch (as registered in `.gitmodules`).
Because we typically use `squash`, that PR commit can subsequently
become unreachable, randomly breaking the build of `nimbus-eth2`.
Prevent these accidents by only allowing submodule bumps to commits
on the branch registered in `.gitmodules`. On private branches, simply
update `.gitmodules` to match the personal dev branch.
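The guard itself can be a simple ancestry test against the branch
registered in `.gitmodules` (a sketch; the actual check lives in the CI
lint script):
```
import subprocess

def is_canonical(repo: str, commit: str, branch: str) -> bool:
    """True if `commit` is reachable from origin/<branch>, i.e. the
    bump does not point at a (potentially squashed-away) PR commit."""
    res = subprocess.run(
        ["git", "-C", repo, "merge-base", "--is-ancestor",
         commit, f"origin/{branch}"],
    )
    # exit 0: ancestor; exit 1: not an ancestor; anything else is an error
    if res.returncode not in (0, 1):
        res.check_returncode()
    return res.returncode == 0
```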
Instead of comparing against the current base branch head, use the
common ancestor (merge-base) of the PR and the base branch to avoid
false positives when the year was bumped in the base branch but not yet
merged into the PR.
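A sketch of the merge-base comparison (the base branch name here is an
assumption):
```
import subprocess

def files_changed_in_pr(repo: str, base: str = "origin/unstable") -> list[str]:
    """Diff against the common ancestor of the PR and the base branch,
    not the (possibly newer) base branch head."""
    fork_point = subprocess.run(
        ["git", "-C", repo, "merge-base", base, "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", fork_point, "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
```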
The `SAFE_SLOTS_TO_UPDATE_JUSTIFIED` constant is no longer used as the
bouncing attack fix was removed:
https://github.com/ethereum/consensus-specs/pull/3290
Note: some test networks still define the constant; the config value is
ignored for now until those networks no longer use it.
When only submodules were bumped and no other changes are committed,
`git diff` returns an empty list and `grep` exits with status 1.
Suppress the `grep` error to prevent a CI failure in that case.
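In a shell pipeline the usual fix is appending `|| true` after the
`grep`; the same tolerance in a Python sketch (the `vendor/` filter and
diff range are illustrative):
```
import subprocess

def non_submodule_changes() -> list[str]:
    diff = subprocess.run(
        ["git", "diff", "--name-only", "HEAD^", "HEAD"],
        check=True, capture_output=True, text=True,
    )
    grep = subprocess.run(
        ["grep", "-v", "^vendor/"],          # illustrative filter
        input=diff.stdout, capture_output=True, text=True,
    )
    if grep.returncode not in (0, 1):        # 1 only means "no matches"
        grep.check_returncode()
    return grep.stdout.split()
```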
* era: load blocks and states
Era files contain finalized history and can be thought of as an
alternative source for block and state data that allows clients to avoid
syncing this information from the P2P network - the P2P network is then
used to "top up" the client with the most recent data. They can be
freely shared in the community via whatever means (http, torrent, etc)
and serve as a permanent cold store of consensus data (and, after the
merge, execution data) for history buffs and bean counters alike.
This PR gently introduces support for loading blocks and states in two
cases: block requests from rest/p2p, and front-filling when doing
checkpoint sync.
The era files are used as a secondary source if the information is not
found in the database - compared to the database, there are a few key
differences:
* the database stores the block indexed by block root while the era file
indexes by slot - the former is used only in rest, while the latter is
used both by p2p and rest.
* when loading blocks from era files, the root is no longer trivially
available - if it is needed, it must either be computed (slow) or cached
(messy) - the good news is that for p2p requests, it is not needed
* in era files, "framed" snappy encoding is used while in the database
we store unframed snappy - for p2p requests, the latter requires
recompression while the former could avoid it (see the e2store reader
sketch after this list)
* front-filling is the process of using era files to replace backfilling
- in theory this front-filling could happen from any block and
front-fills with gaps could also be entertained, but our backfilling
algorithm cannot take advantage of this because there's no (simple) way
to tell it to "skip" a range.
* front-filling, as implemented, is a bit slow (10s to load mainnet): we
load the full BeaconState for every era to grab the roots of the blocks
- it would be better to partially load the state - as such, it would
also be good to be able to partially decompress snappy blobs
* lookups from REST via root are served by first looking up a block
summary in the database, then using the slot to load the block data from
the era file - however, there needs to be an option to create the
summary table from era files to fully support historical queries
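For orientation, a minimal reader for the record framing, based on the
e2store format documented in nimbus-eth2's docs/e2store.md (type
constants per that spec; payload decoding, i.e. the framed snappy, is
left out):
```
import struct

# e2store record types (see docs/e2store.md in nimbus-eth2)
VERSION = b"e2"                    # 0x65 0x32
COMPRESSED_SIGNED_BLOCK = b"\x01\x00"
COMPRESSED_STATE = b"\x02\x00"
SLOT_INDEX = b"i2"                 # 0x69 0x32

def read_entries(path: str):
    """Yield (type, payload) for each record: the 8-byte header is a
    2-byte type, a 4-byte little-endian length and 2 reserved bytes."""
    with open(path, "rb") as f:
        while len(header := f.read(8)) == 8:
            typ = header[:2]
            (length,) = struct.unpack("<I", header[2:6])
            yield typ, f.read(length)

# Usage sketch: count the blocks in one era file
# n = sum(1 for t, _ in read_entries("example.era")
#         if t == COMPRESSED_SIGNED_BLOCK)
```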
To test this, `ncli_db` has an era file exporter: the files it creates
should be placed in an `era` folder next to `db` in the data directory.
What's interesting in particular about this setup is that `db` remains
as the source of truth for security purposes - it stores the latest
synced head root which in turn determines where a node "starts" its
consensus participation - the era directory however can be freely shared
between nodes / people without any (significant) security implications,
assuming the era files are consistent / not broken.
There are lots of future improvements to be had:
* we can drop the in-memory `BlockRef` index almost entirely - at this
point, resident memory usage of Nimbus should drop to a cool 500-600 MB
* we could serve era files via REST trivially: this would drop backfill
times to whatever time it takes to download the files - unlike the
current implementation that downloads block by block, downloading an era
at a time almost entirely cuts out request overhead
* we can "reasonably" recreate detailed state history from almost any
point in time, turning an O(slot) process into O(1) effectively - we'll
still need caches and indices to do this with sufficient efficiency for
the rest api, but at least it cuts the whole process down to minutes
instead of hours, for arbitrary points in time
* CI: ignore failures with Nim-1.6 (temporary)
* test fixes
Co-authored-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>