We do a linear scan of all pubkeys for each validator and slot - this
becomes expensive with large validator counts.
* normalise BN/VC validator startup logging
* fix crash when host cannot be resolved while adding remote validator
* silence repeated log spam for unknown validators
* print the pubkey/index/activation mapping on startup and on validator
identification
By pre-seeding the sync committee cache when applying blocks, we avoid
an expensive validator set traversal / sync committee index
construction during sync / block application - a 20-30% sync speedup
post-altair.
* also cache/reload total active balance for another cool 10%
The `/etc/os-release` file exists in most distributions and can be
easily read in Bash by sourcing it:
```
> docker run --rm -it debian:bullseye
root@2f5d6e038738:/# grep '^ID=' /etc/os-release
ID=debian
```
```
> docker run --rm -it ubuntu:22.04
root@316b572b6e4d:/# grep '^ID=' /etc/os-release
ID=ubuntu
```
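Since `os-release` is a plain `KEY=value` file, it can also be sourced
directly instead of grepped - a minimal sketch:
```
# Source /etc/os-release and use the ID variable it defines.
. /etc/os-release
echo "Running on: ${ID}"
```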
The dependency on the `lsb-release` tool is unnecessary and pulls in
additional large dependencies like `python3`:
```
# apt show lsb-release | grep Depends
Depends: python3:any, distro-info-data
```
Used in a Docker container, this would make the image unnecessarily big.
Signed-off-by: Jakub Sokołowski <jakub@status.im>
Otherwise installation in Docker containers fails with:
```
...
Adding new user `nimbus' (UID 101) with group `nimbus' ...
Not creating home directory `/home/nimbus'.
/var/lib/dpkg/info/nimbus-beacon-node.postinst: 39: systemctl: not found
dpkg: error processing package nimbus-beacon-node (--configure):
installed nimbus-beacon-node package post-installation script subprocess returned error exit status 127
Errors were encountered while processing:
nimbus-beacon-node
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
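A common way to keep the postinst working in systemd-less environments
such as Docker containers is to guard the `systemctl` calls - a rough
sketch, with the service name and calls purely illustrative rather than
the actual script contents:
```
# Only talk to systemd when systemctl exists and the system was
# actually booted with systemd (not the case in a plain container).
if command -v systemctl >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
  systemctl daemon-reload
  systemctl enable nimbus-beacon-node.service
fi
```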
Signed-off-by: Jakub Sokołowski <jakub@status.im>
While syncing the finalized portion of the chain, the execution client
cannot efficiently sync and most of the time returns `SYNCING` - in this
PR, we use CL-verified optimistic sync as long as the block is claimed
to be finalized, only occasionally updating the EL with progress.
Although a peer might lie about what is finalized and what isn't,
eventually we'll call the execution client - thus, all a dishonest
client can do is delay execution verification slightly. Gossip blocks in
particular are never assumed to be finalized.
Extends the fork choice state to also track slot numbers, improving the
accuracy of the `/eth/v1/debug/fork_choice` endpoint. Auto-enable this
API on devnets, and disable some extra checks on devnets to aid focused
testing efforts. Align fork choice pruning logic with the API
(checkpoints rather than roots).
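For reference, once the REST API is enabled (e.g. with `--rest`), the
endpoint can be queried along these lines, assuming the default
`localhost:5052` listen address:
```
# Dump the node's fork choice state; jq is only used for pretty-printing.
curl -s http://localhost:5052/eth/v1/debug/fork_choice | jq .
```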
* clean up some Nim 1.2 workarounds
* re-add notes about JS backend
* another proc/noSideEffect -> func
* revert ncli/ncli_common.nim changes; 19969 evidently wasn't backported to 1.6
To allow LC data retention longer than that of historic states,
introduce persistent DB caches for `current_sync_committee` and
`LightClientHeader` for finalized epoch boundary blocks.
This way, historic `LightClientBootstrap` requests may still be honored
even after pruning. Note that historic `LightClientUpdate` requests are
already answered using fully persisted objects, so they don't need changes.
Sync committees and headers are cached on finalization of new data.
For existing data, info is lazily cached on first access.
Co-authored-by: Jacek Sieka <jacek@status.im>
The "on" default for validator monitor details incurs a heavy
performance penalty on large-validator setups - this may cause excess
memory usage or slowdowns when metrics are queried - this PR changes the
default to off, as was intended for the 23.1.0 release.
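Setups that still want the detailed per-validator metrics can
presumably opt back in explicitly - assuming the option is named
`--validator-monitor-details`:
```
# Re-enable detailed per-validator monitoring (flag name assumed,
# other options omitted).
nimbus_beacon_node --validator-monitor-details=true
```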