nimbus-eth1/nimbus/core/clique/clique_helpers.nim


# Nimbus
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
##
## Tools & Utils for Clique PoA Consensus Protocol
## ===============================================
##
## For details see
## `EIP-225 <https://github.com/ethereum/EIPs/blob/master/EIPS/eip-225.md>`_
## and
## `go-ethereum <https://github.com/ethereum/go-ethereum>`_
##
import
  std/[algorithm, times],
  ../../constants,
  ../../utils/utils,
  ./clique_defs,
  eth/[common, rlp],
  stew/[objects, results],
  stint

type
  EthSortOrder* = enum
    EthDescending = SortOrder.Descending.ord
    EthAscending = SortOrder.Ascending.ord

{.push raises: [].}
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
func zeroItem[T](t: typedesc[T]): T =
  discard

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc isZero*[T: EthAddress|Hash256|Duration](a: T): bool =
  ## `true` if `a` is all zero
  a == zeroItem(T)

proc sorted*(e: openArray[EthAddress]; order = EthAscending): seq[EthAddress] =
  ## Sort the address list byte-wise (i.e. lexicographically), ascending by
  ## default.
  proc eCmp(x, y: EthAddress): int =
    for n in 0 ..< x.len:
      if x[n] < y[n]:
        return -1
      elif y[n] < x[n]:
        return 1
  e.sorted(cmp = eCmp, order = order.ord.SortOrder)
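
# Usage sketch (informal; the address values are made up for illustration
# only). Per EIP-225, checkpoint signer lists are expected in ascending
# byte-wise order, which is what the default `EthAscending` produces:
#
#   var a, b: EthAddress
#   a[0] = 0x02
#   b[0] = 0x01
#   doAssert [a, b].sorted == @[b, a]                 # ascending by default
#   doAssert [a, b].sorted(EthDescending) == @[a, b]
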
proc cliqueResultErr*(w: CliqueError): CliqueOkResult =
  ## Return error result (syntactic sugar)
  err(w)

proc extraDataAddresses*(extraData: Blob): seq[EthAddress] =
  ## Extract signer addresses from extraData header field
  proc toEthAddress(a: openArray[byte]; start: int): EthAddress =
    toArray(EthAddress.len, a[start ..< start + EthAddress.len])

  if EXTRA_VANITY + EXTRA_SEAL < extraData.len and
     ((extraData.len - (EXTRA_VANITY + EXTRA_SEAL)) mod EthAddress.len) == 0:
    var addrOffset = EXTRA_VANITY
    while addrOffset + EthAddress.len <= extraData.len - EXTRA_SEAL:
      result.add extraData.toEthAddress(addrOffset)
      addrOffset += EthAddress.len
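
# Informal layout sketch: the byte counts below restate EIP-225 (the concrete
# constants live in ./clique_defs) and are given here for orientation only.
#
#   extraData = <EXTRA_VANITY (32) bytes vanity>
#            ++ <N * 20 bytes signer addresses, checkpoint blocks only>
#            ++ <EXTRA_SEAL (65) bytes secp256k1 seal signature>
#
# `extraDataAddresses` returns the N embedded addresses, or an empty sequence
# when the field length does not match this layout.
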
# core/types/block.go(343): func (b *Block) WithSeal(header [..]
proc withHeader*(b: EthBlock; header: BlockHeader): EthBlock =
  ## New block with the data from `b` but the header replaced with the
  ## argument one.
  EthBlock(header: header,
           txs:    b.txs,
           uncles: b.uncles)

# ------------------------------------------------------------------------------
# Seal hash support
# ------------------------------------------------------------------------------
# clique/clique.go(730): func encodeSigHeader(w [..]
proc encodeSealHeader*(header: BlockHeader): seq[byte] =
  ## Cut the signature off the `extraData` header field and take the new
  ## `baseFee` field for EIP-1559 into account.
  doAssert EXTRA_SEAL < header.extraData.len

  var rlpHeader = header
  rlpHeader.extraData.setLen(header.extraData.len - EXTRA_SEAL)
  rlp.encode(rlpHeader)

# clique/clique.go(688): func SealHash(header *types.Header) common.Hash {
proc hashSealHeader*(header: BlockHeader): Hash256 =
  ## Returns the hash of a block prior to it being sealed.
  header.encodeSealHeader.keccakHash
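
# Informal note on how the seal relates to this hash (signature recovery
# itself lives in other clique modules; the snippet below is descriptive,
# not part of this module's API):
#
#   let sealHash = header.hashSealHeader
#   # The last EXTRA_SEAL (65) bytes of `header.extraData` hold the
#   # secp256k1 signature over `sealHash`, so the block signer can be
#   # recovered from that (hash, signature) pair.
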
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------