Sync: Move `blockchain_sync` code and use it with `eth/65`
Move `blockchain_sync.nim` from `nim-eth` to `nimbus-eth1`.
This lets `blockchain_sync` use the `eth/65` protocol to synchronise with
more modern peers than before.
Practically, the effect is that the sync process runs more quickly and more
reliably than before: it finds usable peers, and they are up to date.
Note, this is mostly old code, and it mostly performs "classic sync", the
original Ethereum method. Here's a summary of this code (a minimal sketch
follows the list):
- It decides on a canonical chain head by sampling a few peers.
- Starting from block 0 (genesis), it downloads each block header and block
body, mostly in order.
- After it downloads each block, it executes the EVM transactions in that
block and updates the state trie from them, before moving to the next block.
- This way the database state is updated by EVM execution in block order,
and the new state is persisted to the trie database after each block.
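To make that loop concrete, here is a minimal, self-contained Nim sketch of
the classic-sync cycle; `fetchHeader`, `fetchBody`, `executeBlock` and
`persistState` are hypothetical stand-ins for the real peer requests and the
EVM/trie code in nimbus-eth1, not functions in this repository.

```nim
# Hypothetical stand-ins; the real code issues GetBlockHeaders/GetBlockBodies
# requests and runs the EVM against the trie database.
type
  Header = object
    number: uint64
  Body = object
    txCount: int

proc fetchHeader(n: uint64): Header = Header(number: n)  # GetBlockHeaders
proc fetchBody(h: Header): Body = Body(txCount: 0)       # GetBlockBodies
proc executeBlock(h: Header, b: Body) = discard          # run the block's EVM transactions
proc persistState(n: uint64) = discard                   # write the updated trie to the database

proc classicSync(head: uint64) =
  ## Walk from genesis to the sampled head, strictly in block order.
  for n in 0'u64 .. head:
    let header = fetchHeader(n)
    let body = fetchBody(header)
    executeBlock(header, body)
    persistState(n)

when isMainModule:
  classicSync(8)  # toy run over nine "blocks"
```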
Even though it mentions Geth "fast sync" (in comments near the end of the
file), and has some elements of it, it isn't really fast sync. The most
obvious missing part is that this code _doesn't download a state trie_; it
calculates all state from block 0.
Geth "fast sync" has several parts (an illustrative outline follows the
list):
1. Find an agreed common chain among several peers to treat as probably
secure, and a sufficiently long suffix of it to provide "statistical
economic consensus" once it is validated.
2. Perform a subset of the PoW calculations, skipping forward over a segment
and verifying some of the PoWs according to a pattern in the relevant paper.
3. Download the state trie from the block at the start of that last segment.
4. Execute only the blocks/transactions in that last segment, using the
downloaded state trie, to fill in the later states and properly validate
the blocks in the last segment.
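For comparison only, here is a purely illustrative Nim outline of those four
phases, with stub procs (`findCommonChain`, `verifyPoWSample`,
`downloadStateTrie`, `executeSegment`) standing in for work that
`blockchain_sync` does not actually implement.

```nim
type
  Segment = object
    first, last: uint64  # block number range of the final segment

# Stubs only; none of these exist in blockchain_sync.
proc findCommonChain(): Segment = Segment(first: 1_000_000, last: 1_000_064)
proc verifyPoWSample(s: Segment) = discard      # spot-check PoW seals over the prefix
proc downloadStateTrie(blockNumber: uint64) = discard
proc executeSegment(s: Segment) = discard       # full EVM execution only here

when isMainModule:
  let seg = findCommonChain()   # 1. agree a common chain and choose a suffix
  verifyPoWSample(seg)          # 2. verify a sample of the PoWs along the way
  downloadStateTrie(seg.first)  # 3. fetch the state trie at the segment start
  executeSegment(seg)           # 4. execute and validate only the final segment
```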
Some other issues with the `blockchain_sync` code:
- If it ever reaches the head of the chain, it doesn't follow new blocks as
the block number increases, at least not promptly.
- If the chain undergoes a reorg, this code won't re-fetch a block number it
has already fetched, so it can't accept the reorg and ends up in conflict
with its peers. This hasn't mattered because the development focus has been
on the bulk of the catching-up process, not the real-time head and reorgs.
- As a result it probably doesn't work correctly once it gets close to the
head, where many small reorgs occur, though it might for subtle reasons.
- Some of the network message handling isn't sufficiently robust, and it
discards some replies that contain valid data according to the
specification.
- On rare occasions the initial query mapping a block hash to a block number
can fail, because the peer's state changes between requests.
- It makes some assumptions about the state of peers, based on their
responses, which may not be valid (I'm not convinced they are). The method
for working out "trusted" peers that agree on a common chain prefix is
clever: it compares peers by asking each peer whether it has the header
matching another peer's canonical head block, queried by hash (a sketch of
this check follows the list). But it's not clear that merely knowing about
a block constitutes agreement about the canonical chain. (If it did,
querying by block number would give the same answer more authoritatively.)
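The comparison logic looks roughly like the following self-contained sketch;
`SyncPeer` and `requestHeaderByHash` are hypothetical stand-ins for the real
`Peer` object and its `GetBlockHeaders` request, used only to illustrate why
"knows the header" is a weaker claim than "agrees it is canonical".

```nim
import std/[sets, options]

type
  SyncPeer = object
    bestBlockHash: string          # the peer's advertised canonical head
    knownHeaders: HashSet[string]  # headers this peer can serve by hash

proc requestHeaderByHash(p: SyncPeer, h: string): Option[string] =
  ## Stand-in for a GetBlockHeaders request keyed by block hash.
  if h in p.knownHeaders:
    result = some(h)
  else:
    result = none(string)

proc appearsToAgreeWith(candidate, other: SyncPeer): bool =
  ## The blockchain_sync-style check: does `candidate` know the header of
  ## `other`'s head block?  Knowing the block does not prove the candidate
  ## considers it part of its canonical chain.
  requestHeaderByHash(candidate, other.bestBlockHash).isSome

when isMainModule:
  let a = SyncPeer(bestBlockHash: "0xaa", knownHeaders: ["0xaa", "0xbb"].toHashSet)
  let b = SyncPeer(bestBlockHash: "0xbb", knownHeaders: ["0xbb"].toHashSet)
  echo appearsToAgreeWith(a, b)  # true: a has seen b's head, but may be on a fork
  echo appearsToAgreeWith(b, a)  # false: b has never seen a's head
```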
Nonetheless, being able to run this sync process on `eth/65` is useful.
Signed-off-by: Jamie Lokier <jamie@shareable.org>

# nim-eth
# Copyright (c) 2018-2022 Status Research & Development GmbH
# Licensed and distributed under either of
#  * MIT license (license terms in the root directory or at
#    https://opensource.org/licenses/MIT).
#  * Apache v2 license (license terms in the root directory or at
#    https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

import
  std/[sets, options, random,
    hashes, sequtils, math, tables, times],
  chronicles,
  chronos,
  eth/p2p,
  eth/p2p/[private/p2p_types, peer_pool],
  stew/byteutils,
  "."/[protocol, types],
  ../core/[chain, clique/clique_sealer, gaslimit, withdrawals],
  ../core/pow/difficulty,
  ../constants,
  ../utils/utils,
  ../common/common

{.push raises:[Defect].}

logScope:
  topics = "fast-sync"

const
  minPeersToStartSync* = 2 # Wait for consensus of at least this
                           # number of peers before syncing
  CleanupInterval = initDuration(minutes = 20)

type
  #SyncStatus = enum
  #  syncSuccess
  #  syncNotEnoughPeers
  #  syncTimeOut

  HashToTime = TableRef[Hash256, Time]

  BlockchainSyncDefect* = object of Defect
    ## Catch and relay exception

  WantedBlocksState = enum
    Initial,
    Requested,
    Received,
    Persisted

  WantedBlocks = object
    isHash: bool
    hash: Hash256
    startIndex: BlockNumber
    numBlocks: uint
    state: WantedBlocksState
    headers: seq[BlockHeader]
    bodies: seq[BlockBody]

  LegacySyncRef* = ref object
    workQueue: seq[WantedBlocks]
    chain: ChainRef
    peerPool: PeerPool
    trustedPeers: HashSet[Peer]
    hasOutOfOrderBlocks: bool
    busyPeers: HashSet[Peer]
    knownByPeer: Table[Peer, HashToTime]
    lastCleanup: Time

# ------------------------------------------------------------------------------
# Private functions: sync progress
# ------------------------------------------------------------------------------

template endBlockNumber(ctx: LegacySyncRef): BlockNumber =
  ctx.chain.com.syncHighest

template `endBlockNumber=`(ctx: LegacySyncRef, number: BlockNumber) =
  ctx.chain.com.syncHighest = number

# Block which was downloaded and verified
template finalizedBlock(ctx: LegacySyncRef): BlockNumber =
  ctx.chain.com.syncCurrent

template `finalizedBlock=`(ctx: LegacySyncRef, number: BlockNumber) =
  ctx.chain.com.syncCurrent = number

# ------------------------------------------------------------------------------
# Private functions: peers related functions
# ------------------------------------------------------------------------------

proc hash*(p: Peer): Hash = hash(cast[pointer](p))

proc cleanupKnownByPeer(ctx: LegacySyncRef) =
  let now = getTime()
  var tmp = initHashSet[Hash256]()
  for _, map in ctx.knownByPeer:
    for hash, time in map:
      if time - now >= CleanupInterval:
        tmp.incl hash
    for hash in tmp:
      map.del(hash)
    tmp.clear()

  var tmpPeer = initHashSet[Peer]()
  for peer, map in ctx.knownByPeer:
    if map.len == 0:
      tmpPeer.incl peer

  for peer in tmpPeer:
    ctx.knownByPeer.del peer

  ctx.lastCleanup = now

proc addToKnownByPeer(ctx: LegacySyncRef,
                      blockHash: Hash256,
                      peer: Peer): bool =
  var map: HashToTime
  if not ctx.knownByPeer.take(peer, map):
    map = newTable[Hash256, Time]()
    result = false
  else:
    result = true

  map[blockHash] = getTime()

proc getPeers(ctx: LegacySyncRef, thisPeer: Peer): seq[Peer] =
  # do not send back block/blockhash to thisPeer
  for peer in peers(ctx.peerPool):
    if peer != thisPeer:
      result.add peer

proc handleLostPeer(ctx: LegacySyncRef) =
  # TODO: ask the PeerPool for new connections and then call
  # `obtainBlocksFromPeer`
  discard

# ------------------------------------------------------------------------------
# Private functions: validators
# ------------------------------------------------------------------------------

proc validateDifficulty(ctx: LegacySyncRef,
                        header, parentHeader: BlockHeader,
                        consensusType: ConsensusType): bool =
  try:
    let com = ctx.chain.com

    case consensusType
    of ConsensusType.POA:
      let rc = ctx.chain.clique.calcDifficulty(parentHeader)
      if rc.isErr:
        return false
      if header.difficulty < rc.get():
        trace "provided header difficulty is too low",
          expect=rc.get(), get=header.difficulty
        return false

    of ConsensusType.POW:
      let calcDiffc = com.calcDifficulty(header.timestamp, parentHeader)
      if header.difficulty < calcDiffc:
        trace "provided header difficulty is too low",
          expect=calcDiffc, get=header.difficulty
        return false

    of ConsensusType.POS:
      if header.difficulty != 0.u256:
        trace "invalid difficulty",
          expect=0, get=header.difficulty
        return false

    return true
  except CatchableError as e:
    error "Exception in FastSync.validateDifficulty()",
      exc = e.name, err = e.msg
    return false

proc validateHeader(ctx: LegacySyncRef, header: BlockHeader,
                    height = none(BlockNumber)): bool
                    {.raises: [Defect,CatchableError].} =
  if header.parentHash == GENESIS_PARENT_HASH:
    return true

  let
    db = ctx.chain.db
    com = ctx.chain.com

  var parentHeader: BlockHeader
  if not db.getBlockHeader(header.parentHash, parentHeader):
    error "can't get parentHeader",
      hash=header.parentHash, number=header.blockNumber
    return false

  if header.blockNumber != parentHeader.blockNumber + 1.toBlockNumber:
    trace "invalid block number",
      expect=parentHeader.blockNumber + 1.toBlockNumber,
      get=header.blockNumber
    return false

  if header.timestamp <= parentHeader.timestamp:
    trace "invalid timestamp",
      parent=parentHeader.timestamp,
      header=header.timestamp
    return false

  let consensusType = com.consensus(header)
  if not ctx.validateDifficulty(header, parentHeader, consensusType):
    return false

  if consensusType == ConsensusType.POA:
    let period = initDuration(seconds = com.cliquePeriod)
    # Timestamp diff between blocks is lower than PERIOD (clique)
    if parentHeader.timestamp + period > header.timestamp:
      trace "invalid timestamp diff (lower than period)",
        parent=parentHeader.timestamp,
        header=header.timestamp,
        period
      return false

  var res = com.validateGasLimitOrBaseFee(header, parentHeader)
  if res.isErr:
    trace "validate gaslimit error",
      msg=res.error
    return false

  if height.isSome:
    let dif = height.get() - parentHeader.blockNumber
    if not (dif < 8.toBlockNumber and dif > 1.toBlockNumber):
      trace "uncle block has a parent that is too old or too young",
        dif=dif,
        height=height.get(),
        parentNumber=parentHeader.blockNumber
      return false

  res = com.validateWithdrawals(header)
  if res.isErr:
    trace "validate withdrawals error",
      msg=res.error
    return false

  return true

# ------------------------------------------------------------------------------
# Private functions: sync worker
# ------------------------------------------------------------------------------

proc broadcastBlockHash(ctx: LegacySyncRef, hashes: seq[NewBlockHashesAnnounce], peers: seq[Peer]) {.async.} =
  try:
    var bha = newSeqOfCap[NewBlockHashesAnnounce](hashes.len)
    for peer in peers:
      for val in hashes:
        let alreadyKnownByPeer = ctx.addToKnownByPeer(val.hash, peer)
        if not alreadyKnownByPeer:
          bha.add val
      if bha.len > 0:
        trace trEthSendNewBlockHashes, numHashes=bha.len, peer
        await peer.newBlockHashes(bha)
        bha.setLen(0)

  except TransportError:
    debug "Transport got closed during broadcastBlockHash"
  except CatchableError as e:
    debug "Exception in broadcastBlockHash", exc = e.name, err = e.msg

proc broadcastBlock(ctx: LegacySyncRef, blk: EthBlock, peers: seq[Peer]) {.async.} =
  try:
    let
      db = ctx.chain.db
      blockHash = blk.header.blockHash
      td = db.getScore(blk.header.parentHash) +
        blk.header.difficulty

    for peer in peers:
      let alreadyKnownByPeer = ctx.addToKnownByPeer(blockHash, peer)
      if not alreadyKnownByPeer:
        trace trEthSendNewBlock,
          number=blk.header.blockNumber, td=td,
          hash=short(blockHash), peer
        await peer.newBlock(blk, td)

  except TransportError:
    debug "Transport got closed during broadcastBlock"
  except CatchableError as e:
    debug "Exception in broadcastBlock", exc = e.name, err = e.msg

proc broadcastBlockHash(ctx: LegacySyncRef, blk: EthBlock, peers: seq[Peer]) {.async.} =
  try:
    let bha = NewBlockHashesAnnounce(
      number: blk.header.blockNumber,
      hash: blk.header.blockHash
    )

    for peer in peers:
      let alreadyKnownByPeer = ctx.addToKnownByPeer(bha.hash, peer)
      if not alreadyKnownByPeer:
        trace trEthSendNewBlockHashes,
          number=bha.number, hash=short(bha.hash), peer
        await peer.newBlockHashes([bha])

  except TransportError:
    debug "Transport got closed during broadcastBlockHash"
  except CatchableError as e:
    debug "Exception in broadcastBlockHash", exc = e.name, err = e.msg

proc sendBlockOrHash(ctx: LegacySyncRef, peer: Peer) {.async.} =
  # Because the peer's TD is lower than ours, it becomes the recipient of our
  # blocks and block hashes, instead of a peer we download from.

  try:
    let
      db = ctx.chain.db
      peerBlockHash = peer.state(eth).bestBlockHash

    var peerBlockHeader: BlockHeader
    if not db.getBlockHeader(peerBlockHash, peerBlockHeader):
      error "can't get block header", hash=short(peerBlockHash)
      return

    let
      dist = (ctx.finalizedBlock - peerBlockHeader.blockNumber).truncate(int)
      start = peerBlockHeader.blockNumber + 1.toBlockNumber

    if dist == 1:
      # only one block apart, send NewBlock
      let number = ctx.finalizedBlock
      var header: BlockHeader
      var body: BlockBody
      if not db.getBlockHeader(number, header):
        error "can't get block header", number=number
        return

      let blockHash = header.blockHash
      if not db.getBlockBody(blockHash, body):
        error "can't get block body", number=number
        return

      let
        ourTD = db.getScore(blockHash)
        newBlock = EthBlock(
          header: header,
          txs: body.transactions,
          uncles: body.uncles)

      trace "send newBlock",
        number = header.blockNumber,
        hash = short(header.blockHash),
        td = ourTD, peer
      await peer.newBlock(newBlock, ourTD)
      return

    if dist > maxHeadersFetch:
      # distance is too far, ignore this peer
      return

    # send hashes in batch
    var
      hash: Hash256
      number = 0
      hashes = newSeqOfCap[NewBlockHashesAnnounce](maxHeadersFetch)

    while number < dist:
      let blockNumber = start + number.toBlockNumber
      if not db.getBlockHash(blockNumber, hash):
        error "failed to get block hash", number=blockNumber
        return

      hashes.add(NewBlockHashesAnnounce(
        number: blockNumber,
        hash: hash))

      if hashes.len == maxHeadersFetch:
        trace "send newBlockHashes(batch)", numHashes=hashes.len, peer
        await peer.newBlockHashes(hashes)
        hashes.setLen(0)
      inc number

    # send the rest of the hashes, if any
    if hashes.len > 0:
      trace "send newBlockHashes(remaining)", numHashes=hashes.len, peer
      await peer.newBlockHashes(hashes)

  except TransportError:
    debug "Transport got closed during obtainBlocksFromPeer"
  except CatchableError as e:
    debug "Exception in getBestBlockNumber()", exc = e.name, err = e.msg
    # no need to exit here, because the context might still have blocks to fetch
    # from this peer
    discard e
|
|
|
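# A work item covers the contiguous block number range
# [startIndex, startIndex + numBlocks - 1]; `endIndex` below returns the last
# number of that range. For example, a work item with startIndex 1000 and
# numBlocks 192 ends at block 1191.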
proc endIndex(b: WantedBlocks): BlockNumber =
  result = b.startIndex
  result += (b.numBlocks - 1).toBlockNumber

proc availableWorkItem(ctx: LegacySyncRef): int =
  var maxPendingBlock = ctx.finalizedBlock # the last downloaded & processed
  trace "queue len", length = ctx.workQueue.len
  result = -1
  for i in 0 .. ctx.workQueue.high:
    case ctx.workQueue[i].state
    of Initial:
      # When there is a work item at Initial state, immediately use this one.
      # This usually means a previous work item that failed somewhere in the
      # process, and thus can be reused to work on.
      return i
    of Persisted:
      # In case of Persisted, we can reset this work item to a new one.
      result = i
      # No break here to give work items in Initial state priority and to
      # calculate endBlock.
    else:
      discard

    # Check all endBlocks of all workqueue items to decide on next range of
    # blocks to collect & process.
    let endBlock = ctx.workQueue[i].endIndex
    if endBlock > maxPendingBlock:
      maxPendingBlock = endBlock

  let nextRequestedBlock = maxPendingBlock + 1

  # If this next block doesn't exist yet according to any of our peers, don't
  # return a work item (and sync will be stopped).
  if nextRequestedBlock > ctx.endBlockNumber:
    return -1

  # Increase queue when there are no free (Initial / Persisted) work items in
  # the queue. At start, queue will be empty.
  if result == -1:
    result = ctx.workQueue.len
    ctx.workQueue.setLen(result + 1)

  # Create new work item when queue was increased, reset when selected work item
  # is at Persisted state.
  var numBlocks = (ctx.endBlockNumber - nextRequestedBlock).truncate(int) + 1
  if numBlocks > maxHeadersFetch:
    numBlocks = maxHeadersFetch

  ctx.workQueue[result] = WantedBlocks(
    startIndex: nextRequestedBlock,
    numBlocks : numBlocks.uint,
    state     : Initial,
    isHash    : false)

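# `appendWorkItem` queues a hash-based request (presumably for a block that
# was announced by hash): it recycles the first Persisted slot if one exists,
# otherwise the queue grows by one entry.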
proc appendWorkItem(ctx: LegacySyncRef, hash: Hash256,
                    startIndex: BlockNumber, numBlocks: uint) =
  for i in 0 .. ctx.workQueue.high:
    if ctx.workQueue[i].state == Persisted:
      ctx.workQueue[i] = WantedBlocks(
        isHash    : true,
        hash      : hash,
        startIndex: startIndex,
        numBlocks : numBlocks,
        state     : Initial)
      return

  let i = ctx.workQueue.len
  ctx.workQueue.setLen(i + 1)
  ctx.workQueue[i] = WantedBlocks(
    isHash    : true,
    hash      : hash,
    startIndex: startIndex,
    numBlocks : numBlocks,
    state     : Initial)

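# `persistWorkItem` hands the downloaded headers and bodies of one work item to
# `ctx.chain.persistBlocks` for validation and storage; any exception is mapped
# to ValidationResult.Error, except that a `Defect` is passed through and any
# other `Exception` is re-raised as a `Defect`.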
proc persistWorkItem(ctx: LegacySyncRef, wi: var WantedBlocks): ValidationResult
    {.gcsafe, raises:[Defect,CatchableError].} =
  try:
    result = ctx.chain.persistBlocks(wi.headers, wi.bodies)
  except CatchableError as e:
    error "storing persistent blocks failed",
      error = $e.name, msg = e.msg
    result = ValidationResult.Error
  except Defect as e:
    # Pass through
    raise e
  except Exception as e:
    # Notorious case where the `Chain` reference applied to `persistBlocks()`
    # has the compiler traced a possible `Exception` (i.e. `ctx.chain` could
    # be uninitialised.)
    error "exception while storing persistent blocks",
      error = $e.name, msg = e.msg
    raise (ref Defect)(msg: $e.name & ": " & e.msg)

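  # A successful persist advances `finalizedBlock` to the end of this work item
  # and marks it Persisted; on error the item is reset to Initial so the same
  # range can be requested again (presumably from another peer).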
  case result
  of ValidationResult.OK:
    ctx.finalizedBlock = wi.endIndex
    wi.state = Persisted
  of ValidationResult.Error:
    wi.state = Initial
  # successful or not, we're done with these blocks
  wi.headers = @[]
  wi.bodies = @[]

proc persistPendingWorkItems(ctx: LegacySyncRef): (int, ValidationResult)
    {.gcsafe, raises:[Defect,CatchableError].} =
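  # Scan the queue for Received work items and persist, strictly in block
  # number order, every item that starts at `finalizedBlock + 1`; the returned
  # tuple holds the index and validation result of the last item persisted.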
  var nextStartIndex = ctx.finalizedBlock + 1
  var keepRunning = true
  var hasOutOfOrderBlocks = false
  trace "Looking for out of order blocks"
  while keepRunning:
    keepRunning = false
    hasOutOfOrderBlocks = false
    # Go over the full work queue and check for every work item if it is in
    # Received state and has the next blocks in line to be processed.
    for i in 0 ..< ctx.workQueue.len:
      let start = ctx.workQueue[i].startIndex
      # There should be at least 1 like this, namely the just received work item
      # that initiated this call.
      if ctx.workQueue[i].state == Received:
        if start == nextStartIndex:
          trace "Processing pending work item", number = start
          result = (i, ctx.persistWorkItem(ctx.workQueue[i]))
          # TODO: We can stop here on failure, but have to set
          # hasOutOfOrderBlocks. Is this always valid?
          nextStartIndex = ctx.finalizedBlock + 1
          keepRunning = true
          break
        else:
          hasOutOfOrderBlocks = true

  ctx.hasOutOfOrderBlocks = hasOutOfOrderBlocks

proc returnWorkItem(ctx: LegacySyncRef, workItem: int): ValidationResult
    {.gcsafe, raises:[Defect,CatchableError].} =
  let wi = addr ctx.workQueue[workItem]
  let askedBlocks = wi.numBlocks.int
  let receivedBlocks = wi.headers.len
  let start = wi.startIndex

  if askedBlocks == receivedBlocks:
    trace "Work item complete",
      start,
      askedBlocks,
      receivedBlocks

    if wi.startIndex != ctx.finalizedBlock + 1:
      trace "Blocks out of order", start, final = ctx.finalizedBlock
      ctx.hasOutOfOrderBlocks = true

    if ctx.hasOutOfOrderBlocks:
      let (index, validation) = ctx.persistPendingWorkItems()
      # Only report an error if it was this peer's work item that failed
      if validation == ValidationResult.Error and index == workItem:
        result = ValidationResult.Error
      # TODO: What about failures on other peers' work items?
      # In that case the peer will probably get disconnected on future erroneous
      # work items, but before this occurs, several more blocks (that will fail)
      # might get downloaded from this peer. This will delay the sync and this
      # should be improved.
    else:
      trace "Processing work item", number = wi.startIndex
      # Validation result needs to be returned so that higher up it can be
      # decided whether to disconnect from this peer in case of error.
      result = ctx.persistWorkItem(wi[])
  else:
    trace "Work item complete but we got fewer blocks than requested, so we're ditching the whole thing.",
      start,
      askedBlocks,
      receivedBlocks
    return ValidationResult.Error

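# Query the peer for the single header matching its advertised `bestBlockHash`
# (maxResults 1, reverse order) in order to learn the block number of that
# head.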
proc getBestBlockNumber(p: Peer): Future[BlockNumber] {.async.} =
  let request = BlocksRequest(
    startBlock: HashOrNum(isHash: true,
                          hash: p.state(eth).bestBlockHash),
    maxResults: 1,
    skip: 0,
    reverse: true)

  trace trEthSendSendingGetBlockHeaders, peer=p,
    startBlock=request.startBlock.hash.toHex, max=request.maxResults

  let latestBlock = await p.getBlockHeaders(request)

  if latestBlock.isSome:
    if latestBlock.get.headers.len > 0:
      result = latestBlock.get.headers[0].blockNumber
    trace trEthRecvReceivedBlockHeaders, peer=p,
      count=latestBlock.get.headers.len,
      blockNumber=(if latestBlock.get.headers.len > 0: $result else: "missing")

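# Turn a work item into a headers request: hash-based items start from the
# stored hash, number-based items start from `startIndex`; both ask for
# `numBlocks` consecutive headers (skip 0, not reversed).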
proc toRequest(workItem: WantedBlocks): BlocksRequest =
  if workItem.isHash:
    BlocksRequest(
      startBlock: HashOrNum(isHash: true, hash: workItem.hash),
      maxResults: workItem.numBlocks,
      skip: 0,
      reverse: false)
  else:
    BlocksRequest(
      startBlock: HashOrNum(isHash: false, number: workItem.startIndex),
      maxResults: workItem.numBlocks,
      skip: 0,
      reverse: false)

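# A downloaded body is matched to its header by the pair (transaction root,
# uncle hash); `BodyHash` is the lookup key and `hash` below lets it be used
# as a `Table` key.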
type
  BodyHash = object
    txRoot: Hash256
    uncleHash: Hash256

proc hash*(x: BodyHash): Hash =
  var h: Hash = 0
  h = h !& hash(x.txRoot.data)
  h = h !& hash(x.uncleHash.data)
  result = !$h

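# Download the bodies for the headers flagged in `reqBodies`, batching at most
# `maxBodiesFetch` hashes per GetBlockBodies request, then match every received
# body back to its header via a (txRoot, ommersHash) lookup table. Returns
# false if a requested body never arrives.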
proc fetchBodies(ctx: LegacySyncRef, peer: Peer,
                 workItemIdx: int, reqBodies: seq[bool]): Future[bool] {.async.} =
  template workItem: auto = ctx.workQueue[workItemIdx]
  var bodies = newSeqOfCap[BlockBody](workItem.headers.len)
  var hashes = newSeqOfCap[KeccakHash](maxBodiesFetch)

  doAssert(reqBodies.len == workItem.headers.len)

  template fetchBodies() =
    trace trEthSendSendingGetBlockBodies, peer,
      hashes=hashes.len
    let b = await peer.getBlockBodies(hashes)
    if b.isNone:
      raise newException(CatchableError, "Was not able to get the block bodies")
    let bodiesLen = b.get.blocks.len
    trace trEthRecvReceivedBlockBodies, peer,
      count=bodiesLen, requested=hashes.len
    if bodiesLen == 0:
      raise newException(CatchableError, "Zero block bodies received for request")
    elif bodiesLen < hashes.len:
      hashes.delete(0, bodiesLen - 1)
    elif bodiesLen == hashes.len:
      hashes.setLen(0)
    else:
      raise newException(CatchableError, "Too many block bodies received for request")
    bodies.add(b.get.blocks)

  var numRequest = 0
  for i, h in workItem.headers:
    if reqBodies[i]:
      hashes.add(h.blockHash)
      inc numRequest

    if hashes.len == maxBodiesFetch:
      fetchBodies()

  while hashes.len != 0:
    fetchBodies()

  workItem.bodies = newSeqOfCap[BlockBody](workItem.headers.len)
  var bodyHashes = initTable[BodyHash, int]()
  for z, body in bodies:
    let bodyHash = BodyHash(
      txRoot: calcTxRoot(body.transactions),
      uncleHash: rlpHash(body.uncles))
    bodyHashes[bodyHash] = z

  for i, req in reqBodies:
    if req:
      let bodyHash = BodyHash(
        txRoot: workItem.headers[i].txRoot,
        uncleHash: workItem.headers[i].ommersHash)
      let z = bodyHashes.getOrDefault(bodyHash, -1)
      if z == -1:
        error "header missing its body",
          number=workItem.headers[i].blockNumber,
          hash=workItem.headers[i].blockHash.short
        return false
      workItem.bodies.add bodies[z]
    else:
      workItem.bodies.add BlockBody()

  return true

proc obtainBlocksFromPeer(ctx: LegacySyncRef, peer: Peer) {.async.} =
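  # Errors while refreshing the sync target below are logged and swallowed so
  # that any blocks still queued for this peer can be processed regardless.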
  # Update our best block number
  try:
    if ctx.endBlockNumber <= ctx.finalizedBlock:
      # endBlockNumber needs an update
      let bestBlockNumber = await peer.getBestBlockNumber()
      if bestBlockNumber > ctx.endBlockNumber:
        trace "New sync end block number", number = bestBlockNumber
        ctx.endBlockNumber = bestBlockNumber

  except TransportError:
    debug "Transport got closed during obtainBlocksFromPeer"
  except CatchableError as e:
    debug "Exception in getBestBlockNumber()", exc = e.name, err = e.msg
    # no need to exit here, because the context might still have blocks to fetch
    # from this peer
    discard e
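
  # Main work loop: repeatedly claim a work item (a contiguous range of block
  # numbers) from the shared queue and fetch its headers and bodies from this
  # peer, until no work is left or the peer disconnects.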
  while (let workItemIdx = ctx.availableWorkItem(); workItemIdx != -1 and
         peer.connectionState notin {Disconnecting, Disconnected}):
    template workItem: auto = ctx.workQueue[workItemIdx]
    workItem.state = Requested
    trace "Requesting block headers", start = workItem.startIndex,
      count = workItem.numBlocks, peer = peer.remote.node
    let request = toRequest(workItem)
    var dataReceived = true
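    # Fetch the headers (and, where needed, the bodies) for this work item.
    # Any failure below leaves `dataReceived` false, so the item is handed back
    # to the queue and the peer is dropped.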
    try:
      trace trEthSendSendingGetBlockHeaders, peer,
        startBlock=request.startBlock.number, max=request.maxResults,
        step=traceStep(request)
      let results = await peer.getBlockHeaders(request)
      if results.isSome:
        trace trEthRecvReceivedBlockHeaders, peer,
          count=results.get.headers.len, requested=request.maxResults
        shallowCopy(workItem.headers, results.get.headers)

        var
          reqBodies  = newSeqOfCap[bool](workItem.headers.len)
          numRequest = 0
          nextIndex  = workItem.startIndex

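        # A header whose txRoot is the empty trie root and whose ommersHash is
        # the empty uncle hash describes a block with no transactions and no
        # uncles, so there is no body worth requesting for it.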
        # skip requesting empty bodies
        for h in workItem.headers:
          if h.blockNumber != nextIndex:
            raise newException(CatchableError,
              "The block numbers are not in sequence. Not processing this workItem.")

          nextIndex = nextIndex + 1
          let req = h.txRoot != EMPTY_ROOT_HASH or
                    h.ommersHash != EMPTY_UNCLE_HASH
          reqBodies.add(req)
          if req: inc numRequest

        if numRequest > 0:
          dataReceived = dataReceived and
            await ctx.fetchBodies(peer, workItemIdx, reqBodies)
        else:
          # all bodies are empty
          workItem.bodies.setLen(workItem.headers.len)

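        # Announce the hashes of the freshly fetched blocks to the other
        # connected peers.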
        let peers = ctx.getPeers(peer)
        if peers.len > 0:
          var hashes = newSeqOfCap[NewBlockHashesAnnounce](workItem.headers.len)
          for h in workItem.headers:
            hashes.add NewBlockHashesAnnounce(
              hash: h.blockHash,
              number: h.blockNumber)
          trace "broadcast block hashes", numPeers=peers.len, numHashes=hashes.len
          await ctx.broadcastBlockHash(hashes, peers)

        # fetchBodies can fail
        dataReceived = dataReceived and true

      else:
        dataReceived = false
    except TransportError:
      debug "Transport got closed during obtainBlocksFromPeer",
        peer
    except CatchableError as e:
      # the success case sets `dataReceived`, so we can just fall back to the
      # failure path below. If we signal time-outs with exceptions, such
      # failures will be easier to handle.
      debug "Exception in obtainBlocksFromPeer()",
        exc = e.name, err = e.msg, peer

    var giveUpOnPeer = false
    if dataReceived:
      trace "Finished obtaining blocks", peer, numBlocks=workItem.headers.len
      workItem.state = Received
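      # Hand the finished work item back to the sync context for validation;
      # a non-OK result means this peer served unusable data.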
      let res = ctx.returnWorkItem(workItemIdx)
      if res != ValidationResult.OK:
        trace "validation error"
        giveUpOnPeer = true
    else:
      giveUpOnPeer = true

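    # Either the peer returned bad or incomplete data, or the fetched blocks
    # failed validation: put the work item back so another peer can pick it up,
    # and drop this peer.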
    if giveUpOnPeer:
      workItem.state = Initial
      try:
        await peer.disconnect(SubprotocolReason)
      except CatchableError:
        discard
      ctx.handleLostPeer()
      break

proc peersAgreeOnChain(a, b: Peer): Future[bool] {.async.} =
  # Returns true if one of the peers acknowledges existence of the best block
  # of the other peer.
  var
    a = a
    b = b

  if a.state(eth).bestDifficulty < b.state(eth).bestDifficulty:
    swap(a, b)

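  # `a` is now the peer claiming the higher total difficulty; ask it for the
  # single header matching `b`'s best block hash. A non-empty reply counts as
  # agreement.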
  let request = BlocksRequest(
    startBlock: HashOrNum(isHash: true,
                          hash: b.state(eth).bestBlockHash),
    maxResults: 1,
    skip: 0,
    reverse: true)

  trace trEthSendSendingGetBlockHeaders, peer=a,
    startBlock=request.startBlock.hash.toHex, max=request.maxResults
  let latestBlock = await a.getBlockHeaders(request)
  result = latestBlock.isSome and latestBlock.get.headers.len > 0
  if latestBlock.isSome:
    let blockNumber = if result: $latestBlock.get.headers[0].blockNumber
                      else: "missing"
    trace trEthRecvReceivedBlockHeaders, peer=a,
      count=latestBlock.get.headers.len, blockNumber
|
|
|
|
2022-12-05 02:42:09 +00:00
|
|
|
proc randomTrustedPeer(ctx: LegacySyncRef): Peer =
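  ## Return a peer picked at random from `ctx.trustedPeers`.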
  var k = rand(ctx.trustedPeers.len - 1)
  var i = 0
  for p in ctx.trustedPeers:
    result = p
    if i == k: return
    inc i

proc startSyncWithPeerImpl(ctx: LegacySyncRef, peer: Peer) {.async.} =
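  ## Evaluate `peer` against the current set of trusted peers and, once at
  ## least `minPeersToStartSync` peers agree on a common chain, start
  ## fetching blocks from them.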
  trace "Start sync", peer, trustedPeers = ctx.trustedPeers.len
  if ctx.trustedPeers.len >= minPeersToStartSync:
    # We have enough trusted peers. Validate new peer against trusted
    if await peersAgreeOnChain(peer, ctx.randomTrustedPeer()):
      ctx.trustedPeers.incl(peer)
      asyncSpawn ctx.obtainBlocksFromPeer(peer)
  elif ctx.trustedPeers.len == 0:
    # Assume the peer is trusted, but don't start sync until we reevaluate
    # it with more peers
    trace "Assume trusted peer", peer
    ctx.trustedPeers.incl(peer)
  else:
    # At this point we have some "trusted" candidates, but they are not
    # "trusted" enough. We evaluate `peer` against all other candidates.
    # If one of the candidates disagrees, we swap it for `peer`. If all
    # candidates agree, we add `peer` to trusted set. The peers in the set
    # will become "fully trusted" (and sync will start) when the set is big
    # enough
    var
      agreeScore = 0
      disagreedPeer: Peer

    for tp in ctx.trustedPeers:
      if await peersAgreeOnChain(peer, tp):
        inc agreeScore
      else:
        disagreedPeer = tp

    let disagreeScore = ctx.trustedPeers.len - agreeScore

    if agreeScore == ctx.trustedPeers.len:
      ctx.trustedPeers.incl(peer) # The best possible outcome
    elif disagreeScore == 1:
      trace "Peer is no longer trusted for sync", peer
      ctx.trustedPeers.excl(disagreedPeer)
      ctx.trustedPeers.incl(peer)
    else:
      trace "Peer not trusted for sync", peer

    if ctx.trustedPeers.len == minPeersToStartSync:
      for p in ctx.trustedPeers:
        asyncSpawn ctx.obtainBlocksFromPeer(p)

proc startSyncWithPeer(ctx: LegacySyncRef, peer: Peer) =
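  ## Compare the peer's advertised total difficulty with ours: if the peer is
  ## behind, offer it our blocks instead; otherwise mark it busy and run
  ## `startSyncWithPeerImpl`, clearing the busy flag when the future finishes.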
  try:
    let
      db = ctx.chain.db
      header = db.getBlockHeader(ctx.finalizedBlock)
      ourTD = db.getScore(header.blockHash)
      peerTD = peer.state(eth).bestDifficulty

    if peerTD <= ourTD:
      # Do nothing if the peer reports the same total difficulty as ours.
      if peerTD < ourTD:
        trace "Peer has lower TD, becoming recipient",
          peer, ourTD, peerTD
        asyncSpawn ctx.sendBlockOrHash(peer)
      return

    ctx.busyPeers.incl(peer)
    let f = ctx.startSyncWithPeerImpl(peer)
    f.callback = proc(data: pointer) {.gcsafe.} =
      if f.failed:
        if f.error of TransportError:
          debug "Transport got closed during startSyncWithPeer"
        else:
          error "startSyncWithPeer failed", msg = f.readError.msg, peer
      ctx.busyPeers.excl(peer)
    asyncSpawn f

  except TransportError:
    debug "Transport got closed during startSyncWithPeer"
  except CatchableError as e:
    debug "Exception in startSyncWithPeer()", exc = e.name, err = e.msg

proc startObtainBlocks(ctx: LegacySyncRef, peer: Peer) =
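  ## Like `startSyncWithPeer` but without the total-difficulty check: fetch
  ## blocks from `peer` directly, tracking it in `busyPeers` while the
  ## request is in flight.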
  # simpler version of startSyncWithPeer
  try:
    ctx.busyPeers.incl(peer)
    let f = ctx.obtainBlocksFromPeer(peer)
    f.callback = proc(data: pointer) {.gcsafe.} =
      if f.failed:
        if f.error of TransportError:
          debug "Transport got closed during startObtainBlocks"
        else:
          error "startObtainBlocks failed", msg = f.readError.msg, peer
      ctx.busyPeers.excl(peer)
    asyncSpawn f

  except TransportError:
    debug "Transport got closed during startObtainBlocks"
  except CatchableError as e:
    debug "Exception in startObtainBlocks()", exc = e.name, err = e.msg

proc onPeerConnected(ctx: LegacySyncRef, peer: Peer) =
  trace "New candidate for sync", peer
  ctx.startSyncWithPeer(peer)

proc onPeerDisconnected(ctx: LegacySyncRef, p: Peer) =
  trace "peer disconnected ", peer = p
  ctx.trustedPeers.excl(p)
  ctx.busyPeers.excl(p)
  ctx.knownByPeer.del(p)

# ------------------------------------------------------------------------------
# Public constructor/destructor
# ------------------------------------------------------------------------------

proc new*(T: type LegacySyncRef; ethNode: EthereumNode; chain: ChainRef): T
    {.gcsafe, raises:[Defect,CatchableError].} =
  result = LegacySyncRef(
    # workQueue:           n/a
    # endBlockNumber:      n/a
    # hasOutOfOrderBlocks: n/a
    chain:        chain,
    peerPool:     ethNode.peerPool,
    trustedPeers: initHashSet[Peer]())

  # finalizedBlock
  chain.com.syncCurrent = chain.db.getCanonicalHead().blockNumber

proc start*(ctx: LegacySyncRef) =
  ## Code for the fast blockchain sync procedure:
  ## <https://github.com/ethereum/wiki/wiki/Parallel-Block-Downloads>_
  ## <https://github.com/ethereum/go-ethereum/pull/1889>_

  try:
    var blockHash: Hash256
    let
      db = ctx.chain.db
      com = ctx.chain.com

    if not db.getBlockHash(ctx.finalizedBlock, blockHash):
      debug "FastSync.start: Failed to get blockHash",
        number=ctx.finalizedBlock
      return

    if com.consensus == ConsensusType.POS:
      debug "Fast sync is disabled after POS merge"
      return

    ctx.chain.com.syncStart = ctx.finalizedBlock
    info "Fast Sync: start sync from",
      number=ctx.chain.com.syncStart,
      hash=blockHash

  except CatchableError as e:
    debug "Exception in FastSync.start()",
      exc = e.name, err = e.msg

  var po = PeerObserver(
    onPeerConnected:
      proc(p: Peer) {.gcsafe.} =
        ctx.onPeerConnected(p),
    onPeerDisconnected:
      proc(p: Peer) {.gcsafe.} =
        ctx.onPeerDisconnected(p))
  po.setProtocol eth
  ctx.peerPool.addObserver(ctx, po)
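
# A minimal usage sketch, assuming an `EthereumNode` (`ethNode`) and a
# `ChainRef` (`chain`) are already configured elsewhere:
#
#   let sync = LegacySyncRef.new(ethNode, chain)
#   sync.start()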

# ------------------------------------------------------------------------------
# Public procs: eth wire protocol handlers
# ------------------------------------------------------------------------------

proc handleNewBlockHashes(ctx: LegacySyncRef,
                          peer: Peer,
                          hashes: openArray[NewBlockHashesAnnounce]) {.
                          gcsafe, raises: [Defect, CatchableError].} =
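  ## Handler for eth `NewBlockHashes` announcements: verify that the announced
  ## block numbers are contiguous, extend the sync target if the highest
  ## number is new, and kick off syncing unless a peer is already busy.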

  trace trEthRecvNewBlockHashes,
    numHash=hashes.len

  if hashes.len == 0:
    return

  var number = hashes[0].number
  if hashes.len > 1:
    for i in 1..<hashes.len:
      let val = hashes[i]
      if val.number != number + 1.toBlockNumber:
        error "Found a gap in block hashes"
        return
      number = val.number

  number = hashes[^1].number
  if number <= ctx.endBlockNumber:
    trace "Will not set new synctarget",
      newSyncHeight=number,
      endBlockNumber=ctx.endBlockNumber,
      peer
    return

  # don't send back hashes to this peer
  for val in hashes:
    discard ctx.addToKnownByPeer(val.hash, peer)

  # set new sync target, + 1'u means including last block
  let numBlocks = (number - hashes[0].number).truncate(uint) + 1'u
  ctx.appendWorkItem(hashes[0].hash, hashes[0].number, numBlocks)
  ctx.endBlockNumber = number
  trace "New sync target height", number

  if ctx.busyPeers.len > 0:
    # do nothing. busy peers will keep syncing
    # until new sync target reached
    trace "sync using busyPeers",
      len=ctx.busyPeers.len
    return

  if ctx.trustedPeers.len == 0:
    trace "sync with this peer"
    ctx.startObtainBlocks(peer)
  else:
    trace "sync with random peer"
    let peer = ctx.randomTrustedPeer()
    ctx.startSyncWithPeer(peer)

proc handleNewBlock(ctx: LegacySyncRef,
                    peer: Peer,
                    blk: EthBlock,
                    totalDifficulty: DifficultyInt) {.
                    gcsafe, raises: [Defect, CatchableError].} =
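  ## Handler for eth `NewBlock` announcements: relay the block to a
  ## square-root subset of peers, persist it if it extends the current tip,
  ## and otherwise fall back to `handleNewBlockHashes` to fetch the gap.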

  trace trEthRecvNewBlock,
    number=blk.header.blockNumber,
    hash=short(blk.header.blockHash)

  if getTime() - ctx.lastCleanup > CleanupInterval:
    ctx.cleanupKnownByPeer()

  # Don't send NEW_BLOCK announcement to peer that sent original new block message
  discard ctx.addToKnownByPeer(blk.header.blockHash, peer)

  if blk.header.blockNumber > ctx.finalizedBlock + 1.toBlockNumber:
    # If the block number exceeds one past our height we cannot validate it
    trace "NewBlock got block past our height",
      number=blk.header.blockNumber
    return

  if not ctx.validateHeader(blk.header):
    error "invalid header from peer",
      peer, hash=short(blk.header.blockHash)
    return

  # Send NEW_BLOCK to square root of total number of peers in pool
  # https://github.com/ethereum/devp2p/blob/master/caps/eth.md#block-propagation
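  # For example, a pool of 25 peers gives numPeersToShareWith = sqrt(25) = 5.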
  let
    numPeersToShareWith = sqrt(ctx.peerPool.len.float32).int
    peers = ctx.getPeers(peer)

  debug "num peers to share with",
    number=numPeersToShareWith,
    numPeers=peers.len

  if peers.len > 0 and numPeersToShareWith > 0:
    asyncSpawn ctx.broadcastBlock(blk, peers[0..<numPeersToShareWith])

  var parentHash: Hash256
  if not ctx.chain.db.getBlockHash(ctx.finalizedBlock, parentHash):
    error "failed to get parent hash",
      number=ctx.finalizedBlock
    return

  if parentHash == blk.header.parentHash:
    # If new block is child of current chain tip, insert new block into chain
    let body = BlockBody(
      transactions: blk.txs,
      uncles: blk.uncles
    )
    let res = ctx.chain.persistBlocks([blk.header], [body])

    # Check if new sync target height can be set
    if res == ValidationResult.OK:
      ctx.endBlockNumber = blk.header.blockNumber
      ctx.finalizedBlock = blk.header.blockNumber

  else:
    # Call handleNewBlockHashes to retrieve all blocks between chain tip and new block
    let newSyncHeight = NewBlockHashesAnnounce(
      number: blk.header.blockNumber,
      hash: blk.header.blockHash
    )
    ctx.handleNewBlockHashes(peer, [newSyncHeight])

  if peers.len > 0 and numPeersToShareWith > 0:
    # Send `NEW_BLOCK_HASHES` message for received block to all other peers
    asyncSpawn ctx.broadcastBlockHash(blk, peers[numPeersToShareWith..^1])

proc newBlockHashesHandler*(arg: pointer,
                            peer: Peer,
                            hashes: openArray[NewBlockHashesAnnounce]) {.
                            gcsafe, raises: [Defect, CatchableError].} =
  let ctx = cast[LegacySyncRef](arg)
  ctx.handleNewBlockHashes(peer, hashes)

proc newBlockHandler*(arg: pointer,
                      peer: Peer,
                      blk: EthBlock,
                      totalDifficulty: DifficultyInt) {.
                      gcsafe, raises: [Defect, CatchableError].} =
  let ctx = cast[LegacySyncRef](arg)
  ctx.handleNewBlock(peer, blk, totalDifficulty)

# End