Remove Portal beacon-lc-bridge (#2897)
The idea of the beacon-lc-bridge was to bridge data into the Portal network while using only p2p protocols to access that data. It is however incomplete: for history content the receipts are missing. These could be added by also adding devp2p access, but for beacon content there would be no way to get the historical summaries over p2p, and we have not yet looked into how to do this for state. Because it is incomplete it was also not being used by anyone, so we remove it.
parent 0f18de61dc
commit 3bf0920a16
Makefile | 2
@@ -71,13 +71,11 @@ TOOLS_CSV := $(subst $(SPACE),$(COMMA),$(TOOLS))
 # Fluffy debugging tools + testing tools
 FLUFFY_TOOLS := \
 	portal_bridge \
-	beacon_lc_bridge \
 	eth_data_exporter \
 	blockwalk \
 	portalcli \
 	fcli_db
 FLUFFY_TOOLS_DIRS := \
-	fluffy/tools/beacon_lc_bridge \
 	fluffy/tools/portal_bridge \
 	fluffy/tools/state_bridge \
 	fluffy/tools
@@ -1,14 +1,12 @@
 # Bridging content into the Portal history network
 
-## Seeding from content bridges
-
-### Seeding history content with the `portal_bridge`
+## Seeding history content with the `portal_bridge`
 
 The `portal_bridge` requires `era1` files as source for the block content from before the merge.
 It requires access to a full node with EL JSON-RPC API for seeding the latest (head of the chain) block content.
 Any block content between the merge and the latest is currently not implemented, but will be implemented in the future by usage of `era` files as source.
 
-#### Step 1: Run a Portal client
+### Step 1: Run a Portal client
 
 Run a Portal client with the Portal JSON-RPC API enabled, e.g. Fluffy:
 
@@ -20,12 +18,12 @@ Run a Portal client with the Portal JSON-RPC API enabled, e.g. Fluffy:
 for the use case where the node's only focus is on gossiping content from the
 `portal_bridge`.
 
-#### Step 2: Run an EL client
+### Step 2: Run an EL client
 
 The `portal_bridge` needs access to the EL JSON-RPC API, either through a local
 Ethereum client or via a web3 provider.
 
-#### Step 3: Run the Portal bridge in history mode
+### Step 3: Run the Portal bridge in history mode
 
 Build & run the `portal_bridge`:
 ```bash
@@ -49,33 +47,6 @@ WEB3_URL="http://127.0.0.1:8548" # Replace with your provider.
 ./build/portal_bridge history --latest:true --backfill:true --audit:true --era1-dir:/somedir/era1/ --web3-url:${WEB3_URL}
 ```
 
-### Seeding post-merge history content with the `beacon_lc_bridge`
-
-The `beacon_lc_bridge` is more of a standalone bridge that does not require access to a full node with its EL JSON-RPC API. However it is also more limited in the functions it provides.
-It will start with the consensus light client sync and follow beacon block gossip. Once it is synced, the execution payload of new beacon blocks will be extracted and injected in the Portal network as execution headers
-and blocks.
-
-> Note: The execution headers will come without a proof.
-
-The injection into the Portal network is done via the
-`portal_historyGossip` JSON-RPC endpoint of the running Fluffy node.
-
-> Note: Backfilling of block bodies and headers is not yet supported.
-
-Run a Fluffy node with the JSON-RPC API enabled.
-
-```bash
-./build/fluffy --rpc
-```
-
-Build & run the `beacon_lc_bridge`:
-```bash
-make beacon_lc_bridge
-
-TRUSTED_BLOCK_ROOT=0x1234567890123456789012345678901234567890123456789012345678901234 # Replace with trusted block root.
-./build/beacon_lc_bridge --trusted-block-root=${TRUSTED_BLOCK_ROOT}
-```
-
 ## Seeding directly from the fluffy client
 
 This method currently only supports seeding block content from before the merge.
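The removed section above describes the injection path: the bridge hands a content key and an SSZ-encoded content value to the `portal_historyGossip` JSON-RPC endpoint of the running Fluffy node. A minimal sketch of what such a call looks like on the wire, assuming Fluffy's JSON-RPC server listens on the default `127.0.0.1:8545`; the hex values are placeholders, not real content:

```bash
# Placeholders only: a real call passes a hex-encoded history content key and
# the matching SSZ-encoded content value (e.g. a header with proof or a block body).
CONTENT_KEY="<content-key-hex>"
CONTENT_VALUE="<ssz-content-hex>"

curl -s http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"portal_historyGossip\",\"params\":[\"$CONTENT_KEY\",\"$CONTENT_VALUE\"]}"
```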
@@ -1,562 +0,0 @@
# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed and distributed under either of
#  * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
#  * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.

## The beacon_lc_bridge is a "standalone" bridge, which means that it requires
## only a Portal client to inject content into the Portal network, but retrieves
## the content via p2p protocols only (no private / centralized full node access
## required).
## The bridge allows to follow the head of the beacon chain and inject the latest
## execution block headers and bodies into the Portal history network.
## It can, optionally, inject the beacon LC content into the Portal beacon network.
##
## The bridge does consensus light client sync and follows beacon block gossip.
## Once it is synced, the execution payload of new beacon blocks will be
## extracted and injected in the Portal network as execution headers and blocks.
##
## The injection into the Portal network is done via the `portal_historyGossip`
## JSON-RPC endpoint of a running Fluffy node.
##
## Actions that this type of bridge (currently?) cannot perform:
## 1. Inject block receipts into the portal network
## 2. Inject epoch accumulators into the portal network
## 3. Backfill headers and blocks
## 4. Provide proofs for the headers
##
## - To provide 1., it would require devp2p/eth access for the bridge to remain
##   standalone.
## - To provide 2., it could use Era1 files.
## - To provide 3. and 4, it could use Era1 files pre-merge, and Era files
##   post-merge. To backfill without Era or Era1 files, it could use libp2p and
##   devp2p for access to the blocks, however it would not be possible to (easily)
##   build the proofs for the headers.

{.push raises: [].}

import
  std/[os, strutils],
  chronicles,
  chronos,
  confutils,
  eth/[rlp, trie/ordered_trie],
  eth/common/keys,
  eth/common/[base, headers_rlp, blocks_rlp],
  beacon_chain/el/[el_manager, engine_api_conversions],
  beacon_chain/gossip_processing/optimistic_processor,
  beacon_chain/networking/[eth2_network, topic_params],
  beacon_chain/spec/beaconstate,
  beacon_chain/spec/datatypes/[phase0, altair, bellatrix],
  beacon_chain/[light_client, nimbus_binary_common],
  # Weirdness. Need to import this to be able to do errors.ValidationResult as
  # else we get an ambiguous identifier, ValidationResult from eth & libp2p.
  libp2p/protocols/pubsub/errors,
  ../../rpc/portal_rpc_client,
  ../../network/history/[history_content, history_network],
  ../../network/beacon/beacon_content,
  ../../common/common_types,
  ./beacon_lc_bridge_conf

from beacon_chain/gossip_processing/block_processor import newExecutionPayload
from beacon_chain/gossip_processing/eth2_processor import toValidationResult

template append(w: var RlpWriter, typedTransaction: TypedTransaction) =
  w.appendRawBytes(distinctBase typedTransaction)

template append(w: var RlpWriter, withdrawalV1: WithdrawalV1) =
  # TODO: Since Capella we can also access ExecutionPayloadHeader and thus
  # could get the Roots through there instead.
  w.append blocks.Withdrawal(
    index: distinctBase(withdrawalV1.index),
    validatorIndex: distinctBase(withdrawalV1.validatorIndex),
    address: withdrawalV1.address,
    amount: distinctBase(withdrawalV1.amount),
  )

proc asPortalBlockData*(
    payload: ExecutionPayloadV1
): (Hash32, BlockHeaderWithProof, PortalBlockBodyLegacy) =
  let
    txRoot = orderedTrieRoot(payload.transactions)

    header = Header(
      parentHash: payload.parentHash,
      ommersHash: EMPTY_UNCLE_HASH,
      coinbase: payload.feeRecipient,
      stateRoot: payload.stateRoot,
      transactionsRoot: txRoot,
      receiptsRoot: payload.receiptsRoot,
      logsBloom: distinctBase(payload.logsBloom).to(Bloom),
      difficulty: default(DifficultyInt),
      number: payload.blockNumber.distinctBase,
      gasLimit: distinctBase(payload.gasLimit),
      gasUsed: distinctBase(payload.gasUsed),
      timestamp: payload.timestamp.EthTime,
      extraData: payload.extraData.data,
      mixHash: payload.prevRandao,
      nonce: default(Bytes8),
      baseFeePerGas: Opt.some(payload.baseFeePerGas),
      withdrawalsRoot: Opt.none(Hash32),
      blobGasUsed: Opt.none(uint64),
      excessBlobGas: Opt.none(uint64),
    )

    headerWithProof = BlockHeaderWithProof(
      header: ByteList[2048](rlp.encode(header)), proof: BlockHeaderProof.init()
    )

  var transactions: Transactions
  for tx in payload.transactions:
    discard transactions.add(TransactionByteList(distinctBase(tx)))

  let body =
    PortalBlockBodyLegacy(transactions: transactions, uncles: Uncles(@[byte 0xc0]))

  (payload.blockHash, headerWithProof, body)

proc asPortalBlockData*(
    payload: ExecutionPayloadV2 | ExecutionPayloadV3
): (Hash32, BlockHeaderWithProof, PortalBlockBodyShanghai) =
  let
    txRoot = orderedTrieRoot(payload.transactions)
    withdrawalsRoot = Opt.some(orderedTrieRoot(payload.withdrawals))

    # TODO: adjust blobGasUsed & excessBlobGas according to deneb fork!
    header = Header(
      parentHash: payload.parentHash,
      ommersHash: EMPTY_UNCLE_HASH,
      coinbase: payload.feeRecipient,
      stateRoot: payload.stateRoot,
      transactionsRoot: txRoot,
      receiptsRoot: payload.receiptsRoot,
      logsBloom: distinctBase(payload.logsBloom).to(Bloom),
      difficulty: default(DifficultyInt),
      number: payload.blockNumber.distinctBase,
      gasLimit: distinctBase(payload.gasLimit),
      gasUsed: distinctBase(payload.gasUsed),
      timestamp: payload.timestamp.EthTime,
      extraData: payload.extraData.data,
      mixHash: payload.prevRandao,
      nonce: default(Bytes8),
      baseFeePerGas: Opt.some(payload.baseFeePerGas),
      withdrawalsRoot: withdrawalsRoot,
      blobGasUsed: Opt.none(uint64),
      excessBlobGas: Opt.none(uint64),
    )

    headerWithProof = BlockHeaderWithProof(
      header: ByteList[2048](rlp.encode(header)), proof: BlockHeaderProof.init()
    )

  var transactions: Transactions
  for tx in payload.transactions:
    discard transactions.add(TransactionByteList(distinctBase(tx)))

  func toWithdrawal(x: WithdrawalV1): Withdrawal =
    Withdrawal(
      index: x.index.uint64,
      validatorIndex: x.validatorIndex.uint64,
      address: x.address,
      amount: x.amount.uint64,
    )

  var withdrawals: Withdrawals
  for w in payload.withdrawals:
    discard withdrawals.add(WithdrawalByteList(rlp.encode(toWithdrawal(w))))

  let body = PortalBlockBodyShanghai(
    transactions: transactions, uncles: Uncles(@[byte 0xc0]), withdrawals: withdrawals
  )

  (payload.blockHash, headerWithProof, body)

proc run(config: BeaconBridgeConf) {.raises: [CatchableError].} =
  # Required as both Eth2Node and LightClient requires correct config type
  var lcConfig = config.asLightClientConf()

  setupLogging(config.logLevel, config.logStdout, none(OutFile))

  notice "Launching fluffy beacon chain light bridge",
    cmdParams = commandLineParams(), config

  let metadata = loadEth2Network(config.eth2Network)

  for node in metadata.bootstrapNodes:
    lcConfig.bootstrapNodes.add node

  template cfg(): auto =
    metadata.cfg

  let
    genesisState =
      try:
        template genesisData(): auto =
          metadata.genesis.bakedBytes

        newClone(
          readSszForkedHashedBeaconState(
            cfg, genesisData.toOpenArray(genesisData.low, genesisData.high)
          )
        )
      except CatchableError as err:
        raiseAssert "Invalid baked-in state: " & err.msg

    genesisTime = getStateField(genesisState[], genesis_time)
    beaconClock = BeaconClock.init(genesisTime).valueOr:
      error "Invalid genesis time in state", genesisTime
      quit QuitFailure

    getBeaconTime = beaconClock.getBeaconTimeFn()

    genesis_validators_root = getStateField(genesisState[], genesis_validators_root)
    forkDigests = newClone ForkDigests.init(cfg, genesis_validators_root)

    genesisBlockRoot = get_initial_beacon_block(genesisState[]).root

    rng = keys.newRng()

    netKeys = getRandomNetKeys(rng[])

    network = createEth2Node(
      rng, lcConfig, netKeys, cfg, forkDigests, getBeaconTime, genesis_validators_root
    )

    portalRpcClient = newRpcHttpClient()

    optimisticHandler = proc(
        signedBlock: ForkedSignedBeaconBlock
    ): Future[void] {.async: (raises: [CancelledError]).} =
      # TODO: Should not be gossiping optimistic blocks, but instead store them
      # in a cache and only gossip them after they are confirmed due to an LC
      # finalized header.
      notice "New LC optimistic block",
        opt = signedBlock.toBlockId(), wallSlot = getBeaconTime().slotOrZero

      withBlck(signedBlock):
        when consensusFork >= ConsensusFork.Bellatrix:
          if forkyBlck.message.is_execution_block:
            template payload(): auto =
              forkyBlck.message.body

            # TODO: Get rid of the asEngineExecutionPayload step?
            let executionPayload = payload.asEngineExecutionPayload()
            let (hash, headerWithProof, body) = asPortalBlockData(executionPayload)

            logScope:
              blockhash = history_content.`$` hash

            block: # gossip header
              let contentKey = blockHeaderContentKey(hash)
              let encodedContentKey = contentKey.encode.asSeq()

              try:
                let peers = await portalRpcClient.portal_historyGossip(
                  toHex(encodedContentKey), SSZ.encode(headerWithProof).toHex()
                )
                info "Block header gossiped",
                  peers, contentKey = encodedContentKey.toHex()
              except CatchableError as e:
                error "JSON-RPC error", error = $e.msg
              # TODO: clean-up when json-rpc gets async raises annotations
              try:
                await portalRpcClient.close()
              except CatchableError:
                discard

            # For bodies to get verified, the header needs to be available on
            # the network. Wait a little to get the headers propagated through
            # the network.
            await sleepAsync(2.seconds)

            block: # gossip block
              let contentKey = blockBodyContentKey(hash)
              let encodedContentKey = contentKey.encode.asSeq()

              try:
                let peers = await portalRpcClient.portal_historyGossip(
                  encodedContentKey.toHex(), SSZ.encode(body).toHex()
                )
                info "Block body gossiped",
                  peers, contentKey = encodedContentKey.toHex()
              except CatchableError as e:
                error "JSON-RPC error", error = $e.msg

              # TODO: clean-up when json-rpc gets async raises annotations
              try:
                await portalRpcClient.close()
              except CatchableError:
                discard

      return

    optimisticProcessor = initOptimisticProcessor(getBeaconTime, optimisticHandler)

    lightClient = createLightClient(
      network, rng, lcConfig, cfg, forkDigests, getBeaconTime, genesis_validators_root,
      LightClientFinalizationMode.Optimistic,
    )

  ### Beacon Light Client content bridging specific callbacks
  proc onBootstrap(lightClient: LightClient, bootstrap: ForkedLightClientBootstrap) =
    withForkyObject(bootstrap):
      when lcDataFork > LightClientDataFork.None:
        info "New Beacon LC bootstrap",
          forkyObject, slot = forkyObject.header.beacon.slot

        let
          root = hash_tree_root(forkyObject.header)
          contentKey = encode(bootstrapContentKey(root))
          forkDigest =
            forkDigestAtEpoch(forkDigests[], epoch(forkyObject.header.beacon.slot), cfg)
          content = encodeBootstrapForked(forkDigest, bootstrap)

        proc GossipRpcAndClose() {.async.} =
          try:
            let
              contentKeyHex = contentKey.asSeq().toHex()
              peers = await portalRpcClient.portal_beaconGossip(
                contentKeyHex, content.toHex()
              )
            info "Beacon LC bootstrap gossiped", peers, contentKey = contentKeyHex
          except CatchableError as e:
            error "JSON-RPC error", error = $e.msg

          await portalRpcClient.close()

        asyncSpawn(GossipRpcAndClose())

  proc onUpdate(lightClient: LightClient, update: ForkedLightClientUpdate) =
    withForkyObject(update):
      when lcDataFork > LightClientDataFork.None:
        info "New Beacon LC update",
          update, slot = forkyObject.attested_header.beacon.slot

        let
          period = forkyObject.attested_header.beacon.slot.sync_committee_period
          contentKey = encode(updateContentKey(period.uint64, uint64(1)))
          forkDigest = forkDigestAtEpoch(
            forkDigests[], epoch(forkyObject.attested_header.beacon.slot), cfg
          )
          content = encodeLightClientUpdatesForked(forkDigest, @[update])

        proc GossipRpcAndClose() {.async.} =
          try:
            let
              contentKeyHex = contentKey.asSeq().toHex()
              peers = await portalRpcClient.portal_beaconGossip(
                contentKeyHex, content.toHex()
              )
            info "Beacon LC bootstrap gossiped", peers, contentKey = contentKeyHex
          except CatchableError as e:
            error "JSON-RPC error", error = $e.msg

          await portalRpcClient.close()

        asyncSpawn(GossipRpcAndClose())

  proc onOptimisticUpdate(
      lightClient: LightClient, update: ForkedLightClientOptimisticUpdate
  ) =
    withForkyObject(update):
      when lcDataFork > LightClientDataFork.None:
        info "New Beacon LC optimistic update",
          update, slot = forkyObject.attested_header.beacon.slot

        let
          slot = forkyObject.signature_slot
          contentKey = encode(optimisticUpdateContentKey(slot.uint64))
          forkDigest = forkDigestAtEpoch(
            forkDigests[], epoch(forkyObject.attested_header.beacon.slot), cfg
          )
          content = encodeOptimisticUpdateForked(forkDigest, update)

        proc GossipRpcAndClose() {.async.} =
          try:
            let
              contentKeyHex = contentKey.asSeq().toHex()
              peers = await portalRpcClient.portal_beaconGossip(
                contentKeyHex, content.toHex()
              )
            info "Beacon LC bootstrap gossiped", peers, contentKey = contentKeyHex
          except CatchableError as e:
            error "JSON-RPC error", error = $e.msg

          await portalRpcClient.close()

        asyncSpawn(GossipRpcAndClose())

  proc onFinalityUpdate(
      lightClient: LightClient, update: ForkedLightClientFinalityUpdate
  ) =
    withForkyObject(update):
      when lcDataFork > LightClientDataFork.None:
        info "New Beacon LC finality update",
          update, slot = forkyObject.attested_header.beacon.slot
        let
          finalizedSlot = forkyObject.finalized_header.beacon.slot
          contentKey = encode(finalityUpdateContentKey(finalizedSlot.uint64))
          forkDigest = forkDigestAtEpoch(
            forkDigests[], epoch(forkyObject.attested_header.beacon.slot), cfg
          )
          content = encodeFinalityUpdateForked(forkDigest, update)

        proc GossipRpcAndClose() {.async.} =
          try:
            let
              contentKeyHex = contentKey.asSeq().toHex()
              peers = await portalRpcClient.portal_beaconGossip(
                contentKeyHex, content.toHex()
              )
            info "Beacon LC bootstrap gossiped", peers, contentKey = contentKeyHex
          except CatchableError as e:
            error "JSON-RPC error", error = $e.msg

          await portalRpcClient.close()

        asyncSpawn(GossipRpcAndClose())

  ###

  waitFor portalRpcClient.connect(config.rpcAddress, Port(config.rpcPort), false)

  info "Listening to incoming network requests"
  network.registerProtocol(
    PeerSync,
    PeerSync.NetworkState.init(cfg, forkDigests, genesisBlockRoot, getBeaconTime),
  )
  network.addValidator(
    getBeaconBlocksTopic(forkDigests.phase0),
    proc(signedBlock: phase0.SignedBeaconBlock): errors.ValidationResult =
      toValidationResult(optimisticProcessor.processSignedBeaconBlock(signedBlock)),
  )
  network.addValidator(
    getBeaconBlocksTopic(forkDigests.altair),
    proc(signedBlock: altair.SignedBeaconBlock): errors.ValidationResult =
      toValidationResult(optimisticProcessor.processSignedBeaconBlock(signedBlock)),
  )
  network.addValidator(
    getBeaconBlocksTopic(forkDigests.bellatrix),
    proc(signedBlock: bellatrix.SignedBeaconBlock): errors.ValidationResult =
      toValidationResult(optimisticProcessor.processSignedBeaconBlock(signedBlock)),
  )
  network.addValidator(
    getBeaconBlocksTopic(forkDigests.capella),
    proc(signedBlock: capella.SignedBeaconBlock): errors.ValidationResult =
      toValidationResult(optimisticProcessor.processSignedBeaconBlock(signedBlock)),
  )
  network.addValidator(
    getBeaconBlocksTopic(forkDigests.deneb),
    proc(signedBlock: deneb.SignedBeaconBlock): errors.ValidationResult =
      toValidationResult(optimisticProcessor.processSignedBeaconBlock(signedBlock)),
  )
  lightClient.installMessageValidators()

  waitFor network.startListening()
  waitFor network.start()

  proc onFinalizedHeader(
      lightClient: LightClient, finalizedHeader: ForkedLightClientHeader
  ) =
    withForkyHeader(finalizedHeader):
      when lcDataFork > LightClientDataFork.None:
        info "New LC finalized header", finalized_header = shortLog(forkyHeader)

  proc onOptimisticHeader(
      lightClient: LightClient, optimisticHeader: ForkedLightClientHeader
  ) =
    withForkyHeader(optimisticHeader):
      when lcDataFork > LightClientDataFork.None:
        info "New LC optimistic header", optimistic_header = shortLog(forkyHeader)

  lightClient.onFinalizedHeader = onFinalizedHeader
  lightClient.onOptimisticHeader = onOptimisticHeader
  lightClient.trustedBlockRoot = some config.trustedBlockRoot

  if config.beaconLightClient:
    lightClient.bootstrapObserver = onBootstrap
    lightClient.updateObserver = onUpdate
    lightClient.finalityUpdateObserver = onFinalityUpdate
    lightClient.optimisticUpdateObserver = onOptimisticUpdate

  func shouldSyncOptimistically(wallSlot: Slot): bool =
    let optimisticHeader = lightClient.optimisticHeader
    withForkyHeader(optimisticHeader):
      when lcDataFork > LightClientDataFork.None:
        # Check whether light client has synced sufficiently close to wall slot
        const maxAge = 2 * SLOTS_PER_EPOCH
        forkyHeader.beacon.slot >= max(wallSlot, maxAge.Slot) - maxAge
      else:
        false

  var blocksGossipState: GossipState = {}
  proc updateBlocksGossipStatus(slot: Slot) =
    let
      isBehind = not shouldSyncOptimistically(slot)

      targetGossipState = getTargetGossipState(
        slot.epoch, cfg.ALTAIR_FORK_EPOCH, cfg.BELLATRIX_FORK_EPOCH,
        cfg.CAPELLA_FORK_EPOCH, cfg.DENEB_FORK_EPOCH, cfg.ELECTRA_FORK_EPOCH, isBehind,
      )

    template currentGossipState(): auto =
      blocksGossipState

    if currentGossipState == targetGossipState:
      return

    if currentGossipState.card == 0 and targetGossipState.card > 0:
      debug "Enabling blocks topic subscriptions", wallSlot = slot, targetGossipState
    elif currentGossipState.card > 0 and targetGossipState.card == 0:
      debug "Disabling blocks topic subscriptions", wallSlot = slot
    else:
      # Individual forks added / removed
      discard

    let
      newGossipForks = targetGossipState - currentGossipState
      oldGossipForks = currentGossipState - targetGossipState

    for gossipFork in oldGossipForks:
      let forkDigest = forkDigests[].atConsensusFork(gossipFork)
      network.unsubscribe(getBeaconBlocksTopic(forkDigest))

    for gossipFork in newGossipForks:
      let forkDigest = forkDigests[].atConsensusFork(gossipFork)
      network.subscribe(
        getBeaconBlocksTopic(forkDigest), blocksTopicParams, enableTopicMetrics = true
      )

    blocksGossipState = targetGossipState

  proc onSecond(time: Moment) =
    let wallSlot = getBeaconTime().slotOrZero()
    updateBlocksGossipStatus(wallSlot + 1)
    lightClient.updateGossipStatus(wallSlot + 1)

  proc runOnSecondLoop() {.async.} =
    let sleepTime = chronos.seconds(1)
    while true:
      let start = chronos.now(chronos.Moment)
      await chronos.sleepAsync(sleepTime)
      let afterSleep = chronos.now(chronos.Moment)
      let sleepTime = afterSleep - start
      onSecond(start)
      let finished = chronos.now(chronos.Moment)
      let processingTime = finished - afterSleep
      trace "onSecond task completed", sleepTime, processingTime

  onSecond(Moment.now())
  lightClient.start()

  asyncSpawn runOnSecondLoop()
  while true:
    poll()

when isMainModule:
  {.pop.}
  var config = makeBannerAndConfig("Nimbus beacon chain bridge", BeaconBridgeConf)
  {.push raises: [].}

  run(config)
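When the `--beacon-light-client` option is enabled, the observer callbacks in the removed file above push LC bootstraps and updates through the analogous `portal_beaconGossip` endpoint, again as a (content key, content value) pair where the value is the fork-digest-prefixed SSZ encoding produced by the `encode*Forked` helpers. A hedged sketch, using the same placeholder convention as the history example above:

```bash
# Placeholders only: the key identifies e.g. an LC bootstrap or update, the value
# is its forked (fork-digest-prefixed) SSZ encoding.
curl -s http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"portal_beaconGossip\",\"params\":[\"$CONTENT_KEY\",\"$CONTENT_VALUE\"]}"
```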
@@ -1,193 +0,0 @@
# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed and distributed under either of
#  * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
#  * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.

{.push raises: [].}

import std/os, json_serialization/std/net, beacon_chain/light_client, beacon_chain/conf

export net, conf

proc defaultDataDir*(): string =
  let dataDir =
    when defined(windows):
      "AppData" / "Roaming" / "FluffyBeaconLCBridge"
    elif defined(macosx):
      "Library" / "Application Support" / "FluffyBeaconLCBridge"
    else:
      ".cache" / "fluffy-beacon-lc-bridge"

  getHomeDir() / dataDir

const defaultDataDirDesc* = defaultDataDir()

type BeaconBridgeConf* = object # Config
  configFile* {.desc: "Loads the configuration from a TOML file", name: "config-file".}:
    Option[InputFile]

  # Logging
  logLevel* {.desc: "Sets the log level", defaultValue: "INFO", name: "log-level".}:
    string

  logStdout* {.
    hidden,
    desc:
      "Specifies what kind of logs should be written to stdout (auto, colors, nocolors, json)",
    defaultValueDesc: "auto",
    defaultValue: StdoutLogKind.Auto,
    name: "log-format"
  .}: StdoutLogKind

  # Storage
  dataDir* {.
    desc: "The directory where beacon_lc_bridge will store all data",
    defaultValue: defaultDataDir(),
    defaultValueDesc: $defaultDataDirDesc,
    abbr: "d",
    name: "data-dir"
  .}: OutDir

  # Portal JSON-RPC API server to connect to
  rpcAddress* {.
    desc: "Listening address of the Portal JSON-RPC server",
    defaultValue: "127.0.0.1",
    name: "rpc-address"
  .}: string

  rpcPort* {.
    desc: "Listening port of the Portal JSON-RPC server",
    defaultValue: 8545,
    name: "rpc-port"
  .}: Port

  ## Bridge options
  beaconLightClient* {.
    desc: "Enable beacon light client content bridging",
    defaultValue: false,
    name: "beacon-light-client"
  .}: bool

  ## Beacon chain light client specific options

  # For Consensus light sync - No default - Needs to be provided by the user
  trustedBlockRoot* {.
    desc:
      "Recent trusted finalized block root to initialize the consensus light client from",
    name: "trusted-block-root"
  .}: Eth2Digest

  # Network
  eth2Network* {.
    desc: "The Eth2 network to join", defaultValueDesc: "mainnet", name: "network"
  .}: Option[string]

  # Libp2p
  bootstrapNodes* {.
    desc: "Specifies one or more bootstrap nodes to use when connecting to the network",
    abbr: "b",
    name: "bootstrap-node"
  .}: seq[string]

  bootstrapNodesFile* {.
    desc: "Specifies a line-delimited file of bootstrap Ethereum network addresses",
    defaultValue: "",
    name: "bootstrap-file"
  .}: InputFile

  listenAddress* {.
    desc: "Listening address for the Ethereum LibP2P and Discovery v5 traffic",
    defaultValueDesc: "*",
    name: "listen-address"
  .}: Option[IpAddress]

  tcpPort* {.
    desc: "Listening TCP port for Ethereum LibP2P traffic",
    defaultValue: defaultEth2TcpPort,
    defaultValueDesc: $defaultEth2TcpPortDesc,
    name: "tcp-port"
  .}: Port

  udpPort* {.
    desc: "Listening UDP port for node discovery",
    defaultValue: defaultEth2TcpPort,
    defaultValueDesc: $defaultEth2TcpPortDesc,
    name: "udp-port"
  .}: Port

  # TODO: Select a lower amount of peers.
  maxPeers* {.
    desc: "The target number of peers to connect to",
    defaultValue: 160, # 5 (fanout) * 64 (subnets) / 2 (subs) for a healthy mesh
    name: "max-peers"
  .}: int

  hardMaxPeers* {.
    desc: "The maximum number of peers to connect to. Defaults to maxPeers * 1.5",
    name: "hard-max-peers"
  .}: Option[int]

  nat* {.
    desc:
      "Specify method to use for determining public address. " &
      "Must be one of: any, none, upnp, pmp, extip:<IP>",
    defaultValue: NatConfig(hasExtIp: false, nat: NatAny),
    defaultValueDesc: "any",
    name: "nat"
  .}: NatConfig

  enrAutoUpdate* {.
    desc:
      "Discovery can automatically update its ENR with the IP address " &
      "and UDP port as seen by other nodes it communicates with. " &
      "This option allows to enable/disable this functionality",
    defaultValue: false,
    name: "enr-auto-update"
  .}: bool

  agentString* {.
    defaultValue: "nimbus",
    desc: "Node agent string which is used as identifier in the LibP2P network",
    name: "agent-string"
  .}: string

  discv5Enabled* {.desc: "Enable Discovery v5", defaultValue: true, name: "discv5".}:
    bool

  directPeers* {.
    desc:
      "The list of priviledged, secure and known peers to connect and" &
      "maintain the connection to, this requires a not random netkey-file." &
      "In the complete multiaddress format like:" &
      "/ip4/<address>/tcp/<port>/p2p/<peerId-public-key>." &
      "Peering agreements are established out of band and must be reciprocal",
    name: "direct-peer"
  .}: seq[string]

func asLightClientConf*(pc: BeaconBridgeConf): LightClientConf =
  return LightClientConf(
    configFile: pc.configFile,
    logLevel: pc.logLevel,
    logStdout: pc.logStdout,
    logFile: none(OutFile),
    dataDir: pc.dataDir,
    eth2Network: pc.eth2Network,
    bootstrapNodes: pc.bootstrapNodes,
    bootstrapNodesFile: pc.bootstrapNodesFile,
    listenAddress: pc.listenAddress,
    tcpPort: pc.tcpPort,
    udpPort: pc.udpPort,
    maxPeers: pc.maxPeers,
    hardMaxPeers: pc.hardMaxPeers,
    nat: pc.nat,
    enrAutoUpdate: pc.enrAutoUpdate,
    agentString: pc.agentString,
    discv5Enabled: pc.discv5Enabled,
    directPeers: pc.directPeers,
    trustedBlockRoot: pc.trustedBlockRoot,
    web3Urls: @[],
    jwtSecret: none(InputFile),
    stopAtEpoch: 0,
  )
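The configuration above maps one-to-one onto the bridge's CLI flags via the `name:` pragmas, so before this removal a typical invocation would have looked roughly like the sketch below; the trusted block root and network are illustrative values only, and the RPC address and port match the defaults of the Fluffy node the bridge connects to:

```bash
# Build and run the (now removed) bridge; values are illustrative only.
make beacon_lc_bridge

TRUSTED_BLOCK_ROOT="<recent-finalized-block-root>"
./build/beacon_lc_bridge \
  --network=mainnet \
  --trusted-block-root="$TRUSTED_BLOCK_ROOT" \
  --rpc-address=127.0.0.1 \
  --rpc-port=8545 \
  --beacon-light-client=true
```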
@@ -1,4 +0,0 @@
# Use only `secp256k1` public key cryptography as an identity in LibP2P.
-d:"libp2p_pki_schemes=secp256k1"

-d:"chronicles_sinks=textlines[dynamic],json[dynamic]"