Merge branch 'master' into vbuterin-patch-13
commit b2d13e3e23
@@ -38,7 +38,6 @@ The primary source of load on the beacon chain are "attestations". Attestations
 | `GWEI_PER_ETH` | 10**9 | Gwei/ETH |
 | `DEPOSIT_CONTRACT_ADDRESS` | **TBD** | - |
 | `TARGET_COMMITTEE_SIZE` | 2**8 (= 256) | validators |
-| `GENESIS_TIME` | **TBD** | seconds |
 | `SLOT_DURATION` | 6 | seconds |
 | `CYCLE_LENGTH` | 2**6 (= 64) | slots | ~6 minutes |
 | `MIN_VALIDATOR_SET_CHANGE_INTERVAL` | 2**8 (= 256) | slots | ~25 minutes |
@@ -92,6 +91,15 @@ The primary source of load on the beacon chain are "attestations". Attestations
 | `ENTRY` | `0` |
 | `EXIT` | `1` |
+
+**Domains for BLS signatures**
+
+| Name | Value |
+| - | :-: |
+| `DOMAIN_DEPOSIT` | `0` |
+| `DOMAIN_ATTESTATION` | `1` |
+| `DOMAIN_PROPOSAL` | `2` |
+| `DOMAIN_LOGOUT` | `3` |
 
 ### PoW chain registration contract
 
 The initial deployment phases of Ethereum 2.0 are implemented without consensus changes to the PoW chain. A registration contract is added to the PoW chain to deposit ETH. This contract has a `registration` function which takes as arguments `pubkey`, `withdrawal_credentials` and `randao_commitment`, as defined in a `ValidatorRecord` below. A BLS `proof_of_possession` of type `bytes` is given as a final argument.
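A minimal sketch of assembling those registration arguments into the proof-of-possession message checked later in this diff (`signed_message = bytes32(pubkey) + bytes2(withdrawal_shard) + withdrawal_credentials + randao_commitment` in `add_validator`); the helper name and big-endian packing are illustrative assumptions, not part of the contract:

```python
def build_deposit_message(pubkey: int, withdrawal_shard: int,
                          withdrawal_credentials: bytes,
                          randao_commitment: bytes) -> bytes:
    # Mirrors bytes32(pubkey) + bytes2(withdrawal_shard) +
    # withdrawal_credentials + randao_commitment from add_validator below;
    # big-endian packing is assumed here.
    return (pubkey.to_bytes(32, 'big') +
            withdrawal_shard.to_bytes(2, 'big') +
            withdrawal_credentials +
            randao_commitment)
```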
@@ -121,7 +129,7 @@ A `BeaconBlock` has the following fields:
     # Specials (e.g. logouts, penalties)
     'specials': [SpecialRecord],
     # Proposer signature
-    'proposer_signature': ['uint256'],
+    'proposer_signature': ['uint384'],
 }
 ```
@@ -148,7 +156,7 @@ An `AttestationRecord` has the following fields:
     # Hash of last justified beacon block
     'justified_block_hash': 'hash32',
     # BLS aggregate signature
-    'aggregate_sig': ['uint256']
+    'aggregate_sig': ['uint384']
 }
 ```
@@ -156,8 +164,6 @@ A `ProposalSignedData` has the following fields:
 
 ```python
 {
-    # Fork version
-    'fork_version': 'uint64',
     # Slot number
     'slot': 'uint64',
     # Shard number (or `2**64 - 1` for beacon chain)
@@ -171,8 +177,6 @@ An `AttestationSignedData` has the following fields:
 
 ```python
 {
-    # Fork version
-    'fork_version': 'uint64',
     # Slot number
     'slot': 'uint64',
     # Shard number
@@ -195,7 +199,7 @@ A `SpecialRecord` has the following fields:
 ```python
 {
     # Kind
-    'kind': 'uint8',
+    'kind': 'uint64',
     # Data
     'data': 'bytes'
 }
@@ -258,7 +262,7 @@ A `ValidatorRecord` has the following fields:
 ```python
 {
     # BLS public key
-    'pubkey': 'uint256',
+    'pubkey': 'uint384',
     # Withdrawal credentials
     'withdrawal_credentials': 'hash32',
     # RANDAO commitment
@@ -268,7 +272,7 @@ A `ValidatorRecord` has the following fields:
     # Balance in Gwei
     'balance': 'uint64',
     # Status code
-    'status': 'uint8',
+    'status': 'uint64',
     # Slot when validator last changed status (or 0)
     'last_status_change_slot': 'uint64'
     # Sequence number when validator exited (or 0)
@@ -337,7 +341,7 @@ For a block on the beacon chain to be processed by a node, four conditions have
 * The parent pointed to by the `ancestor_hashes[0]` has already been processed and accepted
 * An attestation from the _proposer_ of the block (see later for definition) is included along with the block in the network message object
 * The PoW chain block pointed to by the `processed_pow_receipt_root` has already been processed and accepted
-* The node's local clock time is greater than or equal to the minimum timestamp as computed by `GENESIS_TIME + block.slot * SLOT_DURATION`
+* The node's local clock time is greater than or equal to the minimum timestamp as computed by `state.genesis_time + block.slot * SLOT_DURATION`
 
 If these conditions are not met, the client should delay processing the beacon block until the conditions are all satisfied.
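As a worked example of the minimum-timestamp rule (illustrative `genesis_time`; `SLOT_DURATION = 6` per the constants table):

```python
SLOT_DURATION = 6                 # seconds, from the constants table
genesis_time = 1_500_000_000      # illustrative; the real value lives in state
slot = 64

min_timestamp = genesis_time + slot * SLOT_DURATION
assert min_timestamp == 1_500_000_384   # clients delay the block until then
```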
@@ -358,10 +362,13 @@ The beacon chain fork choice rule is a hybrid that combines justification and fi
 ```python
 def lmd_ghost(store, start):
     validators = start.state.validators
-    active_validators = [validators[i] for i in get_active_validator_indices(validators, start.slot)]
-    attestation_targets = [get_latest_attestation_target(store, validator) for validator in active_validators]
+    active_validators = [validators[i] for i in
+                         get_active_validator_indices(validators, start.slot)]
+    attestation_targets = [get_latest_attestation_target(store, validator)
+                           for validator in active_validators]
     def get_vote_count(block):
-        return len([target for target in attestation_targets if get_ancestor(store, target, block.slot) == block])
+        return len([target for target in attestation_targets if
+                    get_ancestor(store, target, block.slot) == block])
 
     head = start
     while 1:
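# Context beyond this hunk: the loop body walks toward the most-attested child
# until reaching a leaf; roughly (a sketch assuming a get_children(store, block)
# helper, which this hunk does not show):
#     children = get_children(store, head)
#     if len(children) == 0:
#         return head
#     head = max(children, key=get_vote_count)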
@@ -706,7 +713,7 @@ The `add_validator` routine is defined below.
 
 This routine should be run for every validator that is inducted as part of a log created on the PoW chain [TODO: explain where to check for these logs]. The status of the validators added after genesis is `PENDING_ACTIVATION`. These logs should be processed in the order in which they are emitted by the PoW chain.
 
-First, a helper function:
+First, some helper functions:
 
 ```python
 def min_empty_validator(validators: List[ValidatorRecord], current_slot: int):
@@ -716,10 +723,18 @@ def min_empty_validator(validators: List[ValidatorRecord], current_slot: int):
     return None
 ```
 
+```python
+def get_fork_version(state: State, slot: int) -> int:
+    return state.pre_fork_version if slot < state.fork_slot_number else state.post_fork_version
+
+def get_domain(state: State, slot: int, base_domain: int) -> int:
+    return get_fork_version(state, slot) * 2**32 + base_domain
+```
+
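A quick check of the domain packing (illustrative fork values, not from the diff):

```python
DOMAIN_DEPOSIT = 0

# With pre_fork_version = 0 and post_fork_version = 1:
assert 0 * 2**32 + DOMAIN_DEPOSIT == 0       # slot before fork_slot_number
assert 1 * 2**32 + DOMAIN_DEPOSIT == 2**32   # slot at or after fork_slot_number
```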
 Now, to add a validator:
 
 ```python
-def add_validator(validators: List[ValidatorRecord],
+def add_validator(state: State,
                   pubkey: int,
                   proof_of_possession: bytes,
                   withdrawal_credentials: Hash32,
@@ -731,9 +746,10 @@ def add_validator(validators: List[ValidatorRecord],
     signed_message = bytes32(pubkey) + bytes2(withdrawal_shard) + withdrawal_credentials + randao_commitment
     assert BLSVerify(pub=pubkey,
                      msg=hash(signed_message),
-                     sig=proof_of_possession)
+                     sig=proof_of_possession,
+                     domain=get_domain(state, current_slot, DOMAIN_DEPOSIT))
     # Pubkey uniqueness
-    assert pubkey not in [v.pubkey for v in validators]
+    assert pubkey not in [v.pubkey for v in state.validators]
     rec = ValidatorRecord(
         pubkey=pubkey,
         withdrawal_credentials=withdrawal_credentials,
@@ -744,15 +760,17 @@ def add_validator(validators: List[ValidatorRecord],
         last_status_change_slot=current_slot,
         exit_seq=0
     )
-    index = min_empty_validator(validators)
+    index = min_empty_validator(state.validators)
     if index is None:
-        validators.append(rec)
-        return len(validators) - 1
+        state.validators.append(rec)
+        return len(state.validators) - 1
     else:
-        validators[index] = rec
+        state.validators[index] = rec
         return index
 ```
 
 `BLSVerify` is a function for verifying a BLS12-381 signature, defined in the BLS12-381 spec.
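The diff spells the key as both `pub=` and `pubkey=` (and once passes the message as `data=`), but the call shape is consistent; a non-normative Python stub of that interface, with a placeholder body:

```python
from typing import List

def BLSVerify(pubkey: int, msg: bytes, sig: List[int], domain: int) -> bool:
    # Placeholder: a real implementation defers to a BLS12-381 library.
    # The domain is mixed into signing and verification so a signature for one
    # purpose (e.g. DOMAIN_DEPOSIT) cannot be replayed for another.
    raise NotImplementedError
```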
 
 ### Routine for removing a validator
 
 ```python
@@ -814,18 +832,18 @@ Verify that there are at most `MAX_ATTESTATION_COUNT` `AttestationRecord` object
 * Compute `parent_hashes` = `[get_block_hash(state, block, slot - CYCLE_LENGTH + i) for i in range(1, CYCLE_LENGTH - len(oblique_parent_hashes) + 1)] + oblique_parent_hashes` (eg, if `CYCLE_LENGTH = 4`, `slot = 5`, the actual block hashes starting from slot 0 are `Z A B C D E F G H I J`, and `oblique_parent_hashes = [D', E']` then `parent_hashes = [B, C, D', E']`). Note that when *creating* an attestation for a block, the hash of that block itself won't yet be in the `state`, so you would need to add it explicitly.
 * Let `attestation_indices` be `get_shards_and_committees_for_slot(state, slot)[x]`, choosing `x` so that `attestation_indices.shard` equals the `shard` value provided to find the set of validators that is creating this attestation record.
 * Verify that `len(attester_bitfield) == ceil_div8(len(attestation_indices))`, where `ceil_div8 = (x + 7) // 8`. Verify that bits `len(attestation_indices)...` and higher, if present (i.e. `len(attestation_indices)` is not a multiple of 8), are all zero.
-* Derive a group public key by adding the public keys of all of the attesters in `attestation_indices` for whom the corresponding bit in `attester_bitfield` (the ith bit is `(attester_bitfield[i // 8] >> (7 - (i %8))) % 2`) equals 1.
-* Let `fork_version = pre_fork_version if slot < fork_slot_number else post_fork_version`.
-* Verify that `aggregate_sig` verifies using the group pubkey generated and the serialized form of `AttestationSignedData(fork_version, slot, shard, parent_hashes, shard_block_hash, last_crosslinked_hash, shard_block_combined_data_root, justified_slot)` as the message.
-* [TO BE REMOVED IN PHASE 1] Verify that `shard_block_hash == bytes([0] * 32)`
+* Derive a `group_public_key` by adding the public keys of all of the attesters in `attestation_indices` for whom the corresponding bit in `attester_bitfield` (the ith bit is `(attester_bitfield[i // 8] >> (7 - (i % 8))) % 2`) equals 1.
+* Let `data = AttestationSignedData(slot, shard, parent_hashes, shard_block_hash, last_crosslinked_hash, shard_block_combined_data_root, justified_slot)`.
+* Check `BLSVerify(pubkey=group_public_key, msg=data, sig=aggregate_sig, domain=get_domain(state, slot, DOMAIN_ATTESTATION))`.
+* [TO BE REMOVED IN PHASE 1] Verify that `shard_block_hash == bytes([0] * 32)`.
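A sketch of the bit-indexing convention used in the bitfield checks above (illustrative helper name):

```python
def has_attested(attester_bitfield: bytes, i: int) -> bool:
    # ith bit, most-significant bit first within each byte:
    # (attester_bitfield[i // 8] >> (7 - (i % 8))) % 2
    return (attester_bitfield[i // 8] >> (7 - (i % 8))) % 2 == 1

bitfield = bytes([0b10000000, 0b01000000])   # validators 0 and 9 attested
assert has_attested(bitfield, 0)
assert has_attested(bitfield, 9)
assert not has_attested(bitfield, 1)
```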
 
 Extend the list of `AttestationRecord` objects in the `state` with those included in the block, ordering the new additions in the same order as they came in the block.
 
 ### Verify proposer signature
 
-Let `proposal_hash = hash(ProposalSignedData(fork_version, block.slot, 2**64 - 1, block_hash_without_sig))` where `block_hash_without_sig` is the hash of the block except setting `proposer_signature` to `[0, 0]`.
+Let `proposal_hash = hash(ProposalSignedData(block.slot, 2**64 - 1, block_hash_without_sig))` where `block_hash_without_sig` is the hash of the block except setting `proposer_signature` to `[0, 0]`.
 
-Verify that `BLSVerify(pubkey=get_beacon_proposer(state, block.slot).pubkey, data=proposal_hash, sig=block.proposer_signature)` passes.
+Verify that `BLSVerify(pubkey=get_beacon_proposer(state, block.slot).pubkey, data=proposal_hash, sig=block.proposer_signature, domain=get_domain(state, block.slot, DOMAIN_PROPOSAL))` passes.
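Putting the two steps together (a sketch; `hash` is the spec's 32-byte hash, and `hash_block_without_signature` is a hypothetical helper standing in for the prose above):

```python
def verify_proposer_signature(state, block) -> None:
    # Hash the block with proposer_signature set to [0, 0], per the prose above
    block_hash_without_sig = hash_block_without_signature(block)  # hypothetical
    proposal_hash = hash(ProposalSignedData(block.slot, 2**64 - 1,
                                            block_hash_without_sig))
    assert BLSVerify(pubkey=get_beacon_proposer(state, block.slot).pubkey,
                     data=proposal_hash,
                     sig=block.proposer_signature,
                     domain=get_domain(state, block.slot, DOMAIN_PROPOSAL))
```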
 
 ### Verify and process RANDAO reveal
 
@@ -859,12 +877,12 @@ For each `SpecialRecord` `obj` in `block.specials`, verify that its `kind` is on
 ```python
 {
     'validator_index': 'uint64',
-    'signature': '[uint256]'
+    'signature': '[uint384]'
 }
 ```
 Perform the following checks:
 
-* Let `fork_version = pre_fork_version if block.slot < fork_slot_number else post_fork_version`. Verify that `BLSVerify(pubkey=validators[data.validator_index].pubkey, msg=hash(LOGOUT_MESSAGE + bytes8(fork_version)), sig=data.signature)`
+* Verify that `BLSVerify(pubkey=validators[data.validator_index].pubkey, msg=bytes([0] * 32), sig=data.signature, domain=get_domain(state, current_slot, DOMAIN_LOGOUT))`
 * Verify that `validators[validator_index].status == ACTIVE`.
 * Verify that `block.slot >= last_status_change_slot + SHARD_PERSISTENT_COMMITTEE_CHANGE_PERIOD`
@@ -876,16 +894,16 @@ Run `exit_validator(data.validator_index, state, block, penalize=False, current_
 {
     'vote1_aggregate_sig_indices': '[uint24]',
     'vote1_data': AttestationSignedData,
-    'vote1_aggregate_sig': '[uint256]',
+    'vote1_aggregate_sig': '[uint384]',
     'vote2_aggregate_sig_indices': '[uint24]',
     'vote2_data': AttestationSignedData,
-    'vote2_aggregate_sig': '[uint256]',
+    'vote2_aggregate_sig': '[uint384]',
 }
 ```
 
 Perform the following checks:
 
-* For each `aggregate_sig`, verify that `BLSVerify(pubkey=aggregate_pubkey([validators[i].pubkey for i in aggregate_sig_indices]), msg=vote_data, sig=aggsig)` passes.
+* For each `vote`, verify that `BLSVerify(pubkey=aggregate_pubkey([validators[i].pubkey for i in vote_aggregate_sig_indices]), msg=vote_data, sig=vote_aggregate_sig, domain=get_domain(state, vote_data.slot, DOMAIN_ATTESTATION))` passes.
 * Verify that `vote1_data != vote2_data`.
 * Let `intersection = [x for x in vote1_aggregate_sig_indices if x in vote2_aggregate_sig_indices]`. Verify that `len(intersection) >= 1`.
 * Verify that `vote1_data.justified_slot < vote2_data.justified_slot < vote2_data.slot <= vote1_data.slot`.
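The last inequality is the surround-vote condition; a small predicate makes the geometry explicit (illustrative only):

```python
def is_surround_vote(vote1_data, vote2_data) -> bool:
    # vote1 surrounds vote2: vote1's source is strictly earlier and its target
    # slot is at least as late, so a validator signing both has equivocated.
    return (vote1_data.justified_slot < vote2_data.justified_slot
            < vote2_data.slot <= vote1_data.slot)
```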
@@ -898,12 +916,12 @@ For each validator index `v` in `intersection`, if `state.validators[v].status`
 {
     'proposer_index': 'uint24',
     'proposal1_data': ProposalSignedData,
-    'proposal1_signature': '[uint256]',
+    'proposal1_signature': '[uint384]',
     'proposal2_data': ProposalSignedData,
-    'proposal2_signature': '[uint256]',
+    'proposal2_signature': '[uint384]',
 }
 ```
-For each `proposal_signature`, verify that `BLSVerify(pubkey=validators[proposer_index].pubkey, msg=hash(proposal_data), sig=proposal_signature)` passes. Verify that `proposal1_data.slot == proposal2_data.slot` but `proposal1 != proposal2`. If `state.validators[proposer_index].status` does not equal `PENALIZED`, then run `exit_validator(proposer_index, state, penalize=True, current_slot=block.slot)`
+For each `proposal_signature`, verify that `BLSVerify(pubkey=validators[proposer_index].pubkey, msg=hash(proposal_data), sig=proposal_signature, domain=get_domain(state, proposal_data.slot, DOMAIN_PROPOSAL))` passes. Verify that `proposal1_data.slot == proposal2_data.slot` but `proposal1 != proposal2`. If `state.validators[proposer_index].status` does not equal `PENALIZED`, then run `exit_validator(proposer_index, state, penalize=True, current_slot=block.slot)`
 
 #### DEPOSIT_PROOF
@@ -19,7 +19,14 @@ Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md
 | Constant | Value | Unit | Approximation |
 |------------------------|-----------------|-------|---------------|
 | `CHUNK_SIZE` | 2**8 (= 256) | bytes | |
-| `MAX_SHARD_BLOCK_SIZE` | 2**15 (= 32768) | bytes | |
+| `SHARD_BLOCK_SIZE` | 2**14 (= 16384) | bytes | |
 
+### Flags, domains, etc.
+
+| Constant | Value |
+|------------------------|-----------------|
+| `SHARD_PROPOSER_DOMAIN`| 129 |
+| `SHARD_ATTESTER_DOMAIN`| 130 |
+
 ## Data Structures
@@ -43,7 +50,9 @@ A `ShardBlock` object has the following fields:
     'data_root': 'hash32'
     # State root (placeholder for now)
     'state_root': 'hash32',
-    # Attestation (including block signature)
+    # Block signature
+    'signature': ['uint256'],
+    # Attestation
     'attester_bitfield': 'bytes',
     'aggregate_sig': ['uint256'],
 }
@@ -61,31 +70,27 @@ To validate a block header on shard `shard_id`, compute as follows:
 * Verify that `beacon_chain_ref` is the hash of a block in the beacon chain with slot less than or equal to `slot`. Verify that `beacon_chain_ref` is equal to or a descendant of the `beacon_chain_ref` specified in the `ShardBlock` pointed to by `parent_hash`.
 * Let `state` be the state of the beacon chain block referred to by `beacon_chain_ref`. Let `validators` be `[validators[i] for i in state.current_persistent_committees[shard_id]]`.
 * Assert `len(attester_bitfield) == ceil_div8(len(validators))`
-* Let `curblock_proposer_index = hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators)`. Let `parent_proposer_index` be the same value calculated for the parent block.
-* Make sure that the `parent_proposer_index`'th bit in the `attester_bitfield` is set to 1.
-* Generate the group public key by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify the `aggregate_sig` using this as the pubkey and the `parent_hash` as the message.
+* Let `proposer_index = hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators)`. Let `msg` be the block but with `block.signature` set to `[0, 0]`. Verify that `BLSVerify(pub=validators[proposer_index].pubkey, msg=hash(msg), sig=block.signature, domain=get_domain(state, slot, SHARD_PROPOSER_DOMAIN))` passes.
+* Generate the `group_public_key` by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify that `BLSVerify(pub=group_public_key, msg=parent_hash, sig=block.aggregate_sig, domain=get_domain(state, slot, SHARD_ATTESTER_DOMAIN))` passes.
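A sketch of the proposer-selection arithmetic in the bullets above (assumes the spec's 32-byte `hash` and big-endian `bytes8` packing; reading the digest as a big-endian integer before the modulo is this sketch's assumption):

```python
def shard_proposer_index(randao_mix: bytes, shard_id: int, slot: int,
                         num_validators: int) -> int:
    # hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators),
    # where hash is the spec's 32-byte hash function, not Python's builtin
    seed = randao_mix + shard_id.to_bytes(8, 'big') + slot.to_bytes(8, 'big')
    return int.from_bytes(hash(seed), 'big') % num_validators
```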
 
+### Block Merklization helper
+
+```python
+def merkle_root(block_body):
+    assert len(block_body) == SHARD_BLOCK_SIZE
+    chunks = SHARD_BLOCK_SIZE // CHUNK_SIZE
+    o = [0] * chunks + [block_body[i * CHUNK_SIZE: (i+1) * CHUNK_SIZE] for i in range(chunks)]
+    for i in range(chunks-1, 0, -1):
+        o[i] = hash(o[i*2] + o[i*2+1])
+    return o[1]
+```
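With the constants above (`CHUNK_SIZE = 2**8`, `SHARD_BLOCK_SIZE = 2**14`) the helper builds a 64-leaf binary tree; a usage sketch:

```python
CHUNK_SIZE = 2**8           # 256-byte chunks
SHARD_BLOCK_SIZE = 2**14    # 16384 bytes = 64 chunks

block_body = bytes(SHARD_BLOCK_SIZE)   # an all-zero body, for illustration
root = merkle_root(block_body)         # root of a 6-level binary tree
```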
 
 ### Verifying shard block data
 
-At network layer, we expect a shard block header to be broadcast along with its `block_body`. First, we define a helper function that takes as input beacon chain state and outputs the max block size in bytes:
+At network layer, we expect a shard block header to be broadcast along with its `block_body`.
 
-```python
-def shard_block_maxbytes(state):
-    max_grains = MAX_SHARD_BLOCK_SIZE // CHUNK_SIZE
-    validators_at_target_committee_size = SHARD_COUNT * TARGET_COMMITTEE_SIZE
-
-    # number of grains per block is proportional to the number of validators
-    # up until `validators_at_target_committee_size`
-    grains = min(
-        len(get_active_validator_indices(state.validators)) * max_grains // validators_at_target_committee_size,
-        max_grains
-    )
-
-    return CHUNK_SIZE * grains
-```
-
-* Verify that `len(block_body) == shard_block_maxbytes(state)`
-* Define `filler_bytes = next_power_of_2(len(block_body)) - len(block_body)`. Compute a simple binary Merkle tree of `block_body + bytes([0] * filler_bytes)` and verify that the root equals the `data_root` in the header.
+* Verify that `len(block_body) == SHARD_BLOCK_SIZE`
+* Verify that `merkle_root(block_body)` equals the `data_root` in the header.
 
 ### Verifying a crosslink
 
@@ -93,23 +98,13 @@ A node should sign a crosslink only if the following conditions hold. **If a nod
 
 First, the conditions must recursively apply to the crosslink referenced in `last_crosslink_hash` for the same shard (unless `last_crosslink_hash` equals zero, in which case we are at the genesis).
 
-Second, we verify the `shard_block_combined_data_root`. Let `h` be the slot _immediately after_ the slot of the shard block included by the last crosslink, and `h+n-1` be the slot number of the block directly referenced by the current `shard_block_hash`. Let `B[i]` be the block at slot `h+i` in the shard chain. Let `bodies[0] .... bodies[n-1]` be the bodies of these blocks and `roots[0] ... roots[n-1]` the data roots. If there is a missing slot in the shard chain at position `h+i`, then `bodies[i] == b'\x00' * shard_block_maxbytes(state[i])` and `roots[i]` be the Merkle root of the empty data. Define `compute_merkle_root` be a simple Merkle root calculating function that takes as input a list of objects, where the list's length must be an exact power of two. Let `state[i]` be the beacon chain state at height `h+i` (if the beacon chain is missing a block at some slot, the state is unchanged), and `depths[i]` be equal to `log2(next_power_of_2(shard_block_maxbytes(state[i]) // CHUNK_SIZE))` (ie. the expected depth of the i'th data tree). We define the function for computing the combined data root as follows:
+Second, we verify the `shard_block_combined_data_root`. Let `h` be the slot _immediately after_ the slot of the shard block included by the last crosslink, and `h+n-1` be the slot number of the block directly referenced by the current `shard_block_hash`. Let `B[i]` be the block at slot `h+i` in the shard chain. Let `bodies[0] ... bodies[n-1]` be the bodies of these blocks and `roots[0] ... roots[n-1]` the data roots. If there is a missing slot in the shard chain at position `h+i`, then `bodies[i] == b'\x00' * SHARD_BLOCK_SIZE` and `roots[i]` is the Merkle root of the empty data. Define `compute_merkle_root` to be a simple Merkle root calculating function that takes as input a list of objects, where the list's length must be an exact power of two. We define the function for computing the combined data root as follows:
 
 ```python
-def get_zeroroot_at_depth(n):
-    o = b'\x00' * CHUNK_SIZE
-    for i in range(n):
-        o = hash(o + o)
-    return o
+ZERO_ROOT = merkle_root(bytes([0] * SHARD_BLOCK_SIZE))
 
-def mk_combined_data_root(depths, roots):
-    default_value = get_zeroroot_at_depth(max(depths))
-    data = [default_value for _ in range(next_power_of_2(len(roots)))]
-    for i, (depth, root) in enumerate(zip(depths, roots)):
-        value = root
-        for j in range(depth, max(depths)):
-            value = hash(value, get_zeroroot_at_depth(depth + j))
-        data[i] = value
+def mk_combined_data_root(roots):
+    data = roots + [ZERO_ROOT for _ in range(len(roots), next_power_of_2(len(roots)))]
     return compute_merkle_root(data)
 ```
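For instance, three data roots are padded to the next power of two before Merklization (the `next_power_of_2` definition here is an assumed reading of the spec helper, which this diff uses without defining):

```python
def next_power_of_2(x: int) -> int:
    # smallest power of two >= x (assumed semantics)
    return 1 if x == 0 else 2**((x - 1).bit_length())

roots = ['R0', 'R1', 'R2']   # placeholder data roots
padded = roots + ['ZERO_ROOT'] * (next_power_of_2(len(roots)) - len(roots))
assert padded == ['R0', 'R1', 'R2', 'ZERO_ROOT']
```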
@@ -117,12 +112,7 @@ This outputs the root of a tree of the data roots, with the data roots all adjus
 
 ```python
-def mk_combined_data_root(depths, bodies):
-    default_value = get_zeroroot_at_depth(max(depths))
-    padded_body_length = max([CHUNK_SIZE * 2**d for d in depths])
-    data = b''
-    for body in bodies:
-        padded_body = body + bytes([0] * (padded_body_length - len(body)))
-        data += padded_body
+def mk_combined_data_root(bodies):
+    data = b''.join(bodies)
+    data += bytes([0] * (next_power_of_2(len(data)) - len(data)))
     return compute_merkle_root([data[pos:pos+CHUNK_SIZE] for pos in range(0, len(data), CHUNK_SIZE)])
 ```
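A concrete check of the chunking (two illustrative zero bodies; their joined length is already a power of two, so no padding is needed):

```python
CHUNK_SIZE = 2**8
SHARD_BLOCK_SIZE = 2**14

bodies = [bytes(SHARD_BLOCK_SIZE), bytes(SHARD_BLOCK_SIZE)]
data = b''.join(bodies)     # 32768 bytes
chunks = [data[pos:pos+CHUNK_SIZE] for pos in range(0, len(data), CHUNK_SIZE)]
assert len(chunks) == 128   # these 128 chunks feed compute_merkle_root
```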