Merge branch 'master' into little-endian
commit c9b3a7b52e
@@ -2,7 +2,9 @@

+[![Join the chat at https://gitter.im/ethereum/sharding](https://badges.gitter.im/ethereum/sharding.svg)](https://gitter.im/ethereum/sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+
To learn more about sharding and eth2.0/Serenity, see the [sharding FAQ](https://github.com/ethereum/wiki/wiki/Sharding-FAQs) and the [research compendium](https://notes.ethereum.org/s/H1PGqDhpm).

This repo hosts the current eth2.0 specifications. Discussions about design rationale and proposed changes can be brought up and discussed as issues. Solidified, agreed upon changes to spec can be made through pull requests.

# Specs
@@ -13,7 +15,7 @@ Core specifications for eth2.0 client validation can be found in [specs/core](sp

## Design goals

The following are the broad design goals for Ethereum 2.0:

* to minimize complexity, even at the cost of some losses in efficiency
-* to remain live through major network partitions and when very large portions of nodes going offline
+* to remain live through major network partitions and when very large portions of nodes go offline
* to select all components such that they are either quantum secure or can be easily swapped out for quantum secure counterparts when available
* to utilize crypto and design techniques that allow for a large participation of validators in total and per unit time
* to allow for a typical consumer laptop with `O(C)` resources to process/validate `O(1)` shards (including any system level validation such as the beacon chain)

@@ -0,0 +1,140 @@
# BLS signature verification

**Warning: This document is pending academic review and should not yet be considered secure.**

## Table of contents
<!-- TOC -->

- [BLS signature verification](#bls-signature-verification)
    - [Table of contents](#table-of-contents)
    - [Curve parameters](#curve-parameters)
    - [Point representations](#point-representations)
        - [G1 points](#g1-points)
        - [G2 points](#g2-points)
    - [Helpers](#helpers)
        - [`hash_to_G2`](#hash_to_g2)
        - [`modular_squareroot`](#modular_squareroot)
    - [Aggregation operations](#aggregation-operations)
        - [`bls_aggregate_pubkeys`](#bls_aggregate_pubkeys)
        - [`bls_aggregate_signatures`](#bls_aggregate_signatures)
    - [Signature verification](#signature-verification)
        - [`bls_verify`](#bls_verify)
        - [`bls_verify_multiple`](#bls_verify_multiple)

<!-- /TOC -->

## Curve parameters

The BLS12-381 curve parameters are defined [here](https://z.cash/blog/new-snark-curve).

## Point representations

We represent points in the groups G1 and G2 following [zkcrypto/pairing](https://github.com/zkcrypto/pairing/tree/master/src/bls12_381). We denote by `q` the field modulus and by `i` the imaginary unit.

### G1 points

A point in G1 is represented as a 384-bit integer `z` decomposed as a 381-bit integer `x` and three 1-bit flags in the top bits:

* `x = z % 2**381`
* `a_flag = (z % 2**382) // 2**381`
* `b_flag = (z % 2**383) // 2**382`
* `c_flag = (z % 2**384) // 2**383`

Respecting bit ordering, `z` is decomposed as `(c_flag, b_flag, a_flag, x)`.

We require:

* `x < q`
* `c_flag == 1`
* if `b_flag == 1` then `a_flag == x == 0` and `z` represents the point at infinity
* if `b_flag == 0` then `z` represents the point `(x, y)` where `y` is the valid coordinate such that `(y * 2) // q == a_flag`

### G2 points

A point in G2 is represented as a pair of 384-bit integers `(z1, z2)`. We decompose `z1` as above into `x1`, `a_flag1`, `b_flag1`, `c_flag1` and `z2` into `x2`, `a_flag2`, `b_flag2`, `c_flag2`.

We require:

* `x1 < q` and `x2 < q`
* `a_flag2 == b_flag2 == c_flag2 == 0`
* `c_flag1 == 1`
* if `b_flag1 == 1` then `a_flag1 == x1 == x2 == 0` and `(z1, z2)` represents the point at infinity
* if `b_flag1 == 0` then `(z1, z2)` represents the point `(x1 * i + x2, y)` where `y` is the valid coordinate such that the imaginary part `y_im` of `y` satisfies `(y_im * 2) // q == a_flag1`
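
For concreteness, here is a non-normative sketch of the G1 decoding rules above; the function name `decode_g1` and treating `q` as a module-level constant are assumptions of this example, and the G2 checks apply the same decomposition to `z1` and `z2`:

```python
q = 4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787

def decode_g1(z: int):
    # Split z into (c_flag, b_flag, a_flag, x) per the bit layout above
    x = z % 2**381
    a_flag = (z >> 381) & 1
    b_flag = (z >> 382) & 1
    c_flag = (z >> 383) & 1
    assert c_flag == 1  # always set in this encoding
    assert x < q        # x must be a canonical field element
    if b_flag == 1:
        assert a_flag == 0 and x == 0
        return None     # point at infinity
    return x, a_flag    # caller recovers y from x, picking the root with (y * 2) // q == a_flag
```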

## Helpers

### `hash_to_G2`

```python
G2_cofactor = 305502333931268344200999753193121504214466019254188142667664032982267604182971884026507427359259977847832272839041616661285803823378372096355777062779109
q = 4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787

def hash_to_G2(message: bytes32, domain: uint64) -> [uint384]:
    # Initial candidate x coordinate
    x_re = int.from_bytes(hash(message + bytes8(domain) + b'\x01'), 'big')
    x_im = int.from_bytes(hash(message + bytes8(domain) + b'\x02'), 'big')
    x_coordinate = Fq2([x_re, x_im])  # x = x_re + i * x_im

    # Test candidate y coordinates until one is found
    while 1:
        y_coordinate_squared = x_coordinate ** 3 + Fq2([4, 4])  # The curve is y^2 = x^3 + 4(i + 1)
        y_coordinate = modular_squareroot(y_coordinate_squared)
        if y_coordinate is not None:  # Check if quadratic residue found
            return multiply_in_G2((x_coordinate, y_coordinate), G2_cofactor)
        x_coordinate += Fq2([1, 0])  # Add 1 and try again
```

### `modular_squareroot`

`modular_squareroot(x)` returns a solution `y` to `y**2 % q == x`, and `None` if none exists. If there are two solutions, the one with higher imaginary component is favored; if both solutions have equal imaginary component, the one with higher real component is favored.

```python
Fq2_order = q ** 2 - 1
eighth_roots_of_unity = [Fq2([1, 1]) ** ((Fq2_order * k) // 8) for k in range(8)]

def modular_squareroot(value: int) -> int:
    candidate_squareroot = value ** ((Fq2_order + 8) // 16)
    check = candidate_squareroot ** 2 / value
    if check in eighth_roots_of_unity[::2]:
        x1 = candidate_squareroot / eighth_roots_of_unity[eighth_roots_of_unity.index(check) // 2]
        x2 = -x1
        return x1 if (x1.coeffs[1].n, x1.coeffs[0].n) > (x2.coeffs[1].n, x2.coeffs[0].n) else x2
    return None
```

## Aggregation operations

### `bls_aggregate_pubkeys`

Let `bls_aggregate_pubkeys(pubkeys: [uint384]) -> uint384` return `pubkeys[0] + .... + pubkeys[len(pubkeys)-1]`, where `+` is the elliptic curve addition operation over the G1 curve.

### `bls_aggregate_signatures`

Let `bls_aggregate_signatures(signatures: [[uint384]]) -> [uint384]` return `signatures[0] + .... + signatures[len(signatures)-1]`, where `+` is the elliptic curve addition operation over the G2 curve.

## Signature verification

In the following `e` is the pairing function and `g` is the G1 generator with the following coordinates (see [here](https://github.com/zkcrypto/pairing/tree/master/src/bls12_381#g1)):

```python
g_x = 3685416753713387016781088315183077757961620795782546409894578378688607592378376318836054947676345821548104185464507
g_y = 1339506544944476473020471379941921221584933875938349620426543736416511423956333506472724655353366534992391756441569
g = Fq2(g_x, g_y)
```

### `bls_verify`

Let `bls_verify(pubkey: uint384, message: bytes32, signature: [uint384], domain: uint64) -> bool`:

* Verify that `pubkey` is a valid G1 point.
* Verify that `signature` is a valid G2 point.
* Verify that `e(pubkey, hash_to_G2(message, domain)) == e(g, signature)`.
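
As a sketch of that pairing check, again assuming `py_ecc` as the library: rewriting the equality as a product against a negated generator is a standard equivalent form, not the spec's wording. `bls_verify_multiple` below replaces the left-hand pairing with a product of per-message pairings.

```python
from py_ecc.optimized_bls12_381 import FQ12, G1, neg, pairing

def verify(pubkey, message_hash_point, signature):
    # e(pubkey, H(m)) == e(g, signature), rewritten as
    # e(pubkey, H(m)) * e(-g, signature) == 1 so a single comparison suffices.
    # py_ecc's pairing takes the G2 point first, then the G1 point.
    return pairing(message_hash_point, pubkey) * pairing(signature, neg(G1)) == FQ12.one()
```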

### `bls_verify_multiple`

Let `bls_verify_multiple(pubkeys: [uint384], messages: [bytes32], signature: [uint384], domain: uint64) -> bool`:

* Verify that each `pubkey` in `pubkeys` is a valid G1 point.
* Verify that `signature` is a valid G2 point.
* Verify that `len(pubkeys)` equals `len(messages)` and denote the length `L`.
* Verify that `e(pubkeys[0], hash_to_G2(messages[0], domain)) * ... * e(pubkeys[L-1], hash_to_G2(messages[L-1], domain)) == e(g, signature)`.

File diff suppressed because it is too large

@@ -18,8 +18,15 @@ Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md

| Constant | Value | Unit | Approximation |
|------------------------|-----------------|-------|---------------|
-| `CHUNK_SIZE` | 2**8 (= 256) | bytes | |
-| `MAX_SHARD_BLOCK_SIZE` | 2**15 (= 32768) | bytes | |
+| `SHARD_CHUNK_SIZE` | 2**5 (= 32) | bytes | |
+| `SHARD_BLOCK_SIZE` | 2**14 (= 16384) | bytes | |

+### Flags, domains, etc.
+
+| Constant | Value |
+|------------------------|-----------------|
+| `SHARD_PROPOSER_DOMAIN`| 129 |
+| `SHARD_ATTESTER_DOMAIN`| 130 |

## Data Structures

@@ -33,8 +40,8 @@ A `ShardBlock` object has the following fields:

    'slot': 'uint64',
    # What shard is it on
    'shard_id': 'uint64',
-    # Parent block hash
-    'parent_hash': 'hash32',
+    # Parent block's hash of root
+    'parent_root': 'hash32',
    # Beacon chain block
    'beacon_chain_ref': 'hash32',
    # Depth of the Merkle tree

@@ -43,9 +50,11 @@ A `ShardBlock` object has the following fields:

    'data_root': 'hash32'
    # State root (placeholder for now)
    'state_root': 'hash32',
-    # Attestation (including block signature)
-    'attester_bitfield': 'bytes',
-    'aggregate_sig': ['uint256'],
+    # Block signature
+    'signature': ['uint384'],
+    # Attestation
+    'participation_bitfield': 'bytes',
+    'aggregate_signature': ['uint384'],
}
```

@@ -53,63 +62,49 @@ A `ShardBlock` object has the following fields:

For a block on a shard to be processed by a node, the following conditions must be met:

-* The `ShardBlock` pointed to by `parent_hash` has already been processed and accepted
+* The `ShardBlock` pointed to by `parent_root` has already been processed and accepted
* The signature for the block from the _proposer_ (see below for definition) of that block is included along with the block in the network message object

To validate a block header on shard `shard_id`, compute as follows (a sketch of the proposer-selection step follows this list):

-* Verify that `beacon_chain_ref` is the hash of a block in the beacon chain with slot less than or equal to `slot`. Verify that `beacon_chain_ref` is equal to or a descendant of the `beacon_chain_ref` specified in the `ShardBlock` pointed to by `parent_hash`.
+* Verify that `beacon_chain_ref` is the hash of a block in the beacon chain with slot less than or equal to `slot`. Verify that `beacon_chain_ref` is equal to or a descendant of the `beacon_chain_ref` specified in the `ShardBlock` pointed to by `parent_root`.
* Let `state` be the state of the beacon chain block referred to by `beacon_chain_ref`. Let `validators` be `[validators[i] for i in state.current_persistent_committees[shard_id]]`.
-* Assert `len(attester_bitfield) == ceil_div8(len(validators))`
-* Let `curblock_proposer_index = hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators)`. Let `parent_proposer_index` be the same value calculated for the parent block.
-* Make sure that the `parent_proposer_index`'th bit in the `attester_bitfield` is set to 1.
-* Generate the group public key by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify the `aggregate_sig` using this as the pubkey and the `parent_hash` as the message.
+* Assert `len(participation_bitfield) == ceil_div8(len(validators))`
+* Let `proposer_index = hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators)`. Let `msg` be the block but with `block.signature` set to `[0, 0]`. Verify that `BLSVerify(pub=validators[proposer_index].pubkey, msg=hash(msg), sig=block.signature, domain=get_domain(state, slot, SHARD_PROPOSER_DOMAIN))` passes.
+* Generate the `group_public_key` by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify that `BLSVerify(pub=group_public_key, msg=parent_root, sig=block.aggregate_signature, domain=get_domain(state, slot, SHARD_ATTESTER_DOMAIN))` passes.
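
A sketch of the proposer-selection computation above; the BLAKE2b stand-in for `hash` follows the old SSZ text, and the actual `hash` function and the byte order of `bytes8` are defined elsewhere in the specs, so both are assumptions here:

```python
from hashlib import blake2b

def hash(data: bytes) -> bytes:
    # Stand-in: BLAKE2b-512(x)[0:32]; the canonical definition lives in Phase 0
    return blake2b(data).digest()[:32]

def get_proposer_index(randao_mix: bytes, shard_id: int, slot: int, num_validators: int) -> int:
    # Mix the shard and slot into the randao seed, then reduce modulo the
    # committee size, mirroring the bullet above (byte order assumed)
    seed = hash(randao_mix + shard_id.to_bytes(8, 'little') + slot.to_bytes(8, 'little'))
    return int.from_bytes(seed, 'little') % num_validators
```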

+### Block Merklization helper
+
+```python
+def merkle_root(block_body):
+    assert len(block_body) == SHARD_BLOCK_SIZE
+    chunks = SHARD_BLOCK_SIZE // SHARD_CHUNK_SIZE
+    o = [0] * chunks + [block_body[i * SHARD_CHUNK_SIZE: (i+1) * SHARD_CHUNK_SIZE] for i in range(chunks)]
+    for i in range(chunks-1, 0, -1):
+        o[i] = hash(o[i*2] + o[i*2+1])
+    return o[1]
+```

### Verifying shard block data

-At network layer, we expect a shard block header to be broadcast along with its `block_body`. First, we define a helper function that takes as input beacon chain state and outputs the max block size in bytes:
+At network layer, we expect a shard block header to be broadcast along with its `block_body`.

-```python
-def shard_block_maxbytes(state):
-    max_grains = MAX_SHARD_BLOCK_SIZE // CHUNK_SIZE
-    validators_at_target_committee_size = SHARD_COUNT * TARGET_COMMITTEE_SIZE
-
-    # number of grains per block is proportional to the number of validators
-    # up until `validators_at_target_committee_size`
-    grains = min(
-        len(get_active_validator_indices(state.validators)) * max_grains // validators_at_target_committee_size,
-        max_grains
-    )
-
-    return CHUNK_SIZE * grains
-```

-* Verify that `len(block_body) == shard_block_maxbytes(state)`
-* Define `filler_bytes = next_power_of_2(len(block_body)) - len(block_body)`. Compute a simple binary Merkle tree of `block_body + bytes([0] * filler_bytes)` and verify that the root equals the `data_root` in the header.
+* Verify that `len(block_body) == SHARD_BLOCK_SIZE`
+* Verify that `merkle_root(block_body)` equals the `data_root` in the header.

### Verifying a crosslink

A node should sign a crosslink only if the following conditions hold. **If a node has the capability to perform the required level of verification, it should NOT follow chains on which a crosslink for which these conditions do NOT hold has been included, or a sufficient number of signatures have been included that during the next state recalculation, a crosslink will be registered.**

-First, the conditions must recursively apply to the crosslink referenced in `last_crosslink_hash` for the same shard (unless `last_crosslink_hash` equals zero, in which case we are at the genesis).
+First, the conditions must recursively apply to the crosslink referenced in `last_crosslink_root` for the same shard (unless `last_crosslink_root` equals zero, in which case we are at the genesis).

-Second, we verify the `shard_block_combined_data_root`. Let `h` be the slot _immediately after_ the slot of the shard block included by the last crosslink, and `h+n-1` be the slot number of the block directly referenced by the current `shard_block_hash`. Let `B[i]` be the block at slot `h+i` in the shard chain. Let `bodies[0] .... bodies[n-1]` be the bodies of these blocks and `roots[0] ... roots[n-1]` the data roots. If there is a missing slot in the shard chain at position `h+i`, then `bodies[i] == b'\x00' * shard_block_maxbytes(state[i])` and `roots[i]` be the Merkle root of the empty data. Define `compute_merkle_root` be a simple Merkle root calculating function that takes as input a list of objects, where the list's length must be an exact power of two. Let `state[i]` be the beacon chain state at height `h+i` (if the beacon chain is missing a block at some slot, the state is unchanged), and `depths[i]` be equal to `log2(next_power_of_2(shard_block_maxbytes(state[i]) // CHUNK_SIZE))` (ie. the expected depth of the i'th data tree). We define the function for computing the combined data root as follows:
+Second, we verify the `shard_block_combined_data_root`. Let `h` be the slot _immediately after_ the slot of the shard block included by the last crosslink, and `h+n-1` be the slot number of the block directly referenced by the current `shard_block_root`. Let `B[i]` be the block at slot `h+i` in the shard chain. Let `bodies[0] .... bodies[n-1]` be the bodies of these blocks and `roots[0] ... roots[n-1]` the data roots. If there is a missing slot in the shard chain at position `h+i`, then `bodies[i] == b'\x00' * SHARD_BLOCK_SIZE` and `roots[i]` is the Merkle root of the empty data. Let `compute_merkle_root` be a simple Merkle root calculating function that takes as input a list of objects, where the list's length must be an exact power of two. We define the function for computing the combined data root as follows:

```python
-def get_zeroroot_at_depth(n):
-    o = b'\x00' * CHUNK_SIZE
-    for i in range(n):
-        o = hash(o + o)
-    return o
+ZERO_ROOT = merkle_root(bytes([0] * SHARD_BLOCK_SIZE))

-def mk_combined_data_root(depths, roots):
-    default_value = get_zeroroot_at_depth(max(depths))
-    data = [default_value for _ in range(next_power_of_2(len(roots)))]
-    for i, (depth, root) in enumerate(zip(depths, roots)):
-        value = root
-        for j in range(depth, max(depths)):
-            value = hash(value, get_zeroroot_at_depth(depth + j))
-        data[i] = value
+def mk_combined_data_root(roots):
+    data = roots + [ZERO_ROOT for _ in range(len(roots), next_power_of_2(len(roots)))]
    return compute_merkle_root(data)
```

@@ -117,18 +112,13 @@ This outputs the root of a tree of the data roots, with the data roots all adjus

```python
-def mk_combined_data_root(depths, bodies):
-    default_value = get_zeroroot_at_depth(max(depths))
-    padded_body_length = max([CHUNK_SIZE * 2**d for d in depths])
-    data = b''
-    for body in bodies:
-        padded_body = body + bytes([0] * (padded_body_length - len(body)))
-        data += padded_body
-    return compute_merkle_root([data[pos:pos+CHUNK_SIZE] for pos in range(0, len(data), CHUNK_SIZE)])
+def mk_combined_data_root(bodies):
+    data = b''.join(bodies)
+    data += bytes([0] * (next_power_of_2(len(data)) - len(data)))
+    return compute_merkle_root([data[pos:pos+SHARD_CHUNK_SIZE] for pos in range(0, len(data), SHARD_CHUNK_SIZE)])
```

Verify that the `shard_block_combined_data_root` is the output of these functions.

### Shard block fork choice rule

-The fork choice rule for any shard is LMD GHOST using the validators currently assigned to that shard, but instead of being rooted in the genesis it is rooted in the block referenced in the most recent accepted crosslink (ie. `state.crosslinks[shard].shard_block_hash`). Only blocks whose `beacon_chain_ref` is the block in the main beacon chain at the specified `slot` should be considered (if the beacon chain skips a slot, then the block at that slot is considered to be the block in the beacon chain at the highest slot lower than a slot).
+The fork choice rule for any shard is LMD GHOST using the validators currently assigned to that shard, but instead of being rooted in the genesis it is rooted in the block referenced in the most recent accepted crosslink (ie. `state.crosslinks[shard].shard_block_root`). Only blocks whose `beacon_chain_ref` is the block in the main beacon chain at the specified `slot` should be considered (if the beacon chain skips a slot, then the block at that slot is considered to be the block in the beacon chain at the highest slot lower than that slot).
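
For intuition, a heavily simplified sketch of such a crosslink-rooted LMD GHOST; the data structures `children` (block root to child roots) and `latest_votes` (validator index to most recent attested root) are our own simplification, not spec types:

```python
def shard_fork_choice(crosslink_root, children, latest_votes):
    def in_subtree(root, block):
        # True if `block` is `root` itself or a descendant of it
        if block == root:
            return True
        return any(in_subtree(child, block) for child in children.get(root, []))

    def subtree_weight(root):
        # Number of assigned validators whose latest vote lands in this subtree
        return sum(1 for vote in latest_votes.values() if in_subtree(root, vote))

    # Greedy descent from the last crosslinked block (not genesis),
    # always following the heaviest child, until reaching a leaf
    head = crosslink_root
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head
```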

@@ -1,6 +1,6 @@
# [WIP] SimpleSerialize (SSZ) Spec

-This is the **work in progress** document to describe `simpleserialize`, the
+This is the **work in progress** document to describe `SimpleSerialize`, the
current selected serialization method for Ethereum 2.0 using the Beacon Chain.

This document specifies the general information for serializing and

@@ -13,19 +13,22 @@ deserializing objects and data types.

* [Constants](#constants)
* [Overview](#overview)
   + [Serialize/Encode](#serializeencode)
-      - [uint: 8/16/24/32/64/256](#uint-816243264256)
+      - [uint](#uint)
      - [Bool](#bool)
      - [Address](#address)
      - [Hash](#hash)
      - [Bytes](#bytes)
      - [List/Vectors](#listvectors)
      - [Container](#container)
   + [Deserialize/Decode](#deserializedecode)
-      - [uint: 8/16/24/32/64/256](#uint-816243264256-1)
+      - [uint](#uint-1)
      - [Bool](#bool-1)
      - [Address](#address-1)
      - [Hash](#hash-1)
      - [Bytes](#bytes-1)
      - [List/Vectors](#listvectors-1)
      - [Container](#container-1)
   + [Tree Hash](#tree-hash)
* [Implementations](#implementations)

## About

@@ -40,9 +43,9 @@ overhead.

| Term | Definition |
|:-------------|:-----------------------------------------------------------------------------------------------|
-| `little` | Little Endian |
-| `byte_order` | Specifies [endianness:](https://en.wikipedia.org/wiki/Endianness) Big Endian or Little Endian. |
-| `len` | Length/Number of Bytes. |
+| `little` | Little endian. |
+| `byte_order` | Specifies [endianness](https://en.wikipedia.org/wiki/Endianness): big endian or little endian. |
+| `len` | Length/number of bytes. |
| `to_bytes` | Convert to bytes. Should take parameters ``size`` and ``byte_order``. |
| `from_bytes` | Convert from bytes to object. Should take ``bytes`` and ``byte_order``. |
| `value` | The value to serialize. |

@@ -50,16 +53,22 @@ overhead.

## Constants

-| Constant | Value | Definition |
-|:---------------|:-----:|:--------------------------------------------------------------------------------------|
-| `LENGTH_BYTES` | 4 | Number of bytes used for the length added before a variable-length serialized object. |
+| Constant | Value | Definition |
+|:------------------|:-----:|:--------------------------------------------------------------------------------------|
+| `LENGTH_BYTES` | 4 | Number of bytes used for the length added before a variable-length serialized object. |
+| `SSZ_CHUNK_SIZE` | 128 | Number of bytes for the chunk size of the Merkle tree leaf. |

## Overview

### Serialize/Encode

-#### uint: 8/16/24/32/64/256
+#### uint

+| uint Type | Usage |
+|:---------:|:-----------------------------------------------------------|
+| `uintN` | Type of `N` bits unsigned integer, where ``N % 8 == 0``. |

Convert directly to bytes the size of the int. (e.g. ``uint16 = 2 bytes``)

@@ -75,7 +84,7 @@ buffer_size = int_size / 8

return value.to_bytes(buffer_size, 'little')
```

-#### bool
+#### Bool

Convert directly to a single 0x00 or 0x01 byte.

@@ -91,8 +100,7 @@ return b'\x01' if value is True else b'\x00'

#### Address

-The `address` should already come as a hash/byte format. Ensure that length is
-**20**.
+The `address` should already come as a hash/byte format. Ensure that length is **20**.

| Check to perform | Code |
|:-----------------------|:---------------------|

@@ -126,8 +134,7 @@ return value

For general `bytes` type:
1. Get the length/number of bytes; encode into a `4-byte` integer.
-2. Append the value to the length and return: ``[ length_bytes ] + [
-value_bytes ]``
+2. Append the value to the length and return: ``[ length_bytes ] + [ value_bytes ]``

| Check to perform | Code |
|:-------------------------------------|:-----------------------|
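
A minimal sketch of those two steps; `LENGTH_BYTES` is the constant from the table above, and the function name is ours:

```python
LENGTH_BYTES = 4

def serialize_bytes(value: bytes) -> bytes:
    # Step 1: encode the byte count as a 4-byte little-endian integer
    length_bytes = len(value).to_bytes(LENGTH_BYTES, 'little')
    # Step 2: prepend the encoded length to the value itself
    return length_bytes + value

print(serialize_bytes(b'abc'))  # b'\x03\x00\x00\x00abc'
```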

@@ -174,7 +181,7 @@ return serialized_len + serialized_list_string

A container represents a heterogeneous, associative collection of key-value pairs. Each pair is referred to as a `field`. To get the value for a given field, you supply the key which is a symbol unique to the container referred to as the field's `name`. The container data type is analogous to the `struct` type found in many languages like C or Go.

-To serialize a container, obtain the set of its field's names and sort them lexicographically. For each field name in this sorted list, obtain the corresponding value and serialize it. Tightly pack the complete set of serialized values in the same order as the sorted field names into a buffer. Calculate the size of this buffer of serialized bytes and encode as a `4-byte` **little endian** `uint32`. Prepend the encoded length to the buffer. The result of this concatenation is the final serialized value of the container.
+To serialize a container, obtain the list of its field's names in the specified order. For each field name in this list, obtain the corresponding value and serialize it. Tightly pack the complete set of serialized values in the same order as the field names into a buffer. Calculate the size of this buffer of serialized bytes and encode as a `4-byte` **little endian** `uint32`. Prepend the encoded length to the buffer. The result of this concatenation is the final serialized value of the container.

| Check to perform | Code |
|
@ -183,9 +190,9 @@ To serialize a container, obtain the set of its field's names and sort them lexi
|
|||
|
||||
To serialize:
|
||||
|
||||
1. Get the names of the container's fields and sort them.
|
||||
1. Get the list of the container's fields.
|
||||
|
||||
2. For each name in the sorted list, obtain the corresponding value from the container and serialize it. Place this serialized value into a buffer. The serialized values should be tightly packed.
|
||||
2. For each name in the list, obtain the corresponding value from the container and serialize it. Place this serialized value into a buffer. The serialized values should be tightly packed.
|
||||
|
||||
3. Get the number of raw bytes in the serialized buffer. Encode that number as a `4-byte` **little endian** `uint32`.
|
||||
|
||||
|

@@ -206,7 +213,7 @@ def get_type_for_field_name(typ, field_name):

serialized_buffer = b''

typ = type(value)
-for field_name in sorted(get_field_names(typ)):
+for field_name in get_field_names(typ):
    field_value = get_value_for_field_name(value, field_name)
    field_type = get_type_for_field_name(typ, field_name)
    serialized_buffer += serialize(field_value, field_type)

@@ -233,7 +240,7 @@ At each step, the following checks should be made:

|:-------------------------|:-----------------------------------------------------------|
| Ensure sufficient length | ``length(rawbytes) >= current_index + deserialize_length`` |

-#### uint: 8/16/24/32/64/256
+#### uint

Convert directly from bytes into integer utilising the number of bytes the same
size as the integer length. (e.g. ``uint16 == 2 bytes``)

@@ -241,10 +248,10 @@ size as the integer length. (e.g. ``uint16 == 2 bytes``)

All integers are interpreted as **little endian**.

```python
-assert(len(rawbytes) >= current_index + int_size)
-new_index = current_index + int_size
-return int.from_bytes(rawbytes[current_index:current_index+int_size], 'little'), new_index
+byte_length = int_size / 8
+new_index = current_index + byte_length
+assert(len(rawbytes) >= new_index)
+return int.from_bytes(rawbytes[current_index:current_index+byte_length], 'little'), new_index
```

#### Bool

@@ -258,7 +265,7 @@ return True if rawbytes == b'\x01' else False

#### Address

-Return the 20 bytes.
+Return the 20-byte deserialized address.

```python
assert(len(rawbytes) >= current_index + 20)

@@ -343,10 +350,8 @@ Instantiate a container with the full set of deserialized data, matching each me

To deserialize:

-1. Get the names of the container's fields and sort them.
-2. For each name in the sorted list, attempt to deserialize a value for that type. Collect these values as they will be used to construct an instance of the container.
+1. Get the list of the container's fields.
+2. For each name in the list, attempt to deserialize a value for that type. Collect these values as they will be used to construct an instance of the container.
3. Construct a container instance after successfully consuming the entire subset of the stream for the serialized container.

**Example in Python**

@@ -376,30 +381,30 @@ assert(len(rawbytes) >= new_index)

item_index = current_index + LENGTH_BYTES

values = {}
-for field_name in sorted(get_field_names(typ)):
+for field_name in get_field_names(typ):
    field_name_type = get_type_for_field_name(typ, field_name)
    values[field_name], item_index = deserialize(data, item_index, field_name_type)
-assert item_index == start + LENGTH_BYTES + length
+assert item_index == new_index
return typ(**values), item_index
```

-### Tree_hash
+### Tree Hash

-The below `tree_hash` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For the final output only (ie. not intermediate outputs), if the output is less than 32 bytes, right-zero-pad it to 32 bytes. The goal is collision resistance *within* each type, not between types.
+The below `hash_tree_root` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For the final output only (ie. not intermediate outputs), if the output is less than 32 bytes, right-zero-pad it to 32 bytes. The goal is collision resistance *within* each type, not between types.

-We define `hash(x)` as `BLAKE2b-512(x)[0:32]`.
+Refer to [the helper function `hash`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#hash) of Phase 0 of the [Eth2.0 specs](https://github.com/ethereum/eth2.0-specs) for a definition of the hash function used below, `hash(x)`.

-#### uint: 8/16/24/32/64/256, bool, address, hash32
+#### `uint8`..`uint256`, `bool`, `address`, `hash1`..`hash32`

Return the serialization of the value.

-#### bytes, hash96
+#### `uint264`..`uintN`, `bytes`, `hash33`..`hashN`

Return the hash of the serialization of the value.
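
A sketch combining these two base cases; the explicit type list and the dispatch are our shorthand for the type split above, and `serialize` and `hash` are the functions defined earlier in this spec:

```python
SMALL_TYPES = (['bool', 'address']
               + ['uint%d' % n for n in range(8, 257, 8)]   # uint8..uint256
               + ['hash%d' % n for n in range(1, 33)])      # hash1..hash32

def hash_tree_root_basic(value, typ) -> bytes:
    # Small fixed-size types: the tree hash is the serialization itself
    if typ in SMALL_TYPES:
        return serialize(value, typ)
    # Larger uints/hashes and raw bytes: hash the serialization instead
    return hash(serialize(value, typ))
```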

#### List/Vectors

-First, we define some helpers and then the Merkle tree function. The constant `CHUNK_SIZE` is set to 128.
+First, we define some helpers and then the Merkle tree function.

```python
# Merkle tree hash of a list of homogenous, non-empty items

@@ -409,10 +414,10 @@ def merkle_hash(lst):

    if len(lst) == 0:
        # Handle empty list case
-        chunkz = [b'\x00' * CHUNKSIZE]
-    elif len(lst[0]) < CHUNKSIZE:
+        chunkz = [b'\x00' * SSZ_CHUNK_SIZE]
+    elif len(lst[0]) < SSZ_CHUNK_SIZE:
        # See how many items fit in a chunk
-        items_per_chunk = CHUNKSIZE // len(lst[0])
+        items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])

        # Build a list of chunks based on the number of items in the chunk
        chunkz = [b''.join(lst[i:i+items_per_chunk]) for i in range(0, len(lst), items_per_chunk)]

@@ -423,28 +428,28 @@ def merkle_hash(lst):

    # Tree-hash
    while len(chunkz) > 1:
        if len(chunkz) % 2 == 1:
-            chunkz.append(b'\x00' * CHUNKSIZE)
+            chunkz.append(b'\x00' * SSZ_CHUNK_SIZE)
        chunkz = [hash(chunkz[i] + chunkz[i+1]) for i in range(0, len(chunkz), 2)]

    # Return hash of root and length data
    return hash(chunkz[0] + datalen)
```

-To `tree_hash` a list, we simply do:
+To `hash_tree_root` a list, we simply do:

```python
-return merkle_hash([tree_hash(item) for item in value])
+return merkle_hash([hash_tree_root(item) for item in value])
```

-Where the inner `tree_hash` is a recursive application of the tree-hashing function (returning less than 32 bytes for short single values).
+Where the inner `hash_tree_root` is a recursive application of the tree-hashing function (returning less than 32 bytes for short single values).
#### Container

-Recursively tree hash the values in the container in order sorted by key, and return the hash of the concatenation of the results.
+Recursively tree hash the values in the container in the same order as the fields, and return the hash of the concatenation of the results.

```python
-return hash(b''.join([tree_hash(getattr(x, field)) for field in sorted(value.fields)))
+return hash(b''.join([hash_tree_root(getattr(x, field)) for field in value.fields]))
```

@@ -457,3 +462,9 @@ return hash(b''.join([tree_hash(getattr(x, field)) for field in sorted(value.fie

| Nim | [https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim](https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim) | Nim implementation of SSZ. |
| Rust | [https://github.com/paritytech/shasper/tree/master/util/ssz](https://github.com/paritytech/shasper/tree/master/util/ssz) | Shasper implementation of SSZ maintained by ParityTech. |
+| Javascript | [https://github.com/ChainSafeSystems/ssz-js/blob/master/src/index.js](https://github.com/ChainSafeSystems/ssz-js/blob/master/src/index.js) | Javascript implementation of SSZ. |
+| Java | [https://www.github.com/ConsenSys/cava/tree/master/ssz](https://www.github.com/ConsenSys/cava/tree/master/ssz) | SSZ Java library, part of the Cava suite. |
+| Go | [https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz](https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz) | Go implementation of SSZ maintained by Prysmatic Labs. |
+
+## Copyright
+Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).