rm research/fork_choice_rule/ (#949)

This commit is contained in:
tersec 2020-04-29 20:02:40 +00:00 committed by GitHub
parent 49cc9a9961
commit 7b840440bc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
8 changed files with 0 additions and 872 deletions


@@ -1,66 +0,0 @@
Fork choice rule / Proof-of-stake
Spec implemented: LMD GHOST (Latest Message Driven GHOST)
Python research implementation: https://github.com/ethereum/research/tree/94ac4e2100a808a7097715003d8ad1964df4dbd9/clock_disparity
Mini-spec: https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
(raw Markdown: https://ethresear.ch/raw/2760)
-------------------------
The purpose of this document is to give a "mini-spec" for the beacon chain mechanism for the purpose of security analysis, safety proofs and other academic reasoning, separate from relatively irrelevant implementation details.
### Beacon chain stage 1 (no justification, no dynasty changes)
Suppose there is a validator set $V = {V_1 ... V_n}$ (we assume for simplicity that all validators have an equal amount of "stake"), with subsets $S_1 .... S_{64}$ (no guarantee these subsets are disjoint, but we can guarantee $|S_i| \ge floor(\frac{|V|}{64})$), where $|x|$ refers to set size (ie. the number of validators, or whatever other kind of object, in $x$). Suppose also that the system generates a random permutation of validator indices, ${p_1 ... p_N}$.
> Note: if an attacker controls less than $\frac{1}{3}$ of the stake, then if $|S_i| \ge 892$ there is a less than $2^{-80}$ chance that the attacker controls more than $\frac{1}{2}$ of $S_i$, and there is a less than $2^{-100}$ chance that an attacker controls all 64 indices in a given span $i_k .... i_{k+63}$. We can assume that it is certain that neither of these things will happen (that is, we can assume there exists a substring of validator indices $p_{i_1}, p_{i_2} ...$ with $p_{i_{k+1}} - p_{i_k} < 64$ and that every $S_i$ is majority honest).
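The committee-capture bound in the note can be sanity-checked numerically. A minimal sketch, assuming sampling with replacement (a binomial model; the true committee sampling is hypergeometric, which has a slightly thinner tail, so this overestimates the risk):

```python
from math import comb

def majority_tail(n: int, p: float) -> float:
    """P[Binomial(n, p) > n/2]: the chance that an attacker holding a
    fraction p of the stake controls a strict majority of a randomly
    sampled committee of size n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With |S_i| = 892 and a 1/3 attacker, the tail is astronomically small.
print(majority_tail(892, 1/3))
```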
We divide time into **slots**; if the genesis timestamp of the system is $T_0$, then slot $i$ consists of the time period $[T_0 + 8i, T_0 + 8(i+1))$. When slot $i$ begins, validator $V_{p_{i\ mod\ N}}$ is expected to create ("propose") a block, which contains a pointer to some parent block that they perceive as the "head of the chain", and includes all of the **attestations** that they know about that have not yet been included into that chain. After 4 seconds, validators in $S_{i\ mod\ 64}$ are expected to take the newly published block (if it has actually been published) into account, determine what they think is the new "head of the chain" (if all is well, this will generally be the newly published block), and publish a (signed) attestation, $[current\_slot, h_1, h_2 .... h_{64}]$, where $h_1 ... h_{64}$ are the hashes of the ancestors of the head up to 64 slots (if a chain has missing slots between heights $a$ and $b$, then use the hash of the block at height $a$ for heights $a+1 .... b-1$), and $current\_slot$ is the current slot number.
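The slot arithmetic and the gap-filling rule for ancestor hashes can be sketched as follows (a toy model: the `blocks` mapping and field layout are assumptions for illustration, not the spec's types):

```python
GENESIS_TIME = 0      # T0; arbitrary example value
SLOT_DURATION = 8     # seconds per slot in the mini-spec

def slot_at(timestamp: int) -> int:
    """Slot i covers the interval [T0 + 8i, T0 + 8(i+1))."""
    return (timestamp - GENESIS_TIME) // SLOT_DURATION

def ancestor_hashes(blocks, head, n=64):
    """h_1 ... h_n for an attestation. `blocks` maps hash -> (slot, parent_hash).
    When slots between heights a and b are skipped, the block at height a
    also stands in for heights a+1 ... b-1."""
    out, h = [], head
    top = blocks[head][0]
    for s in range(top, max(top - n, -1), -1):
        while blocks[h][0] > s:   # walk down to the ancestor at or below slot s
            h = blocks[h][1]
        out.append(h)             # gap slots reuse the older block's hash
    return out
```

For a toy chain g(slot 0) <- a(slot 1) <- b(slot 3) with slot 2 skipped, the attestation from slot 3 lists b, a, a, g: the block at slot 1 stands in for the missing slot 2.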
The fork choice used is "latest message driven GHOST". The mechanism is as follows:
1. Set $H$ to equal the genesis block.
2. Let $M = [M_1 ... M_n]$ be the most-recent messages (ie. highest slot number messages) of each validator.
3. Choose the child of $H$ such that the subset of $M$ that attests to either that child or one of its descendants is largest; set $H$ to this child.
4. Repeat (3) until $H$ is a block with no descendants.
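The steps above can be sketched directly (toy dictionaries for the block tree and the latest messages; tie-breaking is left unspecified here, as in the mini-spec):

```python
def lmd_ghost_head(children, latest_votes, genesis):
    """children: block -> list of child blocks; latest_votes: validator -> the
    block named in that validator's most recent message."""
    def subtree(b):                      # all blocks at or below b
        out = {b}
        for c in children.get(b, []):
            out |= subtree(c)
        return out

    H = genesis
    while children.get(H):
        # pick the child whose subtree gathers the most latest messages
        H = max(children[H],
                key=lambda c: sum(1 for v in latest_votes.values()
                                  if v in subtree(c)))
    return H
```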
Claims:
* **Safety**: assuming the attacker controls less than $\frac{1}{3}$ of $V$, and selected the portion of $V$ to control before the validators were randomly sorted, the chain will never revert (ie. once a block is part of the canonical chain, it will be part of the canonical chain forever).
* **Incentive-compatibility**: assume that there is a reward for including attestations, and for one's attestation being included in the chain (and this reward is higher if the attestation is included earlier). Proposing blocks and attesting to blocks correctly is incentive-compatible.
* **Randomness fairness**: in the long run, the attacker cannot gain by manipulating the randomness.
### Beacon chain stage 2 (add justification and finalization)
As the chain receives attestations, it keeps track of the total set of validators that attest to each block. The chain also maintains a variable, $last\_justified\_slot$, which starts at 0. If, for some block $B$ in the chain, a set of validators $V_B$ attest to it, with $|V_B| \ge |V| * \frac{2}{3}$, then $last\_justified\_slot$ is increased to the maximum of its previous value and that block's slot number. Attestations are required to state the $last\_justified\_slot$ of the chain they are attesting to in order to be included in the chain.
If a span of blocks (in the same chain) with slots $s$, $s+1$ ... $s+64$ (65 slots altogether) all get justified, then the block at slot $s$ is finalized.
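A minimal sketch of these two bookkeeping rules (the function names and the flat `justified_slots` set are assumptions for illustration):

```python
def update_last_justified(last_justified_slot, block_slot, attesters, total):
    """A block with >= 2/3 of all validators attesting raises last_justified_slot."""
    if 3 * attesters >= 2 * total:
        return max(last_justified_slot, block_slot)
    return last_justified_slot

def finalized_at(justified_slots, s):
    """Slot s is finalized once slots s, s+1, ..., s+64 (65 in a row) are justified."""
    return all(t in justified_slots for t in range(s, s + 65))
```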
We change the fork choice rule above so that instead of starting $H$ from the genesis block, it starts from the justified block with the highest slot number.
We then add two slashing conditions:
* A validator cannot make two distinct attestations in the same slot
* A validator cannot make two attestations with slot numbers $t1$, $t2$ and last justified slots $s1$, $s2$ such that $s1 < s2 < t2 < t1$
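The two conditions can be checked pairwise; the second is the familiar "surround vote" rule. A sketch, with attestations reduced to their (slot, last justified slot) pair:

```python
def slashable(att1, att2):
    """att = (slot, last_justified_slot). True if publishing both attestations
    violates one of the two slashing conditions."""
    (t1, s1), (t2, s2) = att1, att2
    # condition 1: two distinct attestations in the same slot
    if t1 == t2 and att1 != att2:
        return True
    # condition 2: one attestation surrounds the other (s1 < s2 < t2 < t1)
    return s1 < s2 < t2 < t1 or s2 < s1 < t1 < t2
```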
Claims:
* **Safety**: once a block becomes finalized, it will always be part of the canonical chain as seen by any node that has downloaded the chain up to the block and the evidence finalizing the block, unless at least a set of validators $V_A$ with $|V_A| \ge |V| * \frac{1}{3}$ violated one of the two slashing conditions (possibly a combination of the two).
* **Plausible liveness**: given an "honest" validator set $V_H$ with $|V_H| \ge |V| * \frac{2}{3}$, $V_H$ by itself can always finalize a new block without violating slashing conditions.
### Beacon chain stage 3: adding dynamic validator sets
Every block $B$ comes with a subset of validators $S_B$, with the following restrictions:
* Define the _dynasty_ of a block recursively: $dynasty(genesis) = 0$, generally $dynasty(B) = dynasty(parent(B))$ _except_ when (i) $B$'s 128th ancestor was finalized (and this fact is known based on what is included in the chain before B) and (ii) a dynasty transition has not taken place within the last 256 ancestors of $B$, in which case $dynasty(B) = dynasty(parent(B)) + 1$.
* Define the **local validator set** of $B$ as $LVS(B) = S_B \cup S_{parent(B)}\ \cup\ ... \ \cup\ S_{parent^{63}(B)}$
* Suppose for two blocks in the chain, $B_1$ and $B_2$, $dynasty(B_2) - dynasty(B_1) = k$. Then, $|LVS(B_1)\ \cap\ LVS(B_2)| \ge |LVS(B_1)| * (1 - \frac{k}{60})$ (and likewise wrt $LVS(B_2)$). That is, at most $\frac{1}{60}$ of the local validator set changes with each dynasty.
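The overlap restriction in the last bullet can be checked directly (validator sets as Python sets; a sketch of the invariant, not a spec procedure — the inequality is rearranged to integer form to avoid float rounding):

```python
def lvs_overlap_ok(lvs1, lvs2, k):
    """At most 1/60 of the local validator set changes per dynasty:
    |LVS(B1) ∩ LVS(B2)| >= |LVS(Bi)| * (1 - k/60) for both i."""
    inter = len(lvs1 & lvs2)
    # 60 * inter >= (60 - k) * |LVS(Bi)|  is the same bound in integers
    return (60 * inter >= (60 - k) * len(lvs1) and
            60 * inter >= (60 - k) * len(lvs2))
```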
We modify the fork choice rule so that when the seeking process reaches a block that includes a higher dynasty, it switches to using the latest messages from that dynasty.
Claims:
* All of the above claims hold, with appropriate replacements of $V$ with $LVS(...)$, except with fault tolerance possibly reduced from $\frac{1}{3}$ to 30%.


@@ -1,59 +0,0 @@
# Translation of the spec in README.md
## Stage 1
### Part 1
> --------------------------------------
>
> Suppose there is a validator set $V = {V_1 ... V_n}$
> (we assume for simplicity that all validators have an equal amount of "stake"),
> with subsets $S_1 .... S_{64}$
> (no guarantee these subsets are disjoint, but we can guarantee
> $|S_i| \ge floor(\frac{|V|}{64})$), where $|x|$ refers to set size
> (ie. the number of validators, or whatever other kind of object, in $x$).
> Suppose also that the system generates a random permutation of validator indices,
> ${p_1 ... p_N}$.
>
> --------------------------------------
In v2.1 specs terms
```
Validator Vi => ValidatorRecord
Validators V => BeaconState.validators
Validator subset Si => ShardAndCommittee
```
### Part 2
> We divide time into **slots**; if the genesis timestamp of the system is $T_0$, then slot $i$ consists of the time period $[T_0 + 8i, T_0 + 8(i+1))$. When slot $i$ begins, validator $V_{p_{i\ mod\ N}}$ is expected to create ("propose") a block, which contains a pointer to some parent block that they perceive as the "head of the chain", and includes all of the **attestations** that they know about that have not yet been included into that chain. After 4 seconds, validators in $S_{i\ mod\ 64}$ are expected to take the newly published block (if it has actually been published) into account, determine what they think is the new "head of the chain" (if all is well, this will generally be the newly published block), and publish a (signed) attestation, $[current\_slot, h_1, h_2 .... h_{64}]$, where $h_1 ... h_{64}$ are the hashes of the ancestors of the head up to 64 slots (if a chain has missing slots between heights $a$ and $b$, then use the hash of the block at height $a$ for heights $a+1 .... b-1$), and $current\_slot$ is the current slot number.
In v2.1 specs terms
```
slot => BeaconBlock.slot
Validator Vpi mod N => get_beacon_proposer(BeaconState, slot) -> ValidatorRecord
Proposed block => ProposalSignedData (?)
Attestations => AttestationRecord
Attestations not included => BeaconState.pending_attestations
Signed attestation => AttestationSignedData
height => slot
```
### Part 3
> The fork choice used is "latest message driven GHOST". The mechanism is as follows:
>
> 1. Set $H$ to equal the genesis block.
> 2. Let $M = [M_1 ... M_n]$ be the most-recent messages (ie. highest slot number messages) of each validator.
> 3. Choose the child of $H$ such that the subset of $M$ that attests to either that child or one of its descendants is largest; set $H$ to this child.
> 4. Repeat (3) until $H$ is a block with no descendants.
In v2.1 specs terms
```
Current block H => BeaconBlock / BeaconBlock.state_root / BeaconState.recent_block_hashes[^1] (?)
Messages => AttestationSignedData (?)
Child of H => proc needed
```


@@ -1,76 +0,0 @@
# beacon_chain
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# A port of https://github.com/ethereum/research/blob/master/clock_disparity/ghost_node.py
# Specs: https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
# Part of Casper+Sharding chain v2.1: https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ#
# Note that the implementation is not yet updated to the latest v2.1
import math, random
proc normal_distribution*(mean = 0, std = 1): int =
## Return an integer sampled from a normal distribution (gaussian)
## ⚠ This is not thread-safe
# Implementation via the Box-Muller method
# See https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform
let
mean = mean.float
std = std.float
var
z1 {.global.}: float
generate {.global.}: bool
generate = not generate
if not generate:
return int(z1 * std + mean)
let
u1 = rand(1.0)
u2 = rand(1.0)
R = sqrt(-2.0 * ln(u1))
z0 = R * cos(2 * PI * u2)
z1 = R * sin(2 * PI * u2)
return int(z0 * std + mean)
when isMainModule:
import sequtils, stats, strformat
func absolute_error(y_true, y: float): float =
## Absolute error: |y_true - y|
abs(y_true - y)
func relative_error(y_true, y: float): float =
## Relative error: |y_true - y|/|y_true|
abs(y_true - y)/abs(y_true)
let
mu = 1000
sigma = 12
a = newSeqWith(10000000, normal_distribution(mean = mu, std = sigma))
var statistics: RunningStat
for val in a:
statistics.push val
# Note: we use the sample standard deviation, not population
# See Bessel's correction and standard deviation estimation.
proc report(stat: string, value, expected: float) =
echo &"{stat:<20} {value:>9.4f} | Expected: {expected:>9.4f}"
echo &"Statistics on {a.len} samples"
report "Mean: ", statistics.mean, mu.float
report "Standard deviation: ", statistics.standardDeviationS, sigma.float
# Absolute error
doAssert absolute_error(mu.float, statistics.mean) < 0.6
doAssert absolute_error(sigma.float, statistics.standardDeviationS) < 0.01
# Relative error
doAssert relative_error(mu.float, statistics.mean) < 0.01
doAssert relative_error(sigma.float, statistics.standardDeviationS) < 0.01


@@ -1,258 +0,0 @@
# beacon_chain
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# A port of https://github.com/ethereum/research/blob/master/clock_disparity/ghost_node.py
# Specs: https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
# Part of Casper+Sharding chain v2.1: https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ#
# Note that the implementation is not yet updated to the latest v2.1
import
# Stdlib
tables, deques, strutils, endians, strformat,
times, sequtils,
# Nimble packages
nimcrypto,
# Local imports
./fork_choice_types
proc broadcast(self: Node, x: BlockOrSig) =
if self.sleepy and self.timestamp != DurationZero:
return
self.network.broadcast(self, x)
self.on_receive(x)
proc log(self: Node, words: string, lvl = 3, all = false) =
if (self.id == 0 or all) and lvl >= 2:
echo self.id, " - ", words
func add_to_timequeue(self: Node, obj: Block) =
var i = 0
while i < self.timequeue.len and self.timequeue[i].min_timestamp < obj.min_timestamp:
inc i
self.timequeue.insert(obj, i)
func add_to_multiset[K, V](
self: Node,
multiset: TableRef[K, seq[V]],
k: K,
v: V or seq[V]) =
multiset.mgetOrPut(k, @[]).add v
func change_head(self: Node, chain: var seq[Eth2Digest], new_head: Block) =
chain.add newSeq[Eth2Digest](new_head.height + 1 - chain.len)
var (i, c) = (new_head.height, new_head.hash)
while c != chain[i]:
chain[i] = c
c = self.blocks[c].parent_hash
dec i
for idx, val in chain:
doAssert self.blocks[val].height == idx
func recalculate_head(self: Node) =
while true:
var
descendant_queue = initDeque[Eth2Digest]()
new_head: Eth2Digest
max_count = 0
descendant_queue.addFirst self.main_chain[^1]
while descendant_queue.len != 0:
let first = descendant_queue.popFirst()
if first in self.children:
for c in self.children[first]:
descendant_queue.addLast c
if self.scores.getOrDefault(first, 0) > max_count and first != self.main_chain[^1]:
new_head = first
max_count = self.scores.getOrDefault(first, 0)
if new_head != Eth2Digest(): # != default init, a 32-byte array of 0
self.change_head(self.main_chain, self.blocks[new_head])
else:
return
proc process_children(self: Node, h: Eth2Digest) =
if h in self.parentqueue:
for b in self.parentqueue[h]:
self.on_receive(b, reprocess = true)
self.parentqueue.del h
func get_common_ancestor(self: Node, hash_a, hash_b: Eth2Digest): Block =
var (a, b) = (self.blocks[hash_a], self.blocks[hash_b])
while b.height > a.height:
b = self.blocks[b.parent_hash]
while a.height > b.height:
a = self.blocks[a.parent_hash]
while a.hash != b.hash:
a = self.blocks[a.parent_hash]
b = self.blocks[b.parent_hash]
return a
func is_descendant(self: Node, hash_a, hash_b: Eth2Digest): bool =
let a = self.blocks[hash_a]
var b = self.blocks[hash_b]
while b.height > a.height:
b = self.blocks[b.parent_hash]
return a.hash == b.hash
proc have_ancestry(self: Node, h: Eth2Digest): bool =
let h = BlockHash(raw: h)
while h.raw != Genesis.hash:
if h notin self.processed:
return false
let wip = self.processed[h]
if wip is Block:
h.raw = Block(wip).parent_hash
return true
method on_receive(self: Node, blck: Block, reprocess = false) =
block: # Common part of on_receive
let hash = BlockHash(raw: blck.hash)
if hash in self.processed and not reprocess:
return
self.processed[hash] = blck
# parent not yet received
if blck.parent_hash notin self.blocks:
self.add_to_multiset(self.parentqueue, blck.parent_hash, blck)
return
# Too early
if blck.min_timestamp > self.timestamp:
self.add_to_timequeue(blck)
return
# Add the block
self.log "Processing beacon block " & blck.hash.data[0 .. ^4].toHex(false)
self.blocks[blck.hash] = blck
# Is the block building on the head? Then add it to the head!
if blck.parent_hash == self.main_chain[^1] or self.careless:
self.main_chain.add(blck.hash)
# Add child record
self.add_to_multiset(self.children, blck.parent_hash, blck.hash)
# Final steps
self.process_children(blck.hash)
self.network.broadcast(self, blck)
method on_receive(self: Node, sig: Sig, reprocess = false) =
block: # Common part of on_receive
let hash = SigHash(raw: sig.hash)
if hash in self.processed and not reprocess:
return
self.processed[hash] = sig
if sig.targets[0] notin self.blocks:
self.add_to_multiset(self.parentqueue, sig.targets[0], sig)
return
# Get common ancestor
let anc = self.get_common_ancestor(self.main_chain[^1], sig.targets[0])
let max_score = block:
var max = 0
for i in anc.height + 1 ..< self.main_chain.len:
max = max(max, self.scores.getOrDefault(self.main_chain[i], 0))
max
# Process scoring
var max_newchain_score = 0
for i in countdown(sig.targets.len - 1, 0):
let c = sig.targets[i]
let slot = sig.slot - 1 - i
var slot_key: array[4, byte]
bigEndian32(slot_key.addr, slot.unsafeAddr)
doAssert self.blocks[c].slot <= slot
# If a parent and child block have non-consecutive slots, then the parent
# block is also considered to be the canonical block at all of the intermediate
# slot numbers. We store the scores for the block at each height separately
var key: array[36, byte]
key[0 ..< 4] = slot_key
key[4 ..< 36] = c.data
self.scores_at_height[key] = self.scores_at_height.getOrDefault(key, 0) + 1
# For fork choice rule purposes, the score of a block is the highest score
# that it has at any height
self.scores[c] = max(self.scores.getOrDefault(c, 0), self.scores_at_height[key])
# If 2/3 of notaries vote for a block, it is justified
if self.scores_at_height[key] == NOTARIES * 2 div 3: # Shouldn't that be >= ?
self.justified[c] = true
var c2 = c
self.log &"Justified: {slot} {($c)[0 ..< 8]}"
# If SLOTS_PER_EPOCH+1 blocks are justified in a row, the oldest is
# considered finalized
var finalize = true
for slot2 in countdown(slot-1, max(slot - SLOTS_PER_EPOCH * 1, 0)):
# Note the max(...)-1 in spec is unneeded, Nim ranges are inclusive
if slot2 < self.blocks[c2].slot:
c2 = self.blocks[c2].parent_hash
var slot_key2: array[4, byte]
bigEndian32(slot_key2.addr, slot2.unsafeAddr)
var key2: array[36, byte]
key2[0 ..< 4] = slot_key2
key2[4 ..< 36] = c2.data
if self.scores_at_height.getOrDefault(key2, 0) < NOTARIES * 2 div 3:
finalize = false
self.log &"Not quite finalized: stopped at {slot2} needed {max(slot - SLOTS_PER_EPOCH, 0)}"
break
if slot2 < slot - SLOTS_PER_EPOCH - 1 and finalize and c2 notin self.finalized:
self.log &"Finalized: {self.blocks[c2].slot} {($c)[0 ..< 8]}"
self.finalized[c2] = true
# Find the maximum score of a block on the chain that this sig is weighing on
if self.blocks[c].slot > anc.slot:
max_newchain_score = max(max_newchain_score, self.scores[c])
# If it's higher, switch over the canonical chain
if max_newchain_score > max_score:
self.main_chain = self.main_chain[0 ..< anc.height + 1]
self.recalculate_head()
self.sigs[sig.hash] = sig
# Rebroadcast
self.network.broadcast(self, sig)
func get_sig_targets(self: Node, start_slot: int32): seq[Eth2Digest] =
# Get the portion of the main chain that is within the last SLOTS_PER_EPOCH
# slots, once again duplicating the parent in cases where the parent and
# child's slots are not consecutive
result = @[]
var i = self.main_chain.high
for slot in countdown(start_slot-1, max(start_slot - SLOTS_PER_EPOCH, 0)):
# Note the max(...)-1 in spec is unneeded, Nim ranges are inclusive
if slot < self.blocks[self.main_chain[i]].slot:
dec i
result.add self.main_chain[i]
for i, x in result:
doAssert self.blocks[x].slot <= start_slot - 1 - i
doAssert result.len == min(SLOTS_PER_EPOCH, start_slot)
proc tick*(self: Node) =
self.timestamp += initDuration(milliseconds = 100)
self.log &"Tick: {self.timestamp}", lvl=1
# Make a block?
let slot = int32 seconds(self.timestamp div SLOT_SIZE)
if slot > self.last_made_block and (slot mod NOTARIES) == self.id:
self.broadcast(
initBlock(self.blocks[
self.main_chain[^1]
], slot, self.id)
)
self.last_made_block = slot
# Make a sig?
if slot > self.last_made_sig and (slot mod SLOTS_PER_EPOCH) == self.id mod SLOTS_PER_EPOCH:
var sig_from = self.main_chain.high
while sig_from > 0 and self.blocks[self.main_chain[sig_from]].slot >= slot - SLOTS_PER_EPOCH:
dec sig_from
let sig = newSig(self.id, self.get_sig_targets(slot), slot, self.timestamp)
self.log &"Sig: {self.id} {sig.slot} {sig.targets.mapIt(($it)[0 ..< 4])}"
self.broadcast sig
self.last_made_sig = slot
# process time queue
while self.timequeue.len > 0 and self.timequeue[0].min_timestamp <= self.timestamp:
self.on_receive(self.timequeue[0], reprocess = true)
self.timequeue.delete(0) # This is expensive, but we can't use a queue due to random insertions in add_to_timequeue


@@ -1,35 +0,0 @@
# beacon_chain
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# A port of https://github.com/ethereum/research/blob/master/clock_disparity/ghost_node.py
# Specs: https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
# Part of Casper+Sharding chain v2.1: https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ#
# Note that the implementation is not yet updated to the latest v2.1
import
./fork_choice_types, ./networksim, ./fork_choice_rule, ./distributions,
sequtils, times, strformat, tables
let net = newNetworkSimulator(latency = 22)
for i in 0'i32 ..< NOTARIES:
net.agents.add newNode(
id = i,
network = net,
timestamp = initDuration(seconds = max(normal_distribution(300, 300), 0)) div 10,
sleepy = i mod 4 == 0
)
net.generate_peers()
net.run(steps = 100000)
for n in net.agents:
echo &"Local timestamp: {n.timestamp:>.1}, timequeue len {n.timequeue.len}"
echo "Main chain head: ", n.blocks[n.main_chain[^1]].height
echo "Total main chain blocks received: ", toSeq(values(n.blocks)).filterIt(it is Block).len
# echo "Notarized main chain blocks received: ", toSeq(values(n.blocks)).filterIt((it is Block) and n.is_notarized(it)).len - 1


@@ -1,200 +0,0 @@
# beacon_chain
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# A port of https://github.com/ethereum/research/blob/master/clock_disparity/ghost_node.py
# Specs: https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
# Part of Casper+Sharding chain v2.1: https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ#
# Note that the implementation is not yet updated to the latest v2.1
import
# Stdlib
tables, deques, strutils, hashes, times,
random,
# Nimble packages
nimcrypto,
../../beacon_chain/spec/digest
const
NOTARIES* = 100 # Committee size in Casper v2.1
SLOT_SIZE* = 6 # Slot duration in Casper v2.1
SLOTS_PER_EPOCH* = 25 # Cycle length in Casper v2.1
# TODO, clear up if reference semantics are needed
# for the tables. I.e. what's their maximum size.
type
BlockOrSig* = ref object of RootObj
# For some reason, Block and Sig have to be stored
# in a heterogeneous container.
# So we use inheritance to erase types
BlockOrSigHash* = ref object of RootObj
BlockHash* = ref object of BlockOrSigHash
raw*: Eth2Digest
SigHash* = ref object of BlockOrSigHash
raw*: MDigest[384]
Block* = ref object of BlockOrSig
contents*: array[32, byte]
parent_hash*: Eth2Digest
hash*: Eth2Digest
height*: int # slot in Casper v2.1 spec
proposer*: int32
slot*: int32
##########################################
func min_timestamp*(self: Block): Duration =
const slot_size = initDuration(seconds = SLOT_SIZE)
result = slot_size * self.slot
let Genesis* = Block()
proc initBlock*(parent: Block, slot, proposer: int32): Block =
new result
for val in result.contents.mitems:
val = rand(0.byte .. 7.byte)
if not parent.isNil:
result.parent_hash = parent.hash
result.height = parent.height + 1
var ctx: keccak256
ctx.init()
ctx.update(result.parent_hash.data)
ctx.update(result.contents)
ctx.finish(result.hash.data)
ctx.clear()
doAssert slot mod NOTARIES == proposer
result.proposer = proposer
result.slot = slot
##########################################
func hash*(x: MDigest): Hash =
## Allow usage of MDigest in hashtables
# We just keep the first 64 bits of the digest
const bytes = x.type.bits div 8
const nb_ints = bytes div sizeof(int) # Hash is a distinct int
result = cast[array[nb_ints, Hash]](x)[0]
# Alternatively hash for real
# result = x.unsafeAddr.hashData(bytes)
method hash*(x: BlockOrSigHash): Hash {.base.}=
raise newException(ValueError, "Not implemented error. Please implement in child types")
method hash*(x: BlockHash): Hash =
## Allow usage of Blockhash in tables
x.raw.hash
method hash*(x: SigHash): Hash =
## Allow usage of Sighash in tables
x.raw.hash
func hash*(x: Duration): Hash =
## Allow usage of Duration in tables
# Due to private fields, we use pointer + length as a hack:
# https://github.com/nim-lang/Nim/issues/8857
result = hashData(x.unsafeAddr, x.sizeof)
#########################################
type
NetworkSimulator* = ref object
agents*: seq[Node]
latency_distribution_sample*: proc (): Duration
time*: Duration
objqueue*: TableRef[Duration, seq[tuple[recipient: Node, obj: BlockOrSig]]]
peers*: TableRef[int, seq[Node]]
reliability*: float
Sig* = ref object of BlockOrSig
# TODO: unsure if this is still relevant in Casper v2.1
proposer*: int64 # the validator that creates a block
targets*: seq[Eth2Digest] # the hash of blocks proposed
slot*: int32 # slot number
timestamp*: Duration # ts in the ref implementation
hash*: MDigest[384] # The signature (384-bit BLS)
Node* = ref object
blocks*: TableRef[Eth2Digest, Block]
sigs*: TableRef[MDigest[384], Sig]
main_chain*: seq[Eth2Digest]
timequeue*: seq[Block]
parentqueue*: TableRef[Eth2Digest, seq[BlockOrSig]]
children*: TableRef[Eth2Digest, seq[Eth2Digest]]
scores*: TableRef[Eth2Digest, int]
scores_at_height*: TableRef[array[36, byte], int] # Should be slot not height in v2.1
justified*: TableRef[Eth2Digest, bool]
finalized*: TableRef[Eth2Digest, bool]
timestamp*: Duration
id*: int32
network*: NetworkSimulator
used_parents*: TableRef[Eth2Digest, Node]
processed*: TableRef[BlockOrSigHash, BlockOrSig]
sleepy*: bool
careless*: bool
first_round*: bool
last_made_block*: int32
last_made_sig*: int32
proc newSig*(
proposer: int32,
targets: seq[Eth2Digest],
slot: int32,
ts: Duration): Sig =
new result
result.proposer = proposer
result.targets = targets
result.slot = slot
result.timestamp = ts
for val in result.hash.data.mitems:
val = rand(0.byte .. 7.byte)
proc newNode*(
id: int32,
network: NetworkSimulator,
sleepy, careless = false,
timestamp = DurationZero
): Node =
new result
result.id = id
result.network = network
result.timestamp = timestamp
result.sleepy = sleepy
result.careless = careless
result.main_chain = @[Genesis.hash]
result.blocks = {Genesis.hash: Genesis}.newTable
# Boilerplate empty initialization
result.processed = newTable[BlockOrSigHash, BlockOrSig]()
result.children = newTable[Eth2Digest, seq[Eth2Digest]]()
result.parentqueue = newTable[Eth2Digest, seq[BlockOrSig]]()
result.scores = newTable[Eth2Digest, int]()
result.scores_at_height = newTable[array[36, byte], int]()
result.sigs = newTable[MDigest[384], Sig]()
result.justified = newTable[Eth2Digest, bool]()
result.finalized = newTable[Eth2Digest, bool]()
###########################################################
# Forward declarations
method on_receive*(self: Node, obj: BlockOrSig, reprocess = false) {.base.} =
raise newException(ValueError, "Not implemented error. Please implement in child types")
###########################################################
###########################################################
# Common
func broadcast*(self: NetworkSimulator, sender: Node, obj: BlockOrSig) =
for p in self.peers[sender.id]:
let recv_time = self.time + self.latency_distribution_sample()
if recv_time notin self.objqueue:
self.objqueue[recv_time] = @[]
self.objqueue[recv_time].add (p, obj)
###########################################################


@@ -1,93 +0,0 @@
# beacon_chain
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# A port of https://github.com/ethereum/research/blob/master/clock_disparity/ghost_node.py
# Specs: https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
# Part of Casper+Sharding chain v2.1: https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ#
import
tables, times, sugar, random,
./fork_choice_types, ./fork_choice_rule, ./distributions
proc newNetworkSimulator*(latency: int): NetworkSimulator =
new result
result.latency_distribution_sample = () => initDuration(
seconds = max(
0,
normal_distribution(latency, latency * 2 div 5)
)
)
result.reliability = 0.9
result.objqueue = newTable[Duration, seq[(Node, BlockOrSig)]]()
result.peers = newTable[int, seq[Node]]()
proc generate_peers*(self: NetworkSimulator, num_peers = 5) =
self.peers.clear()
var p: seq[Node]
for a in self.agents:
p.setLen(0) # reset without involving GC/realloc
while p.len <= num_peers div 2:
p.add self.agents.rand()
if p[^1] == a:
discard p.pop()
self.peers.mgetOrPut(a.id, @[]).add p
for peer in p:
self.peers.mgetOrPut(peer.id, @[]).add a
proc tick*(self: NetworkSimulator) =
if self.time in self.objqueue:
# on_receive, calls broadcast which will enqueue new BlockOrSig in objqueue
# so we can't for loop like in EF research repo (modifying length is not allowed)
var ros: seq[tuple[recipient: Node, obj: BlockOrSig]]
shallowCopy(ros, self.objqueue[self.time])
var i = 0
while i < ros.len:
let (recipient, obj) = ros[i]
if rand(1.0) < self.reliability:
recipient.on_receive(obj)
inc i
self.objqueue.del self.time
for a in self.agents:
a.tick()
self.time += initDuration(seconds = 1)
proc run*(self: NetworkSimulator, steps: int) =
for i in 0 ..< steps:
self.tick()
# func broadcast*(self: NetworkSimulator, sender: Node, obj: BlockOrSig)
# ## defined in fork_choice_types.nim
proc direct_send(self: NetworkSimulator, to_id: int32, obj: BlockOrSig) =
for a in self.agents:
if a.id == to_id:
let recv_time = self.time + self.latency_distribution_sample()
# if recv_time notin self.objqueue: # Unneeded with seq "not nil" changes
# self.objqueue[recv_time] = @[]
self.objqueue[recv_time].add (a, obj)
proc knock_offline_random(self: NetworkSimulator, n: int) =
var ko = initTable[int32, Node]()
while ko.len < n:
let c = rand(self.agents)
ko[c.id] = c
# for c in ko.values: # Unneeded with seq "not nil" changes
# self.peers[c.id] = @[]
for a in self.agents:
self.peers[a.id] = lc[x | (x <- self.peers[a.id], x.id notin ko), Node] # List comprehension
proc partition(self: NetworkSimulator) =
var a = initTable[int32, Node]()
while a.len < self.agents.len div 2:
let c = rand(self.agents)
a[c.id] = c
for c in self.agents:
if c.id in a:
self.peers[c.id] = lc[x | (x <- self.peers[c.id], x.id in a), Node]
else:
self.peers[c.id] = lc[x | (x <- self.peers[c.id], x.id notin a), Node]


@@ -1,85 +0,0 @@
# beacon_chain
# Copyright (c) 2018 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# This file contains the implementation of the beacon chain fork choice rule.
# The chosen rule is a hybrid that combines justification and finality
# with Latest Message Driven (LMD) Greedy Heaviest Observed SubTree (GHOST)
#
# The latest version can be seen here:
# https://github.com/ethereum/eth2.0-specs/blob/master/specs/beacon-chain.md
#
# How wrong the code is:
# https://github.com/ethereum/eth2.0-specs/compare/126a7abfa86448091a0e037f52966b6a9531a857...master
#
# A standalone research implementation can be found here:
# - https://github.com/ethereum/research/blob/94ac4e2100a808a7097715003d8ad1964df4dbd9/clock_disparity/lmd_node.py
# A minispec in mathematical notation including proofs can be found here:
# - https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760
# Note that it might be out-of-sync with the official spec.
import
tables, hashes,
../../beacon_chain/spec/[datatypes, helpers, digest, crypto],
../../beacon_chain/ssz
type
AttesterIdx = int
BlockHash = Eth2Digest
Store = object
# This is private to each validator.
# It holds the set of attestations and blocks that a validator
# has observed and verified.
#
# We uniquely identify each block via its block hash
# and each attester via its attester index (from the AttestationRecord object)
# TODO/Question: Should we use the public key? That would defeat the pubkey aggregation purpose
verified_attestations: Table[AttesterIdx, ref seq[AttestationData]]
# TODO: We assume that ref seq[AttestationData] is queue, ordered by
# a pair (slot, observation time).
verified_blocks: Table[BlockHash, BeaconBlock]
finalized_head: BeaconBlock
justified_head: BeaconBlock
func hash(x: BlockHash): Hash =
## Hash for Keccak digests for Nim hash tables
# We just slice the first 4 or 8 bytes of the block hash
# depending on whether we are on a 32 or 64-bit platform
const size = x.sizeof
const num_hashes = size div sizeof(int)
result = cast[array[num_hashes, Hash]](x)[0]
func get_parent(store: Store, blck: BeaconBlock): BeaconBlock =
store.verified_blocks[blck.previous_block_root]
func get_ancestor(store: Store, blck: BeaconBlock, slot: uint64): BeaconBlock =
## Find the ancestor with a specific slot number
if blck.slot == slot:
return blck
else:
return store.get_ancestor(store.get_parent(blck), slot)
# TODO: what if the slot was never observed/verified?
func get_latest_attestation(store: Store, validatorIdx: AttesterIdx): AttestationData =
## Search for the attestation with the highest slot number
## If multiple attestations have the same slot number, keep the first one.
# We assume that in `verified_attestations: Table[AttesterIdx, seq[AttestationData]]`
# `seq[AttestationSignedData]` is a queue ordered by (slot, observation time)
let attestations = store.verified_attestations[validatorIdx]
result = attestations[^1] # Pick the last attestation
for idx in countdown(attestations[].len - 2, 0): # From the second to last attestation to 0, check if they have the same slot.
if attestations[idx].slot == result.slot: # if yes it was observed earlier
result = attestations[idx]
else: # otherwise we are at the first observed attestation with the highest slot
return
func get_latest_attestation_target(store: Store, validatorIdx: AttesterIdx): BlockHash =
store.get_latest_attestation(validatorIdx).beacon_block_root