Adds `create_light_client_bootstrap` and `create_light_client_update`
functions as a reference implementation for serving light client data.
This also enables a new test harness to verify that light client data
gets applied to a `LightClientStore` as expected.
Introduces a new `LightClientBootstrap` structure to allow setting up a
`LightClientStore` with the initial sync committee and block header from
a user-configured trusted block root.
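A minimal sketch of what the new structure and its producer could look like, assuming the Altair `CURRENT_SYNC_COMMITTEE_INDEX` generalized index and a hypothetical `compute_merkle_proof` helper standing in for actual proof construction:
```python
class LightClientBootstrap(Container):
    # Block header matching the requested trusted block root
    header: BeaconBlockHeader
    # Current sync committee at `header`, proven against `header.state_root`
    current_sync_committee: SyncCommittee
    current_sync_committee_branch: Vector[Bytes32, floorlog2(CURRENT_SYNC_COMMITTEE_INDEX)]


def create_light_client_bootstrap(state: BeaconState) -> LightClientBootstrap:
    # Fill in the state root; `latest_block_header` of a post-block state
    # still has it zeroed out
    header = state.latest_block_header.copy()
    header.state_root = hash_tree_root(state)
    return LightClientBootstrap(
        header=header,
        current_sync_committee=state.current_sync_committee,
        # hypothetical helper standing in for actual proof construction
        current_sync_committee_branch=compute_merkle_proof(state, CURRENT_SYNC_COMMITTEE_INDEX),
    )
```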
This leads to new cases where the `LightClientStore` is only aware of
the current but not the next sync committee. As a side effect of these
new cases, the store's `finalized_header` may now advance into the next
sync committee period before a corresponding `LightClientUpdate` with
the new sync committee is obtained, improving responsiveness.
Note that previously, `LightClientUpdate.attested_header.slot` needed to
be newer than `LightClientStore.finalized_header.slot`. It is now
necessary to also consider certain older updates in order to backfill
the `next_sync_committee`. The `is_better_update` helper is also updated
to improve `best_valid_update` tracking.
`LightClientUpdate` structures currently use a different merkle proof
root depending on the presence of `finalized_header`. By always rooting
the proofs in the same state (the `attested_header.state_root`), the
logic becomes simpler.
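A sketch of the unified verification, assuming the spec's `is_valid_merkle_branch` helper and the `FINALIZED_ROOT_INDEX` generalized index; both the finality branch and the sync committee branch are now checked against the same root:
```python
# Rooted in the state of the attested header, whether or not
# a `finalized_header` is present
assert is_valid_merkle_branch(
    leaf=hash_tree_root(update.finalized_header),
    branch=update.finality_branch,
    depth=floorlog2(FINALIZED_ROOT_INDEX),
    index=get_subtree_index(FINALIZED_ROOT_INDEX),
    root=update.attested_header.state_root,
)
```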
Caveats:
- In periods of extended non-finality, `update.finalized_header` may now
be outdated by several sync committee periods. The old implementation
rejected such updates as the `next_sync_committee` in them was stale,
but the new implementation can properly handle this case.
- The `next_sync_committee` can no longer be considered finalized based
on `is_finality_update`. Instead, waiting until `finalized_header` is
in the `attested_header`'s sync committee period is now necessary.
- Because `update.finalized_header > store.finalized_header` no longer
holds (for updates with finality), an `is_better_update` helper is
added to improve `best_valid_update` tracking (in the past, finalized
updates with supermajority participation would always apply directly);
see the sketch below.
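A simplified sketch of the comparison such a helper could make, assuming spec types and an `is_finality_update` predicate; the real helper weighs additional criteria:
```python
def is_better_update(new_update: LightClientUpdate,
                     old_update: LightClientUpdate) -> bool:
    # Prefer the update with higher sync committee participation
    new_active = sum(new_update.sync_aggregate.sync_committee_bits)
    old_active = sum(old_update.sync_aggregate.sync_committee_bits)
    if new_active != old_active:
        return new_active > old_active
    # Then prefer updates carrying a finality proof
    new_finality = is_finality_update(new_update)
    old_finality = is_finality_update(old_update)
    if new_finality != old_finality:
        return new_finality
    # Tiebreaker: prefer older attested headers, which are more useful
    # for backfilling the `next_sync_committee`
    return new_update.attested_header.slot < old_update.attested_header.slot
```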
This PR builds on prior work from:
- @hwwhww at https://github.com/ethereum/consensus-specs/pull/2829
The producer of `LightClientUpdate` structures usually does not know how
far the `LightClientStore` on the client side has advanced. Updates
that include a redundant `next_sync_committee`, one which does not
advance the `LightClientStore`, are currently rejected. Behaviour is
changed to allow this.
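A sketch of the relaxed check, assuming a hypothetical `is_next_sync_committee_known` helper; the redundant committee is accepted as long as it matches what the store already holds:
```python
update_period = compute_sync_committee_period(compute_epoch_at_slot(update.attested_header.slot))
store_period = compute_sync_committee_period(compute_epoch_at_slot(store.finalized_header.slot))
if update_period == store_period and is_next_sync_committee_known(store):
    # No longer rejected as redundant, but it must be consistent
    assert update.next_sync_committee == store.next_sync_committee
```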
When `state.finalized_checkpoint` references the genesis slot, it points
to an empty `root`, instead of the actual genesis block hash. This patch
updates the `LightClientUpdate` logic to allow including finality proofs
for genesis `finalized_checkpoint.root`, better supporting non-mainnet.
When including such a finality proof, the proof is for the empty `root`,
but `finalized_header` is kept zeroed out to signify this edge case.
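A sketch of how validation could distinguish this edge case:
```python
if update.finalized_header.slot == GENESIS_SLOT:
    # The proof is for the empty root; the header stays zeroed out
    assert update.finalized_header == BeaconBlockHeader()
    finalized_root = Bytes32()
else:
    finalized_root = hash_tree_root(update.finalized_header)
```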
The `fork_version` field in `LightClientUpdate` can be derived from the
`update.signature_slot` value by consulting the locally configured fork
schedule. The light client already needs access to the fork schedule to
determine the `GeneralizedIndex` values used for merkle proofs, and the
memory layouts of the structures (including `LightClientUpdate`). The
`fork_version` itself depends only on the network and reveals no
additional information.
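A sketch of the derivation, for a hypothetical fork schedule that only configures the Altair fork:
```python
def compute_fork_version(epoch: Epoch) -> Version:
    # Consult the locally configured fork schedule
    if epoch >= ALTAIR_FORK_EPOCH:
        return ALTAIR_FORK_VERSION
    return GENESIS_FORK_VERSION


# Derived locally instead of carried in the update
fork_version = compute_fork_version(compute_epoch_at_slot(update.signature_slot))
```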
* Ignore subset aggregates
When aggregates are propagated through the network, it is often the case
that a better aggregate has already been seen - in particular, this
happens when an aggregator has not been able to include itself in the
mesh and therefore publishes an aggregate with only its own
attestation.
This new ignore rule allows dropping all aggregates that are
(non-strict) subsets of aggregates that have already been seen on the
network. In particular, it does not mandate dropping aggregates where a
union of previous aggregates would cause it to become a subset.
The logic for allowing this is based on the premise that any aggregate
that has already been seen by a peer will also have been seen by its
neighbours - a subset aggregate (strict or not) brings no new value to
the aggregation algorithm, except in the extreme edge case where you
could combine several such sparse aggregates into a single, more dense
"combined" aggregate and thus use less block space.
Further, as a small benefit, computing the `hash_tree_root` of the full
aggregate is generally not needed; however, `hash_tree_root(data)` is
already computed for other purposes, as it is used as an index in the
beacon API.
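A sketch of the subset test, assuming aggregation bits iterate as booleans (as SSZ `Bitlist` values do) and a hypothetical `seen_aggregates` index keyed by `hash_tree_root(data)`:
```python
from typing import Sequence


def is_non_strict_subset(seen_bits: Sequence[bool], new_bits: Sequence[bool]) -> bool:
    # The new aggregate adds nothing if every participant it covers
    # is already covered by the seen aggregate
    return all(seen or not new for seen, new in zip(seen_bits, new_bits))


# IGNORE the incoming aggregate if any previously seen aggregate covers it
should_ignore = any(
    is_non_strict_subset(seen.aggregation_bits, incoming.aggregation_bits)
    for seen in seen_aggregates[hash_tree_root(incoming.data)]
)
```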
* add subset ignore rule to sync contributions as well
* typo
As the sync committee signs the previous block, at every sync committee
period boundary the situation arises that the new sync committee signs a
block in the previous sync committee period. The light client cannot
reliably detect this condition (e.g., by assuming it is the case whenever
it is currently on the last slot of a sync committee period), because
the last couple of slots of a sync committee period may not have a block.
For example, when receiving a `LightClientUpdate` that is constructed
as in the following illustration, it is unknown whether `sync_aggregate`
was signed by the current or next sync committee at `attested_header`.
```
  slot      N          N + 1     |       N + 2 (slot not sent!)
                                 |
  +-----------------+   \   /    |    +----------------+
  | attested_header | <-- X -----|----| sync_aggregate |
  +-----------------+   /   \    |    +----------------+
                       missed    |
                                 |
                          sync committee
                          period boundary
```
This patch addresses this edge case by including the slot at which the
`sync_aggregate` was created into the `LightClientUpdate` object.
Note that the `signature_slot` cannot be trusted beyond the purpose of
signature verification, as it could be manipulated to any other slot
within the same sync committee period and fork version, without making
the `sync_aggregate` invalid.
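A sketch of how the new field could drive committee selection during signature verification, reusing `compute_sync_committee_period`:
```python
store_period = compute_sync_committee_period(compute_epoch_at_slot(store.finalized_header.slot))
signature_period = compute_sync_committee_period(compute_epoch_at_slot(update.signature_slot))
# Select the committee by when the signature was produced,
# not by `attested_header.slot`
if signature_period == store_period:
    sync_committee = store.current_sync_committee
else:
    sync_committee = store.next_sync_committee
```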
When a light client updates its `finalized_header` using a forced update
because of the timeout, and the new header was not signed by enough sync
committee participants to pass `get_safety_threshold(store)`, it may
occur that `store.finalized_header.slot > store.optimistic_header.slot`.
This patch ensures that the `optimistic_header` is updated to the latest
`finalized_header` if that happens, so that it always indicates the
latest known and accepted head.
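A sketch of the invariant being restored:
```python
# After a forced update, the optimistic head must not trail the finalized one
if store.finalized_header.slot > store.optimistic_header.slot:
    store.optimistic_header = store.finalized_header
```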
There were a couple of instances where division was used on an epoch to
derive the corresponding sync committee period instead of calling the
`compute_sync_committee_period` function. These instances were changed
to use the function as well.
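For reference, the helper simply divides the epoch by the period length:
```python
def compute_sync_committee_period(epoch: Epoch) -> uint64:
    return epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
```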
In the light client docs, a mention of a function trigger is missing
the `genesis_validators_root` argument. This patch adds that argument
to the documentation to match the actual function signature. It also
slightly improves the grammar.
This renames the `sync_committee_aggregate` field of `LightClientUpdate`
to `sync_aggregate` for consistency with the terminology in the rest of
the spec.
1. Replace `header` and `finality_header` with `attested_header` (always the header signed by the committee) and `finalized_header` (always the header verified by the Merkle branch)
2. Remove `LightClientSnapshot`, fold its fields into `LightClientStore` for simplicity
1. Simplify `valid_updates` to `best_valid_update` so the `LightClientStore` only needs to store O(1) data
2. Track an optimistic head, by looking for the highest-slot header which passes a safety threshold
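A sketch of the resulting store layout, assuming the participant counters that back the safety threshold:
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LightClientStore(object):
    # Header that is finalized
    finalized_header: BeaconBlockHeader
    # Sync committees corresponding to the finalized header
    current_sync_committee: SyncCommittee
    next_sync_committee: SyncCommittee
    # Best available update to apply if nothing better arrives
    best_valid_update: Optional[LightClientUpdate]
    # Most recent available reasonably-safe header
    optimistic_header: BeaconBlockHeader
    # Max participation seen recently, backing `get_safety_threshold`
    previous_max_active_participants: uint64
    current_max_active_participants: uint64
```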
Sync committee rewards as currently implemented significantly increase variance in proposer rewards: https://github.com/ethereum/eth2.0-specs/issues/2448
For example, if there are 200000 validators (6.4m ETH staked), then during each 1/4-eek (~54 hour) period there is a chance of 512/200000 that a validator will get accepted into the sync committee, so on average that will happen once every 200000/512 * 1/4 = 97.6 eeks, or close to two years. The payout of this "lottery" is 1/8 of that, or ~12.2 eeks (a bit less than four months) of revenue. This is much more severe than block proposing (a chance of 1/200000 per slot, or a lottery worth ~0.38 eeks of revenue once every ~3.05 eeks).
This PR makes three changes to make the sync committee lottery less drastic and bring variance closer in line with what is available from block proposing:
* Reduce the `SYNC_REWARD_WEIGHT` from 8 to 2
* Add a penalty for not participating in the sync committee, so that despite the first change the total net reward for participating vs not participating is only cut down by 2x
* Reduce the sync committee period from 1/4 eek to 1/8 eek (~27 hours)
With these three factors combined, the lottery reduces to ~1.5 eeks of revenue, on average occurring every ~48 eeks. Validators who are maximally unlucky (ie. never become part of a sync committee) only lose ~3.12% of their rewards instead of ~12.5%.
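A quick Python sketch of the arithmetic above, assuming a total reward weight of 64 (consistent with the 1/8 and `SYNC_REWARD_WEIGHT` figures):
```python
VALIDATORS = 200_000
COMMITTEE_SIZE = 512

# Before: 1/4-eek periods, SYNC_REWARD_WEIGHT = 8 (1/8 of total rewards)
old_wait = VALIDATORS / COMMITTEE_SIZE * (1 / 4)  # ~97.7 eeks between selections
old_payout = old_wait * (8 / 64)                  # ~12.2 eeks of revenue per selection

# After: 1/8-eek periods, SYNC_REWARD_WEIGHT = 2 (1/32 of total rewards)
new_wait = VALIDATORS / COMMITTEE_SIZE * (1 / 8)  # ~48.8 eeks between selections
new_payout = new_wait * (2 / 64)                  # ~1.5 eeks of revenue per selection
```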
The compromises that this approach makes are:
* In the extreme case where >50% of proposers are operating efficiently, being in a sync committee becomes a net burden. However, this should be extremely rare, and in such cases validators would likely be suffering inactivity leak penalties anyway.
* Incentive to participate in a sync committee decreased by 2x (but this is IMO an improvement; sync committees are _not_ as important as proposals and deserve to have lower rewards)
* Minimum data syncing needed to maintain a light client increases by 2x (from 24 kB per 54 hours to 24 kB per 27 hours). A burden for on-chain light clients, but still insignificant for others.