Ignore subset aggregates (#2847)

* Ignore subset aggregates

When aggregates are propagated through the network, it is often the case
that a better aggregate has already been seen - in particular, this
happens when an aggregator has not been able to join the mesh and
therefore publishes an aggregate containing only its own attestations.

This new ignore rule allows dropping all aggregates that are
(non-strict) subsets of aggregates that have already been seen on the
network. Notably, it does not mandate dropping aggregates that become a
subset only when compared against a union of previously seen aggregates.
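
As a rough sketch (illustrative only, not part of the spec change), the rule can be
checked by comparing the incoming `aggregation_bits` against each previously seen
bitfield for the same attestation data individually; the helper names below are
hypothetical:

```python
# Illustrative sketch of the subset ignore rule; helper names are hypothetical.
from typing import Iterable, Sequence


def is_subset_aggregate(new_bits: Sequence[bool], seen_bits: Sequence[bool]) -> bool:
    """True if every participant in `new_bits` is already covered by `seen_bits`."""
    return all(seen for new, seen in zip(new_bits, seen_bits) if new)


def should_ignore(new_bits: Sequence[bool], seen: Iterable[Sequence[bool]]) -> bool:
    # The comparison is made against each seen aggregate individually - no
    # union of previously seen aggregates is ever formed.
    return any(is_subset_aggregate(new_bits, s) for s in seen)
```

For example, bits `1010` would be ignored if `1110` had already been seen, but not
if only `1000` and `0010` had been seen as separate aggregates.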

The logic for allowing this is based on the premise that any aggregate
that has already been seen by a peer will also have been seen by its
neighbours - a subset aggregate (strict or not) brings no new value to
the aggregation algorithm, except in the extreme edge case where you
could combine several such sparse aggregates into a single, more dense
"combined" aggregate and thus use less block space.

Further, as a small benefit, computing the `hash_tree_root` of the full
aggregate is generally no longer needed - `hash_tree_root(data)` is
already computed for other purposes, as it is used as an index in the
beacon API.
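
A possible shape for such a cache, keyed by the attestation data root rather than
by the root of the full aggregate (names and structure are illustrative
assumptions):

```python
# Illustrative sketch: seen-aggregate cache keyed by hash_tree_root(aggregate.data),
# so the hash_tree_root of the full aggregate never needs to be computed here.
from collections import defaultdict
from typing import DefaultDict, List, Sequence

SeenAggregates = DefaultDict[bytes, List[Sequence[bool]]]  # data root -> seen bitfields

seen_aggregates: SeenAggregates = defaultdict(list)


def check_and_record(cache: SeenAggregates, data_root: bytes,
                     bits: Sequence[bool]) -> bool:
    """Return True if the aggregate may be forwarded, False if it should be ignored."""
    for seen in cache[data_root]:
        if all(s for b, s in zip(bits, seen) if b):
            return False  # non-strict subset of an already seen aggregate
    cache[data_root].append(bits)
    return True
```

Such a cache only needs to cover recent slots and can be flushed periodically,
similar to the other per-slot caches mentioned in the gossip rules.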

* add subset ignore rule to sync contributions as well

* typo
Jacek Sieka 2022-05-17 15:05:22 +02:00 committed by GitHub
parent 20a90f1df7
commit bab5e402df
2 changed files with 3 additions and 2 deletions


@@ -144,6 +144,7 @@ def get_sync_subcommittee_pubkeys(state: BeaconState, subcommittee_index: uint64
 - _[REJECT]_ `contribution_and_proof.selection_proof` selects the validator as an aggregator for the slot -- i.e. `is_sync_committee_aggregator(contribution_and_proof.selection_proof)` returns `True`.
 - _[REJECT]_ The aggregator's validator index is in the declared subcommittee of the current sync committee --
   i.e. `state.validators[contribution_and_proof.aggregator_index].pubkey in get_sync_subcommittee_pubkeys(state, contribution.subcommittee_index)`.
+- _[IGNORE]_ A valid sync committee contribution with equal `slot`, `beacon_block_root` and `subcommittee_index` whose `aggregation_bits` is a non-strict superset has _not_ already been seen.
 - _[IGNORE]_ The sync committee contribution is the first valid contribution received for the aggregator with index `contribution_and_proof.aggregator_index`
   for the slot `contribution.slot` and subcommittee index `contribution.subcommittee_index`
   (this requires maintaining a cache of size `SYNC_COMMITTEE_SIZE` for this topic that can be flushed after each slot).


@@ -337,7 +337,7 @@ The following validations MUST pass before forwarding the `signed_aggregate_and_
   (a client MAY queue future aggregates for processing at the appropriate slot).
 - _[REJECT]_ The aggregate attestation's epoch matches its target -- i.e. `aggregate.data.target.epoch ==
   compute_epoch_at_slot(aggregate.data.slot)`
-- _[IGNORE]_ The valid aggregate attestation defined by `hash_tree_root(aggregate)` has _not_ already been seen
+- _[IGNORE]_ A valid aggregate attestation defined by `hash_tree_root(aggregate.data)` whose `aggregation_bits` is a non-strict superset has _not_ already been seen.
   (via aggregate gossip, within a verified block, or through the creation of an equivalent aggregate locally).
 - _[IGNORE]_ The `aggregate` is the first valid aggregate received for the aggregator
   with index `aggregate_and_proof.aggregator_index` for the epoch `aggregate.data.target.epoch`.