Ignore subset aggregates (#2847)
* Ignore subset aggregates

  When aggregates are propagated through the network, it is often the case that a better aggregate has already been seen - in particular, this happens when an aggregator has not been able to include itself in the mesh and therefore publishes an aggregate with only its own attestations.

  This new ignore rule allows dropping all aggregates that are (non-strict) subsets of aggregates that have already been seen on the network. In particular, it does not mandate dropping aggregates that become subsets only when compared against a union of previously seen aggregates.

  The logic for allowing this is based on the premise that any aggregate that has already been seen by a peer will also have been seen by its neighbours - a subset aggregate (strict or not) brings no new value to the aggregation algorithm, except in the extreme edge case where you could combine several such sparse aggregates into a single, denser "combined" aggregate and thus use less block space.

  Further, as a small benefit, computing the `hash_tree_root` of the full aggregate is generally no longer needed; `hash_tree_root(data)` is already computed for other purposes, since it is used as an index in the beacon API.

* add subset ignore rule to sync contributions as well

* typo
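The rule described here boils down to a per-aggregate bitfield comparison. Below is a minimal sketch of that intent (helper names are illustrative, not spec code), including the union edge case mentioned above:

```python
from typing import List, Tuple

Bits = Tuple[bool, ...]

def is_non_strict_subset(candidate: Bits, seen: Bits) -> bool:
    """True if every participant set in `candidate` is already set in `seen`."""
    return all(s or not c for c, s in zip(candidate, seen))

def should_ignore(candidate: Bits, seen_aggregates: List[Bits]) -> bool:
    """IGNORE if any single previously seen aggregate already covers `candidate`.

    Note: the rule compares against individual seen aggregates only; a candidate
    that is covered merely by the *union* of several seen aggregates may still
    be propagated.
    """
    return any(is_non_strict_subset(candidate, seen) for seen in seen_aggregates)

# Example: covered by one seen aggregate -> ignored;
# covered only by the union of two seen aggregates -> not ignored.
seen = [(True, True, False, False), (False, False, True, True)]
assert should_ignore((True, False, False, False), seen)
assert not should_ignore((True, False, True, False), seen)
```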
parent 20a90f1df7
commit bab5e402df
@@ -144,6 +144,7 @@ def get_sync_subcommittee_pubkeys(state: BeaconState, subcommittee_index: uint64
- _[REJECT]_ `contribution_and_proof.selection_proof` selects the validator as an aggregator for the slot -- i.e. `is_sync_committee_aggregator(contribution_and_proof.selection_proof)` returns `True`.
- _[REJECT]_ The aggregator's validator index is in the declared subcommittee of the current sync committee --
  i.e. `state.validators[contribution_and_proof.aggregator_index].pubkey in get_sync_subcommittee_pubkeys(state, contribution.subcommittee_index)`.
- _[IGNORE]_ A valid sync committee contribution with equal `slot`, `beacon_block_root` and `subcommittee_index` whose `aggregation_bits` is a non-strict superset has _not_ already been seen (see the sketch after this list).
- _[IGNORE]_ The sync committee contribution is the first valid contribution received for the aggregator with index `contribution_and_proof.aggregator_index`
  for the slot `contribution.slot` and subcommittee index `contribution.subcommittee_index`
  (this requires maintaining a cache of size `SYNC_COMMITTEE_SIZE` for this topic that can be flushed after each slot).
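A minimal sketch of how a client might track the two IGNORE rules above, assuming hypothetical cache names (`seen_contribution_bits`, `seen_contribution_aggregators`); this is illustrative, not spec code, and both caches can be flushed after each slot as the last rule notes:

```python
from typing import Dict, List, Set, Tuple

Bits = Tuple[bool, ...]

# Hypothetical per-slot caches.
# (slot, beacon_block_root, subcommittee_index) -> aggregation_bits already seen
seen_contribution_bits: Dict[Tuple[int, bytes, int], List[Bits]] = {}
# (slot, subcommittee_index) -> aggregator indices already seen (bounded by SYNC_COMMITTEE_SIZE)
seen_contribution_aggregators: Dict[Tuple[int, int], Set[int]] = {}

def should_ignore_contribution(slot: int,
                               beacon_block_root: bytes,
                               subcommittee_index: int,
                               aggregator_index: int,
                               aggregation_bits: Bits) -> bool:
    # IGNORE: a previously seen contribution already covers these bits (non-strict superset).
    key = (slot, beacon_block_root, subcommittee_index)
    for seen in seen_contribution_bits.get(key, []):
        if all(s or not b for b, s in zip(aggregation_bits, seen)):
            return True
    # IGNORE: this aggregator already produced a valid contribution for this slot/subcommittee.
    if aggregator_index in seen_contribution_aggregators.get((slot, subcommittee_index), set()):
        return True
    return False
```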
@@ -252,7 +252,7 @@ Likewise, clients MUST NOT emit or propagate messages larger than this limit.
The optional `from` (1), `seqno` (3), `signature` (5) and `key` (6) protobuf fields are omitted from the message,
since messages are identified by content, anonymous, and signed where necessary in the application layer.
Starting from Gossipsub v1.1, clients MUST enforce this by applying the `StrictNoSign`
[signature policy](https://github.com/libp2p/specs/blob/master/pubsub/README.md#signature-policy-options).

The `message-id` of a gossipsub message MUST be the following 20 byte value computed from the message data:
* If `message.data` has a valid snappy decompression, set `message-id` to the first 20 bytes of the `SHA256` hash of
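The diff view truncates the message-id description above. As a hedged sketch of the scheme it refers to, assuming the `MESSAGE_DOMAIN_VALID_SNAPPY`/`MESSAGE_DOMAIN_INVALID_SNAPPY` constants defined elsewhere in the spec and the phase0 form of the hash preimage (a domain prefix followed by the payload; Altair additionally mixes the topic into the hash):

```python
import hashlib

import snappy  # python-snappy; assumed available

# Domain constants defined elsewhere in the spec.
MESSAGE_DOMAIN_VALID_SNAPPY = bytes.fromhex("01000000")
MESSAGE_DOMAIN_INVALID_SNAPPY = bytes.fromhex("00000000")

def compute_message_id(message_data: bytes) -> bytes:
    """First 20 bytes of SHA256 over a domain-prefixed payload (phase0 form)."""
    try:
        payload = MESSAGE_DOMAIN_VALID_SNAPPY + snappy.decompress(message_data)
    except Exception:
        # No valid snappy decompression: use the raw data with the invalid-snappy domain.
        payload = MESSAGE_DOMAIN_INVALID_SNAPPY + message_data
    return hashlib.sha256(payload).digest()[:20]
```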
@@ -337,7 +337,7 @@ The following validations MUST pass before forwarding the `signed_aggregate_and_
(a client MAY queue future aggregates for processing at the appropriate slot).
- _[REJECT]_ The aggregate attestation's epoch matches its target -- i.e. `aggregate.data.target.epoch ==
  compute_epoch_at_slot(aggregate.data.slot)`
- _[IGNORE]_ The valid aggregate attestation defined by `hash_tree_root(aggregate)` has _not_ already been seen
- _[IGNORE]_ A valid aggregate attestation defined by `hash_tree_root(aggregate.data)` whose `aggregation_bits` is a non-strict superset has _not_ already been seen.
  (via aggregate gossip, within a verified block, or through the creation of an equivalent aggregate locally).
- _[IGNORE]_ The `aggregate` is the first valid aggregate received for the aggregator
  with index `aggregate_and_proof.aggregator_index` for the epoch `aggregate.data.target.epoch` (a combined sketch of these checks follows this list).
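A minimal sketch combining the checks above, assuming hypothetical caches keyed on `hash_tree_root(aggregate.data)` and on `(aggregator_index, target_epoch)`; names and return values are illustrative, not normative:

```python
from typing import Dict, List, Set, Tuple

Bits = Tuple[bool, ...]

SLOTS_PER_EPOCH = 32  # mainnet preset

# Hypothetical caches, flushed periodically.
seen_aggregate_bits: Dict[bytes, List[Bits]] = {}   # data root -> aggregation_bits already seen
seen_aggregators: Set[Tuple[int, int]] = set()      # (aggregator_index, target_epoch)

def covers(seen: Bits, new: Bits) -> bool:
    """True if `seen` is a non-strict superset of `new`."""
    return all(s or not n for s, n in zip(seen, new))

def validate_aggregate_gossip(slot: int,
                              target_epoch: int,
                              data_root: bytes,
                              aggregator_index: int,
                              aggregation_bits: Bits) -> str:
    # [REJECT] The aggregate attestation's epoch matches its target.
    if target_epoch != slot // SLOTS_PER_EPOCH:  # compute_epoch_at_slot
        return "REJECT"
    # [IGNORE] A seen aggregate for the same data already covers these bits.
    if any(covers(seen, aggregation_bits) for seen in seen_aggregate_bits.get(data_root, [])):
        return "IGNORE"
    # [IGNORE] Only the first valid aggregate per aggregator and target epoch is forwarded.
    if (aggregator_index, target_epoch) in seen_aggregators:
        return "IGNORE"
    seen_aggregate_bits.setdefault(data_root, []).append(aggregation_bits)
    seen_aggregators.add((aggregator_index, target_epoch))
    return "ACCEPT"
```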