* Ignore subset aggregates
When aggregates are propagated through the network, it is often the case
that a better aggregate has already been seen - in particular, this
happens when an aggregator has not been able to include itself in the
mesh and therefore publishes an aggregate with only its own
attestations.
This new ignore rule allows dropping all aggregates that are
(non-strict) subsets of aggregates that have already been seen on the
network. In particular, it does not mandate dropping aggregates that
only become subsets when a union of previously seen aggregates is
considered.
The logic for allowing this is based on the premise that any aggregate
that has already been seen by a peer will also have been seen by its
neighbours - a subset aggregate (strict or not) brings no new value to
the aggregation algorithm, except in the extreme edge case where several
such sparse aggregates could be combined into a single, denser
"combined" aggregate and thus use less block space.
Further, as a small benefit, computing the `hash_tree_root` of the full
aggregate is generally not needed - `hash_tree_root(data)` is already
computed for other purposes, as it is used as an index in the beacon
API.
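A minimal sketch of how a client might apply this rule, assuming a hypothetical cache keyed by `hash_tree_root(aggregate.data)` that stores the aggregation bits of previously seen aggregates (the names and structure below are illustrative, not part of the spec):

```python
from typing import Dict, List, Set

# Hypothetical cache: hash_tree_root(aggregate.data) -> aggregation-bit sets,
# each holding the committee indices included in a previously seen aggregate.
seen_aggregates: Dict[bytes, List[Set[int]]] = {}

def should_ignore_aggregate(data_root: bytes, aggregation_bits: Set[int]) -> bool:
    """Return True if the incoming aggregate is a (non-strict) subset of a
    single previously seen aggregate for the same AttestationData.

    Only individual previously seen aggregates are compared: the rule does not
    require dropping an aggregate merely because a union of earlier aggregates
    would cover it.
    """
    return any(aggregation_bits <= seen for seen in seen_aggregates.get(data_root, []))

def record_aggregate(data_root: bytes, aggregation_bits: Set[int]) -> None:
    """Remember the aggregation bits so that later subsets can be ignored."""
    seen_aggregates.setdefault(data_root, []).append(set(aggregation_bits))
```

In practice a client would likely store the bits as bitfields and prune the cache per slot; the set-based version only illustrates the subset comparison.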
* add subset ignore rule to sync contributions as well
* typo
When nodes are syncing but have not yet reached the canonical `head`,
they cannot determine whether nodes they are connected to serve a valid
history or are making bogus claims in their `Status` advertisement.
Thus, the best course of action a client can take is to vote for its
"current" best synced head, regardless of whether the peers it is
connected to claim to have other heads.
However, in the p2p spec we penalize such peers with a `REJECT` - this
should be an `IGNORE` instead, because the vote is correct per the spec,
albeit "late" according to the validating client's view of the chain.
The spec reserves the libp2p error code range `[3, 127]` for future use
but actually defines error code `3` as `ResourceUnavailable`. This patch
updates the reserved range to `[4, 127]`.
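For reference, a sketch of the resulting response-code layout; the names for codes 0-2 follow the p2p spec's existing error codes, and the comment only restates the change above:

```python
from enum import IntEnum

class ResponseCode(IntEnum):
    """Req/resp result byte (illustrative sketch, not normative)."""
    SUCCESS = 0
    INVALID_REQUEST = 1
    SERVER_ERROR = 2
    RESOURCE_UNAVAILABLE = 3
    # Codes in [4, 127] remain reserved for future use.
```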
* Typing problem fixed: `process_block_header` passes `Bytes32()` to `state_root` of `BeaconBlockHeader`, whose type is `Root`
* Typing problem fixed in `initialize_beacon_state_from_eth1`: `len` returns an `int` value, while the `deposit_count` field of `Eth1Data` has type `uint64`
* Typing problem fixed in `process_rewards_and_penalties`: `numerator` of type `int` is passed to the `weight` parameter of `get_flag_index_deltas`, which has type `uint64`
* Typing problem fixed in `process_attestation`: `False` is passed as the `crosslink_success` parameter of `PendingAttestation`, which has type `boolean`; `False` is an instance of `(python.)bool` but not an instance of `(ssz.)boolean`
* Typing problem fixed: `shard_data_roots` of `ShardTransition` has type `List[Bytes32]`, but its elements are used as if they were `Root` values, e.g. in the `process_chunk_challenge` method they are passed to `data_root` of `CustodyChunkChallengeRecord`, which has type `Root`
* Typing problem fixed in `process_custody_final_updates`: `index` has type `int`, while `validator_indices_in_records` has type `Set[ValidatorIndex]`, so testing whether `index in validator_indices_in_records` can be risky, depending on implementation details. `ValidatorIndex(index) in validator_indices_in_records` is a safer variant.
* Typing problem fixed: the `slashed` parameter of `pack_compact_validator` has type `(python.)bool`, however in `committee_to_compact_committee` a value of type `(ssz.)boolean` is passed for this parameter
* Typing problem fixed: `inactivity_scores` is a `List[uint64, ...]`, while it is initialized/appended with values of `(python.)int` type (the common cast pattern is sketched after this list)
* fixed according to @protolambda's suggestions
* changed types of _WEIGHT constants and the corresponding variables/parameters, according to @protolambda's suggestions
* revert code formatting back
* Introduced ZERO_ROOT according to @protolambda's suggestion
* Reverted back to , according to @protolambda's comments
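Most of the typing fixes above follow the same pattern: wrap raw Python values in the spec's SSZ types before they are stored, passed on, or compared. A self-contained sketch of that pattern, where the wrapper classes are simplified stand-ins for the real SSZ types rather than the spec's implementation:

```python
from typing import List, NewType, Set

# Simplified stand-ins for SSZ types (illustration only).
class uint64(int):
    pass

class boolean(int):  # distinct from python's bool
    pass

ValidatorIndex = NewType("ValidatorIndex", int)  # uint64 in the actual spec

def illustrate_casts(deposits: List[object], index: int,
                     validator_indices_in_records: Set[ValidatorIndex]):
    # len() yields a python int, so cast before assigning to a uint64 field:
    deposit_count = uint64(len(deposits))

    # python's False is not an (ssz.)boolean; construct the SSZ value explicitly:
    crosslink_success = boolean(False)

    # Compare like with like when testing set membership:
    in_records = ValidatorIndex(index) in validator_indices_in_records

    return deposit_count, crosslink_success, in_records
```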