8 Commits

Author SHA1 Message Date
Hsiao-Wei Wang
85d5a9abaf
[squashed] shard chain updates wip
PR feedback from Danny and some refactoring

1. Add stub `PHASE_1_GENESIS_SLOT`
2. Rename `get_updated_gasprice` to `compute_updated_gasprice`
3. Rename `compute_shard_data_roots` to `compute_shard_body_roots`

Apply shard transition for the skipped slots

Refactor `shard_state_transition`

Get `beacon_parent_root` from offset slot

Add more tests

Add `verify_shard_block_message`

Add `> 0`

Keep `beacon_parent_block` unchanged in `is_valid_fraud_proof`

Remove some lines

Fix type

Refactor + simplify skipped slot processing
2020-05-02 02:31:54 +08:00
Hsiao-Wei Wang
9724cb832d
Apply suggestions from code review by @djrtwo
Co-Authored-By: Danny Ryan <dannyjryan@gmail.com>
2020-05-02 02:31:53 +08:00
Hsiao-Wei Wang
afa12caf1f
Refactor `get_shard_state_transition_result`
2020-05-02 02:31:53 +08:00
Hsiao-Wei Wang
4e8a7ff115
[squashed] shard transition wip
Fix the wrong `get_shard_proposer_index` parameter order

Phase 1 WIP

Add shard transition basic test

Fix lint error

Fix
2020-05-02 02:31:10 +08:00
Hsiao-Wei Wang
247a6c8fca
Add `verify_fraud_proof` function
2020-05-02 02:31:09 +08:00
Hsiao-Wei Wang
5f69afea38
Make `shard_state_transition` more like the beacon `state_transition` function
2020-05-02 02:31:08 +08:00
vbuterin
2a91b43eaf
Remove shard block chunking
Only store a 32-byte root for every shard block

Rationale: originally, I added shard block chunking (storing 4 chunks for every shard block instead of one root) to facilitate construction of data availability roots. However, it turns out that there is an easier technique: set the width of the data availability rectangle's rows to 1/4 the max size of a shard block, so each block fills up to four rows. Then, non-full blocks will generally create lots of zero rows. For example, if the block bodies are `31415926535` and `897932` with a max size of 32 bytes, the rows might look like this:

```
31415926
53500000
00000000
00000000
89793200
00000000
00000000
00000000
```
Zero rows extend rightward to all-zero extended rows, and when extending downward we can count the zero rows and reduce the number of extra rows we generate, so we only make a new row for every nonzero row in the original data. This way we get only a close-to-optimal ~4-5x blowup in the data even if the data has zero rows in the middle.
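To make the row layout concrete, here is a minimal Python sketch of the padding scheme described above. The helper name `layout_rows` and its signature are illustrative assumptions, not spec functions, and it pads with zero bytes where the example above uses the digit `0` for readability:

```python
def layout_rows(block_bodies: list, max_block_size: int) -> list:
    """Lay shard block bodies into the rows of the data availability
    rectangle. Row width is 1/4 the max shard block size, so every
    block occupies exactly four rows; non-full blocks are padded out
    with zero bytes, which yields whole zero rows."""
    row_width = max_block_size // 4
    rows = []
    for body in block_bodies:
        # Zero-pad each body to the full max block size, then slice
        # it into fixed-width rows.
        padded = body + bytes(max_block_size - len(body))
        for i in range(0, max_block_size, row_width):
            rows.append(padded[i:i + row_width])
    return rows

# The two example bodies with a max size of 32 produce the eight rows
# shown above: two data rows plus two zero rows, then one data row
# plus three zero rows.
rows = layout_rows([b"31415926535", b"897932"], 32)
assert len(rows) == 8 and rows[2] == rows[3] == bytes(8)
```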
2020-01-28 17:31:51 -07:00
protolambda
4732c7beb1
Merge in dev (v0.10) and fix reorg/lint issues
2020-01-13 18:55:21 +01:00