add more notes about validator architecture

Danny Ryan 2019-01-06 10:13:48 -07:00 committed by GitHub
parent c19044488e
commit 0d9d327b84
1 changed file with 2 additions and 0 deletions


@@ -114,6 +114,8 @@
* Question was asked about the validator client architecture:
* Question was that, in the spec, we have to validate that the block state root is equal to the tree hash of the state. So, does that mean, as a validator client, you would actually have to compute the state transition on the client side -- and then attach it to the block? And then issue the block to the beacon node?
* Danny chimed in saying -- no; for that relation, the node is calculating that state root and providing it to the validator. The _providing_ is essentially a block proposal to sign. Similar to a PoW miner, the node provides a proposal that the PoW miner is supposed to try and hash. So the heavy lifting should happen in the node. The main information that needs to pass along to the validator is enough for the validator to decide whether this is a _safe_ or a _dangerous_ message to sign. Strategies for the validator to assess the validity of the information are more of a trust relationship, and the validator should perhaps be asking multiple nodes. But, essentially, the signing entity should not be doing much of the heavy lifting. You wouldn't want to be passing the entire state to the validator; instead, you pass block proposals to sign. (A sketch of this validator/node split follows these notes.)
* Raul from Prysmatic followed up with: We think that having the validator client rely upon the node for syncing shards, and not having it connected to the p2p network, actually makes it harder to decouple the validator from a single node.
* Danny: I think the opposite -- putting those responsibilities in the validator makes it much more difficult. The validator should be responsible for signing messages and remembering the relevant details of those messages, so that it has the requisite information to continue to safely sign messages in the future (see the signing-history sketch after these notes). It can request this information from any nodes -- a node it runs, a set of nodes it runs, a node it runs plus a public service node, etc. The validator then makes local decisions about what it should sign, stores any information it needs about these decisions so that it can make good decisions in the future, and then passes these signed decisions along to any entity to broadcast them to the network. Once you start putting p2p requirements on a validator, you've increased the scope of this entity massively, and you've also directly connected a validator with signing keys to the internet and to potentially malicious peers. So even just from a security standpoint, you've moved the validator out of a place of isolation and into a risky position. Back to the swapping: if the validator only asks questions about the state of the world, then it becomes very easy to swap whom the validator is asking these questions, as long as there is a common interface for these requests. If we add p2p requirements, state processing requirements, etc., we actually just begin to build a node inside of a validator. If there are already robust node implementations, why rebuild these components in a separate place? And again, you cannot sync shard data without a constant connection to the beacon chain. If a validator were to sync shards directly, it would have to constantly pass information between the beacon chain and the shard chains, and would end up becoming a core piece of communication infrastructure between these components. Beyond that, a node needs to be able to sync shards without a validator existing, because there are many use cases for a node outside of validation.
* Justin Drake discussed that he doesn't think a decision has been finalized on the endianness of SSZ. Encouraged others to comment on the open pull request.
* Also commented on the honest behavior to add more clarity -- basically, step 0 is just to apply the validity rules on the various blocks that you've received, and then you've got this block tree with various forks. Then you apply the fork choice rule, so you get a single canonical blockchain. And now your duty as an honest proposer is to build on top of the tip of this canonical chain. There are basically only two things to do from the point of view of deposit roots:
    * 1.) You need to cast a vote for a deposit root from the Ethereum1.0 deposit contract. The rule there is that we want to vote for the latest one which is contained in a block whose height is 0 mod some power of 2. So, for example, every 1024 blocks on the Ethereum1.0 chain you're going to have a corresponding deposit root for whatever you consider the canonical Ethereum1.0 chain, and you just vote for that. As soon as you have the required threshold of validators who have voted for that specific root, it becomes the latest deposit root (right now it's called `processed_deposit_root`, but it will soon be called `latest_deposit_root`). (A sketch of this voting rule follows these notes.)
    * 2.) Given this latest deposit root, you need to include deposit receipts from Ethereum1.0 into Ethereum2.0. You need to include them _in order_, you need to include up to 16 of them (specified in the `MAX_DEPOSITS` constant), and you need to include them up to the latest deposit root that has been voted upon. (A sketch of this inclusion rule also follows.)
* Danny chimed in, adding that we are currently missing a validity condition on the ordering of those deposits. (will be added in the spec)
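
Below is a minimal sketch (in spec-style Python) of the validator/node split Danny describes above. The `BeaconNodeClient` interface and the `BlockProposal` shape are hypothetical illustrations, not spec APIs; the point is only that the node runs the state transition and fills in the state root, while the validator merely decides whether a proposal is safe to sign.

```python
from dataclasses import dataclass


@dataclass
class BlockProposal:
    slot: int
    parent_root: bytes
    state_root: bytes  # computed by the node, not by the validator
    body_root: bytes


class BeaconNodeClient:
    """Hypothetical RPC interface to a beacon node (illustrative only)."""

    def request_proposal(self, slot: int) -> BlockProposal:
        ...  # the node runs the state transition and fills in state_root

    def publish(self, proposal: BlockProposal, signature: bytes) -> None:
        ...  # the node broadcasts to the p2p network on the validator's behalf


def propose(node: BeaconNodeClient, slot: int, sign, is_safe_to_sign) -> None:
    proposal = node.request_proposal(slot)
    if not is_safe_to_sign(proposal):  # the validator's only heavy decision
        return
    node.publish(proposal, sign(proposal))
```

Because the validator only talks to this narrow interface, it can be pointed at any node, or set of nodes, that implements it -- which is exactly the swappability Danny mentions.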
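A hedged sketch of the "remembering the relevant details" point: the validator keeps a local record of what it has signed and refuses to sign a conflicting message. The storage schema here is an assumption for illustration only.

```python
class SigningHistory:
    """Local record of signed block proposals (illustrative schema)."""

    def __init__(self) -> None:
        self._signed_block_slots: dict[int, bytes] = {}  # slot -> block root

    def is_safe_block(self, slot: int, block_root: bytes) -> bool:
        previously_signed = self._signed_block_slots.get(slot)
        # Safe if we never signed at this slot, or we are re-signing the
        # exact same block -- never two different blocks at one slot.
        return previously_signed is None or previously_signed == block_root

    def record_block(self, slot: int, block_root: bytes) -> None:
        self._signed_block_slots[slot] = block_root
```

Persisting a record like this is what lets the validator keep making safe local signing decisions across restarts and across swaps between nodes.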
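For the deposit-root voting rule (1.) above, a sketch assuming a hypothetical `eth1` client exposing `latest_block_number()` and `deposit_root_at(block_number)`; the 1024-block period is the example from the notes, not a fixed constant.

```python
DEPOSIT_ROOT_VOTE_PERIOD = 1024  # "0 mod some power of 2" -- example value


def deposit_root_to_vote_for(eth1) -> bytes:
    """Pick the deposit root the proposer should vote for."""
    head = eth1.latest_block_number()
    # Latest Ethereum1.0 block whose height is 0 mod the period, on
    # whatever chain this node considers the canonical Ethereum1.0 chain.
    vote_block = head - (head % DEPOSIT_ROOT_VOTE_PERIOD)
    return eth1.deposit_root_at(vote_block)
```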
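And a sketch of the inclusion rule (2.), including the ordering validity condition Danny notes is still missing from the spec. `pending_deposits` is assumed to be sorted by deposit index, and `latest_deposit_root_index` is assumed to be the deposit count covered by the voted root -- both names are illustrative, not from the spec.

```python
from dataclasses import dataclass

MAX_DEPOSITS = 16  # the spec constant mentioned above


@dataclass
class Deposit:
    index: int
    data: bytes  # serialized deposit receipt (placeholder)


def deposits_for_block(pending_deposits, next_deposit_index,
                       latest_deposit_root_index):
    """Select deposit receipts to include in a proposed block."""
    included = []
    for deposit in pending_deposits:
        # Ordering condition: deposits must be consecutive, starting at
        # the next index the chain expects (the missing validity condition).
        if deposit.index != next_deposit_index + len(included):
            break
        # Only include deposits covered by the latest voted deposit root.
        if deposit.index >= latest_deposit_root_index:
            break
        included.append(deposit)
        if len(included) == MAX_DEPOSITS:  # at most 16 per block
            break
    return included
```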