research/old_casper_poc1
Vitalik Buterin 91aaea20f0 Reorganized research repo 2017-06-20 02:36:22 -04:00

README.md

Casper proof of stake algorithm

Casper is a new proof of stake algorithm being developed by Vitalik Buterin, Vlad Zamfir et al, which tries to resolve, or optimally compromise between, several of the outstanding issues in previous proof of stake algorithms. Proof of stake has many benefits, chief among them the potential for massive reductions in electricity costs and more rapid convergence toward a state of "finality", where transactions cannot be reverted no matter what. However, the issues described below have so far caused concern among developers and users, and have led to a reduced willingness to adopt proof-of-stake protocols.

The three primary issues with existing proof of stake algorithms can be described as follows.

  • Nothing at stake: if there are multiple "forks" of a blockchain, there is no incentive for validators not to register a vote on every fork of the chain. This may end up being a fatal problem even without any attackers, but if an attacker is present, then they can in theory bribe everyone $0.001 to vote on every fork, themselves vote only on their own fork, and thus revert an arbitrarily long chain at near-zero cost.

Some blockchains (eg. Tendermint) solve the first problem via a "slashing" mechanic where validators are required to post security bonds, which can be taken away if a validator double-votes. This insight can be taken much further and provide a very strong robustness property: a double spend cannot happen without a very large quantity of security bonds getting destroyed. However, to make this work, a requirement must be added that some portion x > 0.5 of validators must either sign every block or sign the chain after some height, so that a successful double spend requires at least 2x - 1 of validators to sign two blocks (as x signed the original chain, x signed the new chain, and 2x - 1 signing both chains is the mathematically smallest possible overlap; you can see this yourself via Venn diagrams). Hence, 2x - 1 of all security deposits (eg. at x = 2/3 this means one third) must be destroyed for a fork to happen. However, this itself introduces two new problems:
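The minimal-overlap argument above is a one-line inclusion-exclusion computation; `min_double_signers` is an illustrative helper name, not part of this repo:

```python
def min_double_signers(x):
    """Smallest possible fraction of validators who signed both chains,
    if a fraction x signed the original chain and a fraction x signed the
    fork: by inclusion-exclusion, at least x + x - 1 = 2x - 1."""
    return max(0.0, 2 * x - 1)

# At x = 2/3, at least one third of all deposits must be slashed for a fork.
assert abs(min_double_signers(2 / 3) - 1 / 3) < 1e-9
```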

  • Sudden dropout ice age: what if more than 1-x of nodes suddenly drop dead? Then the blockchain may "freeze" forever.
  • Stuck scenario: what if, somehow, half of nodes sign one block and the other half sign the other block, so the chain cannot proceed further without a large quantity of deposits being destroyed? This is arguably as fatal a scenario as a successful double spend, and so getting to this point too should not be possible without a large quantity of security bond slashing.

Casper introduces a proof of stake model which solves this using a similar (arguably cop-out) approach to Bitcoin: rather than trying to go from zero to finality in one step, we make it a gradual process, where validators progressively put more and more "stake" behind blocks until those blocks reach finality. The fundamental approach relies on a mechanism known as consensus-by-bet: essentially, instead of simply voting on which answer to give when the consensus mechanism requires a choice to be made, validators bet on the answer, and the bets themselves determine the answer.

More precisely, at every block height, validators must achieve binary consensus (ie. all agree on either 0 or 1) on the question: has a block been submitted at this height, and if so, what is its hash? We can model this by imagining a ball which starts off at the top of a hill, with multiple ramps down the hill that the ball can take: one ramp representing "no block" and one representing "yes, there is a block" (in the special malicious case where a validator submits two blocks, there may be two ramps representing "yes, there is a block and it's X" and "yes, there is a block and it's Y"). The simple intuition behind the consensus algorithm is that each validator's strategy is:

  • Take the median position that everyone else is on (or, more precisely, mode by orientation and 33rd percentile by latitude)
  • Apply one step of "gravity" from this position. Randomly jiggle a bit to prevent "stuck" scenarios
  • Make this your new position, and publicly create a vote announcing this
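The three-step strategy above can be sketched as follows. This is a rough illustration, not the prototype's actual logic: the representation of a vote as a signed position in [-1, 1] (sign = ramp, magnitude = how far down it), and the gravity and jiggle parameters, are all assumptions made for the sketch:

```python
import random

def next_vote(other_votes, gravity=0.1, jiggle=0.02):
    """One illustrative round of the strategy: take the mode by orientation
    and the 33rd percentile by latitude, apply one step of gravity, and
    add a random jiggle to avoid stuck scenarios."""
    # Mode by orientation: which ramp are most other validators on?
    orientation = 1 if 2 * sum(v > 0 for v in other_votes) >= len(other_votes) else -1
    same_side = sorted(abs(v) for v in other_votes if (v > 0) == (orientation > 0))
    # 33rd percentile by latitude: only go as far down as a third of that side has.
    latitude = same_side[len(same_side) // 3] if same_side else 0.0
    # One step of "gravity" plus a small random jiggle.
    latitude = min(1.0, latitude + gravity + random.uniform(-jiggle, jiggle))
    return orientation * max(0.0, latitude)
```

The validator would then sign and broadcast the returned position as its new vote for that height.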

In the case that no block has been submitted, the "yes there is a block" ramp is closed off, and so there is only one ramp to go down. Once the ball has fully gone down one particular ramp from the point of view of 2/3 of validators, the outcome is considered "finalized". This process takes place in parallel for every block height, and is continuous; validators can update their votes on every height every time a new block comes in.
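The 2/3 finality condition can be expressed as a simple check; `is_finalized` and the `bottom` threshold for "fully down the ramp" are hypothetical names chosen for this sketch:

```python
def is_finalized(probs, threshold=2/3, bottom=0.9999):
    """probs: the latest probability each validator assigns to one outcome
    at a given height. The outcome counts as finalized once at least a
    `threshold` fraction of validators are fully down that ramp, ie. their
    probability has reached `bottom` (the bottom of the hill)."""
    at_bottom = sum(p >= bottom for p in probs)
    return at_bottom >= threshold * len(probs)
```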

This algorithm can be analyzed from two perspectives: BFT analysis and economic analysis. From a BFT analysis perspective, the 33rd percentile rule makes the algorithm equivalent to a highly multi-round version of DLS-like pre-commit/commit/finalize algorithms. From an economic analysis perspective, the algorithm is much more interesting, and in fact borrows more from prediction market and scoring rule theory than it does from consensus theory. Every vote is a bet, and the bets are incentivized via a scoring rule. The "altitude" on the ramps can be thought of as a probability estimate, where the top of the hill is 50% and the bottom of the hill is 99.99%; if you bet 97% on a particular outcome then that means that you are willing to accept a penalty of 97 if the result does NOT converge toward that outcome in exchange for a reward of 3 if the result does converge in that direction. The closer your bet gets to 100%, the more you gain if you are right, but the more you lose if you are wrong, until at 99.99% if you are wrong you lose your entire deposit. The idea is that validators get progressively more and more confident about a particular outcome as they see other validators getting more confident, and thus the system eventually (and in fact, logarithmically quickly) does converge on one outcome.

The exact formula for this contains two functions, s(p) (which determines the payout given a bet p if you are right) and f(p) (which determines the payout given a bet p if you are wrong). To simplify things, we can also express the probability in terms of odds (which we'll denote q); for example, odds of 5:1 imply that something is five times more likely to happen than not happen, and in general we define q = p/(1-p). In order for the formula to be incentive-compatible, we need (in probability form) s'(p) * p = -f'(p) * (1-p), or (in odds form) s'(q) * q = -f'(q). I propose the following functions:

s(q) = q
f(q) = -q^2 / 2

As q gets high, you may notice that the penalties grow very quickly, while the rewards also increase, albeit less drastically. For example, if your belief is that the odds of a given state (either 0 or 1) succeeding are 500:1, then if you bet q = 400, your expected return is 500/501 * s(400) + 1/501 * f(400) = 500/501 * 400 - 1/501 * 80000 = 239.52; if you bet q = 500, your expected return is 249.50; and if you bet q = 600, your expected return is back down to 239.52. If you bet q = 1000, your expected return drops to zero, and if you bet even more confidently, your expected return becomes negative. In general, we expect risk aversion to lead to a bias where users converge toward an outcome more slowly than they "should", but this effect should only be moderately large; the "take the median of everyone else and apply one step of gravity" model is a good first approximation, and particularly ensures that you are not taking bets which are more than ~2.7x as risky/confident as bets that have already been taken by other validators.
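The proposed scoring rule and the worked numbers above can be reproduced directly; `expected_return` is a helper name introduced here for illustration:

```python
def s(q):
    """Payout (in odds form) if the outcome you bet on wins."""
    return q

def f(q):
    """Payout (negative) if the outcome you bet on loses."""
    return -q**2 / 2

def expected_return(q_bet, q_true):
    """Expected payoff of betting odds q_bet when your true belief is q_true.
    Converts odds back to a probability via p = q / (1 + q)."""
    p = q_true / (1 + q_true)
    return p * s(q_bet) + (1 - p) * f(q_bet)

# With true odds of 500:1, the expectation is maximized by betting q = 500,
# which is exactly the incentive-compatibility condition s'(q) * q = -f'(q).
```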

"Stuck" scenarios are impossible, as no one will vote confidently for a particular block unless a majority has already gone most of the way along the same direction; the worst outcome that may happen is one where the ball "remains at the top of the hill" for an extended period of time due to a combination of bad strategy selection and perhaps some malicious betting that tries to counteract any movements along any of the "ramps" going down. A possible counteraction is creating a default strategy that pushes harder and harder toward the "no block" outcome as time goes on.

The sudden dropout ice age risk is dealt with via a compromise: if 67% of validators are present, then the system converges; however, if fewer than 67% of validators are present then the system continues to produce blocks, but simply without agreeing on finality for any of them. Hence, in such a situation the system will simply have roughly the same level of security as systems like NXT.

Ongoing research questions:

  • Should we detect such ice ages, and if so, respond to them in a particular way (eg. reducing the 67% threshold over time)? Should we eventually switch to requiring only 67% of recently online nodes to agree on a block? Perhaps a proof of work backstop?
  • What is the most efficient way to express a signature? This algorithm carries a high data overhead, and so any reduction in the overhead would be highly appreciated.