More edits
commit fa0dea7b5c (parent 6a924554f4)
@@ -64,15 +64,15 @@ We give an introduction to the consensus algorithm details of Casper: the Friend
\section{Introduction}
\label{sect:intro}

In the past few years there has been considerable research into ``proof of stake''-based blockchain consensus algorithms. In a proof of stake system, a blockchain grows and agrees on new blocks through a process in which anyone who holds coins inside the system can participate, and the amount of influence that any given coin holder has is proportional to the number of coins (or ``stake'') that they hold. This is an alternative to proof of work ``mining'', allowing blockchains to operate without the high hardware and electricity costs that proof of work blockchains require.

There are two major schools of thought in proof of stake algorithm design. The first, \textit{chain-based proof of stake}, tries to closely mimic the mechanics of proof of work, featuring a chain of blocks and an algorithm that ``simulates'' mining by pseudorandomly assigning the right to create new blocks to stakeholders. Examples include Peercoin\cite{king2012ppcoin}, Blackcoin\cite{vasin2014blackcoin} and Iddo Bentov's work\cite{bentov2016pos}.

The other school, \textit{BFT-based proof of stake}, is based on a thirty-year-old body of research into \textit{Byzantine fault tolerant consensus algorithms} such as PBFT \cite{castro1999practical}. BFT algorithms tend to have strong and rigorously proven mathematical properties; for example, one can usually prove mathematically that as long as more than $\frac{2}{3}$ of participants in the protocol follow the protocol correctly, the algorithm cannot finalize two conflicting block hashes at the same time (``safety''), and this result holds regardless of network latency. The idea of repurposing BFT algorithms for proof of stake was first introduced by Tendermint\cite{kwon2014tendermint}.

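To see where the $\frac{2}{3}$ threshold comes from, note the standard quorum-intersection count (an informal illustration, not the proof used by any particular algorithm): if two conflicting block hashes were each finalized by a quorum of at least $\frac{2}{3}$ of the participant set $V$, the two quorums $A, B \subseteq V$ would satisfy
\[
|A \cap B| \;=\; |A| + |B| - |A \cup B| \;\ge\; \frac{2}{3}|V| + \frac{2}{3}|V| - |V| \;=\; \frac{1}{3}|V|,
\]
so at least $\frac{1}{3}$ of participants would have had to sign off on both, contradicting the assumption that fewer than $\frac{1}{3}$ deviate from the protocol.
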
\subsection{Our Work}

We follow the BFT tradition, though with some modifications. Casper the Friendly Finality Gadget is an \textit{overlay} atop a \textit{proposal mechanism}---a mechanism which proposes \textit{checkpoints} (this is similar to the common BFT abstraction of ``leader election'', except that it is more complex so as to accommodate Casper's chain-based nature). Casper is responsible for \textit{finalizing} these checkpoints. Casper provides safety, but does not guarantee liveness by itself---Casper depends on the proposal mechanism for liveness. That is, even if the proposal mechanism is wholly controlled by attackers, Casper prevents the attackers from finalizing two conflicting checkpoints; however, the attackers can prevent Casper from finalizing any future checkpoints.

The proposal mechanism will initially be the existing Ethereum proof of work chain, making the first version of Casper a \textit{hybrid PoW/PoS algorithm} that relies on proof of work for liveness but not for safety; in future versions, the proposal mechanism can be substituted with something else.

@@ -82,16 +82,15 @@ Our algorithm introduces several new properties that BFT algorithms do not neces

\item We add \textit{accountability}. If a validator violates the rules, we can detect the violation and know who is responsible: ``$\ge \frac{1}{3}$ violated the rules, \textit{and we know who they are}''. Accountability allows us to penalize malfeasant validators, solving the \textit{nothing at stake} problem\cite{} that often plagues chain-based PoS. The penalty is the malfeasant validator's entire deposit; a schematic sketch of this detect-and-slash step follows the list. This maximum penalty is a bulwark against protocol violations because it makes them immensely expensive: the economic weight behind the protocol's guarantees is far greater than the rewards the system pays out during normal operation. This provides a \textit{much stronger} security guarantee than is possible with proof of work.

\item We introduce a provably safe way for the validator set to change over time (Section \ref{sect:join_and_leave}).
\item We introduce a way to recover from attacks where more than $\frac{1}{3}$ of validators drop offline, at the cost of a very weak \textit{tradeoff synchronicity assumption} (Section \ref{sect:leak}).
\item The design of the algorithm as an overlay makes it easier to implement as an upgrade to an existing proof of work chain.
\end{itemize}

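As a schematic illustration of the accountability property in the first item above (the \texttt{SignedVote} format, the \texttt{violates\_rules} predicate and the deposit table are hypothetical stand-ins, not the protocol's actual rules), detection, attribution and the maximal penalty fit in a few lines:

\begin{verbatim}
from dataclasses import dataclass

# Hypothetical message format: a vote signed by a validator.  The real
# protocol's messages and rules are defined elsewhere; this only sketches
# "detect the violation, identify the violator, forfeit the deposit".
@dataclass(frozen=True)
class SignedVote:
    validator: str   # identity recovered from the signature
    epoch: int
    target: str      # checkpoint hash being voted for

def violates_rules(a: SignedVote, b: SignedVote) -> bool:
    """Placeholder predicate: two distinct votes by the same validator
    for conflicting targets in the same epoch."""
    return (a.validator == b.validator
            and a.epoch == b.epoch
            and a.target != b.target)

def slash(evidence: tuple[SignedVote, SignedVote],
          deposits: dict[str, int]) -> None:
    """The evidence itself names the offender, and the penalty is the
    offender's entire deposit."""
    a, b = evidence
    if violates_rules(a, b):
        deposits[a.validator] = 0  # forfeit the whole deposit

deposits = {"v1": 1500}
slash((SignedVote("v1", 7, "0xaaa"), SignedVote("v1", 7, "0xbbb")), deposits)
assert deposits["v1"] == 0
\end{verbatim}

The point of the sketch is that the evidence alone identifies the offender, so the penalty can be applied without any additional trust assumptions.
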
We will describe the protocol in stages, starting with a simple version (Section \ref{sect:protocol}) and then progressively adding features such as validator set changes (Section \ref{sect:join_and_leave}) and mass liveness fault recovery.

\section{The Casper Protocol}
\label{sect:protocol}

In the simple version, we assume there is a set of validators and a \textit{proposal mechanism}, which is any system that proposes blocks (such as a proof of work chain). Under normal operation, almost all of the blocks proposed by the proposal mechanism would form a chain, but if the proposal mechanism malfunctions (in proof of work, this may arise due to high latency or 51\% attacks), there may be multiple divergent chains being extended at the same time.

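For concreteness, the following sketch (the \texttt{Block} record is a hypothetical illustration, not part of the protocol) shows the situation just described: proposed blocks form a tree, and under faulty operation more than one chain tip can be under active extension at the same time.

\begin{verbatim}
from dataclasses import dataclass
from typing import Optional

# Hypothetical block record: the proposal mechanism emits blocks that each
# point to a parent, so the proposed blocks collectively form a tree.
@dataclass(frozen=True)
class Block:
    block_hash: str
    parent: Optional[str]   # None for the genesis block

def chain_tips(blocks: dict[str, Block]) -> set[str]:
    """A block is a tip if no other block builds on it.  More than one tip
    means divergent chains are being extended simultaneously."""
    parents = {b.parent for b in blocks.values() if b.parent is not None}
    return set(blocks) - parents

blocks = {
    "G":  Block("G", None),
    "A1": Block("A1", "G"),   # one chain:         G <- A1 <- A2
    "A2": Block("A2", "A1"),
    "B1": Block("B1", "G"),   # a divergent chain: G <- B1
}
assert chain_tips(blocks) == {"A2", "B1"}   # two competing heads
\end{verbatim}
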
@@ -236,9 +235,9 @@ Suppose that all existing validators have sent some sequence of prepare and comm

\section{Tweaking the Proposal Mechanism}
\label{sect:forkchoice}

Although Casper is chiefly an overlay on top of a proposal mechanism, in order to translate the \textit{plausible liveness} proven in the previous section into \textit{actual liveness in practice}, the proposal mechanism needs to be Casper-aware. If the proposal mechanism is not Casper-aware, as is the case with a proof of work chain that follows the typical fork-choice rule of ``always build atop the longest chain'', there are pathological scenarios where Casper gets ``stuck'' and no block built atop the longest chain can be finalized (or even justified) without violating a Commandment; one such pathological scenario is the fork in \figref{fig:forkchoice}.

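As a sketch of what ``Casper-aware'' can mean in practice (the fields and helper below are hypothetical, and this illustrates the idea rather than the precise rule stated below), the fork choice can prefer the chain containing the justified checkpoint of greatest height, using raw chain length only as a tie-breaker:

\begin{verbatim}
from dataclasses import dataclass

# Hypothetical summary of a candidate chain: the height of the highest
# justified checkpoint it contains, plus its length in blocks.
@dataclass(frozen=True)
class Chain:
    name: str
    highest_justified_checkpoint_height: int
    length: int

def casper_aware_head(chains: list[Chain]) -> Chain:
    """Prefer the chain containing the justified checkpoint of greatest
    height; fall back to length only to break ties.  A plain "longest
    chain" rule would ignore justification entirely."""
    return max(chains, key=lambda c: (c.highest_justified_checkpoint_height,
                                      c.length))

# Illustrative numbers only: the longer chain can no longer have its
# checkpoints finalized, while a shorter chain carries a higher justified
# checkpoint and so should be treated as the head.
longer  = Chain("longer fork",  highest_justified_checkpoint_height=1, length=120)
shorter = Chain("shorter fork", highest_justified_checkpoint_height=2, length=90)
assert casper_aware_head([longer, shorter]) is shorter
\end{verbatim}
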

In this case, $A_1$ or any descendant thereof cannot be finalized without slashing at least $\frac{1}{6}$ of validators. Finalizing $A_1$ itself would require $\frac{1}{3}$ of validators to violate Commandment \textbf{I} due to the conflict with $B_1$, and finalizing any descendant would require $\frac{2}{3}$ prepares using $S$ as \hashsource, creating an intersection of at least $\frac{1}{6}$ of validators who also committed $B_1$ and would thus violate Commandment \textbf{II}. However, miners on a proof of work chain would interpret $A_1$ as the head and forever keep mining descendants of it, ignoring the chain based on $B_1$, which actually could be finalized.

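The $\frac{1}{6}$ bound is itself a quorum-intersection count. Writing $c$ for the fraction of validators that have committed $B_1$ in this scenario, any set of $\frac{2}{3}$ of validators preparing with $S$ as \hashsource must intersect the committers of $B_1$ in a fraction of at least
\[
\frac{2}{3} + c - 1 \;=\; c - \frac{1}{3}
\]
of the validator set, which equals $\frac{1}{6}$ when $c = \frac{1}{2}$; the bound quoted above corresponds to this value of $c$.
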

\begin{figure}[h!tb]
\centering
@@ -269,11 +268,13 @@ This rule ensures that if there is a checkpoint such that no conflicting checkpo

\section{Allowing Dynamic Validator Sets}
\label{sect:join_and_leave}

The set of validators needs to be able to change. New validators must be able to join, and existing validators must be able to leave. To accomplish this, we define a variable in the state called the \textit{dynasty counter}. When a would-be validator's deposit message is included in dynasty $d$, the validator will be \textit{inducted} at the start of dynasty $d+2$. We call $d+2$ this validator's \textit{start dynasty}.

The dynasty counter increments when the chain detects that the checkpoint of the current epoch that is part of its own history has been \textit{perfectly finalized} (that is, the checkpoint of epoch \epoch must be finalized during epoch \epoch, and the chain must learn about this before epoch \epoch ends). In simpler terms, when a user sends a ``deposit'' transaction, they must wait for the transaction to be perfectly finalized, and then wait again for the next epoch to be finalized; after this, they become part of the validator set.

For a validator to leave, they must send a ``withdraw'' message. If their withdraw message is included during dynasty $n$, the validator similarly leaves the validator set during dynasty $n+2$; we call $n+2$ their \textit{end dynasty}. When a validator withdraws, their deposit is locked for a long period of time, the \textit{withdrawal delay} (for now, think ``four months''), before they can take their money out. If, during the withdrawal delay, the validator violates any Commandment, the deposit is forfeited.

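The dynasty bookkeeping described above can be collected into a short sketch (field and method names are illustrative; deposits, signatures and the finality test itself are elided):

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Validator:
    start_dynasty: int
    end_dynasty: int = 2**63   # effectively "never", until a withdraw message

@dataclass
class CasperState:
    dynasty: int = 0
    validators: dict[str, Validator] = field(default_factory=dict)

    def process_deposit(self, who: str) -> None:
        # A deposit included in dynasty d yields a start dynasty of d + 2.
        self.validators[who] = Validator(start_dynasty=self.dynasty + 2)

    def process_withdraw(self, who: str) -> None:
        # A withdraw included in dynasty n yields an end dynasty of n + 2;
        # the deposit itself stays locked for the withdrawal delay.
        self.validators[who].end_dynasty = self.dynasty + 2

    def on_epoch_end(self, own_checkpoint_perfectly_finalized: bool) -> None:
        # The dynasty counter increments only when the chain's own checkpoint
        # for the current epoch was finalized within that same epoch.
        if own_checkpoint_perfectly_finalized:
            self.dynasty += 1

    def in_current_dynasty(self, who: str) -> bool:
        v = self.validators[who]
        return v.start_dynasty <= self.dynasty < v.end_dynasty
\end{verbatim}
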

For a checkpoint to be justified, it must be prepared by a set of validators which contains (i) at least $\frac{2}{3}$ of the current dynasty (that is, validators with $startDynasty \le curDynasty < endDynasty$), and (ii) at least $\frac{2}{3}$ of the previous dynasty (that is, validators with $startDynasty \le curDynasty - 1 < endDynasty$). Finalization works similarly. The current and previous dynasties will usually overlap greatly, but when they substantially diverge, this ``stitching'' mechanism ensures that a dynasty divergence cannot cause a finality reversion or other failure simply because different messages are signed by different validator sets; equivocation across validator sets is thereby avoided.

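A sketch of this stitching requirement (names are illustrative, and validators are treated as equally weighted purely for simplicity, whereas deposits would carry the weight in practice):

\begin{verbatim}
from fractions import Fraction

# A validator is a member of dynasty d iff start_dynasty <= d < end_dynasty.
def members(validators: dict[str, tuple[int, int]], dynasty: int) -> set[str]:
    return {v for v, (start, end) in validators.items() if start <= dynasty < end}

def justified(prepares: set[str],
              validators: dict[str, tuple[int, int]],
              cur_dynasty: int) -> bool:
    """Require 2/3 of the current dynasty AND 2/3 of the previous dynasty."""
    for d in (cur_dynasty, cur_dynasty - 1):
        dyn = members(validators, d)
        if not dyn or Fraction(len(prepares & dyn), len(dyn)) < Fraction(2, 3):
            return False
    return True

# a, b, c are long-standing validators; d, e, f were inducted in dynasty 10.
validators = {"a": (0, 11), "b": (0, 11), "c": (0, 11),
              "d": (10, 99), "e": (10, 99), "f": (10, 99)}
assert justified({"a", "b", "d", "e"}, validators, cur_dynasty=10)
assert not justified({"a", "d", "e", "f"}, validators, cur_dynasty=10)
\end{verbatim}

The second assertion shows why the stitch matters: a prepare set drawn mostly from newly inducted validators reaches $\frac{2}{3}$ of the current dynasty yet fails to represent $\frac{2}{3}$ of the previous one, so it does not justify the checkpoint.
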
\begin{figure}[h!tb]
\centering
@@ -284,7 +285,7 @@ For a checkpoint to be justified, it must be prepared by a set of validators whi

\subsection{Long Range Attacks}

Note that the withdrawal delay introduces a synchronicity assumption \textit{between validators and clients}. Because validators can withdraw their deposits after the withdrawal delay, there is an attack where a coalition of validators which held more than $\frac{2}{3}$ of deposits \textit{long ago in the past} withdraws their deposits, and then uses their historical deposits to finalize a new chain that conflicts with the original chain, without fear of getting slashed. Even though creating such a chain split violates the slashing conditions, the attackers lose nothing, because their deposits have already been withdrawn on both chains and there is nothing left to slash. This is called the \textit{long-range attack}. Clients that have been offline for longer than the withdrawal delay cannot, on their own, distinguish the original chain from such an attacker-built chain, which is the sense in which a synchronicity assumption between validators and clients is required.

\begin{figure}
\centering