Reworked Casper basics paper

This commit is contained in:
Vitalik Buterin 2017-08-27 06:54:43 -04:00
parent 75b8a77bcf
commit 96e8081b30
55 changed files with 344 additions and 593 deletions

View File

@ -1,334 +0,0 @@
%\title{Casper the Friendly Finality Gadget: Basic Structure}
\title{A Parsimonious Stake-based Finality Mechanism}
\author{
Vitalik Buterin \\
Ethereum Foundation}
\documentclass[12pt, final]{article}
\input{eth_header.tex}
%% Special symbols we'll probably iterate on
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% we will probably iterate on these symbols until we have a notation we like
\newcommand{\epoch}{\ensuremath{e}\xspace}
\newcommand{\hash}{\textnormal{h}\xspace}
% symbols for the epoch and hash source
\newcommand{\epochsource}{\ensuremath{\epoch_{\star}}\xspace}
\newcommand{\hashsource}{\ensuremath{\hash_{\star}}\xspace}
\newcommand{\signature}{\ensuremath{\mathcal{S}}\xspace}
\newcommand{\totaldeposit}{\textnormal{TD}\xspace}
\newcommand{\gamesymbol}{\reflectbox{G}}
\newcommand{\msgPREPARE}{\textbf{\textsc{prepare}}\xspace}
\newcommand{\msgCOMMIT}{\textbf{\textsc{commit}}\xspace}
% Symbols for the Last Justified Epoch and Hash
\newcommand{\epochLJ}{\ensuremath{\epoch_{\textnormal{LJ}}}\xspace}
\newcommand{\hashLJ}{\ensuremath{\hash_{\textnormal{LJ}}}\xspace}
% Symbols for the Last Finalized Epoch and Hash
\newcommand{\epochLF}{\ensuremath{\epoch_{\textnormal{LF}}}\xspace}
\newcommand{\hashLF}{\ensuremath{\hash_{\textnormal{LF}}}\xspace}
% Griefing Factor symbol
\newcommand{\GF}[1]{\mathds{GF}\left( #1 \right)\xspace}
% Genesis block symbol
\newcommand{\Genesisblock}{\ensuremath{\mathds{G}}\xspace}
\begin{document}
\maketitle
\TODO{ \today }
\begin{abstract}
We give an introduction to the consensus algorithm details of Casper: the Friendly Finality Gadget, as an overlay on an existing proof of work blockchain such as Ethereum. Byzantine fault tolerance analysis is included, but economic incentive analysis is out of scope. \todo{To be reworked at the very end}
\end{abstract}
\section{Introduction}
\label{sect:intro}
\todo{Probably talk about prior Proof of Stake / Finality based work here.}
\section{Principles}
\todo{This might be stuff that we can roll into the Introduction}
Casper the Friendly Finality Gadget is an overlay on top of some kind of ``proposal mechanism'' - a mechanism which ``proposes'' blocks which the Casper mechanism can then set in stone by ``finalizing'' them. Casper depends on the proposal mechanism for liveness, but not safety; that is, if the proposal mechanism is entirely controlled by attackers, then the attackers can prevent Casper from finalizing any future checkpoints, but cannot cause a safety failure in Casper---i.e., they cannot force Casper to finalize two conflicting blocks.
The base mechanism is heavily inspired by partially synchronous systems such as Tendermint \cite{kwon2014tendermint} and PBFT \cite{castro1999practical}, and thus has $\frac{1}{3}$ Byzantine fault tolerance; it is safe under asynchrony but depends on the proposal mechanism for liveness.
In Section \ref{sect:leak} we introduce a modification which increases Byzantine fault tolerance to $\frac{1}{2}$, with the proviso that attackers with size $\frac{1}{3} < \alpha < \frac{1}{2}$ can delay new blocks being finalized by some period of time $D$ (think $D \approx$ 3 weeks), at the cost of a ``tradeoff synchrony assumption'' where fault tolerance decreases as network latency goes up, decreasing to potentially zero when network latency reaches $D$.
In the Casper Phase 1 implementation for Ethereum, the ``proposal mechanism'' is the existing proof of work chain, modified to have a greatly reduced block reward because the chain no longer relies as heavily on proof of work for security. We describe how the Casper mechanism and fork choice rule can be ``overlaid'' onto the proof of work mechanism in order to add Casper's guarantees.
\section{The Protocol}
\label{sect:protocol}
In the Casper protocol, there is a set of validators, and in each epoch validators have the ability to send two kinds of messages:
\begin{table}[h!bt]
\centering
\subfloat[\msgPREPARE format]{ \begin{tabular}{l l} \toprule
\textbf{Notation} & \textbf{Description} \\
\midrule
\hash & the hash to justify \\
\epoch & the current epoch \\
$\hashsource$ & the most recent justified hash \\
$\epochsource$ & the epoch containing hash $\hashsource$ \\
\signature & signature from the validator's private key of the tuple $(\hash,\epoch,\hashsource,\epochsource)$. \\
\bottomrule
\end{tabular} \label{tbl:prepare} }
\subfloat[\msgCOMMIT format]{ \begin{tabular}{l l} \toprule
\textbf{Notation} & \textbf{Description} \\
\midrule
\hash & the hash to finalize \\
\epoch & the current epoch \\
\signature & signature from the validator's private key \\
\bottomrule
\end{tabular}
\label{tbl:commit} }
\caption{The schematic of the \msgPREPARE and \msgCOMMIT messages.}
\label{fig:messages}
\end{table}
Each validator has a \emph{deposit size}; when a validator joins, their deposit size is equal to the number of coins that they deposited, and from there on each validator's deposit size rises and falls with rewards and penalties. For the rest of this paper, when we say ``$\frac{2}{3}$ of validators'', we are referring to a \emph{deposit-weighted} fraction; that is, a set of validators whose combined deposit size is at least $\frac{2}{3}$ of the total deposit size of the entire set of validators.
Every hash $\hash$ has one of three possible states: \emph{fresh}, \emph{justified}, and \emph{finalized}. Every hash starts as \emph{fresh}. The hash at the beginning of the current epoch converts from fresh to \emph{justified} if, during the current epoch $\epoch$, Prepares from $\frac{2}{3}$ of validators are sent of the form
\begin{equation}
\langle \msgPREPARE, \epoch, \hash, \epochsource, \hashsource, \signature \rangle
\label{eq:msgPREPARE}
\end{equation}
for some specific $\epochsource$ and $\hashsource$. A hash $\hash$ can be justified only if its $\hashsource$ is already justified or finalized.
Additionally, a hash converts from justified to \emph{finalized}, if $\frac{2}{3}$ of validators Commit
\begin{equation}
\langle \msgCOMMIT, \epoch, \hash, \signature \rangle \; ,
\label{eq:msgCOMMIT}
\end{equation}
for the same \epoch and \hash as in \eqref{eq:msgPREPARE}. The $\hash$ is the block hash of the block at the start of the epoch. A hash $\hash$ being justified entails that all fresh (non-finalized) preceding blocks are also justified. A hash $\hash$ being finalized entails that all preceding blocks are also finalized, regardless of whether they were previously fresh or justified. An ``ideal execution'' of the protocol is one where, at the start of every epoch, every validator Prepares and Commits the first blockhash of each epoch, specifying the same $\epochsource$ and $\hashsource$.
\begin{figure}[h!tb]
\centering
\includegraphics[width=5.5in]{prepares_commits.png}
\caption{Illustrating the sequence of Prepares and Commits. \todo{revise?}}
\label{fig:prepares_and_commits}
\end{figure}
An \textit{epoch} is a period of 100 blocks; epoch $n$ begins at block $n * 100$ and ends at block $n * 100 + 99$.
The final block of each epoch is a \emph{checkpoint block} (namely, the checkpoint for the following epoch).
A \textit{checkpoint for epoch $n$} is a block with number $n * 100 - 1$; in a smoothly running blockchain there will usually be only one checkpoint per epoch, but due to natural network latency or deliberate attacks there may be multiple competing checkpoints during some epochs. The \textit{predecessor} of a checkpoint is the checkpoint in the prior epoch, and the \textit{lineage} of a checkpoint is the set of all preceding checkpoints, recursively, to the Genesis block \Genesisblock.
\TODO{Probably define some mathematical notation here for Block $B$, Checkpoint $C$, Predecessor, Successor, Lineage, etc.}
We define the \textit{lineage hash} of a checkpoint as follows:
\begin{itemize}
\item The lineage hash of the implied ``genesis checkpoint'' of epoch 0 is thirty-two zero bytes.
\item The lineage hash of any other checkpoint is the keccak256 hash \todo{cite} of the lineage hash of its predecessor checkpoint concatenated with the hash of the checkpoint.
\begin{equation}
LH(n) \equiv \texttt{keccak256}\left[ \texttt{CONCAT}\left( LH(n-1), \texttt{keccak256}[C_n] \right)\right]
\end{equation}
\end{itemize}
Lineage hashes thus form a direct hash chain, and otherwise have a one-to-one correspondence with checkpoint hashes.
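For concreteness, the recursion above can be written in a few lines of code. The following is a minimal illustrative sketch only (not part of the specification); it assumes the keccak implementation from the pycryptodome package, and the function and variable names are hypothetical:
\begin{verbatim}
# Illustrative sketch of the lineage hash chain; assumes pycryptodome's keccak.
from Crypto.Hash import keccak

def keccak256(data: bytes) -> bytes:
    return keccak.new(data=data, digest_bits=256).digest()

# lineage hash of the implied "genesis checkpoint" of epoch 0
GENESIS_LINEAGE_HASH = b'\x00' * 32

def next_lineage_hash(prev_lineage_hash: bytes, checkpoint_hash: bytes) -> bytes:
    # LH(n) = keccak256( LH(n-1) ++ hash of the checkpoint for epoch n ),
    # where checkpoint_hash is the keccak256 hash of the checkpoint block itself
    return keccak256(prev_lineage_hash + checkpoint_hash)
\end{verbatim}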
During epoch $n$, validators are expected to send prepare and commit messages specifying $n$ as their \epoch, and the lineage hash of some checkpoint for epoch $n$ as their \hash. Prepare messages may specify as \hashsource the lineage hash of any checkpoint from a previous epoch that is an ancestor of \hash (preferably its immediate predecessor) and that is \textit{justified} (see below); the \epochsource is expected to be the epoch of that checkpoint.
We initially consider the set of validators and their deposit sizes to be static, but in Section \ref{sect:join_and_leave} we show how validators can join and leave the validator set.
We also add the following new requirements:
\begin{itemize}
\item For a checkpoint to be finalized, it must be justified.
\item For a checkpoint to be justified, the \hashsource used to justify it must itself be justified.
\item Prepare and commit messages are only accepted as part of blocks; that is, for a client to consider a checkpoint as finalized, the client must see a chain \todo{notation?} terminating at that checkpoint in which $\frac{2}{3}$ commits for that hash have been included and processed.
\end{itemize}
This immensely simplifies our finality mechanism because we can now have a fork choice rule where the ``score'' of a block only depends on the block and its children, putting it into a similar category as more traditional PoW-based fork choice rules such as the longest chain rule and GHOST\cite{sompolinsky2013accelerating}. However, this fork choice rule is also \textit{finality-bearing}: there exists a ``finality'' mechanism that has the property that (i) the fork choice rule always prefers finalized blocks over non-finalized competing blocks, and (ii) it is impossible for two incompatible checkpoints to be finalized unless at least $\frac{1}{3}$ of the validators violated a Casper Commandment (a.k.a. slashing condition):
\begin{enumerate}
\item[\textbf{I.}] \textsc{A validator shalt not publish two or more nonidentical Prepares for the same epoch.}
Equivalently, each validator may Prepare at most one (\hash, \epochsource, \hashsource) triplet per epoch.
\item[\textbf{II.}] \textsc{A validator shalt not publish a Commit whose epoch falls between the source epoch and the target epoch of one of its Prepares.}
Equivalently, a validator will not publish
\begin{equation*}
\langle \msgPREPARE, \epoch_p, \hash_p, \epochsource, \hashsource, \signature \rangle \hspace{0.5in} \textnormal{\textsc{and}} \hspace{0.5in} \langle \msgCOMMIT, \epoch_c, \hash_c, \signature \rangle \;,
\label{eq:msgPREPARE}
\end{equation*}
where the epochs satisfy $\epochsource < \epoch_c < \epoch_p$.
\end{enumerate}
\begin{lemma}[Enforceable Slashing Conditions]
\label{lemma:slashingconditions}
If a group of attackers violates a slashing condition, their deposits will be slashed as long as \todo{condition here.}
\begin{proof}
\todo{Proof goes here.}
\end{proof}
\end{lemma}
Earlier versions of Casper had four slashing conditions \cite{minslashing}, but we can reduce them to two because of the three new requirements above; those requirements ensure that blocks will not register commits or prepares that violate the other two conditions.
\section{Proofs of Safety and Plausible Liveness}
\label{sect:theorems}
We give a proof of two properties of Casper: \textit{accountable safety} and \textit{plausible liveness}. Accountable safety means that two conflicting checkpoints cannot be finalized unless $\geq \frac{1}{3}$ of validators violate a slashing condition (meaning at least $\frac{\totaldeposit}{3}$ is lost). Honest validators will never violate slashing conditions, so this implies the usual Byzantine fault tolerance safety property, but expressing this in terms of slashing conditions means that we are actually proving a stronger claim: if two conflicting checkpoints get finalized, then at least $\frac{1}{3}$ of validators were malicious, \textit{and we know whom to blame, and so we can maximally penalize them in order to make such faults expensive}.
Plausible liveness means that it is always possible for $\frac{2}{3}$ of honest validators to finalize a new checkpoint, regardless of what previous events took place.
In \figref{fig:conflicting_checkpoints}, we see two conflicting checkpoints, $A$ (epoch $\epoch_A$) and $B$ (epoch $\epoch_B$), both of which are finalized.
\begin{figure}[h!tb]
\centering
\includegraphics[width=5in]{conflicting_checkpoints.png}
\caption{\todo{fill me in.}}
\label{fig:conflicting_checkpoints}
\end{figure}
\begin{theorem}[Accountable Safety]
\label{theorem:safety}
Two conflicting checkpoints cannot be finalized unless $\geq \frac{1}{3}$ of validators violate a slashing condition (meaning at least $\frac{\totaldeposit}{3}$ is lost).
\begin{proof}
This implies $\frac{2}{3}$ commits and $\frac{2}{3}$ prepares in epochs $\epoch_A$ and $\epoch_B$. In the trivial case where $\epoch_A = \epoch_B$, this implies that some intersection of $\frac{1}{3}$ of validators must have violated \textbf{NO\_DBL\_PREPARE}. In other cases, there must exist two chains $\Genesisblock < \ldots < \epoch_A^2 < \epoch_A^1 < \epoch_A$ and $\Genesisblock < \ldots < \epoch_B^2 < \epoch_B^1 < \epoch_B$ of justified checkpoints, both terminating at the genesis. Suppose without loss of generality that $\epoch_A > \epoch_B$. Then, there must be some $\epoch_A^i$ such that either $\epoch_A^i = \epoch_B$ or $\epoch_A^i > \epoch_B > \epoch_A^{i+1}$. In the first case, since $A^i$ and $B$ both have $\frac{2}{3}$ prepares, at least $\frac{1}{3}$ of validators violated \textbf{NO\_DBL\_PREPARE}. Otherwise, $B$ has $\frac{2}{3}$ commits and there exist $\frac{2}{3}$ prepares with $\epoch > \epoch_B$ and $\epochsource < \epoch_B$, so at least $\frac{1}{3}$ of validators violated \textbf{PREPARE\_COMMIT\_CONSISTENCY}. This proves accountable safety.
\end{proof}
\end{theorem}
\begin{theorem}[Plausible Liveness]
\label{theorem:liveness}
It is always possible for $\frac{2}{3}$ of honest validators to finalize a new checkpoint, regardless of what previous events took place.
\begin{proof}
Suppose that all existing validators have sent some sequence of prepare and commit messages. Let $M$ with epoch $\epoch_M$ be the highest-epoch checkpoint that was justified. Honest validators have not committed on any block which is not justified. Hence, neither slashing condition stops them from making prepares on a child of $M$, using $\epoch_M$ as $\epochsource$, and then committing this child.
\end{proof}
\end{theorem}
\section{Fork Choice Rule}
\label{sect:forkchoice}
The mechanism described above ensures \textit{plausible liveness}; however, it by itself does not ensure \textit{actual liveness} - that is, while the mechanism cannot get stuck in the strict sense, it could still enter a scenario where the proposal mechanism (i.e. the proof of work chain) gets into a state where it never ends up creating a checkpoint that could get finalized.
In \figref{fig:forkchoice} we see one possible example. In this case, $HASH1$ or any descendant thereof cannot be finalized without slashing $\frac{1}{6}$ of validators. However, miners on a proof of work chain would interpret $HASH1$ as the head and forever keep mining descendants of it, ignoring the chain based on $HASH0^\prime$ which actually could get finalized.
\begin{figure}[h!tb]
\centering
\includegraphics[width=5.5in]{fork4.png}
\caption{\todo{fill me in.}}
\label{fig:forkchoice}
\end{figure}
In fact, when \textit{any} checkpoint gets $k > \frac{1}{3}$ commits, no conflicting checkpoint can get finalized without $k - \frac{1}{3}$ of validators getting slashed. This necessitates modifying the fork choice rule used by participants in the underlying proposal mechanism (as well as users and validators): instead of blindly following a longest-chain rule, there needs to be an overriding rule that (i) finalized checkpoints are favored, and (ii) when there are no further finalized checkpoints, checkpoints with more (justified) commits are favored.
One complete description of such a rule would be:
\begin{enumerate}
\item Start with HEAD equal to the genesis of the chain.
\item Select the descendant checkpoint of HEAD with the most commits (only justified checkpoints are admissible)
\item Repeat (2) until no descendant with commits exists.
\item Choose the longest proof of work chain from there.
\end{enumerate}
The commit-following part of this rule can be viewed in some ways as mirroring the ``greedy heaviest observed subtree'' (GHOST) rule that has been proposed for proof of work chains\cite{sompolinsky2013accelerating}. The symmetry is as follows. In GHOST, a node starts with the head at the genesis, then begins to move forward down the chain, and if it encounters a block with multiple children then it chooses the child that has the larger quantity of work built on top of it (including the child block itself and its descendants).
In this algorithm, we follow a similar approach, except we repeatedly seek the child that comes the closest to achieving finality. Commits on a descendant are implicitly commits on all of its lineage, and so if a given descendant of a given block has more commits than any other descendant, then we know that all children along the chain from the head to this descendant are closer to finality than any of their siblings; hence, looking for the \textit{descendant} with the most commits and not just the \textit{child} replicates the GHOST principle most faithfully. Finalizing a checkpoint requires $\frac{2}{3}$ commits within a \textit{single} epoch, and so we do not try to sum up commits across epochs and instead simply take the maximum.
This rule ensures that if there is a checkpoint such that no conflicting checkpoint can be finalized without at least some validators violating slashing conditions, then this is the checkpoint that will be viewed as the ``head'' and thus that validators will try to commit on.
\section{Allowing Dynamic Validator Sets}
\label{sect:join_and_leave}
The set of validators needs to be able to change. New validators need to be able to join, and existing validators need to be able to leave. To accomplish this, we define a variable kept track of in the state called the \textit{dynasty} counter. When a user sends a ``deposit'' transaction to become a validator, if this transaction is included in dynasty $n$, then the validator will be \textit{inducted} in dynasty $n+2$. The dynasty counter increments when the chain detects that the checkpoint of the current epoch that is part of its own history has been finalized (that is, the checkpoint of epoch \epoch must be finalized during epoch \epoch, and the chain must learn about this before epoch \epoch ends). In simpler terms, when a user sends a ``deposit'' transaction, they need to wait for the transaction to be finalized, and then they need to wait again for that epoch to be finalized; after this, they become part of the validator set. We call such a validator's \textit{start dynasty} $n+2$. \todo{The conditions here feel different. Do we mean we want to have two \emph{perfect finalizations} (finalizing the epoch immediately prior) or simply two finalizations?}
For a validator to leave, they must send a ``withdraw'' message. If their withdraw message gets included during dynasty $n$, the validator similarly leaves the validator set during dynasty $n+2$; we call $n+2$ their \textit{end dynasty}. When a validator withdraws, their deposit is locked for four months \todo{how determined?} before they can take their money out; if they are caught violating a slashing condition within that time then their deposit is forfeited.
For a checkpoint to be justified, it must be prepared by a set of validators which contains (i) at least $\frac{2}{3}$ of the current dynasty (that is, validators with $startDynasty \le curDynasty < endDynasty$), and (ii) at least $\frac{2}{3}$ of the previous dynasty (that is, validators with $startDynasty \le curDynasty - 1 < endDynasty$). Finalization with commits works similarly. The current and previous dynasties will usually greatly overlap, but in cases where they substantially diverge, this ``stitching'' mechanism ensures that a dynasty divergence cannot lead to a finality reversion or other failure in which conflicting messages are signed by two different validator sets.
\begin{figure}[h!tb]
\centering
\includegraphics[width=4.5in]{validator_set_misalignment.png}
\caption{\todo{This is meant to demonstrate the need for the ``stitching'' of dynamic validator sets, correct?}}
\label{fig:dynamic}
\end{figure}
\subsection{Recovering from Catastrophic Crashes}
\label{sect:leak}
Suppose that $>\frac{1}{3}$ of validators crash-fail at the same time---i.e., they are no longer connected to the network due to a network partition or computer failure, or they are malicious actors. Then, no later checkpoint will be able to get finalized.
We can recover from this by instituting a ``leak'' (eq.~\ref{eq:leakformula}) which increases the longer validators do not prepare or commit any checkpoints, until eventually their deposit sizes decrease enough that the validators that \textit{are} preparing and committing are a $\frac{2}{3}$ supermajority.
Given two constants $B$, the number of blocks until the leak completely dissipates an inactive validator's deposit, and $s$, the ``steepness'' of the curve, we have the following formula for the validator leak as a function of the number of blocks $x$ for which the validator has been inactive:
\begin{equation}
\operatorname{leak}(x) = \frac{s + 1}{B^{s + 1}} x^s \; , \textnormal{\todo{notation likely to be changed.}}
\label{eq:leakformula}
\end{equation}
for which the derivation is given in Appendix \ref{app:leak}. One can set parameters $B$ and $s$ depending on the desired penalty curves.
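As a quick sanity check on the normalization (complementing the appendix derivation), integrating the leak over the $B$ blocks until full dissipation recovers exactly the whole deposit:
\begin{equation*}
\int_0^B \operatorname{leak}(x) \, dx = \frac{s + 1}{B^{s + 1}} \int_0^B x^s \, dx = \frac{s + 1}{B^{s + 1}} \cdot \frac{B^{s + 1}}{s + 1} = 1 \; .
\end{equation*}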
Note that this does introduce the possibility of two conflicting checkpoints being finalized, with validators only losing money on one of the two checkpoints as seen in \figref{fig:commitsync}.
\begin{figure}[h!tb]
\centering
\includegraphics[width=4in]{CommitsSync.png}
\caption{\todo{caption here.}}
\label{fig:commitsync}
\end{figure}
If the goal is simply to achieve maximally close to 50\% fault tolerance, then clients should simply favor the finalized checkpoint that they received earlier. However, if clients are also interested in defeating 51\% censorship attacks, then they may want to at least sometimes choose the minority chain. All forms of ``51\% attacks'' can thus be resolved fairly cleanly via ``user-activated soft forks'' that reject what would normally be the dominant chain. In particular, note that finalizing even one block on the dominant chain precludes the attacking validators from preparing on the minority chain because of Commandment II, at least until their balances decrease to the point where the minority can commit, so such a fork would also serve the function of costing the majority attacker a very large portion of their deposits.
\section{Conclusions}
This paper has introduced the basic workings of Casper the Friendly Finality Gadget's prepare and commit mechanism and fork choice rule, in the context of Byzantine fault tolerance analysis. Separate papers will serve the role of explaining and analyzing incentives inside of Casper, the different ways that they can be parametrized, and the consequences of these parametrizations.
\textbf{Future Work.} \todo{fill me in}
\textbf{Acknowledgements.} We thank Virgil Griffith for review and Sandro Lera for mathematics.
%\section{References}
\bibliographystyle{abbrv}
\bibliography{ethereum}
\input{appendix.tex}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}

View File

@ -1,236 +0,0 @@
%%%% NIPS Macros (LaTex)
%%%% Style File
%%%% Dec 12, 1990 Rev Aug 14, 1991; Sept, 1995; April, 1997; April, 1999
% This file can be used with Latex2e whether running in main mode, or
% 2.09 compatibility mode.
%
% If using main mode, you need to include the commands
% \documentclass{article}
% \usepackage{nips10submit_e,times}
% as the first lines in your document. Or, if you do not have Times
% Roman font available, you can just use
% \documentclass{article}
% \usepackage{nips10submit_e}
% instead.
%
% If using 2.09 compatibility mode, you need to include the command
% \documentstyle[nips10submit_09,times]{article}
% as the first line in your document. Or, if you do not have Times
% Roman font available, you can include the command
% \documentstyle[nips10submit_09]{article}
% instead.
% Change the overall width of the page. If these parameters are
% changed, they will require corresponding changes in the
% maketitle section.
%
\usepackage{eso-pic} % used by \AddToShipoutPicture
\renewcommand{\topfraction}{0.95} % let figure take up nearly whole page
\renewcommand{\textfraction}{0.05} % let figure take up nearly whole page
% Define nipsfinal, set to true if nipsfinalcopy is defined
\newif\ifnipsfinal
\nipsfinalfalse
\def\nipsfinalcopy{\nipsfinaltrue}
\font\nipstenhv = phvb at 8pt % *** IF THIS FAILS, SEE nips10submit_e.sty ***
% Specify the dimensions of each page
\setlength{\paperheight}{11in}
\setlength{\paperwidth}{8.5in}
\oddsidemargin .5in % Note \oddsidemargin = \evensidemargin
\evensidemargin .5in
\marginparwidth 0.07 true in
%\marginparwidth 0.75 true in
%\topmargin 0 true pt % Nominal distance from top of page to top of
%\topmargin 0.125in
\topmargin -0.625in
\addtolength{\headsep}{0.25in}
\textheight 9.0 true in % Height of text (including footnotes & figures)
\textwidth 5.5 true in % Width of text line.
\widowpenalty=10000
\clubpenalty=10000
% \thispagestyle{empty} \pagestyle{empty}
\flushbottom \sloppy
% We're never going to need a table of contents, so just flush it to
% save space --- suggested by drstrip@sandia-2
\def\addcontentsline#1#2#3{}
% Title stuff, taken from deproc.
\def\maketitle{\par
\begingroup
\def\thefootnote{\fnsymbol{footnote}}
\def\@makefnmark{\hbox to 0pt{$^{\@thefnmark}$\hss}} % for perfect author
% name centering
% The footnote-mark was overlapping the footnote-text,
% added the following to fix this problem (MK)
\long\def\@makefntext##1{\parindent 1em\noindent
\hbox to1.8em{\hss $\m@th ^{\@thefnmark}$}##1}
\@maketitle \@thanks
\endgroup
\setcounter{footnote}{0}
\let\maketitle\relax \let\@maketitle\relax
\gdef\@thanks{}\gdef\@author{}\gdef\@title{}\let\thanks\relax}
% The toptitlebar has been raised to top-justify the first page
% Title (includes both anonimized and non-anonimized versions)
\def\@maketitle{\vbox{\hsize\textwidth
\linewidth\hsize \vskip 0.1in \toptitlebar \centering
{\LARGE\bf \@title\par} \bottomtitlebar % \vskip 0.1in % minus
\ifnipsfinal
\def\And{\end{tabular}\hfil\linebreak[0]\hfil
\begin{tabular}[t]{c}\bf\rule{\z@}{24pt}\ignorespaces}%
\def\AND{\end{tabular}\hfil\linebreak[4]\hfil
\begin{tabular}[t]{c}\bf\rule{\z@}{24pt}\ignorespaces}%
\begin{tabular}[t]{c}\bf\rule{\z@}{24pt}\@author\end{tabular}%
\else
\begin{tabular}[t]{c}\bf\rule{\z@}{24pt}
Anonymous Author(s) \\
Affiliation \\
Address \\
\texttt{email} \\
\end{tabular}%
\fi
\vskip 0.3in minus 0.1in}}
\renewenvironment{abstract}{\vskip.075in\centerline{\large\bf
Abstract}\vspace{0.5ex}\begin{quote}}{\par\end{quote}\vskip 1ex}
% sections with less space
\def\section{\@startsection {section}{1}{\z@}{-2.0ex plus
-0.5ex minus -.2ex}{1.5ex plus 0.3ex
minus0.2ex}{\large\bf\raggedright}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-1.8ex plus
-0.5ex minus -.2ex}{0.8ex plus .2ex}{\normalsize\bf\raggedright}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-1.5ex
plus -0.5ex minus -.2ex}{0.5ex plus
.2ex}{\normalsize\bf\raggedright}}
\def\paragraph{\@startsection{paragraph}{4}{\z@}{1.5ex plus
0.5ex minus .2ex}{-1em}{\normalsize\bf}}
\def\subparagraph{\@startsection{subparagraph}{5}{\z@}{1.5ex plus
0.5ex minus .2ex}{-1em}{\normalsize\bf}}
\def\subsubsubsection{\vskip
5pt{\noindent\normalsize\rm\raggedright}}
% Footnotes
\footnotesep 6.65pt %
\skip\footins 9pt plus 4pt minus 2pt
\def\footnoterule{\kern-3pt \hrule width 12pc \kern 2.6pt }
\setcounter{footnote}{0}
% Lists and paragraphs
\parindent 0pt
\topsep 4pt plus 1pt minus 2pt
\partopsep 1pt plus 0.5pt minus 0.5pt
\itemsep 2pt plus 1pt minus 0.5pt
\parsep 2pt plus 1pt minus 0.5pt
\parskip .5pc
%\leftmargin2em
\leftmargin3pc
\leftmargini\leftmargin \leftmarginii 2em
\leftmarginiii 1.5em \leftmarginiv 1.0em \leftmarginv .5em
%\labelsep \labelsep 5pt
\def\@listi{\leftmargin\leftmargini}
\def\@listii{\leftmargin\leftmarginii
\labelwidth\leftmarginii\advance\labelwidth-\labelsep
\topsep 2pt plus 1pt minus 0.5pt
\parsep 1pt plus 0.5pt minus 0.5pt
\itemsep \parsep}
\def\@listiii{\leftmargin\leftmarginiii
\labelwidth\leftmarginiii\advance\labelwidth-\labelsep
\topsep 1pt plus 0.5pt minus 0.5pt
\parsep \z@ \partopsep 0.5pt plus 0pt minus 0.5pt
\itemsep \topsep}
\def\@listiv{\leftmargin\leftmarginiv
\labelwidth\leftmarginiv\advance\labelwidth-\labelsep}
\def\@listv{\leftmargin\leftmarginv
\labelwidth\leftmarginv\advance\labelwidth-\labelsep}
\def\@listvi{\leftmargin\leftmarginvi
\labelwidth\leftmarginvi\advance\labelwidth-\labelsep}
\abovedisplayskip 7pt plus2pt minus5pt%
\belowdisplayskip \abovedisplayskip
\abovedisplayshortskip 0pt plus3pt%
\belowdisplayshortskip 4pt plus3pt minus3pt%
% Less leading in most fonts (due to the narrow columns)
% The choices were between 1-pt and 1.5-pt leading
%\def\@normalsize{\@setsize\normalsize{11pt}\xpt\@xpt} % got rid of @ (MK)
\def\normalsize{\@setsize\normalsize{11pt}\xpt\@xpt}
\def\small{\@setsize\small{10pt}\ixpt\@ixpt}
\def\footnotesize{\@setsize\footnotesize{10pt}\ixpt\@ixpt}
\def\scriptsize{\@setsize\scriptsize{8pt}\viipt\@viipt}
\def\tiny{\@setsize\tiny{7pt}\vipt\@vipt}
\def\large{\@setsize\large{14pt}\xiipt\@xiipt}
\def\Large{\@setsize\Large{16pt}\xivpt\@xivpt}
\def\LARGE{\@setsize\LARGE{20pt}\xviipt\@xviipt}
\def\huge{\@setsize\huge{23pt}\xxpt\@xxpt}
\def\Huge{\@setsize\Huge{28pt}\xxvpt\@xxvpt}
\def\toptitlebar{\hrule height4pt\vskip .25in\vskip-\parskip}
\def\bottomtitlebar{\vskip .29in\vskip-\parskip\hrule height1pt\vskip
.09in} %
%Reduced second vskip to compensate for adding the strut in \@author
% Vertical Ruler
% This code is, largely, from the CVPR 2010 conference style file
% ----- define vruler
\makeatletter
\newbox\nipsrulerbox
\newcount\nipsrulercount
\newdimen\nipsruleroffset
\newdimen\cv@lineheight
\newdimen\cv@boxheight
\newbox\cv@tmpbox
\newcount\cv@refno
\newcount\cv@tot
% NUMBER with left flushed zeros \fillzeros[<WIDTH>]<NUMBER>
\newcount\cv@tmpc@ \newcount\cv@tmpc
\def\fillzeros[#1]#2{\cv@tmpc@=#2\relax\ifnum\cv@tmpc@<0\cv@tmpc@=-\cv@tmpc@\fi
\cv@tmpc=1 %
\loop\ifnum\cv@tmpc@<10 \else \divide\cv@tmpc@ by 10 \advance\cv@tmpc by 1 \fi
\ifnum\cv@tmpc@=10\relax\cv@tmpc@=11\relax\fi \ifnum\cv@tmpc@>10 \repeat
\ifnum#2<0\advance\cv@tmpc1\relax-\fi
\loop\ifnum\cv@tmpc<#1\relax0\advance\cv@tmpc1\relax\fi \ifnum\cv@tmpc<#1 \repeat
\cv@tmpc@=#2\relax\ifnum\cv@tmpc@<0\cv@tmpc@=-\cv@tmpc@\fi \relax\the\cv@tmpc@}%
% \makevruler[<SCALE>][<INITIAL_COUNT>][<STEP>][<DIGITS>][<HEIGHT>]
\def\makevruler[#1][#2][#3][#4][#5]{\begingroup\offinterlineskip
\textheight=#5\vbadness=10000\vfuzz=120ex\overfullrule=0pt%
\global\setbox\nipsrulerbox=\vbox to \textheight{%
{\parskip=0pt\hfuzz=150em\cv@boxheight=\textheight
\cv@lineheight=#1\global\nipsrulercount=#2%
\cv@tot\cv@boxheight\divide\cv@tot\cv@lineheight\advance\cv@tot2%
\cv@refno1\vskip-\cv@lineheight\vskip1ex%
\loop\setbox\cv@tmpbox=\hbox to0cm{{\nipstenhv\hfil\fillzeros[#4]\nipsrulercount}}%
\ht\cv@tmpbox\cv@lineheight\dp\cv@tmpbox0pt\box\cv@tmpbox\break
\advance\cv@refno1\global\advance\nipsrulercount#3\relax
\ifnum\cv@refno<\cv@tot\repeat}}\endgroup}%
\makeatother
% ----- end of vruler
% \makevruler[<SCALE>][<INITIAL_COUNT>][<STEP>][<DIGITS>][<HEIGHT>]
\def\nipsruler#1{\makevruler[12pt][#1][1][3][0.993\textheight]\usebox{\nipsrulerbox}}
\AddToShipoutPicture{%
\ifnipsfinal\else
\nipsruleroffset=\textheight
\advance\nipsruleroffset by -3.7pt
\color[rgb]{.7,.7,.7}
\AtTextUpperLeft{%
\put(\LenToUnit{-35pt},\LenToUnit{-\nipsruleroffset}){%left ruler
\nipsruler{\nipsrulercount}}
}
\fi
}

View File

@ -1,12 +1,12 @@
import random
import datetime
diffs = [1634.61 * 10**12]
hashpower = diffs[0] / 20.99
times = [1502461725]
diffs = [2096.34 * 10**12]
hashpower = diffs[0] / 22.83
times = [1503633831]
for i in range(4144703, 6010000):
for i in range(4201073, 6010000):
blocktime = random.expovariate(hashpower / diffs[-1])
adjfac = max(1 - int(blocktime / 10), -99) / 2048.
newdiff = diffs[-1] * (1 + adjfac)

View File

@ -38,6 +38,6 @@ If, during an epoch $e$, for some specific ancestry hash $h$, for any specific (
\subsection{Todo}
\begin{itemize}
\item \sout{Reference the various Figures within the text so we more easily know what goes with what.}
\item \todo{Reference the various Figures within the text so we more easily know what goes with what.}
\item \todo{fill me in}
\end{itemize}

Binary file not shown.

View File

@ -0,0 +1,308 @@
\title{Casper the Friendly Finality Gadget}
\author{
Vitalik Buterin \\
Ethereum Foundation}
\documentclass[12pt, final]{article}
%\input{eth_header.tex}
\usepackage{color}
\newcommand*{\todo}[1]{\color{red} #1}
\newcommand{\eqref}[1]{eq.~\ref{#1}}
\newcommand{\figref}[1]{Figure~\ref{#1}}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\usepackage{graphicx}
\graphicspath{{figs/}{figures/}{images/}{./}}
%% Special symbols we'll probably iterate on
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% we will probably iterate on these symbols until we have a notation we like
\newcommand{\epoch}{\ensuremath{e}\space}
\newcommand{\hash}{\textnormal{h}\space}
% symbols for the epoch and hash source
\newcommand{\epochsource}{\ensuremath{\epoch_{\star}}\space}
\newcommand{\hashsource}{\ensuremath{\hash_{\star}}\space}
\newcommand{\signature}{\ensuremath{\mathcal{S}}\space}
\newcommand{\totaldeposit}{\textnormal{TD}\space}
\newcommand{\gamesymbol}{\reflectbox{G}}
\newcommand{\msgPREPARE}{\textbf{\textsc{prepare}}\space}
\newcommand{\msgCOMMIT}{\textbf{\textsc{commit}}\space}
% Symbols for the Last Justified Epoch and Hash
\newcommand{\epochLJ}{\ensuremath{\epoch_{\textnormal{LJ}}}\space}
\newcommand{\hashLJ}{\ensuremath{\hash_{\textnormal{LJ}}}\space}
% Symbols for the Last Finalized Epoch and Hash
\newcommand{\epochLF}{\ensuremath{\epoch_{\textnormal{LF}}}\space}
\newcommand{\hashLF}{\ensuremath{\hash_{\textnormal{LF}}}\space}
% Griefing Factor symbol
\newcommand{\GF}[1]{\mathds{GF}\left( #1 \right)\space}
% Genesis block symbol
\newcommand{\Genesisblock}{\ensuremath{G}\space}
\begin{document}
\maketitle
\begin{abstract}
We give an introduction to the consensus algorithm details of Casper: the Friendly Finality Gadget, as an overlay on an existing proof of work blockchain such as Ethereum. Casper is a partial consensus mechanism inspired by a combination of existing proof of stake algorithm research and Byzantine fault tolerant consensus theory, which if overlaid onto another blockchain (which could theoretically be proof of work or proof of stake) adds strong \textit{finality} guarantees that improve the blockchain's resistance to transaction reversion (or ``double spend'') attacks.
\end{abstract}
\section{Introduction}
\label{sect:intro}
Over the past few years there has been considerable research into ``proof of stake''-based blockchain consensus algorithms. In a proof of stake system, a blockchain grows and agrees on new blocks through a process where anyone who holds coins inside of the system can participate, and the amount of influence that any given coin holder has is proportional to the number of coins (or ``stake'') that they hold. This represents an alternative to proof of work ``mining'', allowing blockchains to operate without the high hardware and electricity costs that proof of work blockchains require.
There have been two major schools of thought in proof of stake algorithm design. The earlier of the two, \textit{chain-based proof of stake}, tries to closely mirror the mechanics of proof of work, featuring a chain of blocks and an algorithm that ``simulates'' mining by pseudorandomly assigning the right to create new blocks to stakeholders. This includes Peercoin\cite{king2012ppcoin}, Blackcoin\cite{vasin2014blackcoin} and Iddo Bentov's work\cite{bentov2016pos}.
The other school, \textit{BFT-based proof of stake}, is based on a thirty-year-old body of research into \textit{Byzantine fault tolerant consensus algorithms} such as PBFT \cite{castro1999practical}. BFT algorithms tend to have strong and rigorously proven mathematical properties; for example, one can usually mathematically prove that as long as more than $\frac{2}{3}$ of participants in the protocol are following the protocol correctly, the algorithm cannot possibly finalize two conflicting block hashes (``safety''); this result holds regardless of network latency. The suggestion of repurposing BFT algorithms for proof of stake was first introduced by Tendermint\cite{kwon2014tendermint}.
\subsection{Our Work}
We present an algorithm that follows in the latter tradition, though with some modifications. Casper the Friendly Finality Gadget takes the form of an \textit{overlay} on top of some kind of \textit{proposal mechanism} - a mechanism which proposes \textit{checkpoints} which the Casper mechanism can then set in stone by \textit{finalizing} them. Casper depends on the proposal mechanism for liveness, but not safety; that is, if the proposal mechanism is entirely controlled by attackers, then the attackers can prevent Casper from finalizing any future checkpoints, but cannot cause a safety failure in Casper---i.e., they cannot force Casper to finalize two conflicting blocks.
The proposal mechanism will initially be the existing Ethereum proof of work chain, making the first version of Casper a \textit{hybrid PoW/PoS algorithm} that relies on proof of work for liveness but not safety, but in future versions the proposal mechanism can be substituted with something else.
Our algorithm introduces several new properties that BFT algorithms by themselves do not necessarily support. First, we change the emphasis of the proof statement from the traditional ``as long as more than $\frac{2}{3}$ of validators are honest, there will be no safety failures'' to the contrapositive ``if there is a safety failure, that implies that $\ge \frac{1}{3}$ of validators violated some protocol rule'', and furthermore we add \textit{accountability}: ``$\ge \frac{1}{3}$ violated the rules, \textit{and we know who they are}''.
Accountability allows us to penalize malfeasant validators, solving the \textit{nothing at stake} problem that often plagues chain-based proof of stake algorithms. The size of the penalty is equal to the size of validators' entire deposits; this ensures that the cost of violating protocol guarantees is much higher than the size of the rewards that the system pays out during normal operation, achieving a much \textit{stronger} security guarantee than is possible with proof of work.
Second, the design of the algorithm as an overlay makes it easier to implement as an upgrade to a proof of work base chain. Third, we introduce a provably safe way of allowing the validator set to change over time. Finally, we introduce a way to recover from attacks where more than $\frac{1}{3}$ of validators drop offline, at the cost of a very weak \textit{tradeoff synchronicity assumption}.
\section{The Protocol}
\label{sect:protocol}
We will describe the protocol in stages, starting with a simple version and then progressively adding features such as validator set changes and mass liveness fault recovery. In the simple version, we simply assume that there is a set of validators, as well as a \textit{proposal mechanism} which is either a proof of work chain or something which exhibits similar behavior. We define an \textit{epoch} as a range of 100 blocks (e.g. blocks 600...699 are epoch 6), and a \textit{checkpoint} as the hash of a block right before the start of an epoch. The \textit{epoch of a checkpoint} is the epoch \textit{after} the checkpoint, e.g. the epoch of a checkpoint which is the hash of some block 599 is 6. Validators have the ability to make two types of messages:
$$\langle \msgPREPARE, \hash, \epoch, \hashsource, \epochsource, \signature \rangle$$
\begin{tabular}{l l}
\textbf{Notation} & \textbf{Description} \\
\hash & a checkpoint hash \\
\epoch & the epoch of the checkpoint \\
$\hashsource$ & the most recent justified hash \\
$\epochsource$ & the epoch of $\hashsource$ \\
\signature & signature of $(\hash,\epoch,\hashsource,\epochsource)$ from the validator's private key \\
\end{tabular} \label{tbl:prepare}
$$ $$
$$\langle \msgCOMMIT, \hash, \epoch, \signature \rangle$$
\begin{tabular}{l l}
\textbf{Notation} & \textbf{Description} \\
\hash & a checkpoint hash \\
\epoch & the epoch of the checkpoint \\
\signature & signature from the validator's private key \\
\end{tabular}
\label{tbl:commit}
\label{fig:messages}
$$ $$
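As an illustration of the epoch and checkpoint arithmetic and the message formats just defined, the sketch below is purely illustrative; the function and variable names are hypothetical and not part of any implementation:
\begin{verbatim}
EPOCH_LENGTH = 100

def epoch_of_block(block_number):
    # e.g. blocks 600...699 belong to epoch 6
    return block_number // EPOCH_LENGTH

def checkpoint_block_number(epoch):
    # the checkpoint of epoch 6 is the hash of block 599, the block right
    # before the epoch starts (epoch 0 uses an implied genesis checkpoint)
    return epoch * EPOCH_LENGTH - 1

# the two message types, as plain tuples; signing is omitted
def prepare_msg(h, e, h_source, e_source, sig):
    return ('PREPARE', h, e, h_source, e_source, sig)

def commit_msg(h, e, sig):
    return ('COMMIT', h, e, sig)
\end{verbatim}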
Each validator has a \emph{deposit size}; when a validator joins, their deposit size is equal to the number of coins that they deposited, and from there on each validator's deposit size rises and falls with rewards and penalties. For the rest of this paper, when we say ``$\frac{2}{3}$ of validators'', we are referring to a \emph{deposit-weighted} fraction; that is, a set of validators whose combined deposit size is at least $\frac{2}{3}$ of the total deposit size of the entire set of validators. ``$\frac{2}{3}$ prepares'' will be used as shorthand for ``prepares from $\frac{2}{3}$ of validators''. We also use $\epoch(\hash)$ to denote ``the epoch of $\hash$''.
Every checkpoint hash $\hash$ has one of three possible states: \emph{fresh}, \emph{justified}, and \emph{finalized}. Every hash starts as \emph{fresh}. A hash $\hash$ converts from fresh to \emph{justified} if $\frac{2}{3}$ of validators send prepares of the form
\begin{equation}
\langle \msgPREPARE, \epoch(\hash), \hash, \epoch(\hashsource), \hashsource, \signature \rangle
\label{eq:msgPREPARE}
\end{equation}
for some specific $\hashsource$. A hash $\hash$ can only be justified if its $\hashsource$ is already justified or finalized.
Additionally, a hash $\hash$ converts from justified to \emph{finalized} if $\frac{2}{3}$ of validators commit
\begin{equation}
\langle \msgCOMMIT, \epoch(\hash), \hash, \signature \rangle \; .
\label{eq:msgCOMMIT}
\end{equation}
An ``ideal execution'' of the protocol is one where, at the start of every epoch, every validator prepares and commits the same checkpoint for that epoch, specifying the same $\epochsource$ and $\hashsource$.
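The fresh/justified/finalized transitions can be summarized in a short sketch. This is a minimal illustration only, assuming simple in-memory dictionaries and deposit-weighted vote sets; the names (e.g. GENESIS_HASH, try_justify) are hypothetical and this is not the contract's actual logic:
\begin{verbatim}
# deposits: validator -> deposit size; senders: set of validators who signed
def is_supermajority(senders, deposits):
    total = sum(deposits.values())
    return sum(deposits[v] for v in senders) * 3 >= total * 2

GENESIS_HASH = b'\x00' * 32           # placeholder for the genesis checkpoint
state = {GENESIS_HASH: 'justified'}   # checkpoint hash -> fresh/justified/finalized;
                                      # the genesis is treated as justified

def try_justify(h, h_source, prepare_senders, deposits):
    # fresh -> justified: 2/3 prepares whose source is already justified or finalized
    if state.get(h_source) in ('justified', 'finalized') and \
            is_supermajority(prepare_senders, deposits):
        state[h] = 'justified'

def try_finalize(h, commit_senders, deposits):
    # justified -> finalized: 2/3 commits on an already justified hash
    if state.get(h) == 'justified' and is_supermajority(commit_senders, deposits):
        state[h] = 'finalized'
\end{verbatim}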
\begin{figure}[h!tb]
\centering
\includegraphics[width=5.5in]{prepares_commits.png}
\caption{Illustrating prepares, commits and checkpoints. Arrows represent \textit{dependency} (e.g. a commit depends on there being $\frac{2}{3}$ existing prepares)}
\label{fig:prepares_and_commits}
\end{figure}
During epoch $n$, validators are expected to send prepare and commit messages with $\epoch = n$ and \hash equal to a checkpoint of epoch $n$. Prepare messages may specify as \hashsource any ancestor checkpoint of \hash from a previous epoch (preferably the immediately preceding checkpoint) that is already \textit{justified} (see below), and the \epochsource is expected to be the epoch of that checkpoint.
Validators only pay attention to prepares and commits if they have been included in blocks, even if those blocks are not part of the main chain. This simplifies our finality mechanism because it allows it to be expressed as a fork choice rule where the ``score'' of a block only depends on the block and its children, putting it into a similar category as more traditional PoW-based fork choice rules such as the longest chain rule and GHOST\cite{sompolinsky2013accelerating}.
Unlike GHOST, however, this fork choice rule is also \textit{finality-bearing}: there exists a ``finality'' mechanism that has the property that (i) the fork choice rule always prefers finalized blocks over non-finalized competing blocks, and (ii) it is impossible for two incompatible checkpoints to be finalized unless at least $\frac{1}{3}$ of the validators violated one of the two Casper Commandments (a.k.a. slashing conditions; a sketch of checking them follows the list):
\begin{enumerate}
\item[\textbf{I.}] \textsc{A validator shalt not publish two or more nonidentical Prepares for the same epoch.}
In other words, a validator may Prepare at most one (\hash, \epochsource, \hashsource) triplet for any given epoch \epoch.
\item[\textbf{II.}] \textsc{A validator shalt not publish a Commit whose epoch falls between the source epoch and the target epoch of one of its Prepares.}
Equivalently, a validator may not publish
\begin{equation}
\langle \msgPREPARE, \epoch_p, \hash_p, \epochsource, \hashsource, \signature \rangle \hspace{0.5in} \textnormal{\textsc{and}} \hspace{0.5in} \langle \msgCOMMIT, \epoch_c, \hash_c, \signature \rangle \;,
\label{eq:commandmentII}
\end{equation}
where the epochs satisfy $\epochsource < \epoch_c < \epoch_p$.
\end{enumerate}
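Both commandments can be checked mechanically from a pair of signed messages. The sketch below uses illustrative tuple encodings (not the actual on-chain encoding) and assumes both messages come from the same validator:
\begin{verbatim}
# prepare = (h, epoch, h_source, epoch_source); commit = (h, epoch)
def violates_commandment_I(prepare_a, prepare_b):
    # two nonidentical prepares for the same epoch
    return prepare_a[1] == prepare_b[1] and prepare_a != prepare_b

def violates_commandment_II(prepare, commit):
    # a commit whose epoch lies strictly between a prepare's source epoch
    # and that prepare's own (target) epoch
    _, e_p, _, e_source = prepare
    _, e_c = commit
    return e_source < e_c < e_p
\end{verbatim}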
If a validator violates a slashing condition, the evidence that they did this can be included into the blockchain as a transaction, at which point the validator's entire deposit will be taken away, with a 4\% ``finder's fee'' given to the submitter of the evidence transaction.
Earlier versions of Casper had four slashing conditions \cite{minslashing}, but we can reduce them to two because of the requirements that (i) finalized hashes must be justified, and (ii) justified hashes must point to an already justified ancestor; these requirements ensure that blocks will not register commits or prepares that violate the other two slashing conditions, making them superfluous.
\section{Proofs of Safety and Plausible Liveness}
\label{sect:theorems}
We give a proof of two properties of Casper: \textit{accountable safety} and \textit{plausible liveness}. Accountable safety means that two conflicting checkpoints cannot be finalized unless $\geq \frac{1}{3}$ of validators violate a slashing condition (meaning at least one third of the total deposits are lost). Honest validators will never violate slashing conditions, so this implies the usual Byzantine fault tolerance safety property, but expressing this in terms of slashing conditions means that we are actually proving a stronger claim: if two conflicting checkpoints get finalized, then at least $\frac{1}{3}$ of validators were malicious, \textit{and we know whom to blame, and so we can maximally penalize them in order to make such faults expensive}.
Plausible liveness means that it is always possible for $\frac{2}{3}$ of honest validators to finalize a new checkpoint, regardless of what previous events took place.
\begin{figure}[h!tb]
\centering
\includegraphics[width=5in]{conflicting_checkpoints.png}
\caption{Two checkpoints are \textit{conflicting} if they are on distinct chains, i.e. one is not an ancestor or a descendant of the other.}
\label{fig:conflicting_checkpoints}
\end{figure}
\begin{theorem}[Accountable Safety]
\label{theorem:safety}
Two conflicting checkpoints cannot be finalized unless $\geq \frac{1}{3}$ of validators violate a slashing condition.
\begin{proof}
Suppose the two conflicting checkpoints are $A$ in epoch $\epoch_A$ and $B$ in epoch $\epoch_B$. If both are finalized, this implies $\frac{2}{3}$ commits and $\frac{2}{3}$ prepares in epochs $\epoch_A$ and $\epoch_B$. In the trivial case where $\epoch_A = \epoch_B$, this implies that some intersection of $\frac{1}{3}$ of validators must have violated slashing condition (I). In other cases, there must exist two chains $\Genesisblock < \ldots < \epoch_A^2 < \epoch_A^1 < \epoch_A$ and $\Genesisblock < \ldots < \epoch_B^2 < \epoch_B^1 < \epoch_B$ of justified checkpoints, both terminating at the genesis. Suppose without loss of generality that $\epoch_A > \epoch_B$. Then, there must be some $\epoch_A^i$ such that either $\epoch_A^i = \epoch_B$ or $\epoch_A^i > \epoch_B > \epoch_A^{i+1}$. In the first case, since $A^i$ and $B$ both have $\frac{2}{3}$ prepares, at least $\frac{1}{3}$ of validators violated slashing condition (I). Otherwise, $B$ has $\frac{2}{3}$ commits and there exist $\frac{2}{3}$ prepares with $\epoch > \epoch_B$ and $\epochsource < \epoch_B$, so at least $\frac{1}{3}$ of validators violated slashing condition (II).
\end{proof}
\end{theorem}
\begin{theorem}[Plausible Liveness]
\label{theorem:liveness}
It is always possible for $\frac{2}{3}$ of honest validators to finalize a new checkpoint, regardless of what previous events took place.
\begin{proof}
Suppose that all existing validators have sent some sequence of prepare and commit messages. Let $M$ with epoch $\epoch_M$ be the highest-epoch checkpoint that was justified, and let $n \ge \epoch_M$ be the highest epoch in which an honest validator prepared. Honest validators have not committed on any block which is not justified. Hence, neither slashing condition stops them from making prepares on a descendant of $M$ in epoch $n+1$, using $\epoch_M$ as $\epochsource$, and then committing that descendant.
\end{proof}
\end{theorem}
\section{Fork Choice Rule}
\label{sect:forkchoice}
The mechanism described above ensures \textit{plausible liveness}; however, it by itself does not ensure \textit{actual liveness} - that is, while the mechanism cannot get stuck in the strict sense, it could still enter a scenario where the proposal mechanism (i.e. the proof of work chain) gets into a state where it never ends up creating a checkpoint that could get finalized.
In \figref{fig:forkchoice} we see one possible example. In this case, $HASH1$ or any descendant thereof cannot be finalized without slashing $\frac{1}{6}$ of validators. However, miners on a proof of work chain would interpret $HASH1$ as the head and forever keep mining descendants of it, ignoring the chain based on $HASH0^\prime$ which actually could get finalized.
\begin{figure}[h!tb]
\centering
\includegraphics[width=5.5in]{fork4.png}
\caption{Miners following the traditional proof of work fork choice rule would create blocks on HASH1, but because of the slashing conditions only blocks on top of HASH1' can be finalized.}
\label{fig:forkchoice}
\end{figure}
In fact, when \textit{any} checkpoint gets $k > \frac{1}{3}$ commits, no conflicting checkpoint can get finalized without $k - \frac{1}{3}$ of validators getting slashed. This necessitates modifying the fork choice rule used by participants in the underlying proposal mechanism (as well as users and validators): instead of blindly following a longest-chain rule, there needs to be an overriding rule that (i) finalized checkpoints are favored, and (ii) when there are no further finalized checkpoints, checkpoints with more (justified) commits are favored.
One complete description of such a rule would be:
\begin{enumerate}
\item Start with HEAD equal to the genesis of the chain.
\item Select the descendant checkpoint of HEAD with the most commits (only justified checkpoints are admissible)
\item Repeat (2) until no descendant with commits exists.
\item Choose the longest proof of work chain from there.
\end{enumerate}
The commit-following part of this rule can be viewed as mirroring the ``greedy heaviest observed subtree'' (GHOST) rule that has been proposed for proof of work chains\cite{sompolinsky2013accelerating}. The symmetry is as follows. In GHOST, a node starts with the head at the genesis, then begins to move forward down the chain, and if it encounters a block with multiple children then it chooses the child that has the larger quantity of work built on top of it (including the child block itself and its descendants).
In this algorithm, we follow a similar approach, except we repeatedly seek the child that comes the closest to achieving finality. Commits on a descendant are implicitly commits on all of its lineage, and so if a given descendant of a given block has more commits than any other descendant, then we know that all children along the chain from the head to this descendant are closer to finality than any of their siblings; hence, looking for the \textit{descendant} with the most commits and not just the \textit{child} replicates the GHOST principle most faithfully. Finalizing a checkpoint requires $\frac{2}{3}$ commits within a \textit{single} epoch, and so we do not try to sum up commits across epochs and instead simply take the maximum.
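The four-step rule above, combined with this GHOST-style descendant search, can be summarized in a short sketch. The helper functions (justified_descendant_checkpoints, commit_count, pow_head_from) are assumed to be provided by the client and are named here only for illustration:
\begin{verbatim}
def casper_fork_choice(genesis, justified_descendant_checkpoints,
                       commit_count, pow_head_from):
    # 1. start with HEAD equal to the genesis of the chain
    head = genesis
    while True:
        # 2. among justified descendant checkpoints of HEAD that have any
        #    commits, pick the one with the most commits
        candidates = [c for c in justified_descendant_checkpoints(head)
                      if commit_count(c) > 0]
        if not candidates:
            break  # 3. stop when no descendant with commits exists
        head = max(candidates, key=commit_count)
    # 4. follow the longest proof of work chain from the chosen checkpoint
    return pow_head_from(head)
\end{verbatim}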
This rule ensures that if there is a checkpoint such that no conflicting checkpoint can be finalized without at least some validators violating slashing conditions, then this is the checkpoint that will be viewed as the ``head'' and thus that validators will try to commit on.
\section{Allowing Dynamic Validator Sets}
\label{sect:join_and_leave}
The set of validators needs to be able to change. New validators need to be able to join, and existing validators need to be able to leave. To accomplish this, we define a \textit{dynasty counter}, a variable kept track of in the state. When a user sends a ``deposit'' transaction to become a validator, if this transaction is included in dynasty $n$, then the validator will be \textit{inducted} in dynasty $n+2$. The dynasty counter increments when the chain detects that the checkpoint of the current epoch that is part of its own history has been \textit{perfectly finalized} (that is, the checkpoint of epoch \epoch must be finalized during epoch \epoch, and the chain must learn about this before epoch \epoch ends). In simpler terms, when a user sends a ``deposit'' transaction, they need to wait for the transaction to be perfectly finalized, and then they need to wait again for the next epoch to be finalized; after this, they become part of the validator set. We call such a validator's \textit{start dynasty} $n+2$.
For a validator to leave, they must send a ``withdraw'' message. If their withdraw message gets included during dynasty $n$, the validator similarly leaves the validator set during dynasty $n+2$; we call $n+2$ their \textit{end dynasty}. When a validator withdraws, their deposit is locked for a long period of time (the \textit{withdrawal delay}, for now think ``four months'') before they can take their money out; if they are caught violating a slashing condition within that time then their deposit is forfeited.
For a checkpoint to be justified, it must be prepared by a set of validators which contains (i) at least $\frac{2}{3}$ of the current dynasty (that is, validators with $startDynasty \le curDynasty < endDynasty$), and (ii) at least $\frac{2}{3}$ of the previous dynasty (that is, validators with $startDynasty \le curDynasty - 1 < endDynasty$). Finalization with commits works similarly. The current and previous dynasties will usually greatly overlap, but in cases where they substantially diverge, this ``stitching'' mechanism ensures that a dynasty divergence cannot lead to a finality reversion or other failure in which conflicting messages are signed by two different validator sets.
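The stitching requirement can be expressed as a check over the two dynasties; a rough sketch, assuming validator records with hypothetical start_dynasty, end_dynasty and deposit fields:
\begin{verbatim}
def in_dynasty(v, dynasty):
    return v.start_dynasty <= dynasty < v.end_dynasty

def stitched_supermajority(signers, validators, cur_dynasty):
    # require 2/3 (by deposit) of the current dynasty AND 2/3 of the previous one
    def has_two_thirds(dynasty):
        members = [v for v in validators if in_dynasty(v, dynasty)]
        total = sum(v.deposit for v in members)
        signed = sum(v.deposit for v in members if v in signers)
        return signed * 3 >= total * 2
    return has_two_thirds(cur_dynasty) and has_two_thirds(cur_dynasty - 1)
\end{verbatim}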
\begin{figure}[h!tb]
\centering
\includegraphics[width=4in]{validator_set_misalignment.png}
\caption{Without the validator set stitching mechanism, it's possible for two conflicting checkpoints to be finalized with no validators slashed}
\label{fig:dynamic}
\end{figure}
\subsection{Long Range Attacks}
Note that the withdrawal delay introduces a synchronicity assumption \textit{between validators and clients}. Because validators can withdraw their deposits after the withdrawal delay, there is an attack where a coalition of validators which had more than $\frac{2}{3}$ of deposits \textit{long ago in the past} withdraws their deposits, and then uses their historical deposits to finalize a new chain that conflicts with the original chain without fear of getting slashed.
\begin{figure}[h!tb]
\centering
\includegraphics[width=3in]{LongRangeAttacks.png}
\caption{Despite violating slashing conditions to make a chain split, because the attacker has already withdrawn on both chains they do not lose any money. This is often called a \textit{long-range attack}.}
\label{fig:longrange}
\end{figure}
We solve this problem by simply having clients refuse to accept a finalized checkpoint that conflicts with finalized checkpoints they already know about. Suppose that clients can be relied upon to log on at least once every time $\delta$, and that the withdrawal delay is $W$. Suppose an attacker publishes one finalized checkpoint at time $0$, and a conflicting one right after. We pessimistically suppose that the first checkpoint reaches all clients at time $0$, and that the second reaches some client only at time $\delta$. That client then knows of the fraud, and can create and publish an evidence transaction. We additionally add a consensus rule requiring clients to reject chains that have not included evidence transactions that the client has known about for time $\delta$. Hence, clients will not accept a chain that has not included the evidence transaction by time $2\delta$. So if $W > 2\delta$, then the slashing conditions are enforceable.
In practice, this means that if the withdrawal delay is four months, then clients will need to log on at least once every two months to avoid accepting bad chains on which attackers cannot be penalized.
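A minimal sketch of this client-side rule, assuming the client keeps a local record of finalized checkpoints and of the slashing evidence it has seen (all names below are hypothetical, not part of any specified client):
\begin{verbatim}
# Hypothetical client-side defense against long-range attacks.  delta is the
# maximum time between client log-ons and W is the withdrawal delay; the
# rule is only sound when W > 2 * delta.
import time

class FinalityClient:
    def __init__(self, delta, withdrawal_delay):
        assert withdrawal_delay > 2 * delta, "need W > 2*delta"
        self.delta = delta
        self.finalized = {}       # epoch -> hash of the finalized checkpoint
        self.evidence_seen = {}   # evidence id -> time first seen

    def on_finalized_checkpoint(self, epoch, checkpoint_hash, now=None):
        # Never accept a finalized checkpoint conflicting with a known one;
        # instead, record (and broadcast) slashing evidence.
        if epoch in self.finalized and self.finalized[epoch] != checkpoint_hash:
            evidence_id = (epoch, self.finalized[epoch], checkpoint_hash)
            self.evidence_seen.setdefault(
                evidence_id, now if now is not None else time.time())
            return False
        self.finalized[epoch] = checkpoint_hash
        return True

    def chain_is_acceptable(self, included_evidence, now=None):
        # Reject chains that still omit evidence this client has known
        # about for at least time delta.
        now = time.time() if now is None else now
        return all(evid in included_evidence or now - first_seen < self.delta
                   for evid, first_seen in self.evidence_seen.items())
\end{verbatim}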
\section{Recovering from Catastrophic Crashes}
\label{sect:leak}
Suppose that $>\frac{1}{3}$ of validators crash-fail at the same time---that is, they are no longer connected to the network because of a network partition or computer failure, or they are malicious actors. Then no later checkpoint will be able to get finalized.
We can recover from this by instituting a ``leak'' which dissipates the deposits of validators that do not prepare or commit, until eventually their deposit sizes decrease enough that the validators that \textit{are} preparing and committing constitute a $\frac{2}{3}$ supermajority. The simplest possible formula is something like ``a validator with deposit size $D$ loses $D \cdot p$ in every epoch in which they do not prepare and commit'', though a formula which increases the rate of dissipation during a long streak of non-finalized checkpoints may resolve catastrophic crashes more quickly.
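As a back-of-the-envelope illustration (not part of the protocol specification), the following sketch estimates how many epochs of leaking are needed, under the simple constant-rate formula, before the online validators regain a $\frac{2}{3}$ deposit-weighted supermajority:
\begin{verbatim}
def epochs_until_recovery(online_fraction, p):
    # With the simple rule "a non-participating validator loses a fraction p
    # of its deposit every epoch", count how many epochs pass before the
    # online validators hold at least 2/3 of the remaining total deposits.
    online = online_fraction
    offline = 1.0 - online_fraction
    epochs = 0
    while 3 * online < 2 * (online + offline):
        offline *= (1.0 - p)   # offline deposits leak; online deposits are untouched
        epochs += 1
    return epochs

# Example (purely illustrative parameters): if validators holding half of
# all deposits go offline and the leak rate is 1% per epoch, then
# epochs_until_recovery(0.5, 0.01) == 69 epochs.
\end{verbatim}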
The dissipated portion of deposits can either be burned or simply forcibly withdrawn and immediately refunded to the validator; which of the two strategies to use, or what combination, is an economic incentive concern and thus outside the scope of this paper.
Note that this does introduce the possibility of two conflicting checkpoints being finalized, with validators only losing money on one of the two checkpoints as seen in \figref{fig:commitsync}.
\begin{figure}[h!tb]
\centering
\includegraphics[width=4in]{CommitsSync.png}
\caption{The checkpoint on the left can be finalized immediately. The checkpoint on the right can be finalized after some time, once offline validator deposits sufficiently dissipate.}
\label{fig:commitsync}
\end{figure}
If the goal is simply to get as close as possible to $50\%$ fault tolerance, then clients should simply favor the finalized checkpoint that they received earlier. However, if clients are also interested in defeating 51\% censorship attacks, then they may want to at least sometimes choose the minority chain. All forms of ``51\% attacks'' can thus be resolved fairly cleanly via ``user-activated soft forks'' that reject what would normally be the dominant chain. In particular, note that finalizing even one block on the dominant chain precludes the attacking validators from preparing on the minority chain because of Commandment II, at least until their balances decrease to the point where the minority can commit; such a fork therefore also costs the majority attacker a very large portion of their deposits.
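As an illustration, one way a client could encode this choice (the function and parameter names are hypothetical):
\begin{verbatim}
def choose_finalized_checkpoint(candidates, soft_fork_choice=None):
    # candidates: list of (checkpoint_hash, time_first_seen) for conflicting
    # finalized checkpoints.  The default policy favors the checkpoint seen
    # earliest, which maximizes plain fault tolerance; a user-activated soft
    # fork can override the default to deliberately follow a minority chain.
    if soft_fork_choice is not None:
        return soft_fork_choice
    return min(candidates, key=lambda c: c[1])[0]
\end{verbatim}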
\section{Conclusions}
This paper has introduced the basic workings of Casper the Friendly Finality Gadget's prepare and commit mechanism and fork choice rule, in the context of Byzantine fault tolerance analysis. Separate papers will explain and analyze the incentives inside of Casper, the different ways in which they can be parametrized, and the consequences of those parametrizations.
\section{Acknowledgements}
We thank Virgil Griffith for review and Sandro Lera for mathematics.
%\section{References}
\bibliographystyle{abbrv}
\bibliography{ethereum}
%\input{appendix.tex}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}

View File

@@ -1,6 +1,6 @@
% make in NIPS format
\usepackage{nips10submit_e}
\nipsfinalcopy
% \usepackage{nips10submit_e}
% \nipsfinalcopy
@@ -21,7 +21,7 @@
%\usepackage{ lmodern } % I prefer this over times
\usepackage{array} % replacement for eqnarray. Must be BEFORE \usepackage{arydshln}
\usepackage{units} % for \nicefrac{\alpha}{\beta}
% \usepackage{units} % for \nicefrac{\alpha}{\beta}
\usepackage{amsthm} % for theorems
@@ -32,22 +32,22 @@
%\usepackage{wasysym}
\usepackage{textcomp, marvosym} % pretty symbols
\usepackage{booktabs} % for much better looking tables
% \usepackage{booktabs} % for much better looking tables
% for indicator functions
\usepackage{dsfont}
% \usepackage{dsfont}
% For automatic capitalizaton of section names, etc.
\usepackage{titlesec,titlecaps}
% \usepackage{titlesec,titlecaps}
\Addlcwords{is with of the and in}
% \Addlcwords{is with of the and in}
%\Addlcwords{of the}
%\Addlcwords{and}
\titleformat{\section}[block]{}{\normalfont\Large\bfseries\thesection.\;}{0pt}{\formatsectiontitle}
% \titleformat{\section}[block]{}{\normalfont\Large\bfseries\thesection.\;}{0pt}{\formatsectiontitle}
\newcommand{\formatsectiontitle}[1]{\normalfont\Large\bfseries\titlecap{#1}}
\titleformat{\subsection}[block]{}{\normalfont\large\bfseries\thesubsection.\;}{0pt}{\formatsubsectiontitle}
% \titleformat{\subsection}[block]{}{\normalfont\large\bfseries\thesubsection.\;}{0pt}{\formatsubsectiontitle}
\newcommand{\formatsubsectiontitle}[1]{\normalfont\large\bfseries\titlecap{#1}}
@@ -55,21 +55,21 @@
% for pretty Euler script
\usepackage[mathscr]{euscript}
\usepackage{bold-extra}
% \usepackage[mathscr]{euscript}
% \usepackage{bold-extra}
\usepackage{fullpage} % save trees
% \usepackage{fullpage} % save trees
\usepackage{subfig}
% \usepackage{subfig}
%\usepackage{float} % for \subfloat
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% More customizable Lists
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Better symbols custom enumerative lists, define any symbol you'd like
\usepackage{enumitem}
% \usepackage{enumitem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -112,12 +112,12 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% for \sout{} for strikeout
\usepackage[normalem]{ulem}
% \usepackage[normalem]{ulem}
% for better manipulation of tables
\usepackage{makecell}
\renewcommand\theadfont{\bfseries}
% \usepackage{makecell}
% \renewcommand\theadfont{\bfseries}
%-----------------------------------------------------------------------------

View File

@@ -15,6 +15,18 @@
year={1999}
}
@article{bitcoinwikipos,
title={Bitcoin Wiki: Proof of Stake, Cunicula's Implementation and Meni's implementation},
url={https://en.bitcoin.it/wiki/Proof_of_Stake#Cunicula.27s_Implementation_of_Mixed_Proof-of-Work_and_Proof-of-Stake}
}
@article{slasher,
title={Slasher: A Punitive Proof-of-Stake Algorithm},
year={2014},
author={Vitalik Buterin},
url={https://blog.ethereum.org/2014/01/15/slasher-a-punitive-proof-of-stake-algorithm/}
}
@article{sompolinsky2013accelerating,
title={Accelerating Bitcoin's Transaction Processing. Fast Money Grows on Trees, Not Chains.},
author={Sompolinsky, Yonatan and Zohar, Aviv},
@@ -24,7 +36,7 @@
year={2013}
}
@misc{kwon2014tendermint,
@article{kwon2014tendermint,
title={Tendermint: Consensus without mining},
year={2014},
author={Kwon, Jae},
@@ -55,6 +67,7 @@
author={King, Sunny and Nadal, Scott},
journal={self-published paper, August},
volume={19},
url={https://decred.org/research/king2012.pdf},
year={2012}
}
