\documentclass[12pt, final]{article}

\title{Incentives in Casper the Friendly Finality Gadget}
\author{
Vitalik Buterin \\
Ethereum Foundation}
%\input{eth_header.tex}
\usepackage{color}
\newcommand*{\todo}[1]{\color{red} #1}
\newcommand{\eqref}[1]{eq.~\ref{#1}}
\newcommand{\figref}[1]{Figure~\ref{#1}}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\newtheorem{definition}{Definition}
\usepackage{graphicx}
\graphicspath{{figs/}{figures/}{images/}{./}}
%% Special symbols we'll probably iterate on
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% we will probably iterate on these symbols until we have a notation we like
\newcommand{\epoch}{\ensuremath{e}\space}
\newcommand{\hash}{\textnormal{h}\space}

% symbols for the epoch and hash source
\newcommand{\epochsource}{\ensuremath{\epoch_{\star}}\space}
\newcommand{\hashsource}{\ensuremath{\hash_{\star}}\space}

\newcommand{\signature}{\ensuremath{\mathcal{S}}\space}

\newcommand{\totaldeposit}{\textnormal{TD}\space}

\newcommand{\gamesymbol}{\reflectbox{G}}

\newcommand{\msgPREPARE}{\textbf{\textsc{prepare}}\space}
\newcommand{\msgCOMMIT}{\textbf{\textsc{commit}}\space}

% Symbols for the Last Justified Epoch and Hash
\newcommand{\epochLJ}{\ensuremath{\epoch_{\textnormal{LJ}}}\space}
\newcommand{\hashLJ}{\ensuremath{\hash_{\textnormal{LJ}}}\space}

% Symbols for the Last Finalized Epoch and Hash
\newcommand{\epochLF}{\ensuremath{\epoch_{\textnormal{LF}}}\space}
\newcommand{\hashLF}{\ensuremath{\hash_{\textnormal{LF}}}\space}

% Griefing Factor symbol
\newcommand{\GF}[1]{GF\left( #1 \right)\space}

% Genesis block symbol
\newcommand{\Genesisblock}{\ensuremath{G}\space}

% Econ-specific symbols
\newcommand{\BIR}{\textsc{BIR}\space}
\newcommand{\BP}{\textsc{BP}\space}
\newcommand{\NCP}{\textsc{NCP}\space}
\newcommand{\NCCP}{\textsc{NCCP}\space}
\newcommand{\NPP}{\textsc{NPP}\space}
\newcommand{\NPCP}{\textsc{NPCP}\space}
\begin{document}
\maketitle

\begin{abstract}
We give an introduction to the incentives in the Casper the Friendly Finality Gadget protocol, and show how the protocol behaves under individual choice analysis, collective choice analysis and griefing factor analysis. We show that (i) the protocol is a Nash equilibrium assuming any individual validator's deposit makes up less than $\frac{1}{3}$ of the total, (ii) collectively, the validators lose from causing protocol faults, and there is a minimum ratio between the losses incurred by the validators and the seriousness of the fault, and (iii) the griefing factor can be bounded above by $1$, though we will prefer an alternative model that bounds the griefing factor at $2$ in exchange for other benefits. We also describe tradeoffs between protocol fairness and incentivization and fallbacks to extra-protocol resolution mechanisms such as market-driven chain splits.

We assume the ``Casper the Friendly Finality Gadget'' paper as a dependency.
\end{abstract}
\section{Recap: The Casper Protocol}
\label{sect:casperprotocol}

In the Casper protocol, there is a set of validators, and in each epoch validators have the ability to send two kinds of messages:
$$\langle \msgPREPARE, \hash, \epoch, \hashsource, \epochsource, \signature \rangle$$
\begin{tabular}{l l}
\textbf{Notation} & \textbf{Description} \\
\hash & a checkpoint hash \\
\epoch & the epoch of the checkpoint \\
$\hashsource$ & the most recent justified hash \\
$\epochsource$ & the epoch of $\hashsource$ \\
\signature & signature of $(\hash,\epoch,\hashsource,\epochsource)$ from the validator's private key \\
\end{tabular} \label{tbl:prepare}
$$ $$
$$\langle \msgCOMMIT, \hash, \epoch, \signature \rangle$$
\begin{tabular}{l l}
\textbf{Notation} & \textbf{Description} \\
\hash & a checkpoint hash \\
\epoch & the epoch of the checkpoint \\
\signature & signature from the validator's private key \\
\end{tabular}
\label{tbl:commit}
\label{fig:messages}
$$ $$
The blockchain state maintains the \textit{current validator set} $V_c: v \rightarrow R^+$, a mapping of validators to their deposit sizes (non-negative real numbers), and the \textit{previous validator set} $V_p: v \rightarrow R^+$. The \textit{total current deposit size} is $\sum_{v \in V_c} V_c[v]$, the sum of all deposits in the current validator set, and the \textit{total previous deposit size} is likewise $\sum_{v \in V_p} V_p[v]$. Validators can deposit $n$ coins to join both validator sets with deposit size $n$, and a validator with deposit size $n'$ can withdraw $n'$ coins after a delay. For any deposit or withdrawal to fully take effect, three checkpoints need to be finalized in the chain after the corresponding transaction is included in that chain (validators are inducted into and ejected from the current validator set first, so after two finalized checkpoints a validator will be in one validator set but not the other).

An \textit{epoch} is a range of 100 blocks (e.g., blocks 600\ldots699 are epoch 6), and a \textit{checkpoint} is the hash of the block right before the start of an epoch. The \textit{epoch of a checkpoint} is the epoch \textit{after} the checkpoint; e.g., the epoch of the checkpoint that is the hash of block 599 is 6.

We will use ``$p$ of validators'' for any fraction $p$ (e.g., $\frac{2}{3}$) as shorthand for ``some set of validators $V_s$ such that $\sum_{v \in V_s} V_c[v] \ge \sum_{v \in V_c} V_c[v] * p$ and $\sum_{v \in V_s} V_p[v] \ge \sum_{v \in V_p} V_p[v] * p$'' - that is, such a set must make up \textit{both} $p$ of the current validator set \textit{and} $p$ of the previous validator set. The ``portion of validators that did X'' refers to the largest value $0 \le p \le 1$ such that $p$ of validators (using the definition above) did X.
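For concreteness, the deposit-weighted check in this definition can be written as a short sketch (Python; the data layout and function name are ours and purely illustrative, not part of the protocol):

\begin{verbatim}
def is_p_of_validators(subset, V_c, V_p, p):
    """Check whether `subset` makes up fraction p of BOTH the current
    validator set V_c and the previous validator set V_p, weighted by
    deposit size (V_c and V_p map validator -> deposit)."""
    def weight(validators, deposits):
        return sum(deposits.get(v, 0) for v in validators)
    return (weight(subset, V_c) >= p * sum(V_c.values()) and
            weight(subset, V_p) >= p * sum(V_p.values()))
\end{verbatim}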

Every checkpoint hash $\hash$ has one of three possible states: \emph{fresh}, \emph{justified}, and \emph{finalized}. Every hash starts as \emph{fresh}. A hash $\hash$ converts from fresh to \emph{justified} if $\frac{2}{3}$ of validators send Prepares for $\hash$ with the same $(\epoch, \hashsource, \epochsource)$ triplet, and a justified hash becomes \emph{finalized} if $\frac{2}{3}$ of validators send Commits for it. An ``ideal execution'' of the protocol is one where, at the start of every epoch, every validator Prepares and Commits the same checkpoint for that epoch, specifying the same $\epochsource$ and $\hashsource$; thus, in every epoch, that checkpoint gets finalized. We wish to incentivize this ideal execution.
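Continuing the sketch above (again purely illustrative; \texttt{prepares} stands for the Prepare messages visible on chain for the epoch in question), the fresh-to-justified transition can be expressed as:

\begin{verbatim}
from collections import defaultdict

def is_justified(h, prepares, V_c, V_p):
    """h becomes justified once 2/3 of validators (by deposit, in both
    validator sets) have Prepared it with the same (epoch, h_star, e_star).
    Each element of `prepares` is (validator, hash, epoch, h_star, e_star)."""
    preparers_by_link = defaultdict(set)
    for validator, target, epoch, h_star, e_star in prepares:
        if target == h:
            preparers_by_link[(epoch, h_star, e_star)].add(validator)
    return any(is_p_of_validators(vs, V_c, V_p, 2/3)
               for vs in preparers_by_link.values())
\end{verbatim}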
Possible deviations from this ideal execution that we want to minimize or avoid include:
\begin{itemize}
\item Safety failures, i.e. two incompatible checkpoints getting finalized.
\item Liveness failures, i.e. a checkpoint not getting finalized during some epoch.
\end{itemize}
These are both failures \textit{of the protocol}. The next step from here is \textit{fault assignment} - if a failure of the protocol were to happen, determine what failures \textit{of individual validators} could have caused it, so that we can penalize them.
\subsection{Safety faults}
There exists a proof that any safety fault can only be caused by at least $\frac{1}{3}$ of validators violating one of the two Casper Commandments (``slashing conditions''), defined below:
\begin{enumerate}
\item[\textbf{I.}] \textsc{A validator shalt not publish two or more nonidentical Prepares for the same epoch.}

In other words, a validator may Prepare at most one (\hash, \epochsource, \hashsource) triplet for any given epoch \epoch.

\item[\textbf{II.}] \textsc{A validator shalt not publish a Commit whose epoch lies within the epoch span of one of its own Prepares.}
Equivalently, a validator may not publish
\begin{equation}
\langle \msgPREPARE, \hash_p, \epoch_p, \hashsource, \epochsource, \signature \rangle \hspace{0.5in} \textnormal{\textsc{and}} \hspace{0.5in} \langle \msgCOMMIT, \hash_c, \epoch_c, \signature \rangle \;,
\label{eq:msgPREPARE}
\end{equation}
where the epochs satisfy $\epochsource < \epoch_c < \epoch_p$.
\end{enumerate}
Hence, we can adequately penalize safety failures by simply taking away the deposits of any validator that violates either of the two slashing conditions.
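As a minimal illustration of how the two conditions can be checked mechanically (the message encoding below is ours, not the protocol's), a pair of messages signed by the same validator is slashable exactly when:

\begin{verbatim}
def is_slashable(msg1, msg2):
    """msg1, msg2: dicts describing two messages signed by the same validator.
    Prepares have keys  type='prepare', epoch, hash, source_epoch, source_hash;
    Commits have keys   type='commit',  epoch, hash."""
    # Condition I: two non-identical Prepares for the same epoch.
    if msg1["type"] == msg2["type"] == "prepare":
        return msg1["epoch"] == msg2["epoch"] and msg1 != msg2
    # Condition II: a Commit whose epoch lies strictly between the source
    # epoch and the target epoch of one of the validator's Prepares.
    if {msg1["type"], msg2["type"]} == {"prepare", "commit"}:
        prep, com = (msg1, msg2) if msg1["type"] == "prepare" else (msg2, msg1)
        return prep["source_epoch"] < com["epoch"] < prep["epoch"]
    return False
\end{verbatim}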
\subsection{Liveness Faults}

Penalizing liveness faults is more difficult. If the only kind of faulty behavior possible were nodes going offline, then penalization would likewise be simple: find the validators that did not send Prepares and Commits during an epoch, and take away their deposits. However, several other faulty behaviors are possible:
\begin{enumerate}
\item Preparing or Committing too late
\item Preparing a different \hash from the hash Prepared by most other validators.
\item Using a different \hashsource and \epochsource from those used by most other validators.
\item Network latency
\item A majority coalition finalizing a chain that does not include Prepares or Commits sent by validators outside the coalition (a ``censorship fault'')
\item A majority coalition waiting for other validators to Prepare one \hash, and then Preparing another \hash instead.
\item A majority coalition waiting for other validators to Prepare with one \hashsource, and then Preparing with another \hashsource instead.
\end{enumerate}

The list above is deliberately organized symmetrically, to illustrate a fundamental problem with attributing liveness faults known as \textit{speaker/listener fault equivalence}: given only a transcript containing a message from user B that records the absence of an expected message from user A, this could have arisen either because A was not speaking or because B was not listening, and \textit{there is no way to tell the two apart}. In this case, (1) and (5) are indistinguishable, as are (2) and (6), and (3) and (7). Finally, all seven may be indistinguishable from network latency.
What this means is that, in a liveness fault, we cannot unambiguously determine who was at fault, and this creates a fundamental tension between \textit{disincentivizing harm} and \textit{fairness} - between sufficiently penalizing validators who are malicious and not excessively penalizing validators who are not at fault. A protocol which absolutely ensures that innocent validators will not lose money must thus rely only on rewards, not on penalties, for discouraging non-uniquely-attributable faults, and so will only have a cryptoeconomic security margin equal to the size of the rewards that it issues. A protocol that penalizes suspected validators to the maximum will be one where innocent validators will not feel comfortable participating, which itself reduces security.

A third ``way out'' is punting to off-chain governance. If a fault could have been caused by either A or B, then split the chain into two branches, penalize A on one branch and B on the other, and let the market sort it out. We can theorize that the market will prefer branches where malfeasant validators control a smaller portion of the validator set, and so on the chain that ``wins'', the validators that the market subjectively deems to have been responsible for the fault will lose money and the innocent validators will not.
[diagram]
However, there must be some cost to triggering a ``governance event''; otherwise, attackers could trigger these events as a deliberate strategy in order to breed continual chaos among the users of a blockchain. The social value of blockchains largely comes from the fact that their progression is mostly automated, and so the more we can reduce the need for users to appeal to the social layer the better.
\section{Rewards and Penalties}
We define the following nonnegative functions, all of which return a nonnegative scalar with no units. Technically these values can exceed 1.0; in any situation which appears to call for reducing some validator's deposit size to a negative value, the deposit size should instead simply be reduced to zero.
\begin{itemize}
\item $\BIR(\totaldeposit)$: returns the base interest rate paid to a validator, taking as an input the current total quantity of deposited coins.
\item $\BP(\totaldeposit, \epoch - \epochLF )$: returns the ``base penalty constant''---a value expressed as a percentage rate that is used as the scaling factor for all penalties; for example, if at the current time $\BP(\cdot, \cdot) = 0.001$, then a penalty of $1.5$ means a validator loses $0.15\%$ of their deposit. It takes as inputs the current total quantity of deposited coins \totaldeposit and the number of epochs since the last finalized epoch, $\epoch - \epochLF$. Note that in a perfect protocol execution, $\epoch - \epochLF = 1$.
\item $\NPCP(\alpha)$ (``non-prepare collective penalty''): if $\alpha$ of validators ($0 \leq \alpha \leq 1$) are not seen to have Prepared during an epoch, then \emph{all} validators are charged a penalty proportional to $\NPCP(\alpha)$. $\NPCP$ must be monotonically increasing, and satisfy $\NPCP(0) = 0$.
\item $\NCCP(\alpha)$ (``non-commit collective penalty''): if $\alpha$ of validators ($0 \leq \alpha \leq 1$) are not seen to have Committed during an epoch, and that epoch had a justified hash so any validator \emph{could} have Committed, then all validators are charged a penalty proportional to $\NCCP(\alpha)$. $\NCCP$ must be monotonically increasing, and satisfy $\NCCP(0) = 0$.
\end{itemize}
We also define the following nonnegative constants:
\begin{itemize}
\item $\NPP$ (``non-prepare penalty''): the penalty for not Preparing any block during the epoch. \todo{correct?}
\item $\NCP$ (``non-commit penalty''): the penalty for not Committing any block during the epoch, if there was a justified hash which the validator \emph{could} have Committed. \todo{correct?}
\end{itemize}

Note that publishing a Prepare/Commit does not guarantee that a validator escapes the \NPP/\NCP penalty; it could be that, either because of high network latency or because of a malicious majority censorship attack, the Prepares and Commits are not included in the blockchain in time, and so the incentivization mechanism does not see them. Likewise, for $\NPCP$ and $\NCCP$, the $\alpha$ input is the proportion of validators whose Prepares and Commits are \emph{not visible}, not the proportion of validators who \emph{tried to send} a Prepare/Commit.

When we talk about Preparing and Committing the ``correct value'', we are referring to the hash \hash together with the source epoch $\epochsource$ and source hash $\hashsource$.

We now define the reward and penalty schedule, which is the entirety of the incentivization structure (an illustrative code sketch follows the list). It runs at the \emph{end} of every epoch:
\begin{enumerate}
\item All validators get a reward of $\BIR(\totaldeposit)$ (e.g., if $\BIR(\totaldeposit) = 0.0002$ then a validator with $10,000$ coins deposited gets a per-epoch reward of $2$ coins)
\item If the protocol does not see a Prepare from a given validator during the epoch, the validator is penalized $\BP(\totaldeposit, \epoch - \epochLF) * \NPP$ \todo{how does the incentive mechanism know \epoch?}
\item If the protocol does not see a Commit from a given validator during the epoch, and a block was justified (so a Commit \emph{could have} been seen), the validator is penalized $\BP(\totaldeposit, \epoch - \epochLF) * \NCP$.
\item If the protocol saw Prepares from a proportion $p$ of validators during the epoch, then \emph{every} validator is penalized $\BP(\totaldeposit, \epoch - \epochLF) * \NPCP(1 - p)$.
\item If the protocol saw Commits from a proportion $p$ of validators during the epoch, and a block was justified (so validators \emph{could have} Committed), then \emph{every} validator is penalized $\BP(\totaldeposit, \epoch - \epochLF) * \NCCP(1 - p)$.
\item The blockchain's recorded \epochLF and \hashLF are updated to the latest values. \todo{correct?}
\end{enumerate}
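The schedule can be summarized with the following sketch (Python). It is a simplification, not a specification: proportions are computed against current deposits only, the bookkeeping of step 6 is omitted, and \texttt{BIR}, \texttt{BP}, \texttt{NPP}, \texttt{NCP}, \texttt{NPCP}, \texttt{NCCP} are the functions and constants defined above, passed in as parameters:

\begin{verbatim}
def end_of_epoch(deposits, prepared, committed, justified,
                 epoch, last_finalized_epoch,
                 BIR, BP, NPP, NCP, NPCP, NCCP):
    """deposits: validator -> deposit size; prepared/committed: sets of
    validators whose Prepares/Commits were visible on chain this epoch;
    justified: whether this epoch had a justified hash."""
    TD = sum(deposits.values())
    scale = BP(TD, epoch - last_finalized_epoch)
    p_prepared = sum(deposits[v] for v in prepared) / TD
    p_committed = sum(deposits[v] for v in committed) / TD
    for v in deposits:
        delta = BIR(TD) * deposits[v]                            # step 1
        if v not in prepared:
            delta -= scale * NPP * deposits[v]                   # step 2
        if justified and v not in committed:
            delta -= scale * NCP * deposits[v]                   # step 3
        delta -= scale * NPCP(1 - p_prepared) * deposits[v]      # step 4
        if justified:
            delta -= scale * NCCP(1 - p_committed) * deposits[v] # step 5
        deposits[v] = max(0.0, deposits[v] + delta)              # never negative
    return deposits
\end{verbatim}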
\section{Three theorems}
We seek to prove the following:
\begin{theorem}[\todo{First theorem}]
\label{theorem1}
If no validator has more than $\frac{1}{3}$ of the total deposit, i.e., $\max_v V_c[v] \leq \frac{\totaldeposit}{3}$, then Preparing the last blockhash of the previous epoch and then Committing that hash is a Nash equilibrium. (Section \ref{sect:indivchoice})
\end{theorem}
\begin{theorem}[\todo{Second theorem}]
\label{theorem2}
Even if all validators collude, the ratio between the harm inflicted on the network and the penalties paid by the colluding validators is upper-bounded by a constant. (Section \ref{sect:collectivechoice}) Note that this requires a measure of ``harm inflicted''.
\end{theorem}
\begin{theorem}[\todo{Third theorem}]
\label{theorem3}
Even when attackers hold a majority of the total deposit, the ratio between the penalties incurred by the victims of an attack and the penalties incurred by the attackers, the \emph{griefing factor}, is at most 2. (Section \ref{sect:griefingfactor})
\end{theorem}
\subsection{Individual choice analysis}
\label{sect:indivchoice}

The individual choice analysis is simple. Suppose that during epoch $\epoch$ the proposal mechanism proposes a hash $\hash$ and the Casper incentivization mechanism specifies some $\epochsource$ and $\hashsource$. Because, per the definition of a Nash equilibrium, we assume that all validators other than the one under consideration follow the equilibrium strategy, we know that $\ge \frac{2}{3}$ of validators Prepared in the previous epoch, and so $\epochsource = \epoch - 1$ and $\hashsource$ is the direct parent of $\hash$.
Hence, the PREPARE\_COMMIT\_CONSISTENCY slashing condition poses no barrier to Preparing $(\epoch, \hash, \epochsource, \hashsource)$. Since, in epoch \epoch, we are assuming that all other validators \emph{will} Prepare these values and then Commit \hash, we know \hash will be a hash in the main chain, and so a validator will pay a penalty if they do not Prepare $(\epoch, \hash, \epochsource, \hashsource)$, and they can avoid the penalty if they do Prepare these values.
We are assuming there are $\frac{2}{3}$ Prepares for $(\epoch, \hash, \epochsource, \hashsource)$, and so \textsc{prepare\_req} also poses no barrier to committing \hash. Committing \hash allows a validator to avoid \NCP. Hence, there is an economic incentive to Commit \hash. This shows that, if the proposal mechanism succeeds at presenting to validators a single primary choice, Preparing and Committing the value selected by the proposal mechanism is a Nash equilibrium.
\begin{table}[h!bt]
\centering
Preparing $\langle \epoch, \hash, \epochsource, \hashsource \rangle$
{ \begin{tabular}{l c}
\textbf{Action} & \textbf{Payoff} \\
Preparing & 0 \\
Not Preparing & $-\NPP - \NPCP(\alpha)$ \\
\end{tabular}} \hspace{0.5in} Committing $\langle \epoch, \hash \rangle$
{ \begin{tabular}{l c}
\textbf{Action} & \textbf{Payoff} \\
Committing & 0 \\
Not Committing & $-\NCP - \NCCP(\alpha)$ \\
\end{tabular}
\label{tbl:commitpayoffs} }
\caption{Payoffs for ideal individual behaviors.}
\label{fig:individualpayoffs}
\end{table}
\subsection{Collective choice model}
\label{sect:collectivechoice}
To model the protocol in a collective-choice context, we first define a \emph{protocol utility function}. The protocol utility function quantifies ``how well the protocol execution is doing''. Although our specific protocol utility function cannot be derived from first principles, we can intuitively justify it. We define our protocol utility function as,
\begin{equation}
U \equiv \sum_{k = 1}^{\epoch} - \log_2\left[ k - \epochLF \right] - M \cdot F \; .
\label{eq:utilityfunction}
\end{equation}
\todo{the above equation might be simplifiable}
Where:
\begin{itemize}
\item $\epoch$ is the current epoch, starting from $0$.
\item $\epochLF$ is the index of the last finalized epoch as of epoch $k$; it is re-evaluated for each term of the sum (with the genesis epoch counting as finalized), so that $k - \epochLF \geq 1$.
\item $M$ is a very large constant.
\item $F$ is an indicator function: it returns $1$ if a safety failure has taken place, and otherwise $0$. A safety failure is defined as the mechanism finalizing two conflicting blocks. This is discussed in Appendix \ref{app:safetyfailure}.
\end{itemize}

The second term in the function is easy to justify: safety failures are very bad. The first term is trickier. To see how it works, consider the case where, for some $N$, exactly the epochs divisible by $N$ are finalized and other epochs are not. The total over each $N$-epoch slice will be roughly $\sum_{i=1}^N -\log_2(i) \approx -N * \left[ \log_2(N) - \frac{1}{\ln(2)} \right]$ (by Stirling's approximation), so the utility per epoch will be roughly $-\log_2(N)$. This states that a blockchain with some finality time $N$ has utility roughly $-\log_2(N)$, or in other words \emph{increasing the finality time of a blockchain by a constant factor causes a constant loss of utility}. The utility difference between 1 minute and 2 minute finality is the same as the utility difference between 1 hour and 2 hour finality.

This can be justified in two ways. First, one can intuitively argue that a user's psychological discomfort of waiting for finality roughly matches a logarithmic schedule. At the very least, the difference between 3600 sec and 3610 sec finality feels much more negligible than the difference between 1 sec and 11 sec finality, and so the claim that the difference between 10 sec and 20 sec finality is similar to the difference between 1 hour finality and 2 hour finality seems reasonable.\footnote{One can look at various blockchain use cases, and see that they are roughly logarithmically uniformly distributed along the range of finality times between around 200 milliseconds (``Starcraft on the blockchain'') and one week (land registries and the like). \todo{add a citation for this or delete.}}

Now, we need to show that, for any given total deposit size, $\frac{\textit{loss to protocol utility}}{\textit{validator penalties}}$ is bounded. There are two ways to reduce protocol utility: (i) cause a safety failure, or (ii) prevent finality by having more than $\frac{1}{3}$ of deposit-weighted validators not Prepare or not Commit the same hash. Causing a safety failure requires at least $\frac{1}{3}$ of validators to violate one of the Casper Commandments (Section \ref{sect:casperprotocol}), and thus ensures an immense loss of deposits. In the second case, in a chain that has not been finalized for $\epoch - \epochLF$ epochs, the penalty to attackers is at least
\begin{equation}
\frac{1}{3} \min \left[ \NPP + \NPCP\left(\frac{1}{3}\right),\; \NCP + \NCCP\left(\frac{1}{3}\right) \right] * \BP(\totaldeposit, \epoch - \epochLF) \; .
\end{equation}
To enforce a ratio between validator losses and loss to protocol utility, we set,
\begin{equation}
\BP(\totaldeposit, \epoch - \epochLF) \equiv \frac{k_1}{\totaldeposit^p} + k_2 * \lfloor \log_2(\epoch - \epochLF) \rfloor\; .
\end{equation}
\todo{what is $p$ in the above equation?}
The first term serves to take away the profits of non-committers; the second term creates a penalty which is proportional to the loss in protocol utility.
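As a purely illustrative example (the constants $k_1 = 0.01$, $k_2 = 0.0001$ and exponent $p = 0.5$ below are arbitrary placeholders, not recommended values), the penalty scale factor grows with the time since the last finalized epoch:

\begin{verbatim}
import math

def BP(total_deposit, epochs_since_finality, k1=0.01, k2=0.0001, p=0.5):
    # k1 / TD^p  +  k2 * floor(log2(e - e_LF)); all constants are placeholders
    return (k1 / total_deposit ** p
            + k2 * math.floor(math.log2(epochs_since_finality)))

for delay in (1, 2, 4, 16, 256):
    print(delay, BP(10 ** 7, delay))
\end{verbatim}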
This connection between validator losses and loss to protocol utility has several consequences. First, it establishes that harming the protocol execution is always a net loss, with the net loss increasing with the harm inflicted. Second, it establishes that the protocol approximates the properties of a \emph{potential game} \cite{monderer1996potential}. Potential games have the property that Nash equilibria of the game correspond to local maxima of the potential function (in this case, protocol utility), and so correctly following the protocol is a Nash equilibrium even in cases where attackers control $>\frac{1}{3}$ of the total deposit.
Here, the protocol utility function is not a perfect potential function, as it does not always take into account changes in the \emph{quantity} of Prepares and Commits whereas validator rewards do, but it does come close. \todo{Could someone do better than our \eqref{eq:utilityfunction}?}
\subsection{Griefing factor analysis}
\label{sect:griefingfactor}
Griefing factor analysis quantifies the risk to honest validators. In general, if all validators are honest, and if network latency stays below half \todo{half, right?} the length of an epoch, then they face zero penalties. In the case where malicious validators exist, however, they can create penalties for themselves as well as for honest validators.

We define the ``griefing factor'' of a game as the degree to which malicious validators can impose penalties on honest validators relative to the penalties they incur themselves. Formally,
\begin{equation}
\GF{ \gamesymbol, C } \equiv \max_{S \in strategies(C)} \frac{loss(Players \setminus C)}{loss(C)} \; .
\end{equation}
\todo{I need to work on this equation more. I don't like it yet.}
\begin{definition}
A strategy used by a coalition in a given mechanism has a \emph{griefing factor} $B$ if it can be shown that this strategy imposes a loss of $B * x$ to those outside the coalition at the cost of a loss of $x$ to those inside the coalition. If all strategies that cause deviations from some given baseline state have griefing factors less than or equal to some bound B, then we call B a \emph{griefing factor bound}. \todo{I plan to write this in terms of classical game theory.}
\end{definition}

A strategy that imposes a loss on outsiders either at no cost to a coalition, or at a benefit to the coalition, is said to have a griefing factor of infinity. Proof-of-work blockchains have a griefing factor bound of infinity because a 51\% coalition can double its revenue by refusing to include blocks from other participants and waiting for difficulty adjustment to reduce the difficulty. With selfish mining, the griefing factor may be infinity for coalitions of size as low as 23.21\% \cite{selfishminingBTC}.
\begin{figure}[h!bt]
\centering
\includegraphics[width=3in]{cs.pdf}
\caption{Plotting the griefing factor as a function of the proportion of players coordinating to grief.}
\label{fig:GF}
\end{figure}
Then, to define the griefing factor of the entire game, we take the area under the curve in \figref{fig:GF}, leading to
\begin{equation}
\GF{ \gamesymbol } \equiv \int_{0}^{1} GF( \gamesymbol, \alpha ) \; d\alpha \; .
\end{equation}
Let us start off our griefing analysis by not taking into account validator churn, so the validator set is always the same. In Casper, we can identify the following deviating strategies:
\begin{enumerate}
\item A minority of validators do not Prepare, or Prepare incorrect values.
\item (Mirror image of 1) A censorship attack where a majority of validators does not accept Prepares from a minority of validators (or other isomorphic attacks, such as waiting for the minority to Prepare hash $H_1$ and then Preparing $H_2$, making $H_2$ the dominant chain and denying the victims their rewards).
\item A minority of validators do not Commit.
\item (Mirror image of 3) A censorship attack where a majority of validators does not accept Commits from a minority of validators.
\end{enumerate}

Notice that, from the point of view of griefing factor analysis, it is immaterial whether or not any hash in a given epoch was justified or finalized. The Casper mechanism only pays attention to finalization in order to calculate $\BP(\totaldeposit, \epoch - \epochLF)$, the penalty scaling factor. This value scales penalties evenly for all participants, so it does not affect griefing factors.
Let us now analyze the attack types:
\begin{table}
\centering
\renewcommand{\arraystretch}{2}
\begin{tabular}{l l l }
\textbf{Attack} & \textbf{Amount lost by malicious validators} & \textbf{Amount lost by honest validators} \\
Minority of size $\alpha < \frac{1}{2}$ non-Prepares & $\NPP * \alpha + \NPCP(\alpha) * \alpha$ & $\NPCP(\alpha) * (1-\alpha)$ \\
Majority censors $\alpha < \frac{1}{2}$ Prepares & $\NPCP(\alpha) * (1-\alpha)$ & $\NPP * \alpha + \NPCP(\alpha) * \alpha$ \\
Minority of size $\alpha < \frac{1}{2}$ non-Commits & $\NCP * \alpha + \NCCP(\alpha) * \alpha$ & $\NCCP(\alpha) * (1-\alpha)$ \\
Majority censors $\alpha < \frac{1}{2}$ Commits & $\NCCP(\alpha) * (1-\alpha)$ & $\NCP * \alpha + \NCCP(\alpha) * \alpha$ \\
\end{tabular}
\caption{Attacks on the protocols and their costs to malicious validators and honest validators.}
\end{table}
\subsection{Shape of the penalties}
There is a symmetry between the non-Prepare case and the non-Commit case, so we assume $\frac{\NCCP(\alpha)}{\NCP} = \frac{\NPCP(\alpha)}{\NPP}$. Also, from a protocol utility standpoint (\ref{fig:utility}), additional Commits are always useful as long as $p > \frac{1}{3}$, as they give at least some economic security against finality reversions. However, a Prepare share below $\frac{2}{3}$ is exceedingly harmful, as it prevents \emph{any} Commits.
\begin{figure}[h!bt]
\centering
Utility as a function of $p$ { \includegraphics[width=3in]{goodness-with-p.pdf} \label{fig:utility} }
\NCCP and \NPCP as a function of $\alpha$ { \includegraphics[width=3in]{cs.pdf} \label{fig:collectivepenalties} }
\caption{Left: protocol utility as a function of the proportion $p$ of Commits. Right: the collective penalties \NCCP and \NPCP as a function of the non-participating proportion $\alpha$.}
\label{fig:penaltyshapes}
\end{figure}
In the normal case, anything less than $\frac{1}{3}$ of Commits provides no economic security, so we can treat $p_c < \frac{1}{3}$ Commits as equivalent to no Commits; this suggests $\NPP = 2 * \NCP$. We can also normalize $\NCP = 1$.

Now, let us analyze the griefing factors to try to determine an optimal shape for $\NCCP$. The griefing factor for non-Committing is
\begin{equation}
GF = \frac{(1-\alpha) * \NCCP(\alpha)}{\alpha * (1 + \NCCP(\alpha))} \; .
\end{equation}
The griefing factor for censoring is the inverse of this. If we want the griefing factor for non-Committing to equal one, then we can solve:
\begin{eqnarray}
\alpha * (1 + \NCCP(\alpha)) &=& (1-\alpha) * \NCCP(\alpha) \\
\frac{1 + \NCCP(\alpha)}{\NCCP(\alpha)} &=& \frac{1-\alpha}{\alpha} \\
\frac{1}{\NCCP(\alpha)} &=& \frac{1-\alpha}{\alpha} - 1 \\
\NCCP(\alpha) &=& \frac{\alpha}{1-2\alpha}
\end{eqnarray}
Note that for $\alpha = \frac{1}{2}$, this would set the \NCCP to infinity. Hence, with this design a griefing factor of $1$ is infeasible. We \emph{can} achieve that effect in a different way - by making \NCP itself a function of $\alpha$; in this case, $\NCCP = 1$ and $\NCP = \max[0, 1 - 2\alpha]$ would achieve the desired effect. If we want to keep the formula for \NCP constant, and the formula for \NCCP reasonably simple and bounded, then one alternative is to set $\NCCP(\alpha) = \frac{\alpha}{1-\alpha}$; this keeps griefing factors bounded between $\frac{1}{2}$ and $2$.
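A quick numeric check of the last claim (illustrative only): with $\NCP = 1$ and $\NCCP(\alpha) = \frac{\alpha}{1-\alpha}$, the non-Commit griefing factor reduces to $1 - \alpha$ and the censorship griefing factor is its reciprocal, so both stay within $[\frac{1}{2}, 2]$ for minority coalitions $\alpha \le \frac{1}{2}$:

\begin{verbatim}
def NCCP(a):
    return a / (1 - a)

for a in (0.05, 0.1, 0.25, 1/3, 0.5):
    gf_non_commit = (1 - a) * NCCP(a) / (a * (1 + NCCP(a)))  # equals 1 - a
    gf_censorship = 1 / gf_non_commit                        # equals 1 / (1 - a)
    assert 0.5 <= gf_non_commit <= 2 and 0.5 <= gf_censorship <= 2
    print(a, gf_non_commit, gf_censorship)
\end{verbatim}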
\section{Pools}
In a traditional (i.e., not sharded or otherwise scalable) blockchain, there is a limit to the number of validators that can be supported, because each validator imposes a substantial amount of overhead on the system. If we accept a maximum overhead of two consensus messages per second and an epoch time of 1400 seconds, then the system can handle 1400 validators (not 2800, because each validator sends both a Prepare and a Commit every epoch). Given that the number of individual users interested in staking will likely exceed 1400, this necessarily means that most users will participate through some kind of ``stake pool''.
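The arithmetic behind the 1400-validator figure is simply:

\begin{verbatim}
MAX_CONSENSUS_MSGS_PER_SEC = 2
EPOCH_TIME_SECONDS = 1400
MSGS_PER_VALIDATOR_PER_EPOCH = 2   # one Prepare and one Commit

max_validators = (MAX_CONSENSUS_MSGS_PER_SEC * EPOCH_TIME_SECONDS
                  // MSGS_PER_VALIDATOR_PER_EPOCH)
print(max_validators)              # 1400
\end{verbatim}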
There are several possible kinds of stake pools:
\begin{itemize}
\item \textbf{Fully centrally managed}: users $B_1 \ldots B_n$ send coins to pool operator $A$. $A$ makes a few deposit transactions containing their combined balances, fully controls the Prepare and Commit process, and occasionally withdraws one of their deposits to accommodate users wishing to withdraw their balances. Requires complete trust.
\item \textbf{Centrally managed but trust-reduced}: users $B_1 \ldots B_n$ send coins to a pool contract. The contract sends a few deposit transactions containing their combined balances, assigning pool operator $A$ control over the Prepare and Commit process, and the task of keeping track of withdrawal requests. $A$ occasionally withdraws one of their deposits to accommodate users wishing to withdraw their balances; the withdrawals go directly into the contract, which ensures each user's right to withdraw a proportional share. Users need to trust the operator not to get their deposits penalized, but the operator cannot steal the coins. The trust requirement can be reduced further if the pool operator themselves contributes a large portion of the coins, as this will disincentivize them from staking maliciously.
\item \textbf{2-of-3}: a user makes a deposit transaction and specifies as validation code a 2-of-3 multisig, consisting of (i) the user's online key, (ii) the pool operator's online key, and (iii) the user's offline backup key. The need for two keys to sign off on a Prepare, Commit or withdrawal minimizes key theft risk, and a liveness failure on the pool side can be handled by the user using their backup key.
\item \textbf{Multisig managed}: users $B_1 \ldots B_n$ send coins to a pool contract that works in the exact same way as a centrally managed pool, except that a multisig of several semi-trusted parties needs to approve each Prepare and Commit message.
\item \textbf{Collective}: users $B_1 \ldots B_n$ send coins to a pool contract that works in the exact same way as a centrally managed pool, except that a threshold signature of at least a portion $p$ of the users themselves (say, $p = 0.6$) needs to approve each Prepare and Commit message.
\end{itemize}
We expect pools of different types to emerge to accommodate smaller users. In the long term, techniques such as blockchain sharding will make it possible to increase the number of users that can validate directly, and extensions to allow validators to temporarily ``drop out'' of the validator set when they are offline can mitigate liveness risk.
\section{Conclusions}
The above analysis gives a parametrized scheme for incentivization in Casper, and shows that it is a Nash equilibrium in an uncoordinated-choice model under a wide variety of settings. We then attempt to derive one possible set of specific values for the various parameters by starting from desired objectives and choosing values that best meet them. This analysis does not include non-economic attacks, as those are covered by other materials, and does not cover more advanced economic attacks, including extortion and discouragement attacks. We hope to see more research in these areas, as well as in the abstract theory of what considerations should be taken into account when designing reward and penalty schedules.

\textbf{Future Work.} We would like to see a better protocol utility function than the one in \eqref{eq:utilityfunction}. \todo{fill me in}
\textbf{Acknowledgements.} We thank Virgil Griffith for review.
%\section{References}
\bibliographystyle{abbrv}
\bibliography{ethereum}
\input{appendix.tex}
\end{document}