Added censorship rejection paper

This commit is contained in:
Vitalik Buterin 2017-08-13 04:22:09 -04:00
parent cbcfe1f8ca
commit 37cb1a848b
40 changed files with 1238 additions and 4 deletions


@@ -1,12 +1,12 @@
 import random
 import datetime
-diffs = [1215.03 * 10**12]
-hashpower = diffs[0] / 18.76
-times = [1499790856]
+diffs = [1634.61 * 10**12]
+hashpower = diffs[0] / 20.99
+times = [1502461725]
-for i in range(4008238, 6010000):
+for i in range(4144703, 6010000):
     blocktime = random.expovariate(hashpower / diffs[-1])
     adjfac = max(1 - int(blocktime / 10), -99) / 2048.
     newdiff = diffs[-1] * (1 + adjfac)
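The hunk above only updates the opening constants of the simulation. For context, a self-contained sketch of the same exponential-blocktime difficulty loop is below, using the post-commit constants and a shortened range so it finishes quickly; the `diffs.append`/`times.append` bookkeeping is an assumption about the unshown remainder of the loop body.

```python
import random

# Post-commit starting values, taken from the diff above
diffs = [1634.61 * 10**12]        # current difficulty, in hashes
hashpower = diffs[0] / 20.99      # implied hashrate: difficulty / observed blocktime
times = [1502461725]              # unix timestamp of the starting block

# Shortened range (the committed script iterates up to block 6010000)
for i in range(4144703, 4154703):
    # Block times are exponentially distributed with mean difficulty / hashpower
    blocktime = random.expovariate(hashpower / diffs[-1])
    # Homestead-style adjustment: +1/2048 for fast blocks, clamped at -99/2048
    adjfac = max(1 - int(blocktime / 10), -99) / 2048.
    newdiff = diffs[-1] * (1 + adjfac)
    diffs.append(newdiff)                  # assumed bookkeeping (not in the hunk)
    times.append(times[-1] + blocktime)

print("simulated blocks: %d, final difficulty: %.2f TH" % (len(diffs) - 1, diffs[-1] / 10**12))
```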

BIN papers/Censorship1.png (new file, 16 KiB)
BIN papers/Censorship2.png (new file, 4.5 KiB)
BIN papers/Censorship3.png (new file, 11 KiB)
BIN papers/Censorship3p5.png (new file, 13 KiB)
BIN papers/Censorship4.png (new file, 15 KiB)
BIN papers/Censorship5.png (new file, 26 KiB)
BIN papers/Censorship6.png (new file, 15 KiB)
BIN papers/Censorship6b.png (new file, 15 KiB)
BIN papers/Censorship6c.png (new file, 16 KiB)
BIN papers/Censorship6d.png (new file, 15 KiB)
BIN papers/Censorship7.png (new file, 14 KiB)

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,12 @@
\relax
\@writefile{toc}{\contentsline {section}{\numberline {1}Introduction}{1}}
\@writefile{toc}{\contentsline {section}{\numberline {2}Rewards and Penalties}{2}}
\@writefile{toc}{\contentsline {section}{\numberline {3}Claims}{4}}
\@writefile{toc}{\contentsline {section}{\numberline {4}Individual choice analysis}{5}}
\@writefile{toc}{\contentsline {section}{\numberline {5}Collective choice model}{5}}
\@writefile{toc}{\contentsline {section}{\numberline {6}Griefing factor analysis}{7}}
\@writefile{toc}{\contentsline {section}{\numberline {7}Pools}{9}}
\bibstyle{abbrv}
\bibdata{main}
\@writefile{toc}{\contentsline {section}{\numberline {8}Conclusions}{11}}
\@writefile{toc}{\contentsline {section}{\numberline {9}References}{11}}


@@ -0,0 +1,234 @@
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015/Debian) (preloaded format=pdflatex 2017.6.27) 7 AUG 2017 03:41
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
**casper_basic_structure.tex
(./casper_basic_structure.tex
LaTeX2e <2016/02/01>
Babel <3.9q> and hyphenation patterns for 3 language(s) loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size12.clo
File: size12.clo 2014/09/29 v1.4h Standard LaTeX file (size option)
)
\c@part=\count79
\c@section=\count80
\c@subsection=\count81
\c@subsubsection=\count82
\c@paragraph=\count83
\c@subparagraph=\count84
\c@figure=\count85
\c@table=\count86
\abovecaptionskip=\skip41
\belowcaptionskip=\skip42
\bibindent=\dimen102
)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphicx.sty
Package: graphicx 2014/10/28 v1.0g Enhanced LaTeX Graphics (DPC,SPQR)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty
Package: keyval 2014/10/28 v1.15 key=value parser (DPC)
\KV@toks@=\toks14
)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphics.sty
Package: graphics 2016/01/03 v1.0q Standard LaTeX Graphics (DPC,SPQR)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/trig.sty
Package: trig 2016/01/03 v1.10 sin cos tan (DPC)
)
(/usr/share/texlive/texmf-dist/tex/latex/latexconfig/graphics.cfg
File: graphics.cfg 2010/04/23 v1.9 graphics configuration of TeX Live
)
Package graphics Info: Driver file: pdftex.def on input line 95.
(/usr/share/texlive/texmf-dist/tex/latex/pdftex-def/pdftex.def
File: pdftex.def 2011/05/27 v0.06d Graphics/color for pdfTeX
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/infwarerr.sty
Package: infwarerr 2010/04/08 v1.3 Providing info/warning/error messages (HO)
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ltxcmds.sty
Package: ltxcmds 2011/11/09 v1.22 LaTeX kernel commands for general use (HO)
)
\Gread@gobject=\count87
))
\Gin@req@height=\dimen103
\Gin@req@width=\dimen104
)
(/usr/share/texlive/texmf-dist/tex/latex/tools/tabularx.sty
Package: tabularx 2014/10/28 v2.10 `tabularx' package (DPC)
(/usr/share/texlive/texmf-dist/tex/latex/tools/array.sty
Package: array 2014/10/28 v2.4c Tabular extension package (FMi)
\col@sep=\dimen105
\extrarowheight=\dimen106
\NC@list=\toks15
\extratabsurround=\skip43
\backup@length=\skip44
)
\TX@col@width=\dimen107
\TX@old@table=\dimen108
\TX@old@col=\dimen109
\TX@target=\dimen110
\TX@delta=\dimen111
\TX@cols=\count88
\TX@ftn=\toks16
)
\c@definition=\count89
No file casper_basic_structure.aux.
\openout1 = `casper_basic_structure.aux'.
LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
(/usr/share/texlive/texmf-dist/tex/context/base/supp-pdf.mkii
[Loading MPS to PDF converter (version 2006.09.02).]
\scratchcounter=\count90
\scratchdimen=\dimen112
\scratchbox=\box26
\nofMPsegments=\count91
\nofMParguments=\count92
\everyMPshowfont=\toks17
\MPscratchCnt=\count93
\MPscratchDim=\dimen113
\MPnumerator=\count94
\makeMPintoPDFobject=\count95
\everyMPtoPDFconversion=\toks18
) (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/pdftexcmds.sty
Package: pdftexcmds 2011/11/29 v0.20 Utility functions of pdfTeX for LuaTeX (HO
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifluatex.sty
Package: ifluatex 2010/03/01 v1.3 Provides the ifluatex switch (HO)
Package ifluatex Info: LuaTeX not detected.
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty
Package: ifpdf 2011/01/30 v2.3 Provides the ifpdf switch (HO)
Package ifpdf Info: pdfTeX in PDF mode is detected.
)
Package pdftexcmds Info: LuaTeX not detected.
Package pdftexcmds Info: \pdf@primitive is available.
Package pdftexcmds Info: \pdf@ifprimitive is available.
Package pdftexcmds Info: \pdfdraftmode found.
)
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/epstopdf-base.sty
Package: epstopdf-base 2010/02/09 v2.5 Base part for package epstopdf
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/grfext.sty
Package: grfext 2010/08/19 v1.1 Manage graphics extensions (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/kvdefinekeys.sty
Package: kvdefinekeys 2011/04/07 v1.3 Define keys (HO)
))
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/kvoptions.sty
Package: kvoptions 2011/06/30 v3.11 Key value format for package options (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/kvsetkeys.sty
Package: kvsetkeys 2012/04/25 v1.16 Key value parser (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/etexcmds.sty
Package: etexcmds 2011/02/16 v1.5 Avoid name clashes with e-TeX commands (HO)
Package etexcmds Info: Could not find \expanded.
(etexcmds) That can mean that you are not using pdfTeX 1.50 or
(etexcmds) that some package has redefined \expanded.
(etexcmds) In the latter case, load this package earlier.
)))
Package grfext Info: Graphics extension search list:
(grfext) [.png,.pdf,.jpg,.mps,.jpeg,.jbig2,.jb2,.PNG,.PDF,.JPG,.JPE
G,.JBIG2,.JB2,.eps]
(grfext) \AppendGraphicsExtensions on input line 452.
(/usr/share/texlive/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg
File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Liv
e
))
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <14.4> on input line 14.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <7> on input line 14.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <12> on input line 20.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <8> on input line 20.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <6> on input line 20.
[1
{/var/lib/texmf/fonts/map/pdftex/updmap/pdftex.map}]
LaTeX Font Info: Try loading font information for OMS+cmr on input line 29.
(/usr/share/texlive/texmf-dist/tex/latex/base/omscmr.fd
File: omscmr.fd 2014/09/29 v2.5h Standard LaTeX font definitions
)
LaTeX Font Info: Font shape `OMS/cmr/m/n' in size <12> not available
(Font) Font shape `OMS/cmsy/m/n' tried instead on input line 29.
[2] [3] [4]
Overfull \hbox (10.7546pt too wide) in paragraph at lines 80--81
[]\OT1/cmr/m/n/12 Hence, the PRE-PARE[]COMMIT[]CONSISTENCY slash-ing con-di-tio
n poses
[]
Overfull \hbox (5.39354pt too wide) in paragraph at lines 82--83
[]\OT1/cmr/m/n/12 We are as-sum-ing that there are $[]$ pre-pares for $(\OML/cm
m/m/it/12 e; H; epoch[]; hash[]\OT1/cmr/m/n/12 )$,
[]
[5]
Overfull \hbox (15.85455pt too wide) in paragraph at lines 105--105
[]\OT1/cmr/m/n/12 Now, we need to show that, for any given to-tal de-posit size
, $[]$
[]
[6] [7]
Underfull \hbox (badness 3291) in paragraph at lines 147--147
[]|\OT1/cmr/m/n/12 Amount lost by at-
[]
Overfull \hbox (17.62482pt too wide) in paragraph at lines 147--148
[][]
[]
[8] [9] [10]
No file casper_basic_structure.bbl.
[11] (./casper_basic_structure.aux) )
Here is how much of TeX's memory you used:
1608 strings out of 494953
22878 string characters out of 6180977
83571 words of memory out of 5000000
4895 multiletter control sequences out of 15000+600000
9456 words of font info for 33 fonts, out of 8000000 for 9000
14 hyphenation exceptions out of 8191
37i,10n,23p,898b,181s stack positions out of 5000i,500n,10000p,200000b,80000s
</usr/share/texlive/texmf-dist/fonts/type1
/public/amsfonts/cm/cmbx10.pfb></usr/share/texlive/texmf-dist/fonts/type1/publi
c/amsfonts/cm/cmbx12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsf
onts/cm/cmex10.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/c
m/cmmi12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmmi
6.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmmi8.pfb><
/usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr10.pfb></usr/sh
are/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr12.pfb></usr/share/tex
live/texmf-dist/fonts/type1/public/amsfonts/cm/cmr17.pfb></usr/share/texlive/te
xmf-dist/fonts/type1/public/amsfonts/cm/cmr8.pfb></usr/share/texlive/texmf-dist
/fonts/type1/public/amsfonts/cm/cmsy10.pfb></usr/share/texlive/texmf-dist/fonts
/type1/public/amsfonts/cm/cmsy8.pfb></usr/share/texlive/texmf-dist/fonts/type1/
public/amsfonts/cm/cmti12.pfb>
Output written on casper_basic_structure.pdf (11 pages, 178027 bytes).
PDF statistics:
92 PDF objects out of 1000 (max. 8388607)
65 compressed objects within 1 object stream
0 named destinations out of 1000 (max. 500000)
1 words of extra memory for PDF output out of 10000 (max. 10000000)

Binary file not shown.


@@ -0,0 +1,197 @@
\title{Casper the Friendly Finality Gadget: Basic Structure}
\author{
Vitalik Buterin \\
Ethereum Foundation
}
\date{\today}
\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{tabularx}
\newtheorem{definition}{Definition}
\begin{document}
\maketitle
\begin{abstract}
We give an introduction to the non-economic details of Casper: the Friendly Finality Gadget, Phase 1.
\end{abstract}
\section{Introduction}
In the Casper protocol, there is a set of validators, and in each epoch validators have the ability to send two kinds of messages: $$[PREPARE, epoch, hash, epoch_{source}, hash_{source}]$$ and $$[COMMIT, epoch, hash]$$
Each validator has a \textit{deposit size}; when a validator joins, their deposit size is equal to the number of coins that they deposited, and from then on each validator's deposit size rises and falls as the validator receives rewards and penalties. For the rest of this paper, when we say ``$\frac{2}{3}$ of validators'', we are referring to a \textit{deposit-weighted} fraction; that is, a set of validators whose combined deposit size equals at least $\frac{2}{3}$ of the total deposit size of the entire set of validators. We also use ``$\frac{2}{3}$ commits'' as shorthand for ``commits from $\frac{2}{3}$ of validators''.
If, during an epoch $e$, for some specific checkpoint hash $h$, $\frac{2}{3}$ prepares are sent of the form $$[PREPARE, e, h, epoch_{source}, hash_{source}]$$ with some specific $epoch_{source}$ and some specific $hash_{source}$, then $h$ is considered \textit{justified}. If $\frac{2}{3}$ commits are sent of the form $$[COMMIT, e, h]$$ then $h$ is considered \textit{finalized}. The $hash$ is the block hash of the block at the start of the epoch, so a $hash$ being finalized means that that block, and all of its ancestors, are also finalized. An ``ideal execution'' of the protocol is one where, during every epoch, every validator prepares and commits some block hash at the start of that epoch, specifying the same $epoch_{source}$ and $hash_{source}$. We want to try to create incentives to encourage this ideal execution.
Possible deviations from this ideal execution that we want to minimize or avoid include:
\begin{itemize}
\item Any of the four slashing conditions get violated.
\item During some epoch, we do not get $\frac{2}{3}$ commits for the $hash$ that received $\frac{2}{3}$ prepares.
\item During some epoch, we do not get $\frac{2}{3}$ prepares for the same \\ $(h, hash_{source}, epoch_{source})$ combination.
\end{itemize}
From within the view of the blockchain, we only see the blockchain's own history, including messages that were passed in. In a history that contains some blockhash $H$, our strategy will be to reward validators who prepared and committed $H$, and not reward prepares or commits for any hash $H' \ne H$. The blockchain state will also keep track of the most recent hash in its own history that received $\frac{2}{3}$ prepares, and only reward prepares whose $epoch_{source}$ and $hash_{source}$ point to this hash. These two techniques will help to ``coordinate'' validators toward preparing and committing a single hash with a single source, as required by the protocol.
\section{Rewards and Penalties}
We define the following constants and functions:
\begin{itemize}
\item $BIR(D)$: determines the base interest rate paid to each validator, taking as an input the current total quantity of deposited ether.
\item $BP(D, e, LFE)$: determines the ``base penalty constant'' - a value expressed as a percentage rate that is used as the ``scaling factor'' for all penalties; for example, if at the current time $BP(...) = 0.001$, then a penalty of size $1.5$ means a validator loses $0.15\%$ of their deposit. Takes as inputs the current total quantity of deposited ether $D$, the current epoch $e$ and the last finalized epoch $LFE$. Note that in a ``perfect'' protocol execution, $e - LFE$ always equals $1$.
\item $NCP$ (``non-commit penalty''): the penalty for not committing, if there was a justified hash which the validator \textit{could} have committed
\item $NCCP(\alpha)$ (``non-commit collective penalty''): if $\alpha$ of validators are not seen to have committed during an epoch, and that epoch had a justified hash so any validator \textit{could} have committed, then all validators are charged a penalty proportional to $NCCP(\alpha)$. Must be monotonically increasing, and satisfy $NCCP(0) = 0$.
\item $NPP$ (``non-prepare penalty''): the penalty for not preparing
\item $NPCP(\alpha)$ (``non-prepare collective penalty''): if $\alpha$ of validators are not seen to have prepared during an epoch, then all validators are charged a penalty proportional to $NPCP(\alpha)$. Must be monotonically increasing, and satisfy $NPCP(0) = 0$.
\end{itemize}
Note that preparing and committing does not guarantee that the validator will not incur $NPP$ and $NCP$; it could be the case that either because of very high network latency or a malicious majority censorship attack, the prepares and commits are not included into the blockchain in time and so the incentivization mechanism does not know about them. For $NPCP$ and $NCCP$ similarly, the $\alpha$ input is the portion of validators whose prepares and commits are \textit{included}, not the portion of validators who \textit{tried to send} prepares and commits.
When we talk about preparing and committing the ``correct value", we are referring to the $hash$ and $epoch_{source}$ and $hash_{source}$ recommended by the protocol state, as described above.
We now define the following reward and penalty schedule, which runs every epoch.
\begin{itemize}
\item Let $D$ be the current total quantity of deposited ether, and $e - LFE$ be the number of epochs since the last finalized epoch.
\item All validators get a reward of $BIR(D)$ every epoch (e.g. if $BIR(D) = 0.0002$ then a validator with $10000$ coins deposited gets a per-epoch reward of $2$ coins)
\item If the protocol does not see a prepare from a given validator during the given epoch, they are penalized $BP(D, e, LFE) * NPP$
\item If the protocol saw prepares from a portion $p_p$ of validators during the given epoch, \textit{every} validator is penalized $BP(D, e, LFE) * NPCP(1 - p_p)$
\item If the protocol does not see a commit from a given validator during the given epoch, and a prepare was justified so a commit \textit{could have} been seen, they are penalized $BP(D, e, LFE) * NCP$.
\item If the protocol saw commits from a portion $p_c$ of validators during the given epoch, and a prepare was justified so any validator \textit{could have} committed, then \textit{every} validator is penalized $BP(D, e, LFE) * NCCP(1 - p_c)$
\end{itemize}
This is the entirety of the incentivization structure, though without functions and constants defined; we will define these later, attempting as much as possible to derive the specific values from desired objectives and first principles. For now we will only say that all constants are positive and all functions output non-negative values for any input within their range. Additionally, $NPCP(0) = NCCP(0) = 0$ and $NPCP$ and $NCCP$ must both be nondecreasing.
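As a concrete illustration, the schedule above can be sketched in code. The specific shapes chosen here for $BIR$, $BP$, $NPCP$ and $NCCP$, and the constants $k$, $k_2$, are placeholder assumptions for illustration only; the paper only constrains these functions here and derives candidate forms in later sections.

```python
import math

NPP, NCP = 2.0, 1.0   # non-prepare / non-commit penalty constants (placeholders)

def BIR(D):
    return 0.0002                          # flat base interest rate (assumed shape)

def BP(D, e, LFE, k=0.0001, k2=0.00001):
    # base penalty constant; k / D^p with p = 1/2 is an assumed exponent
    return k / D**0.5 + k2 * math.floor(math.log2(e - LFE))

def NPCP(alpha):
    return alpha / (1 - alpha)             # bounded collective penalty (section 6)

NCCP = NPCP                                # prepare/commit symmetry, by assumption

def epoch_delta(deposit, D, e, LFE, prepared, committed, p_p, p_c, justified):
    """Per-epoch change in one validator's deposit under the schedule above."""
    rate = BIR(D)                          # everyone earns the base interest rate
    bp = BP(D, e, LFE)
    if not prepared:
        rate -= bp * NPP                   # individual non-prepare penalty
    rate -= bp * NPCP(1 - p_p)             # collective non-prepare penalty (hits everyone)
    if justified:                          # commits are only expected if a prepare was justified
        if not committed:
            rate -= bp * NCP               # individual non-commit penalty
        rate -= bp * NCCP(1 - p_c)         # collective non-commit penalty
    return deposit * rate
```

For example, in a perfect epoch (everyone prepares and commits, $e - LFE = 1$) a validator with a $10000$-coin deposit gains $10000 * BIR(D) = 2$ coins, matching the example in the schedule.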
\section{Claims}
We seek to prove the following:
\begin{itemize}
\item If each validator has less than $\frac{1}{3}$ of total deposits, then preparing and committing the value suggested by the proposal mechanism is a Nash equilibrium.
\item Even if all validators collude, the ratio between the harm incurred by the protocol and the penalties paid by validators is bounded above by some constant. Note that this requires a measure of ``harm incurred by the protocol"; we will discuss this in more detail later.
\item The \textit{griefing factor}, the ratio between penalties incurred by validators who are victims of an attack and penalties incurred by the validators that carried out the attack, can be bounded above by 2, even in the case where the attacker holds a majority of the total deposits.
\end{itemize}
\section{Individual choice analysis}
The individual choice analysis is simple. Suppose that the proposal mechanism selects a hash $H$ to prepare for epoch $e$, and the Casper incentivization mechanism specifies some $epoch_{source}$ and $hash_{source}$. Because, as per the definition of a Nash equilibrium, we assume that all validators except the one validator we are analyzing are following the equilibrium strategy, we know that $\ge \frac{2}{3}$ of validators prepared in the last epoch, and so $epoch_{source} = e - 1$ and $hash_{source}$ is the direct parent of $H$.
Hence, the PREPARE\_COMMIT\_CONSISTENCY slashing condition poses no barrier to preparing $(e, H, epoch_{source}, hash_{source})$. Since, in epoch $e$, we are assuming that all other validators \textit{will} prepare these values and then commit $H$, we know $H$ will be the hash in the main chain, and so a validator will pay a penalty proportional to $NPP$ (plus a further penalty from their marginal contribution to the $NPCP$ penalty) if they do not prepare $(e, H, epoch_{source}, hash_{source})$, and they can avoid this penalty if they do prepare these values.
We are assuming that there are $\frac{2}{3}$ prepares for $(e, H, epoch_{source}, hash_{source})$, and so PREPARE\_REQ poses no barrier to committing $H$. Committing $H$ allows a validator to avoid $NCP$ (as well as their marginal contribution to $NCCP$). Hence, there is an economic incentive to commit $H$. This shows that, if the proposal mechanism succeeds at presenting to validators a single primary choice, preparing and committing the value selected by the proposal mechanism is a Nash equilibrium.
\section{Collective choice model}
To model the protocol in a collective-choice context, we will first define a \textit{protocol utility function}. The protocol utility function defines ``how well the protocol execution is doing". The protocol utility function cannot be derived mathematically; it can only be conceived and justified intuitively.
Our protocol utility function is:
$$U = \sum_{e = 0}^{e_c} -\log_2(e - LFE(e)) - M * F$$
Where:
\begin{itemize}
\item $e$ is the summation index, ranging from epoch $0$ to $e_c$, the current epoch
\item $LFE(e)$ is the last finalized epoch before $e$
\item $M$ is a very large constant
\item $F$ is 1 if a safety failure has taken place, otherwise 0
\end{itemize}
The second term in the function is easy to justify: safety failures are very bad. The first term is trickier. To see how the first term works, consider the case where every epoch $e$ such that $e \bmod N = 0$, for some fixed $N$, is finalized, and other epochs are not. The average total over each $N$-epoch slice will be roughly $\sum_{i=1}^N -\log_2(i) \approx -N * (\log_2(N) - \frac{1}{\ln(2)})$. Hence, the utility per epoch will be roughly $-\log_2(N)$. This basically states that a blockchain with some finality time $N$ has utility roughly $-\log(N)$, or in other words \textit{increasing the finality time of a blockchain by a constant factor causes a constant loss of utility}. The utility difference between 1 minute finality and 2 minute finality is the same as the utility difference between 1 hour finality and 2 hour finality.
This can be justified in two ways. First, one can intuitively argue that a user's psychological estimation of the discomfort of waiting for finality roughly matches this kind of logarithmic utility schedule. At the very least, it should be clear that the difference between 3600 second finality and 3610 second finality feels much more negligible than the difference between 1 second finality and 11 second finality, and so the claim that the difference between 10 second finality and 20 second finality is similar to the difference between 1 hour finality and 2 hour finality should not seem far-fetched. Second, one can look at various blockchain use cases, and see that they are roughly logarithmically uniformly distributed along the range of finality times between around 200 milliseconds (``Starcraft on the blockchain'') and one week (land registries and the like).
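The $N$-epoch slice estimate is a Stirling-type approximation; as a quick numeric sanity check (not part of the paper), the exact sum and the closed form already agree to well under a percent at $N = 1000$:

```python
import math

# Check: sum_{i=1}^{N} -log2(i)  ~  -N * (log2(N) - 1/ln(2))   (Stirling)
N = 1000
exact = sum(-math.log2(i) for i in range(1, N + 1))
approx = -N * (math.log2(N) - 1 / math.log(2))
rel_err = abs(exact - approx) / abs(exact)
print(exact, approx, rel_err)
```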
Now, we need to show that, for any given total deposit size, $\frac{loss\_to\_protocol\_utility}{validator\_penalties}$ is bounded. There are two ways to reduce protocol utility: (i) cause a safety failure, and (ii) have $\ge \frac{1}{3}$ of validators not prepare or not commit to prevent finality. In the first case, validators lose a large amount of deposits for violating the slashing conditions. In the second case, in a chain that has not been finalized for $e - LFE$ epochs, the penalty to attackers is $$\min(NPP * \frac{1}{3} + NPCP(\frac{1}{3}), NCP * \frac{1}{3} + NCCP(\frac{1}{3})) * BP(D, e, LFE)$$
To enforce a ratio between validator losses and loss to protocol utility, we set:
$$BP(D, e, LFE) = \frac{k}{D^p} + k_2 * \lfloor \log_2(e - LFE) \rfloor$$
The first term serves to take profits for non-committers away; the second term creates a penalty which is proportional to the loss in protocol utility.
\section{Griefing factor analysis}
Griefing factor analysis is important because it provides one way to quantify the risk to honest validators. In general, if all validators are honest, and if network latency stays below the length of an epoch, then validators face zero risk beyond the usual risks of losing or accidentally divulging access to their private keys. In the case where malicious validators exist, however, they can interfere in the protocol in ways that cause harm to both themselves and honest validators.
We can approximately define the ``griefing factor'' as follows:
\begin{definition}
A strategy used by a coalition in a given mechanism exhibits a \textit{griefing factor} $B$ if it can be shown that this strategy imposes a loss of $B * x$ to those outside the coalition at the cost of a loss of $x$ to those inside the coalition. If all strategies that cause deviations from some given baseline state exhibit griefing factors less than or equal to some bound B, then we call B a \textit{griefing factor bound}.
\end{definition}
A strategy that imposes a loss to outsiders either at no cost to a coalition, or to the benefit of a coalition, is said to have a griefing factor of infinity. Proof of work blockchains have a griefing factor bound of infinity because a 51\% coalition can double its revenue by refusing to include blocks from other participants and waiting for difficulty adjustment to reduce the difficulty. With selfish mining, the griefing factor may be infinity for coalitions of size as low as 23.21\%.
Let us start off our griefing analysis by not taking into account validator churn, so the validator set is always the same. In Casper, we can identify the following deviating strategies:
\begin{enumerate}
\item A minority of validators do not prepare, or prepare incorrect values.
\item (Mirror image of 1) A censorship attack where a majority of validators does not accept prepares from a minority of validators (or other isomorphic attacks, such as waiting for the minority to prepare hash $H1$ and then preparing $H2$, making $H2$ the dominant chain and denying the victims their rewards).
\item A minority of validators do not commit.
\item (Mirror image of 3) A censorship attack where a majority of validators does not accept commits from a minority of validators.
\end{enumerate}
Notice that, from the point of view of griefing factor analysis, it is immaterial whether or not any hash in a given epoch was justified or finalized. The Casper mechanism only pays attention to finalization in order to calculate $BP(D, e, LFE)$, the penalty scaling factor. This value scales penalties evenly for all participants, so it does not affect griefing factors.
Let us now analyze the attack types:
\begin{tabularx}{\textwidth}{|X|X|X|}
\hline
Attack & Amount lost by attacker & Amount lost by victims \\
\hline
Minority of size $\alpha < \frac{1}{2}$ non-prepares & $NPP * \alpha + NPCP(\alpha) * \alpha$ & $NPCP(\alpha) * (1-\alpha)$ \\
Majority censors $\alpha < \frac{1}{2}$ prepares & $NPCP(\alpha) * (1-\alpha)$ & $NPP * \alpha + NPCP(\alpha) * \alpha$ \\
Minority of size $\alpha < \frac{1}{2}$ non-commits & $NCP * \alpha + NCCP(\alpha) * \alpha$ & $NCCP(\alpha) * (1-\alpha)$ \\
Majority censors $\alpha < \frac{1}{2}$ commits & $NCCP(\alpha) * (1-\alpha)$ & $NCP * \alpha + NCCP(\alpha) * \alpha$ \\
\hline
\end{tabularx}
In general, we see a perfect symmetry between the non-commit case and the non-prepare case, so we can assume $\frac{NCCP(\alpha)}{NCP} = \frac{NPCP(\alpha)}{NPP}$. Also, from a protocol utility standpoint, we can make the observation that seeing $\frac{1}{3} \le p_c < \frac{2}{3}$ commits is better than seeing fewer commits, as it gives at least some economic security against finality reversions, so we do want to reward this scenario more than the scenario where we get $\frac{1}{3} \le p_p < \frac{2}{3}$ prepares. Another way to view the situation is to observe that $\frac{1}{3}$ non-prepares causes \textit{everyone} to non-commit, so it should be treated with equal severity.
In the normal case, anything less than $\frac{1}{3}$ commits provides no economic security, so we can treat $p_c < \frac{1}{3}$ commits as equivalent to no commits; this thus suggests $NPP = 2 * NCP$. We can also normalize $NCP = 1$.
Now, let us analyze the griefing factors, to try to determine an optimal shape for $NCCP$. The griefing factor for non-committing is:
$$\frac{(1-\alpha) * NCCP(\alpha)}{\alpha * (1 + NCCP(\alpha))}$$
The griefing factor for censoring is the inverse of this. If we want the griefing factor for non-committing to equal one, then we could compute:
$$\alpha * (1 + NCCP(\alpha)) = (1-\alpha) * NCCP(\alpha)$$
$$\frac{1 + NCCP(\alpha)}{NCCP(\alpha)} = \frac{1-\alpha}{\alpha}$$
$$\frac{1}{NCCP(\alpha)} = \frac{1-\alpha}{\alpha} - 1$$
$$NCCP(\alpha) = \frac{\alpha}{1-2\alpha}$$
Note that for $\alpha = \frac{1}{2}$, this would set the $NCCP$ to infinity. Hence, with this design a griefing factor of $1$ is infeasible. We \textit{can} achieve that effect in a different way: by making $NCP$ itself a function of $\alpha$; in this case, $NCCP = 1$ and $NCP = \max(0, 1 - 2 * \alpha)$ would achieve the desired effect. If we want to keep the formula for $NCP$ constant, and the formula for $NCCP$ reasonably simple and bounded, then one alternative is to set $NCCP(\alpha) = \frac{\alpha}{1-\alpha}$; this keeps griefing factors bounded between $\frac{1}{2}$ and $2$.
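The claimed bound can be checked numerically. This is a quick sanity check, not from the paper: with $NCCP(\alpha) = \frac{\alpha}{1-\alpha}$ and $NCP = 1$, the non-commit griefing factor from the formula above simplifies to $1 - \alpha$, and its inverse (the censorship case) to $\frac{1}{1-\alpha}$, both inside $[\frac{1}{2}, 2]$ for all $\alpha < \frac{1}{2}$:

```python
def NCCP(alpha):
    # the bounded alternative suggested above
    return alpha / (1 - alpha)

def gf_noncommit(alpha, NCP=1.0):
    # victims' loss / attackers' loss, from the non-commit row of the table
    return ((1 - alpha) * NCCP(alpha)) / (alpha * (NCP + NCCP(alpha)))

# sweep minority sizes alpha in (0, 1/2)
gfs = [gf_noncommit(i / 100) for i in range(1, 50)]
print(min(gfs), max(1 / g for g in gfs))   # the censorship griefing factor is the inverse
```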
\section{Pools}
In a traditional (i.e. not sharded or otherwise scalable) blockchain, there is a limit to the number of validators that can be supported, because each validator imposes a substantial amount of overhead on the system. If we accept a maximum overhead of two consensus messages per second, and an epoch time of 1400 seconds, then the system can handle 1400 validators (not 2800, because each validator sends both a prepare and a commit each epoch). Given that the number of individual users interested in staking will likely exceed 1400, this necessarily means that most users will participate through some kind of ``stake pool''.
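The message-budget arithmetic above, spelled out (all numbers taken directly from the text):

```python
msgs_per_sec = 2          # accepted consensus-message overhead
epoch_len = 1400          # seconds per epoch
msgs_per_validator = 2    # each validator sends one prepare and one commit per epoch
max_validators = msgs_per_sec * epoch_len // msgs_per_validator
print(max_validators)     # 1400
```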
Two other reasons to participate in stake pools are (i) to mitigate \textit{key theft risk} (i.e. an attacker hacking into their online machine and stealing the key), and (ii) to mitigate \textit{liveness risk}, the possibility that the validator node will go offline, perhaps because the operator does not have the time to manage a high-uptime setup.
There are several possible kinds of stake pools:
\begin{itemize}
\item \textbf{Fully centrally managed}: users $B_1 ... B_n$ send coins to pool operator $A$. $A$ makes a few deposit transactions containing their combined balances, fully controls the prepare and commit process, and occasionally withdraws one of their deposits to accommodate users wishing to withdraw their balances. Requires complete trust.
\item \textbf{Centrally managed but trust-reduced}: users $B_1 ... B_n$ send coins to a pool contract. The contract sends a few deposit transactions containing their combined balances, assigning pool operator $A$ control over the prepare and commit process, and the task of keeping track of withdrawal requests. $A$ occasionally withdraws one of their deposits to accommodate users wishing to withdraw their balances; the withdrawals go directly into the contract, which ensures each user's right to withdraw a proportional share. Users need to trust the operator not to lose their coins, but the operator cannot steal the coins. The trust requirement can be reduced further if the pool operator themselves contributes a large portion of the coins, as this will disincentivize them from staking maliciously.
\item \textbf{2-of-3}: a user makes a deposit transaction and specifies as validation code a 2-of-3 multisig, consisting of (i) the user's online key, (ii) the pool operator's online key, and (iii) the user's offline backup key. The need for two keys to sign off on a prepare, commit or withdraw minimizes key theft risk, and a liveness failure on the pool side can be handled by the user sending their backup key to another pool.
\item \textbf{Multisig managed}: users $B_1 ... B_n$ send coins to a pool contract that works in the exact same way as a centrally managed pool, except that a multisig of several semi-trusted parties needs to approve each prepare and commit message.
\item \textbf{Collective}: users $B_1 ... B_n$ send coins to a pool contract that works in the exact same way as a centrally managed pool, except that a threshold signature of at least portion $p$ of the users themselves (say, $p = 0.6$) needs to approve each prepare and commit message.
\end{itemize}
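The 2-of-3 arrangement above can be illustrated with a short sketch (hypothetical helper names and a toy stand-in for signature verification, not Casper's actual validation code):

```python
def toy_verify(key, message, signature):
    # Stand-in for a real signature check: here a "signature" is simply
    # the pair (key, message). Purely illustrative.
    return signature == (key, message)

def make_2_of_3_validator(user_key, pool_key, backup_key, verify=toy_verify):
    """Return validation code approving a message iff at least two of
    the three designated keys have signed it."""
    keys = (user_key, pool_key, backup_key)

    def validate(message, signatures):
        # signatures: mapping from key to that key's signature.
        good = sum(1 for k in keys
                   if k in signatures and verify(k, message, signatures[k]))
        return good >= 2

    return validate
```

In normal operation the user's online key and the pool's key co-sign each prepare and commit; if the pool goes offline, the user's backup key substitutes for the pool's.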
We expect pools of different types to emerge to accommodate smaller users. In the long term, techniques such as blockchain sharding will make it possible to increase the number of users that can participate as validators directly, and techniques that allow validators to temporarily ``log out'' of the validator set while offline can mitigate liveness risk.
\section{Conclusions}
The above analysis gives a parametrized scheme for incentivizing in Casper, and shows that it is a Nash equilibrium in an uncoordinated-choice model with a wide variety of settings. We then attempt to derive one possible set of specific values for the various parameters by starting from desired objectives, and choosing values that best meet the desired objectives. This analysis does not include non-economic attacks, as those are covered by other materials, and does not cover more advanced economic attacks, including extortion and discouragement attacks. We hope to see more research in these areas, as well as in the abstract theory of what considerations should be taken into account when designing reward and penalty schedules.
\section{References}
\bibliographystyle{abbrv}
\bibliography{main}
Ayelet Sapirshtein, Yonatan Sompolinsky and Aviv Zohar, ``Optimal selfish mining strategies in Bitcoin'': https://arxiv.org/pdf/1507.06183.pdf
\end{document}


@ -0,0 +1,12 @@
\relax
\@writefile{toc}{\contentsline {section}{\numberline {1}Introduction}{1}}
\@writefile{toc}{\contentsline {section}{\numberline {2}Rewards and Penalties}{3}}
\@writefile{toc}{\contentsline {section}{\numberline {3}Claims}{5}}
\@writefile{toc}{\contentsline {section}{\numberline {4}Individual choice analysis}{5}}
\@writefile{toc}{\contentsline {section}{\numberline {5}Collective choice model}{6}}
\@writefile{toc}{\contentsline {section}{\numberline {6}Griefing factor analysis}{7}}
\@writefile{toc}{\contentsline {section}{\numberline {7}Pools}{10}}
\@writefile{toc}{\contentsline {section}{\numberline {8}Conclusions}{11}}
\bibstyle{abbrv}
\bibdata{main}
\@writefile{toc}{\contentsline {section}{\numberline {9}References}{12}}


@ -0,0 +1,242 @@
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015/Debian) (preloaded format=pdflatex 2017.6.27) 25 JUL 2017 00:09
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
**casper_economics_basic.tex
(./casper_economics_basic.tex
LaTeX2e <2016/02/01>
Babel <3.9q> and hyphenation patterns for 3 language(s) loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size12.clo
File: size12.clo 2014/09/29 v1.4h Standard LaTeX file (size option)
)
\c@part=\count79
\c@section=\count80
\c@subsection=\count81
\c@subsubsection=\count82
\c@paragraph=\count83
\c@subparagraph=\count84
\c@figure=\count85
\c@table=\count86
\abovecaptionskip=\skip41
\belowcaptionskip=\skip42
\bibindent=\dimen102
)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphicx.sty
Package: graphicx 2014/10/28 v1.0g Enhanced LaTeX Graphics (DPC,SPQR)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty
Package: keyval 2014/10/28 v1.15 key=value parser (DPC)
\KV@toks@=\toks14
)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphics.sty
Package: graphics 2016/01/03 v1.0q Standard LaTeX Graphics (DPC,SPQR)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/trig.sty
Package: trig 2016/01/03 v1.10 sin cos tan (DPC)
)
(/usr/share/texlive/texmf-dist/tex/latex/latexconfig/graphics.cfg
File: graphics.cfg 2010/04/23 v1.9 graphics configuration of TeX Live
)
Package graphics Info: Driver file: pdftex.def on input line 95.
(/usr/share/texlive/texmf-dist/tex/latex/pdftex-def/pdftex.def
File: pdftex.def 2011/05/27 v0.06d Graphics/color for pdfTeX
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/infwarerr.sty
Package: infwarerr 2010/04/08 v1.3 Providing info/warning/error messages (HO)
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ltxcmds.sty
Package: ltxcmds 2011/11/09 v1.22 LaTeX kernel commands for general use (HO)
)
\Gread@gobject=\count87
))
\Gin@req@height=\dimen103
\Gin@req@width=\dimen104
)
(/usr/share/texlive/texmf-dist/tex/latex/tools/tabularx.sty
Package: tabularx 2014/10/28 v2.10 `tabularx' package (DPC)
(/usr/share/texlive/texmf-dist/tex/latex/tools/array.sty
Package: array 2014/10/28 v2.4c Tabular extension package (FMi)
\col@sep=\dimen105
\extrarowheight=\dimen106
\NC@list=\toks15
\extratabsurround=\skip43
\backup@length=\skip44
)
\TX@col@width=\dimen107
\TX@old@table=\dimen108
\TX@old@col=\dimen109
\TX@target=\dimen110
\TX@delta=\dimen111
\TX@cols=\count88
\TX@ftn=\toks16
)
\c@definition=\count89
(./casper_economics_basic.aux)
\openout1 = `casper_economics_basic.aux'.
LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 13.
LaTeX Font Info: ... okay on input line 13.
(/usr/share/texlive/texmf-dist/tex/context/base/supp-pdf.mkii
[Loading MPS to PDF converter (version 2006.09.02).]
\scratchcounter=\count90
\scratchdimen=\dimen112
\scratchbox=\box26
\nofMPsegments=\count91
\nofMParguments=\count92
\everyMPshowfont=\toks17
\MPscratchCnt=\count93
\MPscratchDim=\dimen113
\MPnumerator=\count94
\makeMPintoPDFobject=\count95
\everyMPtoPDFconversion=\toks18
) (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/pdftexcmds.sty
Package: pdftexcmds 2011/11/29 v0.20 Utility functions of pdfTeX for LuaTeX (HO
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifluatex.sty
Package: ifluatex 2010/03/01 v1.3 Provides the ifluatex switch (HO)
Package ifluatex Info: LuaTeX not detected.
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty
Package: ifpdf 2011/01/30 v2.3 Provides the ifpdf switch (HO)
Package ifpdf Info: pdfTeX in PDF mode is detected.
)
Package pdftexcmds Info: LuaTeX not detected.
Package pdftexcmds Info: \pdf@primitive is available.
Package pdftexcmds Info: \pdf@ifprimitive is available.
Package pdftexcmds Info: \pdfdraftmode found.
)
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/epstopdf-base.sty
Package: epstopdf-base 2010/02/09 v2.5 Base part for package epstopdf
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/grfext.sty
Package: grfext 2010/08/19 v1.1 Manage graphics extensions (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/kvdefinekeys.sty
Package: kvdefinekeys 2011/04/07 v1.3 Define keys (HO)
))
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/kvoptions.sty
Package: kvoptions 2011/06/30 v3.11 Key value format for package options (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/kvsetkeys.sty
Package: kvsetkeys 2012/04/25 v1.16 Key value parser (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/etexcmds.sty
Package: etexcmds 2011/02/16 v1.5 Avoid name clashes with e-TeX commands (HO)
Package etexcmds Info: Could not find \expanded.
(etexcmds) That can mean that you are not using pdfTeX 1.50 or
(etexcmds) that some package has redefined \expanded.
(etexcmds) In the latter case, load this package earlier.
)))
Package grfext Info: Graphics extension search list:
(grfext) [.png,.pdf,.jpg,.mps,.jpeg,.jbig2,.jb2,.PNG,.PDF,.JPG,.JPE
G,.JBIG2,.JB2,.eps]
(grfext) \AppendGraphicsExtensions on input line 452.
(/usr/share/texlive/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg
File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Liv
e
))
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <14.4> on input line 14.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <7> on input line 14.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <10.95> on input line 16.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <8> on input line 16.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <6> on input line 16.
Overfull \hbox (0.3523pt too wide) in paragraph at lines 16--17
[]\OT1/cmr/m/n/10.95 We give an in-tro-duc-tion to the in-cen-tives in the Casp
er the Friendly
[]
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <12> on input line 20.
[1
{/var/lib/texmf/fonts/map/pdftex/updmap/pdftex.map}]
LaTeX Font Info: Try loading font information for OMS+cmr on input line 29.
(/usr/share/texlive/texmf-dist/tex/latex/base/omscmr.fd
File: omscmr.fd 2014/09/29 v2.5h Standard LaTeX font definitions
)
LaTeX Font Info: Font shape `OMS/cmr/m/n' in size <12> not available
(Font) Font shape `OMS/cmsy/m/n' tried instead on input line 29.
[2] [3] [4]
Overfull \hbox (10.7546pt too wide) in paragraph at lines 80--81
[]\OT1/cmr/m/n/12 Hence, the PRE-PARE[]COMMIT[]CONSISTENCY slash-ing con-di-tio
n poses
[]
Overfull \hbox (5.39354pt too wide) in paragraph at lines 82--83
[]\OT1/cmr/m/n/12 We are as-sum-ing that there are $[]$ pre-pares for $(\OML/cm
m/m/it/12 e; H; epoch[]; hash[]\OT1/cmr/m/n/12 )$,
[]
[5] [6]
Overfull \hbox (15.85455pt too wide) in paragraph at lines 105--105
[]\OT1/cmr/m/n/12 Now, we need to show that, for any given to-tal de-posit size
, $[]$
[]
[7]
Underfull \hbox (badness 3291) in paragraph at lines 147--147
[]|\OT1/cmr/m/n/12 Amount lost by at-
[]
Overfull \hbox (17.62482pt too wide) in paragraph at lines 147--148
[][]
[]
[8] [9] [10] [11]
No file casper_economics_basic.bbl.
[12] (./casper_economics_basic.aux) )
Here is how much of TeX's memory you used:
1613 strings out of 494953
22986 string characters out of 6180977
84568 words of memory out of 5000000
4898 multiletter control sequences out of 15000+600000
10072 words of font info for 35 fonts, out of 8000000 for 9000
14 hyphenation exceptions out of 8191
37i,10n,23p,1059b,181s stack positions out of 5000i,500n,10000p,200000b,80000s
</usr/share/texlive/texmf-dist/fonts/type1
/public/amsfonts/cm/cmbx10.pfb></usr/share/texlive/texmf-dist/fonts/type1/publi
c/amsfonts/cm/cmbx12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsf
onts/cm/cmex10.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/c
m/cmmi12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmmi
6.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmmi8.pfb><
/usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr10.pfb></usr/sh
are/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr12.pfb></usr/share/tex
live/texmf-dist/fonts/type1/public/amsfonts/cm/cmr17.pfb></usr/share/texlive/te
xmf-dist/fonts/type1/public/amsfonts/cm/cmr8.pfb></usr/share/texlive/texmf-dist
/fonts/type1/public/amsfonts/cm/cmsy10.pfb></usr/share/texlive/texmf-dist/fonts
/type1/public/amsfonts/cm/cmsy8.pfb></usr/share/texlive/texmf-dist/fonts/type1/
public/amsfonts/cm/cmti12.pfb>
Output written on casper_economics_basic.pdf (12 pages, 181449 bytes).
PDF statistics:
95 PDF objects out of 1000 (max. 8388607)
67 compressed objects within 1 object stream
0 named destinations out of 1000 (max. 500000)
1 words of extra memory for PDF output out of 10000 (max. 10000000)


View File

@ -0,0 +1,11 @@
\relax
\@writefile{toc}{\contentsline {section}{\numberline {1}Introduction}{2}}
\@writefile{toc}{\contentsline {section}{\numberline {2}Forgiving Rejection}{3}}
\@writefile{toc}{\contentsline {section}{\numberline {3}Unforgiving Rejection}{7}}
\@writefile{toc}{\contentsline {section}{\numberline {4}Incentives in the Uncoordinated Majority Model}{9}}
\@writefile{toc}{\contentsline {section}{\numberline {5}Incentives in the Majority Attacker Model}{10}}
\@writefile{toc}{\contentsline {section}{\numberline {6}Timelock Timers}{13}}
\@writefile{toc}{\contentsline {section}{\numberline {7}Proof of Work Timers}{14}}
\@writefile{toc}{\contentsline {section}{\numberline {8}Censorship and Transaction Fees}{16}}
\bibstyle{abbrv}
\bibdata{main}


@ -0,0 +1,361 @@
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015/Debian) (preloaded format=pdflatex 2017.6.27) 13 AUG 2017 04:22
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
**censorship_rejection.tex
(./censorship_rejection.tex
LaTeX2e <2016/02/01>
Babel <3.9q> and hyphenation patterns for 3 language(s) loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size12.clo
File: size12.clo 2014/09/29 v1.4h Standard LaTeX file (size option)
)
\c@part=\count79
\c@section=\count80
\c@subsection=\count81
\c@subsubsection=\count82
\c@paragraph=\count83
\c@subparagraph=\count84
\c@figure=\count85
\c@table=\count86
\abovecaptionskip=\skip41
\belowcaptionskip=\skip42
\bibindent=\dimen102
)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphicx.sty
Package: graphicx 2014/10/28 v1.0g Enhanced LaTeX Graphics (DPC,SPQR)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty
Package: keyval 2014/10/28 v1.15 key=value parser (DPC)
\KV@toks@=\toks14
)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphics.sty
Package: graphics 2016/01/03 v1.0q Standard LaTeX Graphics (DPC,SPQR)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/trig.sty
Package: trig 2016/01/03 v1.10 sin cos tan (DPC)
)
(/usr/share/texlive/texmf-dist/tex/latex/latexconfig/graphics.cfg
File: graphics.cfg 2010/04/23 v1.9 graphics configuration of TeX Live
)
Package graphics Info: Driver file: pdftex.def on input line 95.
(/usr/share/texlive/texmf-dist/tex/latex/pdftex-def/pdftex.def
File: pdftex.def 2011/05/27 v0.06d Graphics/color for pdfTeX
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/infwarerr.sty
Package: infwarerr 2010/04/08 v1.3 Providing info/warning/error messages (HO)
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ltxcmds.sty
Package: ltxcmds 2011/11/09 v1.22 LaTeX kernel commands for general use (HO)
)
\Gread@gobject=\count87
))
\Gin@req@height=\dimen103
\Gin@req@width=\dimen104
)
(./censorship_rejection.aux)
\openout1 = `censorship_rejection.aux'.
LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 12.
LaTeX Font Info: ... okay on input line 12.
LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 12.
LaTeX Font Info: ... okay on input line 12.
LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 12.
LaTeX Font Info: ... okay on input line 12.
LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 12.
LaTeX Font Info: ... okay on input line 12.
LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 12.
LaTeX Font Info: ... okay on input line 12.
LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 12.
LaTeX Font Info: ... okay on input line 12.
(/usr/share/texlive/texmf-dist/tex/context/base/supp-pdf.mkii
[Loading MPS to PDF converter (version 2006.09.02).]
\scratchcounter=\count88
\scratchdimen=\dimen105
\scratchbox=\box26
\nofMPsegments=\count89
\nofMParguments=\count90
\everyMPshowfont=\toks15
\MPscratchCnt=\count91
\MPscratchDim=\dimen106
\MPnumerator=\count92
\makeMPintoPDFobject=\count93
\everyMPtoPDFconversion=\toks16
) (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/pdftexcmds.sty
Package: pdftexcmds 2011/11/29 v0.20 Utility functions of pdfTeX for LuaTeX (HO
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifluatex.sty
Package: ifluatex 2010/03/01 v1.3 Provides the ifluatex switch (HO)
Package ifluatex Info: LuaTeX not detected.
)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty
Package: ifpdf 2011/01/30 v2.3 Provides the ifpdf switch (HO)
Package ifpdf Info: pdfTeX in PDF mode is detected.
)
Package pdftexcmds Info: LuaTeX not detected.
Package pdftexcmds Info: \pdf@primitive is available.
Package pdftexcmds Info: \pdf@ifprimitive is available.
Package pdftexcmds Info: \pdfdraftmode found.
)
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/epstopdf-base.sty
Package: epstopdf-base 2010/02/09 v2.5 Base part for package epstopdf
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/grfext.sty
Package: grfext 2010/08/19 v1.1 Manage graphics extensions (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/kvdefinekeys.sty
Package: kvdefinekeys 2011/04/07 v1.3 Define keys (HO)
))
(/usr/share/texlive/texmf-dist/tex/latex/oberdiek/kvoptions.sty
Package: kvoptions 2011/06/30 v3.11 Key value format for package options (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/kvsetkeys.sty
Package: kvsetkeys 2012/04/25 v1.16 Key value parser (HO)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/etexcmds.sty
Package: etexcmds 2011/02/16 v1.5 Avoid name clashes with e-TeX commands (HO)
Package etexcmds Info: Could not find \expanded.
(etexcmds) That can mean that you are not using pdfTeX 1.50 or
(etexcmds) that some package has redefined \expanded.
(etexcmds) In the latter case, load this package earlier.
)))
Package grfext Info: Graphics extension search list:
(grfext) [.png,.pdf,.jpg,.mps,.jpeg,.jbig2,.jb2,.PNG,.PDF,.JPG,.JPE
G,.JBIG2,.JB2,.eps]
(grfext) \AppendGraphicsExtensions on input line 452.
(/usr/share/texlive/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg
File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Liv
e
))
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <14.4> on input line 13.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <7> on input line 13.
Overfull \hbox (3.05936pt too wide) in paragraph at lines 19--20
[]\OT1/cmr/m/n/10.95 The main prior work in this di-rec-tion is Casper [cite],
which achieves
[]
[1
{/var/lib/texmf/fonts/map/pdftex/updmap/pdftex.map}]
<Censorship1.png, id=11, 465.74pt x 171.64125pt>
File: Censorship1.png Graphic file (type png)
<use Censorship1.png>
Package pdftex.def Info: Censorship1.png used on input line 27.
(pdftex.def) Requested size: 401.50146pt x 147.97263pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 27--28
[][]
[]
Overfull \hbox (20.55408pt too wide) in paragraph at lines 33--34
\OT1/cmr/m/n/12 For sim-plic-ity, let us as-sume that there ex-ists a sim-ple N
akamoto-style blockchain,
[]
<Censorship2.png, id=13, 368.37625pt x 156.585pt> [2 <./Censorship1.png>]
File: Censorship2.png Graphic file (type png)
<use Censorship2.png>
Package pdftex.def Info: Censorship2.png used on input line 35.
(pdftex.def) Requested size: 401.50146pt x 170.6647pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 35--36
[][]
[]
<Censorship3.png, id=20, 372.39125pt x 261.97874pt>
File: Censorship3.png Graphic file (type png)
<use Censorship3.png>
Package pdftex.def Info: Censorship3.png used on input line 41.
(pdftex.def) Requested size: 401.50146pt x 282.46912pt.
<Censorship3p5.png, id=21, 457.71pt x 261.97874pt>
File: Censorship3p5.png Graphic file (type png)
<use Censorship3p5.png>
Package pdftex.def Info: Censorship3p5.png used on input line 42.
(pdftex.def) Requested size: 401.50146pt x 229.81447pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 41--43
[][]
[]
Overfull \hbox (11.50146pt too wide) in paragraph at lines 41--43
[]
[]
[3 <./Censorship2.png>] [4 <./Censorship3.png> <./Censorship3p5.png>]
<Censorship4.png, id=31, 502.87875pt x 237.88875pt>
File: Censorship4.png Graphic file (type png)
<use Censorship4.png>
Package pdftex.def Info: Censorship4.png used on input line 50.
(pdftex.def) Requested size: 401.50146pt x 189.9301pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 50--51
[][]
[]
[5 <./Censorship4.png>] [6]
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <12> on input line 66.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <8> on input line 66.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <6> on input line 66.
<Censorship5.png, id=39, 565.11125pt x 346.29375pt>
File: Censorship5.png Graphic file (type png)
<use Censorship5.png>
Package pdftex.def Info: Censorship5.png used on input line 70.
(pdftex.def) Requested size: 401.50146pt x 246.03935pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 70--71
[][]
[]
[7] [8 <./Censorship5.png>]
File: Censorship3p5.png Graphic file (type png)
<use Censorship3p5.png>
Package pdftex.def Info: Censorship3p5.png used on input line 84.
(pdftex.def) Requested size: 401.50146pt x 229.81447pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 84--85
[][]
[]
[9] [10] <Censorship6b.png, id=56, 693.59125pt x 206.7725pt>
File: Censorship6b.png Graphic file (type png)
<use Censorship6b.png>
Package pdftex.def Info: Censorship6b.png used on input line 100.
(pdftex.def) Requested size: 401.50146pt x 119.6978pt.
<Censorship6.png, id=57, 638.385pt x 211.79124pt>
File: Censorship6.png Graphic file (type png)
<use Censorship6.png>
Package pdftex.def Info: Censorship6.png used on input line 101.
(pdftex.def) Requested size: 401.50146pt x 133.20297pt.
<Censorship6c.png, id=58, 638.385pt x 211.79124pt>
File: Censorship6c.png Graphic file (type png)
<use Censorship6c.png>
Package pdftex.def Info: Censorship6c.png used on input line 102.
(pdftex.def) Requested size: 401.50146pt x 133.20297pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 100--103
[][]
[]
Overfull \hbox (11.50146pt too wide) in paragraph at lines 100--103
[]
[]
Overfull \hbox (11.50146pt too wide) in paragraph at lines 100--103
[]
[]
[11 <./Censorship6b.png> <./Censorship6.png>]
<Censorship6d.png, id=66, 638.385pt x 206.7725pt>
File: Censorship6d.png Graphic file (type png)
<use Censorship6d.png>
Package pdftex.def Info: Censorship6d.png used on input line 108.
(pdftex.def) Requested size: 401.50146pt x 130.0465pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 108--109
[][]
[]
[12 <./Censorship6c.png> <./Censorship6d.png>] [13]
<Censorship7.png, id=81, 402.50375pt x 228.855pt>
File: Censorship7.png Graphic file (type png)
<use Censorship7.png>
Package pdftex.def Info: Censorship7.png used on input line 137.
(pdftex.def) Requested size: 401.50146pt x 228.28522pt.
Overfull \hbox (29.12628pt too wide) in paragraph at lines 137--138
[][]
[]
[14] [15 <./Censorship7.png>] [16]
No file censorship_rejection.bbl.
LaTeX Font Info: Try loading font information for OMS+cmr on input line 157.
(/usr/share/texlive/texmf-dist/tex/latex/base/omscmr.fd
File: omscmr.fd 2014/09/29 v2.5h Standard LaTeX font definitions
)
LaTeX Font Info: Font shape `OMS/cmr/m/n' in size <12> not available
(Font) Font shape `OMS/cmsy/m/n' tried instead on input line 157.
Overfull \hbox (102.56387pt too wide) in paragraph at lines 157--158
[]\OT1/cmr/m/n/12 Casper is cur-rently a work in progress. See https://medium.c
om/@VitalikButerin/minimal-
[]
Overfull \hbox (160.74289pt too wide) in paragraph at lines 157--158
\OT1/cmr/m/n/12 slashing-conditions-20f0b500fc6c, https://github.com/ethereum/r
esearch/tree/master/casper4/papers
[]
Overfull \hbox (161.23318pt too wide) in paragraph at lines 158--159
[]\OT1/cmr/m/n/12 Christopher Na-toli and Vin-cent Gramoli, ``The Bal-ance At-t
ack'': https://arxiv.org/pdf/1612.09426.pdf
[]
Overfull \hbox (67.64943pt too wide) in paragraph at lines 160--161
[]\OT1/cmr/m/n/12 Jeff Cole-man, ``State Chan-nels: An Ex-pla-na-tion'': http:/
/www.jeffcoleman.ca/state-
[]
Overfull \hbox (83.92566pt too wide) in paragraph at lines 161--162
[]\OT1/cmr/m/n/12 http://www.econinfosec.org/archive/weis2013/papers/KrollDavey
FeltenWEIS2013.pdf
[]
[17] (./censorship_rejection.aux) )
Here is how much of TeX's memory you used:
1571 strings out of 494953
22794 string characters out of 6180977
77754 words of memory out of 5000000
4848 multiletter control sequences out of 15000+600000
9664 words of font info for 34 fonts, out of 8000000 for 9000
14 hyphenation exceptions out of 8191
37i,6n,23p,1266b,227s stack positions out of 5000i,500n,10000p,200000b,80000s
</usr/share/texlive/texmf-dist/fonts/type1/p
ublic/amsfonts/cm/cmbx10.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/
amsfonts/cm/cmbx12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfon
ts/cm/cmex10.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/
cmmi12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmmi6.
pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmmi8.pfb></u
sr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr10.pfb></usr/shar
e/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr12.pfb></usr/share/texli
ve/texmf-dist/fonts/type1/public/amsfonts/cm/cmr17.pfb></usr/share/texlive/texm
f-dist/fonts/type1/public/amsfonts/cm/cmr6.pfb></usr/share/texlive/texmf-dist/f
onts/type1/public/amsfonts/cm/cmr8.pfb></usr/share/texlive/texmf-dist/fonts/typ
e1/public/amsfonts/cm/cmsy10.pfb></usr/share/texlive/texmf-dist/fonts/type1/pub
lic/amsfonts/cm/cmsy6.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/ams
fonts/cm/cmsy8.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/c
m/cmti10.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmti
12.pfb></usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmtt12.pfb
>
Output written on censorship_rejection.pdf (17 pages, 316546 bytes).
PDF statistics:
150 PDF objects out of 1000 (max. 8388607)
91 compressed objects within 1 object stream
0 named destinations out of 1000 (max. 500000)
56 words of extra memory for PDF output out of 10000 (max. 10000000)



@ -0,0 +1,165 @@
\title{Automated Censorship Attack Rejection}
\author{
Vitalik Buterin \\
Ethereum Foundation
}
\date{\today}
\documentclass[12pt]{article}
\usepackage{graphicx}
\begin{document}
\maketitle
\begin{abstract}
Most research into blockchains assumes that the majority of miners or validators are honest, or at least are non-coordinating and either honest or economically motivated, and tries to prove claims taking these assumptions for granted. It is typically assumed that if 51\% of miners or validators actually are malicious and colluding, then nothing can be done to prevent the blockchain from being destroyed.
This paper is part of a line of research that seeks to challenge this assumption, showing that there are in fact ways that we can automatically detect, and quasi-automatically recover from, 51\% attacks, though usually at the cost of ``extra-protocol governance assumptions'': that is, an assumption that the \textit{users} of a protocol can, in extremis, manually verify which of a set of competing chains launched what would intuitively be viewed as an attack, and choose the honest chain. The goal of this line of research is to \textit{minimize} the need for extra-protocol governance, including by mathematically proving bounds on the number and scale of ``governance events'' that an attacker can trigger at a given cost, and to \textit{standardize} its form, allowing it to be used for the specific purpose of detecting and rejecting majority-driven attacks while maintaining strong and credible norms that ensure it remains difficult to abuse social coordination to trick a community into accepting chains that violate desired guarantees.
The main prior work in this direction is Casper [cite], which achieves the guarantee that even in the case of a 51\% finality reversion attack, the attacker can still be heavily penalized. Here, we seek to go beyond finality reversion attacks and cover the other major category of 51\% attack: \textit{censorship}. We discuss the possibilities and limits of \textit{automated censorship rejection}: that is, the use of ``smart fork choice rules'' that are aware of messages that have been broadcast and that need to be included in the blockchain, and that reject any chain that does not include these messages within some ``grace period''.
\end{abstract}
\section{Introduction}
One of the main classes of majority attack against blockchains is the \textit{censorship attack}: a majority attacking coalition builds a chain that excludes transactions or messages that an ordinary validator, miner or client would accept, either depriving its victims of revenues from the consensus mechanism or depriving users of their right to get transactions included in the blockchain.
Unlike invalid block attacks or finality reversion attacks, which can both be detected by a purely passive client by means of a single non-interactive proof, censorship attacks are inherently more difficult to detect. This is because the definition of a censorship attack involves a listener (i) hearing about a message, and (ii) not taking an action based on that message that they should have taken; this inherently depends on events that took place in the past, of which a client that is only online in the present can have no knowledge.
\includegraphics[width=400px]{Censorship1.png}
Most blockchain projects make no effort to deal with the problem of majority coalition attacks such as censorship; an honest majority is usually one of the assumptions in any formal security proofs and models. Our goal is to see whether we can go further.
\section{Forgiving Rejection}
For simplicity, let us assume that there exists a simple Nakamoto-style blockchain, where consensus is determined by the longest chain rule.
\includegraphics[width=400px]{Censorship2.png}
Suppose further that there is only one type of transaction. This type of transaction requires a very large amount of proof of work to create, for anti-denial-of-service reasons, and we make the computational assumption that there does not exist enough computing power in the world to create more valid transactions than a single node can verify as part of the process of verifying a block (we are deliberately very generous with assumptions at first to illustrate the basic principle).
Now, let us modify the Nakamoto fork choice rule with an additional rule: if a node sees a transaction X for the first time at time T, then starting from time T + 168 hours, it will reject all chains that do not include X. We assume that transactions are always re-broadcast, so that all honest clients and miners see all transactions (this is a reasonable assumption because it is already the case with blocks).
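This modified fork choice rule can be sketched as follows (hypothetical class and method names; the 168-hour grace period is the one from the text, and chains are abstracted as a length plus a set of included transaction hashes):

```python
import time

GRACE_PERIOD = 168 * 3600  # 168 hours, in seconds

class CensorshipAwareForkChoice:
    """Longest-chain fork choice that, after a grace period, rejects
    any chain omitting a transaction this node has already seen."""

    def __init__(self):
        self.first_seen = {}  # tx hash -> local time first observed

    def observe_tx(self, tx_hash, now=None):
        now = now if now is not None else time.time()
        # Record only the *first* sighting of each transaction.
        self.first_seen.setdefault(tx_hash, now)

    def admissible(self, chain_tx_hashes, now=None):
        now = now if now is not None else time.time()
        # Reject any chain missing a transaction whose grace period expired.
        return all(tx in chain_tx_hashes
                   for tx, t in self.first_seen.items()
                   if now >= t + GRACE_PERIOD)

    def best_chain(self, chains, now=None):
        # chains: list of (length, set_of_tx_hashes) pairs.
        ok = [c for c in chains if self.admissible(c[1], now)]
        return max(ok, key=lambda c: c[0]) if ok else None
```

Note that a longer chain omitting an overdue transaction loses to a shorter chain that includes it, which is exactly the non-censorship guarantee described above.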
\includegraphics[width=400px]{Censorship3.png}
\includegraphics[width=400px]{Censorship3p5.png}
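The modified fork choice rule above can be sketched in code. This is a minimal illustration, not part of any real client: chains are hypothetically represented as plain lists of transactions, and the 168-hour grace period is measured against each client's local first-seen time.

```python
import time

REJECTION_DELAY = 168 * 3600  # 168 hours, in seconds

# tx -> local timestamp at which this client first heard of it
first_seen = {}

def note_transaction(tx, now=None):
    """Record the first time we hear about a transaction."""
    now = time.time() if now is None else now
    first_seen.setdefault(tx, now)

def acceptable(chain_txs, now=None):
    """Forgiving rejection: reject any chain that, 168 hours after we
    first saw a transaction, still does not include it."""
    now = time.time() if now is None else now
    return all(tx in chain_txs
               for tx, t in first_seen.items()
               if now >= t + REJECTION_DELAY)

def fork_choice(chains, now=None):
    """Longest chain among those passing the censorship filter."""
    ok = [c for c in chains if acceptable(set(c), now)]
    return max(ok, key=len) if ok else None
```

Note that the rule is forgiving: a chain that censored a transaction for 169 hours becomes acceptable again the moment a descendant block includes it, since `acceptable` only inspects the chain's current transaction set.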
Notice that the primary guarantee of non-censorship is now met trivially. If a node sees a transaction X, then the blockchain that it accepts must, by definition, be a blockchain that includes X, at least after 168 hours. The harder claim to evaluate has to do with \textit{agreement}: that is to say, can we guarantee that a blockchain with miners running these rules will not permanently split?
Let us first suppose a synchrony assumption of one minute. Proof of work already makes a synchrony assumption implicitly: if excessive network latency is permitted, then it is quite easily possible for two chains to grow in parallel forever [cite 1], so this introduces no new assumptions. If this synchrony assumption applies both between miner and miner and between miner and client, then in the normal case no new risks are introduced: all miners hear about all broadcasted transactions within one minute, miners include transactions immediately, and so the analysis is identical to the standard Nakamoto case.
Now let us consider the possibility of \textit{temporary} 51\% attacks. Suppose that an extremely powerful miner broadcasts a block tree (that is, a chain or a diverging set of chains), and after this the miner's hashpower drops back to well below 51\%. For any given (block, transaction) pair, there is a maximum difference of two minutes between clients' perceived delays between the transaction and the block, and so it is very possible that some transaction was published (and seen by all clients) at time T, and a chain not including that transaction was published at time T + 168 hours - 30 seconds, with some clients perceiving the chain as having arrived before the 168-hour window expired and some clients perceiving it as having arrived after.
\includegraphics[width=400px]{Censorship4.png}
Consider the set of blocks which are considered ``chain heads'' by at least some honest miners (though possibly not all, for temporal reasons). During every instant of time there is some probability that an honest miner will discover and publish a block that extends whichever of these chain heads is longer than all the others, and that includes every transaction whose absence would cause other miners to reject it. The new chain head in this situation will be accepted by all clients and miners, and will be seen as taking precedence over all the others, and so all miners and clients will converge on it.
We can now weaken the synchrony assumption: suppose that only \textit{miners} have a guaranteed synchrony of one minute; \textit{clients} may log off for unboundedly long periods of time. Suppose that a client logs on after a very long time. They will then receive transactions that other clients heard earlier. Hence, they may accept chains that other clients reject (for example, if a transaction that was actually broadcasted 169 hours ago has not been included in some chain, the newly-online client will still accept that chain because they only saw the transaction much more recently), but they will under no circumstance reject chains that clients that have been online all along will accept.
If there is no 51\% attack going on at present, then by definition there is no longer chain than the chain that online clients and miners accept, and we know that the newly online client will also accept it. Now, suppose that there is an attack taking place. Then, it is already known from Nakamoto consensus research that the chains that different clients accept may be different, because 51\% attacks can cause persistent chain splits. The newly online client may well accept a chain that other clients reject. However, when the attack subsides, a chain that other clients accept will once again be the longest one, and so the newly online client will also accept it.
The algorithm achieves these properties because it is \textit{forgiving} - it accepts any chain that was censoring transactions initially, as soon as that chain stops censoring. However, the guarantee is very weak - it simply guarantees that clients will be on a chain that accepts transactions they know about, without any regard for how long they were censored in the meantime. It also assumes that the set of transactions that must be included is very small. Next, we will see to what extent we can expand and generalize these guarantees.
The arguments given in the previous section do not directly depend on any deep properties of Nakamoto consensus. \textit{Any} consensus algorithm that has some notion of a fork-choice rule can be used in its place. This includes GHOST [cite 2], and even partially-synchronous ``finality-bearing'' fork-choice rules such as the Casper fork choice rule.
\section{Unforgiving Rejection}
We can now try to strengthen the non-censorship requirement. The most natural use-case for strong anti-censorship protection is interactive verification - that is, mechanisms on blockchains where there exists some time period during which anyone can submit evidence that some fact F is false, and if no one submits this evidence during that period then it is assumed that F is in fact true. Mechanisms of this type include payment or state channels [cite 3], the Lightning Network [cite 4], Truebit [cite 5] or Plasma [cite 6]. This time period must have some definite length, and if a chain fails to include evidence within this length then it should be rejected \textit{unforgivingly} - that is, rejected forever, even if some child block later added to the chain accepts the transaction.
Preventing permanent disagreement in this model requires a hard synchrony assumption. Suppose that network latency is bounded by $d_n$. Suppose that, \textit{between validators}, the above algorithm runs with maximum permitted delay $d_v$, and rejection becomes \textit{unforgiving} if a transaction is not included in a chain within time $d_r$. When a transaction is sent, it will reach all validators within time $d_n$. Within a further $\max(d_v, d_n)$, honest validators will reject blocks not including that transaction, and so clients will not see any honest-node-supported chains not including the transaction after time $d_n + \max(d_v, d_n)$. If we can count on the system reaching consensus within some time $d_f$ (e.g. six blocks in typical proof of work chains), then this implies that as long as the consensus algorithm is functioning, there will not be problems as long as $d_r > d_n + \max(d_v, d_n) + d_f$. For many interactive games, it is perfectly acceptable to set challenge periods several weeks long, and so setting $d_r$ to some length longer than a week may be an attractive option.
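As a sanity check on the bound, here is a toy calculation; the numeric values are purely illustrative and not prescribed by the paper.

```python
def min_rejection_window(d_n, d_v, d_f):
    """Smallest unforgiving-rejection window d_r that preserves agreement
    among honest clients: d_r must exceed d_n + max(d_v, d_n) + d_f."""
    return d_n + max(d_v, d_n) + d_f

# Illustrative values, in hours (assumptions, not from the paper):
d_n = 1.0 / 60   # network latency bound: one minute
d_v = 24.0       # validator-side forgiving-rejection delay: one day
d_f = 1.0        # time to consensus, e.g. roughly six blocks

required = min_rejection_window(d_n, d_v, d_f)
# A one-week (168-hour) window comfortably exceeds the requirement.
assert 168 > required
```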
But what happens if there \textit{is} a 51\% attack? By adding this unforgiving rejection feature, we lose the ability to cleanly reject chains after successful 51\% attacks; instead, if a 51\% attack can persist long enough, it can create a \textit{permanent} chain split - or at least, a chain split that lasts until it is resolved via manual intervention.
\includegraphics[width=400px]{Censorship5.png}
This is a genuine disadvantage for the security of proof of work chains that adopt this feature, albeit a surmountable one. Without unforgiving rejection, a 51\% attack on a proof of work chain would need to be permanent in order to cause a permanent chain split; with unforgiving rejection, a 51\% attack that lasts one week is enough. However, the difference between the economic capacity needed to maintain a 51\% attack for two weeks and the capacity needed to maintain one forever is not that large.
In proof of stake, the disadvantage is even smaller, as a 51\% attack triggered even once may by itself be enough to create a permanent chain split, so by adopting unforgiving rejection we appear to lose nothing.
Even though off-chain governance (that is, clients manually selecting the chain that they perceive as the honest chain and rejecting the attacking chain) is thus required to resolve 51\% attacks in both proof of work and proof of stake, introducing unforgiving rejection gains us a major advantage. Because the censorship rejection mechanism forces a chain split, a censoring 51\% attacker no longer wins ``cleanly'' by default. Without automated censorship rejection, the community must actively coordinate to reject a censoring 51\% attack chain. With rejection, at least some clients reject the censoring chain, and so there is no longer the pressure of a default in the attacker's favor; instead, the community must choose between two options of equal status, except that one is an attacking chain and the other is not, and so it seems much more plausible that the non-attacking chain will win.
\section{Incentives in the Uncoordinated Majority Model}
There are two ways to analyze the incentives of automated censorship rejection. The first is to look at automated rejection \textit{between miners only}, and explore its game-theoretic properties in a model where all actors have minority hashpower. The second is to assume that the attacker has a great majority of all validators (we will assume that they are proof of stake validators, with some specific incentive rules), and try to prove that any harmful attack on the network will, in expectation, cost the attacker a substantial amount of money.
Pure forgiving rejection can be viewed as a Nakamoto blockchain where some blocks are, from the point of view of some miners, explicitly prevented from being the head, but a block on top of such a block can always be created which allows that chain to be the head again.
\includegraphics[width=400px]{Censorship3p5.png}
The question to ask is: is the rule of not mining on top of censoring chains, even if those chains are longer, incentive-compatible? The answer by itself appears to be no: mining a \textit{non-censoring block} on top of the longest chain is a dominant strategy over mining a block on top of the non-censoring chain. But then, the miners who create the censoring chain still get rewarded, and so there is no incentive to create a non-censoring block instead of a censoring block. Hence the entire scheme unravels.
But what if the scoring rule instead \textit{penalizes} censoring blocks - we have a rule that censoring blocks receive a score contribution of $\epsilon$, instead of a score contribution equal to the block's difficulty? Then, suppose that there is zero latency, so there is perfect shared knowledge on which blocks are censoring. In this case, this new rule is a deterministic monotonic fork choice rule as defined by Kroll et al [cite], and so it is a Nash equilibrium.
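This penalized scoring rule can be sketched as follows. The representation is hypothetical: chains are lists of (block, difficulty) pairs, and `is_censoring` encodes one miner's local judgment of which blocks are censoring.

```python
EPSILON = 1e-9  # score contribution of a censoring block

def chain_score(chain, is_censoring):
    """Sum of per-block scores: a block judged censoring contributes
    only epsilon instead of its full difficulty."""
    return sum(EPSILON if is_censoring(block) else difficulty
               for block, difficulty in chain)

def fork_choice(chains, is_censoring):
    """Head = chain with the highest penalized score."""
    return max(chains, key=lambda c: chain_score(c, is_censoring))
```

With zero latency, every miner evaluates `is_censoring` identically, so the rule is deterministic and monotonic in the sense discussed in the text; with latency, miners' local judgments (and hence scores) can diverge.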
Suppose now that there is latency, and so sometimes miners will disagree over which blocks have the highest score. Mining is like a Keynesian beauty contest: one wins not by mining on what \textit{they} think is the main chain, but rather what they think \textit{others} will think is the main chain (where those others in turn are trying to guess what everyone else thinks is the main chain). With a fully deterministic scoring rule, the two perfectly align; with a subjective scoring rule they may not.
However, if we preserve the property that the block that a miner thinks is the head \textit{is much more likely than any other block} to be the block that other miners think is the head, then the game-theoretic argument remains effective. If each miner does not know their place in the distribution of network latency, then this property does hold; if all that each miner knows is that there are two groups of miners, A and B, with different views of what is the head, then they will a priori believe that they are more likely to be in the larger group. However, if miners \textit{can} see their network latencies and patterns of what blocks other miners create, then there will inevitably be strategies that are superior to the default one. So the incentive alignment is not perfect.
\section{Incentives in the Majority Attacker Model}
Now we can look at a malicious majority attacker. Our goal is roughly this: \textit{there must be a maximum ratio} between the amount of harm that (in expectation) an attacker does to the protocol and the amount of money that (in expectation) the attacker loses. To be able to talk about ``harm'' mathematically, we must define a ``protocol utility function''. Because chains that are censoring a transaction for more than two weeks cannot be valid by definition, and finality reversion attacks are currently out of scope, our protocol utility function need only consider one type of ``harm'' to the network: the chain split. Specifically, we can say $U = -\sum_{cs} (1 - win(cs))$: utility is the (negative, because we are measuring harm) sum, over all chain splits, of the portion of clients that were not on the winning fork, and thus had to manually switch chains.
We will then also use a simple proof of stake incentivization scheme: in a chain split, all validators are allowed to only support one fork (or else they get slashed), and on each fork, the validators who supported any other fork lose x\% of their deposits (we'll say 50\% for now). Let us now suppose that an attacker has 99\% of all deposits. They decide not to include a transaction until some time close to its latest allowed inclusion time. There are several possibilities, which we can describe by viewing the statistical distribution of the time delta that clients \textit{see} between the transaction being published and the transaction being included:
\includegraphics[width=400px]{Censorship6b.png}
\includegraphics[width=400px]{Censorship6.png}
\includegraphics[width=400px]{Censorship6c.png}
In the first case, the transaction is delayed, and the attacker suffers no losses, but the delay is within tolerance and so no protocol guarantees are violated. In the second case, all clients reject the attacker's chain, instead going to a minority chain. The attacker loses money on the minority chain, and technically there is no harm to protocol utility. In the third case, half of the clients follow the minority chain and half do not. In this case, we can make the assumption that because the chain where the attacker loses money (instead of non-attacking validators) is more valuable, this chain will win. Utility is $-\frac{1}{2}$, and the attacker loses half of their money on the chain that the community accepts.
But there is also a fourth case:
\includegraphics[width=400px]{Censorship6d.png}
Here, there is also a chain split, but only a very small portion of clients get split off from the majority chain. It is now much less plausible that this minority chain will win, and so we assume as a worst case scenario that the majority chain will always win in these cases. Utility is $-0.01$, and the attacker loses no money. This is a costless attack, which is what we want to avoid.
\section{Timelock Timers}
Let us now try a different approach. Suppose that instead of a hard timeout of one week, we have a different mechanism, based on timelock cryptography. A timelock function is a function $f(x) = y$ such that it takes some amount of sequential work to compute $f$ which cannot be sped up with parallelization, but where a result $(x, y)$ can be verified quickly. One example of a timelock function is Sloth [cite], roughly defined as follows:
\begin{verbatim}
def sloth(h, n):
    for i in range(n):
        h = pow(h, (p + 1) // 4, p)  # modular square root, as p % 4 == 3
        h += (1 if i % 2 else -1)
    return h
\end{verbatim}
Where $p$ is some prime with $p \equiv 3 \pmod 4$. A result can be verified by running the same code with the two lines in the loop flipped and the modular exponentiation replaced by a modular squaring; this is $O(\log p)$ times faster than the original computation. If a submitted solution is required to include $n$ intermediate values, the result can be verified using $n$ threads in parallel, speeding verification up further.
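The slow-forward/fast-reverse asymmetry can be demonstrated with a runnable toy. Note this is \textit{not} the original Sloth: to sidestep the sign ambiguity of modular square roots, this sketch uses modular \textit{cube} roots with a prime $p \equiv 2 \pmod 3$, where cubing is a bijection on $\mathbb{F}_p$ and the inverse exponent is $(2p-1)/3$; the prime below is toy-sized for demonstration only.

```python
def timelock(x, n, p):
    """Slow direction: n sequential modular cube roots, interleaved
    with +/-1 offsets. Requires p prime with p % 3 == 2, so that
    d = (2p - 1) // 3 satisfies 3*d == 1 (mod p - 1)."""
    d = (2 * p - 1) // 3
    h = x % p
    for i in range(n):
        h = pow(h, d, p)                    # expensive: ~log2(p) squarings
        h = (h + (1 if i % 2 else -1)) % p
    return h

def verify_timelock(x, y, n, p):
    """Fast direction: undo the offsets and cube instead of taking
    roots; each step is a single cheap modular cubing."""
    h = y % p
    for i in reversed(range(n)):
        h = (h - (1 if i % 2 else -1)) % p
        h = pow(h, 3, p)
    return h == x % p

p = 998244353                 # a prime with p % 3 == 2 (toy-sized)
y = timelock(123456, 1000, p)
assert verify_timelock(123456, y, 1000, p)
```

Since both the cube map and the offset additions are bijections on $\mathbb{F}_p$, the reverse pass exactly inverts the forward pass, so verification never fails on an honestly computed output.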
We then have the following rule. When a transaction is included into a block with hash $h$, all clients start calculating $sloth(h, n)$, where $n$ is chosen such that the result takes one week to compute. Once this value is computed, we \textit{retroactively} determine the maximum transaction inclusion window as $168 - dist\_coeff \cdot \log_2(\frac{2^{256}}{sloth(h, n)})$ hours.
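Numerically, the retroactive window looks like this; the $dist\_coeff$ value of 0.25 hours per bit is a hypothetical tuning choice, not one specified in the text.

```python
import math

DIST_COEFF = 0.25  # hypothetical: hours subtracted per bit of shortfall

def inclusion_window_hours(timelock_output, dist_coeff=DIST_COEFF):
    """Retroactive window: 168 - dist_coeff * log2(2^256 / output).
    Treating the output as uniform in [1, 2^256), the shortfall in
    bits exceeds n with probability ~2^-n, giving the exponential
    tail the construction relies on."""
    shortfall_bits = 256 - math.log2(timelock_output)
    return 168 - dist_coeff * shortfall_bits

# One leading zero bit in the output shortens the window by DIST_COEFF hours.
assert inclusion_window_hours(2 ** 255) == 168 - DIST_COEFF
```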
The intent is simple: make any attack that stands a chance of isolating a few users also have some chance of splitting the userbase in half. Now, we can describe the model as follows. Suppose that the distribution of clients' perceived time delta between transmission and inclusion has tails that are exponential or sub-exponential; that is, there exists some fixed $d$ such that if portion $\frac{1}{2^{n+1}}$ of the distribution is less than some time $t$, then half of the distribution is less than $t + n \cdot d$. Note that this is a non-trivial assumption; for example, the normal distribution does \textit{not} satisfy this property, as its tail $\approx e^{-x^2}$ decays \textit{super-exponentially}. If the actual distribution is of that kind, we will not be able to guarantee safety properties for the outliers. However, note that \textit{any} ``strong synchrony assumption'' as used in the Byzantine fault tolerance literature implies a bounded distribution, which is inherently sub-exponential.
Let us set $dist\_coeff = d$. This now implies that any strategy which has probability $p_{2^{-n-1}}$ of ``forking off'' at least portion $2^{-n-1}$ of clients also has a probability of at least $p_{2^{-n-1}} * 2^{-n}$ of placing the distribution $n * d$ closer to the cutoff than expected, and thus forking off half of nodes. Thus, an attack which on average causes a utility loss $k$, on average causes a monetary loss to the attacker proportional to $k$.
\section{Proof of Work Timers}
The above approach inherently relied on a form of randomness, and a form of randomness that the attacker could not control. Another strategy is to use a different form of unpredictable randomness: proof of work. Here, we could have a rule that if, time $168 - dist\_coeff * n$ hours after a transaction, a client sees a proof of work solution that passes difficulty threshold $2^n$, then any blocks that come after that point are invalid unless they build on top of chains that include the transaction.
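The client-side check can be sketched as follows; SHA-256 is used as a stand-in hash and the $dist\_coeff$ value is hypothetical.

```python
import hashlib

DIST_COEFF = 0.25  # hypothetical: hours bought back per difficulty bit

def difficulty_bits(solution: bytes) -> int:
    """Largest n with sha256(solution) < 2^(256 - n), i.e. the number
    of leading zero bits of the hash."""
    h = int.from_bytes(hashlib.sha256(solution).digest(), "big")
    return 256 - h.bit_length()

def timeout_triggered(hours_since_tx: float, solution: bytes) -> bool:
    """A solution of difficulty 2^n seen at least 168 - dist_coeff * n
    hours after the transaction marks any later block as invalid
    unless it builds on a chain including the transaction."""
    n = difficulty_bits(solution)
    return hours_since_tx >= 168 - DIST_COEFF * n
```

A harder solution (larger $n$) moves the trigger point earlier, which is what makes the timeout unpredictable to the censoring chain.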
The problem is that a dishonest miner could publish a transaction at time $t_0$, then at time $t_0 + 168 - dist\_coeff \cdot 50 - \delta$ publish a proof of work solution that passes the difficulty threshold $2^{50}$. A few seconds later, they publish a block which includes the transaction. $\delta$ can be set in such a way that most nodes will \textit{perceive} $\delta < 0$, and so see the proof of work solution as insufficient, but a few nodes will perceive $\delta > 0$, and thus perceive the solution as sufficient, and hence the transaction as arriving too late.
\includegraphics[width=400px]{Censorship7.png}
Suppose that an attacker has portion $h$ of hashpower. Once a transaction is published and not immediately included, there is a race to create a proof of work solution that triggers the timeout. Fraction $h$ of the time, the attacker succeeds first, and can accomplish the above attack cost-free. Fraction $1-h$ of the time, someone else succeeds first, and they can simply publish the solution. If the attacker has perfect control over the network, then they can still isolate a small portion of the nodes, but if they do not, then there will likely be a large split. Hence, if the attacker is not in control of most clients' network latency, then the cost of triggering disruption by not including a transaction is a cost of $1-h$.
Incentive compatibility in the absence of a 51\% attacker is easy to show: because there are no 51\% attackers, all transactions will almost certainly be included well ahead of schedule, and so unforgiving rejection introduces no new obstacles. In the presence of a 51\% attacker, no strategy is reliably incentive-compatible, because in general, for any strategy, the attacker can always adopt a counter-strategy that penalizes validators who are detected to be engaging in that strategy. We assume that clients have a substantial preference for chains containing validators which do not have a track record of censoring them, and note that if the main chain is censoring then there is an incentive to build on the longest chain which is not censoring, and so an optimal honest validator strategy may in fact be something similar to forgiving rejection with a near-zero grace period.
\section{Censorship and Transaction Fees}
How do we extend this to the case where very large quantities of transactions get sent, and we want to ensure that fee-paying transactions get included? The challenge here is that users may now send very large quantities of transactions, and there may not be enough block space to include all of them before the grace period expires; at least some users must be excluded. But what we \textit{can} do is detect and prevent situations where higher-fee-paying transactions are excluded but lower-fee-paying transactions are allowed in.
Let us proceed as follows. The blockchain state contains a set of \textit{opportunities}. A \textit{transaction} consumes exactly one opportunity, and creates zero or more new opportunities. An opportunity could be a UTXO, an $(account, nonce)$ pair or one of many other schemes. We assume that the set of opportunities consumed by a transaction is statically determinable and unchanging, and the set of opportunities created by a transaction may be dynamic (i.e. dependent on the blockchain state at the time of transaction inclusion). Opportunities cannot be consumed more than once. A transaction is also seen as paying some \textit{fee level}, defined as total fee paid divided by blockchain space consumed, and fee, fee level and blockchain space are all statically determinable. Note that this is more restrictive than Ethereum, where blockchain space consumed and hence total fee are dynamic, and more restrictive than Bitcoin, where transactions may consume multiple input UTXOs. The blockchain also contains some notion of a \textit{minimum fee level}, and constantly adjusts this fee level so that on average, blocks are half full. Fees equal to the fee level are destroyed; the surplus is given to validators.
A client keeps track of the set of all opportunities in all states that were received in the last several weeks (we assume that a block cannot build on top of a parent block that was seen a very long time ago; this can be accomplished with the same censorship rejection machinery that we use for transactions). For each opportunity, it keeps a map $opportunity \times timeslot \rightarrow transaction$, where a timeslot is a period of one hour during which a transaction could have been received, and the transaction stored is the highest-fee transaction that consumes that opportunity. When a node sees a block, it scans through all opportunities for all timeslots which are sufficiently far back that any transaction from that timeslot should already have been included, and looks for any transaction paying a fee level higher than the block's minimum fee level that has not yet been processed. If such a transaction exists, then the block is rejected.
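This bookkeeping can be sketched as follows; the class and method names are hypothetical, timeslots are one hour long, and `grace_slots` stands in for "sufficiently far back".

```python
TIMESLOT = 3600  # one hour, in seconds

class CensorshipTracker:
    def __init__(self):
        # (opportunity, timeslot) -> (fee_level, tx): the highest-fee
        # transaction consuming that opportunity, first seen in that slot
        self.best = {}

    def record(self, opportunity, fee_level, tx, seen_time):
        key = (opportunity, seen_time // TIMESLOT)
        if key not in self.best or fee_level > self.best[key][0]:
            self.best[key] = (fee_level, tx)

    def block_ok(self, min_fee_level, included_txs, now, grace_slots):
        """Reject a block if some sufficiently old timeslot holds a known
        transaction paying above the block's minimum fee level that is
        still unincluded."""
        cutoff = now // TIMESLOT - grace_slots
        for (opp, slot), (fee, tx) in self.best.items():
            if slot <= cutoff and fee > min_fee_level and tx not in included_txs:
                return False
        return True
```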
The map $opportunity \times timeslot \rightarrow transaction$ can be pruned and compressed. First, one can add a rule requiring each transaction to specify a maximum inclusion time; if it is (legally) not included before then, it can be ignored. This allows pruning old timeslots. Second, a transaction received in one timeslot can also replace transactions in later timeslots that pay lower fees; hence, the representation can often be much more compact. In the worst case, a transaction sender can send a chain of transactions paying progressively higher fees at progressively later times to try to consume a given opportunity, but this would only be possible during an active censorship attack.
\bibliographystyle{abbrv}
\bibliography{main}
\begin{itemize}
\item Casper is currently a work in progress. See https://medium.com/@VitalikButerin/minimal-slashing-conditions-20f0b500fc6c, https://github.com/ethereum/research/tree/master/casper4/papers and https://github.com/ethereum/casper for recent work.
\item Christopher Natoli and Vincent Gramoli, ``The Balance Attack'': https://arxiv.org/pdf/1612.09426.pdf
\item Aviv Zohar and Yonatan Sompolinsky, ``Fast Money Grows on Trees, not Chains'': https://eprint.iacr.org/2013/881.pdf
\item Jeff Coleman, ``State Channels: An Explanation'': http://www.jeffcoleman.ca/state-channels/
\item Joshua Kroll, Ian Davey and Edward Felten, ``The Economics of Bitcoin Mining, or Bitcoin in the Presence of Adversaries'': http://www.econinfosec.org/archive/weis2013/papers/KrollDaveyFeltenWEIS2013.pdf
\item Arjen K. Lenstra and Benjamin Wesolowski, ``A random zoo: sloth, unicorn, and trx'': https://eprint.iacr.org/2015/366.pdf
\end{itemize}
\end{document}