Added work-in-progress Casper economics paper
parent c8a6d23553
commit d1c82f1414
Binary file not shown.
@ -0,0 +1,145 @@
\documentclass[12pt]{article}
\usepackage{graphicx}

\title{Incentives in Casper the Friendly Finality Gadget}
\author{
Vitalik Buterin \\
Ethereum Foundation
}
\date{\today}

\begin{document}

\maketitle
\begin{abstract}
We give an introduction to the incentives in the Casper the Friendly Finality Gadget protocol, and show how the protocol behaves under individual choice analysis, collective choice analysis and griefing factor analysis. We define a ``protocol utility function'' that represents the protocol's view of how well it is being executed, and connect the incentive structure to the utility function. We show that (i) the protocol is a Nash equilibrium assuming any individual validator's deposit makes up less than $\frac{1}{3}$ of the total, (ii) in a collective choice model, where all validators are controlled by one actor, harming protocol utility hurts the cartel's revenue, and there is an upper bound on the ratio between the reduction in protocol utility from an attack and the cost to the attacker, and (iii) the griefing factor can be bounded above by $1$, though we will prefer an alternative model that bounds the griefing factor at $2$ in exchange for other benefits.
\end{abstract}

\section{Introduction}
In the Casper protocol, there is a set of validators, and in each epoch validators have the ability to send two kinds of messages: $$[PREPARE, epoch, hash, epoch_{source}, hash_{source}]$$ and $$[COMMIT, epoch, hash]$$

Each validator has a \textit{deposit size}; when a validator joins, their deposit size is equal to the number of coins that they deposited, and from then on each validator's deposit size rises and falls as the validator receives rewards and penalties. For the rest of this paper, when we say ``$\frac{2}{3}$ of validators'', we are referring to a \textit{deposit-weighted} fraction; that is, a set of validators whose combined deposit size equals at least $\frac{2}{3}$ of the total deposit size of the entire set of validators. We also use ``$\frac{2}{3}$ commits'' as shorthand for ``commits from $\frac{2}{3}$ of validators''.
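
As a concrete illustration, here is a minimal Python sketch of the deposit-weighted $\frac{2}{3}$ check (the function name and data layout are our own, not part of the protocol specification):

\begin{verbatim}
def is_supermajority(senders, deposits):
    # deposits: dict mapping validator id -> current deposit size
    # senders: validators whose messages were seen
    total = sum(deposits.values())
    signed = sum(deposits[v] for v in senders)
    # Deposit-weighted fraction, not a simple head count
    return 3 * signed >= 2 * total

deposits = {'A': 50.0, 'B': 30.0, 'C': 20.0}
assert is_supermajority({'A', 'B'}, deposits)      # 80% of deposits
assert not is_supermajority({'B', 'C'}, deposits)  # 50% of deposits
\end{verbatim}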

If, during an epoch $e$, for some specific checkpoint hash $h$, $\frac{2}{3}$ prepares are sent of the form $$[PREPARE, e, h, epoch_{source}, hash_{source}]$$ with some specific $epoch_{source}$ and some specific $hash_{source}$, then $h$ is considered \textit{justified}. If $\frac{2}{3}$ commits are sent of the form $$[COMMIT, e, h]$$ then $h$ is considered \textit{finalized}. The $hash$ is the block hash of the block at the start of the epoch, so a $hash$ being finalized means that that block, and all of its ancestors, are also finalized. An ``ideal execution'' of the protocol is one where, during every epoch, every validator prepares and commits some block hash at the start of that epoch, specifying the same $epoch_{source}$ and $hash_{source}$. We want to try to create incentives to encourage this ideal execution.
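
The justification and finalization rules can then be sketched as follows (a simplified model reusing \texttt{is\_supermajority} from the previous sketch; message validation and ancestry tracking are omitted):

\begin{verbatim}
def process_epoch(e, h, prepare_senders, commit_senders,
                  deposits, justified, finalized):
    # prepare_senders: validators whose [PREPARE, e, h, epoch_source,
    # hash_source] messages (with matching source data) were seen
    if is_supermajority(prepare_senders, deposits):
        justified.add((e, h))
    # commit_senders: validators whose [COMMIT, e, h] messages were seen
    if is_supermajority(commit_senders, deposits):
        finalized.add((e, h))  # h's ancestors are implicitly final too
\end{verbatim}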

Possible deviations from this ideal execution that we want to minimize or avoid include:

\begin{itemize}
\item Any of the four slashing conditions is violated.
\item During some epoch, we do not get $\frac{2}{3}$ commits for the $hash$ that received $\frac{2}{3}$ prepares.
\item During some epoch, we do not get $\frac{2}{3}$ prepares for the same $(h, hash_{source}, epoch_{source})$ combination.
\end{itemize}

From within the view of the blockchain, we only see the blockchain's own history, including messages that were passed in. In a history that contains some blockhash $H$, our strategy will be to reward validators who prepared and committed $H$, and not reward prepares or commits for any hash $H' \ne H$. The blockchain state will also keep track of the most recent hash in its own history that received $\frac{2}{3}$ prepares, and only reward prepares whose $epoch_{source}$ and $hash_{source}$ point to this hash. These two techniques will help to ``coordinate'' validators toward preparing and committing a single hash with a single source, as required by the protocol.

\section{Rewards and Penalties}

We define the following constants and functions:

\begin{itemize}
\item $p$: a constant determining how quickly the rewards and penalties paid to or deducted from each validator decrease as the total deposit size increases
\item $k$: a constant determining the base reward and penalty size
\item $NCP$ (``non-commit penalty''): the penalty for not committing, if there was a justified hash which the validator \textit{could} have committed
\item $NCCP(\alpha)$ (``non-commit collective penalty''): if $\alpha$ of validators are not seen to have committed during an epoch, and that epoch had a justified hash so any validator \textit{could} have committed, then all validators are charged a penalty proportional to $NCCP(\alpha)$
\item $NPP$ (``non-prepare penalty''): the penalty for not preparing
\item $NPCP(\alpha)$ (``non-prepare collective penalty''): if $\alpha$ of validators are not seen to have prepared during an epoch, then all validators are charged a penalty proportional to $NPCP(\alpha)$
\item $f(e, LFE)$: a factor applied to all rewards and penalties that depends on the current epoch $e$ and the last finalized epoch $LFE$. Note that in a ``perfect'' protocol execution, $e - LFE$ always equals $1$.
\end{itemize}

Note that preparing and committing does not guarantee that the validator will avoid $NPP$ and $NCP$; it could be that, because of very high network latency or a malicious majority censorship attack, the prepares and commits are not included into the blockchain in time, and so the incentivization mechanism does not know about them. Similarly for $NPCP$ and $NCCP$: the $\alpha$ input is the portion of validators whose prepares and commits are \textit{included}, not the portion of validators who \textit{tried to send} prepares and commits.

When we talk about preparing and committing the ``correct value'', we are referring to the $hash$ and $epoch_{source}$ and $hash_{source}$ recommended by the protocol state, as described above.

We now define the following reward and penalty schedule, where a validator with deposit size $V_d$ gets a reward or penalty equal to $V_d$ times the values given below:

\begin{itemize}
\item Let $BIR = \frac{k}{D^p}$ (the ``base interest rate''), where $D$ is the current total deposit size
\item All validators get a reward of $BIR$
\item If a validator did not prepare the correct value, they are penalized $BIR \cdot f \cdot NPP$
\item If $p_p$ validators prepared the correct value, every validator is penalized $BIR \cdot f \cdot NPCP(1 - p_p)$
\item If a validator did not commit the correct value, and the protocol sees that the correct value was justified so they \textit{could} have committed, they are penalized $BIR \cdot f \cdot NCP$
\item If $p_c$ validators committed the correct value, and the protocol sees that the correct value was justified so they \textit{could} have committed, every validator is penalized $BIR \cdot f \cdot NCCP(1 - p_c)$
\end{itemize}
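
The schedule can be expressed compactly in Python (a sketch only: the collective penalty curves $NPCP(\cdot)$ and $NCCP(\cdot)$ are passed in as functions, and all other names mirror the definitions above):

\begin{verbatim}
def epoch_delta(V_d, D, k, p, f, prepared, committed, justified,
                p_p, p_c, NPP, NCP, NPCP, NCCP):
    BIR = k / D**p                   # base interest rate
    delta = V_d * BIR                # base reward for every validator
    if not prepared:                 # individual non-prepare penalty
        delta -= V_d * BIR * f * NPP
    delta -= V_d * BIR * f * NPCP(1 - p_p)       # collective non-prepare
    if justified:                    # commits were possible this epoch
        if not committed:            # individual non-commit penalty
            delta -= V_d * BIR * f * NCP
        delta -= V_d * BIR * f * NCCP(1 - p_c)   # collective non-commit
    return delta
\end{verbatim}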

This is the entirety of the incentivization structure.

\section{Claims}

We seek to prove the following:

\begin{itemize}
\item Following the protocol is a Nash equilibrium; that is, if each validator has less than $\frac{1}{3}$ of total deposits, their economic welfare is maximized by preparing and committing the same value as everyone else.
\item Even if all validators collude, the ratio between the harm incurred by the protocol and the penalties paid by validators is bounded above by some constant. Note that this requires a measure of ``harm incurred by the protocol''; we will discuss this in more detail later.
\item The \textit{griefing factor}, the ratio between penalties incurred by non-attacking validators and penalties incurred by attacking validators, in any attack, is bounded above by some global constant, even in the case where the attacker has a majority of all validators.
\end{itemize}

\section{Individual choice model}

The individual choice analysis is simple. Suppose that some chain $C$ is the chain that you expect to be accepted as the main chain in the future (if all other validators are preparing and committing on $C$, then this will be the main chain because the fork choice rule takes commits into account). Suppose $H$ is the most recent block hash on $C$, and $(epoch_{source}, hash_{source})$ is the source data that $C$ expects validators to prepare with. A validator can avoid penalties by sending a prepare with $H, epoch_{source}, hash_{source}$ and sending a commit with $H$. Sending a commit does restrain the validator's future behavior because of the PREPARE\_COMMIT\_CONSISTENCY slashing condition, but if more than $\frac{2}{3}$ of validators are preparing and committing then this is not an issue, because the epoch will itself receive $\frac{2}{3}$ prepares and so the validator will be able to use the current epoch as their source in the next epoch. Hence, it is in a validator's interest to be preparing and committing on the same chain as everyone else.

\section{Collective choice model}

To model the protocol in a collective-choice context, we will first define a \textit{protocol utility function}. The protocol utility function defines ``how well the protocol execution is doing''. The protocol utility function cannot be derived mathematically; it can only be conceived and justified intuitively.

Our protocol utility function is:

$$U = \sum_{e = 0}^{e_c} -\log_2\left(e - \max[e' < e,\, e' \textrm{ finalized}]\right) - M \cdot F$$

Where:

\begin{itemize}
\item $e$ is the epoch index in the sum, running from epoch $0$ to $e_c$, the current epoch
\item $e'$ is the last finalized epoch before $e$
\item $M$ is a very large constant
\item $F$ is $1$ if a safety failure has taken place, otherwise $0$
\end{itemize}

The second term in the function is easy to justify: safety failures are very bad. The first term is trickier. To see how the first term works, consider the case where every epoch where $e \bmod N = 0$ is finalized and other epochs are not. The total over each $N$-epoch slice will be roughly $\sum_{i=1}^N -\log_2(i) \approx -N \cdot (\log_2(N) - \frac{1}{\ln(2)})$. Hence, the utility per epoch will be roughly $-\log_2(N)$. This basically states that a blockchain with some finality time $N$ has utility roughly $-\log_2(N)$, or in other words \textit{increasing the finality time of a blockchain by a constant factor causes a constant loss of utility}. The utility difference between 1 minute finality and 2 minute finality is the same as the utility difference between 1 hour finality and 2 hour finality.
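
This approximation is easy to check numerically (an illustrative script, not part of the protocol):

\begin{verbatim}
import math

for N in [2, 4, 16, 64, 256]:
    exact = sum(-math.log2(i) for i in range(1, N + 1)) / N
    approx = -(math.log2(N) - 1 / math.log(2))
    print(N, round(exact, 3), round(approx, 3))

# As N grows, both columns track -log2(N) up to a vanishing
# correction, so doubling the finality time always costs roughly
# one extra unit of utility per epoch.
\end{verbatim}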

This can be justified in two ways. First, one can intuitively argue that a user's psychological estimation of the discomfort of waiting for finality roughly matches this kind of logarithmic utility schedule. Second, one can look at various blockchain use cases, and see that they are roughly logarithmically uniformly distributed along the range of finality times between around 200 milliseconds (``Starcraft on the blockchain'') and one week (land registries and the like).

Now, we need to show that, for any given total deposit size, $\frac{\textrm{loss to protocol utility}}{\textrm{validator penalties}}$ is bounded. There are two ways to reduce protocol utility: cause a safety failure, or prevent epochs from being finalized. In the first case, validators lose a large amount of deposits for violating the slashing conditions. In the second case, in a chain that has not been finalized for $k$ epochs, the penalty to attackers is $\min(\frac{1}{3} \cdot (NPP + NPCP), \frac{1}{3} \cdot (NCP + NCCP)) \cdot BIR \cdot \lfloor \log_2(k) \rfloor$, which is equal to the loss of protocol utility multiplied by $BIR \cdot \min(\frac{1}{3} \cdot (NPP + NPCP), \frac{1}{3} \cdot (NCP + NCCP))$, which for any given total deposit size is a constant.

\section{Griefing factor analysis}

Griefing factor analysis is important because it is one way to quantify the risk to honest validators. In general, if all validators are honest, and if network latency stays below the length of an epoch, then validators face zero risk beyond the usual risks of losing or accidentally divulging access to their private keys. In the case where malicious validators exist, they can interfere in the protocol in ways that cause harm to both themselves and honest validators.

We can approximately define the ``griefing factor'' as follows:

% \begin{theorem}
A strategy used by a coalition in a given mechanism exhibits a \textit{griefing factor} $B$ if it can be shown that this strategy imposes a loss of $B \cdot x$ to those outside the coalition at the cost of a loss of $x$ to those inside the coalition. If all strategies that cause deviations from some given baseline state exhibit griefing factors less than or equal to some bound $B$, then we call $B$ a \textit{griefing factor bound}.
% \end{theorem}

A strategy that imposes a loss to outsiders either at no cost to a coalition, or to the benefit of a coalition, is said to have a griefing factor of infinity. Proof of work blockchains have a griefing factor bound of infinity because a 51\% coalition can double its revenue by refusing to include blocks from other participants and waiting for difficulty adjustment to reduce the difficulty and increase their rewards. With selfish mining, the griefing factor may be infinity for coalitions of size as low as 23.21\% \cite{sapirshtein2016optimal}.

Let us start off our griefing analysis by not taking into account validator churn, so all dynasties are identical. Because the equations involved are fractions of linear equations, we know that small churn will only lead to small changes in the results. In Casper, we can identify the following deviating strategies:

\begin{enumerate}
\item A minority of validators do not prepare, or prepare incorrect values.
\item (Mirror image of 1) A censorship attack where a majority of validators does not accept prepares from a minority of validators (or other isomorphic attacks, such as waiting for the minority to prepare hash $H1$ and then preparing $H2$, making $H2$ the dominant chain and denying the victims their rewards).
\item A minority of validators do not commit.
\item (Mirror image of 3) A censorship attack where a majority of validators does not accept commits from a minority of validators.
\end{enumerate}

Notice that, from the point of view of griefing factor analysis, it is immaterial whether or not a given epoch was prepared or committed. The reward and penalty schedule only pays attention to prepares and commits for the purpose of setting $f$, the value proportional to the logarithm of the time since the last finalized epoch. This value scales penalties evenly for all participants, so it does not affect griefing factors.

Let us now analyze the griefing factors:

\begin{tabular}[c]{@{}lllll@{}}
Attack & Amount lost by attacker & Amount lost by victims & Griefing factor & Notes \tabularnewline
Minority of size $\alpha < \frac{1}{2}$ non-prepares & $NPP \cdot \alpha + NPCP \cdot \alpha^2$ & $NPCP \cdot \alpha \cdot (1-\alpha)$ & $\frac{NPCP \cdot (1-\alpha)}{NPP + NPCP \cdot \alpha}$ & The griefing factor is maximized when $\alpha \approx 0$ \tabularnewline
Majority censors $\alpha < \frac{1}{2}$ minority prepares & $NPCP \cdot \alpha \cdot (1-\alpha)$ & $NPP \cdot \alpha + NPCP \cdot \alpha^2$ & $\frac{NPP + NPCP \cdot \alpha}{NPCP \cdot (1-\alpha)}$ & The griefing factor is maximized when $\alpha \approx \frac{1}{2}$ \tabularnewline
\end{tabular}
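
The two griefing factors can be evaluated numerically from the table (a quick sketch treating $NPP$ and $NPCP$ as constants; the values $\frac{1}{2}$ and $\frac{3}{4}$ are illustrative choices, not protocol constants):

\begin{verbatim}
def gf_minority_non_prepares(a, NPP, NPCP):
    # victims lose NPCP*a*(1-a); attackers lose NPP*a + NPCP*a^2
    return NPCP * (1 - a) / (NPP + NPCP * a)

def gf_majority_censors(a, NPP, NPCP):
    # mirror image: attacker and victim losses are swapped
    return (NPP + NPCP * a) / (NPCP * (1 - a))

for a in [0.01, 0.1, 0.25, 0.49]:
    print(a, round(gf_minority_non_prepares(a, 0.5, 0.75), 3),
          round(gf_majority_censors(a, 0.5, 0.75), 3))
# First column peaks as a -> 0, second as a -> 1/2, as noted above.
\end{verbatim}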

In general, we see a perfect symmetry between the non-commit case and the non-prepare case, so we can assume $\frac{NCCP}{NCP} = \frac{NPCP}{NPP}$. Also, from a protocol utility standpoint, we can make the observation that seeing $\frac{1}{3} \le p_c < \frac{2}{3}$ commits is better than seeing fewer commits, as it gives at least some economic security against finality reversions, so we do want to reward this scenario more than the scenario where we get $\frac{1}{3} \le p_p < \frac{2}{3}$ prepares. Another way to view the situation is to observe that $\frac{1}{3}$ non-prepares causes \textit{everyone} to non-commit, so it should be treated with equal severity.

In the normal case, anything less than $\frac{1}{3}$ commits provides no economic security, so we can treat $p_c < \frac{1}{3}$ commits as equivalent to no commits; this thus suggests $NPP = 2 \cdot NCP$.

\section{Conclusions}

The above analysis shows Casper's basic properties in the context of an individual-choice model, a collective-choice model where the validator set is modeled as a single player, and a model where one coalition is trying to cause other validators to lose money possibly at some cost to itself. Non-economic honest-majority models are out of scope, as is the proof that causing a safety failure requires a large number of slashed validators, as those topics are covered elsewhere. More complex economic attacks involving extortion, blackmail and validator discouragement are not covered here, although the griefing factor analysis made here does serve as a foundation for the analyses of these topics.

\begin{thebibliography}{1}
\bibitem{sapirshtein2016optimal}
Ayelet Sapirshtein, Yonatan Sompolinsky, and Aviv Zohar.
Optimal selfish mining strategies in Bitcoin.
https://arxiv.org/pdf/1507.06183.pdf
\end{thebibliography}

\end{document}
@ -162,155 +162,6 @@ Note that there is no single principled way to say what the protocol utility is;

The `-M * SF` term is self-explanatory; safety failures are very bad, as it means that events that appear to have been finalized, and that users may be relying on being finalized, suddenly become unfinalized. The `-ln(i - LFE(i))` term is more complicated. What it is saying is that the amount of pain that users feel from having to wait `k` epochs for their transaction to be finalized is logarithmic in `k`. This can be justified intuitively: the difference between finality in 1 minute and 2 minutes feels similar in size to the difference between 1 hour and 2 hours. The separate `c` term is there to show that even if a given epoch does not finalize, commits can still provide value, as a smaller number of commits on a given block can still make it harder to finalize competing blocks.

### Incentives

We define the full set of incentives that we assign to validators in any given epoch as follows.

From the point of view of the state in which the incentives are being calculated, let us assume:

* `e` is the epoch number
* `H` is the hash of the most recent checkpoint block
* `H_s` is the most recent justified checkpoint (ie. checkpoint with prepares from two thirds of the previous and current validator sets) that the state knows about
* `k` is a constant
* `TD` is the total size of deposits
* `LFE(e)` is the last finalized epoch
* `D` is a given validator's deposit size

During any epoch, define `BASE_INTEREST = D * k / sqrt(TD)` and `PENALTY_FACTOR = D * k / sqrt(TD) * log(1 + e - LFE(e))`. Abbreviate:

* `NCP` = non-commit penalty
* `NFP` = non-finality penalty
* `NPP` = non-prepare penalty
* `NCCP` = per-non-commit collective penalty
* `NPCP` = per-non-prepare collective penalty

Assume all references to the five above variables in the remainder of this section are actually referring to `NCP * PENALTY_FACTOR`, `NFP * PENALTY_FACTOR`, etc.

We define the penalties as follows:

* If the epoch is not finalized, all validators pay `NFP`
* All non-committing validators pay `NCP`. Waived if 2/3 prepares are not found (ie. validators cannot legally commit).
* All non-preparing validators pay `NPP`
* Suppose `cp` is the minimal fraction of validators between the two validator sets that commits (eg. if 80% of the validators in the current dynasty commit and 68% of the validators in the previous dynasty do, `cp = 0.68`). All validators pay `NCCP * (1 - cp)`. Waived if 2/3 prepares are not found (ie. validators cannot legally commit).
* Suppose `pp` is the minimal fraction of validators between the two validator sets that prepares. All validators pay `NPCP * (1-pp)`
* If a validator violates a slashing condition, they lose their entire deposit.
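
The schedule above can be summarized in a short Python sketch (our own illustrative helper, not code from the spec; names follow the abbreviations above, and `justified` means 2/3 prepares were found so committing was legal):

    import math

    def epoch_loss(D, TD, k, e, LFE, finalized, justified,
                   prepared, committed, pp, cp,
                   NCP, NFP, NPP, NCCP, NPCP):
        PENALTY_FACTOR = D * k / math.sqrt(TD) * math.log(1 + e - LFE)
        loss = 0.0
        if not finalized:
            loss += NFP * PENALTY_FACTOR
        if justified and not committed:
            loss += NCP * PENALTY_FACTOR              # waived if no 2/3 prepares
        if not prepared:
            loss += NPP * PENALTY_FACTOR
        if justified:
            loss += NCCP * (1 - cp) * PENALTY_FACTOR  # waived likewise
        loss += NPCP * (1 - pp) * PENALTY_FACTOR
        return loss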

#### Uncoordinated Choice

In an uncoordinated choice model, we assume that there is a validator set V<sub>1</sub> ... V<sub>n</sub>, with deposit sizes |V<sub>1</sub>| ... |V<sub>n</sub>|, and each validator acts independently according to their own incentives. We assume |V<sub>i</sub>| < 1/3 of the total.

Suppose that there is a number of competing unfinalized forks `F_1`, `F_2` ... `F_n`. The validator's only possible actions are to (i) prepare a single `F_i` (preparing more than one violates a slashing condition), and (ii) commit one or more `F_i`. Let `P(F_i)` be the probability that the validator believes that a given fork will be finalized (whether in this epoch or indirectly in a future one) conditional on that validator preparing on that fork. Let:

* `L(e, F_i)` be the validator's private expectation of the number of epochs until a descendant of `F_i` gets prepares from two thirds of validators.

The validator's expected return from preparing `F_i` is clearly `P(F_i) * D * NPP`. The validator can hence maximize revenues by going for the `F_i` with the maximum probability of being finalized. This creates incentives for convergence. The validator's incentive for committing is made up of two components: (i) the reward `P(F_i) * D * NCP`, and (ii) an implied penalty `sum_{F_j: F_1 ... F_n, F_j != F_i} P(F_j) * D * NPP * L(e, F_j)`, because if the validator commits on a fork that does not get adopted then they will be unable to prepare until two thirds of other validators prepare some future value. `L(e, F_j) >= 1`, so we can lower-bound the penalty with `(1 - P(F_i)) * D * NPP`. This suggests that validators will not commit a value unless they are at least `NPP / (NPP + NCP)` sure that it will be finalized.
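
For instance (illustrative numbers, not protocol constants): with `NPP = 1` and `NCP = 0.5`, a validator should only commit once they believe the fork is at least two thirds likely to be finalized:

    def min_commit_probability(NPP, NCP):
        # Commit only if P * NCP >= (1 - P) * NPP,
        # i.e. P >= NPP / (NPP + NCP)
        return NPP / (NPP + NCP)

    print(min_commit_probability(1.0, 0.5))  # 0.666...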

#### Coordinated Choice

Any coalition of size equal to or greater than 1/3 of the total validator set can cause a safety or liveness failure. If they cause a safety failure, then they can get caught via the slashing conditions, and lose an amount of money equal to their entire security deposits. The trickier case is liveness failures.

The cheapest liveness failure to cause is for 1/3 of validators to continue preparing, but stop committing. In this case, they can delay finality by `d` epochs at a cost of `1/3 * TD * k / sqrt(TD) * 1/2 * sum(i = 2 ... d+1: log(i))` ~= `k * sqrt(TD) / 6 * ((d + 1) * log(d + 1) - (d + 1))`.
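
This approximation can be sanity-checked numerically; the sketch below uses illustrative values for `TD` and `k` (they are not protocol constants):

    import math

    def exact_cost(TD, k, d):
        # 1/3 of deposits each pay half the log penalty for epochs 2 ... d+1
        return TD / 3 * k / math.sqrt(TD) * 0.5 * \
            sum(math.log(i) for i in range(2, d + 2))

    def approx_cost(TD, k, d):
        # Stirling approximation of the sum of logarithms
        return k * math.sqrt(TD) / 6 * ((d + 1) * math.log(d + 1) - (d + 1))

    TD, k = 10**7, 0.001
    for d in [10, 100, 1000]:
        print(d, round(exact_cost(TD, k, d), 1), round(approx_cost(TD, k, d), 1))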

### Griefing Factor Analysis

Another important kind of analysis to make in public economic protocols is the risk to honest validators. In general, if all validators are honest, and if network latency stays below the length of an epoch, then validators face zero risk beyond the usual risks of losing or accidentally divulging access to their private keys. In the case where malicious validators exist, we can analyze the risk to honest validators through _griefing factor analysis_.

We can approximately define the "griefing factor" as follows:

> A strategy used by a coalition in a given mechanism exhibits a griefing factor B if it can be shown that this strategy imposes a loss of B * x to those outside the coalition at the cost of a loss of x to those inside the coalition.

Further:

> If all strategies that cause deviations from some given baseline state exhibit griefing factors <= some bound B, then we call B a griefing factor bound.

A strategy that imposes a loss to outsiders either at no cost to a coalition, or to the benefit of a coalition, is said to have a griefing factor of infinity.

Fact:

> Proof of work blockchains have a griefing factor bound of infinity.

Proof:

51% coalitions can double their revenue by refusing to build on blocks from all other miners, reducing the revenue of outside miners to zero. Due to selfish mining, griefing factor bounds are also infinity in all models that allow coalitions of size greater than ~0.2321 [cite], and even without selfish mining a miner can grief simply by mining with more hardware than the quantity that would maximize their profits.

Let us start off our griefing analysis by not taking into account validator churn, so all dynasties are identical. Because the equations involved are fractions of linear equations, we know that small churn will only lead to small changes in the results. In Casper, we can identify the following deviating strategies:

1. Less than 1/3 of validators do not commit.
2. (Mirror image of 1) A censorship attack where 2/3 of validators block commits from less than 1/3 of validators.
3. Less than 1/3 of validators do not prepare.
4. (Mirror image of 3) A censorship attack where 2/3 of validators block prepares from less than 1/3 of validators.
5. More than 1/3 of validators do not commit.
6. (Mirror image of 5) A censorship attack where between 1/3 and 1/2 are blocked from committing (cannot be more because in the >1/2 case, the chain committed to by censorship victims will be viewed as winning)
7. More than 1/3 of validators do not prepare.
8. (Mirror image of 7) A censorship attack where between 1/3 and 1/2 are blocked from preparing.

We will ignore (8), because we assume in our model that the underlying proposal mechanism (ie. proof of work) is majority-honest, and there is no way for validators to do this.

Let us now analyze the griefing factors:

<table>
<tr>
<td> Attack </td> <td> Amount lost by attacker </td> <td> Amount lost by victims </td> <td> Griefing factor </td> <td> Notes </td>
</tr>
<tr>
<td> k < 1/3 non-commit </td> <td> NCP * k + NCCP * k<sup>2</sup> </td> <td> NCCP * k * (1-k) </td> <td> NCCP / NCP </td>
<td> The griefing factor is maximized when k -> 0 </td>
</tr>
<tr>
<td> Censor k < 1/3 committers </td> <td> NCCP * k * (1-k) </td> <td> NCP * k + NCCP * k<sup>2</sup> </td> <td> 1.5 * (NCP + NCCP / 3) / NCCP </td>
<td> The griefing factor is maximized when k -> 1/3 </td>
</tr>
<tr>
<td> k < 1/3 non-prepare </td> <td> NPP * k + NPCP * k<sup>2</sup> </td> <td> NPCP * k * (1-k) </td> <td> NPCP / NPP </td>
<td> The griefing factor is maximized when k -> 0 </td>
</tr>
<tr>
<td> Censor k < 1/3 preparers </td> <td> NPCP * k * (1-k) </td> <td> NPP * k + NPCP * k<sup>2</sup> </td> <td> 1.5 * (NPP + NPCP / 3) / NPCP </td>
<td> The griefing factor is maximized when k -> 1/3 </td>
</tr>
<tr>
<td> k > 1/3 non-commit </td> <td> NFP * k + NCP * k + NCCP * k<sup>2</sup> </td> <td> NFP * (1-k) + NCCP * k * (1-k) </td> <td> 2 * (NFP + NCCP / 3) / (NFP + NCP + NCCP / 3) </td>
<td> The griefing factor is maximized when k = 1/3 </td>
</tr>
<tr>
<td> Censor k > 1/3 non-committers </td> <td> NFP * (1-k) + NCCP * k * (1-k) </td> <td> NFP * k + NCP * k + NCCP * k<sup>2</sup> </td> <td> max(1 + NCCP / (NFP + NCP + NCCP / 2), (NFP + NCP + NCCP / 3) / (NFP + NCCP / 3) / 2) </td>
<td> The griefing factor is maximized at either k -> 1/2, or k -> 1/3. </td>
</tr>
<tr>
<td> k > 1/3 non-prepare </td> <td> NFP * k + NPP * k + NPCP * k<sup>2</sup> </td> <td> NFP * (1-k) + NPCP * k * (1-k) </td> <td> 2 * (NFP + NPCP / 3) / (NFP + NPP + NPCP / 3) </td>
<td> The griefing factor is maximized when k = 1/3 </td>
</tr>
</table>

There seems to be a three-dimensional space of optimal solutions with griefing factor bound 1.5, with constraints NCCP = NCP * 1.5 and NPCP = NPP * 1.5. One solution is:

    NCP = 1/2
    NCCP = 3/4
    NPP = 1/2
    NPCP = 3/4
    NFP = 1

The griefing factors are: (3/2, 3/2, 3/2, 3/2, 10/7, 7/5, 10/7)
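
These seven numbers can be reproduced directly from the formulas in the table above (a verification sketch, with entries ordered as attacks 1-7):

    from fractions import Fraction as F

    def griefing_factors(NCP, NCCP, NPP, NPCP, NFP):
        return [
            NCCP / NCP,                                     # 1: <1/3 non-commit
            F(3, 2) * (NCP + NCCP / 3) / NCCP,              # 2: censor committers
            NPCP / NPP,                                     # 3: <1/3 non-prepare
            F(3, 2) * (NPP + NPCP / 3) / NPCP,              # 4: censor preparers
            2 * (NFP + NCCP / 3) / (NFP + NCP + NCCP / 3),  # 5: >1/3 non-commit
            max(1 + NCCP / (NFP + NCP + NCCP / 2),
                (NFP + NCP + NCCP / 3) / (NFP + NCCP / 3) / 2),  # 6
            2 * (NFP + NPCP / 3) / (NFP + NPP + NPCP / 3),  # 7: >1/3 non-prepare
        ]

    print(griefing_factors(F(1, 2), F(3, 4), F(1, 2), F(3, 4), F(1)))
    # -> [3/2, 3/2, 3/2, 3/2, 10/7, 7/5, 10/7]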

However, we may want to voluntarily accept higher griefing factors against dominant coalitions in exchange for lower griefing factors against smaller coalitions, the reasoning being that this makes it easier to escape dominant attacking coalitions via a user-activated soft fork (see next section). In this case, an alternative solution is:

    NCP = 3/2
    NCCP = 3/2
    NPP = 3/2
    NPCP = 3/2
    NFP = 1

With griefing factors (1, 2, 1, 2, 1, 19/13, 1), a bound of 2.

There are other solutions; for example, (3, 3, 3, 3, 1) is interesting because it reduces the griefing factors for 1/3 coalitions that prevent finality to 0.8, and (1, 1, 1, 1, 0) reduces the griefing factor for all <50% finality-preventing coalitions to 0.5, at the cost of allowing coalitions of size > 50% to censor at griefing factors between 5/3 and 2. More generally, if we allow censoring coalitions a griefing factor of k, we can reduce the griefing factor for minority coalitions to 3 / (2 * k - 1).

### Recovering from Coalition Attacks

Suppose that there exists a coalition of size >= 1/3 (possibly even size >= 2/3) that engages in attacks of type (5), (6) or (7) above. This type of attack can be resolved in honest nodes' favor, but in many cases (especially those where the dishonest coalition is of size >= 1/2) this requires some out-of-band coordination between users, which can only partially be automated. This does require a synchrony assumption between validators and users, but one on the order of weeks (more precisely, the synchrony assumption must be on the same order as the amount of time that a >33% attack will take to resolve; resolution taking weeks is arguably acceptable because the perpetrators will lose a large portion of their deposits in this kind of attack).
@ -361,11 +212,3 @@ This then degrades into a few cases:

1. >=1/2 of miners selectively censor; >=2/3 of validators do not participate. Then, the validators will simply coordinate on finalizing the longest non-censoring chain.
2. >=1/3 of validators only sign chains that are selectively censoring. Then, from the point of view of non-participating validators, those validators are offline, and so this situation collapses into the liveness attack described in a previous section.
3. Validators wrongly believe that censorship is taking place, because a network synchrony assumption is violated. In this case, some validators end up refusing to prepare or commit checkpoints that they would otherwise be preparing or committing, and so fewer or no blocks finalize until the network synchrony assumption once again takes hold.

### Discouragement Attacks

Perhaps the most difficult kind of attack to deal with in this algorithm is a "discouragement attack". This attack happens in two stages. First, an attacker engages in a medium-grade liveness degradation attack, with the main goal of reducing the rewards for honest validators to below zero, or at least to below their capital lockup costs. Then, the attacker engages in more serious attacks on liveness or safety against a much smaller pool of honest validators. A discouragement attack requires the attacker to have an amount of deposits equal to 1/3 of the original total deposit size, but allows them to trigger liveness and safety failures much more cheaply.

It is worth noting that proof of work is extremely vulnerable to discouragement attacks: assuming a competitive market where miners have low profits, a selfish mining attack can quickly force other miners offline, at which point the miner can engage in a 51% attack. Hence, even if our treatment of discouragement attacks is not fully satisfactory, it arguably still fares substantially better than other protocols.

Along with minimizing griefing factors, we can mitigate discouragement attacks using another strategy: those who are currently non-validators can coordinate on joining the validator set en masse, overwhelming the attacker to the point that their share of the total validator set is less than 1/3 and their griefing is no longer effective. We expect many to be willing to join altruistically, but we can also recycle a portion of penalties into future rewards, thereby temporarily increasing the incentive for new joiners.
@ -1,12 +1,12 @@
 import random
 import datetime

-diffs = [947.15 * 10**12]
-hashpower = diffs[0] / 17.23
-times = [1498720341]
+diffs = [1215.03 * 10**12]
+hashpower = diffs[0] / 18.76
+times = [1499790856]


-for i in range(3946612, 6010000):
+for i in range(4008238, 6010000):
     blocktime = random.expovariate(hashpower / diffs[-1])
     adjfac = max(1 - int(blocktime / 10), -99) / 2048.
     newdiff = diffs[-1] * (1 + adjfac)