Spelling fixes and finalizing the Snappy EIP

Péter Szilágyi 2017-09-27 11:55:10 +03:00 committed by Hudson Jameson
parent b6c27a3a0d
commit 3e29ac9126
1 changed file with 15 additions and 15 deletions

## Preamble
EIP: 706
Title: DEVp2p snappy compression
Author: Péter Szilágyi <peter@ethereum.org>
Type: Standard Track
Category: Networking
Status: Final
Created: 2017-09-07
## Abstract
The base networking protocol (DEVp2p) used by Ethereum currently does not employ any form of compression. This results in a massive amount of bandwidth wasted in the entire network, making both initial sync as well as normal operation slower and laggier.
This EIP proposes a tiny extension to the DEVp2p protocol to enable [Snappy compression](https://en.wikipedia.org/wiki/Snappy_(compression)) on all message payloads after the initial handshake. After extensive benchmarks, results show that data traffic is decreased by 60-80% for initial sync. You can find exact numbers below.
## Motivation
Synchronizing the Ethereum main network (block 4,248,000) in Geth using fast sync currently consumes 1.01GB upload and 33.59GB download bandwidth. On the Rinkeby test network (block 852,000) it's 55.89MB upload and 2.51GB download.
However, most of this data (blocks, transactions) is heavily compressible. By enabling compression at the message payload level, we can reduce the previous numbers to 1.01GB upload / 13.46GB download on the main network, and 46.21MB upload / 463.65MB download on the test network.
The motivation behind doing this at the DEVp2p level (as opposed to eth, for example) is that it would enable compression for all sub-protocols (eth, les, bzz) seamlessly, reducing any complexity those protocols might incur in trying to individually optimize for data traffic.
## Specification
Bump the advertised DEVp2p version number from `4` to `5`. If during handshake, the remote side advertises support only for version `4`, run the exact same protocol as until now.
If the remote side advertises a DEVp2p version `>= 5`, inject a Snappy compression step right before encrypting the DEVp2p message during sending:
* A message consists of `{Code, Size, Payload}`
* Compress the original payload with Snappy and store it in the same field.
* Update the message size to the length of the compressed payload.
* Encrypt and send the message as before, oblivious to compression.
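For illustration, a minimal Go sketch of this sending-side step is shown below. It uses the `github.com/golang/snappy` package; the `Msg` struct, package name and `compressMsg` helper are hypothetical, not part of this specification or of the go-ethereum code base:
```go
package devp2p

import "github.com/golang/snappy"

// Msg is an illustrative stand-in for a DEVp2p message frame; the real wire
// representation is defined by the RLPx framing, not by this struct.
type Msg struct {
	Code    uint64
	Size    uint32
	Payload []byte
}

// compressMsg performs the extra sending-side step: the payload is replaced
// by its Snappy-compressed form (block format, no framing) and the size field
// is updated, after which the unchanged encryption and framing code takes over.
func compressMsg(msg *Msg) {
	msg.Payload = snappy.Encode(nil, msg.Payload)
	msg.Size = uint32(len(msg.Payload))
}
```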
Similarly to message sending, when receiving a DEVp2p v5 message from a remote node, insert a Snappy decompression step right after decrypting the DEVp2p message:
* A message consists of `{Code, Size, Payload}`
* Decrypt the message payload as before, oblivious to compression.
* Decompress the payload with Snappy and store it in the same field.
* Update the message size to the length of the decompressed payload.
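A matching Go sketch of the receiving-side step, reusing the hypothetical `Msg` type from the sending sketch (the 16MB guard discussed in the next section is left out here for brevity):
```go
package devp2p

import "github.com/golang/snappy"

// decompressMsg mirrors the sending-side step and runs right after the frame
// has been decrypted: the payload is inflated and the size field is updated,
// so upper-layer protocols never observe compressed bytes.
func decompressMsg(msg *Msg) error {
	payload, err := snappy.Decode(nil, msg.Payload)
	if err != nil {
		return err // corrupted Snappy data: drop the message or the peer
	}
	msg.Payload = payload
	msg.Size = uint32(len(payload))
	return nil
}
```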
Important caveats:
* The handshake message is **never** compressed, since it is needed to negotiate the common version.
* Snappy framing is **not** used, since the DEVp2p protocol is already message oriented.
### Avoiding DOS attacks
Currently a DEVp2p message length is limited to 24 bits, amounting to a maximum size of 16MB. With the introduction of Snappy compression, care must be taken not to blindly decompress messages, since they may get significantly larger than 16MB.
However, Snappy is capable of calculating the decompressed size of an input message without inflating it in memory. This can be used to discard any messages which decompress above some threshold. **The proposal is to use the same limit (16MB) as the threshold for decompressed messages.** This retains the same guarantees that the current DEVp2p protocol does, so there won't be surprises in application level protocols.
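As a sketch of how such a guard might look with the `github.com/golang/snappy` package (the `maxUncompressedSize` constant and `safeDecode` helper are illustrative names, not prescribed by this EIP):
```go
package devp2p

import (
	"errors"

	"github.com/golang/snappy"
)

// maxUncompressedSize reuses the existing 16MB DEVp2p message cap as the
// threshold for decompressed payloads.
const maxUncompressedSize = 16 * 1024 * 1024

// safeDecode reads the decompressed length from the Snappy header without
// inflating anything, discards oversized messages, and only then decodes.
func safeDecode(compressed []byte) ([]byte, error) {
	size, err := snappy.DecodedLen(compressed)
	if err != nil {
		return nil, err
	}
	if size > maxUncompressedSize {
		return nil, errors.New("decompressed message exceeds the 16MB limit")
	}
	return snappy.Decode(nil, compressed)
}
```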
## Alternatives (discarded)
**Alternative solutions to data compression that have been brought up and discarded are:**
Extend protocol `xyz` to support compressed messages versus doing it at DEVp2p level:
* **Pro**: Can be better optimized when to compress and when not to.
* **Con**: Mixes in transport layer encoding into application layer logic.
Introduce seamless variations of protocol such as `xyz` expanded with `xyz-compressed`:
Don't explicitly limit the decompressed message size, only the compressed one:
* **Pro**: Allows larger messages to traverse through devp2p.
* **Pro**: Allows larger messages to traverse through DEVp2p.
* **Con**: Upper layer protocols need to check and discard large messages.
* **Con**: Needs lazy decompression to allow size limitations without DOS.
## Backwards Compatibility
This proposal is fully backward compatible. Clients upgrading to the proposed DEVp2p protocol version `5` should still support skipping the compression step for connections that only advertise version `4` of the DEVp2p protocol.
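A minimal sketch of how an implementation might gate the compression step on the version advertised by the remote side during the (always uncompressed) handshake; the constant and function names are illustrative only:
```go
package devp2p

// baseProtocolVersion is the locally advertised DEVp2p version under this
// proposal; peers advertising only version 4 keep the exact pre-compression
// behaviour.
const baseProtocolVersion = 5

// snappyEnabled reports whether the Snappy compression step is active on a
// connection, based on the version the remote side advertised in the handshake.
func snappyEnabled(remoteVersion uint64) bool {
	return remoteVersion >= baseProtocolVersion
}
```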
## Implementation
You can find a reference implementation of this EIP in https://github.com/ethereum/go-ethereum/pull/15106.