Fix RLP serialisation of `seq[Transaction]` used in `eth` protocol

1. Generalises the special cases for serialising RLP `seq[Transaction]`.
   Previously the special case was used only inside `BlockBody` and `EthBlock`.
   Now it is used for every `seq[Transaction]`, regardless of which object it
   is part of, or no object at all.  `openArray[Transaction]` is also
   included, as this was found to be necessary for matching in some places.

2. Bug fix parsing `Transaction`: Always read the first byte to get the
   transaction type instead of parsing an RLP `int`.  This way invalid or
   adversarial input gives a correct error (i.e. invalid type code); see the
   sketch after this list.

   When the type was read with `rlp.read(int)`, such inputs gave a variety of
   misleading messages (e.g. "too large to fit in memory").  In the
   specification the type is a single byte.  (Technically the input is not
   RLP, and we shouldn't be using the RLP parser to parse standalone
   transaction objects anyway.)

3. Bug fix parsing `Transaction`: If a typed transaction is detected in
   `seq[Transaction]`, the previous code removed the RLP (blob) wrapper, then
   passed the contents to `read(Transaction)`.  That meant a blob-wrapped
   legacy transaction would be accepted, which is incorrect.  The new code
   passes the contents to the typed transaction decoder, which correctly
   rejects a wrapped legacy transaction as having an invalid type.
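
As a standalone illustration of changes 2 and 3 (not part of the patch; the
names below are made up), the first byte on its own is enough to tell a legacy
transaction from a typed one, per EIP-2718: a legacy transaction is an RLP
list, so its first byte is >= 0xc0, while a typed transaction starts with its
type byte, restricted to 0x00..0x7f:

    # Illustrative sketch only; `SketchTxType` and `classify` are made-up names.
    type SketchTxType = enum
      sketchLegacy, sketchEip2930, sketchEip1559

    proc classify(firstByte: byte): SketchTxType =
      if firstByte >= 0xc0'u8:
        # RLP list prefix: a legacy transaction, `RLP([fields..])`.
        result = sketchLegacy
      elif firstByte == 0x01'u8:
        result = sketchEip2930
      elif firstByte == 0x02'u8:
        result = sketchEip1559
      else:
        # Unknown type byte, or the reserved range 0x80..0xbf.
        raise newException(ValueError, "invalid transaction type byte")

    assert classify(0xc8'u8) == sketchLegacy   # legacy: RLP list prefix
    assert classify(0x02'u8) == sketchEip1559  # typed: EIP-1559

In change 3, a blob's contents are handed straight to the typed transaction
decoder, which accepts only type bytes in 0x00..0x7f, so a blob-wrapped legacy
transaction (whose contents start with a list prefix >= 0xc0) is rejected.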

Change 1 has a large, practical effect on `eth/65` syncing with peers.

Serialisation of the `eth` message types `Transactions` and `PooledTransactions`
has been broken since the introduction of typed transactions (EIP-2718), as
used in the Berlin/London forks.  (The special case for `seq[Transaction]`
inside `BlockBody` only fixed message type `BlockBodies`.)

Due to this, whenever a peer sent us a `Transactions` message, we hit an RLP
decoding error while processing it, and disconnected the peer, thinking the
error was the peer's fault.

These messages are sent often by good peers, so whenever we connected to a
really good peer, we'd end up disconnecting from it within a few tens of
seconds due to this.

This didn't get noticed before updating to `eth/65`, because with old protocols
we tend to connect only to old peers, which may be out of date themselves and
have no typed transactions.  Also, we didn't really investigate occasional
disconnects before; we assumed they were just part of P2P life.

The root cause is that the RLP serialisation of an individual `Transaction` is
meant to be subtly different from that of arrays/sequences of `Transaction`
objects in network messages.  EIP-2976 covers this, but it's quite subtle:

- Individual transactions are encoded and stored as either `RLP([fields..])`
  for legacy transactions, or `Type || RLP([fields..])`.  Both of these
  encodings are byte sequences.  The part after `Type` doesn't have to be
  RLP in theory, but all types so far use RLP.  EIP-2718 covers this.

- In arrays (sequences), transactions are encoded as either `RLP([fields..])`
  for legacy transactions, or `RLP(Type || RLP([fields..]))` for all typed
  transactions to date.  Spot the extra `RLP(..)` blob encoding, to make it
  valid RLP inside a larger RLP.  EIP-2976 covers this, "Typed Transactions
  over Gossip", although it's not very clear about the blob encoding.

In practice the extra `RLP(..)` applies to all arrays/sequences of transactions
that are to be RLP-encoded as a list.  In principle, it should be all
aggregates (object fields etc.), but it's enough for us to enable it for all
arrays/sequences, as this is what's used in the protocol and EIP-2976.
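
As a worked illustration of the two encodings (standalone Nim, not using the
`eth/rlp` module; the helper names are made up, and only payloads shorter than
56 bytes are handled so single-byte RLP prefixes suffice):

    # Minimal RLP helpers (ignoring the single-byte < 0x80 special case).
    proc rlpBlob(payload: seq[byte]): seq[byte] =
      # Byte-string header: 0x80 + length, followed by the payload.
      result = @[byte(0x80 + payload.len)] & payload

    proc rlpList(items: seq[seq[byte]]): seq[byte] =
      var body: seq[byte]
      for item in items:
        body.add item
      # List header: 0xc0 + payload length, followed by the concatenated items.
      result = @[byte(0xc0 + body.len)] & body

    # A legacy transaction is already an RLP list; a typed transaction is
    # `Type || RLP([fields..])`, which is not itself valid RLP.
    let legacyTx = rlpList(@[rlpBlob(@[byte 1, 2]), rlpBlob(@[byte 3, 4])])
    let typedTx  = @[byte 0x02] & rlpList(@[rlpBlob(@[byte 5, 6])])

    # In a `Transactions` message the legacy transaction is embedded directly,
    # while the typed transaction gets the extra `RLP(..)` blob wrapper.
    let txsMessage = rlpList(@[legacyTx, rlpBlob(typedTx)])

Decoding reverses this: a list element that is itself a list is a legacy
transaction, and a list element that is a blob is unwrapped and handed to the
typed transaction decoder.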

Signed-off-by: Jamie Lokier <jamie@shareable.org>
Commit 04ff8e460f (parent e3fba48f0f)
Author: Jamie Lokier, 2021-07-27 14:27:36 +01:00 (committed by zah)
1 file changed, 62 insertions(+), 27 deletions(-)

@@ -101,7 +101,7 @@ type
     fee*: Option[UInt256] # EIP-1559
 
   BlockBody* = object
-    transactions*{.rlpCustomSerialization.}: seq[Transaction]
+    transactions*: seq[Transaction]
     uncles*: seq[BlockHeader]
 
   Log* = object
@@ -127,7 +127,7 @@ type
 
   EthBlock* = object
     header*: BlockHeader
-    txs* {.rlpCustomSerialization.}: seq[Transaction]
+    txs*: seq[Transaction]
     uncles*: seq[BlockHeader]
 
   CollationHeader* = object
@@ -427,39 +427,74 @@ proc readTxEip1559(rlp: var Rlp, tx: var Transaction)=
   rlp.read(tx.R)
   rlp.read(tx.S)
 
+proc readTxTyped(rlp: var Rlp, tx: var Transaction) {.inline.} =
+  # EIP-2718: We MUST decode the first byte as a byte, not `rlp.read(int)`.
+  # If decoded with `rlp.read(int)`, bad transaction data (from the network)
+  # or even just incorrectly framed data for other reasons fails with
+  # any of these misleading error messages:
+  # - "Message too large to fit in memory"
+  # - "Number encoded with a leading zero"
+  # - "Read past the end of the RLP stream"
+  # - "Small number encoded in a non-canonical way"
+  # - "Attempt to read an Int value past the RLP end"
+  # - "The RLP contains a larger than expected Int value"
+  if not rlp.isSingleByte:
+    if not rlp.hasData:
+      raise newException(MalformedRlpError,
+        "Transaction expected but source RLP is empty")
+    raise newException(MalformedRlpError,
+      "TypedTransaction type byte is out of range, must be 0x00 to 0x7f")
+  let txType = rlp.getByteValue
+  rlp.position += 1
+  case TxType(txType):
+  of TxEip2930:
+    rlp.readTxEip2930(tx)
+  of TxEip1559:
+    rlp.readTxEip1559(tx)
+  else:
+    raise newException(UnsupportedRlpError,
+      "TypedTransaction type must be 1 or 2 in this version, got " & $txType)
+
 proc read*(rlp: var Rlp, T: type Transaction): T =
+  # Individual transactions are encoded and stored as either `RLP([fields..])`
+  # for legacy transactions, or `Type || RLP([fields..])`.  Both of these
+  # encodings are byte sequences.  The part after `Type` doesn't have to be
+  # RLP in theory, but all types so far use RLP.  EIP-2718 covers this.
   if rlp.isList:
     rlp.readTxLegacy(result)
     return
 
   # EIP 2718
-  let txType = rlp.read(int)
-  if txType notin {1, 2}:
-    raise newException(UnsupportedRlpError,
-      "TxType expect 1 or 2 got " & $txType)
-  if TxType(txType) == TxEip2930:
-    rlp.readTxEip2930(result)
-  else:
-    rlp.readTxEip1559(result)
+  rlp.readTxTyped(result)
 
-proc read*(rlp: var Rlp, t: var (EthBlock | BlockBody), _: type seq[Transaction]): seq[Transaction] {.inline.} =
-  # EIP 2718/2930: we have to override this field
-  # for reasons described below in `append` proc
+proc read*(rlp: var Rlp,
+           T: (type seq[Transaction]) | (type openArray[Transaction])): seq[Transaction] =
+  # In arrays (sequences), transactions are encoded as either `RLP([fields..])`
+  # for legacy transactions, or `RLP(Type || RLP([fields..]))` for all typed
+  # transactions to date.  Spot the extra `RLP(..)` blob encoding, to make it
+  # valid RLP inside a larger RLP.  EIP-2976 covers this, "Typed Transactions
+  # over Gossip", although it's not very clear about the blob encoding.
+  #
+  # In practice the extra `RLP(..)` applies to all arrays/sequences of
+  # transactions.  In principle, all aggregates (objects etc.), but
+  # arrays/sequences are enough.  In `eth/65` protocol this is essential for
+  # the correct encoding/decoding of `Transactions`, `NewBlock`, and
+  # `PooledTransactions` network calls.  We need a type match on both
+  # `openArray[Transaction]` and `seq[Transaction]` to catch all cases.
   if not rlp.isList:
-    raise newException(MalformedRlpError,
-      "List expected, but got blob.")
-  for tx in rlp:
-    if tx.isList:
-      result.add tx.read(Transaction)
+    raise newException(RlpTypeMismatch,
+      "Transaction list expected, but source RLP is not a list")
+  for item in rlp:
+    var tx: Transaction
+    if item.isList:
+      item.readTxLegacy(tx)
     else:
-      let bytes = rlp.read(Blob)
-      var rr = rlpFromBytes(bytes)
-      result.add rr.read(Transaction)
+      var rr = rlpFromBytes(rlp.read(Blob))
+      rr.readTxTyped(tx)
+    result.add tx
 
-proc append*(rlpWriter: var RlpWriter, blk: EthBlock | BlockBody, txs: seq[Transaction]) {.inline.} =
-  # EIP 2718/2930: the new Tx is rlp(txType || txPlayload) -> one blob/one list elem
-  # not rlp(txType, txPayload) -> two list elem, wrong!
+proc append*(rlpWriter: var RlpWriter,
+             txs: seq[Transaction] | openArray[Transaction]) {.inline.} =
+  # See above about encoding arrays/sequences of transactions.
   rlpWriter.startList(txs.len)
   for tx in txs:
     if tx.txType == TxLegacy: