In `async` functions, a closure environment is created for variables
that cross an await boundary. This closure environment is kept in
memory for the lifetime of the associated future - this means that
even variables that are no longer used keep taking up memory for a
long time.
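A minimal sketch of the effect, using chronos and hypothetical names:

```nim
import chronos

proc validate(buf: seq[byte]) {.async.} =
  discard # stand-in for real validation

proc process(msg: seq[byte]) {.async.} =
  let buf = msg & msg          # large temporary
  await validate(buf)          # `buf` crosses this await boundary...
  await sleepAsync(1.seconds)  # ...so it stays in the future's closure
                               # environment until `process` completes,
                               # even though it is never used again
```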
In Nimbus, message validation is processed in batches, meaning the
future of an incoming gossip message stays around for quite a while -
this leads to memory consumption peaks of 100-200 MB when there are
many attestations in the pipeline.
To avoid excessive memory usage, it's generally better to move non-async
code into procs so that the variables therein can be released earlier -
this includes the many hidden variables introduced by macro and
template expansion (e.g. chronicles, which does expensive exception
handling).
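For example (a hypothetical sketch, not the actual Nimbus code), the
expensive temporaries can be confined to a plain proc so that only the
result crosses the await:

```nim
import chronos

proc validate(buf: seq[byte]) {.async.} =
  discard # stand-in for real validation

proc decodeAndCheck(msg: seq[byte]): seq[byte] =
  # plain (non-async) proc: temporaries created here are released as
  # soon as the proc returns, instead of being captured in the async
  # closure environment
  let tmp = msg & msg
  tmp

proc process(msg: seq[byte]) {.async.} =
  # only the returned value crosses the await boundary
  await validate(decodeAndCheck(msg))
```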
* move seen table salt to floodsub, use there as well
* shorten seen table salt to size of hash
* avoid unnecessary memory allocations and copies in a few places
* factor out message scoring
* avoid reencoding outgoing message for every peer
* keep checking validators until reject (in case there's both reject and
ignore)
* `readOnce` avoids `readExactly` overhead for single-byte read
* genericAssign -> assign2
* properly propagate initiator information for gossipsub
* Fix pubsubpeer lifetime management
* restore old behavior
* tests fixing
* clamp backoff time value received
* fix member name collisions
* internal test fixes
* better names and explaining of the importance of transport direction
* fixes
* master merge
* wip
* avoid deadlocks
* tcp limits
* expose client field in chronosstream
* limit incoming connections
* update with new listen api
* fix release
* don't override peerinfo in connection
* rework transport with accept
* use semaphore to track resource usage
* rework with new transport accept api
* move events to conn manager (#373)
* use semaphore to track resource usage
* merge master
* expose api to acquire conn slots
* don't fail expensive metrics
* allow tracking and updating connections
* set global connection limits to 80
* add per peer connection limits
* make sure conn is closed if tracking failed
* more descriptive naming for handle
* rework with new transport accept api
* add `getStream` hide `selectConn`
* add TransportClosedError
* make nil explicit
* don't make unnecessary copies of message
* logging
* error handling
* cleanup semaphore
* track connections properly
* throw `TooManyConnections` when tracking outgoing
* use proper exception and handle conventions
* check onCloseHandle for nil
* revert internalConnect changes
* adding upgraded flag
* await stream before closing
* simplify tracking
* wip
* logging
* split connection limits into incoming and outgoing
* further streamline connection limits split counts
* don't use closeWithEOF
* move peer and conn event triggers from switch
* wip
* wip
* wip
* merge master
* handle nil connections properly
* add clarifying comment
* don't raise exc on nil
* no finally
* add proper min/max connections logic
* rebase master
* merge master
* master merge
* remove request timeout
should be addressed in separate PR
* merge master
* share semaphore when in/out limits aren't enforced
* merge master
* use import
* pass semaphore to trackConn
* don't close last conn
* use storeConn
* merge master
* use storeConn
* adding an upgraded event to conn
* set stopped flag asap
* trigger upgraded event on conn
* set concurrency limit for accepts
* backporting semaphore from tcp-limits2
* export unittests module
* make params explicit
* tone down debug logs
* adding semaphore tests
* use semaphore to throttle concurrent upgrades
* add libp2p scope
* trigger upgraded event before any other events
* add event handler for connection upgrade
* cleanup upgraded event on conn close
* make upgrades slot release robust
* don't forget to release slot on nil connection
* misc
* make sure semaphore is always released
* minor improvements and a nil check
* removing unneeded comment
* make upgradeMonitor a non-closure proc
* make sure the `upgraded` event is initialized
* handle exceptions in accepts when stopping
* don't leak exceptions when stopping accept loops
* check that connection is not closed or eof
* don't release connection lock prematurely
* test that only valid connections can be added
* correct exception type on closed connection
* add clarifying comment
* use closeWithEOF for more stable test
* misc comments
* log stream id in bufferstream asserts
* use closeWithEOF to prevent races in tests
* give some time to the remote handler to trigger
* adding more tests to make codecov happy
* handle resets properly with/without pushes/reads
* add clarifying comments
* pushEof should also not be concurrent
* move channel reset to bufferstream
this is where the action happens - lpchannel merely redefines how close
is done
Co-authored-by: Jacek Sieka <jacek@status.im>
* rework transport to use the new accept api
* use the new chronos primitives
* fixup tests to use the new transport api
* handle all exceptions in upgradeIncoming
* master merge
* add multiaddress exception type
* raise appropriate exception on invalid address
* allow retrying on TransportTooManyError
* adding TODO
* wip
* merge master
* add sleep if nil is returned
* accept loop handles all exceptions
* avoid issues with try/except/finally
* make consistent with master
* cleanup accept loop
* logging
* Update libp2p/transports/tcptransport.nim
Co-authored-by: Jacek Sieka <jacek@status.im>
* use Direction enum instead of initiator flag
* use consistent import style
* remove experimental `closeWithEOF()`
Co-authored-by: Jacek Sieka <jacek@status.im>
* fix channels not being reset
silly for loop..
* allow only one concurrent read
* fix mplex test race condition
* add some bufferstream eof tests
* deadlock, lost data and hung channel fixes
* prevent concurrent `reset` calls
* reset LPChannel when read is cancelled (since data is lost)
* ensure there's one, and one only, 0-byte readOnce on EOF
* ensure that all data is returned before EOF is returned
* keep running activity monitor for half-closed channels (or they never
get closed)
* break stream tracking by type
* use closeWithEOF to await wrapped stream
* fix cancelation leaks
* fix channel leaks
* logging
* use close monitor and always call closeUnderlying
* don't use closeWithEOF
* removing close monitor
* logging
To break a potential read/write deadlock, gossipsub uses an unbounded
queue for writes - when peers are too slow to process this queue, it may
end up growing without bound, causing high memory usage.
Here, we introduce a maximum write queue length after which the peer is
disconnected - the queue is generous enough that any "normal" usage
should be fine. Writes that are `await`:ed are not affected, only
writes that are launched in an `asyncSpawn` task or similar.
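A rough sketch of the policy, with an illustrative limit and
hypothetical names (the actual pubsubpeer queue handling differs):

```nim
const MaxWriteQueueLen = 50 # illustrative; not the limit chosen here

type
  SlowPeerError = object of CatchableError

proc enqueue(writeQueue: var seq[seq[byte]], msg: seq[byte]) =
  if writeQueue.len >= MaxWriteQueueLen:
    # the peer is not keeping up with its queue - disconnect it rather
    # than let the queue (and memory usage) grow without bound
    raise newException(SlowPeerError, "write queue full, disconnecting peer")
  writeQueue.add(msg)
```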
* avoid unnecessary copy of message when there are no send observers
* release message memory earlier in gossipsub
* simplify pubsubpeer logging
* add helper to read EOF marker after closing stream (else the stream
stays alive until timeout/reset)
* don't assert on empty channel message
* don't loop when writing to chronos (no need)
* channel close race and deadlock fixes
* remove send lock, write chunks in one go
* push some of half-closed implementation to BufferStream
* fix some hangs where LPChannel readers and writers would not always
wake up
* simplify lazy channels
* fix close happening more than once in some orderings
* reenable connection tracking tests
* close channels first on mplex close such that consumers can read bytes
A notable difference is that BufferStream is no longer considered EOF
until someone has actually read the EOF marker.
* docs, simplification
* remove almost-empty types module
* lock when writing message (that's the only place the lock matters, and
only when the message is > max msg size)
* logging updates (log in consistent order, makes reading logs easier)
* raise EOF from readExactly only if no bytes have been read (to signal
that _no_ bytes were lost)
This change modifies how the backpressure algorithm in bufferstream
works - in particular, instead of working byte-by-byte, it will now work
seq-by-seq.
When data arrives, it usually does so in packets - in the current
bufferstream, the packet is read, then split into bytes which are fed
one by one to the bufferstream. On the reading side, the bytes are
popped off the bufferstream, again byte by byte, to satisfy `readOnce`
requests - this introduces a lot of synchronization traffic because the
checks for a full buffer and for async event handling must be done for
every byte.
In this PR, a queue of length 1 is used instead - this means there will
be at most one "packet" in `pushTo`, one in the queue and one in the
slush buffer that is used to store incomplete reads.
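A sketch of the scheme, assuming chronos's `AsyncQueue` for the
synchronization (the type and proc names are illustrative, not the real
BufferStream API):

```nim
import chronos

type SketchStream = ref object
  readQueue: AsyncQueue[seq[byte]] # length 1: at most one queued packet
  readBuf: seq[byte]               # "slush" buffer for incomplete reads

proc new(T: type SketchStream): T =
  T(readQueue: newAsyncQueue[seq[byte]](1))

proc pushTo(s: SketchStream, data: seq[byte]) {.async.} =
  # blocks while the queue is full - backpressure per packet, not per byte
  await s.readQueue.addLast(data)

proc readOnce(s: SketchStream, n: int): Future[seq[byte]] {.async.} =
  # return up to n bytes; any leftover stays in the slush buffer
  if s.readBuf.len == 0:
    s.readBuf = await s.readQueue.popFirst()
  let k = min(n, s.readBuf.len)
  result = s.readBuf[0 ..< k]
  s.readBuf = s.readBuf[k .. ^1]
```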
* avoid byte-by-byte copy to buffer, with synchronization in-between
* reuse AsyncQueue synchronization logic instead of rolling own
* avoid writeHandler callback - implement `write` method instead
* simplify EOF signalling by only setting EOF flag in queue reader (and
reset)
* remove BufferStream pipes (unused)
* fixes drainBuffer deadlock when drain is called from within read loop
and thus blocks draining
* fix lpchannel init order
* move pubsub off of switch, pass switch into pubsub
* use join on lpstreams
* properly clean up failed peers
* fix tests
* fix peertable hasPeerId
* fix tests
* rework sending, remove helpers from pubsubpeer, unify in broadcast
* further split broadcast into send
* use send where appropriate
* use formatIt
* improve trace
Co-authored-by: Giovanni Petrantoni <giovanni@fragcolor.xyz>