## Nim-LibP2P
## Copyright (c) 2019 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.

import std/[oids, strformat]
import chronos, chronicles, metrics
import ./coder,
       ../muxer,
       nimcrypto/utils,
       ../../stream/[bufferstream, connection, streamseq],
       ../../peerinfo

export connection

logScope:
  topics = "mplexchannel"

## Channel half-closed states
##
## | State   | Closed local    | Closed remote
## |=============================================
## | Read    | Yes (until EOF) | No
## | Write   | No              | Yes
##
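## A minimal illustration of the local side (hypothetical `chan` and `data`):
##
## .. code-block:: nim
##   await chan.write(data)   # ok: the channel is not closed locally yet
##   await chan.close()       # half-close: local writes are now disabled
##   await chan.write(data)   # raises LPStreamClosedError...
##   # ...while reads keep working until the remote's close arrives (EOF)
##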

# TODO: this is one place where we need to use
# a proper state machine, but I've opted out of
# it for now for two reasons:
#
# 1) we don't have that many states to manage
# 2) I'm not sure if adding the state machine
# would have simplified or complicated the code
#
# But now that this is in place, we should perhaps
# reconsider reworking it again, this time with a
# more formal approach.
#

type
  LPChannel* = ref object of BufferStream
    id*: uint64                   # channel id
    name*: string                 # name of the channel (for debugging)
    conn*: Connection             # wrapped connection used for writing
    initiator*: bool              # initiated remotely or locally flag
    isLazy*: bool                 # is channel lazy
    isOpen*: bool                 # has channel been opened (only used with isLazy)
    isReset*: bool                # channel was reset, pushTo should drop data
    pushing*: bool                # a pushTo call is in progress
    closedLocal*: bool            # has channel been closed locally
    msgCode*: MessageType         # cached in/out message code
    closeCode*: MessageType       # cached in/out close code
    resetCode*: MessageType       # cached in/out reset code
    writeLock: AsyncLock          # serializes writes to the wrapped connection

proc open*(s: LPChannel) {.async, gcsafe.}

template withWriteLock(lock: AsyncLock, body: untyped): untyped =
  try:
    await lock.acquire()
    body
  finally:
    if not(isNil(lock)) and lock.locked:
      lock.release()
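
# Note: `release` is guarded by `lock.locked` so that a cancelled `acquire`
# (lock never actually taken) does not release a lock this task does not hold.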

func shortLog*(s: LPChannel): auto =
  if s.isNil: "LPChannel(nil)"
  elif s.conn.peerInfo.isNil: $s.oid
  elif s.name != $s.oid: &"{shortLog(s.conn.peerInfo.peerId)}:{s.oid}:{s.name}"
  else: &"{shortLog(s.conn.peerInfo.peerId)}:{s.oid}"
chronicles.formatIt(LPChannel): shortLog(it)

proc closeMessage(s: LPChannel) {.async.} =
  ## send close message
  withWriteLock(s.writeLock):
    trace "sending close message", s
    await s.conn.writeMsg(s.id, s.closeCode) # write close

proc resetMessage(s: LPChannel) {.async.} =
  ## send reset message - this will not raise
  try:
    withWriteLock(s.writeLock):
      trace "sending reset message", s
      await s.conn.writeMsg(s.id, s.resetCode) # write reset
  except CancelledError:
    # This procedure is called from one place and never awaited, so there is
    # no need to re-raise CancelledError.
    debug "Unexpected cancellation while resetting channel", s
  except LPStreamEOFError as exc:
    trace "muxed connection EOF", s, msg = exc.msg
  except LPStreamClosedError as exc:
    trace "muxed connection closed", s, msg = exc.msg
  except LPStreamIncompleteError as exc:
    trace "incomplete message", s, msg = exc.msg
  except CatchableError as exc:
    debug "Unhandled exception leak", s, msg = exc.msg

proc open*(s: LPChannel) {.async, gcsafe.} =
  await s.conn.writeMsg(s.id, MessageType.New, s.name)
  trace "Opened channel", s
  s.isOpen = true
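
# For lazy channels, `open` is deferred until the first `write` - see the
# `write` method below.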

proc closeRemote*(s: LPChannel) {.async.} =
  trace "Closing remote", s
  try:
    # close parent bufferstream to prevent further reads
    await procCall BufferStream(s).close()
  except CancelledError as exc:
    raise exc
  except CatchableError as exc:
    trace "exception closing remote channel", s, msg = exc.msg

  trace "Closed remote", s

method closed*(s: LPChannel): bool =
  ## This emulates half-closed behavior: when closed
  ## locally, writing is disabled - see the table in
  ## the header of the file.
  s.closedLocal

method pushTo*(s: LPChannel, data: seq[byte]) {.async.} =
  if s.isReset:
    raise newLPStreamClosedError() # Terminate mplex loop

  try:
    s.pushing = true
    await procCall BufferStream(s).pushTo(data)
  finally:
    s.pushing = false
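
# `pushing` lets `reset` below detect a pushTo call that may be parked on the
# read queue, so it can clear the slot and unblock it.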

method reset*(s: LPChannel) {.base, async, gcsafe.} =
  if s.closedLocal and s.isEof:
    trace "channel already closed or reset", s
    return

  trace "Resetting channel", s, len = s.len

  # First, make sure any new calls to `readOnce` and `pushTo` will fail - there
  # may already be such calls in the event queue
  s.isEof = true
  s.isReset = true

  s.readBuf = StreamSeq() # drop the "slush" buffer of incomplete reads

  s.closedLocal = true

  asyncSpawn s.resetMessage()
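  # (fire-and-forget: `resetMessage` handles all its errors and never raises)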

  # This should wake up any readers by pushing an EOF marker at least
  await procCall BufferStream(s).close() # noraises, nocancels

  if s.pushing:
    # When data is being pushed, there will be two items competing for the
    # readQueue slot - the BufferStream.close EOF marker and the pushTo data.
    # If the EOF wins, the pushTo call will get stuck because there will be no
    # new readers to clear the data. It's worth noting that if there's a reader
    # already waiting for data, this reader will be unblocked by the pushTo -
    # this is necessary, or it would get stuck.
    if s.readQueue.len > 0:
      discard s.readQueue.popFirstNoWait()

  trace "Channel reset", s

method close*(s: LPChannel) {.async, gcsafe.} =
  if s.closedLocal:
    trace "Already closed", s
    return

  trace "Closing channel", s, len = s.len

  proc closeInternal() {.async.} =
    try:
      await s.closeMessage().wait(2.minutes)
      if s.atEof: # already closed by remote, close parent buffer immediately
        await procCall BufferStream(s).close()
    except CancelledError:
      debug "Unexpected cancellation while closing channel", s
      await s.reset()
      # This is a top-level procedure which runs as a separate task, so it
      # does not need to propagate CancelledError.
    except LPStreamClosedError, LPStreamEOFError:
      trace "Connection already closed", s
    except CatchableError as exc: # Shouldn't happen?
      warn "Exception closing channel", s, msg = exc.msg
      await s.reset()

    trace "Closed channel", s

  s.closedLocal = true
  # All errors are handled inside the `closeInternal()` procedure.
  asyncSpawn closeInternal()

method initStream*(s: LPChannel) =
  if s.objName.len == 0:
    s.objName = "LPChannel"

  s.timeoutHandler = proc(): Future[void] {.gcsafe.} =
    trace "Idle timeout expired, resetting LPChannel", s
    s.reset()

  procCall BufferStream(s).initStream()

  s.writeLock = newAsyncLock()

method write*(s: LPChannel, msg: seq[byte]): Future[void] {.async.} =
  if s.closedLocal:
    raise newLPStreamClosedError()

  try:
    if s.isLazy and not(s.isOpen):
      await s.open()

    # writes should happen in sequence
    trace "write msg", len = msg.len

    withWriteLock(s.writeLock):
      await s.conn.writeMsg(s.id, s.msgCode, msg)

    s.activity = true
  except CatchableError as exc:
    trace "exception in lpchannel write handler", s, msg = exc.msg
    await s.conn.close()
    raise exc

proc init*(
  L: type LPChannel,
  id: uint64,
  conn: Connection,
  initiator: bool,
  name: string = "",
  lazy: bool = false,
  timeout: Duration = DefaultChanTimeout): LPChannel =

  let chann = L(
    id: id,
    name: name,
    conn: conn,
    initiator: initiator,
    isLazy: lazy,
    timeout: timeout,
    msgCode: if initiator: MessageType.MsgOut else: MessageType.MsgIn,
    closeCode: if initiator: MessageType.CloseOut else: MessageType.CloseIn,
    resetCode: if initiator: MessageType.ResetOut else: MessageType.ResetIn,
    dir: if initiator: Direction.Out else: Direction.In)

  chann.initStream()

  when chronicles.enabledLogLevel == LogLevel.TRACE:
    chann.name = if chann.name.len > 0: chann.name else: $chann.oid

  trace "Created new lpchannel", chann

  return chann
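
# A minimal usage sketch (hypothetical: `conn` is a raw channel connection as
# set up by the mplex muxer, which also assigns per-direction channel ids):
#
#   let chan = LPChannel.init(1'u64, conn, initiator = true, name = "chat")
#   await chan.open()               # or pass lazy = true to open on first write
#   await chan.write(@[1'u8, 2, 3])
#   await chan.close()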