#
#             Chronos Stream Transport
#                 (c) Copyright 2018-Present
#         Status Research & Development GmbH
#
#              Licensed under either of
#  Apache License, version 2.0, (LICENSE-APACHEv2)
#              MIT license (LICENSE-MIT)

{.push raises: [].}

import std/deques
import stew/ptrops
import results
import ".."/[asyncloop, config, handles, bipbuffer, osdefs, osutils, oserrno]
import ./[common, ipnet]

export results

type
  VectorKind = enum
    DataBuffer,                     # Simple buffer pointer/length
    DataFile                        # File handle for sendfile/TransmitFile

  StreamVector = object
    kind: VectorKind                # Writer vector source kind
    buf: pointer                    # Writer buffer pointer
    buflen: int                     # Writer buffer size
    offset: uint                    # Writer vector offset
    size: int                       # Original size
    writer: Future[int]             # Writer vector completion Future

  TransportKind* {.pure.} = enum
    Socket,                         # Socket transport
    Pipe,                           # Pipe transport
    File                            # File transport

  TransportFlags* = enum
    None,
    # Default value
    WinServerPipe,
    # Internal flag used to differentiate between server pipe handles
    # and client pipe handles.
    WinNoPipeFlash,
    # By default `AddressFamily.Unix` transports on Windows use
    # `FlushFileBuffers()` when the transport is closing.
    # This flag disables usage of `FlushFileBuffers()` on `AddressFamily.Unix`
    # transport shutdown. Because `FlushFileBuffers()` waits until all bytes
    # or messages written to the pipe have been read by the client, it is
    # possible to get stuck on transport `close()` when both server and
    # client are running in the same thread.
    # Please use this flag only if you are running both client and server in
    # the same thread.
    TcpNoDelay, # deprecated: Use SocketFlags.TcpNoDelay
    V4Mapped

  SocketFlags* {.pure.} = enum
    TcpNoDelay,
    ReuseAddr,
    ReusePort

  ReadMessagePredicate* = proc (data: openArray[byte]): tuple[consumed: int,
                                                              done: bool] {.
    gcsafe, raises: [].}
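    ## Message reading predicate: receives the bytes accumulated so far and
    ## returns how many of them were consumed and whether a complete message
    ## has been seen.
    ##
    ## A minimal sketch for a newline-delimited protocol (`lineReady` is an
    ## illustrative name, not part of this module):
    ##
    ## ```nim
    ## proc lineReady(data: openArray[byte]): tuple[consumed: int, done: bool] =
    ##   for i, ch in data:
    ##     if ch == byte('\n'):
    ##       return (i + 1, true)  # consume through the newline; message done
    ##   (0, false)                # no newline yet; wait for more data
    ## ```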

  ReaderFuture = Future[void].Raising([TransportError, CancelledError])

const
  StreamTransportTrackerName* = "stream.transport"
  StreamServerTrackerName* = "stream.server"
  DefaultBacklogSize* = high(int32)

when defined(windows):
  type
    StreamTransport* = ref object of RootRef
      fd*: AsyncFD                      # File descriptor
      state: set[TransportState]        # Current Transport state
      reader: ReaderFuture              # Current reader Future
      buffer: BipBuffer                 # Reading buffer
      error: ref TransportError         # Current error
      queue: Deque[StreamVector]        # Writer queue
      future: Future[void].Raising([])  # Stream life future
      # Windows specific part
      rwsabuf: WSABUF                   # Reader WSABUF
      wwsabuf: WSABUF                   # Writer WSABUF
      rovl: CustomOverlapped            # Reader OVERLAPPED structure
      wovl: CustomOverlapped            # Writer OVERLAPPED structure
      flags: set[TransportFlags]        # Internal flags
      case kind*: TransportKind
      of TransportKind.Socket:
        domain: Domain                  # Socket transport domain (IPv4/IPv6)
        local: TransportAddress         # Local address
        remote: TransportAddress        # Remote address
      of TransportKind.Pipe:
        discard
      of TransportKind.File:
        discard
else:
  type
    StreamTransport* = ref object of RootRef
      fd*: AsyncFD                      # File descriptor
      state: set[TransportState]        # Current Transport state
      reader: ReaderFuture              # Current reader Future
      buffer: BipBuffer                 # Reading buffer
      error: ref TransportError         # Current error
      queue: Deque[StreamVector]        # Writer queue
      future: Future[void].Raising([])  # Stream life future
      flags: set[TransportFlags]        # Internal flags
      case kind*: TransportKind
      of TransportKind.Socket:
        domain: Domain                  # Socket transport domain (IPv4/IPv6)
        local: TransportAddress         # Local address
        remote: TransportAddress        # Remote address
      of TransportKind.Pipe:
        discard
      of TransportKind.File:
        discard

type
  # TODO evaluate naming of raises-annotated callbacks
  StreamCallback2* = proc(server: StreamServer,
                          client: StreamTransport) {.async: (raises: []).}
    ## New remote client connection callback
    ## ``server`` - StreamServer object.
    ## ``client`` - accepted client transport.
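    ##
    ## A minimal sketch of a handler with this signature (names and the
    ## buffer size are illustrative, not part of this API):
    ##
    ## ```nim
    ## proc echoHandler(server: StreamServer,
    ##                  client: StreamTransport) {.async: (raises: []).} =
    ##   try:
    ##     var buf: array[1024, byte]
    ##     let n = await client.readOnce(addr buf[0], len(buf))
    ##     if n > 0:
    ##       discard await client.write(addr buf[0], n)
    ##   except TransportError, CancelledError:
    ##     discard
    ##   finally:
    ##     await client.closeWait()
    ## ```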

  StreamCallback* = proc(server: StreamServer,
                         client: StreamTransport) {.async.}
    ## Connection callback that doesn't check for exceptions at compile time
    ## ``server`` - StreamServer object.
    ## ``client`` - accepted client transport.

  TransportInitCallback* = proc(server: StreamServer,
                                fd: AsyncFD): StreamTransport {.
    gcsafe, raises: [].}
    ## Custom transport initialization procedure, which can allocate inherited
    ## StreamTransport object.

  StreamServer* = ref object of SocketServer
    ## StreamServer object
    function*: StreamCallback2      # callback which will be called after new
                                    # client accepted
    init*: TransportInitCallback    # callback which will be called before
                                    # transport for new client

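# A minimal sketch of pairing a custom transport type with a
# `TransportInitCallback`; `MyTransport` and its field are illustrative:
#
#   type
#     MyTransport = ref object of StreamTransport
#       connectedAt: Moment
#
#   proc customInit(server: StreamServer, fd: AsyncFD): StreamTransport {.
#       gcsafe, raises: [].} =
#     MyTransport(connectedAt: Moment.now())
#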
proc getRemoteAddress(transp: StreamTransport,
                      address: Sockaddr_storage, length: SockLen,
                     ): TransportAddress =
  var raddr: TransportAddress
  fromSAddr(unsafeAddr address, length, raddr)
  if TransportFlags.V4Mapped in transp.flags:
    if raddr.isV4Mapped(): raddr.toIPv4() else: raddr
  else:
    raddr

proc remoteAddress2*(
    transp: StreamTransport
): Result[TransportAddress, OSErrorCode] =
  ## Returns ``transp`` remote socket address.
  if transp.remote.family == AddressFamily.None:
    var
      saddr: Sockaddr_storage
      slen = SockLen(sizeof(saddr))
    if getpeername(SocketHandle(transp.fd), cast[ptr SockAddr](addr saddr),
                   addr slen) != 0:
      return err(osLastError())
    transp.remote = transp.getRemoteAddress(saddr, slen)
  ok(transp.remote)

proc localAddress2*(
    transp: StreamTransport
): Result[TransportAddress, OSErrorCode] =
  ## Returns ``transp`` local socket address.
  if transp.local.family == AddressFamily.None:
    var
      saddr: Sockaddr_storage
      slen = SockLen(sizeof(saddr))
    if getsockname(SocketHandle(transp.fd), cast[ptr SockAddr](addr saddr),
                   addr slen) != 0:
      return err(osLastError())
    fromSAddr(addr saddr, slen, transp.local)
  ok(transp.local)

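# A minimal usage sketch for the `Result`-returning address getters; the
# error handling below is illustrative:
#
#   let raddr = transp.remoteAddress2().valueOr:
#     debugEcho "getpeername() failed with code ", error
#     return
#   debugEcho "peer address: ", raddr
#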
# TODO(cheatfate): This function should not be public, but for some weird
#                  reason making it non-public causes the compiler to emit
#                  Hint: 'toException' is declared but not used
#                  [XDeclaredButNotUsed]
func toException*(v: OSErrorCode): ref TransportOsError =
  getTransportOsError(v)

proc remoteAddress*(transp: StreamTransport): TransportAddress {.
    raises: [TransportOsError].} =
  ## Returns ``transp`` remote socket address.
  remoteAddress2(transp).tryGet()

proc localAddress*(transp: StreamTransport): TransportAddress {.
    raises: [TransportOsError].} =
  ## Returns ``transp`` local socket address.
  localAddress2(transp).tryGet()

proc localAddress*(server: StreamServer): TransportAddress =
  ## Returns ``server`` bound local socket address.
  server.local

template completeReader(stream: StreamTransport) =
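  # Note: the body deliberately refers to `transp` from the template's
  # instantiation site; the `stream` parameter is unused.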
  if not(isNil(transp.reader)) and not(transp.reader.finished()):
    transp.reader.complete()
    transp.reader = nil

template setReadError(t, e: untyped) =
  (t).state.incl(ReadError)
  (t).error = getTransportOsError(e)

template checkPending(t: untyped) =
  if not(isNil((t).reader)):
    raise newException(TransportError, "Read operation already pending!")

template shiftVectorBuffer(v: var StreamVector, o: untyped) =
  (v).buf = cast[pointer](cast[uint]((v).buf) + uint(o))
  (v).buflen -= int(o)

template shiftVectorFile(v: var StreamVector, o: untyped) =
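  # For `DataFile` vectors, `buf` holds the number of remaining bytes (see
  # `getFileSize`) and `offset` the file position, so shifting shrinks the
  # remaining size while advancing the offset.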
  (v).buf = cast[pointer](cast[uint]((v).buf) - uint(o))
  (v).offset += uint(o)

proc completePendingWriteQueue(queue: var Deque[StreamVector],
                               v: int) {.inline.} =
  while len(queue) > 0:
    var vector = queue.popFirst()
    if not(vector.writer.finished()):
      vector.writer.complete(v)

proc failPendingWriteQueue(queue: var Deque[StreamVector],
                           error: ref TransportError) {.inline.} =
  while len(queue) > 0:
    var vector = queue.popFirst()
    if not(vector.writer.finished()):
      vector.writer.fail(error)

proc clean(server: StreamServer) {.inline.} =
  if not(server.loopFuture.finished()):
    untrackCounter(StreamServerTrackerName)
    server.loopFuture.complete()
    if not(isNil(server.udata)) and (GCUserData in server.flags):
      GC_unref(cast[ref int](server.udata))
    GC_unref(server)

proc clean(transp: StreamTransport) {.inline.} =
  if not(transp.future.finished()):
    untrackCounter(StreamTransportTrackerName)
    transp.future.complete()
    GC_unref(transp)

template toUnchecked*(a: untyped): untyped =
  cast[ptr UncheckedArray[byte]](a)

func getTransportFlags(server: StreamServer): set[TransportFlags] =
  if ServerFlags.V4Mapped in server.flags:
    {TransportFlags.V4Mapped}
  else:
    {}

when defined(windows):

  template zeroOverlappedOffset(t: untyped) =
    (t).offset = 0
    (t).offsetHigh = 0

  template setOverlappedOffset(t, o: untyped) =
    (t).offset = cast[DWORD](cast[uint64](o) and 0xFFFFFFFF'u64)
    (t).offsetHigh = cast[DWORD](cast[uint64](o) shr 32)

  template getFileSize(v: untyped): uint =
    cast[uint]((v).buf)

  template getFileHandle(v: untyped): HANDLE =
    cast[HANDLE]((v).buflen)

  template setReaderWSABuffer(t: untyped) =
    let res = (t).buffer.reserve()
    (t).rwsabuf.buf = cast[cstring](res.data)
    (t).rwsabuf.len = uint32(res.size)

  template setWriterWSABuffer(t, v: untyped) =
    (t).wwsabuf.buf = cast[cstring](v.buf)
    (t).wwsabuf.len = cast[ULONG](v.buflen)

  func isConnResetError(err: OSErrorCode): bool {.inline.} =
    case err
    of WSAECONNRESET, WSAECONNABORTED, ERROR_PIPE_NOT_CONNECTED:
      true
    else:
      false

proc writeStreamLoop(udata: pointer) {.gcsafe, nimcall.} =
    var bytesCount: uint32
    var ovl = cast[PtrCustomOverlapped](udata)
    var transp = cast[StreamTransport](ovl.data.udata)

    if WriteClosed in transp.state:
      transp.state.excl(WritePending)
      transp.state.incl({WritePaused})
      let error = getTransportUseClosedError()
      failPendingWriteQueue(transp.queue, error)
    else:
      while len(transp.queue) > 0:
        if WritePending in transp.state:
          ## Continuation
          transp.state.excl(WritePending)
          let err = transp.wovl.data.errCode
          case err
          of OSErrorCode(-1):
            bytesCount = transp.wovl.data.bytesCount
            var vector = transp.queue.popFirst()
            if bytesCount == 0:
              if not(vector.writer.finished()):
                vector.writer.complete(0)
            else:
              if transp.kind == TransportKind.Socket:
                if vector.kind == VectorKind.DataBuffer:
                  if bytesCount < transp.wwsabuf.len:
                    vector.shiftVectorBuffer(bytesCount)
                    transp.queue.addFirst(vector)
                  else:
                    if not(vector.writer.finished()):
                      # This conversion to `int` is safe, because it is
                      # impossible to call write() with a size bigger
                      # than `int`.
                      vector.writer.complete(int(transp.wwsabuf.len))
                else:
                  if uint(bytesCount) < getFileSize(vector):
                    vector.shiftVectorFile(bytesCount)
                    transp.queue.addFirst(vector)
                  else:
                    if not(vector.writer.finished()):
                      vector.writer.complete(int(getFileSize(vector)))
              elif transp.kind == TransportKind.Pipe:
                if vector.kind == VectorKind.DataBuffer:
                  if bytesCount < transp.wwsabuf.len:
                    vector.shiftVectorBuffer(bytesCount)
                    transp.queue.addFirst(vector)
                  else:
                    if not(vector.writer.finished()):
                      # This conversion to `int` is safe, because it is
                      # impossible to call write() with a size bigger
                      # than `int`.
                      vector.writer.complete(int(transp.wwsabuf.len))
          of ERROR_OPERATION_ABORTED:
            # CancelIO() interrupt
            transp.state.incl({WritePaused, WriteEof})
            let vector = transp.queue.popFirst()
            if not(vector.writer.finished()):
              vector.writer.complete(0)
            completePendingWriteQueue(transp.queue, 0)
            break
          else:
            let vector = transp.queue.popFirst()
            if isConnResetError(err):
              # A soft error occurred, indicating that the remote peer has
              # disconnected; complete all pending writes in the queue with 0.
              transp.state.incl({WritePaused, WriteEof})
              if not(vector.writer.finished()):
                vector.writer.complete(0)
              completePendingWriteQueue(transp.queue, 0)
              break
            else:
              transp.state.incl({WritePaused, WriteError})
              let error = getTransportOsError(err)
              if not(vector.writer.finished()):
                vector.writer.fail(error)
              failPendingWriteQueue(transp.queue, error)
              break
        else:
          ## Initiation
          transp.state.incl(WritePending)
          if transp.kind == TransportKind.Socket:
            let sock = SocketHandle(transp.fd)
            var vector = transp.queue.popFirst()
            if vector.kind == VectorKind.DataBuffer:
              transp.wovl.zeroOverlappedOffset()
              transp.setWriterWSABuffer(vector)
              let ret = wsaSend(sock, addr transp.wwsabuf, 1,
                                addr bytesCount, DWORD(0),
                                cast[POVERLAPPED](addr transp.wovl), nil)
              if ret != 0:
                let err = osLastError()
                case err
                of ERROR_OPERATION_ABORTED:
                  # CancelIO() interrupt
                  transp.state.excl(WritePending)
                  transp.state.incl({WritePaused, WriteEof})
                  if not(vector.writer.finished()):
                    vector.writer.complete(0)
                  completePendingWriteQueue(transp.queue, 0)
                  break
                of ERROR_IO_PENDING:
                  transp.queue.addFirst(vector)
                else:
                  transp.state.excl(WritePending)
                  if isConnResetError(err):
                    # A soft error occurred, indicating that the remote peer
                    # has disconnected; complete all pending writes in the
                    # queue with 0.
                    transp.state.incl({WritePaused, WriteEof})
                    if not(vector.writer.finished()):
                      vector.writer.complete(0)
                    completePendingWriteQueue(transp.queue, 0)
                    break
                  else:
                    transp.state.incl({WritePaused, WriteError})
                    let error = getTransportOsError(err)
                    if not(vector.writer.finished()):
                      vector.writer.fail(error)
                    failPendingWriteQueue(transp.queue, error)
                    break
              else:
                transp.queue.addFirst(vector)
            else:
              let
                loop = getThreadDispatcher()
                size = min(uint32(getFileSize(vector)), 2_147_483_646'u32)

              transp.wovl.setOverlappedOffset(vector.offset)
              var ret = loop.transmitFile(sock, getFileHandle(vector), size,
                                          0'u32,
                                          cast[POVERLAPPED](addr transp.wovl),
                                          nil, 0'u32)
              if ret == 0:
                let err = osLastError()
                case err
                of ERROR_OPERATION_ABORTED:
                  # CancelIO() interrupt
                  transp.state.excl(WritePending)
                  transp.state.incl({WritePaused, WriteEof})
                  if not(vector.writer.finished()):
                    vector.writer.complete(0)
                  completePendingWriteQueue(transp.queue, 0)
                  break
                of ERROR_IO_PENDING:
                  transp.queue.addFirst(vector)
                else:
                  transp.state.excl(WritePending)
                  if isConnResetError(err):
                    # A soft error occurred, indicating that the remote peer
                    # has disconnected; complete all pending writes in the
                    # queue with 0.
                    transp.state.incl({WritePaused, WriteEof})
                    if not(vector.writer.finished()):
                      vector.writer.complete(0)
                    completePendingWriteQueue(transp.queue, 0)
                    break
                  else:
                    transp.state.incl({WritePaused, WriteError})
                    let error = getTransportOsError(err)
                    if not(vector.writer.finished()):
                      vector.writer.fail(error)
                    failPendingWriteQueue(transp.queue, error)
                    break
              else:
                transp.queue.addFirst(vector)
          elif transp.kind == TransportKind.Pipe:
            let pipe = HANDLE(transp.fd)
            var vector = transp.queue.popFirst()
            if vector.kind == VectorKind.DataBuffer:
              transp.wovl.zeroOverlappedOffset()
              transp.setWriterWSABuffer(vector)
              let ret = writeFile(pipe, cast[pointer](transp.wwsabuf.buf),
                                  DWORD(transp.wwsabuf.len), addr bytesCount,
                                  cast[POVERLAPPED](addr transp.wovl))
              if ret == 0:
                let err = osLastError()
                case err
                of ERROR_OPERATION_ABORTED, ERROR_NO_DATA:
                  # CancelIO() interrupt
                  transp.state.excl(WritePending)
                  transp.state.incl({WritePaused, WriteEof})
                  if not(vector.writer.finished()):
                    vector.writer.complete(0)
                  completePendingWriteQueue(transp.queue, 0)
                  break
                of ERROR_IO_PENDING:
                  transp.queue.addFirst(vector)
                else:
                  transp.state.excl(WritePending)
                  if isConnResetError(err):
                    # A soft error occurred, indicating that the remote peer
                    # has disconnected; complete all pending writes in the
                    # queue with 0.
                    transp.state.incl({WritePaused, WriteEof})
                    if not(vector.writer.finished()):
                      vector.writer.complete(0)
                    completePendingWriteQueue(transp.queue, 0)
                    break
                  else:
                    transp.state.incl({WritePaused, WriteError})
                    let error = getTransportOsError(err)
                    if not(vector.writer.finished()):
                      vector.writer.fail(error)
                    failPendingWriteQueue(transp.queue, error)
                    break
              else:
                transp.queue.addFirst(vector)
          break

    if len(transp.queue) == 0:
      transp.state.incl(WritePaused)

proc readStreamLoop(udata: pointer) {.gcsafe, nimcall.} =
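    # Reading follows the bip-buffer flow: `setReaderWSABuffer` reserves free
    # buffer space for the OS to fill, and `commit(bytesCount)` later
    # publishes the bytes the completed operation actually wrote.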
    var ovl = cast[PtrCustomOverlapped](udata)
    var transp = cast[StreamTransport](ovl.data.udata)
    while true:
      if ReadPending in transp.state:
        ## Continuation
        transp.state.excl(ReadPending)
        let err = transp.rovl.data.errCode
        case err
        of OSErrorCode(-1):
          let bytesCount = transp.rovl.data.bytesCount
          if bytesCount == 0:
            transp.state.incl({ReadEof, ReadPaused})
          else:
            transp.buffer.commit(bytesCount)
            if transp.buffer.availSpace() == 0:
              transp.state.incl(ReadPaused)
        of ERROR_OPERATION_ABORTED, ERROR_CONNECTION_ABORTED,
           ERROR_BROKEN_PIPE:
          # CancelIO() interrupt or closeSocket() call.
          transp.buffer.commit(0)
          transp.state.incl(ReadPaused)
        of ERROR_NETNAME_DELETED, WSAECONNABORTED:
          transp.buffer.commit(0)
          if transp.kind == TransportKind.Socket:
            transp.state.incl({ReadEof, ReadPaused})
          else:
            transp.setReadError(err)
        of ERROR_PIPE_NOT_CONNECTED:
          transp.buffer.commit(0)
          if transp.kind == TransportKind.Pipe:
            transp.state.incl({ReadEof, ReadPaused})
          else:
            transp.setReadError(err)
        else:
          transp.buffer.commit(0)
          transp.setReadError(err)

        transp.completeReader()

        if ReadClosed in transp.state:
          # Stop tracking transport
          transp.clean()

        if ReadPaused in transp.state:
          # Transport buffer is full, so we will not continue reading.
          break
      else:
        ## Initiation
        if transp.state * {ReadEof, ReadClosed, ReadError} == {}:
          var flags = DWORD(0)
          var bytesCount = 0'u32
          transp.state.excl(ReadPaused)
          transp.state.incl(ReadPending)
          if transp.kind == TransportKind.Socket:
            let sock = SocketHandle(transp.fd)
            transp.setReaderWSABuffer()
            let ret = wsaRecv(sock, addr transp.rwsabuf, 1,
                              addr bytesCount, addr flags,
                              cast[POVERLAPPED](addr transp.rovl), nil)
            if ret != 0:
              let err = osLastError()
              case err
              of ERROR_OPERATION_ABORTED:
                # CancelIO() interrupt
                transp.state.excl(ReadPending)
                transp.state.incl(ReadPaused)
              of WSAECONNRESET, WSAENETRESET, WSAECONNABORTED:
                transp.state.excl(ReadPending)
                transp.state.incl({ReadEof, ReadPaused})
                transp.completeReader()
              of ERROR_IO_PENDING:
                discard
              else:
                transp.state.excl(ReadPending)
                transp.state.incl(ReadPaused)
                transp.setReadError(err)
                transp.completeReader()
          elif transp.kind == TransportKind.Pipe:
            let pipe = HANDLE(transp.fd)
            transp.setReaderWSABuffer()
            let ret = readFile(pipe, cast[pointer](transp.rwsabuf.buf),
                               DWORD(transp.rwsabuf.len), addr bytesCount,
                               cast[POVERLAPPED](addr transp.rovl))
            if ret == 0:
              let err = osLastError()
              case err
              of ERROR_OPERATION_ABORTED:
                # CancelIO() interrupt
                transp.state.excl(ReadPending)
                transp.state.incl(ReadPaused)
              of ERROR_BROKEN_PIPE, ERROR_PIPE_NOT_CONNECTED:
                transp.state.excl(ReadPending)
                transp.state.incl({ReadEof, ReadPaused})
                transp.completeReader()
              of ERROR_IO_PENDING:
                discard
              else:
                transp.state.excl(ReadPending)
                transp.state.incl(ReadPaused)
                transp.setReadError(err)
                transp.completeReader()
        else:
          transp.state.incl(ReadPaused)
          transp.completeReader()
          # Transport close happens in the callback, and we have not started
          # a new WSARecvFrom session.
          if ReadClosed in transp.state:
            if not(transp.future.finished()):
              transp.future.complete()
        ## Finish Loop
        break

proc newStreamSocketTransport(sock: AsyncFD, bufsize: int,
                                child: StreamTransport,
                                flags: set[TransportFlags]): StreamTransport =
    var transp: StreamTransport
    if not(isNil(child)):
      transp = child
    else:
      transp = StreamTransport(kind: TransportKind.Socket)
    transp.fd = sock
    transp.flags = flags
    transp.rovl.data = CompletionData(cb: readStreamLoop,
                                      udata: cast[pointer](transp))
    transp.wovl.data = CompletionData(cb: writeStreamLoop,
                                      udata: cast[pointer](transp))
    let size = max(bufsize, DefaultStreamBufferSize)
    transp.buffer = BipBuffer.init(size)
    transp.state = {ReadPaused, WritePaused}
    transp.queue = initDeque[StreamVector]()
    transp.future = Future[void].Raising([]).init(
      "stream.socket.transport", {FutureFlag.OwnCancelSchedule})
    GC_ref(transp)
    transp

proc newStreamPipeTransport(fd: AsyncFD, bufsize: int,
                              child: StreamTransport,
                              flags: set[TransportFlags] = {}): StreamTransport =
    var transp: StreamTransport
    if not(isNil(child)):
      transp = child
    else:
      transp = StreamTransport(kind: TransportKind.Pipe)
    transp.fd = fd
    transp.rovl.data = CompletionData(cb: readStreamLoop,
                                      udata: cast[pointer](transp))
    transp.wovl.data = CompletionData(cb: writeStreamLoop,
                                      udata: cast[pointer](transp))
    let size = max(bufsize, DefaultStreamBufferSize)
    transp.buffer = BipBuffer.init(size)
    transp.flags = flags
    transp.state = {ReadPaused, WritePaused}
    transp.queue = initDeque[StreamVector]()
    transp.future = Future[void].Raising([]).init(
      "stream.pipe.transport", {FutureFlag.OwnCancelSchedule})
    GC_ref(transp)
    transp

proc bindToDomain(handle: AsyncFD,
                    family: AddressFamily): Result[void, OSErrorCode] =
    case family
    of AddressFamily.IPv6:
      var saddr: Sockaddr_in6
      saddr.sin6_family = type(saddr.sin6_family)(osdefs.AF_INET6)
      if osdefs.bindSocket(SocketHandle(handle),
                           cast[ptr SockAddr](addr(saddr)),
                           sizeof(saddr).SockLen) != 0'i32:
        return err(osLastError())
      ok()
    of AddressFamily.IPv4:
      var saddr: Sockaddr_in
      saddr.sin_family = type(saddr.sin_family)(osdefs.AF_INET)
      if osdefs.bindSocket(SocketHandle(handle),
                           cast[ptr SockAddr](addr(saddr)),
                           sizeof(saddr).SockLen) != 0'i32:
        return err(osLastError())
      ok()
    else:
      raiseAssert "Unsupported family"

proc connect*(address: TransportAddress,
                bufferSize = DefaultStreamBufferSize,
                child: StreamTransport = nil,
                localAddress = TransportAddress(),
                flags: set[SocketFlags] = {},
                dualstack = DualStackType.Auto
               ): Future[StreamTransport] {.
      async: (raw: true, raises: [TransportError, CancelledError]).} =
    ## Open new connection to remote peer with address ``address`` and create
    ## new transport object ``StreamTransport`` for established connection.
    ## ``bufferSize`` is size of internal buffer for transport.
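    ##
    ## A minimal usage sketch (the address is illustrative):
    ##
    ## ```nim
    ## proc main() {.async.} =
    ##   let transp = await connect(initTAddress("127.0.0.1:8080"))
    ##   try:
    ##     discard await transp.write("hello\r\n")
    ##   finally:
    ##     await transp.closeWait()
    ## ```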
    let loop = getThreadDispatcher()
    var retFuture = newFuture[StreamTransport]("stream.transport.connect")
    if address.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
      ## Socket handling part
      var
        saddr: Sockaddr_storage
        slen: SockLen
        sock: AsyncFD
        povl: RefCustomOverlapped
        proto: Protocol

      var raddress = windowsAnyAddressFix(address)

      toSAddr(raddress, saddr, slen)
      proto = Protocol.IPPROTO_TCP
      sock = createAsyncSocket2(raddress.getDomain(), SockType.SOCK_STREAM,
                                proto).valueOr:
        retFuture.fail(getTransportOsError(error))
        return retFuture

      if address.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
        if SocketFlags.TcpNoDelay in flags:
          setSockOpt2(sock, osdefs.IPPROTO_TCP, osdefs.TCP_NODELAY, 1).isOkOr:
            sock.closeSocket()
            retFuture.fail(getTransportOsError(error))
            return retFuture

      if SocketFlags.ReuseAddr in flags:
        setSockOpt2(sock, SOL_SOCKET, SO_REUSEADDR, 1).isOkOr:
          sock.closeSocket()
          retFuture.fail(getTransportOsError(error))
          return retFuture
      if SocketFlags.ReusePort in flags:
        setSockOpt2(sock, SOL_SOCKET, SO_REUSEPORT, 1).isOkOr:
          sock.closeSocket()
          retFuture.fail(getTransportOsError(error))
          return retFuture
      # IPV6_V6ONLY.
      setDualstack(sock, address.family, dualstack).isOkOr:
        sock.closeSocket()
        retFuture.fail(getTransportOsError(error))
        return retFuture

      let transportFlags =
        block:
          # Add `V4Mapped` flag when `::` address is used and dualstack is
          # set to enabled or auto.
          var res: set[TransportFlags]
          if (localAddress.family == AddressFamily.IPv6) and
             localAddress.isAnyLocal():
            if dualstack in {DualStackType.Enabled, DualStackType.Auto}:
              res.incl(TransportFlags.V4Mapped)
          res

      case localAddress.family
      of AddressFamily.IPv4, AddressFamily.IPv6:
        var
          saddr: Sockaddr_storage
          slen: SockLen
        toSAddr(localAddress, saddr, slen)
        if bindSocket(SocketHandle(sock),
                      cast[ptr SockAddr](addr saddr), slen) != 0:
          sock.closeSocket()
          retFuture.fail(getTransportOsError(osLastError()))
          return retFuture
      of AddressFamily.Unix:
        raiseAssert "Unsupported local address family"
      of AddressFamily.None:
        let res = bindToDomain(sock, raddress.family)
        if res.isErr():
          sock.closeSocket()
          retFuture.fail(getTransportOsError(res.error))
          return retFuture

      proc socketContinuation(udata: pointer) {.gcsafe.} =
        var ovl = cast[RefCustomOverlapped](udata)
        if not(retFuture.finished()):
          if ovl.data.errCode == OSErrorCode(-1):
            if setsockopt(SocketHandle(sock), cint(SOL_SOCKET),
                          cint(SO_UPDATE_CONNECT_CONTEXT), nil,
                          SockLen(0)) != 0'i32:
              let err = wsaGetLastError()
              sock.closeSocket()
              retFuture.fail(getTransportOsError(err))
            else:
              let transp = newStreamSocketTransport(sock, bufferSize, child,
                                                    transportFlags)
              # Start tracking transport
              trackCounter(StreamTransportTrackerName)
              retFuture.complete(transp)
          else:
            sock.closeSocket()
            retFuture.fail(getTransportOsError(ovl.data.errCode))
        GC_unref(ovl)

      proc cancel(udata: pointer) {.gcsafe.} =
        sock.closeSocket()

      povl = RefCustomOverlapped()
      GC_ref(povl)
      povl.data = CompletionData(cb: socketContinuation)
      let res = loop.connectEx(SocketHandle(sock),
                               cast[ptr SockAddr](addr saddr),
                               cint(slen), nil, 0, nil,
                               cast[POVERLAPPED](povl))
      # We will not process immediate completion, to avoid undefined behavior.
      if res == FALSE:
        let err = osLastError()
        case err
        of ERROR_IO_PENDING:
          discard
        else:
          GC_unref(povl)
          sock.closeSocket()
          retFuture.fail(getTransportOsError(err))

      retFuture.cancelCallback = cancel

    elif address.family == AddressFamily.Unix:
      ## Unix domain socket emulation with Windows Named Pipes.
      # For some reason the Nim compiler does not detect `pipeHandle` usage in
      # the pipeContinuation() procedure, so we mark it as {.used.} here.
      var pipeHandle {.used.} = INVALID_HANDLE_VALUE
      var pipeContinuation: proc (udata: pointer) {.gcsafe, raises: [].}

      pipeContinuation = proc (udata: pointer) {.gcsafe, raises: [].} =
        # Continue only if `retFuture` is not cancelled.
        if not(retFuture.finished()):
          let
            pipeSuffix = $cast[cstring](baseAddr address.address_un)
            pipeAsciiName = PipeHeaderName & pipeSuffix[1 .. ^1]
            pipeName = toWideString(pipeAsciiName).valueOr:
              retFuture.fail(getTransportOsError(error))
              return
            genericFlags = GENERIC_READ or GENERIC_WRITE
            shareFlags = FILE_SHARE_READ or FILE_SHARE_WRITE
          pipeHandle = createFile(pipeName, genericFlags, shareFlags,
                                  nil, OPEN_EXISTING,
                                  FILE_FLAG_OVERLAPPED, HANDLE(0))
          free(pipeName)
          if pipeHandle == INVALID_HANDLE_VALUE:
            let err = osLastError()
            case err
            of ERROR_PIPE_BUSY:
              discard setTimer(Moment.fromNow(50.milliseconds),
                               pipeContinuation, nil)
            else:
              retFuture.fail(getTransportOsError(err))
          else:
            let res = register2(AsyncFD(pipeHandle))
            if res.isErr():
              retFuture.fail(getTransportOsError(res.error()))
              return

            let transp = newStreamPipeTransport(AsyncFD(pipeHandle),
                                                bufferSize, child)
            # Start tracking transport
            trackCounter(StreamTransportTrackerName)
            retFuture.complete(transp)
      pipeContinuation(nil)

    return retFuture

proc createAcceptPipe(server: StreamServer): Result[AsyncFD, OSErrorCode] =
    let
      pipeSuffix = $cast[cstring](baseAddr server.local.address_un)
      pipeName = ? toWideString(PipeHeaderName & pipeSuffix)
      openMode =
        if FirstPipe notin server.flags:
          server.flags.incl(FirstPipe)
          PIPE_ACCESS_DUPLEX or FILE_FLAG_OVERLAPPED or
            FILE_FLAG_FIRST_PIPE_INSTANCE
        else:
          PIPE_ACCESS_DUPLEX or FILE_FLAG_OVERLAPPED
      pipeMode = PIPE_TYPE_BYTE or PIPE_READMODE_BYTE or PIPE_WAIT
      pipeHandle = createNamedPipe(pipeName, openMode, pipeMode,
                                   PIPE_UNLIMITED_INSTANCES,
                                   DWORD(server.bufferSize),
                                   DWORD(server.bufferSize),
                                   DWORD(0), nil)
    free(pipeName)
    if pipeHandle == INVALID_HANDLE_VALUE:
      return err(osLastError())
    let res = register2(AsyncFD(pipeHandle))
    if res.isErr():
      discard closeHandle(pipeHandle)
      return err(res.error())

    ok(AsyncFD(pipeHandle))

proc acceptPipeLoop(udata: pointer) {.gcsafe, nimcall.} =
    var ovl = cast[PtrCustomOverlapped](udata)
    var server = cast[StreamServer](ovl.data.udata)

    while true:
      if server.apending:
        ## Continuation
        server.apending = false
        if server.status notin {ServerStatus.Stopped, ServerStatus.Closed}:
          case ovl.data.errCode
          of OSErrorCode(-1):
            var ntransp: StreamTransport
            var flags = {WinServerPipe}
            if NoPipeFlash in server.flags:
              flags.incl(WinNoPipeFlash)
            if not(isNil(server.init)):
              var transp = server.init(server, server.sock)
              ntransp = newStreamPipeTransport(server.sock, server.bufferSize,
                                               transp, flags)
            else:
              ntransp = newStreamPipeTransport(server.sock, server.bufferSize,
                                               nil, flags)
            # Start tracking transport
            trackCounter(StreamTransportTrackerName)
            asyncSpawn server.function(server, ntransp)
          of ERROR_OPERATION_ABORTED:
            # CancelIO() interrupt or close call.
            if server.status in {ServerStatus.Closed, ServerStatus.Stopped}:
              server.clean()
            break
          else:
            # We should not raise defects in this loop.
            discard disconnectNamedPipe(HANDLE(server.sock))
            discard closeHandle(HANDLE(server.sock))
            raiseOsDefect(osLastError(), "acceptPipeLoop(): Unable to accept " &
                                         "new pipe connection")
        else:
          # Server close happens in the callback, and we have not started a
          # new connectNamedPipe session.
          if not(server.loopFuture.finished()):
            server.clean()
          break
      else:
        ## Initiation
        if server.status notin {ServerStatus.Stopped, ServerStatus.Closed}:
          server.apending = true
          let
            pipeSuffix = $cast[cstring](baseAddr server.local.address_un)
            pipeAsciiName = PipeHeaderName & pipeSuffix
            pipeName = toWideString(pipeAsciiName).valueOr:
              raiseOsDefect(error, "acceptPipeLoop(): Unable to create name " &
                                   "for new pipe connection")
            openMode =
              if FirstPipe notin server.flags:
                server.flags.incl(FirstPipe)
                PIPE_ACCESS_DUPLEX or FILE_FLAG_OVERLAPPED or
                  FILE_FLAG_FIRST_PIPE_INSTANCE
              else:
                PIPE_ACCESS_DUPLEX or FILE_FLAG_OVERLAPPED
            pipeMode = PIPE_TYPE_BYTE or PIPE_READMODE_BYTE or PIPE_WAIT
            pipeHandle = createNamedPipe(pipeName, openMode, pipeMode,
                                         PIPE_UNLIMITED_INSTANCES,
                                         DWORD(server.bufferSize),
                                         DWORD(server.bufferSize),
                                         DWORD(0), nil)
          free(pipeName)
          if pipeHandle == INVALID_HANDLE_VALUE:
            raiseOsDefect(osLastError(), "acceptPipeLoop(): Unable to create " &
                                         "new pipe")
          server.sock = AsyncFD(pipeHandle)
          let wres = register2(server.sock)
          if wres.isErr():
            raiseOsDefect(wres.error(), "acceptPipeLoop(): Unable to " &
                                        "register new pipe in dispatcher")
          let res = connectNamedPipe(pipeHandle,
                                     cast[POVERLAPPED](addr server.aovl))
          if res == 0:
            let errCode = osLastError()
            if errCode == ERROR_OPERATION_ABORTED:
              server.apending = false
              break
            elif errCode == ERROR_IO_PENDING:
              discard
            elif errCode == ERROR_PIPE_CONNECTED:
              discard
            else:
              raiseOsDefect(errCode, "acceptPipeLoop(): Unable to establish " &
                                     "pipe connection")
        else:
          # Server close happens in the callback, and we have not started a
          # new connectNamedPipe session.
          if not(server.loopFuture.finished()):
            server.clean()
          break

proc acceptLoop(udata: pointer) {.gcsafe, nimcall.} =
|
|
|
|
var ovl = cast[PtrCustomOverlapped](udata)
|
|
|
|
var server = cast[StreamServer](ovl.data.udata)
|
2021-01-11 17:15:23 +00:00
|
|
|
var loop = getThreadDispatcher()
|
2018-05-16 08:22:34 +00:00
|
|
|
|
2018-06-04 09:57:17 +00:00
|
|
|
while true:
|
|
|
|
if server.apending:
|
|
|
|
## Continuation
|
|
|
|
server.apending = false
|
2020-07-15 08:09:34 +00:00
|
|
|
if server.status notin {ServerStatus.Stopped, ServerStatus.Closed}:
|
2023-04-30 06:20:08 +00:00
|
|
|
case ovl.data.errCode
|
|
|
|
of OSErrorCode(-1):
|
|
|
|
if setsockopt(SocketHandle(server.asock), cint(SOL_SOCKET),
|
|
|
|
cint(SO_UPDATE_ACCEPT_CONTEXT),
|
2023-02-21 10:48:36 +00:00
|
|
|
addr server.sock,
|
2020-07-15 08:09:34 +00:00
|
|
|
SockLen(sizeof(SocketHandle))) != 0'i32:
|
2023-02-21 10:48:36 +00:00
|
|
|
let errCode = OSErrorCode(wsaGetLastError())
|
2020-07-15 08:09:34 +00:00
|
|
|
server.asock.closeSocket()
|
2023-02-21 10:48:36 +00:00
|
|
|
raiseOsDefect(errCode, "acceptLoop(): Unable to set accept " &
|
|
|
|
"context for socket")
|
2019-03-30 22:31:10 +00:00
|
|
|
else:
|
2020-07-15 08:09:34 +00:00
|
|
|
var ntransp: StreamTransport
|
2021-09-04 21:53:27 +00:00
|
|
|
if not(isNil(server.init)):
|
2020-07-15 08:09:34 +00:00
|
|
|
let transp = server.init(server, server.asock)
|
|
|
|
ntransp = newStreamSocketTransport(server.asock,
|
|
|
|
server.bufferSize,
|
2024-04-13 00:04:42 +00:00
|
|
|
transp,
|
|
|
|
server.getTransportFlags())
|
2020-07-15 08:09:34 +00:00
|
|
|
else:
|
|
|
|
ntransp = newStreamSocketTransport(server.asock,
|
2024-04-13 00:04:42 +00:00
|
|
|
server.bufferSize, nil,
|
|
|
|
server.getTransportFlags())
|
2020-07-15 08:09:34 +00:00
|
|
|
# Start tracking transport
|
2023-07-14 10:35:08 +00:00
|
|
|
trackCounter(StreamTransportTrackerName)
|
2023-02-21 10:48:36 +00:00
|
|
|
asyncSpawn server.function(server, ntransp)
|
2019-03-30 22:31:10 +00:00
|
|
|
|
2023-04-30 06:20:08 +00:00
|
|
|
of ERROR_OPERATION_ABORTED:
|
2020-07-15 08:09:34 +00:00
|
|
|
# CancelIO() interrupt or close.
|
|
|
|
server.asock.closeSocket()
|
|
|
|
if server.status in {ServerStatus.Closed, ServerStatus.Stopped}:
|
|
|
|
# Stop tracking server
|
|
|
|
if not(server.loopFuture.finished()):
|
|
|
|
server.clean()
|
|
|
|
break
|
|
|
|
else:
|
|
|
|
server.asock.closeSocket()
|
2023-02-21 10:48:36 +00:00
|
|
|
raiseOsDefect(ovl.data.errCode, "acceptLoop(): Unable to accept " &
|
|
|
|
"new connection")
|
2019-03-30 22:31:10 +00:00
|
|
|
else:
|
2020-07-15 08:09:34 +00:00
|
|
|
# Server close happens in callback, and we are not started new
|
|
|
|
# AcceptEx session.
|
|
|
|
if not(server.loopFuture.finished()):
|
|
|
|
server.clean()
|
|
|
|
break
      else:
        ## Initiation
        if server.status notin {ServerStatus.Stopped, ServerStatus.Closed}:
          server.apending = true
          # TODO No way to report back errors!
          server.asock = createAsyncSocket2(server.domain, SockType.SOCK_STREAM,
                                            Protocol.IPPROTO_TCP).valueOr:
            raiseOsDefect(error, "acceptLoop(): Unable to create new socket")

          var dwBytesReceived = DWORD(0)
          let dwReceiveDataLength = DWORD(0)
          let dwLocalAddressLength = DWORD(sizeof(Sockaddr_in6) + 16)
          let dwRemoteAddressLength = DWORD(sizeof(Sockaddr_in6) + 16)
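          # AcceptEx() requires each address slot of the receive buffer to be
          # at least 16 bytes larger than the maximum sockaddr size of the
          # transport protocol, hence `sizeof(Sockaddr_in6) + 16`.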

          let res = loop.acceptEx(SocketHandle(server.sock),
                                  SocketHandle(server.asock),
                                  addr server.abuffer[0],
                                  dwReceiveDataLength, dwLocalAddressLength,
                                  dwRemoteAddressLength, addr dwBytesReceived,
                                  cast[POVERLAPPED](addr server.aovl))
          if res == FALSE:
            let errCode = osLastError()
            if errCode == ERROR_OPERATION_ABORTED:
              server.apending = false
              break
            elif errCode == ERROR_IO_PENDING:
              discard
            else:
              raiseOsDefect(errCode, "acceptLoop(): Unable to accept " &
                                     "connection")
          break
        else:
          # Server close happened in the callback and no new AcceptEx
          # session has been started.
          if not(server.loopFuture.finished()):
            server.clean()
          break

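  # On Windows the stream loops are driven by IOCP completion events, so
  # resuming a paused transport simply re-enters the corresponding loop with
  # the transport's overlapped structure.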
  proc resumeRead(transp: StreamTransport): Result[void, OSErrorCode] =
    if ReadPaused in transp.state:
      transp.state.excl(ReadPaused)
      readStreamLoop(cast[pointer](addr transp.rovl))
    ok()

  proc resumeWrite(transp: StreamTransport): Result[void, OSErrorCode] =
    if WritePaused in transp.state:
      transp.state.excl(WritePaused)
      writeStreamLoop(cast[pointer](addr transp.wovl))
    ok()

  proc pauseAccept(server: StreamServer): Result[void, OSErrorCode] =
    if server.apending:
      discard cancelIo(HANDLE(server.sock))
    ok()

  proc resumeAccept(server: StreamServer): Result[void, OSErrorCode] =
    if not(server.apending):
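      # Re-invoke the stored accept callback directly; its initiation phase
      # will submit a new AcceptEx()/ConnectNamedPipe() request.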
      server.aovl.data.cb(addr server.aovl)
    ok()

  proc accept*(server: StreamServer): Future[StreamTransport] {.
      async: (raw: true, raises: [TransportUseClosedError,
        TransportTooManyError, TransportAbortedError, TransportOsError,
        CancelledError]).} =
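    ## Accept a single incoming connection and return a ``StreamTransport``
    ## for it. A minimal usage sketch (address and port are illustrative
    ## only):
    ##
    ## .. code-block:: nim
    ##   let server = createStreamServer(initTAddress("127.0.0.1:30080"))
    ##   let transp = await server.accept()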
    var retFuture = newFuture[StreamTransport]("stream.server.accept")

    doAssert(server.status != ServerStatus.Running,
             "accept() cannot be used if the server was already started")

    if server.status == ServerStatus.Closed:
      retFuture.fail(getServerUseClosedError())
      return retFuture

    proc continuationSocket(udata: pointer) {.gcsafe.} =
      if retFuture.finished():
        # `retFuture` could become finished in 2 cases:
        # 1. The OS sends an IOCP notification about failure, but we have
        #    already failed `retFuture` with the proper error.
        # 2. The `accept()` call has been cancelled. The cancellation callback
        #    closed the accepting socket, so the OS sends an IOCP notification
        #    with an `ERROR_OPERATION_ABORTED` error.
        return

      var
        ovl = cast[PtrCustomOverlapped](udata)
        server = cast[StreamServer](ovl.data.udata)
      server.apending = false

      if server.status in {ServerStatus.Stopped, ServerStatus.Closed}:
        retFuture.fail(getServerUseClosedError())
        server.asock.closeSocket()
        server.clean()
      else:
        case ovl.data.errCode
        of OSErrorCode(-1):
          if setsockopt(SocketHandle(server.asock), cint(SOL_SOCKET),
                        cint(SO_UPDATE_ACCEPT_CONTEXT),
                        addr server.sock,
                        SockLen(sizeof(SocketHandle))) != 0'i32:
            let err = osLastError()
            server.asock.closeSocket()
            case err
            of WSAENOTSOCK:
              # This can happen when the server gets closed while this
              # continuation is already scheduled, so we fail the future
              # with a non-OS error.
              retFuture.fail(getServerUseClosedError())
            else:
              let errorMsg = osErrorMsg(err)
              retFuture.fail(getConnectionAbortedError(errorMsg))
          else:
            var ntransp: StreamTransport
            if not(isNil(server.init)):
              let transp = server.init(server, server.asock)
              ntransp = newStreamSocketTransport(server.asock,
                                                 server.bufferSize,
                                                 transp,
                                                 server.getTransportFlags())
            else:
              ntransp = newStreamSocketTransport(server.asock,
                                                 server.bufferSize, nil,
                                                 server.getTransportFlags())
            # Start tracking transport
            trackCounter(StreamTransportTrackerName)
            retFuture.complete(ntransp)
        of ERROR_OPERATION_ABORTED:
          # CancelIO() interrupt or close.
          server.asock.closeSocket()
          retFuture.fail(getServerUseClosedError())
          server.clean()
        of WSAENETDOWN, WSAENETRESET, WSAECONNABORTED, WSAECONNRESET,
           WSAETIMEDOUT, ERROR_NETNAME_DELETED:
          server.asock.closeSocket()
          retFuture.fail(getConnectionAbortedError(ovl.data.errCode))
          server.clean()
        else:
          server.asock.closeSocket()
          retFuture.fail(getTransportOsError(ovl.data.errCode))

    proc cancellationSocket(udata: pointer) {.gcsafe.} =
      if server.apending:
        server.apending = false
      server.asock.closeSocket()

    proc continuationPipe(udata: pointer) {.gcsafe.} =
      if retFuture.finished():
        # `retFuture` could become finished in 2 cases:
        # 1. The OS sends an IOCP notification about failure, but we have
        #    already failed `retFuture` with the proper error.
        # 2. The `accept()` call has been cancelled. The cancellation callback
        #    closed the accepting socket, so the OS sends an IOCP notification
        #    with an `ERROR_OPERATION_ABORTED` error.
        return

      var
        ovl = cast[PtrCustomOverlapped](udata)
        server = cast[StreamServer](ovl.data.udata)
      server.apending = false

      if server.status in {ServerStatus.Stopped, ServerStatus.Closed}:
        retFuture.fail(getServerUseClosedError())
        server.sock.closeHandle()
        server.clean()
      else:
        case ovl.data.errCode
        of OSErrorCode(-1):
          var ntransp: StreamTransport
          var flags = {WinServerPipe}
          if NoPipeFlash in server.flags:
            flags.incl(WinNoPipeFlash)
          if not(isNil(server.init)):
            var transp = server.init(server, server.sock)
            ntransp = newStreamPipeTransport(server.sock, server.bufferSize,
                                             transp, flags)
          else:
            ntransp = newStreamPipeTransport(server.sock, server.bufferSize,
                                             nil, flags)
          server.sock = server.createAcceptPipe().valueOr:
            server.sock = asyncInvalidSocket
            server.errorCode = error
            retFuture.fail(getTransportOsError(error))
            return

          trackCounter(StreamTransportTrackerName)
          retFuture.complete(ntransp)
        of ERROR_OPERATION_ABORTED, ERROR_PIPE_NOT_CONNECTED:
          # CancelIO() interrupt or close call.
          retFuture.fail(getServerUseClosedError())
          server.clean()
        else:
          discard closeHandle(HANDLE(server.sock))
          server.sock = server.createAcceptPipe().valueOr:
            server.sock = asyncInvalidSocket
            server.errorCode = error
            retFuture.fail(getTransportOsError(error))
            return
          retFuture.fail(getTransportOsError(ovl.data.errCode))

    proc cancellationPipe(udata: pointer) {.gcsafe.} =
      if server.apending:
        server.apending = false
      server.sock.closeHandle()

    if server.local.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
      # TCP Sockets part
      var loop = getThreadDispatcher()
      server.asock = createAsyncSocket2(server.domain, SockType.SOCK_STREAM,
                                        Protocol.IPPROTO_TCP).valueOr:
        case error
        of ERROR_TOO_MANY_OPEN_FILES, WSAENOBUFS, WSAEMFILE:
          retFuture.fail(getTransportTooManyError(error))
        else:
          retFuture.fail(getTransportOsError(error))
        return retFuture

      var dwBytesReceived = DWORD(0)
      let dwReceiveDataLength = DWORD(0)
      let dwLocalAddressLength = DWORD(sizeof(Sockaddr_in6) + 16)
      let dwRemoteAddressLength = DWORD(sizeof(Sockaddr_in6) + 16)

      server.aovl.data = CompletionData(cb: continuationSocket,
                                        udata: cast[pointer](server))
      server.apending = true
      let res = loop.acceptEx(SocketHandle(server.sock),
                              SocketHandle(server.asock),
                              addr server.abuffer[0],
                              dwReceiveDataLength, dwLocalAddressLength,
                              dwRemoteAddressLength, addr dwBytesReceived,
                              cast[POVERLAPPED](addr server.aovl))
      if res == FALSE:
        let err = osLastError()
        case err
        of ERROR_OPERATION_ABORTED:
          server.apending = false
          retFuture.fail(getServerUseClosedError())
          return retFuture
        of ERROR_IO_PENDING:
          discard
        of WSAECONNRESET, WSAECONNABORTED, WSAENETDOWN,
           WSAENETRESET, WSAETIMEDOUT:
          server.apending = false
          retFuture.fail(getConnectionAbortedError(err))
          return retFuture
        else:
          server.apending = false
          retFuture.fail(getTransportOsError(err))
          return retFuture

      retFuture.cancelCallback = cancellationSocket

    elif server.local.family in {AddressFamily.Unix}:
      # Unix domain sockets emulation via Windows Named pipes part.
      server.apending = true
      if server.sock == asyncInvalidPipe:
        let err = server.errorCode
        case err
        of ERROR_TOO_MANY_OPEN_FILES:
          retFuture.fail(getTransportTooManyError())
        else:
          retFuture.fail(getTransportOsError(err))
        return retFuture

      server.aovl.data = CompletionData(cb: continuationPipe,
                                        udata: cast[pointer](server))
      server.apending = true
      let res = connectNamedPipe(HANDLE(server.sock),
                                 cast[POVERLAPPED](addr server.aovl))
      if res == 0:
        let err = osLastError()
        case err
        of ERROR_OPERATION_ABORTED:
          server.apending = false
          retFuture.fail(getServerUseClosedError())
          return retFuture
        of ERROR_IO_PENDING, ERROR_PIPE_CONNECTED:
          discard
        else:
          server.apending = false
          retFuture.fail(getTransportOsError(err))
          return retFuture

      retFuture.cancelCallback = cancellationPipe

    return retFuture

else:
  import ../sendfile

  proc isConnResetError(err: OSErrorCode): bool {.inline.} =
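    # Both codes indicate a peer disconnect: ECONNRESET is reported after the
    # remote peer reset the connection, EPIPE when writing to a socket whose
    # peer has closed it.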
    (err == oserrno.ECONNRESET) or (err == oserrno.EPIPE)

  proc writeStreamLoop(udata: pointer) =
    if isNil(udata):
      # TODO this is an if rather than an assert for historical reasons:
      # it should not happen unless there are race conditions - but if there
      # are race conditions, `transp` might be invalid even if it's not nil:
      # it could have been released
      return

    let
      transp = cast[StreamTransport](udata)
      fd = SocketHandle(transp.fd)

    if WriteClosed in transp.state:
      if transp.queue.len > 0:
        let error = getTransportUseClosedError()
        discard removeWriter2(transp.fd)
        failPendingWriteQueue(transp.queue, error)
      return

    # We exit this loop in two ways:
    # * The queue is empty: we call removeWriter to disable further callbacks
    # * EWOULDBLOCK is returned and we need to wait for a new notification
    while len(transp.queue) > 0:
      template handleError() =
        let err = osLastError()
        case err
        of oserrno.EINTR:
          # Signal happened while writing - try again with all data
          transp.queue.addFirst(vector)
          continue
        of oserrno.EWOULDBLOCK:
          # Socket buffer is full - wait until next write notification - in
          # particular, ensure removeWriter is not called
          transp.queue.addFirst(vector)
          return
        else:
          # The errors below will clear the write queue, meaning we'll exit
          # the loop
          if isConnResetError(err):
            # A soft error indicating that the remote peer disconnected -
            # complete all pending writes in the queue with 0.
            transp.state.incl({WriteEof})
            if not(vector.writer.finished()):
              vector.writer.complete(0)
            completePendingWriteQueue(transp.queue, 0)
          else:
            transp.state.incl({WriteError})
            let error = getTransportOsError(err)
            if not(vector.writer.finished()):
              vector.writer.fail(error)
            failPendingWriteQueue(transp.queue, error)

      var vector = transp.queue.popFirst()
      case vector.kind
      of VectorKind.DataBuffer:
        let res =
          case transp.kind
          of TransportKind.Socket:
            osdefs.send(fd, vector.buf, vector.buflen, MSG_NOSIGNAL)
          of TransportKind.Pipe:
            osdefs.write(cint(fd), vector.buf, vector.buflen)
          else: raiseAssert "Unsupported transport kind: " & $transp.kind

        if res >= 0:
          if vector.buflen == res:
            if not(vector.writer.finished()):
              vector.writer.complete(vector.size)
          else:
            vector.shiftVectorBuffer(res)
            transp.queue.addFirst(vector) # Try again with rest of data
        else:
          handleError()

      of VectorKind.DataFile:
        var nbytes = cast[int](vector.buf)
        let res = sendfile(int(fd), cast[int](vector.buflen),
                           int(vector.offset), nbytes)

        # In case of some errors on some systems, some bytes may have been
        # written (see sendfile.nim)
        vector.size += nbytes

        if res >= 0:
          if cast[int](vector.buf) == nbytes:
            if not(vector.writer.finished()):
              vector.writer.complete(vector.size)
          else:
            vector.shiftVectorFile(nbytes)
            transp.queue.addFirst(vector)
        else:
          vector.shiftVectorFile(nbytes)
          handleError()

    # Nothing left in the queue - no need for further write notifications.
    # All writers are already scheduled, so it's impossible to notify about an
    # error.
    transp.state.incl(WritePaused)
    discard removeWriter2(transp.fd)

  proc readStreamLoop(udata: pointer) =
    if isNil(udata):
      # TODO this is an if rather than an assert for historical reasons:
      # it should not happen unless there are race conditions - but if there
      # are race conditions, `transp` might be invalid even if it's not nil:
      # it could have been released
      return

    let
      transp = cast[StreamTransport](udata)
      fd = SocketHandle(transp.fd)

    if ReadClosed in transp.state:
      transp.state.incl({ReadPaused})
      transp.completeReader()
    else:
      if transp.kind == TransportKind.Socket:
        while true:
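          # `transp.buffer` is a bip buffer (see bipbuffer.nim): `reserve()`
          # hands out a writable span, `commit(n)` publishes `n` bytes of it
          # to the reading side and `commit(0)` abandons the reservation.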
          let
            (data, size) = transp.buffer.reserve()
            res = handleEintr(osdefs.recv(fd, data, size, cint(0)))
          if res < 0:
            let err = osLastError()
            transp.buffer.commit(0)
            case err
            of oserrno.ECONNRESET:
              transp.state.incl({ReadEof, ReadPaused})
              let rres = removeReader2(transp.fd)
              if rres.isErr():
                transp.state.incl(ReadError)
                transp.setReadError(rres.error())
            else:
              transp.state.incl({ReadPaused, ReadError})
              transp.setReadError(err)
              discard removeReader2(transp.fd)
          elif res == 0:
            transp.state.incl({ReadEof, ReadPaused})
            transp.buffer.commit(0)
            let rres = removeReader2(transp.fd)
            if rres.isErr():
              transp.state.incl(ReadError)
              transp.setReadError(rres.error())
          else:
            transp.buffer.commit(res)
            if transp.buffer.availSpace() == 0:
              transp.state.incl(ReadPaused)
              let rres = removeReader2(transp.fd)
              if rres.isErr():
                transp.state.incl(ReadError)
                transp.setReadError(rres.error())
          transp.completeReader()
          break

      elif transp.kind == TransportKind.Pipe:
        while true:
          let
            (data, size) = transp.buffer.reserve()
            res = handleEintr(osdefs.read(cint(fd), data, size))
          if res < 0:
            let err = osLastError()
            transp.buffer.commit(0)
            transp.state.incl(ReadPaused)
            transp.setReadError(err)
            discard removeReader2(transp.fd)
          elif res == 0:
            transp.state.incl({ReadEof, ReadPaused})
            transp.buffer.commit(0)
            let rres = removeReader2(transp.fd)
            if rres.isErr():
              transp.state.incl(ReadError)
              transp.setReadError(rres.error())
          else:
            transp.buffer.commit(res)
            if transp.buffer.availSpace() == 0:
              transp.state.incl(ReadPaused)
              let rres = removeReader2(transp.fd)
              if rres.isErr():
                transp.state.incl(ReadError)
                transp.setReadError(rres.error())
          transp.completeReader()
          break

  proc newStreamSocketTransport(sock: AsyncFD, bufsize: int,
                                child: StreamTransport,
                                flags: set[TransportFlags]): StreamTransport =
    var transp: StreamTransport
    if not(isNil(child)):
      transp = child
    else:
      transp = StreamTransport(kind: TransportKind.Socket)

    transp.fd = sock
    transp.flags = flags
    let size = max(bufsize, DefaultStreamBufferSize)
    transp.buffer = BipBuffer.init(size)
    transp.state = {ReadPaused, WritePaused}
    transp.queue = initDeque[StreamVector]()
    transp.future = Future[void].Raising([]).init(
      "socket.stream.transport", {FutureFlag.OwnCancelSchedule})
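    # Keep the transport object alive while the event loop holds raw pointers
    # to it; the matching GC_unref is expected to happen when the transport is
    # cleaned up after close.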
    GC_ref(transp)
    transp

  proc newStreamPipeTransport(fd: AsyncFD, bufsize: int,
                              child: StreamTransport): StreamTransport =
    var transp: StreamTransport
    if not(isNil(child)):
      transp = child
    else:
      transp = StreamTransport(kind: TransportKind.Pipe)

    transp.fd = fd
    let size = max(bufsize, DefaultStreamBufferSize)
    transp.buffer = BipBuffer.init(size)
    transp.state = {ReadPaused, WritePaused}
    transp.queue = initDeque[StreamVector]()
    transp.future = Future[void].Raising([]).init(
      "pipe.stream.transport", {FutureFlag.OwnCancelSchedule})
    GC_ref(transp)
    transp

  proc connect*(address: TransportAddress,
                bufferSize = DefaultStreamBufferSize,
                child: StreamTransport = nil,
                localAddress = TransportAddress(),
                flags: set[SocketFlags] = {},
                dualstack = DualStackType.Auto,
               ): Future[StreamTransport] {.
      async: (raw: true, raises: [TransportError, CancelledError]).} =
    ## Open a new connection to the remote peer at ``address`` and create a
    ## new ``StreamTransport`` object for the established connection.
    ## ``bufferSize`` is the size of the transport's internal buffer.
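    ##
    ## A minimal usage sketch (the address and port are illustrative only):
    ##
    ## .. code-block:: nim
    ##   let transp = await connect(initTAddress("127.0.0.1:30080"))
    ##   discard await transp.write("ping")
    ##   await transp.closeWait()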
    var
      saddr: Sockaddr_storage
      slen: SockLen
    var retFuture = newFuture[StreamTransport]("stream.transport.connect")
    address.toSAddr(saddr, slen)
    let proto =
      if address.family == AddressFamily.Unix:
        Protocol.IPPROTO_IP
      else:
        Protocol.IPPROTO_TCP

    let sock = createAsyncSocket2(address.getDomain(), SockType.SOCK_STREAM,
                                  proto).valueOr:
      case error
      of oserrno.EMFILE:
        retFuture.fail(getTransportTooManyError())
      else:
        retFuture.fail(getTransportOsError(error))
      return retFuture

    if address.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
      if SocketFlags.TcpNoDelay in flags:
        setSockOpt2(sock, osdefs.IPPROTO_TCP, osdefs.TCP_NODELAY, 1).isOkOr:
          sock.closeSocket()
          retFuture.fail(getTransportOsError(error))
          return retFuture

      if SocketFlags.ReuseAddr in flags:
        setSockOpt2(sock, SOL_SOCKET, SO_REUSEADDR, 1).isOkOr:
          sock.closeSocket()
          retFuture.fail(getTransportOsError(error))
          return retFuture
      if SocketFlags.ReusePort in flags:
        setSockOpt2(sock, SOL_SOCKET, SO_REUSEPORT, 1).isOkOr:
          sock.closeSocket()
          retFuture.fail(getTransportOsError(error))
          return retFuture
      # IPV6_V6ONLY.
      setDualstack(sock, address.family, dualstack).isOkOr:
        sock.closeSocket()
        retFuture.fail(getTransportOsError(error))
        return retFuture

    let transportFlags =
      block:
        # Add `V4Mapped` flag when `::` address is used and dualstack is
        # set to enabled or auto.
        var res: set[TransportFlags]
        if (localAddress.family == AddressFamily.IPv6) and
           localAddress.isAnyLocal():
          if dualstack != DualStackType.Disabled:
            res.incl(TransportFlags.V4Mapped)
        res

    case localAddress.family
    of AddressFamily.IPv4, AddressFamily.IPv6, AddressFamily.Unix:
      var
        lsaddr: Sockaddr_storage
        lslen: SockLen
      toSAddr(localAddress, lsaddr, lslen)
      if bindSocket(SocketHandle(sock),
                    cast[ptr SockAddr](addr lsaddr), lslen) != 0:
        sock.closeSocket()
        retFuture.fail(getTransportOsError(osLastError()))
        return retFuture
    of AddressFamily.None:
      discard

    proc continuation(udata: pointer) =
      if not(retFuture.finished()):
        removeWriter2(sock).isOkOr:
          discard unregisterAndCloseFd(sock)
          retFuture.fail(getTransportOsError(error))
          return

        let err = sock.getSocketError2().valueOr:
          discard unregisterAndCloseFd(sock)
          retFuture.fail(getTransportOsError(error))
          return

        if err != 0:
          discard unregisterAndCloseFd(sock)
          retFuture.fail(getTransportOsError(OSErrorCode(err)))
          return

        let transp = newStreamSocketTransport(sock, bufferSize, child,
                                              transportFlags)
        # Start tracking transport
        trackCounter(StreamTransportTrackerName)
        retFuture.complete(transp)

    proc cancel(udata: pointer) =
      if not(retFuture.finished()):
        closeSocket(sock)

    while true:
      let res = osdefs.connect(SocketHandle(sock),
                               cast[ptr SockAddr](addr saddr), slen)
      if res == 0:
        let transp = newStreamSocketTransport(sock, bufferSize, child,
                                              transportFlags)
        # Start tracking transport
        trackCounter(StreamTransportTrackerName)
        retFuture.complete(transp)
        break
      else:
        let errorCode = osLastError()
        # If connect() is interrupted by a signal that is caught while blocked
        # waiting to establish a connection, connect() shall fail and set
        # errno to [EINTR], but the connection request shall not be aborted,
        # and the connection shall be established asynchronously.
        #
        # http://www.madore.org/~david/computers/connect-intr.html
        case errorCode
        of oserrno.EINPROGRESS, oserrno.EINTR:
          addWriter2(sock, continuation).isOkOr:
            discard unregisterAndCloseFd(sock)
            retFuture.fail(getTransportOsError(error))
            return retFuture
          retFuture.cancelCallback = cancel
          break
        else:
          discard unregisterAndCloseFd(sock)
          retFuture.fail(getTransportOsError(errorCode))
          break
    return retFuture

  proc acceptLoop(udata: pointer) =
    if isNil(udata):
      # TODO this is an if rather than an assert for historical reasons:
      # it should not happen unless there are race conditions - but if there
      # are race conditions, `transp` might be invalid even if it's not nil:
      # it could have been released
      return

    var
      saddr: Sockaddr_storage
      slen: SockLen
    let server = cast[StreamServer](udata)
    if server.status in {ServerStatus.Stopped, ServerStatus.Closed}:
      return

    let
      flags = {DescriptorFlag.CloseOnExec, DescriptorFlag.NonBlock}
      sres = acceptConn(cint(server.sock), cast[ptr SockAddr](addr saddr),
                        addr slen, flags)
    if sres.isOk():
      let sock = AsyncFD(sres.get())
      let rres = register2(sock)
      if rres.isOk():
        let ntransp =
          if not(isNil(server.init)):
            let transp = server.init(server, sock)
            newStreamSocketTransport(sock, server.bufferSize, transp,
                                     server.getTransportFlags())
          else:
            newStreamSocketTransport(sock, server.bufferSize, nil,
                                     server.getTransportFlags())
        trackCounter(StreamTransportTrackerName)
        asyncSpawn server.function(server, ntransp)
      else:
        # The client was accepted, so we are not going to raise an assertion,
        # but we do need to close its socket.
        discard closeFd(cint(sock))
    else:
      let errorCode = sres.error()
      if errorCode != oserrno.EAGAIN:
        # EAGAIN appears only when the server gets closed while the
        # acceptLoop() reader callback is already scheduled.
        raiseOsDefect(errorCode, "acceptLoop(): Unable to accept connection")

  proc resumeAccept(server: StreamServer): Result[void, OSErrorCode] =
    addReader2(server.sock, acceptLoop, cast[pointer](server))

  proc pauseAccept(server: StreamServer): Result[void, OSErrorCode] =
    removeReader2(server.sock)

  proc resumeRead(transp: StreamTransport): Result[void, OSErrorCode] =
    if ReadPaused in transp.state:
      ? addReader2(transp.fd, readStreamLoop, cast[pointer](transp))
      transp.state.excl(ReadPaused)
    ok()

  proc resumeWrite(transp: StreamTransport): Result[void, OSErrorCode] =
    if transp.queue.len() == 1:
      # writeStreamLoop keeps writing until the queue is empty - we should not
      # call resumeWrite under any condition other than when the first item is
      # added to the queue - if the flag is not set here, it means the socket
      # was not removed from write notifications at the right time, which
      # would mean an imbalance in registration and deregistration
      doAssert(WritePaused in transp.state)
      ? addWriter2(transp.fd, writeStreamLoop, cast[pointer](transp))
      transp.state.excl(WritePaused)
    ok()

  proc accept*(server: StreamServer): Future[StreamTransport] {.
      async: (raw: true, raises: [TransportUseClosedError,
        TransportTooManyError, TransportAbortedError, TransportOsError,
        CancelledError]).} =
    var retFuture = newFuture[StreamTransport]("stream.server.accept")

    doAssert(server.status != ServerStatus.Running,
             "accept() cannot be used if the server was started with start()")
    if server.status == ServerStatus.Closed:
      retFuture.fail(getServerUseClosedError())
      return retFuture

    proc continuation(udata: pointer) {.gcsafe.} =
      var
        saddr: Sockaddr_storage
        slen: SockLen

      if not(retFuture.finished()):
        if server.status in {ServerStatus.Stopped, ServerStatus.Closed}:
          retFuture.fail(getServerUseClosedError())
        else:
          let
            flags = {DescriptorFlag.CloseOnExec, DescriptorFlag.NonBlock}
            sres = acceptConn(cint(server.sock),
                              cast[ptr SockAddr](addr saddr),
                              addr slen, flags)
          if sres.isErr():
            let errorCode = sres.error()
            case errorCode
            of oserrno.EAGAIN:
              # This error appears only when the server gets closed while this
              # accept() continuation is already scheduled.
              retFuture.fail(getServerUseClosedError())
            of oserrno.EMFILE, oserrno.ENFILE, oserrno.ENOBUFS, oserrno.ENOMEM:
              retFuture.fail(getTransportTooManyError(errorCode))
            of oserrno.ECONNABORTED, oserrno.EPERM, oserrno.ETIMEDOUT:
              retFuture.fail(getConnectionAbortedError(errorCode))
            else:
              retFuture.fail(getTransportOsError(errorCode))
            # An error has already happened, so we ignore removeReader2()
            # errors.
            discard removeReader2(server.sock)
          else:
            let
              sock = AsyncFD(sres.get())
              rres = register2(sock)
            if rres.isOk():
              let res = removeReader2(server.sock)
              if res.isOk():
                let ntransp =
                  if not(isNil(server.init)):
                    let transp = server.init(server, sock)
                    newStreamSocketTransport(sock, server.bufferSize, transp,
                                             server.getTransportFlags())
                  else:
                    newStreamSocketTransport(sock, server.bufferSize, nil,
                                             server.getTransportFlags())
                # Start tracking transport
                trackCounter(StreamTransportTrackerName)
                retFuture.complete(ntransp)
              else:
                discard closeFd(cint(sock))
                let errorMsg = osErrorMsg(res.error())
                retFuture.fail(getConnectionAbortedError(errorMsg))
            else:
              # An error has already happened, so we ignore errors here.
              discard removeReader2(server.sock)
              discard closeFd(cint(sock))
              let errorMsg = osErrorMsg(rres.error())
              retFuture.fail(getConnectionAbortedError(errorMsg))
exception tracking (#166)
* exception tracking
This PR adds minimal exception tracking to chronos, moving the goalpost
one step further.
In particular, it becomes invalid to raise exceptions from `callSoon`
callbacks: this is critical for writing correct error handling because
there's no reasonable way that a user of chronos can possibly _reason_
about exceptions coming out of there: the event loop will be in an
indeterminite state when the loop is executing an _random_ callback.
As expected, there are several issues in the error handling of chronos:
in particular, it will end up in an inconsistent internal state whenever
the selector loop operations fail, because the internal state update
functions are not written in an exception-safe way. This PR turns this
into a Defect, which probably is not the optimal way of handling things
- expect more work to be done here.
Some API have no way of reporting back errors to callers - for example,
when something fails in the accept loop, there's not much it can do, and
no way to report it back to the user of the API - this has been fixed
with the new accept flow - the old one should be deprecated.
Finally, there is information loss in the API: in composite operations
like `poll` and `waitFor` there's no way to differentiate internal
errors from user-level errors originating from callbacks.
* store `CatchableError` in future
* annotate proc's with correct raises information
* `selectors2` to avoid non-CatchableError IOSelectorsException
* `$` should never raise
* remove unnecessary gcsafe annotations
* fix exceptions leaking out of timer waits
* fix some imports
* functions must signal raising the union of all exceptions across all
platforms to enable cross-platform code
* switch to unittest2
* add `selectors2` which supercedes the std library version and fixes
several exception handling issues in there
* fixes
* docs, platform-independent eh specifiers for some functions
* add feature flag for strict exception mode
also bump version to 3.0.0 - _most_ existing code should be compatible
with this version of exception handling but some things might need
fixing - callbacks, existing raises specifications etc.
* fix AsyncCheck for non-void T
2021-03-24 09:08:33 +00:00
|
|
|
|
|
|
|
proc cancellation(udata: pointer) =
|
2023-02-21 10:48:36 +00:00
|
|
|
if not(retFuture.finished()):
|
|
|
|
discard removeReader2(server.sock)
|
|
|
|
|
|
|
|
let res = addReader2(server.sock, continuation, nil)
|
|
|
|
if res.isErr():
|
|
|
|
retFuture.fail(getTransportOsError(res.error()))
|
|
|
|
else:
|
|
|
|
retFuture.cancelCallback = cancellation
|
2020-06-24 08:21:52 +00:00
|
|
|
return retFuture
|
|
|
|
|
2023-02-21 10:48:36 +00:00
|
|
|
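# Usage sketch for `accept()` (illustrative, not part of this module): the
# server below is created without a processing callback, so connections are
# pulled manually from inside an async proc.
#
#   proc serve() {.async.} =
#     let server = createStreamServer(initTAddress("127.0.0.1:0"))
#     while not server.closed():
#       let transp = await server.accept()
#       # ... communicate over `transp` ...
#       await transp.closeWait()
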
proc start2*(server: StreamServer): Result[void, OSErrorCode] =
  ## Starts ``server``.
  doAssert(not(isNil(server.function)), "You should not start the server " &
           "unless you have a processing callback configured!")
  if server.status == ServerStatus.Starting:
    ? server.resumeAccept()
    server.status = ServerStatus.Running
  ok()

proc stop2*(server: StreamServer): Result[void, OSErrorCode] =
  ## Stops ``server``.
  if server.status == ServerStatus.Running:
    if not(isNil(server.function)):
      ? server.pauseAccept()
    server.status = ServerStatus.Stopped
  elif server.status == ServerStatus.Starting:
    server.status = ServerStatus.Stopped
  ok()

proc start*(server: StreamServer) {.raises: [TransportOsError].} =
  ## Starts ``server``.
  let res = start2(server)
  if res.isErr(): raiseTransportOsError(res.error())

proc stop*(server: StreamServer) {.raises: [TransportOsError].} =
  ## Stops ``server``.
  let res = stop2(server)
  if res.isErr(): raiseTransportOsError(res.error())

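# Lifecycle sketch for `start()`/`stop()` (illustrative; `handleConn` is a
# hypothetical user callback of type StreamCallback2):
#
#   let server = createStreamServer(initTAddress("0.0.0.0:5000"), handleConn)
#   server.start()            # begin accepting; handleConn runs per client
#   # ... later ...
#   server.stop()             # pause accepting, the socket remains bound
#   server.close()            # release resources
#   waitFor server.join()     # wait until cleanup has completed
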
proc join*(server: StreamServer): Future[void] {.
    async: (raw: true, raises: [CancelledError]).} =
  ## Waits until ``server`` is closed.
  server.loopFuture.join()

proc connect*(address: TransportAddress,
              bufferSize = DefaultStreamBufferSize,
              child: StreamTransport = nil,
              flags: set[TransportFlags],
              localAddress = TransportAddress(),
              dualstack = DualStackType.Auto
             ): Future[StreamTransport] {.
    async: (raw: true, raises: [TransportError, CancelledError]).} =
  # Backward compatibility with TransportFlags
  var mappedFlags: set[SocketFlags]
  if TcpNoDelay in flags: mappedFlags.incl(SocketFlags.TcpNoDelay)
  connect(address, bufferSize, child, localAddress, mappedFlags, dualstack)

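# Connect sketch for this overload (illustrative address; runs inside an
# async proc):
#
#   let transp = await connect(initTAddress("127.0.0.1:8080"),
#                              flags = {TransportFlags.TcpNoDelay})
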
proc closed*(server: StreamServer): bool =
  ## Returns ``true`` if ``server`` is closed.
  server.status == ServerStatus.Closed

proc close*(server: StreamServer) =
  ## Release ``server`` resources.
  ##
  ## Please note that release of resources is not completed immediately; to be
  ## sure all resources are released, please use ``await server.join()``.
  proc continuation(udata: pointer) =
    # Stop tracking server
    if not(server.loopFuture.finished()):
      server.clean()

  if server.status in {ServerStatus.Starting, ServerStatus.Stopped}:
    server.status = ServerStatus.Closed
    when defined(windows):
      if server.local.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
        if server.apending:
          server.asock.closeSocket()
          server.apending = false
        server.sock.closeSocket(continuation)
      elif server.local.family in {AddressFamily.Unix}:
        if NoPipeFlash notin server.flags:
          discard flushFileBuffers(HANDLE(server.sock))
        discard disconnectNamedPipe(HANDLE(server.sock))
        server.sock.closeHandle(continuation)
    else:
      server.sock.closeSocket(continuation)

proc closeWait*(server: StreamServer): Future[void] {.async: (raises: []).} =
  ## Close server ``server`` and release all resources.
  if not server.closed():
    server.close()
    await noCancel(server.join())

proc getBacklogSize(backlog: int): cint =
  doAssert(backlog >= 0 and backlog <= high(int32))
  when defined(windows):
    # The maximum length of the queue of pending connections. If set to
    # SOMAXCONN, the underlying service provider responsible for
    # socket s will set the backlog to a maximum reasonable value. If set to
    # SOMAXCONN_HINT(N) (where N is a number), the backlog value will be N,
    # adjusted to be within the range (200, 65535). Note that SOMAXCONN_HINT
    # can be used to set the backlog to a larger value than possible with
    # SOMAXCONN.
    #
    # Microsoft SDK values are
    #   #define SOMAXCONN       0x7fffffff
    #   #define SOMAXCONN_HINT(b) (-(b))
    if backlog != high(int32):
      cint(-backlog)
    else:
      cint(backlog)
  else:
    cint(backlog)

proc createStreamServer*(host: TransportAddress,
                         cbproc: StreamCallback2,
                         flags: set[ServerFlags] = {},
                         sock: AsyncFD = asyncInvalidSocket,
                         backlog: int = DefaultBacklogSize,
                         bufferSize: int = DefaultStreamBufferSize,
                         child: StreamServer = nil,
                         init: TransportInitCallback = nil,
                         udata: pointer = nil,
                         dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  ## Create new TCP stream server.
  ##
  ## ``host`` - address to which server will be bound.
  ## ``flags`` - flags to apply to server socket.
  ## ``cbproc`` - callback function which will be called when a new client
  ## connection is established.
  ## ``sock`` - user-driven socket to use.
  ## ``backlog`` - number of outstanding connections in the socket's listen
  ## queue.
  ## ``bufferSize`` - size of internal buffer for transport.
  ## ``child`` - existing ``StreamServer`` object to initialize; can be
  ## used to initialize ``StreamServer`` inherited objects.
  ## ``udata`` - user-defined pointer.
  let (serverSocket, localAddress, serverFlags) =
    when defined(windows):
      # Windows
      if host.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
        var
          saddr: Sockaddr_storage
          slen: SockLen
          laddress: TransportAddress

        let sockres =
          if sock == asyncInvalidSocket:
            # TODO (cheatfate): `valueOr` generates weird compile error.
            let res = createAsyncSocket2(host.getDomain(), SockType.SOCK_STREAM,
                                         Protocol.IPPROTO_TCP)
            if res.isErr():
              raiseTransportOsError(res.error())
            res.get()
          else:
            setDescriptorBlocking(SocketHandle(sock), false).isOkOr:
              raiseTransportOsError(error)
            register2(sock).isOkOr:
              raiseTransportOsError(error)
            sock
        # SO_REUSEADDR
        if ServerFlags.ReuseAddr in flags:
          setSockOpt2(sockres, SOL_SOCKET, SO_REUSEADDR, 1).isOkOr:
            if sock == asyncInvalidSocket:
              discard closeFd(SocketHandle(sockres))
            raiseTransportOsError(error)
        # SO_REUSEPORT
        if ServerFlags.ReusePort in flags:
          setSockOpt2(sockres, SOL_SOCKET, SO_REUSEPORT, 1).isOkOr:
            if sock == asyncInvalidSocket:
              discard closeFd(SocketHandle(sockres))
            raiseTransportOsError(error)
        # TCP_NODELAY
        if ServerFlags.TcpNoDelay in flags:
          setSockOpt2(sockres, osdefs.IPPROTO_TCP,
                      osdefs.TCP_NODELAY, 1).isOkOr:
            if sock == asyncInvalidSocket:
              discard closeFd(SocketHandle(sockres))
            raiseTransportOsError(error)
        # IPV6_V6ONLY
        if sock == asyncInvalidSocket:
          setDualstack(sockres, host.family, dualstack).isOkOr:
            discard closeFd(SocketHandle(sockres))
            raiseTransportOsError(error)
        else:
          setDualstack(sockres, dualstack).isOkOr:
            raiseTransportOsError(error)

        let flagres =
          block:
            var res = flags
            if (host.family == AddressFamily.IPv6) and host.isAnyLocal():
              if dualstack in {DualStackType.Enabled, DualStackType.Auto}:
                res.incl(ServerFlags.V4Mapped)
            res

        host.toSAddr(saddr, slen)

        if bindSocket(SocketHandle(sockres),
                      cast[ptr SockAddr](addr saddr), slen) != 0:
          let err = osLastError()
          if sock == asyncInvalidSocket:
            discard closeFd(SocketHandle(sockres))
          raiseTransportOsError(err)

        slen = SockLen(sizeof(saddr))

        if getsockname(SocketHandle(sockres), cast[ptr SockAddr](addr saddr),
                       addr slen) != 0:
          let err = osLastError()
          if sock == asyncInvalidSocket:
            discard closeFd(SocketHandle(sockres))
          raiseTransportOsError(err)

        fromSAddr(addr saddr, slen, laddress)

        if listen(SocketHandle(sockres), getBacklogSize(backlog)) != 0:
          let err = osLastError()
          if sock == asyncInvalidSocket:
            discard closeFd(SocketHandle(sockres))
          raiseTransportOsError(err)

        (sockres, laddress, flagres)
      elif host.family == AddressFamily.Unix:
        (AsyncFD(0), host, flags)
      else:
        raiseAssert "Incorrect host address family"
    else:
      # Posix
      var
        saddr: Sockaddr_storage
        slen: SockLen
        laddress: TransportAddress

      let sockres =
        if sock == asyncInvalidSocket:
          let proto =
            if host.family == AddressFamily.Unix:
              Protocol.IPPROTO_IP
            else:
              Protocol.IPPROTO_TCP
          # TODO (cheatfate): `valueOr` generates weird compile error.
          let res = createAsyncSocket2(host.getDomain(), SockType.SOCK_STREAM,
                                       proto)
          if res.isErr():
            raiseTransportOsError(res.error())
          res.get()
        else:
          setDescriptorFlags(cint(sock), true, true).isOkOr:
            raiseTransportOsError(error)
          register2(sock).isOkOr:
            raiseTransportOsError(error)
          sock

      if host.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
        # SO_REUSEADDR
        if ServerFlags.ReuseAddr in flags:
          setSockOpt2(sockres, SOL_SOCKET, SO_REUSEADDR, 1).isOkOr:
            if sock == asyncInvalidSocket:
              discard unregisterAndCloseFd(sockres)
            raiseTransportOsError(error)
        # SO_REUSEPORT
        if ServerFlags.ReusePort in flags:
          setSockOpt2(sockres, SOL_SOCKET, SO_REUSEPORT, 1).isOkOr:
            if sock == asyncInvalidSocket:
              discard unregisterAndCloseFd(sockres)
            raiseTransportOsError(error)
        # TCP_NODELAY
        if ServerFlags.TcpNoDelay in flags:
          setSockOpt2(sockres, osdefs.IPPROTO_TCP,
                      osdefs.TCP_NODELAY, 1).isOkOr:
            if sock == asyncInvalidSocket:
              discard unregisterAndCloseFd(sockres)
            raiseTransportOsError(error)
        # IPV6_V6ONLY
        if sock == asyncInvalidSocket:
          setDualstack(sockres, host.family, dualstack).isOkOr:
            discard closeFd(SocketHandle(sockres))
            raiseTransportOsError(error)
        else:
          setDualstack(sockres, dualstack).isOkOr:
            raiseTransportOsError(error)

      elif host.family in {AddressFamily.Unix}:
        # We do not care about the result here, because if the file cannot be
        # removed, `bindSocket` will return EADDRINUSE.
        discard osdefs.unlink(cast[cstring](baseAddr host.address_un))

      let flagres =
        block:
          var res = flags
          if (host.family == AddressFamily.IPv6) and host.isAnyLocal():
            if dualstack != DualStackType.Disabled:
              res.incl(ServerFlags.V4Mapped)
          res

      host.toSAddr(saddr, slen)

      if osdefs.bindSocket(SocketHandle(sockres),
                           cast[ptr SockAddr](addr saddr), slen) != 0:
        let err = osLastError()
        if sock == asyncInvalidSocket:
          discard unregisterAndCloseFd(sockres)
        raiseTransportOsError(err)

      # Obtain real address
      slen = SockLen(sizeof(saddr))
      if getsockname(SocketHandle(sockres), cast[ptr SockAddr](addr saddr),
                     addr slen) != 0:
        let err = osLastError()
        if sock == asyncInvalidSocket:
          discard unregisterAndCloseFd(sockres)
        raiseTransportOsError(err)

      fromSAddr(addr saddr, slen, laddress)

      if listen(SocketHandle(sockres), getBacklogSize(backlog)) != 0:
        let err = osLastError()
        if sock == asyncInvalidSocket:
          discard unregisterAndCloseFd(sockres)
        raiseTransportOsError(err)

      (sockres, laddress, flagres)

  var sres = if not(isNil(child)): child else: StreamServer()

  sres.sock = serverSocket
  sres.flags = serverFlags
  sres.function = cbproc
  sres.init = init
  sres.bufferSize = bufferSize
  sres.status = Starting
  sres.loopFuture = asyncloop.init(
    Future[void].Raising([]), "stream.transport.server",
    {FutureFlag.OwnCancelSchedule})
  sres.udata = udata
  sres.dualstack = dualstack
  if localAddress.family != AddressFamily.None:
    sres.local = localAddress

  when defined(windows):
    var cb: CallbackFunc
    if host.family in {AddressFamily.IPv4, AddressFamily.IPv6}:
      cb = acceptLoop
    elif host.family == AddressFamily.Unix:
      cb = acceptPipeLoop

    if not(isNil(cbproc)):
      sres.aovl.data = CompletionData(cb: cb, udata: cast[pointer](sres))
    else:
      if host.family == AddressFamily.Unix:
        sres.sock =
          block:
            let res = sres.createAcceptPipe()
            if res.isErr():
              raiseTransportOsError(res.error())
            res.get()

    sres.domain = host.getDomain()
    sres.apending = false

  # Start tracking server
  trackCounter(StreamServerTrackerName)
  GC_ref(sres)
  sres

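# Construction sketch (illustrative; `echoHandler` is a hypothetical
# callback, not part of this module):
#
#   proc echoHandler(server: StreamServer,
#                    transp: StreamTransport) {.async: (raises: []).} =
#     try:
#       let line = await transp.readLine()
#       discard await transp.write(line & "\r\n")
#     except CatchableError:
#       discard
#     await transp.closeWait()
#
#   let server = createStreamServer(initTAddress("127.0.0.1:5000"),
#                                   echoHandler, {ServerFlags.ReuseAddr})
#   server.start()
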
proc createStreamServer*(host: TransportAddress,
                         cbproc: StreamCallback,
                         flags: set[ServerFlags] = {},
                         sock: AsyncFD = asyncInvalidSocket,
                         backlog: int = DefaultBacklogSize,
                         bufferSize: int = DefaultStreamBufferSize,
                         child: StreamServer = nil,
                         init: TransportInitCallback = nil,
                         udata: pointer = nil,
                         dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError],
    deprecated: "Callback must not raise exceptions, annotate with {.async: (raises: []).}".} =
  proc wrap(server: StreamServer,
            client: StreamTransport) {.async: (raises: []).} =
    try:
      await cbproc(server, client)
    except CatchableError as exc:
      raiseAssert "Unexpected exception from stream server cbproc: " & exc.msg

  createStreamServer(
    host, wrap, flags, sock, backlog, bufferSize, child, init, udata,
    dualstack)

proc createStreamServer*(host: TransportAddress,
                         flags: set[ServerFlags] = {},
                         sock: AsyncFD = asyncInvalidSocket,
                         backlog: int = DefaultBacklogSize,
                         bufferSize: int = DefaultStreamBufferSize,
                         child: StreamServer = nil,
                         init: TransportInitCallback = nil,
                         udata: pointer = nil,
                         dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  createStreamServer(host, StreamCallback2(nil), flags, sock, backlog,
                     bufferSize, child, init, cast[pointer](udata), dualstack)

proc createStreamServer*(port: Port,
                         host: Opt[IpAddress] = Opt.none(IpAddress),
                         flags: set[ServerFlags] = {},
                         sock: AsyncFD = asyncInvalidSocket,
                         backlog: int = DefaultBacklogSize,
                         bufferSize: int = DefaultStreamBufferSize,
                         child: StreamServer = nil,
                         init: TransportInitCallback = nil,
                         udata: pointer = nil,
                         dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  ## Create stream server which will be bound to:
  ## 1. IPv6 address `::`, if IPv6 is available
  ## 2. IPv4 address `0.0.0.0`, if IPv6 is not available.
  let hostname =
    if host.isSome():
      initTAddress(host.get(), port)
    else:
      getAutoAddress(port)
  createStreamServer(hostname, StreamCallback2(nil), flags, sock,
                     backlog, bufferSize, child, init, cast[pointer](udata),
                     dualstack)

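# Port-based construction sketch: binds `::` when IPv6 is available,
# otherwise `0.0.0.0` (port number is illustrative):
#
#   let server = createStreamServer(Port(8080))
#   # No callback was supplied, so connections are taken via `accept()`.
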
proc createStreamServer*(cbproc: StreamCallback2,
                         port: Port,
                         host: Opt[IpAddress] = Opt.none(IpAddress),
                         flags: set[ServerFlags] = {},
                         sock: AsyncFD = asyncInvalidSocket,
                         backlog: int = DefaultBacklogSize,
                         bufferSize: int = DefaultStreamBufferSize,
                         child: StreamServer = nil,
                         init: TransportInitCallback = nil,
                         udata: pointer = nil,
                         dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  ## Create stream server which will be bound to:
  ## 1. IPv6 address `::`, if IPv6 is available
  ## 2. IPv4 address `0.0.0.0`, if IPv6 is not available.
  let hostname =
    if host.isSome():
      initTAddress(host.get(), port)
    else:
      getAutoAddress(port)
  createStreamServer(hostname, cbproc, flags, sock, backlog,
                     bufferSize, child, init, cast[pointer](udata), dualstack)

proc createStreamServer*[T](host: TransportAddress,
                            cbproc: StreamCallback2,
                            flags: set[ServerFlags] = {},
                            udata: ref T,
                            sock: AsyncFD = asyncInvalidSocket,
                            backlog: int = DefaultBacklogSize,
                            bufferSize: int = DefaultStreamBufferSize,
                            child: StreamServer = nil,
                            init: TransportInitCallback = nil,
                            dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  var fflags = flags + {GCUserData}
  GC_ref(udata)
  createStreamServer(host, cbproc, fflags, sock, backlog, bufferSize,
                     child, init, cast[pointer](udata), dualstack)

proc createStreamServer*[T](host: TransportAddress,
                            cbproc: StreamCallback,
                            flags: set[ServerFlags] = {},
                            udata: ref T,
                            sock: AsyncFD = asyncInvalidSocket,
                            backlog: int = DefaultBacklogSize,
                            bufferSize: int = DefaultStreamBufferSize,
                            child: StreamServer = nil,
                            init: TransportInitCallback = nil,
                            dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError],
    deprecated: "Callback must not raise exceptions, annotate with {.async: (raises: []).}".} =
  var fflags = flags + {GCUserData}
  GC_ref(udata)
  createStreamServer(host, cbproc, fflags, sock, backlog, bufferSize,
                     child, init, cast[pointer](udata), dualstack)

proc createStreamServer*[T](host: TransportAddress,
                            flags: set[ServerFlags] = {},
                            udata: ref T,
                            sock: AsyncFD = asyncInvalidSocket,
                            backlog: int = DefaultBacklogSize,
                            bufferSize: int = DefaultStreamBufferSize,
                            child: StreamServer = nil,
                            init: TransportInitCallback = nil,
                            dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  var fflags = flags + {GCUserData}
  GC_ref(udata)
  createStreamServer(host, StreamCallback2(nil), fflags, sock, backlog,
                     bufferSize, child, init, cast[pointer](udata), dualstack)

proc createStreamServer*[T](cbproc: StreamCallback2,
                            port: Port,
                            host: Opt[IpAddress] = Opt.none(IpAddress),
                            flags: set[ServerFlags] = {},
                            udata: ref T,
                            sock: AsyncFD = asyncInvalidSocket,
                            backlog: int = DefaultBacklogSize,
                            bufferSize: int = DefaultStreamBufferSize,
                            child: StreamServer = nil,
                            init: TransportInitCallback = nil,
                            dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  ## Create stream server which will be bound to:
  ## 1. IPv6 address `::`, if IPv6 is available
  ## 2. IPv4 address `0.0.0.0`, if IPv6 is not available.
  let fflags = flags + {GCUserData}
  GC_ref(udata)
  let hostname =
    if host.isSome():
      initTAddress(host.get(), port)
    else:
      getAutoAddress(port)
  createStreamServer(hostname, cbproc, fflags, sock, backlog,
                     bufferSize, child, init, cast[pointer](udata), dualstack)

proc createStreamServer*[T](port: Port,
                            host: Opt[IpAddress] = Opt.none(IpAddress),
                            flags: set[ServerFlags] = {},
                            udata: ref T,
                            sock: AsyncFD = asyncInvalidSocket,
                            backlog: int = DefaultBacklogSize,
                            bufferSize: int = DefaultStreamBufferSize,
                            child: StreamServer = nil,
                            init: TransportInitCallback = nil,
                            dualstack = DualStackType.Auto): StreamServer {.
    raises: [TransportOsError].} =
  ## Create stream server which will be bound to:
  ## 1. IPv6 address `::`, if IPv6 is available
  ## 2. IPv4 address `0.0.0.0`, if IPv6 is not available.
  let fflags = flags + {GCUserData}
  GC_ref(udata)
  let hostname =
    if host.isSome():
      initTAddress(host.get(), port)
    else:
      getAutoAddress(port)
  createStreamServer(hostname, StreamCallback2(nil), fflags, sock,
                     backlog, bufferSize, child, init, cast[pointer](udata),
                     dualstack)

proc getUserData*[T](server: StreamServer): T {.inline.} =
  ## Obtain user data stored in ``server`` object.
  cast[T](server.udata)

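# Typed user-data sketch (illustrative; `Ctx` and `handler` are hypothetical):
#
#   type Ctx = ref object
#     connections: int
#
#   proc handler(server: StreamServer,
#                transp: StreamTransport) {.async: (raises: []).} =
#     let ctx = getUserData[Ctx](server)
#     inc ctx.connections
#     await transp.closeWait()
#
#   let server = createStreamServer(initTAddress("127.0.0.1:0"), handler,
#                                   udata = Ctx())
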
template fastWrite(transp: auto, pbytes: var ptr byte, rbytes: var int,
                   nbytes: int) =
  # On windows, the write could be initiated here if there is no other write
  # ongoing, but the queue is still needed due to the mechanics of iocp

  when not defined(windows) and not defined(nimdoc):
    if transp.queue.len == 0:
      while rbytes > 0:
        let res =
          case transp.kind
          of TransportKind.Socket:
            osdefs.send(SocketHandle(transp.fd), pbytes, rbytes,
                        MSG_NOSIGNAL)
          of TransportKind.Pipe:
            osdefs.write(cint(transp.fd), pbytes, rbytes)
          else:
            raiseAssert "Unsupported transport kind: " & $transp.kind
        if res > 0:
          pbytes = cast[ptr byte](cast[uint](pbytes) + cast[uint](res))
          rbytes -= res

          if rbytes == 0:
            retFuture.complete(nbytes)
            return retFuture
          # Not all bytes written - keep going
        else:
          let err = osLastError()
          case err
          of oserrno.EWOULDBLOCK:
            break # No bytes written, add to queue
          of oserrno.EINTR:
            continue
          else:
            if isConnResetError(err):
              transp.state.incl({WriteEof})
              retFuture.complete(0)
              return retFuture
            else:
              transp.state.incl({WriteError})
              let error = getTransportOsError(err)
              retFuture.fail(error)
              return retFuture

proc write*(transp: StreamTransport, pbytes: pointer,
            nbytes: int): Future[int] {.
    async: (raw: true, raises: [TransportError, CancelledError]).} =
  ## Write data from buffer ``pbytes`` with size ``nbytes`` using transport
  ## ``transp``.
  var retFuture = newFuture[int]("stream.transport.write(pointer)")
  transp.checkClosed(retFuture)
  transp.checkWriteEof(retFuture)

  var
    pbytes = cast[ptr byte](pbytes)
    rbytes = nbytes # Remaining bytes

  fastWrite(transp, pbytes, rbytes, nbytes)

  var vector = StreamVector(kind: DataBuffer, writer: retFuture,
                            buf: pbytes, buflen: rbytes, size: nbytes)
  transp.queue.addLast(vector)
  let wres = transp.resumeWrite()
  if wres.isErr():
    retFuture.fail(getTransportOsError(wres.error()))
  return retFuture

proc write*(transp: StreamTransport, msg: string,
            msglen = -1): Future[int] {.
    async: (raw: true, raises: [TransportError, CancelledError]).} =
  ## Write data from string ``msg`` using transport ``transp``.
  var retFuture = newFuture[int]("stream.transport.write(string)")
  transp.checkClosed(retFuture)
  transp.checkWriteEof(retFuture)
  let
    nbytes = if msglen <= 0: len(msg) else: msglen

  var
    pbytes = cast[ptr byte](baseAddr msg)
    rbytes = nbytes

  fastWrite(transp, pbytes, rbytes, nbytes)

  let
    written = nbytes - rbytes # In case fastWrite wrote some

  var localCopy = msg
  retFuture.addCallback(proc(_: pointer) = reset(localCopy))

  pbytes = cast[ptr byte](addr localCopy[written])

  var vector = StreamVector(kind: DataBuffer, writer: retFuture,
                            buf: pbytes, buflen: rbytes, size: nbytes)
  transp.queue.addLast(vector)
  let wres = transp.resumeWrite()
  if wres.isErr():
    retFuture.fail(getTransportOsError(wres.error()))
  return retFuture

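# Write sketch (inside an async proc; `transp` is a connected transport):
#
#   let sent = await transp.write("PING\r\n")
#   doAssert sent == 6  # the returned value is the number of bytes written
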
proc write*[T](transp: StreamTransport, msg: seq[T],
               msglen = -1): Future[int] {.
    async: (raw: true, raises: [TransportError, CancelledError]).} =
  ## Write sequence ``msg`` using transport ``transp``.
  var retFuture = newFuture[int]("stream.transport.write(seq)")
  transp.checkClosed(retFuture)
  transp.checkWriteEof(retFuture)

  let
    nbytes = if msglen <= 0: (len(msg) * sizeof(T)) else: (msglen * sizeof(T))

  var
    pbytes = cast[ptr byte](baseAddr msg)
    rbytes = nbytes

  fastWrite(transp, pbytes, rbytes, nbytes)

  let
    written = nbytes - rbytes # In case fastWrite wrote some

  var localCopy = msg
  retFuture.addCallback(proc(_: pointer) = reset(localCopy))

  pbytes = cast[ptr byte](addr localCopy[written])

  var vector = StreamVector(kind: DataBuffer, writer: retFuture,
                            buf: pbytes, buflen: rbytes, size: nbytes)
  transp.queue.addLast(vector)
  let wres = transp.resumeWrite()
  if wres.isErr():
    retFuture.fail(getTransportOsError(wres.error()))
  return retFuture

proc writeFile*(transp: StreamTransport, handle: int,
                offset: uint = 0, size: int = 0): Future[int] {.
    async: (raw: true, raises: [TransportError, CancelledError]).} =
  ## Write data from file descriptor ``handle`` to transport ``transp``.
  ##
  ## You can specify starting ``offset`` in opened file and number of bytes
  ## to transfer from file to transport via ``size``.
  var retFuture = newFuture[int]("stream.transport.writeFile")
  when defined(windows):
    if transp.kind != TransportKind.Socket:
      retFuture.fail(newException(
        TransportNoSupport, "writeFile() is not supported!"))
      return retFuture
  transp.checkClosed(retFuture)
  transp.checkWriteEof(retFuture)
  var vector = StreamVector(kind: DataFile, writer: retFuture,
                            buf: cast[pointer](size), offset: offset,
                            buflen: handle)
  transp.queue.addLast(vector)
  let wres = transp.resumeWrite()
  if wres.isErr():
    retFuture.fail(getTransportOsError(wres.error()))
  return retFuture

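# File-transfer sketch (POSIX; the file name and byte count are illustrative
# assumptions, and `open`/`close` come from std/posix):
#
#   let fd = posix.open("payload.bin", posix.O_RDONLY)
#   let sent = await transp.writeFile(int(fd), offset = 0, size = 65536)
#   discard posix.close(fd)
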
proc atEof*(transp: StreamTransport): bool {.inline.} =
  ## Returns ``true`` if ``transp`` is at EOF.
  (len(transp.buffer) == 0) and (ReadEof in transp.state) and
  (ReadPaused in transp.state)

template readLoop(name, body: untyped): untyped =
  # Read data until a predicate is satisfied - the body should return a tuple
  # signalling how many bytes have been processed and whether we're done
  # reading
  checkClosed(transp)
  checkPending(transp)
  while true:
    if ReadClosed in transp.state:
      raise newException(TransportUseClosedError,
                         "Attempt to read data from closed stream")
    if len(transp.buffer) == 0:
      # We are only going to raise an error if the transport buffer is empty.
      if ReadError in transp.state:
        raise transp.getError()

    let (consumed, done) = body
    transp.buffer.consume(consumed)
    if done:
      break

    if len(transp.buffer) == 0:
      checkPending(transp)
      let fut = ReaderFuture.init(name)
      transp.reader = fut
      let res = resumeRead(transp)
      if res.isErr():
        let errorCode = res.error()
        when defined(windows):
          # This assertion could be changed, because at this moment
          # resumeRead() cannot return any error.
          raiseOsDefect(errorCode, "readLoop(): Unable to resume reading")
        else:
          transp.completeReader()
          if errorCode == oserrno.ESRCH:
            # ESRCH 3 "No such process"
            # This error can happen on pipes only, when the process which
            # owns and communicates through this pipe (stdin, stdout, stderr)
            # is already dead. In such a case we need to send a notification
            # that this pipe is at EOF.
            transp.state.incl({ReadEof, ReadPaused})
          else:
            raiseTransportOsError(errorCode)
      else:
        await fut

proc readExactly*(transp: StreamTransport, pbytes: pointer,
                  nbytes: int) {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Read exactly ``nbytes`` bytes from transport ``transp`` and store it to
  ## ``pbytes``. ``pbytes`` must not be ``nil`` pointer and ``nbytes`` should
  ## be Natural.
  ##
  ## If ``nbytes == 0`` this operation will return immediately.
  ##
  ## If EOF is received and ``nbytes`` is not yet read, the procedure
  ## will raise ``TransportIncompleteError``, potentially with some bytes
  ## already written.
  doAssert(not(isNil(pbytes)), "pbytes must not be nil")
  doAssert(nbytes >= 0, "nbytes must be non-negative integer")

  if nbytes == 0:
    return

  var
    index = 0
    pbuffer = pbytes.toUnchecked()
  readLoop("stream.transport.readExactly"):
    if len(transp.buffer) == 0:
      if transp.atEof():
        raise newException(TransportIncompleteError, "Data incomplete!")
    var bytesRead = 0
    for (region, rsize) in transp.buffer.regions():
      let count = min(nbytes - index, rsize)
      bytesRead += count
      if count > 0:
        copyMem(addr pbuffer[index], region, count)
        index += count
      if index == nbytes:
        break
    (consumed: bytesRead, done: index == nbytes)

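# Fixed-length read sketch (inside an async proc):
#
#   var header: array[4, byte]
#   await transp.readExactly(addr header[0], len(header))
#   # raises TransportIncompleteError if EOF arrives before 4 bytes
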
proc readOnce*(transp: StreamTransport, pbytes: pointer,
               nbytes: int): Future[int] {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Perform one read operation on transport ``transp``.
  ##
  ## If the internal buffer is not empty, up to ``nbytes`` bytes will be
  ## transferred from the internal buffer, otherwise it will wait until some
  ## bytes are received.
  doAssert(not(isNil(pbytes)), "pbytes must not be nil")
  doAssert(nbytes > 0, "nbytes must be positive integer")

  var
    pbuffer = pbytes.toUnchecked()
    index = 0
  readLoop("stream.transport.readOnce"):
    if len(transp.buffer) == 0:
      (0, transp.atEof())
    else:
      for (region, rsize) in transp.buffer.regions():
        let size = min(rsize, nbytes - index)
        copyMem(addr pbuffer[index], region, size)
        index += size
        if index >= nbytes:
          break
      (index, true)
  index

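# One-shot read sketch: returns as soon as any data (up to `nbytes`) is
# available (inside an async proc):
#
#   var chunk: array[4096, byte]
#   let n = await transp.readOnce(addr chunk[0], len(chunk))
#   if n == 0:
#     discard # transport reached EOF
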
proc readUntil*(transp: StreamTransport, pbytes: pointer, nbytes: int,
                sep: seq[byte]): Future[int] {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Read data from the transport ``transp`` until separator ``sep`` is found.
  ##
  ## On success, the data and separator will be removed from the internal
  ## buffer (consumed). Returned data will include the separator at the end.
  ##
  ## If EOF is received, and `sep` was not found, procedure will raise
  ## ``TransportIncompleteError``.
  ##
  ## If ``nbytes`` bytes have been received and `sep` was not found, procedure
  ## will raise ``TransportLimitError``.
  ##
  ## Procedure returns actual number of bytes read.
  doAssert(not(isNil(pbytes)), "pbytes must not be nil")
  doAssert(len(sep) > 0, "separator must not be empty")
  doAssert(nbytes >= 0, "nbytes must be non-negative integer")

  if nbytes == 0:
    raise newException(TransportLimitError, "Limit reached!")

  var pbuffer = pbytes.toUnchecked()
  var state = 0
  var k = 0

  readLoop("stream.transport.readUntil"):
    if transp.atEof():
      raise newException(TransportIncompleteError, "Data incomplete!")

    var index = 0
    for ch in transp.buffer:
      if k >= nbytes:
        raise newException(TransportLimitError, "Limit reached!")

      inc(index)
      pbuffer[k] = ch
      inc(k)

      if sep[state] == ch:
        inc(state)
        if state == len(sep):
          break
      else:
        state = 0

    (index, state == len(sep))
  k

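# Separator read sketch (inside an async proc; the buffer size is
# illustrative):
#
#   var buf: array[1024, byte]
#   let n = await transp.readUntil(addr buf[0], len(buf),
#                                  sep = @[byte 0x0D, byte 0x0A])
#   # `buf[0 ..< n]` now ends with the CRLF separator
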
proc readLine*(transp: StreamTransport, limit = 0,
               sep = "\r\n"): Future[string] {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Read one line from transport ``transp``, where "line" is a sequence of
  ## bytes ending with ``sep`` (default is "\r\n").
  ##
  ## If EOF is received, and ``sep`` was not found, the method will return the
  ## partial read bytes.
  ##
  ## If the EOF was received and the internal buffer is empty, return an
  ## empty string.
  ##
  ## If ``limit`` is more than 0, then read is limited to ``limit`` bytes.
  let lim = if limit <= 0: -1 else: limit
  var state = 0
  var res: string

  readLoop("stream.transport.readLine"):
    if transp.atEof():
      (0, true)
    else:
      var index = 0
      for ch in transp.buffer:
        inc(index)

        if sep[state] == char(ch):
          inc(state)
          if state == len(sep):
            break
        else:
          if state != 0:
            if limit > 0:
              let missing = min(state, lim - len(res) - 1)
              res.add(sep[0 ..< missing])
            else:
              res.add(sep[0 ..< state])
            state = 0

          res.add(char(ch))
          if len(res) == lim:
            break

      (index, (state == len(sep)) or (lim == len(res)))
  res

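# Line read sketch (inside an async proc; the limit guards against
# unbounded buffering):
#
#   let line = await transp.readLine(limit = 8192)
#   if len(line) == 0 and transp.atEof():
#     discard # peer closed the connection without sending a line
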
proc read*(transp: StreamTransport): Future[seq[byte]] {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Read all bytes from transport ``transp``.
  ##
  ## This procedure allocates a ``seq[byte]`` buffer and returns it as the
  ## result.
  var res: seq[byte]
  readLoop("stream.transport.read"):
    if transp.atEof():
      (0, true)
    else:
      var bytesRead = 0
      for (region, rsize) in transp.buffer.regions():
        bytesRead += rsize
        res.add(region.toUnchecked().toOpenArray(0, rsize - 1))
      (bytesRead, false)
  res

proc read*(transp: StreamTransport, n: int): Future[seq[byte]] {.
|
|
|
|
async: (raises: [TransportError, CancelledError]).} =
|
2019-05-28 17:12:00 +00:00
|
|
|
## Read all bytes (n <= 0) or exactly `n` bytes from transport ``transp``.
|
2018-06-04 09:57:17 +00:00
|
|
|
##
|
2018-05-28 23:35:15 +00:00
|
|
|
## This procedure allocates buffer seq[byte] and return it as result.
|
2020-03-05 09:59:10 +00:00
|
|
|
if n <= 0:
|
2024-03-26 20:33:19 +00:00
|
|
|
await transp.read()
|
2020-03-05 09:59:10 +00:00
|
|
|
else:
|
2024-03-26 20:33:19 +00:00
|
|
|
var res: seq[byte]
|
2020-03-05 09:59:10 +00:00
|
|
|
readLoop("stream.transport.read"):
|
|
|
|
if transp.atEof():
|
|
|
|
(0, true)
|
2018-05-16 08:22:34 +00:00
|
|
|
else:
|
2024-04-17 23:08:19 +00:00
|
|
|
var bytesRead = 0
|
2024-03-26 20:33:19 +00:00
|
|
|
for (region, rsize) in transp.buffer.regions():
|
|
|
|
let count = min(rsize, n - len(res))
|
2024-04-17 23:08:19 +00:00
|
|
|
bytesRead += count
|
2024-03-26 20:33:19 +00:00
|
|
|
res.add(region.toUnchecked().toOpenArray(0, count - 1))
|
2024-04-17 23:08:19 +00:00
|
|
|
(bytesRead, len(res) == n)
|
2024-03-26 20:33:19 +00:00
|
|
|
res
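
# A minimal usage sketch for the fixed-length overload (illustrative only;
# the 4-byte big-endian length prefix is a hypothetical wire format):
#
#   proc readFrame(transp: StreamTransport): Future[seq[byte]] {.
#       async: (raises: [TransportError, CancelledError]).} =
#     let header = await transp.read(4)
#     if len(header) < 4:
#       # EOF arrived before a full header could be read.
#       raise newException(TransportIncompleteError, "Truncated frame header")
#     let size = (int(header[0]) shl 24) or (int(header[1]) shl 16) or
#                (int(header[2]) shl 8) or int(header[3])
#     await transp.read(size)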

proc consume*(transp: StreamTransport): Future[int] {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Consume all bytes from transport ``transp`` and discard them.
  ##
  ## Returns the number of bytes actually consumed and discarded.
  var res = 0
  readLoop("stream.transport.consume"):
    if transp.atEof():
      (0, true)
    else:
      let used = len(transp.buffer)
      res += used
      (used, false)
  res

proc consume*(transp: StreamTransport, n: int): Future[int] {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Consume all bytes (``n <= 0``) or ``n`` bytes from transport ``transp``
  ## and discard them.
  ##
  ## Returns the number of bytes actually consumed and discarded.
  if n <= 0:
    await transp.consume()
  else:
    var res = 0
    readLoop("stream.transport.consume"):
      if transp.atEof():
        (0, true)
      else:
        let
          used = len(transp.buffer)
          count = min(used, n - res)
        res += count
        (count, res == n)
    res
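
# A minimal usage sketch for ``consume`` (illustrative only): skip a payload
# the caller has decided not to process, detecting early disconnects.
#
#   proc skipBody(transp: StreamTransport, contentLength: int) {.
#       async: (raises: [TransportError, CancelledError]).} =
#     let dropped = await transp.consume(contentLength)
#     if dropped < contentLength:
#       # EOF arrived before the whole payload could be discarded.
#       raise newException(TransportIncompleteError, "Connection closed early")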

proc readMessage*(transp: StreamTransport,
                  predicate: ReadMessagePredicate) {.
    async: (raises: [TransportError, CancelledError]).} =
  ## Read bytes from transport ``transp`` until the ``predicate`` callback
  ## reports completion.
  ##
  ## The ``predicate`` callback should return a tuple ``(consumed, result)``,
  ## where ``consumed`` is the number of bytes processed and ``result`` is a
  ## completion flag (``true`` if readMessage() should stop reading data,
  ## ``false`` if readMessage() should continue to read data from the
  ## transport).
  ##
  ## The ``predicate`` callback must copy any data it needs out of the
  ## ``data`` array and return the number of bytes it consumed.
  ##
  ## The ``predicate`` callback will receive a zero-length openArray if the
  ## transport is at EOF.
  readLoop("stream.transport.readMessage"):
    if len(transp.buffer) == 0:
      if transp.atEof():
        predicate([])
      else:
        # Case when the transport's buffer is not yet filled with data.
        (0, false)
    else:
      var res: tuple[consumed: int, done: bool]
      for (region, rsize) in transp.buffer.regions():
        res = predicate(region.toUnchecked().toOpenArray(0, rsize - 1))
        break
      res
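
# A minimal usage sketch for ``readMessage`` (illustrative only; the
# NUL-terminated framing and all names here are hypothetical):
#
#   proc readNulTerminated(transp: StreamTransport): Future[seq[byte]] {.
#       async: (raises: [TransportError, CancelledError]).} =
#     var message: seq[byte]
#     proc untilNul(data: openArray[byte]): tuple[consumed: int, done: bool] =
#       if len(data) == 0:
#         return (0, true)        # EOF - stop reading.
#       for i, b in data:
#         if b == 0x00'u8:
#           return (i + 1, true)  # Consume up to and including the NUL.
#         message.add(b)
#       (len(data), false)        # Took everything; ask for more data.
#     await transp.readMessage(untilNul)
#     message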

proc join*(transp: StreamTransport): Future[void] {.
    async: (raw: true, raises: [CancelledError]).} =
  ## Wait until ``transp`` is closed.
  var retFuture = newFuture[void]("stream.transport.join")

  proc continuation(udata: pointer) {.gcsafe.} =
    retFuture.complete()

  proc cancel(udata: pointer) {.gcsafe.} =
    transp.future.removeCallback(continuation, cast[pointer](retFuture))

  if not(transp.future.finished()):
    transp.future.addCallback(continuation, cast[pointer](retFuture))
    retFuture.cancelCallback = cancel
  else:
    retFuture.complete()
  retFuture
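
# A minimal usage sketch for ``join`` (illustrative only): request the close
# and then wait until the transport's resources are actually released.
#
#   transp.close()
#   await transp.join()
#
# This is essentially what ``closeWait`` (below) does, with the addition of
# cancellation protection around the ``join`` call.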

proc closed*(transp: StreamTransport): bool {.inline.} =
  ## Returns ``true`` if the transport is in closed state.
  ({ReadClosed, WriteClosed} * transp.state != {})

proc finished*(transp: StreamTransport): bool {.inline.} =
  ## Returns ``true`` if the transport is in finished (EOF) state.
  ({ReadEof, WriteEof} * transp.state != {})

proc failed*(transp: StreamTransport): bool {.inline.} =
  ## Returns ``true`` if the transport is in error state.
  ({ReadError, WriteError} * transp.state != {})

proc running*(transp: StreamTransport): bool {.inline.} =
  ## Returns ``true`` if the transport is still pending, i.e. neither closed,
  ## at EOF, nor in error state.
  ({ReadClosed, ReadEof, ReadError,
    WriteClosed, WriteEof, WriteError} * transp.state == {})
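
# A minimal usage sketch for the state predicates (illustrative only;
# ``process`` stands in for an application-defined handler):
#
#   while transp.running():
#     let chunk = await transp.read(4096)
#     if len(chunk) == 0:
#       break                     # EOF and nothing buffered.
#     process(chunk)
#   if transp.failed():
#     echo "transport ended with an error"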

proc close*(transp: StreamTransport) =
  ## Closes and frees resources of transport ``transp``.
  ##
  ## Please note that the release of resources is not completed immediately;
  ## to be sure that all resources have been released, use
  ## ``await transp.join()``.
  proc continuation(udata: pointer) {.gcsafe.} =
    transp.clean()

  if {ReadClosed, WriteClosed} * transp.state == {}:
    transp.state.incl({WriteClosed, ReadClosed})
    when defined(windows):
      if transp.kind == TransportKind.Pipe:
        if WinServerPipe in transp.flags:
          if WinNoPipeFlash notin transp.flags:
            discard flushFileBuffers(HANDLE(transp.fd))
          discard disconnectNamedPipe(HANDLE(transp.fd))
        else:
          if WinNoPipeFlash notin transp.flags:
            discard flushFileBuffers(HANDLE(transp.fd))
        if ReadPaused in transp.state:
          # If readStreamLoop() is not running, we need to finish in the
          # continuation step.
          closeHandle(transp.fd, continuation)
        else:
          # If readStreamLoop() is running, it will be properly finished
          # inside of readStreamLoop().
          closeHandle(transp.fd)
      elif transp.kind == TransportKind.Socket:
        if ReadPaused in transp.state:
          # If readStreamLoop() is not running, we need to finish in the
          # continuation step.
          closeSocket(transp.fd, continuation)
        else:
          # If readStreamLoop() is running, it will be properly finished
          # inside of readStreamLoop().
          closeSocket(transp.fd)
    else:
      if transp.kind == TransportKind.Pipe:
        closeHandle(transp.fd, continuation)
      elif transp.kind == TransportKind.Socket:
        closeSocket(transp.fd, continuation)

proc closeWait*(transp: StreamTransport): Future[void] {.async: (raises: []).} =
  ## Closes and frees resources of transport ``transp``.
  if not transp.closed():
    transp.close()
    await noCancel(transp.join())
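
# A minimal usage sketch for ``closeWait`` (illustrative only; ``payload`` is
# hypothetical): guarantee cleanup even when the operation fails or is
# cancelled.
#
#   try:
#     await transp.write(payload)
#   finally:
#     await transp.closeWait()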

proc shutdownWait*(transp: StreamTransport): Future[void] {.
    async: (raw: true, raises: [TransportError, CancelledError]).} =
  ## Perform a graceful shutdown of the TCP connection backed by transport
  ## ``transp``.
  doAssert(transp.kind == TransportKind.Socket)
  let retFuture = newFuture[void]("stream.transport.shutdown")
  transp.checkClosed(retFuture)

  when defined(windows):
    let loop = getThreadDispatcher()
    proc continuation(udata: pointer) {.gcsafe.} =
      let ovl = cast[RefCustomOverlapped](udata)
      if not(retFuture.finished()):
        if ovl.data.errCode == OSErrorCode(-1):
          retFuture.complete()
        else:
          transp.state.excl({WriteEof})
          retFuture.fail(getTransportOsError(ovl.data.errCode))
      GC_unref(ovl)

    let povl = RefCustomOverlapped(data: CompletionData(cb: continuation))
    GC_ref(povl)

    let res = loop.disconnectEx(SocketHandle(transp.fd),
                                cast[POVERLAPPED](povl), 0'u32, 0'u32)
    if res == FALSE:
      let err = osLastError()
      case err
      of ERROR_IO_PENDING:
        transp.state.incl({WriteEof})
      else:
        GC_unref(povl)
        retFuture.fail(getTransportOsError(err))
    else:
      transp.state.incl({WriteEof})
      retFuture.complete()

    retFuture
  else:
    proc continuation(udata: pointer) {.gcsafe.} =
      if not(retFuture.finished()):
        retFuture.complete()

    let res = osdefs.shutdown(SocketHandle(transp.fd), SHUT_WR)
    if res < 0:
      let err = osLastError()
      case err
      of ENOTCONN:
        # The specified socket is not connected, which means the goal of the
        # shutdown has already been achieved.
        transp.state.incl({WriteEof})
        callSoon(continuation, nil)
      else:
        retFuture.fail(getTransportOsError(err))
    else:
      transp.state.incl({WriteEof})
      callSoon(continuation, nil)
    retFuture
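
# A minimal usage sketch for ``shutdownWait`` (illustrative only): half-close
# the write side, drain anything the peer still sends, then release the
# transport.
#
#   await transp.shutdownWait()         # Send FIN; reading stays possible.
#   let trailing = await transp.read()  # Drain until the peer closes too.
#   await transp.closeWait()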

proc fromPipe2*(fd: AsyncFD, child: StreamTransport = nil,
                bufferSize = DefaultStreamBufferSize
               ): Result[StreamTransport, OSErrorCode] =
  ## Create a new transport object using a pipe's file descriptor.
  ##
  ## ``bufferSize`` is the size of the internal buffer for the transport.
  ? register2(fd)
  var res = newStreamPipeTransport(fd, bufferSize, child)
  # Start tracking transport
  trackCounter(StreamTransportTrackerName)
  ok(res)
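
# A minimal usage sketch for ``fromPipe2`` (illustrative only; ``pipeFd``
# stands in for an already-opened, not yet registered pipe descriptor):
#
#   let transp = fromPipe2(pipeFd).valueOr:
#     raiseTransportOsError(error)
#   let data = await transp.read()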

proc fromPipe*(fd: AsyncFD, child: StreamTransport = nil,
               bufferSize = DefaultStreamBufferSize): StreamTransport {.
    raises: [TransportOsError].} =
  ## Create a new transport object using a pipe's file descriptor.
  ##
  ## ``bufferSize`` is the size of the internal buffer for the transport.
  let res = fromPipe2(fd, child, bufferSize)
  if res.isErr(): raiseTransportOsError(res.error())
  res.get()
|