Add IndexError crash test for Linux/MacOS.
Add Sentinel version of fix.
Add GetQueuedCompletionStatusEx() support for Windows, which allows capturing more than one event in a single `poll()` call.
* AsyncEventQueue (AsyncEventBus replacement) initial commit.
* Add closeWait() and remove assertion test.
* Fix "possible" memory leak.
* Add limits to AsyncEventQueue[T].
Add tests for AsyncEventQueue[T] limits.
Rebase AsyncSync errors to be children of AsyncError.
* Adapt AsyncEventQueue to garbage-collect events on emit() too.
Add test from review comment.
* Remove AsyncEventBus test suite.
* Remove unneeded [T] usage.
* Optimize memory usage in 1m test.
* Lower number of events in test from 1m to 100k.
* Deprecate AsyncEventBus.
Add some GC debugging for tests.
* Fix typo.
* One more attempt to fix crash.
* Add some echo debugging.
* More echo debugging.
* Add debug echoes closer to the crash site.
* Attempt to work around the crash location.
* More debugging echoes in exception handlers.
* Convert suspected test into async procedure.
* Make multiple consumers test async.
* Remove GC debugging.
* Disable added tests.
* Disable AsyncEventQueue tests to confirm that this is an issue.
* Enable all the tests.
* Tentative fix for the HTTP client connection state assertion failures
I've traced the problem to an HTTP connection being closed while there
are outstanding requests that still go through the motions (the crash
occurs when a request reaches its `finish` processing step, but it
was already put in a Closed state by the connection that owns it).
* Use a distinct error exception instead of a cancellation error.
Co-authored-by: cheatfate <eugene.kabanov@status.im>
When calling the HTTP `send` / `open` functions, `acquireConnection` is
called to obtain a connection in state `Ready`. In the next code block,
the connection state is advanced to `RequestHeadersSending`. However,
returning from chronos `async` procs yields control, similar to `await`.
This means that a connection may be added to the pool in `Ready` state,
and then a different `acquireConnection` call may obtain a second ref.
Introducing a new `Acquired` state ensures consistency in this case.
No tests were added because the issue is scheduler dependent; manual tests
were run by adding a `doAssert item.state != HttpClientConnectionState.Acquired`
between `if not(isNil(conn)):` and `return conn`. The assert eventually
got hit after several hours of repeated runs, confirming the edge case
that this fix addresses.
Not sure if it is by design that returning from an `async` proc yields.
Even if it's not, this should solve current HTTP issues in nimbus-eth2.
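A minimal sketch of the pattern the fix relies on, using hypothetical `Conn`, `acquire` and `demo` names rather than the real `acquireConnection` code: the connection is marked `Acquired` before the async proc returns, so even though returning yields control to the scheduler, a concurrent acquire cannot hand out the same connection.

```nim
# Hypothetical sketch, not the chronos HTTP client code: claim the connection
# *before* returning, because returning from an async proc hands control back
# to the scheduler just like `await` does.
import chronos

type
  ConnState = enum
    Ready, Acquired
  Conn = ref object
    state: ConnState

var pool = @[Conn(state: Ready)]

proc acquire(): Future[Conn] {.async.} =
  ## Returns a Ready connection, or nil if none is available.
  for c in pool:
    if c.state == Ready:
      c.state = Acquired   # claim it before the proc returns/yields
      return c

proc demo() {.async.} =
  let a = await acquire()
  let b = await acquire()
  # With the state flipped before returning, the second call can never hand
  # out the connection the first call already claimed.
  doAssert b.isNil or a != b

waitFor demo()
```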
The socket selector holds a `seq` of per-descriptor data. When a reader
is registered, a pointer to a seq item is stored - when the `seq` grows,
this pointer becomes dangling and causes crashes like
https://github.com/status-im/nimbus-eth2/issues/3521.
It turns out that there already exist two mechanisms for passing user
data around - this PR simply removes one of them, saving on memory usage
and removing the need to store pointers to the `seq` data that become
dangling on resize.
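A plain-Nim sketch (not the selector code itself) of why a stored pointer into a `seq` is hazardous:

```nim
# When a seq grows past its capacity it may reallocate its backing buffer,
# so any raw pointer taken into it earlier can end up dangling.
var data = newSeq[int](1)
let p = addr data[0]          # pointer into the current backing buffer
for i in 0 ..< 1_000_000:
  data.add(i)                 # growth can move the buffer
# `p` may now reference freed memory; dereferencing it is undefined behavior,
# which is the class of crash described above.
```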
`nim doc --project --index:on chronos.nim` was failing with the following error:
/path/to/nim-chronos/chronos/transports/common.nim(519, 21) Error: '*' expected
...which in turn caused errors in all files that import this one.
Now the entire docs can be built successfully, with only a few warnings.
Fixes #186.
Fix for the following bug:
In the case of multiple accept headers with the same preference,
`preferredContentType` used to select the first match within the content types
provided by the application. For example, if the user specifies the accept headers
`application/octet-stream, application/json` and the application provides
`application/json, application/octet-stream`, then `application/octet-stream`
would be returned, which is a bug.
Now the procedure returns the most suitable match according
to the application's preferences.
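A conceptual sketch of the fixed tie-breaking rule, using a hypothetical `pickContentType` helper rather than the actual `preferredContentType` signature:

```nim
# When the client's accepted types share the same preference, the winner is
# the first matching type in the application's own preference order.
proc pickContentType(clientAccept, appProvides: seq[string]): string =
  for appType in appProvides:        # application order breaks the tie
    if appType in clientAccept:
      return appType

doAssert pickContentType(
  @["application/octet-stream", "application/json"],   # equal q-values
  @["application/json", "application/octet-stream"]
) == "application/json"
```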
When `write` is called on a `StreamTransport`, the current sequence of
operations is:
* copy data to queue
* register for "write" event notification
* return unfinished future to `write` caller
* wait for "write" notification (in `poll`)
* perform one `send`
* wait for notification again if there's more data to write
* complete the future
In this PR, we introduce a fast path for writing:
* If the queue is empty, try to send as much data as possible
* If all data is sent, return completed future without `poll` round
* If there's more data to send than can be sent in one go, add the
rest to the queue
* If the queue is not empty, enqueue as above
* When notified that write is possible, keep writing until OS buffer is
full before waiting for event again
The fast path provides significant performance benefits when there are
many small writes, such as when sending gossip to many peers, by
avoiding the poll loop and data copy on each send (see the sketch below).
Also fixes an issue where the socket would not be removed from the
writer set if there were pending writes on close.
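A simplified sketch of the fast-path decision, with a hypothetical `trySend` standing in for the non-blocking OS send; this is not the actual `StreamTransport` code, and the real implementation keeps writing on each "write ready" event until the OS buffer is full:

```nim
type Transport = object
  queue: seq[byte]             # bytes still waiting for a "write ready" event

proc trySend(t: var Transport, data: openArray[byte]): int =
  # Stand-in for a non-blocking send(): pretend the OS accepts half the bytes.
  data.len div 2

proc write(t: var Transport, data: seq[byte]) =
  if t.queue.len == 0:
    # Fast path: push as much as the OS accepts right now.
    let sent = t.trySend(data)
    if sent == data.len:
      return                       # everything sent, no poll() round needed
    t.queue.add data[sent .. ^1]   # queue the remainder for the event loop
  else:
    t.queue.add data               # writes already pending: just enqueue
```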
* Add more accept() call error handlers.
Fix an issue where the HTTP server silently stops accepting new connections.
Remove an unneeded macOS syscall that disables SIGPIPE on the socket; we already mask this signal in the process.
* Use `if` instead of `case` because the constants are actually variables (see the sketch below).
* Fix typos.
* Do not use `case` for POSIX constants.
* Fix Linux compilation error.
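A sketch of why `case` was rejected here: Nim `case` branch values must be compile-time constants, while several POSIX error codes are exposed by the standard library as imported runtime variables. The names below are hypothetical stand-ins:

```nim
var EAGAIN_LIKE = 35     # stands in for a POSIX "constant" declared as a var
let err = 35

# Compiles: a plain runtime comparison.
if err == EAGAIN_LIKE:
  echo "accept() would block, retry later"

# The equivalent `case err` with `of EAGAIN_LIKE:` does not compile, because
# `of` branch values must be evaluable at compile time.
```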
* Change Future identifier type from `int` to `uint64`.
* Fix compilation error and tests.
* Add an integer overflow test; to avoid confusion with future Nim versions, we expect that unsigned integers do not raise any exceptions when overflow happens (see the example below).
* Switch from `uint64` to `uint` type.
* Add `uint` overflow tests.
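For reference, unsigned arithmetic in Nim wraps around instead of raising a defect, which is the behavior the `uint` identifier counter relies on:

```nim
var id = high(uint)
id = id + 1          # wraps to 0; no OverflowDefect is raised for unsigned ints
doAssert id == 0'u
```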