add some more docs - design notes

This commit is contained in:
Jaremy Creechley 2023-09-11 17:22:35 -07:00
parent db25ded557
commit a40c1b23bb
4 changed files with 47 additions and 4 deletions


@@ -20,6 +20,38 @@ export threadresults
push: {.upraises: [].}
## Design Notes
## ============
## This is the threaded backend for `threadproxyds.nim`. It requires
## a `TResult[T]` to already be allocated, and uses it to "return"
## the data. The `taskpools` worker uses `TResult[T]` to signal
## Chronos that the associated future is ready. Then the future on the
## `threadproxyds` frontend can read the results from `TResult[T]`.
##
## `TResult[T]` handles the shared memory aspect so each threaded
## task here can rely on having the memory until it finishes its
## work. Even if the future exits early, the thread workers won't
## need to worry about using freed memory.
##
## The `FlowVar[T]` in `taskpools` isn't really suitable because
## we want to use Chronos's `ThreadSignalPtr` notification mechanism.
## Likewise the signaling mechanism built into `taskpools` doesn't fit,
## since we need to notify Chronos when our work is done.
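##
## A minimal self-contained sketch of that pattern (assumed usage of the
## chronos and taskpools APIs; `Slot`, `work`, and `demo` are hypothetical
## names, not this module's actual `TResult[T]` plumbing): a shared slot,
## a worker that fills it and fires a `ThreadSignalPtr`, and an async
## frontend that waits on the signal and then reads the slot.
##
## .. code-block:: nim
##   import chronos, chronos/threadsync
##   import taskpools, results
##
##   type Slot = object
##     signal: ThreadSignalPtr
##     value: int                            # stands in for the "returned" data
##
##   proc work(slot: ptr Slot) =
##     # runs on a taskpools worker thread
##     slot.value = 42                       # write the result into shared memory
##     discard slot.signal.fireSync()        # wake the Chronos side
##
##   proc demo() {.async.} =
##     let slot = createShared(Slot)         # shared memory slot, like TResult[T]
##     slot.signal = ThreadSignalPtr.new().expect("signal")
##     let tp = Taskpool.new(2)
##     tp.spawn work(slot)
##     await slot.signal.wait()              # completes once the worker fires
##     echo slot.value
##     tp.shutdown()
##     discard slot.signal.close()
##     freeShared(slot)
##
##   waitFor demo()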
##
##
## Potential Issues
## ================
## One outstanding issue with this setup and the ThreadSignalPtr pool
## is the case where the `threadproxyds` frontend calls `tresult.release()`
## early, e.g. in a `myFuture.cancel()` scenario. The task here would
## still fire `tresult[].signal.fireAsync()`. If another `threadproxyds`
## request had already been handed that same ThreadSignalPtr, it could
## receive the spurious signal while its own `TResult` is still empty.
## It shouldn't corrupt memory, but that `threadproxyds` TResult would
## return "empty".
##
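## A toy demonstration of that hazard (assumed chronos API usage, not code
## from this repo): a ThreadSignalPtr that was fired by its previous owner
## immediately wakes whoever waits on it next, even though nothing was
## written for that new waiter.
##
## .. code-block:: nim
##   import chronos, chronos/threadsync
##   import results
##
##   proc demo() {.async.} =
##     let sig = ThreadSignalPtr.new().expect("signal")
##     # pretend the signal was released back to the pool by a cancelled
##     # request while the old task still holds it and fires it:
##     discard sig.fireSync()
##     # the next request that is handed this signal wakes up spuriously:
##     await sig.wait()
##     echo "woke up, but this request's result slot is still empty"
##     discard sig.close()
##
##   waitFor demo()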
##
type
ThreadDatastore* = object
tp*: Taskpool


@@ -30,9 +30,13 @@ type
## memory allocated until all references to it are gone.
##
## Important:
## On `refc` the "internal" destructors for ThreadResult[T]
## are *not* called, effectively limiting this to 1 depth
## of destructors. Hence the `threadSafeType` marker below.
##
## Edit: not sure this is quite accurate, but some care
## needs to be taken to verify that the destructor
## works with the specific type.
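##
## A tiny sketch of the kind of check meant here (hypothetical `Payload`
## type, not from this repo): give the payload a `=destroy` hook that frees
## its shared buffer, then confirm the hook actually runs under the memory
## manager you compile with.
##
## .. code-block:: nim
##   var destroyed = 0                       # test marker
##
##   type Payload = object
##     data: ptr UncheckedArray[byte]
##     len: int
##
##   proc `=destroy`(p: var Payload) =
##     if p.data != nil:
##       deallocShared(p.data)
##       inc destroyed
##
##   proc check() =
##     var p = Payload(
##       data: cast[ptr UncheckedArray[byte]](allocShared0(16)), len: 16)
##     doAssert p.len == 16                  # hook should run at scope exit
##
##   check()
##   doAssert destroyed == 1                 # holds under arc/orc; verify refc explicitly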
##
## Since ThreadResult is a plain object, its lifetime can be
## tied to that of an async proc. In this case it could be


@@ -39,8 +39,16 @@ proc getThreadSignal*(): Future[ThreadSignalPtr] {.async, raises: [].} =
## process's IO descriptor limit, which results in bad
## and unpredictable failure modes.
##
## This could be put onto its own thread with its own set of
## ThreadSignalPtrs to become a true "resource pool".
## For now the sleepAsync approach should prove whether this setup
## is useful or not before going to that effort.
##
## TLDR: if all ThreadSignalPtrs are used up, this will
## repeatedly call `sleepAsync`, deferring whatever request
## is pending until more ThreadSignalPtrs become available. This
## design isn't particularly fair, but it should let us handle
## periods of overload with lots of requests in flight.
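##
## Roughly, the loop below has this shape (a sketch only; `signalPoolFree`
## is a hypothetical name for the pool's free list, `SignalPoolRetries`
## appears in the code below, and the 10 ms delay is illustrative):
##
## .. code-block:: nim
##   var cnt = SignalPoolRetries
##   while cnt > 0:
##     dec cnt
##     if signalPoolFree.len > 0:
##       return signalPoolFree.pop()         # hand out an available signal
##     await sleepAsync(10.milliseconds)     # overloaded: back off and retry
##   # once the retries are exhausted the caller gets an error instead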
##
{.cast(gcsafe).}:
var cnt = SignalPoolRetries


@@ -16,7 +16,6 @@ import ./querycommontests
# import pretty
suite "Test Basic ThreadProxyDatastore":
var
sds: ThreadProxyDatastore