Taskpools

API

The API spec follows https://github.com/nim-lang/RFCs/issues/347#task-parallelism-api
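As a rough illustration of that spec, a minimal spawn/sync workflow might look like the following. This is a hedged sketch based on the RFC's task-parallelism API; the exact names (`Taskpool.new`, `spawn`, `sync`, `shutdown`, `numThreads`) are assumptions, so check the RFC and this library's exported symbols before relying on them.

```nim
# Hedged sketch of the RFC #347 task-parallelism API.
# Taskpool.new, spawn, sync and shutdown are assumed names here.
import taskpools

proc fib(n: int): int =
  # A small CPU-bound function to run as a task.
  if n < 2: n
  else: fib(n - 1) + fib(n - 2)

proc main() =
  var tp = Taskpool.new(numThreads = 4)  # start 4 worker threads
  let fut = tp.spawn fib(20)             # schedule a task, get a Flowvar
  echo sync(fut)                         # block until the result is ready
  tp.shutdown()                          # join and free the worker threads

main()
```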

Overview

This library implements a lightweight, energy-efficient, easily auditable multithreaded taskpool.

This taskpool will be used in a highly security-sensitive blockchain application targeted at resource-restricted devices, hence the desirable properties are:

  • Ease of auditing and maintenance.
    • Formally verified synchronization primitives are highly sought after.
    • Otherwise, primitives are implemented from papers or ported from proven codebases that can serve as references for auditors.
  • Resource-efficient. Threads spin down to save power, and memory use stays low.
  • Decent performance and scalability. The workloads to parallelize are cryptography-related and require at least 1 ms of runtime per thread, which means that only a simple scheduler is required.

Non-goals:

  • Supporting task priorities
  • Being distributed
  • Supporting GC-ed memory on Nim default GC (sequences and strings)
  • Having async-awaitable tasks

In particular compared to Weave, here are the tradeoffs:

  • Taskpools only provide spawn/sync (task parallelism).
    There is no parallel for (data parallelism)
    or precise in/out dependencies (dataflow parallelism).
  • Weave can handle trillions of small tasks that require only 10 µs per task (load-balancing overhead).
  • Weave maintains an adaptive memory pool to reduce memory-allocation overhead, while Taskpools allocates as needed (scheduler overhead).
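Since Taskpools offers no parallel-for construct, data parallelism has to be expressed by hand: split the range into chunks, spawn one task per chunk, and sync each result. A hedged sketch under the same assumed API as above (`Taskpool.new`, `spawn`, `sync`, `shutdown` are assumptions, not confirmed names):

```nim
# Hedged sketch: manual data parallelism with only spawn/sync,
# using the assumed API names Taskpool.new, spawn, sync, shutdown.
import taskpools

proc sumRange(a, b: int): int =
  # Sum the integers in [a, b).
  for i in a ..< b:
    result += i

proc main() =
  var tp = Taskpool.new(numThreads = 4)
  # Split the range [0, 1000) into two chunks and spawn one task each.
  let lo = tp.spawn sumRange(0, 500)
  let hi = tp.spawn sumRange(500, 1000)
  # sync blocks until each chunk's partial sum is available.
  echo sync(lo) + sync(hi)
  tp.shutdown()

main()
```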

License

Licensed and distributed under either of

  • MIT license (LICENSE-MIT)
  • Apache License, Version 2.0 (LICENSE-APACHEv2)

at your option. This file may not be copied, modified, or distributed except according to those terms.