Taskpools

API

The API spec follows https://github.com/nim-lang/RFCs/issues/347#task-parallelism-api
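A hedged sketch of that API in use, assuming the `taskpools` package is installed and follows the RFC's `spawn`/`sync` naming:

```nim
import taskpools

proc fib(n: int): int =
  # Naive recursive Fibonacci, just to have work to schedule.
  if n < 2: n
  else: fib(n - 1) + fib(n - 2)

proc main() =
  var tp = Taskpool.new(numThreads = 4)  # start 4 worker threads
  let fut = tp.spawn fib(20)             # schedule a task, get a Flowvar
  echo sync(fut)                         # block until the result is ready
  tp.syncAll()                           # wait for any remaining tasks
  tp.shutdown()                          # stop and join the workers

main()
```

`spawn` returns a `Flowvar` handle; `sync` blocks the caller until that one task's result is available, while `syncAll` drains the whole pool.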

Overview

This repository implements a lightweight, energy-efficient, easily auditable multithreaded taskpool.

This taskpool will be used in a highly security-sensitive blockchain application targeting resource-restricted devices, hence the desirable properties are:

  • Ease of auditing and maintenance.
    • Formally verified synchronization primitives are highly sought after.
    • Otherwise, primitives are implemented from papers or ported from proven codebases that can serve as references for auditors.
  • Resource efficiency: threads spin down to save power, and memory use is low.
  • Decent performance and scalability. The workloads to parallelize are cryptography-related and require at least 1 ms of runtime per thread, so only a simple scheduler is required.

Non-goals:

  • Supporting task priorities
  • Being distributed
  • Supporting GC-ed memory on Nim default GC (sequences and strings)
  • Having async-awaitable tasks

In particular, compared to Weave, the tradeoffs are:

  • Taskpools only provides spawn/sync (task parallelism).
    There is no parallel-for (data parallelism)
    or precise in/out dependencies (dataflow parallelism).
  • Weave can handle trillions of small tasks that require only 10 µs per task (lower load-balancing overhead).
  • Weave maintains an adaptive memory pool to reduce memory-allocation overhead; Taskpools allocates as needed (simpler scheduler, higher allocation overhead).
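Since only spawn/sync is provided, a parallel loop has to be chunked by hand. A minimal sketch, where `sumRange` and `parallelSum` are hypothetical helpers defined here, not library API:

```nim
import taskpools

proc sumRange(a, b: int): int =
  # Sum of integers in [a, b) -- the per-chunk work.
  for i in a ..< b:
    result += i

proc parallelSum(tp: Taskpool, n, chunks: int): int =
  # Emulate a parallel-for: one spawned task per chunk, then sync them all.
  var futs = newSeq[Flowvar[int]]()
  let step = n div chunks
  for c in 0 ..< chunks:
    let a = c * step
    let b = if c == chunks - 1: n else: a + step
    futs.add tp.spawn sumRange(a, b)
  for f in futs:
    result += sync(f)

var tp = Taskpool.new(numThreads = 4)
echo parallelSum(tp, 1_000_000, 8)  # same result as summing 0 ..< 1_000_000
tp.shutdown()
```

Because each chunk does at least ~1 ms of work, the per-task scheduling overhead stays negligible, which is exactly the workload profile this pool is designed for.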

License

Licensed and distributed under either of

  • MIT license: http://opensource.org/licenses/MIT
  • Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

at your option. This file may not be copied, modified, or distributed except according to those terms.
