Jacek Sieka db8f81cd63
perf: flatten merkle tree
A classic encoding of a merkle tree stores its layers consecutively
in memory, breadth-first. This encoding has several advantages:

* Good performance for accessing successive nodes, such as when
constructing the tree or serializing it
* Significantly lower memory usage - avoids the per-node allocation
overhead which otherwise more than doubles the memory usage for
"regular" 32-byte hashes
* Less memory management - a single allocation can reserve memory
for the whole tree, meaning there are fewer allocations to keep
track of
* Simplified buffer lifetimes - with all memory allocated up-front,
there's no need for cross-thread memory management or transfers
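The layout above can be sketched in a few lines. This is an
illustrative Python example, not the project's implementation: the
whole tree lives in one flat list, with the root at index 0, each
node's children at `2*i+1` and `2*i+2`, and the leaves occupying the
final layer, so one allocation covers everything.

```python
import hashlib

def hash_pair(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def build_flat_tree(leaves: list[bytes]) -> list[bytes]:
    """Build a merkle tree stored breadth-first in a single flat list.

    Layout for 4 leaves: [root, n1, n2, leaf0, leaf1, leaf2, leaf3].
    Node i has children at 2*i+1 and 2*i+2; the parent of i is (i-1)//2.
    """
    n = len(leaves)
    assert n > 0 and (n & (n - 1)) == 0, "power-of-two leaf count for simplicity"
    tree = [b""] * (2 * n - 1)
    tree[n - 1:] = leaves           # leaves occupy the last layer
    for i in range(n - 2, -1, -1):  # fill parents from the bottom up
        tree[i] = hash_pair(tree[2 * i + 1], tree[2 * i + 2])
    return tree
```

Because parents are filled in reverse index order, construction walks
the buffer sequentially, which is what gives the cache-friendly access
pattern for building and serializing the tree.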

While we're here, we can clean up a few other things in the
implementation:

* Move async implementation to `merkletree` so that it doesn't have to
be repeated
* Factor tree construction into preparation and computation - the latter
is the part offloaded onto a different thread
* Simplify task posting - `threadpools` already creates a "task" from
the worker function call
* Deprecate several high-overhead accessors that presumably are only
needed in tests
2025-12-17 13:52:44 +01:00