Zero-Pool: Consistent High-Performance Thread Pool
When nanoseconds matter and overhead is the enemy.
This is an experimental thread pool implementation focused on exploring lock-free FIFO MPMC queue techniques. Consider this a performance playground rather than a production-ready library.
Key Features:
- Zero locks - lock-free MPMC queue
- Zero queue limit - unbounded task queue
- Zero channels - no std/crossbeam channel overhead
- Zero virtual dispatch - function-pointer dispatch avoids vtable lookups
- Zero core spinning - fully event-based waiting
- Zero result-transport cost - tasks write directly to caller-provided memory
- Zero per-worker queues - a single global queue structure gives perfect workload balancing
- Zero external dependencies - standard library only, on stable Rust
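To illustrate the "zero virtual dispatch" point: calling a task through a plain `fn` pointer is a direct indirect call, while a boxed closure goes through a vtable. A minimal std-only sketch of the distinction (this is not Zero-Pool's internal code):

```rust
// Std-only sketch: fn-pointer dispatch vs. trait-object dispatch.
// Illustrates the idea only; not Zero-Pool's internals.
struct Params {
    input: u64,
    result: *mut u64,
}

fn double(p: &Params) {
    // Write directly into caller-provided memory.
    unsafe { *p.result = p.input * 2 };
}

fn main() {
    let mut out = 0u64;
    let p = Params { input: 21, result: &mut out };

    // Function pointer: a direct call target, no vtable lookup.
    let task: fn(&Params) = double;
    task(&p);
    assert_eq!(out, 42);

    // Trait object: the same call goes through a vtable indirection.
    let boxed: Box<dyn Fn(&Params)> = Box::new(double);
    boxed(&p);
    assert_eq!(out, 42);
}
```

A `fn` pointer is also a plain word-sized value, which makes it easy to store in a queue entry.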
Using a result-via-parameters pattern, workers write results into caller-provided memory, eliminating the cost of transporting results between threads. The single global queue structure ensures optimal load balancing without the complexity of work-stealing or load-redistribution algorithms.
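The pattern can be sketched with plain `std::thread` (a conceptual sketch, not Zero-Pool's implementation): the caller owns the result slot and the worker writes into it directly, so no channel ever carries the result back.

```rust
use std::thread;

// Std-only sketch of the result-via-parameters pattern (not Zero-Pool's code):
// the caller owns the result slot and the worker writes into it directly.
fn main() {
    let mut result = 0u64;
    // Scoped threads guarantee the borrow outlives the worker - the same
    // invariant Zero-Pool asks the caller to uphold manually with raw pointers.
    thread::scope(|s| {
        let slot = &mut result;
        s.spawn(move || {
            *slot = 6 * 7; // write directly to caller-provided memory
        });
    });
    assert_eq!(result, 42);
    println!("result = {result}");
}
```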
Because the library uses raw pointers, you must ensure parameter structs (including any pointers they contain) remain valid until task completion, and that your task functions are thread-safe.
This approach allows complete freedom to optimise multi-threaded workloads any way you want.
Notes
- `TaskFuture` is easily clonable, but `wait()`/`wait_timeout()` must be called from the thread that submitted the task; `is_complete()` is safe to call from any thread.
- Zero-Pool supports both explicitly creating new thread pools (`ZeroPool::new`, `ZeroPool::with_workers`) and using the global instance (`zero_pool::global_pool`).
- Task functions take a single parameter (e.g. `&MyTaskParams`), and the parameter name can be any valid identifier.
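The "zero core spinning" claim means waiters block on OS-level events rather than busy-polling. A std-only sketch of that style of completion wait, using a `Mutex`/`Condvar` pair (illustrative only; not Zero-Pool's actual `wait()` internals):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Std-only sketch of event-based completion waiting: the waiter sleeps on a
// condvar instead of spinning on a flag. Not Zero-Pool's actual internals.
fn main() {
    let done = Arc::new((Mutex::new(false), Condvar::new()));
    let done2 = Arc::clone(&done);

    thread::spawn(move || {
        let (lock, cvar) = &*done2;
        *lock.lock().unwrap() = true;
        cvar.notify_one(); // wake the waiter - no busy loop required
    });

    let (lock, cvar) = &*done;
    let mut finished = lock.lock().unwrap();
    while !*finished {
        finished = cvar.wait(finished).unwrap(); // sleeps until notified
    }
    println!("task complete");
}
```

The `while` loop guards against spurious wakeups, which `Condvar::wait` permits.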
Benchmarks (AMD 5900X, Linux 6.18)
```text
test bench_heavy_compute_rayon          ... bench: 4,906,749.25 ns/iter
test bench_heavy_compute_zeropool       ... bench: 4,691,815.45 ns/iter
test bench_indexed_computation_rayon    ... bench:    31,011.32 ns/iter
test bench_indexed_computation_zeropool ... bench:    32,298.93 ns/iter
test bench_individual_tasks_rayon_empty ... bench:   428,244.14 ns/iter
test bench_individual_tasks_zeropool_empty ... bench: 295,761.15 ns/iter
test bench_task_overhead_rayon          ... bench:    29,924.82 ns/iter
test bench_task_overhead_zeropool       ... bench:    33,054.00 ns/iter
```
Example Usage
Submitting a Single Task
```rust
use zero_pool::ZeroPool;

// `CalculationParams` and the `calculate` task function are assumed to be
// defined elsewhere; the field names shown here are illustrative.
let pool = ZeroPool::new();
let mut result = 0u64;
let task = CalculationParams { input: 21, result: &mut result };
let future = pool.submit_task(calculate, &task);
future.wait();
println!("result: {}", result);
```
Submitting Uniform Batches
Submits multiple tasks of the same type to the thread pool.
```rust
use zero_pool::ZeroPool;

// `IndexedParams` and `compute_indexed` are assumed to be defined elsewhere;
// each task writes into its own slot of `results`.
let pool = ZeroPool::new();
let mut results = vec![0u64; 100];
let tasks: Vec<IndexedParams> = results.iter_mut().enumerate()
    .map(|(i, r)| IndexedParams { index: i, result: r })
    .collect();
let future = pool.submit_batch(compute_indexed, &tasks);
future.wait();
println!("{:?}", &results[..4]);
```
Submitting Multiple Independent Tasks
You can submit individual tasks and uniform batches in parallel:
```rust
use zero_pool::ZeroPool;

// `ComputeParams`/`compute` and `IndexedParams`/`compute_indexed` are assumed
// to be defined elsewhere; field names and signatures are illustrative.
let pool = ZeroPool::new();

// Individual task - separate memory location
let mut single_result = 0u64;
let single_task_params = ComputeParams { input: 7, result: &mut single_result };

// Uniform batch - separate memory from above
let mut batch_results = vec![0u64; 50];
let batch_task_params: Vec<IndexedParams> = batch_results.iter_mut().enumerate()
    .map(|(i, r)| IndexedParams { index: i, result: r })
    .collect();

// Submit the individual task and the batch
let future1 = pool.submit_task(compute, &single_task_params);
let future2 = pool.submit_batch(compute_indexed, &batch_task_params);

// Wait on them in any order; completion order is not guaranteed
future1.wait();
future2.wait();

println!("single: {}", single_result);
println!("batch: {:?}", &batch_results[..4]);
```
Using the Global Pool
If you prefer to share a single pool across your entire application, call the global accessor. The pool is created on first use and lives for the duration of the process.
```rust
use zero_pool::global_pool;

// `ExampleParams` and `example_task` are assumed to be defined elsewhere.
let pool = global_pool();
let mut result = 0u64;
let params = ExampleParams { input: 1, result: &mut result };
pool.submit_task(example_task, &params).wait();
```