# messaging_thread_pool

## Overview
`messaging_thread_pool` provides a set of traits and structs that allow the construction of a simple, typed thread pool.
It is useful when the type whose work needs to be distributed has complex state that is not `Send`/`Sync`.
If the state is `Send` and `Sync`, then it is probably better to use a more conventional thread pool such as `rayon`.
Instances of the type are distributed across the threads of the thread pool and are tied to their allocated thread for their entire lifetime.
Hence instances need to be neither `Send` nor `Sync` (although the messages used to communicate with them must be).
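For a concrete sense of the constraint, this small sketch (the `Document` type is hypothetical) shows the kind of state a conventional pool cannot accept, because `Rc` is not `Send`:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A type using Rc/RefCell for cheap, single-threaded shared mutability.
// Rc<T> is not Send, so Document is not Send either.
struct Document {
    shared_notes: Rc<RefCell<Vec<String>>>,
}

fn main() {
    let doc = Document {
        shared_notes: Rc::new(RefCell::new(Vec::new())),
    };

    // The line below would not compile: `Rc` cannot be sent between
    // threads safely, so `doc` cannot be moved into the spawned closure.
    // std::thread::spawn(move || drop(doc));

    doc.shared_notes.borrow_mut().push("fine on the owning thread".into());
}
```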
## Key Advantages

- **Ownership of Non-Send/Sync Data**: Unlike traditional thread pools (e.g., `rayon`), where closures and data often need to be `Send` and `Sync` to move between threads, `messaging_thread_pool` guarantees that a `PoolItem` stays on the thread where it was created. This allows it to own:
  - `Rc<T>` and `RefCell<T>` types.
  - Raw pointers or FFI resources that are thread-bound.
  - Large stack-allocated data structures that you don't want to move.
- **Stateful Long-Lived Objects (Actors)**: This library implements an actor-like model. Items have an identity (`id`) and persistent state. You can send multiple messages to the same item over time, and it will maintain its state between requests. This is distinct from data-parallelism libraries, which typically focus on stateless or shared-state parallel processing.
- **Sequential Consistency**: Messages sent to a specific `PoolItem` are processed sequentially in the order they are received. This eliminates race conditions within the item's state and simplifies reasoning about state transitions (e.g., ensuring "Initialize" happens before "Update").
- **Zero Contention & Lock-Free State**: Since only one thread ever accesses a specific `PoolItem`, there is no need for internal locking (`Mutex`/`RwLock`). You avoid the performance penalty of lock contention, even under heavy load. The sketch after this list shows the underlying message-loop pattern.
- **Data Locality**: By pinning an item to a specific thread, its data remains in the CPU cache associated with that thread's core. This "warm cache" effect can significantly improve performance for state-heavy objects compared to work-stealing pools, where tasks (and their data) migrate between cores.
- **Message-Passing Architecture**: Communication happens via typed request/response messages. This decouples the caller from the execution details and fits naturally with the actor model.
- **Fine-Grained Concurrency**: You can target specific items by their ID. The pool handles the routing, ensuring that messages for the same ID are processed by the correct thread.
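As referenced above, the zero-contention behaviour is essentially the classic single-owner message loop. Here is a minimal sketch of that pattern using only `std` (an illustration of the idea, not the library's actual internals):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Illustrative message type; the library wraps this idea in typed
// request/response pairs and per-item routing.
enum Msg {
    Increment { id: u64 },
    Get { id: u64, reply: mpsc::Sender<u64> },
}

fn main() {
    let (tx, rx) = mpsc::channel::<Msg>();

    // The worker owns its items outright. Because only this thread ever
    // touches them, no Mutex/RwLock is needed, and messages for a given
    // item are handled strictly in arrival order. The state could even be
    // a non-Send type, since it is created on the worker thread itself.
    let worker = thread::spawn(move || {
        let mut items: HashMap<u64, u64> = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Increment { id } => *items.entry(id).or_insert(0) += 1,
                Msg::Get { id, reply } => {
                    let _ = reply.send(*items.get(&id).unwrap_or(&0));
                }
            }
        }
    });

    tx.send(Msg::Increment { id: 1 }).unwrap();
    tx.send(Msg::Increment { id: 1 }).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Msg::Get { id: 1, reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 2);

    drop(tx); // closing the channel ends the worker's receive loop
    worker.join().unwrap();
}
```

Because the worker alone owns `items`, exclusivity and ordering come from the channel rather than from locks.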
The library infrastructure then allows the routing of messages to specific instances based on a key.
Any work required to respond to a message is executed on that instance's assigned thread-pool thread.
Response messages are then routed back to the caller via the infrastructure.
This provides simple call semantics, easy-to-reason-about lifetimes, and predictable pool behaviour.
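One way to picture that routing is a deterministic key-to-thread mapping; the modulo scheme below is purely illustrative and not necessarily the crate's exact policy:

```rust
// Hypothetical routing: a key deterministically selects a thread, so every
// message for that key lands on the same thread for the item's lifetime.
fn target_thread(key: u64, thread_count: usize) -> usize {
    (key as usize) % thread_count
}

fn main() {
    let threads = 4;
    // The same key always maps to the same worker thread...
    assert_eq!(target_thread(42, threads), target_thread(42, threads));
    // ...while different keys spread across the pool.
    assert_eq!(target_thread(42, threads), 2);
    assert_eq!(target_thread(43, threads), 3);
}
```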
The type needs to define an enum of message types and provide implementations of a few simple traits to enable it to be hosted within the thread pool.
The `#[pool_item]` macro simplifies this process significantly.
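As a rough sketch of the shape involved (hypothetical names, not the crate's actual trait surface), such an enum pairs each request variant with a response variant:

```rust
// Hypothetical message enum for a pool item: one request/response
// variant pair per operation that the item understands.
enum UserSessionMessage {
    AddActionRequest { id: u64, action: String },
    AddActionResponse { id: u64, action_count: usize },
    GetLogRequest { id: u64 },
    GetLogResponse { id: u64, log: Vec<String> },
}
```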
## Example: Shared State without Locks

This example demonstrates a key advantage of this library: using `Rc` and `RefCell` to share state between a parent object and a helper struct. In a traditional thread pool, this would require `Arc<Mutex<...>>`. (The session and message type names in the sketch below are illustrative.)
```rust
use std::cell::RefCell;
use std::rc::Rc;
use messaging_thread_pool::pool_item;
use messaging_thread_pool::*;

// A helper struct that needs access to the session's data.
// In a standard thread pool, this would likely need Arc<Mutex<Vec<String>>>.
// Here, we can use Rc<RefCell<...>> because UserSession never leaves its thread.
struct ActionLogger {
    log: Rc<RefCell<Vec<String>>>,
}

// The main PoolItem.
// The #[pool_item] plumbing and the request/response message types
// (AddSessionRequest, AddActionRequest, GetLogRequest, ...) are elided here.
struct UserSession {
    id: u64,
    action_count: usize,
    log: Rc<RefCell<Vec<String>>>,
    logger: ActionLogger,
}
```
With this infrastructure in place, the library-provided structs can then host instances of the pool item in a fixed-size thread pool.
```rust
// Create a thread pool with 2 threads
let thread_pool = ThreadPool::<UserSession>::new(2);

// Create a session with ID 1
thread_pool
    .send_and_receive(std::iter::once(AddSessionRequest(1)))
    .expect("thread pool to be available")
    .for_each(|response: AddResponse| assert!(response.success()));

// Send some actions
// Note: These are processed sequentially by the thread owning Session 1
let counts: Vec<usize> = thread_pool
    .send_and_receive(
        ["login", "view_profile", "logout"]
            .into_iter()
            .map(|action| AddActionRequest(1, action.to_string())),
    )
    .expect("thread pool to be available")
    .map(|response: AddActionResponse| response.action_count)
    .collect();
assert_eq!(counts, vec![1, 2, 3]);

// Verify the log
let log = thread_pool
    .send_and_receive(std::iter::once(GetLogRequest(1)))
    .expect("thread pool to be available")
    .next()
    .unwrap()
    .result;
assert_eq!(log.len(), 3);
assert_eq!(log[0], "login");
assert_eq!(log[2], "logout");
```
The original motivation for the library was to cope with hierarchies of long-lived, dependent objects, each of which required its own thread pool to avoid complex threading dependencies. All of the operations were CPU-bound.
It is important to note that unless the operations being performed are relatively long-running (>50ms), the cost of the messaging infrastructure starts to become significant and will eat into the benefits of having multiple threads.