macro_rules! spawn_workers {
    ($shared:expr, { $($name:ident: $func:expr),* }) => { ... };
}
Macro for simplified multi-threaded setup with a WorkerManager.
This macro spawns multiple threads and returns a WorkerManager instance
that allows you to control individual workers: pause, resume, stop, and monitor them.
§Syntax
spawn_workers!(shared_data, { name: closure, ... })
§Arguments
- shared_data - An EnhancedThreadShare<T> instance to share between workers
- { name: closure, ... } - Named closures for each worker thread
§Returns
A WorkerManager instance that provides methods to control workers:
- add_worker(name, handle) - Add a new worker programmatically
- pause_worker(name) - Mark a worker for pause
- resume_worker(name) - Resume a paused worker
- remove_worker(name) - Remove a worker from tracking
- get_worker_names() - Get a list of all worker names
- active_workers() - Get the count of active workers
- join_all() - Wait for all workers to complete
§Example
use thread_share::{enhanced_share, spawn_workers};
let data = enhanced_share!(vec![1, 2, 3]);
// Spawn workers and get manager
let manager = spawn_workers!(data, {
    sorter: |data| {
        data.update(|v| v.sort());
    },
    validator: |data| {
        assert!(data.get().is_sorted());
    }
});
// Control workers
println!("Workers: {:?}", manager.get_worker_names());
println!("Active: {}", manager.active_workers());
// Wait for completion
manager.join_all().expect("Workers failed");
§Worker Management
The WorkerManager allows fine-grained control over individual workers:
use thread_share::{enhanced_share, spawn_workers};
let data = enhanced_share!(vec![1, 2, 3]);
let manager = spawn_workers!(data, {
    sorter: |data| { /* work */ },
    validator: |data| { /* work */ }
});
// Pause a specific worker
let _ = manager.pause_worker("sorter");
// Resume a worker
let _ = manager.resume_worker("sorter");
// Add a new worker programmatically
let handle = std::thread::spawn(|| { /* work */ });
let _ = manager.add_worker("new_worker", handle);
// Remove from tracking
let _ = manager.remove_worker("sorter");
§Requirements
- The shared data must be an EnhancedThreadShare<T> instance
- Each closure must implement FnOnce(ThreadShare<T>) + Send + 'static
- The type T must implement Send + Sync + 'static
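For illustration, the following sketch shows a type that satisfies these bounds. The Counter struct and its field are hypothetical and not part of the crate; the example only uses the update and join_all calls shown above.
use thread_share::{enhanced_share, spawn_workers};
// Hypothetical plain-data type used only for this sketch. It holds no references
// or non-Send handles, so it is automatically Send + Sync + 'static.
#[derive(Clone)]
struct Counter {
    hits: u64,
}
let data = enhanced_share!(Counter { hits: 0 });
// The closure receives the shared handle as its argument and captures nothing
// else, so it satisfies FnOnce(ThreadShare<Counter>) + Send + 'static.
let manager = spawn_workers!(data, {
    incrementer: |data| {
        data.update(|c| c.hits += 1);
    }
});
manager.join_all().expect("Workers failed");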
§Performance
- Thread Spawning: Minimal overhead over standard thread::spawn
- Worker Management: Constant-time operations for most management functions
- Memory Usage: Small overhead for worker tracking structures
- Scalability: Efficient for up to hundreds of workers
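As a rough illustration of scaling beyond the names written into the macro invocation, the sketch below registers additional workers programmatically with add_worker. The loop body and worker names are illustrative, and it assumes add_worker accepts a &str name, as in the "new_worker" example above.
use thread_share::{enhanced_share, spawn_workers};
let data = enhanced_share!(0u64);
// Spawn one worker through the macro, then register more handles by hand.
let manager = spawn_workers!(data, {
    seed: |data| {
        data.update(|n| *n += 1);
    }
});
// Assumption: add_worker takes a &str name, matching the example above.
for i in 0..100 {
    let handle = std::thread::spawn(move || {
        // Placeholder work for worker number `i`.
        let _ = i;
    });
    let _ = manager.add_worker(&format!("extra_{i}"), handle);
}
manager.join_all().expect("Workers failed");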