pub struct WorkerPool { /* private fields */ }
A pool of Node.js workers.
Wraps an inner struct inside Arc<Mutex<T>> so that its methods can be invoked from a spawned thread. This is important so that indefinitely blocking methods such as get_available_workers can be offloaded.
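The wrapping described above can be sketched with plain std types. Inner, its busy_workers field, and the numbers below are illustrative stand-ins, not the crate's actual private struct:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for the pool's private inner state.
struct Inner {
    busy_workers: usize,
}

impl Inner {
    // A potentially blocking method; calling it from a spawned
    // thread keeps the caller responsive.
    fn get_available_workers(&self) -> usize {
        4 - self.busy_workers
    }
}

fn available_workers() -> usize {
    let inner = Arc::new(Mutex::new(Inner { busy_workers: 1 }));

    // Clone the Arc so the spawned thread owns a handle to the same state.
    let state = Arc::clone(&inner);
    let handle = thread::spawn(move || {
        // Lock the mutex inside the thread to call the inner method.
        state.lock().unwrap().get_available_workers()
    });
    handle.join().unwrap()
}

fn main() {
    println!("{}", available_workers()); // prints 3
}
```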
Implementations
impl WorkerPool
pub fn setup(worker_path: &str, max_workers: usize) -> Self
Create a new worker pool with the maximum number of workers that can be spawned for the duration of the program.
use node_workers::{WorkerPool};
let nbr_max_workers = 4;
let mut pool = WorkerPool::setup("worker.js", nbr_max_workers);
pub fn set_binary(&mut self, binary: &str)
Configure the binary used to run JS workers. This can be useful to configure node, or to run JS via another runtime.
use node_workers::{EmptyPayload, WorkerPool};
let mut pool = WorkerPool::setup("examples/worker.ts", 4);
pool.set_binary("node -r esbuild-register");
pool.perform::<(), _>("ping", EmptyPayload::bulk(1))?;
pub fn with_debug(&mut self, debug: bool)
Enable or disable logging
pub fn run_worker<P: AsPayload>(
    &mut self,
    cmd: &str,
    payload: P,
) -> WorkerThread
Run a single worker in a thread. This method returns the created thread, not the result of the worker. Use this if you need more control over the pool.
use node_workers::{WorkerPool};
let mut pool = WorkerPool::setup("examples/worker", 2);
for n in 1..=4 {
    pool.run_worker("fib", n * 10);
}
println!("not blocking");
The returned thread optionally holds the serialized result from the worker. It can be deserialized using serde_json to get a properly typed result; this is done under the hood for you.
use node_workers::{WorkerPool};
let mut pool = WorkerPool::setup("examples/worker", 2);
let thread = pool.run_worker("fib2", 40u32);
let result = thread.get_result::<u32>()?;
println!("run_worker result: {:#?}", result);
pub fn perform<T: DeserializeOwned, P: AsPayload>(
    &mut self,
    cmd: &str,
    payloads: Vec<P>,
) -> Result<Vec<Option<T>>>
Dispatch a task among available workers with a set of payloads.
This mobilizes a worker for each payload: as soon as a worker is free, it is immediately assigned a new task until all payloads have been processed.
Contrary to run_worker, this method is blocking and directly returns the results from all workers.
use node_workers::{WorkerPool};
let mut pool = WorkerPool::setup("examples/worker", 2);
pool.with_debug(true);
let payloads = vec![10, 20, 30, 40];
let result = pool.perform::<u64, _>("fib2", payloads).unwrap();
println!("result: {:#?}", result);
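The scheduling behavior described above can be sketched with std threads and channels alone; the shared queue, the doubling "task", and the function name dispatch are illustrative assumptions, not the crate's actual implementation:

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch: a fixed number of workers pull payloads from a shared
// queue as soon as they are free, until the queue is drained.
fn dispatch(payloads: Vec<u64>, max_workers: usize) -> Vec<u64> {
    let n = payloads.len();
    // Index each payload so results can be returned in payload order.
    let queue = Arc::new(Mutex::new(
        payloads.into_iter().enumerate().collect::<Vec<_>>(),
    ));
    let (tx, rx) = mpsc::channel();

    let handles: Vec<_> = (0..max_workers)
        .map(|_| {
            let queue = Arc::clone(&queue);
            let tx = tx.clone();
            thread::spawn(move || loop {
                // Take the next payload, or stop when the queue is empty.
                let job = queue.lock().unwrap().pop();
                match job {
                    Some((i, p)) => tx.send((i, p * 2)).unwrap(), // stand-in "task"
                    None => break,
                }
            })
        })
        .collect();
    drop(tx); // close the channel once all workers hold their own sender

    let mut results = vec![0u64; n];
    for (i, r) in rx {
        results[i] = r;
    }
    for h in handles {
        h.join().unwrap();
    }
    results
}

fn main() {
    // Results come back in payload order, mirroring perform's Vec<Option<T>>.
    println!("{:?}", dispatch(vec![10, 20, 30, 40], 2)); // prints [20, 40, 60, 80]
}
```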
Errors
Each worker is run in a thread, and perform() will return an error variant if one of them panics.
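The panic-to-error behavior can be illustrated with std threads alone; the String error type and run_task name here are illustrative, not the crate's actual error type:

```rust
use std::thread;

// Sketch: joining a worker thread and surfacing a panic as an Err,
// analogous to how perform() reports a panicked worker thread.
fn run_task(should_panic: bool) -> Result<u64, String> {
    let handle = thread::spawn(move || {
        if should_panic {
            panic!("worker crashed");
        }
        42u64
    });
    // JoinHandle::join returns Err when the thread panicked.
    handle.join().map_err(|_| "a worker panicked".to_string())
}

fn main() {
    assert_eq!(run_task(false), Ok(42));
    assert!(run_task(true).is_err());
}
```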
pub fn warmup(&self, nbr_workers: usize) -> JoinHandle<()>
Boot a maximum of n workers, making them ready to take on a task right away.
use node_workers::{WorkerPool};
let mut pool = WorkerPool::setup("examples/worker", 2);
let handle = pool.warmup(2);
//... some intensive task on the main thread
handle.join().expect("Couldn't warmup workers");
//... dispatch tasks to the warmed-up workers