Struct threads_pool::ThreadPool

pub struct ThreadPool { /* fields omitted */ }

The standalone thread pool, which gives users more control over the pool and over where the hosted pool shall live.

Examples

extern crate threads_pool;

use threads_pool as pool;
use std::thread;
use std::time::Duration;

// lazy-create the pool
let mut pool = pool::ThreadPool::build(4);

// get ready to submit parallel jobs; first of all, activate the pool. Note that this step is
// optional if the caller will only use the `exec` API to submit jobs.
pool.activate()
    .execute(|| {
        // this closure could cause a panic if the pool has not been activated yet.
        println!("Executing jobs ... ");
    });

// do parallel stuff.
for id in 0..10 {
    // even `id` jobs always get prioritized
    let priority = id % 2 == 0;

    // The `exec` API can set whether a job shall be prioritized for execution or not.
    pool.exec(move || {
        thread::sleep(Duration::from_secs(1));
        println!("thread {} has slept for around 1 second ... ", id);
    }, priority);
}

Implementations

impl ThreadPool[src]

pub fn new(size: usize) -> ThreadPool[src]

Create a ThreadPool with the default configuration.

pub fn new_with_config(size: usize, config: Config) -> ThreadPool[src]

Create a ThreadPool with the supplied configuration.

pub fn build(size: usize) -> ThreadPool[src]

Create the ThreadPool with the default pool configuration. The pool is created lazily: when ready to use it, invoke the activate API before submitting jobs for execution. Calling the exec API will also activate the pool automatically; however, calling the immutable alternative execute on a pool that has not been activated will always lead to an error.

pub fn build_with_config(size: usize, config: Config) -> ThreadPool[src]

Create the ThreadPool with the provided pool configuration. The pool is created lazily: when ready to use it, invoke the activate API before submitting jobs for execution. Calling the exec API will also activate the pool automatically; however, calling the immutable alternative execute on a pool that has not been activated will always lead to an error.

pub fn activate(&mut self) -> &mut Self[src]

If the pool is lazily created, the user is responsible for activating it before submitting jobs for execution. The exec API will initialize the pool on first use, since it takes self mutably and hence is able to do so. However, failing to explicitly activate the pool before calling other APIs could cause undefined behavior.

Examples

extern crate threads_pool;
use std::thread;
use std::time::Duration;
use threads_pool as pool;

// lazy-create the pool
let mut pool = pool::ThreadPool::build(4);

// work on other stuff ...
thread::sleep(Duration::from_secs(4));

// get ready to submit parallel jobs; first of all, activate the pool. Note that this step is
// optional if the caller will only use the `exec` API to submit jobs.
pool.activate()
    .execute(|| {
        // this closure could cause panic if the pool has not been activated yet.
        println!("Lazy created pool now accepting new jobs...");
    });

// do parallel stuff.
for id in 0..10 {
    // API `exec` can also activate the pool automatically if it's lazy-created.
    pool.exec(move || {
        thread::sleep(Duration::from_secs(1));
        println!("thread {} has woken up after 1 second ... ", id);
    }, true);
}

pub fn set_exec_timeout(&mut self, timeout: Option<Duration>)[src]

Set the timeout period for a job to stay in the queue. If timeout is defined as some duration, new jobs will be kept in the queue for at least that amount of time when all workers of the pool are busy. If it's set to None and all workers are busy when a job arrives, we will either block the caller when the pool's non_blocking setting is turned on, or return a timeout error immediately when the setting is turned off.
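
A minimal sketch (not from the upstream docs) of configuring the queue timeout, assuming a pool created active via new:

extern crate threads_pool;

use std::time::Duration;
use threads_pool as pool;

let mut pool = pool::ThreadPool::new(4);

// keep queued jobs around for up to 500 ms when all workers are busy ...
pool.set_exec_timeout(Some(Duration::from_millis(500)));

// ... or turn the queue timeout off again.
pool.set_exec_timeout(None);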

pub fn toggle_blocking(&mut self, non_blocking: bool)[src]

Toggle whether the execution APIs should block the caller when all workers are busy and the thread pool can't be expanded any further for any reason.

pub fn toggle_auto_scale(&mut self, auto_scale: bool)[src]

Toggle whether temporary workers can be added when the ThreadPool is under pressure. The temporary workers will retire after they have been idle for a period of time.
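
For illustration, a short sketch (not from the upstream docs) that flips both toggles on a freshly created pool:

extern crate threads_pool;

use threads_pool as pool;

let mut pool = pool::ThreadPool::new(8);

// don't block the execution APIs when every worker is busy.
pool.toggle_blocking(true);

// allow temporary workers to be added when the pool falls behind;
// they retire after idling for a while.
pool.toggle_auto_scale(true);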

pub fn exec<F: FnOnce() + Send + 'static>(
    &mut self,
    f: F,
    prioritized: bool
) -> Result<(), ExecutionError>
[src]

exec dispatches a closure to a free thread for execution. If no threads are free at the moment and the pool settings allow adding new workers under pressure, temporary workers will be created and added to the pool to execute the accumulated pending jobs; otherwise, this API behaves the same way as the alternative execute: it blocks the caller until a worker becomes available or the job times out in the queue, in which case the job is dropped and the caller needs to submit it for execution again if it's still needed.

For highly contended environments, such as using the pool to enable async operations for a web server, disabling auto_scale could cause a starvation deadlock. However, a server only has limited resources at its disposal and we can't create an unlimited number of workers; there is a point where we have to either drop the job or put it into a queue for later execution, though that decision mostly lies at the caller's discretion.

In addition, this API takes a prioritized parameter, which allows a more urgent job to be picked up by the workers sooner than jobs without priority.

Note that if the ThreadPool was created with delayed pool initialization, i.e. using either the build or build_with_config API, then the pool will be initialized the first time a job is submitted for execution.

Examples

extern crate threads_pool;
use threads_pool as pool;
use std::thread;
use std::time::Duration;

let mut pool = pool::ThreadPool::build(4);

for id in 0..10 {
    pool.exec(move || {
        thread::sleep(Duration::from_secs(1));
        println!("thread {} has woken up after 1 second ... ", id);
    }, true);
}

pub fn execute<F: FnOnce() + Send + 'static>(
    &self,
    f: F
) -> Result<(), ExecutionError>
[src]

Similar to exec, but this is the simplified version that takes an immutable reference to the pool. The job priority will be evaluated automatically based on the queue length, and for a pool under pressure, we will try to balance the queue.

This method will be a no-op if the pool is lazily created (i.e. created via the build functions) and activate has not been called on it yet. If you are not sure whether the pool has been activated, use exec instead, which will initialize the pool if that hasn't been done yet.

Examples

extern crate threads_pool;
use threads_pool as pool;
use std::thread;
use std::time::Duration;

let pool = pool::ThreadPool::new(4);

for id in 0..10 {
    pool.execute(move || {
        thread::sleep(Duration::from_secs(1));
        println!("thread {} has woken up after 1 second ... ", id);
    });
}

pub fn sync_block<R, F>(&self, f: F) -> Result<R, ExecutionError> where
    R: Send + 'static,
    F: FnOnce() -> R + Send + 'static, 
[src]
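
No description is provided upstream; judging from the signature, sync_block appears to submit the closure and block the caller until the job completes, returning its result. A minimal sketch under that assumption:

extern crate threads_pool;

use threads_pool as pool;

let pool = pool::ThreadPool::new(4);

// block the caller until the closure has run on a worker, then collect its result.
if let Ok(sum) = pool.sync_block(|| (1..=10).sum::<u32>()) {
    println!("sum computed on the pool: {}", sum);
}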

Trait Implementations

impl Drop for ThreadPool[src]

impl Hibernation for ThreadPool[src]

fn hibernate(&mut self)[src]

Put the pool into hibernation mode. In this mode, all workers will park themselves after finishing their current job to reduce CPU usage.

The pool will be brought back to normal mode on 2 occasions:

  1. Calling the unhibernate API to wake up the pool; or
  2. Sending a new job through the exec API, which will automatically assume an unhibernation desire, wake the pool up, and take and execute the incoming job. If you instead call the immutable API execute, the job will be queued yet not executed. Be aware that if the queue is full, the new job will be dropped and an execution error will be returned in this case.

It is recommended to explicitly call unhibernate when the caller wants to wake up the pool, to avoid side effects or undefined behavior.

fn unhibernate(&mut self)[src]

This will unhibernate the pool if it's currently in hibernation mode. It does nothing if the pool is in any other operating mode, e.g. working mode or shutting-down mode.

Caution: calling this API will set the status flag to normal, which may conflict with other actions that would set the status flag otherwise.

fn is_hibernating(&self) -> bool[src]

Check if the pool is in hibernation mode.
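
A short sketch (not from the upstream docs) of the hibernation round trip; the import path of the Hibernation trait is an assumption:

extern crate threads_pool;

use threads_pool as pool;
// assumed import path; the Hibernation trait must be in scope for its methods.
use threads_pool::Hibernation;

let mut pool = pool::ThreadPool::new(4);

// park the workers to cut CPU usage while there is nothing to do.
pool.hibernate();
assert!(pool.is_hibernating());

// explicitly wake the pool up before submitting more work.
pool.unhibernate();
assert!(!pool.is_hibernating());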

impl PoolManager for ThreadPool[src]

fn extend(&mut self, more: usize)[src]

Manually extend the size of the pool. If another operation is already adding more threads to the pool, e.g. the pool is under pressure and has automatically triggered an extension, then this operation will be cancelled.

fn shrink(&mut self, less: usize)[src]

Manually shrink the size of the pool and release system resources. If another operation that's reducing the size of the pool is underway, this shrink-op will be cancelled.

fn resize(&mut self, target: usize)[src]

Resize the pool to the desired size. This will trigger either a pool extension or a contraction. Note that if another pool-size changing operation is underway, the effect may be cancelled out if it's moving in the same direction (adding to or reducing the pool size).

fn auto_adjust(&mut self)[src]

Automatically adjust the pool size according to the following criteria: if the pool is idling and we've previously added temporary workers, we will tell them to cease work before their designated expiration time; if the pool is overwhelmed and needs more workers to handle jobs, we will add more threads to the pool.

fn auto_expire(&mut self, life: Option<Duration>)[src]

Let extended workers expire after they have been idle for too long.

fn kill_worker(&mut self, id: usize)[src]

Remove a thread worker from the pool with the given worker id.

fn clear(&mut self)[src]

Clear the pool. Note that this will not kill all workers immediately; the API will block until all workers have finished their current jobs. This also means queued jobs may be left in place until new threads are added to the pool; otherwise, the jobs will not be executed and will be discarded on program exit.

fn close(&mut self)[src]

Signal the threads in the pool that we're closing, but allow them to finish all jobs in the queue before exiting.

fn force_close(&mut self)[src]

Signal the threads that they must quit now; all queued jobs will be de facto discarded since we're closing the pool.
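
A sketch (not from the upstream docs) of a few PoolManager operations; the import path of the trait is an assumption:

extern crate threads_pool;

use threads_pool as pool;
// assumed import path; the PoolManager trait must be in scope for its methods.
use threads_pool::PoolManager;

let mut pool = pool::ThreadPool::new(4);

// grow the pool by 4 workers, then shrink it back down.
pool.extend(4);
pool.shrink(4);

// or jump straight to a target size.
pool.resize(6);

// let the workers drain the queue, then shut the pool down.
pool.close();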

impl PoolState for ThreadPool[src]

impl ThreadPoolStates for ThreadPool[src]

fn set_exec_timeout(&mut self, timeout: Option<Duration>)[src]

Set the job timeout period.

The timeout period is mainly for dropping jobs when the thread pool is under pressure, i.e. the producer creates new work faster than the consumers can handle it. When the job queue buffer is full, any additional jobs will be dropped after the timeout period. Set the timeout parameter to None to turn this feature off, which is the default behavior. Note that if the timeout is turned off, sending new jobs to the full pool will block the caller until some space is freed up in the work queue.

fn get_exec_timeout(&self) -> Option<Duration>[src]

Check the currently set timeout period. If the result is None, it means submitted jobs will not time out when the job queue is full, which implies the caller will be blocked until some space in the queue is freed up.

fn toggle_auto_scale(&mut self, auto_scale: bool)[src]

Toggle whether we shall scale the pool automatically when it is under pressure, i.e. add more threads to the pool to take the jobs. These temporarily added threads will go away once the pool is able to keep up with the incoming jobs, in order to release resources.

fn auto_scale_enabled(&self) -> bool[src]

Check whether the auto-scale feature is turned on.
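
A short sketch (not from the upstream docs) exercising the ThreadPoolStates getters and setters; the import path of the trait is an assumption:

extern crate threads_pool;

use std::time::Duration;
use threads_pool as pool;
// assumed import path; the ThreadPoolStates trait must be in scope for its methods.
use threads_pool::ThreadPoolStates;

let mut pool = pool::ThreadPool::new(4);

// give queued jobs up to 2 seconds before they are dropped under pressure.
pool.set_exec_timeout(Some(Duration::from_secs(2)));
assert_eq!(pool.get_exec_timeout(), Some(Duration::from_secs(2)));

// scale the pool up automatically when it falls behind.
pool.toggle_auto_scale(true);
assert!(pool.auto_scale_enabled());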

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> Erased for T

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.