pub struct ThreadPool { /* private fields */ }
A thread pool for running multiple tasks on a configurable group of threads.
Thread pools can improve performance when executing a large number of concurrent tasks since the expensive overhead of spawning threads is minimized as threads are re-used for multiple tasks. Thread pools are also useful for controlling and limiting parallelism.
Dropping the thread pool will prevent any further tasks from being scheduled
on the pool and detaches all threads in the pool. If you want to block until
all pending tasks have completed and the pool is entirely shut down, then
use one of the available join methods.
§Pool size
Every thread pool has a minimum and maximum number of worker threads that it will spawn for executing tasks. This range is known as the pool size, and affects pool behavior in the following ways:
- Minimum size: A guaranteed number of threads that will always be created and maintained by the thread pool. Threads will be eagerly created to meet this minimum size when the pool is created, and at least this many threads will be kept running in the pool until the pool is shut down.
- Maximum size: A limit on the number of additional threads to spawn to execute more work.
§Queueing
If a new or existing worker thread is unable to immediately start processing a submitted task, that task will be placed in a queue for worker threads to take from when they complete their current tasks. Queueing is only used when it is not possible to directly hand off a task to an existing thread and spawning a new thread would exceed the pool’s configured maximum size.
By default, thread pools are configured to use an unbounded queue which
can hold an unlimited number of pending tasks. This is a sensible default,
but is not desirable in all use-cases and can be changed with
Builder::queue_limit.
§Monitoring
Each pool instance provides methods for gathering various statistics on the pool’s usage, such as the current number of threads, tasks completed over time, and queued tasks. While these methods provide the most up-to-date numbers upon invocation, they should not be used for controlling program behavior since they can become immediately outdated due to the live nature of the pool.
§Implementations
impl ThreadPool
pub fn new() -> Self
Create a new thread pool with the default configuration.
If you’d like to customize the thread pool’s behavior then use
ThreadPool::builder.
pub fn queued_tasks(&self) -> usize
Get the number of tasks queued for execution, but not yet started.
This number will always be less than or equal to the configured
queue_limit, if any.
Note that the number returned may become immediately outdated after invocation.
§Examples
use std::{thread::sleep, time::Duration};
// Create a pool with just one thread.
let pool = threadfin::builder().size(1).build();
// Nothing is queued yet.
assert_eq!(pool.queued_tasks(), 0);
// Start a slow task.
let task = pool.execute(|| {
sleep(Duration::from_millis(100));
});
// Wait a little for the task to start.
sleep(Duration::from_millis(10));
assert_eq!(pool.queued_tasks(), 0);
// Enqueue some more tasks.
let count = 4;
for _ in 0..count {
pool.execute(|| {
// work to do
});
}
// The tasks should still be in the queue because the slow task is
// running on the only thread.
assert_eq!(pool.queued_tasks(), count);
pub fn running_tasks(&self) -> usize
Get the number of tasks currently running.
Note that the number returned may become immediately outdated after invocation.
§Examples
use std::{thread::sleep, time::Duration};
let pool = threadfin::ThreadPool::new();
// Nothing is running yet.
assert_eq!(pool.running_tasks(), 0);
// Start a task.
let task = pool.execute(|| {
sleep(Duration::from_millis(100));
});
// Wait a little for the task to start.
sleep(Duration::from_millis(10));
assert_eq!(pool.running_tasks(), 1);
// Wait for the task to complete.
task.join();
assert_eq!(pool.running_tasks(), 0);
pub fn completed_tasks(&self) -> u64
Get the number of tasks completed (successfully or otherwise) by this pool since it was created.
Note that the number returned may become immediately outdated after invocation.
§Examples
let pool = threadfin::ThreadPool::new();
assert_eq!(pool.completed_tasks(), 0);
pool.execute(|| 2 + 2).join();
assert_eq!(pool.completed_tasks(), 1);
pool.execute(|| 2 + 2).join();
assert_eq!(pool.completed_tasks(), 2);
pub fn panicked_tasks(&self) -> u64
Get the number of tasks that have panicked since the pool was created.
Note that the number returned may become immediately outdated after invocation.
§Examples
use std::{thread::sleep, time::Duration};
let pool = threadfin::ThreadPool::new();
assert_eq!(pool.panicked_tasks(), 0);
let task = pool.execute(|| {
panic!("this task panics");
});
while !task.is_done() {
sleep(Duration::from_millis(100));
}
assert_eq!(pool.panicked_tasks(), 1);
pub fn execute<T, F>(&self, closure: F) -> Task<T>
Submit a closure to be executed by the thread pool.
If all worker threads are busy, but there are fewer threads than the configured maximum, an additional thread will be created and added to the pool to execute this task.
If all worker threads are busy and the pool has reached the configured maximum number of threads, the task will be enqueued. If the queue is configured with a limit, this call will block until space becomes available in the queue.
§Examples
let pool = threadfin::ThreadPool::new();
let task = pool.execute(|| {
2 + 2 // some expensive computation
});
// do something in the meantime
// now wait for the result
let sum = task.join();
assert_eq!(sum, 4);
pub fn execute_future<T, F>(&self, future: F) -> Task<T>
Submit a future to be executed by the thread pool.
If all worker threads are busy, but there are fewer threads than the configured maximum, an additional thread will be created and added to the pool to execute this task.
If all worker threads are busy and the pool has reached the configured maximum number of threads, the task will be enqueued. If the queue is configured with a limit, this call will block until space becomes available in the queue.
§Thread locality
While the given future must implement Send to be moved into a thread
in the pool to be processed, once the future is assigned a thread it
will stay assigned to that single thread until completion. This improves
cache locality even across .await points in the future.
§Examples
let pool = threadfin::ThreadPool::new();
let task = pool.execute_future(async {
2 + 2 // some asynchronous code
});
// do something in the meantime
// now wait for the result
let sum = task.join();
assert_eq!(sum, 4);
pub fn try_execute<T, F>(&self, closure: F) -> Result<Task<T>, PoolFullError<F>>
Attempts to execute a closure on the thread pool without blocking.
If the pool is at its max thread count and the task queue is full, the task is rejected and an error is returned. The original closure can be extracted from the error.
§Examples
One use for this method is implementing backpressure by executing a closure on the current thread if a pool is currently full.
let pool = threadfin::ThreadPool::new();
// Try to run a closure in the thread pool.
let result = pool.try_execute(|| 2 + 2)
// If successfully submitted, block until the task completes.
.map(|task| task.join())
// If the pool was full, invoke the closure here and now.
.unwrap_or_else(|error| error.into_inner()());
assert_eq!(result, 4);
pub fn try_execute_future<T, F>(&self, future: F) -> Result<Task<T>, PoolFullError<F>>
Attempts to execute a future on the thread pool without blocking.
If the pool is at its max thread count and the task queue is full, the task is rejected and an error is returned. The original future can be extracted from the error.
pub fn join(self)
Shut down this thread pool and block until all existing tasks have completed and threads have stopped.
pub fn join_timeout(self, timeout: Duration) -> bool
Shut down this thread pool and block until all existing tasks have completed and threads have stopped, or until the given timeout passes.
Returns true if the thread pool shut down fully before the timeout.
pub fn join_deadline(self, deadline: Instant) -> bool
Shut down this thread pool and block until all existing tasks have completed and threads have stopped, or the given deadline passes.
Returns true if the thread pool shut down fully before the deadline.