Struct tokio::runtime::Builder

pub struct Builder { /* private fields */ }
Available on crate feature rt only.

Builds a Tokio Runtime with custom configuration values.

Methods can be chained in order to set the configuration values. The Runtime is constructed by calling build.

New instances of Builder are obtained via Builder::new_multi_thread or Builder::new_current_thread.

See function level documentation for details on the various configuration settings.

Examples

use tokio::runtime::Builder;

fn main() {
    // build runtime
    let runtime = Builder::new_multi_thread()
        .worker_threads(4)
        .thread_name("my-custom-name")
        .thread_stack_size(3 * 1024 * 1024)
        .build()
        .unwrap();

    // use runtime ...
}

Implementations

impl Builder

pub fn new_current_thread() -> Builder

Returns a new builder with the current thread scheduler selected.

Configuration methods can be chained on the return value.

To spawn non-Send tasks on the resulting runtime, combine it with a LocalSet.
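
For instance, a minimal sketch (an illustration, not from the official examples; it assumes the rt feature and uses tokio::task::LocalSet) of running a !Send task on a current-thread runtime:

use std::rc::Rc;
use tokio::runtime::Builder;
use tokio::task::LocalSet;

fn main() {
    let rt = Builder::new_current_thread()
        .build()
        .unwrap();

    // A LocalSet lets this runtime drive tasks that are !Send,
    // such as ones holding an Rc.
    let local = LocalSet::new();
    local.block_on(&rt, async {
        let data = Rc::new("not Send");
        tokio::task::spawn_local(async move {
            println!("{}", data);
        })
        .await
        .unwrap();
    });
}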

pub fn new_multi_thread() -> Builder

Available on crate feature rt-multi-thread only.

Returns a new builder with the multi thread scheduler selected.

Configuration methods can be chained on the return value.

pub fn enable_all(&mut self) -> &mut Self

Enables both I/O and time drivers.

Doing this is a shorthand for calling enable_io and enable_time individually. If additional components are added to Tokio in the future, enable_all will include these future components.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .enable_all()
    .build()
    .unwrap();

pub fn worker_threads(&mut self, val: usize) -> &mut Self

Sets the number of worker threads the Runtime will use.

This can be any number above 0, though it is advised to keep this value on the smaller side.

This will override the value read from environment variable TOKIO_WORKER_THREADS.

Default

The default value is the number of cores available to the system.

When using the current_thread runtime this method has no effect.

Examples
Multi-threaded runtime with 4 threads
use tokio::runtime;

// This will spawn a work-stealing runtime with 4 worker threads.
let rt = runtime::Builder::new_multi_thread()
    .worker_threads(4)
    .build()
    .unwrap();

rt.spawn(async move {});
Current thread runtime (will only run on the current thread via Runtime::block_on)
use tokio::runtime;

// Create a runtime that _must_ be driven from a call
// to `Runtime::block_on`.
let rt = runtime::Builder::new_current_thread()
    .build()
    .unwrap();

// This will run the runtime and future on the current thread
rt.block_on(async move {});
Panics

This will panic if val is not larger than 0.

pub fn max_blocking_threads(&mut self, val: usize) -> &mut Self

Specifies the limit for additional threads spawned by the Runtime.

These threads are used for blocking operations like tasks spawned through spawn_blocking. Unlike the worker_threads, they are not always active and will exit if left idle for too long. You can change this timeout duration with thread_keep_alive.

The default value is 512.

Panics

This will panic if val is not larger than 0.

Upgrading from 0.x

In old versions max_threads limited both blocking and worker threads, but the current max_blocking_threads does not include async worker threads in the count.
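
As an illustration (a sketch, not taken from the official examples; it assumes the rt-multi-thread feature), capping the blocking pool and running work on it via spawn_blocking:

use tokio::runtime::Builder;

fn main() {
    // At most 8 additional threads will be spawned for blocking work.
    let rt = Builder::new_multi_thread()
        .max_blocking_threads(8)
        .build()
        .unwrap();

    rt.block_on(async {
        // spawn_blocking tasks run on the bounded blocking pool,
        // not on the async worker threads.
        let sum = tokio::task::spawn_blocking(|| (1..=100u64).sum::<u64>())
            .await
            .unwrap();
        assert_eq!(sum, 5050);
    });
}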

pub fn thread_name(&mut self, val: impl Into<String>) -> &mut Self

Sets the name of threads spawned by the Runtime’s thread pool.

The default name is “tokio-runtime-worker”.

Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_name("my-pool")
    .build();

pub fn thread_name_fn<F>(&mut self, f: F) -> &mut Self where F: Fn() -> String + Send + Sync + 'static,

Sets a function used to generate the name of threads spawned by the Runtime’s thread pool.

The default name fn is || "tokio-runtime-worker".into().

Examples
use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_name_fn(|| {
        static ATOMIC_ID: AtomicUsize = AtomicUsize::new(0);
        let id = ATOMIC_ID.fetch_add(1, Ordering::SeqCst);
        format!("my-pool-{}", id)
    })
    .build();

pub fn thread_stack_size(&mut self, val: usize) -> &mut Self

Sets the stack size (in bytes) for worker threads.

The actual stack size may be greater than this value if the platform specifies a minimal stack size.

The default stack size for spawned threads is 2 MiB, though this particular stack size is subject to change in the future.

Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_stack_size(32 * 1024)
    .build();

pub fn on_thread_start<F>(&mut self, f: F) -> &mut Self where F: Fn() + Send + Sync + 'static,

Executes function f after each thread is started but before it starts doing work.

This is intended for bookkeeping and monitoring use cases.

Examples
use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_start(|| {
        println!("thread started");
    })
    .build();

pub fn on_thread_stop<F>(&mut self, f: F) -> &mut Self where F: Fn() + Send + Sync + 'static,

Executes function f before each thread stops.

This is intended for bookkeeping and monitoring use cases.

Examples
use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_stop(|| {
        println!("thread stopping");
    })
    .build();

pub fn on_thread_park<F>(&mut self, f: F) -> &mut Self where F: Fn() + Send + Sync + 'static,

Executes function f just before a thread is parked (goes idle). f is called within the Tokio context, so functions like tokio::spawn can be called, and may result in this thread being unparked immediately.

This can be used to start work only when the executor is idle, or for bookkeeping and monitoring purposes.

Note: There can only be one park callback for a runtime; calling this function more than once replaces the last callback defined, rather than adding to it.

Examples
Multithreaded executor
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::runtime;
use tokio::sync::Barrier;

let once = AtomicBool::new(true);
let barrier = Arc::new(Barrier::new(2));

let runtime = runtime::Builder::new_multi_thread()
    .worker_threads(1)
    .on_thread_park({
        let barrier = barrier.clone();
        move || {
            let barrier = barrier.clone();
            if once.swap(false, Ordering::Relaxed) {
                tokio::spawn(async move { barrier.wait().await; });
            }
        }
    })
    .build()
    .unwrap();

runtime.block_on(async {
    barrier.wait().await;
})
Current thread executor
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::runtime;
use tokio::sync::Barrier;

let once = AtomicBool::new(true);
let barrier = Arc::new(Barrier::new(2));

let runtime = runtime::Builder::new_current_thread()
    .on_thread_park({
        let barrier = barrier.clone();
        move || {
            let barrier = barrier.clone();
            if once.swap(false, Ordering::Relaxed) {
                tokio::spawn(async move { barrier.wait().await; });
            }
        }
    })
    .build()
    .unwrap();

runtime.block_on(async {
    barrier.wait().await;
})

pub fn on_thread_unpark<F>(&mut self, f: F) -> &mut Self where F: Fn() + Send + Sync + 'static,

Executes function f just after a thread unparks (starts executing tasks).

This is intended for bookkeeping and monitoring use cases; note that work in this callback will increase latencies when the application has allowed one or more runtime threads to go idle.

Note: There can only be one unpark callback for a runtime; calling this function more than once replaces the last callback defined, rather than adding to it.

Examples
use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_unpark(|| {
        println!("thread unparking");
    })
    .build();

runtime.unwrap().block_on(async {
    tokio::task::yield_now().await;
    println!("Hello from Tokio!");
})

pub fn build(&mut self) -> Result<Runtime>

Creates the configured Runtime.

The returned Runtime instance is ready to spawn tasks.

Examples
use tokio::runtime::Builder;

let rt = Builder::new_multi_thread().build().unwrap();

rt.block_on(async {
    println!("Hello from the Tokio runtime");
});

pub fn thread_keep_alive(&mut self, duration: Duration) -> &mut Self

Sets a custom timeout for a thread in the blocking pool.

By default, the timeout for a thread is set to 10 seconds. This can be overridden using .thread_keep_alive().

Example
use std::time::Duration;
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_keep_alive(Duration::from_millis(100))
    .build();

pub fn global_queue_interval(&mut self, val: u32) -> &mut Self

Sets the number of scheduler ticks after which the scheduler will poll the global task queue.

A scheduler “tick” roughly corresponds to one poll invocation on a task.

By default the global queue interval is:

  • 31 for the current-thread scheduler.
  • 61 for the multithreaded scheduler.

Schedulers have a local queue of already-claimed tasks, and a global queue of incoming tasks. Setting the interval to a smaller value increases the fairness of the scheduler, at the cost of more synchronization overhead. That can be beneficial for prioritizing getting started on new work, especially if tasks frequently yield rather than complete or await on further I/O. Conversely, a higher value prioritizes existing work, and is a good choice when most tasks quickly complete polling.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .global_queue_interval(31)
    .build();

pub fn event_interval(&mut self, val: u32) -> &mut Self

Sets the number of scheduler ticks after which the scheduler will poll for external events (timers, I/O, and so on).

A scheduler “tick” roughly corresponds to one poll invocation on a task.

By default, the event interval is 61 for all scheduler types.

Setting the event interval determines the effective “priority” of delivering these external events (which may wake up additional tasks), compared to executing tasks that are currently ready to run. A smaller value is useful when tasks frequently spend a long time in polling, or frequently yield, which can result in overly long delays picking up I/O events. Conversely, picking up new events requires extra synchronization and syscall overhead, so if tasks generally complete their polling quickly, a higher event interval will minimize that overhead while still keeping the scheduler responsive to events.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .event_interval(31)
    .build();

pub fn unhandled_panic(&mut self, behavior: UnhandledPanic) -> &mut Self

Available on tokio_unstable only.

Configure how the runtime responds to an unhandled panic on a spawned task.

By default, an unhandled panic (i.e. a panic not caught by std::panic::catch_unwind) has no impact on the runtime’s execution. The panic’s error value is forwarded to the task’s JoinHandle and all other spawned tasks continue running.

The unhandled_panic option enables configuring this behavior.

  • UnhandledPanic::Ignore is the default behavior. Panics on spawned tasks have no impact on the runtime’s execution.
  • UnhandledPanic::ShutdownRuntime will force the runtime to shut down immediately when a spawned task panics, even if that task’s JoinHandle has not been dropped. All other spawned tasks will immediately terminate and further calls to Runtime::block_on will panic.
Unstable

This option is currently unstable and its implementation is incomplete. The API may change or be removed in the future. See tokio-rs/tokio#4516 for more details.

Examples

The following demonstrates a runtime configured to shut down on panic. The first spawned task panics and results in the runtime shutting down. The second spawned task never has a chance to execute. The call to block_on will panic due to the runtime being forcibly shut down.

use tokio::runtime::{self, UnhandledPanic};

let rt = runtime::Builder::new_current_thread()
    .unhandled_panic(UnhandledPanic::ShutdownRuntime)
    .build()
    .unwrap();

rt.spawn(async { panic!("boom"); });
rt.spawn(async {
    // This task never completes.
});

rt.block_on(async {
    // Do some work
})

pub fn disable_lifo_slot(&mut self) -> &mut Self

Available on tokio_unstable only.

Disables the LIFO task scheduler heuristic.

The multi-threaded scheduler includes a heuristic for optimizing message-passing patterns. This heuristic results in the last scheduled task being polled first.

To implement this heuristic, each worker thread has a slot which holds the task that should be polled next. However, this slot cannot be stolen by other worker threads, which can result in lower total throughput when tasks tend to have longer poll times.

This configuration option will disable this heuristic, resulting in all scheduled tasks being pushed into the worker-local queue, which is stealable.

Consider trying this option when the task “scheduled” time is high but the runtime is underutilized. Use tokio-rs/tokio-metrics to collect this data.

Unstable

This configuration option is considered a workaround for the LIFO slot not being stealable. When the slot becomes stealable, we will revisit whether or not this option is necessary. See tokio-rs/tokio#4941.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .disable_lifo_slot()
    .build()
    .unwrap();

pub fn rng_seed(&mut self, seed: RngSeed) -> &mut Self

Available on tokio_unstable only.

Specifies the random number generation seed to use within all threads associated with the runtime being built.

This option is intended to make certain parts of the runtime deterministic (e.g. the tokio::select! macro). In the case of tokio::select! it will ensure that the order that branches are polled is deterministic.

In addition to the code specifying rng_seed and interacting with the runtime, the internals of Tokio and the Rust compiler may affect the sequences of random numbers. In order to ensure repeatable results, the version of Tokio, the versions of all other dependencies that interact with Tokio, and the Rust compiler version should also all remain constant.

Examples
use tokio::runtime::{self, RngSeed};

let seed = RngSeed::from_bytes(b"place your seed here");
let rt = runtime::Builder::new_current_thread()
    .rng_seed(seed)
    .build();

impl Builder

pub fn enable_io(&mut self) -> &mut Self

Available on crate feature net, or Unix and crate feature process, or Unix and crate feature signal only.

Enables the I/O driver.

Doing this enables using net, process, signal, and some I/O types on the runtime.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .enable_io()
    .build()
    .unwrap();

pub fn max_io_events_per_tick(&mut self, capacity: usize) -> &mut Self

Available on crate feature net, or Unix and crate feature process, or Unix and crate feature signal only.

Enables the I/O driver and configures the max number of events to be processed per tick.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_current_thread()
    .enable_io()
    .max_io_events_per_tick(1024)
    .build()
    .unwrap();

impl Builder

pub fn enable_time(&mut self) -> &mut Self

Available on crate feature time only.

Enables the time driver.

Doing this enables using tokio::time on the runtime.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .enable_time()
    .build()
    .unwrap();

impl Builder

pub fn start_paused(&mut self, start_paused: bool) -> &mut Self

Available on crate feature test-util only.

Controls whether the runtime’s clock starts paused or advancing.

Pausing time requires the current-thread runtime; construction of the runtime will panic otherwise.

Examples
use tokio::runtime;

let rt = runtime::Builder::new_current_thread()
    .enable_time()
    .start_paused(true)
    .build()
    .unwrap();
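
As a usage sketch (an illustration assuming the time and test-util features): while the clock is paused, the current-thread runtime auto-advances time when it is otherwise idle, so timers resolve without real waiting:

use tokio::runtime;
use tokio::time::{Duration, Instant};

fn main() {
    let rt = runtime::Builder::new_current_thread()
        .enable_time()
        .start_paused(true)
        .build()
        .unwrap();

    rt.block_on(async {
        let start = Instant::now();
        // Auto-advance: this completes immediately in wall-clock terms,
        // but the runtime's clock still moves forward by 60 seconds.
        tokio::time::sleep(Duration::from_secs(60)).await;
        assert!(start.elapsed() >= Duration::from_secs(60));
    });
}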

Trait Implementations

impl Debug for Builder

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.