pub struct LoomRuntime { /* private fields */ }
A bespoke thread pool runtime combining tokio and rayon with CPU pinning.
The runtime provides:
- A tokio async runtime for I/O-bound work
- A rayon thread pool for CPU-bound parallel work
- Automatic CPU pinning for both runtimes
- A task tracker for graceful shutdown
- Zero-allocation compute spawning after warmup
§Performance Guarantees
| Method | Overhead | Allocations | Tracked |
|---|---|---|---|
| spawn_async() | ~10ns | Token only | Yes |
| spawn_compute() | ~100-500ns | 0 bytes (after warmup) | Yes |
| install() | ~0ns | None | No |
| rayon_pool() | 0ns | None | No |
| tokio_handle() | 0ns | None | No |
§Examples
use loom_rs::LoomBuilder;
let runtime = LoomBuilder::new()
.prefix("myapp")
.tokio_threads(2)
.rayon_threads(6)
.build()?;
runtime.block_on(async {
// Spawn tracked async I/O task
let io_handle = runtime.spawn_async(async {
fetch_data().await
});
// Spawn tracked compute task and await result
let result = runtime.spawn_compute(|| {
expensive_computation()
}).await;
// Zero-overhead parallel iterators (within tracked context)
let processed = runtime.install(|| {
data.par_iter().map(|x| process(x)).collect()
});
});
// Graceful shutdown from main thread
runtime.block_until_idle();
Implementations§
impl LoomRuntime
pub fn config(&self) -> &LoomConfig
Get the resolved configuration.
pub fn tokio_handle(&self) -> &Handle
Get the tokio runtime handle.
This can be used to spawn untracked tasks or enter the runtime context.
For tracked async tasks, prefer spawn_async().
§Performance
Zero overhead - returns a reference.
pub fn rayon_pool(&self) -> &ThreadPool
Get the rayon thread pool.
This can be used to execute parallel iterators or spawn untracked work directly.
For tracked compute tasks, prefer spawn_compute().
For zero-overhead parallel iterators, prefer install().
§Performance
Zero overhead - returns a reference.
pub fn task_tracker(&self) -> &TaskTracker
Get the task tracker for graceful shutdown.
Use this to track spawned tasks and wait for them to complete.
pub fn block_on<F: Future>(&self, f: F) -> F::Output
Block on a future using the tokio runtime.
This is the main entry point for running async code from the main thread.
The current runtime is available via loom_rs::current_runtime() within
the block_on scope.
§Examples
runtime.block_on(async {
// Async code here
// loom_rs::current_runtime() works here
});
pub fn spawn_async<F>(&self, future: F) -> JoinHandle<F::Output>
Spawn a tracked async task on tokio.
The task is tracked for graceful shutdown via block_until_idle().
§Performance
Overhead: ~10ns (TaskTracker token only).
§Examples
runtime.block_on(async {
let handle = runtime.spawn_async(async {
// I/O-bound async work
fetch_data().await
});
let result = handle.await.unwrap();
});
pub async fn spawn_compute<F, R>(&self, f: F) -> R
Spawn CPU-bound work on rayon and await the result.
The task is tracked for graceful shutdown via block_until_idle().
Automatically uses per-type object pools for zero allocation after warmup.
§Performance
| State | Allocations | Overhead |
|---|---|---|
| Pool hit | 0 bytes | ~100-500ns |
| Pool miss | ~32 bytes | ~100-500ns |
| First call per type | Pool + state | ~1µs |
For zero-overhead parallel iterators (within an already-tracked context),
use install() instead.
§Examples
runtime.block_on(async {
let result = runtime.spawn_compute(|| {
// CPU-intensive work
expensive_computation()
}).await;
});
pub async fn spawn_adaptive<F, R>(&self, f: F) -> R
Spawn work with adaptive inline/offload decision.
Uses MAB (Multi-Armed Bandit) to learn whether this function type should run inline on tokio or offload to rayon. Good for handler patterns where work duration varies by input.
Unlike spawn_compute() which always offloads, this adaptively chooses
based on learned behavior and current system pressure.
§Performance
| Scenario | Behavior | Overhead |
|---|---|---|
| Fast work | Inlines after learning | ~100ns (decision only) |
| Slow work | Offloads after learning | ~100-500ns (+ offload) |
| Cold start | Explores both arms | Variable |
§Examples
runtime.block_on(async {
// MAB will learn whether this is fast or slow
let result = runtime.spawn_adaptive(|| {
process_item(item)
}).await;
});
pub async fn spawn_adaptive_with_hint<F, R>(&self, hint: ComputeHint, f: F) -> R
Spawn with hint for cold-start guidance.
The hint helps the scheduler make better initial decisions before it has learned the actual execution time of this function type.
§Hints
- ComputeHint::Low - Expected < 50µs (likely inline-safe)
- ComputeHint::Medium - Expected 50-500µs (borderline)
- ComputeHint::High - Expected > 500µs (should test offload early)
- ComputeHint::Unknown - No hint (default exploration)
§Examples
use loom_rs::ComputeHint;
runtime.block_on(async {
// Hint that this is likely slow work
let result = runtime.spawn_adaptive_with_hint(
ComputeHint::High,
|| expensive_computation()
).await;
});
pub fn install<F, R>(&self, f: F) -> R
Execute work on rayon with zero overhead (sync, blocking).
This installs the rayon pool for the current scope, allowing direct use of rayon’s parallel iterators.
NOT tracked - use within an already-tracked task (e.g., inside
spawn_async or spawn_compute) for proper shutdown tracking.
§Performance
Zero overhead - direct rayon access.
§Examples
runtime.block_on(async {
// This is a tracked context (we're in block_on)
let processed = runtime.install(|| {
use rayon::prelude::*;
data.par_iter().map(|x| process(x)).collect::<Vec<_>>()
});
});
pub async fn scope_compute<'env, F, R>(&self, f: F) -> R
Execute a scoped parallel computation, allowing borrowed data.
Unlike spawn_compute() which requires 'static bounds, scope_compute
allows borrowing local variables from the async context for use in parallel
work. This is safe because:
- The .await suspends the async task
- rayon::scope blocks until ALL spawned work completes
- Only then does the future resolve
- Therefore, borrowed references remain valid throughout
§Performance
| Aspect | Value |
|---|---|
| Allocation | ~96 bytes per call (not pooled) |
| Overhead | Comparable to spawn_compute() |
State cannot be pooled because the result type R may contain borrowed
references tied to the calling scope. Benchmarks show performance is
within noise of spawn_compute() - the overhead is dominated by
cross-thread communication, not state management.
§Cancellation Safety
If the future is dropped before completion (e.g., via select! or timeout),
the drop will block until the rayon scope finishes. This is necessary
to prevent use-after-free of borrowed data. In normal usage (awaiting to
completion), there is no blocking overhead.
§Panic Safety
If the closure or any spawned work panics, the panic is captured and re-raised when the future is polled. This ensures panics propagate to the async context as expected.
§Leaking the Future
Important: Do not leak this future via std::mem::forget or similar.
The safety of borrowed data relies on the future’s Drop implementation
blocking until the rayon scope completes. Leaking the future would allow
the rayon work to continue accessing borrowed data after it goes out of
scope, leading to undefined behavior. This is a known limitation shared
by other scoped async APIs (e.g., async-scoped).
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
runtime.block_on(async {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8];
let sum = AtomicI32::new(0);
// Borrow `data` and `sum` for parallel processing
runtime.scope_compute(|s| {
let (left, right) = data.split_at(data.len() / 2);
s.spawn(|_| {
sum.fetch_add(left.iter().sum(), Ordering::Relaxed);
});
s.spawn(|_| {
sum.fetch_add(right.iter().sum(), Ordering::Relaxed);
});
}).await;
// `data` and `sum` are still valid here
println!("Sum of {:?} = {}", data, sum.load(Ordering::Relaxed));
});
pub async fn scope_adaptive<'env, F, R>(&self, f: F) -> R
Execute scoped work with adaptive sync/async decision.
Uses MAB (Multi-Armed Bandit) to learn whether this function type should:
- Run synchronously via install() (blocks the tokio worker, lower overhead)
- Run asynchronously via scope_compute() (frees the tokio worker, higher overhead)
Unlike spawn_adaptive() which chooses between inline execution and rayon offload,
scope_adaptive always uses rayon::scope (needed for parallel spawning with
borrowed data), but chooses whether to block the tokio worker or use the async bridge.
§Performance
| Scenario | Behavior | Overhead |
|---|---|---|
| Fast scoped work | Sync after learning | ~0ns (install overhead only) |
| Slow scoped work | Async after learning | ~100-500ns (+ bridge) |
| Cold start | Explores both arms | Variable |
§When to Use
Use scope_adaptive when:
- You need to borrow local data ('env lifetime)
- You want parallel spawning via rayon::scope
rayon::scope - Work duration varies and you want the runtime to learn the best strategy
Use scope_compute directly when:
- Work is always slow (> 500µs)
- You want consistent async behavior
§Examples
use std::sync::atomic::{AtomicI32, Ordering};
runtime.block_on(async {
let data = vec![1, 2, 3, 4, 5, 6, 7, 8];
let sum = AtomicI32::new(0);
// MAB learns whether this is fast or slow scoped work
runtime.scope_adaptive(|s| {
let (left, right) = data.split_at(data.len() / 2);
let sum_ref = &sum;
s.spawn(move |_| {
sum_ref.fetch_add(left.iter().sum(), Ordering::Relaxed);
});
s.spawn(move |_| {
sum_ref.fetch_add(right.iter().sum(), Ordering::Relaxed);
});
}).await;
println!("Sum: {}", sum.load(Ordering::Relaxed));
});
pub async fn scope_adaptive_with_hint<'env, F, R>(&self, hint: ComputeHint, f: F) -> R
Execute scoped work with hint for cold-start guidance.
The hint helps the scheduler make better initial decisions before it has learned the actual execution time of this function type.
§Hints
- ComputeHint::Low - Expected < 50µs (likely sync-safe)
- ComputeHint::Medium - Expected 50-500µs (borderline)
- ComputeHint::High - Expected > 500µs (should test async early)
- ComputeHint::Unknown - No hint (default exploration)
§Examples
use loom_rs::ComputeHint;
use std::sync::atomic::{AtomicI32, Ordering};
runtime.block_on(async {
let data = vec![1, 2, 3, 4];
let sum = AtomicI32::new(0);
// Hint that this is likely fast work
runtime.scope_adaptive_with_hint(ComputeHint::Low, |s| {
let sum_ref = &sum;
for &val in &data {
s.spawn(move |_| {
sum_ref.fetch_add(val, Ordering::Relaxed);
});
}
}).await;
});
pub fn shutdown(&self)
Stop accepting new tasks.
After calling this, spawn_async() and spawn_compute() will still
work, but the shutdown process has begun. Use is_idle() or
wait_for_shutdown() to check/wait for completion.
pub fn is_idle(&self) -> bool
Check if all tracked tasks have completed.
Returns true if shutdown() has been called and all tracked async
tasks and compute tasks have finished.
§Performance
Zero overhead - single atomic load.
pub fn compute_tasks_in_flight(&self) -> usize
Get the number of compute tasks currently in flight.
pub async fn wait_for_shutdown(&self)
Wait for all tracked tasks to complete (async counterpart of block_until_idle()).
pub fn block_until_idle(&self)
Block until all tracked tasks complete (from main thread).
This is the primary shutdown method. It:
- Calls shutdown() to close the task tracker
- Waits for all tracked async and compute tasks to finish
§Examples
runtime.block_on(async {
runtime.spawn_async(background_work());
runtime.spawn_compute(|| cpu_work());
});
// Graceful shutdown from main thread
runtime.block_until_idle();
pub fn mab_scheduler(&self) -> Arc<MabScheduler>
Get the shared MAB scheduler for handler patterns.
The scheduler is lazily initialized on first call. Use this when you need to make manual scheduling decisions in handler code.
§Example
use loom_rs::mab::{FunctionKey, Arm};
use std::time::Instant;
let sched = runtime.mab_scheduler();
let key = FunctionKey::from_type::<MyHandler>();
let ctx = runtime.collect_context();
let (id, arm) = sched.choose(key, &ctx);
let start = Instant::now();
let result = match arm {
    Arm::InlineTokio => my_work(),
    Arm::OffloadRayon => runtime.block_on(async {
        runtime.spawn_compute(|| my_work()).await
    }),
};
let elapsed_us = start.elapsed().as_micros() as u64;
// Here the whole match is the measured work, so the total elapsed time
// doubles as the function-only duration.
sched.finish(id, elapsed_us, Some(elapsed_us));
pub fn collect_context(&self) -> Context
Collect current runtime context for MAB scheduling decisions.
Returns a snapshot of current metrics including inflight tasks, spawn rate, and queue depth.
pub fn tokio_threads(&self) -> usize
Get the number of tokio worker threads.
pub fn rayon_threads(&self) -> usize
Get the number of rayon threads.
pub fn prometheus_metrics(&self) -> &LoomMetrics
Get the Prometheus metrics.
The metrics are always collected (low-overhead atomic operations).
If a Prometheus registry was provided via LoomBuilder::prometheus_registry(),
the metrics are also registered for exposition.
pub fn tokio_cpus(&self) -> &[usize]
Get the CPUs allocated to tokio workers.
pub fn rayon_cpus(&self) -> &[usize]
Get the CPUs allocated to rayon workers.
Trait Implementations§
impl Debug for LoomRuntime
Auto Trait Implementations§
impl Freeze for LoomRuntime
impl !RefUnwindSafe for LoomRuntime
impl Send for LoomRuntime
impl Sync for LoomRuntime
impl Unpin for LoomRuntime
impl !UnwindSafe for LoomRuntime
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> Paint for T where T: ?Sized