pub struct ExecutorProxy {}
A proxy struct to the underlying LocalExecutor. It is accessible from anywhere within a Glommio context using executor().
Implementations§
impl ExecutorProxy
pub fn need_preempt(&self) -> bool
Checks if this task has run for too long and needs to be preempted. This is useful for situations where we can’t call .await, for instance, if a RefMut is held. If this tests true, then the user is responsible for making any preparations necessary for calling .await and doing it themselves.
§Examples
use glommio::LocalExecutorBuilder;
let ex = LocalExecutorBuilder::default()
    .spawn(|| async {
        loop {
            if glommio::executor().need_preempt() {
                break;
            }
        }
    })
    .unwrap();
ex.join().unwrap();
pub async fn yield_if_needed(&self)
Conditionally yields the current task queue. The scheduler may then process other task queues according to their latency requirements. If a call to this function results in the current queue yielding, then the calling task is moved to the back of the yielded task queue.
Under which conditions this function yields is an implementation detail subject to change, but it will always be related to the latency guarantees that the task queues want to uphold via their Latency::Matters parameter (or Latency::NotImportant).
This function is the central mechanism of task cooperation in Glommio and should be preferred over unconditional yielding methods like ExecutorProxy::yield_now and ExecutorProxy::yield_task_queue_now.
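§Examples
A minimal sketch (not from the original documentation) of a long-running loop that stays cooperative by awaiting yield_if_needed on each iteration; the iteration count and the commented-out unit of work are placeholders:
use glommio::LocalExecutorBuilder;
let ex = LocalExecutorBuilder::default()
    .spawn(|| async {
        for _ in 0..1_000_000 {
            // ... one unit of work here ...
            // Yields only if the scheduler decides this queue has run for
            // long enough; otherwise the await returns immediately.
            glommio::executor().yield_if_needed().await;
        }
    })
    .unwrap();
ex.join().unwrap();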
pub async fn yield_now(&self)
Unconditionally yields the current task and forces the scheduler to poll another task within the current task queue. Calling this wakes the current task and returns Poll::Pending once.
Unless you know you need to yield right now, using ExecutorProxy::yield_if_needed instead is the better choice.
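§Examples
A minimal sketch (not from the original documentation) showing an unconditional yield so that a task spawned on the same queue gets polled before the current task continues:
use glommio::LocalExecutor;
let local_ex = LocalExecutor::default();
local_ex.run(async {
    let task = glommio::executor().spawn_local(async { 40 + 2 });
    // Hand control back to the scheduler exactly once.
    glommio::executor().yield_now().await;
    assert_eq!(task.await, 42);
});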
pub async fn yield_task_queue_now(&self)
Unconditionally yields the current task queue and forces the scheduler to poll another queue. Use ExecutorProxy::yield_now to yield within a queue.
Unless you know you need to yield right now, using ExecutorProxy::yield_if_needed instead is the better choice.
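§Examples
A minimal sketch (not from the original documentation) that unconditionally yields the whole task queue so that other queues get a chance to run:
use glommio::LocalExecutorBuilder;
let ex = LocalExecutorBuilder::default()
    .spawn(|| async {
        // Let the scheduler poll other task queues before resuming here.
        glommio::executor().yield_task_queue_now().await;
    })
    .unwrap();
ex.join().unwrap();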
pub fn id(&self) -> usize
Returns the id of the current executor.
If called from a LocalExecutor, returns the id of the executor. Otherwise, this method panics.
§Examples
use glommio::{LocalExecutor, Task};
let local_ex = LocalExecutor::default();
local_ex.run(async {
    println!("my ID: {}", glommio::executor().id());
});
pub fn create_task_queue(
    &self,
    shares: Shares,
    latency: Latency,
    name: &str,
) -> TaskQueueHandle
Creates a new task queue, with a given latency hint and the provided name.
Each task queue is scheduled based on the Shares and Latency system, and tasks within a queue are scheduled serially.
Returns an opaque handle that can later be used to launch tasks into that queue with spawn_local_into.
§Examples
use glommio::{Latency, LocalExecutor, Shares};
use std::time::Duration;
let local_ex = LocalExecutor::default();
local_ex.run(async move {
    let task_queue = glommio::executor().create_task_queue(
        Shares::default(),
        Latency::Matters(Duration::from_secs(1)),
        "my_tq",
    );
    let task = glommio::spawn_local_into(
        async {
            println!("Hello world");
        },
        task_queue,
    )
    .expect("failed to spawn task");
});
pub fn current_task_queue(&self) -> TaskQueueHandle
Returns the TaskQueueHandle that represents the TaskQueue currently running. This can be passed directly into crate::spawn_local_into.
This must be run from a task that was generated through crate::spawn_local or crate::spawn_local_into.
§Examples
use glommio::{Latency, LocalExecutor, LocalExecutorBuilder, Shares};
let ex = LocalExecutorBuilder::default()
    .spawn(|| async move {
        let original_tq = glommio::executor().current_task_queue();
        let new_tq = glommio::executor().create_task_queue(
            Shares::default(),
            Latency::NotImportant,
            "test",
        );
        let task = glommio::spawn_local_into(
            async move {
                glommio::spawn_local_into(
                    async move {
                        assert_eq!(glommio::executor().current_task_queue(), original_tq);
                    },
                    original_tq,
                )
                .unwrap();
            },
            new_tq,
        )
        .unwrap();
        task.await;
    })
    .unwrap();
ex.join().unwrap();
pub fn task_queue_stats(
    &self,
    handle: TaskQueueHandle,
) -> Result<TaskQueueStats, ()>
Returns a Result with its Ok value wrapping a TaskQueueStats, or a GlommioError of type QueueErrorKind if there is no task queue with this handle.
§Examples
use glommio::{Latency, LocalExecutorBuilder, Shares};
let ex = LocalExecutorBuilder::default()
    .spawn(|| async move {
        let new_tq = glommio::executor().create_task_queue(
            Shares::default(),
            Latency::NotImportant,
            "test",
        );
        println!(
            "Stats for test: {:?}",
            glommio::executor().task_queue_stats(new_tq).unwrap()
        );
    })
    .unwrap();
ex.join().unwrap();
pub fn all_task_queue_stats<V>(&self, output: V) -> V
where
    V: Extend<TaskQueueStats>,
Returns a collection of TaskQueueStats with information about all task queues in the system.
The collection can be anything that implements Extend and it is initially passed by the user, so they can control how allocations are done.
§Examples
use glommio::{executor, Latency, LocalExecutorBuilder, Shares};
let ex = LocalExecutorBuilder::default()
    .spawn(|| async move {
        let new_tq = glommio::executor().create_task_queue(
            Shares::default(),
            Latency::NotImportant,
            "test",
        );
        let v = Vec::new();
        println!(
            "Stats for all queues: {:?}",
            glommio::executor().all_task_queue_stats(v)
        );
    })
    .unwrap();
ex.join().unwrap();
pub fn executor_stats(&self) -> ExecutorStats
Returns an ExecutorStats struct with information about this executor.
§Examples
use glommio::{executor, LocalExecutorBuilder};
let ex = LocalExecutorBuilder::default()
    .spawn(|| async move {
        println!(
            "Stats for executor: {:?}",
            glommio::executor().executor_stats()
        );
    })
    .unwrap();
ex.join().unwrap();
pub fn io_stats(&self) -> IoStats
Returns an IoStats struct with information about IO performed by this executor’s reactor.
§Examples
use glommio::LocalExecutorBuilder;
let ex = LocalExecutorBuilder::default()
    .spawn(|| async move {
        println!("Stats for executor: {:?}", glommio::executor().io_stats());
    })
    .unwrap();
ex.join().unwrap();
pub fn task_queue_io_stats(
    &self,
    handle: TaskQueueHandle,
) -> Result<IoStats, ()>
Returns an IoStats struct with information about IO performed from the provided TaskQueue by this executor’s reactor.
§Examples
use glommio::{Latency, LocalExecutorBuilder, Shares};
let ex = LocalExecutorBuilder::default()
    .spawn(|| async move {
        let new_tq = glommio::executor().create_task_queue(
            Shares::default(),
            Latency::NotImportant,
            "test",
        );
        println!(
            "Stats for executor: {:?}",
            glommio::executor().task_queue_io_stats(new_tq)
        );
    })
    .unwrap();
ex.join().unwrap();
pub fn spawn_local<T>(
    &self,
    future: impl Future<Output = T> + 'static,
) -> Task<T>
where
    T: 'static,
Spawns a task onto the current single-threaded executor.
If called from a LocalExecutor, the task is spawned on it. Otherwise, this method panics.
Note that there is no guarantee of when the spawned task is scheduled. The current task can continue its execution or be preempted by the newly spawned task immediately. See the documentation for the top-level Task for examples.
§Examples
use glommio::{LocalExecutor, Task};
let local_ex = LocalExecutor::default();
local_ex.run(async {
    let task = glommio::executor().spawn_local(async { 1 + 2 });
    assert_eq!(task.await, 3);
});
pub fn spawn_local_into<T>(
    &self,
    future: impl Future<Output = T> + 'static,
    handle: TaskQueueHandle,
) -> Result<Task<T>, ()>
where
    T: 'static,
Spawns a task onto the current single-threaded executor, in a particular task queue.
If called from a LocalExecutor, the task is spawned on it. Otherwise, this method panics.
Note that there is no guarantee of when the spawned task is scheduled. The current task can continue its execution or be preempted by the newly spawned task immediately. See the documentation for the top-level Task for examples.
§Examples
use glommio::{LocalExecutor, Shares};
let local_ex = LocalExecutor::default();
local_ex.run(async {
    let handle = glommio::executor().create_task_queue(
        Shares::default(),
        glommio::Latency::NotImportant,
        "test_queue",
    );
    let task = glommio::executor()
        .spawn_local_into(async { 1 + 2 }, handle)
        .expect("failed to spawn task");
    assert_eq!(task.await, 3);
});
pub unsafe fn spawn_scoped_local<'a, T>(
    &self,
    future: impl Future<Output = T> + 'a,
) -> ScopedTask<'a, T>
Spawns a task onto the current single-threaded executor.
If called from a LocalExecutor, the task is spawned on it. Otherwise, this method panics.
§Safety
ScopedTask depends on drop running or .await being called for safety. See the struct ScopedTask for details.
§Examples
use glommio::LocalExecutor;
let local_ex = LocalExecutor::default();
local_ex.run(async {
    let non_static = 2;
    let task = unsafe { glommio::executor().spawn_scoped_local(async { 1 + non_static }) };
    assert_eq!(task.await, 3);
});
pub unsafe fn spawn_scoped_local_into<'a, T>(
    &self,
    future: impl Future<Output = T> + 'a,
    handle: TaskQueueHandle,
) -> Result<ScopedTask<'a, T>, ()>
Spawns a task onto the current single-threaded executor, in a particular task queue.
If called from a LocalExecutor, the task is spawned on it. Otherwise, this method panics.
§Safety
ScopedTask depends on drop running or .await being called for safety. See the struct ScopedTask for details.
§Examples
use glommio::{LocalExecutor, Shares};
let local_ex = LocalExecutor::default();
local_ex.run(async {
    let handle = glommio::executor().create_task_queue(
        Shares::default(),
        glommio::Latency::NotImportant,
        "test_queue",
    );
    let non_static = 2;
    let task = unsafe {
        glommio::executor()
            .spawn_scoped_local_into(async { 1 + non_static }, handle)
            .expect("failed to spawn task")
    };
    assert_eq!(task.await, 3);
});
pub fn spawn_blocking<F, R>(&self, func: F) -> impl Future<Output = R>
Spawns a blocking task into a background thread where blocking is acceptable.
Glommio depends on cooperation from tasks in order to drive IO and meet latency requirements. Unyielding tasks are detrimental to the performance of the overall system, not just to the performance of the one stalling task.
spawn_blocking is there as a last resort when a blocking task needs to be executed and cannot be made cooperative. Examples are:
- Expensive syscalls that cannot use io_uring, such as mmap (especially with MAP_POPULATE)
- Calls to synchronous third-party code (compression, encoding, etc.)
§Note
This method is not meant to be a way to achieve compute parallelism. Distributing work across executors is the better way to achieve that.
§Examples
use glommio::{LocalExecutor, Task};
use std::time::Duration;
let local_ex = LocalExecutor::default();
local_ex.run(async {
    let task = glommio::executor()
        .spawn_blocking(|| {
            std::thread::sleep(Duration::from_millis(100));
        })
        .await;
});