Struct glommio::controllers::DeadlineQueue
pub struct DeadlineQueue<T> { /* fields omitted */ }
Glommio’s scheduler is based on Shares: the more shares, the more resources the task will receive.
There are situations however in which we don’t want shares to grow too high: for instance, a background process that is flushing a file to storage. If it were to run at full speed, it would rob us of precious resources that we’d rather dedicate to the rest of the application.
However, we don’t want it to run too slowly either, as it may never complete.
The “right amount” of shares is not even application-dependent: it is time-dependent! As the load of the system changes, what is “too high” or “too low” changes too.
The DeadlineQueue uses a feedback-loop controller, not unlike the ones in a car’s cruise control, to dynamically and automatically adjust shares so that the process is slowed down using as few resources as possible while still finishing before its deadline.
For example, you may want to flush a file and would like it to finish in 10 minutes because you have an agent that copies files every 10 minutes. You name it!
Controlling processes is tricky and you should keep some things in mind for best results:
- Control loops have a settling time. It takes a while for the system to stabilize, so this is better suited for long processes (where the deadline is much higher than the adjustment period).
- Control loops add overhead, so setting the adjustment period too low may not be the best way to make sure that the deadline is much higher than the adjustment period =)
- Control loops have dead time. In control theory, dead time is the time that passes between the application of a control decision and its effects being seen. In our case, the scheduler may not schedule us for a long time, especially if the shares are low.
- Control loops work better the faster and smoother the response is. Take flushing a file as an example: you may be moving data into the file’s internal buffers, but that data is not yet flushed to the media. When the data is finally flushed, a giant bubble is inserted into the control loop and the characteristics of the system change radically. Contrast that with O_DIRECT files, where writing to the file means writing to the media: smooth and fast feedback!
The moral of the story is:
- do not use this with buffered files or other buffered sinks where the real physical response is delayed
- do not set the adjustment period too low
- do not use this for very short-lived processes.
To calculate the speed of the process, the needs of all elements in the queue are considered.
Let’s say, for instance, that you queue items A, B and C, each with 10,000 units, finishing respectively in 1, 2 and 3 minutes. From the point of view of the system, all units from A and B need to be flushed before C can start, so that is taken into account: the system needs to set its speed to 10,000 units per minute so that the entire queue is flushed in 3 minutes.
Implementations
Creates a new DeadlineQueue with a given name and adjustment period.
Internally the DeadlineQueue spawns a new task queue with dynamic shares in which it will execute the tasks that were registered.
Examples
use glommio::{controllers::DeadlineQueue, LocalExecutor};
use std::time::Duration;
let ex = LocalExecutor::default();
ex.run(async {
    DeadlineQueue::<usize>::new("example", Duration::from_millis(250));
});
Pushes a new DeadlineSource into this queue.
Returns an io::Result wrapping the result of the operation.
Examples
use futures_lite::{future::ready, Future};
use glommio::{
    controllers::{DeadlineQueue, DeadlineSource},
    LocalExecutor,
};
use std::{io, pin::Pin, rc::Rc, time::Duration};

struct Example {}

impl DeadlineSource for Example {
    type Output = usize;

    fn expected_duration(&self) -> Duration {
        Duration::from_secs(1)
    }

    fn action(self: Rc<Self>) -> Pin<Box<dyn Future<Output = Self::Output> + 'static>> {
        Box::pin(ready(1))
    }

    fn total_units(&self) -> u64 {
        1
    }

    fn processed_units(&self) -> u64 {
        1
    }
}

let ex = LocalExecutor::default();
ex.run(async {
    let mut queue = DeadlineQueue::new("example", Duration::from_millis(250));
    let res = queue.push_work(Rc::new(Example {})).await;
    assert_eq!(res.unwrap(), 1);
});
Returns the TaskQueueHandle associated with this controller
Temporarily bumps the priority of this DeadlineQueue
The bump happens by making sure that the shares never fall below a minimum (250). If the output of the controller is already higher than that then this has no effect.
Disables the controller.
Instead, the process being controlled will now have static shares defined by the shares argument (between 1 and 1000).
Enables the controller.
This is a no-op if the controller was already enabled. If it had been manually disabled then it moves back to automatic mode.
Queries the controller for its status.
The possible statuses are defined by the ControllerStatus enum.
Trait Implementations
Auto Trait Implementations
impl<T> !RefUnwindSafe for DeadlineQueue<T>
impl<T> !Send for DeadlineQueue<T>
impl<T> !Sync for DeadlineQueue<T>
impl<T> !Unpin for DeadlineQueue<T>
impl<T> !UnwindSafe for DeadlineQueue<T>
Blanket Implementations
Mutably borrows from an owned value.
Instruments this type with the provided Span, returning an Instrumented wrapper.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.