Module lilos::exec


A system for polling an array of tasks forever, plus Notify and other scheduling tools.

Note: for our purposes, a task is an independent top-level future managed by the scheduler polling loop. There are a fixed set of tasks, provided to the scheduler at startup. This is distinct from the casual use of “task” to mean a piece of code that runs concurrently with other code; we’ll use the term “concurrent process” for this. The fixed set of tasks managed by the scheduler can execute an arbitrary number of concurrent processes.
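For example, a single task can host several concurrent processes by joining futures. A minimal sketch, assuming the futures crate is available and that receive_loop and transmit_loop are hypothetical async fns that never return:

```rust
use core::convert::Infallible;

// One top-level task hosting two concurrent processes.
async fn comms_task() -> Infallible {
    // receive_loop and transmit_loop are hypothetical async fns with
    // Output = Infallible; join drives both within this single task.
    futures::future::join(receive_loop(), transmit_loop()).await.0
}
```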

Scheduler entry point

The mechanism for “starting the OS” is run_tasks.
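A minimal startup sketch, assuming a Cortex-M target using cortex-m-rt; comms_task is from the sketch above, and heartbeat_task is another hypothetical task (a future that never resolves):

```rust
use core::pin::pin;

#[cortex_m_rt::entry]
fn main() -> ! {
    // Hardware and system timer setup elided.

    // Pin each task on main's stack, then hand the whole set to the
    // scheduler. run_tasks polls them forever and never returns.
    let heartbeat = pin!(heartbeat_task()); // hypothetical
    let comms = pin!(comms_task());         // hypothetical
    lilos::exec::run_tasks(
        &mut [heartbeat, comms],
        lilos::exec::ALL_TASKS, // start with every task's wake bit set
    )
}
```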

Time

(Note: as of 0.3 the timekeeping features are only compiled in if the systick feature is present, which it is by default. It turns out the operating system can still be quite useful without it!)

The executor uses the timekeeping provided by the time module to enable tasks to be woken at particular times. sleep_until produces a future that resolves at a particular time, while sleep_for expresses the time relative to the current time.
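For instance, the hypothetical heartbeat_task from the startup sketch above might blink an LED at 1 Hz; Millis comes from the time module, and toggle_led is a hypothetical hardware helper:

```rust
use core::convert::Infallible;
use lilos::time::Millis;

async fn heartbeat_task() -> Infallible {
    loop {
        toggle_led(); // hypothetical
        // This task's wake bit stays clear until 500 ms elapse; the
        // scheduler won't poll it again in the meantime.
        lilos::exec::sleep_for(Millis(500)).await;
    }
}
```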

Those functions can also be used to apply a timeout to any operation; see sleep_until for more details.
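One way to build such a timeout is to race the operation against a sleep; a sketch, assuming the futures crate's select_biased! and pin_mut! macros and a hypothetical uart_recv future:

```rust
use futures::{pin_mut, select_biased, FutureExt};
use lilos::time::Millis;

// Whichever future resolves first determines the result.
async fn recv_with_timeout() -> Option<u8> {
    let rx = uart_recv().fuse(); // hypothetical operation
    let deadline = lilos::exec::sleep_for(Millis(100)).fuse();
    pin_mut!(rx, deadline);
    select_biased! {
        b = rx => Some(b),   // operation completed in time
        _ = deadline => None, // timed out
    }
}
```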

For the common case of needing to do an operation periodically, consider every_until or PeriodicGate, which try to minimize jitter and drift.
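A sketch of a fixed-rate loop using PeriodicGate, assuming it can be constructed from a Millis period; take_sample is hypothetical:

```rust
use core::convert::Infallible;
use lilos::exec::PeriodicGate;
use lilos::time::Millis;

async fn sampler_task() -> Infallible {
    // Fires every 10 ms measured against absolute time, so the period
    // doesn't accumulate drift as the loop body's run time varies.
    let mut gate = PeriodicGate::from(Millis(10));
    loop {
        gate.next_time().await;
        take_sample(); // hypothetical
    }
}
```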

Interrupts, wait, and notify

So, you’ve given the OS an array of tasks that each need to be polled forever. The OS could simply poll every task in a big loop (a pattern known in embedded development as a “superloop”), but this has some problems:

  1. By constantly checking whether each task can make progress, we keep the CPU running full-tilt, burning power needlessly.

  2. Because any given task may have to wait for every other task to be polled before it gets control, the minimum response latency to events is increased, possibly by a lot.

We can do better.

There are, in practice, two reasons why a task might yield.

  1. Because it wants to leave room for other tasks to execute during a long-running operation. In this case, we actually do want to come right back and poll the task. (To do this, use yield_cpu; see the sketch after this list.)

  2. Because it is waiting for an event – a particular timer tick, an interrupt from a peripheral, a signal from another task, etc. In this case, we don’t need to poll the task again until that event occurs.
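Here’s a sketch of case 1, yielding between chunks of a long computation; checksum_update is hypothetical:

```rust
// A long-running computation that yields between chunks so other
// tasks can run; the scheduler polls this task again promptly.
async fn crunch(data: &[u8]) -> u32 {
    let mut acc = 0;
    for chunk in data.chunks(64) {
        acc = checksum_update(acc, chunk); // hypothetical
        lilos::exec::yield_cpu().await; // pend once, then resume
    }
    acc
}
```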

The OS tracks a wake bit per task. When this bit is set, it means that the task should be polled. Each time through the outer poll loop, the OS will determine which tasks have their wake bits set, clear the wake bits, and then poll the tasks.

(Tasks might be polled even when their bit isn’t set – this is a waste of energy, but is also something that Rust Futures are expected to tolerate. Giving the OS some slack on this dramatically simplifies the implementation. However, the OS tries to poll the smallest feasible set of tasks each time it polls.)

The need to set and check wake bits is embodied by the Notify type, which provides a kind of event broadcast. Tasks can subscribe to a Notify, and when it is signaled, all subscribed tasks get their wake bits set.

Notify is very low level – the more pleasant abstractions of spsc::Queue, mutex, and sleep_until/sleep_for are built on top of it. However, Notify is the only OS facility that’s safe to use from interrupt service routines, making it an ideal way to wake tasks when hardware events occur. See the Notify docs for an example of using this to handle events from a UART.
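As a sketch of that ISR-to-task pattern, assuming Notify’s until method (which re-checks a condition on each wakeup) and a hypothetical uart_try_read helper returning Option&lt;u8&gt;:

```rust
use lilos::exec::Notify;

// Shared between the ISR and task code.
static RX_READY: Notify = Notify::new();

// Called from the UART receive interrupt handler -- Notify is ISR-safe.
fn on_uart_rx_interrupt() {
    RX_READY.notify(); // sets subscribed tasks' wake bits
}

// Task side: sleep until data is available, re-checking on each wake.
async fn read_byte() -> u8 {
    // Resolves once uart_try_read (hypothetical) returns Some,
    // tolerating spurious wakeups along the way.
    RX_READY.until(uart_try_read).await
}
```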

Building your own task notification mechanism

If Notify doesn’t meet your needs, you can use the wake_task_by_index and wake_tasks_by_mask functions to explicitly wake one or more tasks. Because tasks are required to tolerate spurious wakeups, both of these functions are safe: spamming tasks with wakeup requests merely wastes energy and time.

Both of these functions expose the fact that the scheduler tracks wake bits in a single usize. When waking a task with index 0 (mask 1 << 0), we’re actually waking any task where index % 32 == 0 (on a 32-bit platform, where usize is 32 bits wide). Very complex systems with more than 32 top-level tasks will thus experience more spurious wakeups. The advantage of this “lossy” technique is that wake bit manipulation is very, very cheap, and can be done entirely with processor atomic operations.
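For example (a sketch):

```rust
// E.g. from an interrupt handler or a custom wake-list implementation:
fn wake_examples() {
    // Wake tasks 0 and 3 (and, modulo the wake-bit width, any aliases).
    lilos::exec::wake_tasks_by_mask((1 << 0) | (1 << 3));
    // Single-index equivalent; the index is taken modulo the bit width.
    lilos::exec::wake_task_by_index(3);
}
```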

Idle behavior

When no tasks have their wake bits set, the default behavior is to idle the processor using the WFI instruction. You can override this behavior by starting the scheduler with run_tasks_with_idle or (if you’re using preemption, below) run_tasks_with_preemption_and_idle, which let you substitute a custom “idle hook” to execute when no tasks are ready.

A common use for such an idle hook is to toggle a pin to indicate CPU usage on a logic analyzer, or feed a watchdog.
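A sketch combining both uses; worker_task, feed_watchdog, and set_idle_pin are hypothetical, and cortex_m::asm::wfi comes from the cortex-m crate:

```rust
use core::pin::pin;

fn start() -> ! {
    let worker = pin!(worker_task()); // hypothetical task
    lilos::exec::run_tasks_with_idle(
        &mut [worker],
        lilos::exec::ALL_TASKS,
        // Runs whenever no task has its wake bit set.
        || {
            feed_watchdog();      // hypothetical
            set_idle_pin(true);   // hypothetical: mark idle time
            cortex_m::asm::wfi(); // sleep until the next interrupt
            set_idle_pin(false);  // hypothetical
        },
    )
}
```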

Adding preemption

By default, the scheduler does not preempt task code: task poll routines are run cooperatively, and ISRs are allowed only in between polls. This increases interrupt response latency, because if an event occurs while polling tasks, all polling must complete before the ISR is run. However, it makes the program much easier to reason about, because code is simply never preempted.

Applications can change this by starting the scheduler with run_tasks_with_preemption or run_tasks_with_preemption_and_idle. These entry points let you set a preemption policy, which allows ISRs above some priority level to preempt task code. (Tasks still cannot preempt one another.)
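A sketch, assuming the Interrupts enum offers a Filtered variant taking a BASEPRI-style priority threshold; motor_control_task is hypothetical:

```rust
use core::pin::pin;

fn start_with_preemption() -> ! {
    let control = pin!(motor_control_task()); // hypothetical task
    lilos::exec::run_tasks_with_preemption(
        &mut [control],
        lilos::exec::ALL_TASKS,
        // ISRs more urgent than the threshold may preempt task code;
        // all others wait for polling to finish.
        lilos::exec::Interrupts::Filtered(64),
    )
}
```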

Structs

  • Notify – A lightweight task notification scheme that can be used to safely route events from interrupt handlers to task code.
  • PeriodicGate – Helper for doing something periodically, accurately.

Enums

  • Interrupts – Selects an interrupt control strategy for the scheduler.

Constants

  • ALL_TASKS – Constant that can be passed to run_tasks and wake_tasks_by_mask to mean “all tasks.”

Functions

  • every_until – Makes a future periodic, with a termination condition.
  • noop_waker – Returns a Waker that doesn’t do anything and costs nothing to clone. This is useful as a placeholder before a real Waker becomes available. You probably don’t need this unless you’re building your own wake lists.
  • run_tasks – Runs the given futures forever, sleeping when possible. Each future acts as a task, in the sense of core::task.
  • run_tasks_with_idle – Extended version of run_tasks that replaces the default idle behavior (sleeping until the next interrupt) with code of your choosing.
  • run_tasks_with_preemption – Extended version of run_tasks that configures the scheduler with a custom interrupt policy.
  • run_tasks_with_preemption_and_idle – Extended version of run_tasks that configures the scheduler with a custom interrupt policy and idle hook.
  • sleep_for – Sleeps until the system time has increased by d.
  • sleep_until – Sleeps until the system time is equal to or greater than deadline.
  • wake_task_by_index – Notifies the executor that the task with the given index should be polled on the next iteration.
  • wake_tasks_by_mask – Notifies the executor that any tasks whose wake bits are set in mask should be polled on the next iteration.
  • yield_cpu – Returns a future that will be pending exactly once before resolving.