Module lilos::exec

The async runtime executor, plus inter-task communication tools.

Note: for our purposes, a task is an independent top-level future managed by the executor polling loop. There is a fixed set of tasks, provided to the executor at startup. This is distinct from the casual use of “task” to mean a piece of code that runs concurrently with other code; we’ll use the term “concurrent process” for this. The fixed set of tasks managed by the scheduler can execute an arbitrary number of concurrent processes using operations like join and select.

§Starting the executor / operating system

The mechanism for “starting the OS” is run_tasks. That’s the right choice for most applications.

run_tasks is a wrapper around a fancier API, which you can use directly in special circumstances:

  • If you need faster interrupt response, consider allowing some interrupts to preempt task code using run_tasks_with_preemption.
  • If you need code to run when no other tasks are ready – which can be useful for putting the CPU into a low power state, or toggling a pin to signal CPU load on a logic analyzer – see run_tasks_with_idle.
  • Finally, if you want to turn on all the bells and whistles, you can use run_tasks_with_preemption_and_idle which combines the previous two.

§Interrupts, wait, and notify

So, you’ve given the OS an array of tasks that need to each be polled forever. The OS could simply poll every task in a big loop (a pattern known in embedded development as a “superloop”), but this has some problems:

  1. By constantly checking whether each task can make progress, we keep the CPU running full-tilt, burning power needlessly.

  2. Because any given task may have to wait for every other task to be polled before it gets control, the minimum response latency to events is increased, possibly by a lot.

We can do better.

There are, in practice, two reasons why a task might yield.

  1. Because it has more work to do immediately, but wants to leave room for other tasks to execute during a long-running operation. In this case, we actually do want to come right back and poll the task. (To do this, use yield_cpu.)

  2. Because it is waiting for an event – a particular timer tick, an interrupt from a peripheral, a signal from another task, etc. In this case, we don’t need to poll the task again until that event occurs.
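The first case can be illustrated with a yield-once future of the shape `yield_cpu` is described as returning. The sketch below is a simplified, host-runnable model, not lilos's actual implementation; the names `YieldOnce` and `noop_waker` here are illustrative:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A future that is Pending exactly once, then Ready -- the shape of
/// the future `yield_cpu` is documented to return.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Request an immediate re-poll: in a real executor this
            // sets the task's wake bit so it runs again promptly.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

/// A minimal waker that does nothing, similar in spirit to the
/// deprecated noop_waker listed at the bottom of this page.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = YieldOnce { yielded: false };
    let mut fut = Pin::new(&mut fut);
    // First poll yields; second poll completes.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(()));
    println!("ok");
}
```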

The OS tracks a wake bit per task. When this bit is set, it means that the task should be polled. Each time through the outer poll loop, the OS will determine which tasks have their wake bits set, clear the wake bits, and then poll the tasks.

(Tasks might be polled even when their bit isn’t set – this is a waste of energy, but is also something that Rust Futures are expected to tolerate. Giving the OS some slack on this dramatically simplifies the implementation. However, the OS tries to poll the smallest feasible set of tasks each time it polls.)
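The wake-bit bookkeeping can be sketched in ordinary std Rust. This is a simplified model of the mechanism described above, not lilos's actual code – the real executor polls real futures and runs on a bare-metal target:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// One wake bit per task, packed into a single word.
static WAKE_BITS: AtomicUsize = AtomicUsize::new(0);

/// Set a task's wake bit -- this is all a wakeup has to do.
fn wake_task_by_index(index: usize) {
    WAKE_BITS.fetch_or(1 << index, Ordering::SeqCst);
}

/// One iteration of the outer poll loop: atomically take the current
/// set of wake bits (clearing them), then poll only those tasks.
fn poll_iteration(polled: &mut Vec<usize>, task_count: usize) {
    let bits = WAKE_BITS.swap(0, Ordering::SeqCst);
    for index in 0..task_count {
        if bits & (1 << index) != 0 {
            polled.push(index); // stand-in for polling the task's future
        }
    }
}

fn main() {
    wake_task_by_index(0);
    wake_task_by_index(2);

    let mut polled = Vec::new();
    poll_iteration(&mut polled, 4);
    assert_eq!(polled, vec![0, 2]); // only the woken tasks are polled

    polled.clear();
    poll_iteration(&mut polled, 4);
    assert!(polled.is_empty()); // bits were cleared; nothing to do
    println!("ok");
}
```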

The need to set and check wake bits is embodied by the Notify type, which provides a kind of event broadcast. Tasks can subscribe to a Notify, and when it is signaled, all subscribed tasks get their wake bits set – so they will be polled at the next opportunity.

Notify is very low level – the more pleasant abstractions of spsc::Queue, mutex, and even sleep_until/sleep_for are built on top of it. However, Notify is the only OS facility that’s safe to use from interrupt service routines, making it an ideal way to wake tasks when hardware events occur. See the Notify docs for an example of using this to handle events from a UART.
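A Notify-style broadcast can be modeled on top of such wake bits: subscribing records the caller's bit, and signaling sets the bits of every subscriber at once. This is a hypothetical host-side sketch, not the real lilos type (which integrates with Wakers); note that it uses only atomic operations, which is what makes this style of primitive safe to call from an ISR:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Global wake bits, one per task, as in the executor description.
static WAKE_BITS: AtomicUsize = AtomicUsize::new(0);

/// A Notify-like event broadcast: remembers which tasks are
/// subscribed, and notifying sets all of their wake bits.
struct Notify {
    subscribers: AtomicUsize, // bitmask of subscribed task indices
}

impl Notify {
    fn new() -> Self {
        Notify { subscribers: AtomicUsize::new(0) }
    }

    /// A task subscribes by registering its wake-bit mask.
    fn subscribe(&self, task_index: usize) {
        self.subscribers.fetch_or(1 << task_index, Ordering::SeqCst);
    }

    /// Signal the event: every subscriber's wake bit gets set, so all
    /// of them will be polled at the next opportunity.
    fn notify(&self) {
        let subs = self.subscribers.swap(0, Ordering::SeqCst);
        WAKE_BITS.fetch_or(subs, Ordering::SeqCst);
    }
}

fn main() {
    let event = Notify::new();
    event.subscribe(1);
    event.subscribe(3);
    event.notify();
    let bits = WAKE_BITS.load(Ordering::SeqCst);
    assert_eq!(bits, (1 << 1) | (1 << 3)); // tasks 1 and 3 woken
    println!("ok");
}
```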

§Idle behavior

When no tasks have their wake bits set, the default behavior is to idle the processor using the WFI instruction. You can override this behavior by starting the scheduler with run_tasks_with_idle or (if you’re using preemption, described below) run_tasks_with_preemption_and_idle, which let you substitute a custom “idle hook” to execute when no tasks are ready.

Common uses for such an idle hook include toggling a pin to indicate CPU usage on a logic analyzer, entering a vendor-specific deep-sleep mode, or feeding a watchdog.
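The shape of a scheduler loop with an idle hook can be sketched as follows. This is a hypothetical host-side model; the real run_tasks_with_idle takes the hook as a closure, and the default hook executes WFI:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static WAKE_BITS: AtomicUsize = AtomicUsize::new(0);

/// One scheduler iteration: if any wake bits are set, "poll" those
/// tasks; otherwise call the idle hook. Returns the bits taken.
fn scheduler_step(idle_hook: &mut dyn FnMut()) -> usize {
    let bits = WAKE_BITS.swap(0, Ordering::SeqCst);
    if bits == 0 {
        idle_hook();
    }
    bits
}

fn main() {
    let mut idle_count = 0;
    let mut hook = || idle_count += 1;

    // No wake bits set: the hook runs instead of polling anything.
    // A real hook might toggle a GPIO, deep-sleep, or feed a watchdog.
    assert_eq!(scheduler_step(&mut hook), 0);
    assert_eq!(scheduler_step(&mut hook), 0);

    // Wake a task: the hook is skipped this iteration.
    WAKE_BITS.fetch_or(1 << 0, Ordering::SeqCst);
    assert_eq!(scheduler_step(&mut hook), 1);

    assert_eq!(idle_count, 2); // hook ran only while idle
    println!("ok");
}
```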

§Building your own task notification mechanism

If Notify doesn’t meet your needs, you can use the wake_task_by_index and wake_tasks_by_mask functions to explicitly wake one or more tasks. Because tasks are required to tolerate spurious wakeups, both of these functions are safe: spamming tasks with wakeup requests merely wastes energy and time.

Both of these functions expose the fact that the scheduler tracks wake bits in a single usize. When waking a task with index 0 (mask 1 << 0), we’re actually waking any task where index % 32 == 0 (on a 32-bit target, where usize is 32 bits wide). Very complex systems with more than 32 top-level tasks will thus experience more spurious wakeups. The advantage of this “lossy” technique is that wake-bit manipulation is very, very cheap, and can be done entirely with processor atomic operations.
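The wrapping is just a shift modulo the word width (the % 32 above corresponds to a 32-bit usize; in general it is index % usize::BITS). A small hypothetical helper, `wake_mask`, shows the aliasing:

```rust
/// Compute the wake-bit mask for a task index. Indices wrap at the
/// word width, so e.g. on a 32-bit target tasks 0 and 32 share a bit.
fn wake_mask(index: usize) -> usize {
    // Rotating 1 left by `index` is the same as 1 << (index % usize::BITS).
    1usize.rotate_left(index as u32)
}

fn main() {
    let bits = usize::BITS as usize;
    assert_eq!(wake_mask(0), 0b01);
    assert_eq!(wake_mask(1), 0b10);
    // An index one word-width away aliases back onto bit 0, so waking
    // it spuriously wakes every task sharing that bit.
    assert_eq!(wake_mask(bits), wake_mask(0));
    println!("ok");
}
```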

For an example of how to do this, read the source code for Notify – it’s written entirely in terms of public API, so if you want to do something similar that Notify itself doesn’t support, you can start by copying it.

§Adding preemption

By default, the scheduler does not preempt task code: task poll routines are run cooperatively, and ISRs are allowed only in between polls. This increases interrupt response latency, because if an event occurs while polling tasks, all polling must complete before the ISR is run. However, it makes the program much easier to reason about, because code is simply never preempted.

Applications can change this by starting the scheduler with run_tasks_with_preemption or run_tasks_with_preemption_and_idle. These entry points let you set a preemption policy, which allows ISRs above some priority level to preempt task code. (Tasks still cannot preempt one another.)

The more basic run_tasks operation is written in terms of run_tasks_with_preemption_and_idle, so if you would like to see how to convert your use of run_tasks to the more complex form, start by copying the code from run_tasks.

Structs§

  • Notify – A lightweight task notification scheme that can be used to safely route events from interrupt handlers to task code.
  • Internal future type used to implement Notify::until. This makes it much easier to recognize the future in a debugger.
  • Internal future type used to implement Notify::until_racy. This makes it much easier to recognize the future in a debugger.

Enums§

  • Selects an interrupt control strategy for the scheduler.

Constants§

  • Constant that can be passed to run_tasks and wake_tasks_by_mask to mean “all tasks.”

Traits§

Functions§

  • noop_waker (deprecated) – Returns a Waker that doesn’t do anything and costs nothing to clone. This is useful as a placeholder before a real Waker becomes available. You probably don’t need this unless you’re building your own wake lists.
  • run_tasks – Runs the given futures forever, sleeping when possible. Each future acts as a task, in the sense of core::task – that is, it is a top-level entity that can wake up separately from the other tasks.
  • run_tasks_with_idle – Extended version of run_tasks that replaces the default idle behavior (sleeping until the next interrupt) with code of your choosing.
  • run_tasks_with_preemption – Extended version of run_tasks that configures the scheduler with a custom interrupt policy.
  • run_tasks_with_preemption_and_idle – Extended version of run_tasks that configures the scheduler with a custom interrupt policy and idle hook. See run_tasks for more information about the basic behavior.
  • wake_task_by_index – Notifies the executor that the task with the given index should be polled on the next iteration.
  • wake_tasks_by_mask – Notifies the executor that any tasks whose wake bits are set in mask should be polled on the next iteration.
  • yield_cpu – Returns a future that will be pending exactly once before resolving.