Crate r3

R3
Real-Time Operating System

R3 is a proof-of-concept of a static RTOS that utilizes Rust's compile-time function evaluation mechanism for static configuration (creation of kernel objects and memory allocation).

  • All kernel objects are defined statically for faster boot times, compile-time checking, predictable execution, reduced RAM consumption, no runtime allocation failures, and extra security.
  • The kernel and its configurator don't require an external build tool or a specialized procedural macro, maintaining transparency.
  • The kernel is written in a target-independent way. The target-specific portion (called a port) is provided as a separate crate, which an application chooses and combines with the kernel using the trait system.
  • Leverages Rust's type safety for access control of kernel objects. Safe code can't access an object that it doesn't own.

Note to Application Developers

The implementation code heavily relies on constant propagation, dead code elimination, and “zero-cost” abstractions. Without optimization, it might exhibit massive code bloat and excessive stack consumption. To change the optimization level for debug builds, add the following lines to your Cargo workspace's Cargo.toml:

[profile.dev]
opt-level = 2

Configuring the Kernel

Trait-based Composition

The R3 RTOS utilizes Rust's trait system to allow system designers to construct a system in a modular way.

An application crate uses the following macros to realize each part of the system:

  • A port-provided macro like r3_xxx_port::use_port! (named in this way by convention) instantiates port-specific items.
  • r3::build! instantiates the kernel and kernel-private static data based on the kernel configuration supplied in the form of a configuration function.
This example is not tested
r3_port_std::use_port!(unsafe struct System);

struct Objects { /* ... */ }
const fn configure_app(_: &mut CfgBuilder<System>) -> Objects { /* ... */ }

const COTTAGE: Objects = r3::build!(System, configure_app => Objects);

These macros generate various impls interconnected under a complex relationship, with an ultimate goal of building a working system.

(Diagram “kernel-traits”: the traits implemented by use_port! and build! and how they interconnect)

use_port! → System Type

The composition process revolves around an application-defined type called a system type. The first thing to do is to define a system type. It could be defined directly, but instead, it's defined by use_port! purely for convenience. A system type is named System by convention.

This example is not tested
r3_xxx_port::use_port!(unsafe struct System);

// ----- The above macro invocation expands to: -----
struct System;

use_port! → impl Port

The first important role of use_port! is to implement the trait Port on the system type. Port describes the properties of the target hardware and provides target-dependent low-level functions such as a context switcher. use_port! can define static items to store internal state data (this would be inconvenient and messy without a macro).

Port is actually a group of several supertraits (such as PortThreading), each of which can be implemented in a separate location.

This example is not tested
r3_xxx_port::use_port!(unsafe struct System);

// ----- The above macro invocation also produces: -----
unsafe impl r3::kernel::PortThreading for System { /* ... */ }
unsafe impl r3::kernel::PortInterrupts for System { /* ... */ }
unsafe impl r3::kernel::PortTimer for System { /* ... */ }

// `Port` gets implemented automatically when
// all required supertraits are implemented.

The job of use_port! doesn't end here, but before we move on, we must first explain what build! does.

build! → impl KernelCfgN

build! assembles a database of statically defined kernel objects using a supplied configuration function. Using this database, it does things such as determining the optimal data type to represent all allowed task priority values and defining static items to store kernel-private data structures such as task control blocks. The result is attached to a supplied system type by implementing KernelCfg1 and KernelCfg2 on it.

This example is not tested
static COTTAGE: Objects = r3::build!(System, configure_app => Objects);

// ----- The above macro invocation produces: -----
static COTTAGE: Objects = {
    use r3::kernel::TaskCb;

    const CFG: /* ... */ = {
        let mut cfg = r3::kernel::cfg::CfgBuilder::new();
        configure_app(&mut cfg);
        cfg
    };

    static TASK_CB_POOL: [TaskCb<System>; _] = /* ... */;

    // Things needed by both `Port` and `KernelCfg2` should live in
    // `KernelCfg1` because `Port` cannot refer to an associated item defined
    // by `KernelCfg2`.
    unsafe impl r3::kernel::KernelCfg1 for System {
        type TaskPriority = /* ... */;
    }

    // Things dependent on data types defined by `Port` should live in
    // `KernelCfg2`.
    unsafe impl r3::kernel::KernelCfg2 for System {
        fn task_cb_pool() -> &'static [TaskCb<System>] {
            &TASK_CB_POOL
        }
        /* ... */
    }

    // Make the generated object IDs available to the application
    configure_app(&mut r3::kernel::cfg::CfgBuilder::new())
};

impl Kernel

The traits introduced so far are enough to instantiate the target-independent portion of the RTOS kernel. To reflect this, Kernel and PortToKernel are automatically implemented on the system type by a blanket impl.

This example is not tested
impl<System: Port + KernelCfg1 + KernelCfg2> Kernel for System { /* ... */ }
impl<System: Kernel> PortToKernel for System { /* ... */ }

use_port! → Entry Points

The remaining task of use_port! is to generate entry points to the kernel. The most important one is for booting the kernel. The other ones are interrupt handlers.

This example is not tested
r3_xxx_port::use_port!(unsafe struct System);

// ----- The above macro invocation lastly produces: -----
fn main() {
    <System as r3::kernel::PortToKernel>::boot();
}

Static Configuration

Kernel objects are created in a configuration function having the signature const fn (&mut CfgBuilder) -> T (+ optional self and trailing extra parameters). The code generated by build! calls the supplied top-level configuration function (at compile time) to collect information such as a set of kernel objects that need to be instantiated. This information is used to implement KernelCfg1 and KernelCfg2 on a given system type. At the same time, this process also produces handles to the defined kernel objects (such as Task), which can be returned from a configuration function directly or packaged in a user-defined container type. build! returns the evaluation result of the top-level configuration function. By storing this in a const variable, the application code can access the kernel objects defined in the configuration function.

This example is not tested
r3_port_std::use_port!(unsafe struct System);

struct Objects { task: Task<System> }

const COTTAGE: Objects = r3::build!(System, configure_app => Objects);

// This is the top-level configuration function
const fn configure_app<System: Kernel>(b: &mut CfgBuilder<System>) -> Objects {
    b.num_task_priority_levels(4);
    let task = Task::build()
        .start(task_body).priority(3).active(true).finish(b);
    Objects { task }
}

fn task_body(_: usize) {
    assert_eq!(COTTAGE.task, Task::current().unwrap());
}

Configuration functions are highly composable as they can call other configuration functions in turn. In some sense, this is a way to attribute a certain semantics to a group of kernel objects, making them behave in a meaningful way as a whole, and expose a whole new, higher-level interface. For example, a mutex object similar to std::sync::Mutex can be created by combining kernel::Mutex<System> (a low-level mutex object) and a hunk::Hunk<System, UnsafeCell<T>> (a typed hunk), which in turn is built on top of kernel::Hunk<System> (a low-level untyped hunk).

// Top-level configuration function
const fn configure_app<System: Kernel>(b: &mut CfgBuilder<System>) -> Objects<System> {
    b.num_task_priority_levels(4);
    let my_module = m::configure(b);
    Objects { my_module }
}

mod m {
    pub const fn configure<System: Kernel>(b: &mut CfgBuilder<System>) -> MyModule<System> {
        let task = Task::build()
            .start(task_body).priority(3).active(true).finish(b);
        MyModule { task }
    }

    fn task_body(_: usize) {}
}

The constructors of kernel objects are configuration functions by themselves, but they are different from normal configuration functions in that they can actually mutate the contents of CfgBuilder (which build! will use to create kernel structures in the final form), ultimately shaping the outcome of the configuration process. Therefore, they are the smallest building blocks of configuration functions.

System States

At any point in time, a system can be in any combination of the system states described in this section.

CPU Lock disables all managed interrupts and dispatching. On a uniprocessor system (which this kernel targets), this is a convenient way to create a critical section to protect a shared resource from concurrent accesses. Most system services are unavailable when CPU Lock is active and will return BadContext. Application code can use acquire_cpu_lock to activate CPU Lock.

Like a lock guard of a mutex, CPU Lock can be thought of as something to be “owned” by a current thread. This conception allows it to be seamlessly integrated with Rust's vocabulary and mental model around the ownership model.

Priority Boost temporarily raises the effective priority of the current task to higher than any values possible in normal circumstances. Priority Boost can only be activated or deactivated in a task context. Potentially blocking system services are disallowed when Priority Boost is active and will return BadContext. Application code can use boost_priority to activate Priority Boost.
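
The sketch below shows how application code might enter and leave these two states. The names release_cpu_lock and unboost_priority and the exact signatures are assumptions made for illustration; consult the Kernel trait for the authoritative API.

This example is not tested
use r3::kernel::Kernel;

fn with_critical_section<System: Kernel>() {
    // Activate CPU Lock to protect a shared resource from concurrent
    // accesses. This fails (e.g., with `BadContext`) if it's already active.
    System::acquire_cpu_lock().unwrap();
    // ... access the shared resource ...
    // Deactivating CPU Lock is `unsafe` because it can prematurely end a
    // critical section established by other code.
    unsafe { System::release_cpu_lock().unwrap() };
}

fn with_priority_boost<System: Kernel>() {
    // Activate Priority Boost (task context only). Potentially blocking
    // system services return `BadContext` while it's active.
    System::boost_priority().unwrap();
    // ... do work that must not be preempted by other tasks ...
    unsafe { System::unboost_priority().unwrap() };
}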

Relation to Other Specifications: Inspired by the μITRON4.0 specification. CPU Lock and Priority Boost correspond to a CPU locked state and a dispatching state from μITRON4.0, respectively. In contrast to that specification, both concepts are denoted by proper nouns in the R3 RTOS. This means phrases like “when the CPU is locked” are not allowed.

CPU Lock corresponds to SuspendOSInterrupts and ResumeOSInterrupts from the OSEK/VDX specification.

Threads

An (execution) thread is a sequence of instructions executed by a processor. There can be multiple threads existing at the same time, and the kernel is responsible for deciding which thread to run at any point on a processor (this process is called scheduling). The location in a program where a thread starts execution is called the thread's entry point function. A thread exits when it returns from its entry point function¹ or calls exit_task (valid only for tasks).

¹ More precisely, a thread starts execution with a hypothetical function call to the entry point function, and it exits when it returns from this hypothetical function call.

The properties of threads such as how and when they are created and whether they can block or not are specific to each thread type.

The initial thread that starts up the kernel (by calling PortToKernel::boot) is called the main thread. This is where the initialization of kernel structures takes place. Additionally, an application can register one or more startup hooks to execute user code here. Startup hooks execute with CPU Lock active and should never deactivate CPU Lock. The main thread exits when the kernel requests the port to dispatch the first task.
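
For illustration, a startup hook might be registered in a configuration function as follows. The builder-style constructor shown here mirrors Task::build() and is an assumption; check StartupHook for the exact API.

This example is not tested
use r3::kernel::{cfg::CfgBuilder, Kernel, StartupHook};

const fn configure_hooks<System: Kernel>(b: &mut CfgBuilder<System>) {
    // The hook runs on the main thread during boot, with CPU Lock active.
    StartupHook::build().start(init_hook).finish(b);
}

fn init_hook(_: usize) {
    // Perform early initialization here. This code must not deactivate
    // CPU Lock.
}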

A first-level interrupt handler starts execution in its own thread in response to asynchronous external events (interrupts). This type of thread always runs to completion but can be preempted by other interrupt handlers. No blocking system calls are allowed in an interrupt handler. A first-level interrupt handler calls the associated application-provided second-level interrupt handlers (InterruptHandler) as well as the callback functions of timers (Timer) through a port timer driver and the kernel timing core.

A task (Task) is the kernel object that can create a thread whose execution is controlled by application code. Each task encapsulates a variety of state data necessary for the execution and scheduling of the associated thread, such as a stack region to store local variables and activation frames, the current priority, the parking state of the task, and a memory region used to save the state of CPU registers when the task is blocked or preempted. The associated thread can be started by activating that task. A task-based thread can make blocking system calls, which will temporarily block the execution of the thread until certain conditions are met. Task-based threads can be preempted by any kind of thread.
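
As an illustration of task-based threads, the following sketch activates a task and parks the current task. The method names activate and park follow the vocabulary used above; treat the exact signatures as assumptions.

This example is not tested
use r3::kernel::{Kernel, Task};

fn start_worker<System: Kernel>(worker: Task<System>) {
    // Start the thread associated with `worker`. The task must have been
    // defined in the configuration function.
    worker.activate().unwrap();
}

fn worker_body<System: Kernel>() {
    loop {
        // Block (park) the current task until another thread calls `unpark`
        // on it. This is a blocking system call, so it's only allowed in a
        // waitable context.
        System::park().unwrap();
        // ... handle one unit of work ...
    }
}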

Relation to Other Specifications: Not many kernel designs use the word “thread” to describe the concept that applies to both interrupts and tasks (one notable exception being TI-RTOS), most likely because threads are used to refer to a specific concept in general-purpose operating systems, or they are simply considered synonymous with tasks. For example, the closest concept in the μITRON4.0 specification is processing units. Despite that, it was decided that “thread” was an appropriate term to refer to this concept. The primary factors that drove this decision include: (1) the need for a conceptual entity that can “own” locks, and (2) the importance of this concept for discussing thread safety without substituting every mention of “thread” with “task or interrupt handler”.

Contexts

A context is a general term that is often used to describe the “environment” a function executes in. Terms like a task context are used to specify the type of thread a calling thread is expected to be. The following list shows the terms we use to describe contexts throughout this kernel's documentation:

  • Being in a task context means the current thread pertains to a task.
  • Being in an interrupt context means the current thread pertains to an interrupt handler.
  • Being in a boot context means the current thread is the main thread. Startup hooks allow user code to execute in this context.
  • Being in a waitable context means that the current context is a task context and Priority Boost is inactive.

Relation to Other Specifications: The μITRON4.0 specification, the AUTOSAR OS specification, and RTEMS's user manuals use the term “context” in a similar way.

Interrupt Handling Framework

A port may support managing interrupt lines and interrupt handlers through an interface defined by the kernel. When it's supported, an application can use this facility to configure interrupt lines and attach interrupt handlers. Whether this facility is supported at all is up to each port.

The benefits of providing a standardized interface for interrupts include: (1) increased portability of applications and libraries across target platforms, (2) well-defined semantics of system calls inside an interrupt handler, and (3) decoupling hardware driver components on a system with a non-vectorized interrupt controller or multiplexed interrupt lines. The downsides include: (1) obscuring non-standard hardware features, (2) interference with other ways of managing interrupts (e.g., board support packages, IDEs), and (3) an additional layer of abstraction that makes the system mechanism less transparent.

Port Implementation Note: System calls can provide well-defined semantics inside an interrupt handler only if the port adheres to this interrupt handling framework. If a port developer chooses not to follow it, they are responsible for properly explaining the interaction between interrupts and the kernel.

An interrupt request is delivered to a processor by sending a hardware signal to an interrupt controller through an interrupt line. It's possible that more than one interrupt source is connected to a single interrupt line. Upon receiving an interrupt request, the interrupt controller translates the interrupt line to an interrupt number and transfers the control to the first-level interrupt handler associated with that interrupt number.

Each interrupt line has configurable attributes such as an interrupt priority. An application can instruct the kernel to configure them at boot time by CfgInterruptLineBuilder or at runtime by InterruptLine. The interpretation of interrupt priority values is up to a port, but they are usually used to define precedence among interrupt lines in some way, such as favoring one over another when multiple interrupt requests are received at the same time or allowing a higher-priority interrupt handler to preempt another.

The kernel occasionally disables interrupts by activating CPU Lock. The additional interrupt latency introduced by this can pose a problem for time-sensitive applications. To resolve this problem, a port may implement CPU Lock in a way that doesn't disable interrupt lines with a certain priority value and higher. Such priority values and the first-/second-level interrupt handlers for such interrupt lines are said to be unmanaged. The behavior of system calls inside unmanaged interrupt handlers is undefined. Interrupt handlers that aren't unmanaged are said to be managed.

An application can register one or more (second-level) interrupt handlers to an interrupt number. They execute in a serial fashion inside a first-level interrupt handler for the interrupt number. The static configuration system automatically combines multiple second-level interrupt handlers into one (thus taking care of the “execute in a serial fashion” part). It's up to a port to generate a first-level interrupt handler that executes in an appropriate situation, takes care of low-level tasks such as saving and restoring registers, and calls the (combined) second-level interrupt handler.
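
The following sketch shows how an application might configure an interrupt line and attach a second-level interrupt handler in a configuration function. The interrupt number and priority value are placeholders, and the builder methods shown here are assumptions to be checked against InterruptLine and InterruptHandler.

This example is not tested
use r3::kernel::{cfg::CfgBuilder, InterruptHandler, InterruptLine, Kernel};

// Placeholder interrupt number; the valid range is defined by the port.
const UART_INTERRUPT: r3::kernel::InterruptNum = 16;

const fn configure_interrupts<System: Kernel>(b: &mut CfgBuilder<System>) {
    // Configure the interrupt line: assign a priority within the managed
    // range and enable it at boot time.
    InterruptLine::build()
        .line(UART_INTERRUPT)
        .priority(1)
        .enabled(true)
        .finish(b);

    // Attach a second-level interrupt handler to the same line. The static
    // configuration system combines multiple handlers for one line into a
    // single function called by the first-level interrupt handler.
    InterruptHandler::build()
        .line(UART_INTERRUPT)
        .start(uart_handler)
        .finish(b);
}

fn uart_handler(_: usize) {
    // Runs with CPU Lock inactive; blocking system calls are not allowed.
}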

Interrupt handlers execute with CPU Lock inactive and may return with CPU Lock either active or inactive. Some system calls are not allowed in this context and will return BadContext.

The behavior of system calls is undefined inside an unmanaged interrupt handler. The property of being protected from programming errors caused by making system calls inside an unmanaged interrupt handler is called unmanaged safety. Most system services are not marked as unsafe, so in order to ensure unmanaged safety, safe code shouldn't be allowed to register an interrupt handler that potentially executes as an unmanaged interrupt handler. On the other hand, the number of unsafe blocks in application code should be minimized in common use cases. To meet this goal, this framework employs several safeguards: (1) Interrupt handlers can be explicitly marked as unmanaged-safe (safe to use as an unmanaged interrupt handler), but this requires an unsafe block. (2) An interrupt line must be initialized with a priority value that falls within a managed range if it has a non-unmanaged-safe interrupt handler. (3) When changing the priority of an interrupt line, the new priority must be in a managed range. It's possible to bypass this check, but this requires an unsafe block.

Relation to Other Specifications: The division between managed and unmanaged interrupt handlers can be seen in FreeRTOS (some ports), μITRON4.0, and OSEK/VDX. The method of leveraging Rust's unsafe system to ensure unmanaged safety is obviously Rust-specific and novel.

Interrupt handlers and interrupt service routines (terms from μITRON4.0) have been renamed to first-level interrupt handlers and (second-level) interrupt handlers, respectively, because “interrupt service routine” was way too long to type and abbreviating it would result in a set of type names which is either excessively inconsistent (InterruptLine, Irq) or bizarre (InterruptLine, InterruptRq). Removing the term “interrupt service routine” should also remove a source of confusion because interrupt handlers and interrupt service routines are often regarded as synonymous with each other (as evident in the Wikipedia article on interrupt handler), whereas there is a clear sequential relationship between first-level and second-level.

Kernel Timing

The R3 RTOS provides a timing system to enable tracking timed events such as wait operations with timeout.

The kernel uses microseconds as the system time unit. A span of time (Duration) is represented by a 32-bit signed integer (the negative part is only used by the clock adjustment API), which can hold up to 35′47.483647″.

The system clock is a feature of the kernel that manages and exposes a global system time (Time), which is represented by a 64-bit integer. The system time starts at zero, thus behaving like uptime, but it can be updated by an application to represent a real calendar time. The method set_time updates the global system time with a new value.

Another way to update the system time is to move it forward or back by a specified delta by calling adjust_time. This update method preserves the absolute (w.r.t. the system time) arrival times of existing timed events. This means that if you have an event scheduled to occur in 10 seconds and you move the system time forward by 2 seconds, the event is now scheduled to occur in 8 seconds.
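
A sketch of updating the system time from a task follows. Manipulating the system time presumably requires the system_time kernel feature (see Kernel Features below), and the Time and Duration constructor names are assumptions.

This example is not tested
use r3::{kernel::Kernel, time::{Duration, Time}};

fn calibrate_clock<System: Kernel>() {
    // Overwrite the global system time, e.g., with a calendar time obtained
    // from an external RTC.
    System::set_time(Time::from_millis(1_600_000_000_000)).unwrap();

    // Move the system time forward by two seconds to compensate for clock
    // drift. Existing timed events keep their absolute arrival times, so an
    // event previously due in 10 seconds is now due in 8 seconds.
    System::adjust_time(Duration::from_secs(2)).unwrap();
}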

The kernel timing is driven by a port timer driver, which is part of a port. The kernel and the driver communicate through the traits PortTimer (kernel → driver) and PortToKernel (driver → kernel).

The kernel expects that timer interrupts are handled in a timely manner. The resilience against overdue timer interrupts is limited by two factors: (1) PortTimer::MAX_TICK_COUNT - PortTimer::MAX_TIMEOUT, representing the headroom of port-timer timeouts below the timer counter's representable range. Violating this will cause the kernel to lose track of time. (2) TIME_HARD_HEADROOM, representing how overdue timed events can be before the internal representation of their arrival times wraps around and the timing algorithm starts exhibiting incorrect behavior. The application is responsible for ensuring these limitations are not exceeded, e.g., by avoiding holding CPU Lock for a prolonged period of time.

Relation to Other Specifications

There are many major design choices when it comes to kernel timing and timed APIs, and they vary considerably across operating system and kernel specifications. The following list shows some of them:

  1. What time unit does the application-facing API use? In embedded operating systems, it's very common to expose internal ticks and provide C macros to convert real time values into ticks. The conversion is prone to unexpected rounding and integer overflow, and this sort of error easily goes unnoticed. (For instance, pdMS_TO_TICKS from FreeRTOS uses uint32_t for intermediate value calculation, and gcc won't report overflow even with -Wall because uint32_t is defined to exhibit a wrap-around behavior. z_tmcvt from Zephyr doesn't detect overflow.) Even if it could be detected statically (which is never true for dynamically calculated values), the range of real time values that doesn't cause overflow varies between target systems, harming the portability of software components written for the operating system (how often this causes a problem is debatable).

    Specifications emphasizing portability have often adopted real time values. The conversion to an internal tick value still happens, but because it's done at a lower level, it's easy to handle out-of-range cases gracefully, e.g., by dividing a long delay request into shorter ones. The downside of this approach is that it's hard to avoid the runtime overhead of the conversion process. Supporting fractional times would require a special treatment in this approach. However, as for the conversion overhead, it can be avoided by matching internal ticks to real time values (see item 2).

    The overflow issue can be assuaged by using 64-bit time values.

  2. Who specifies the frequency of internal ticks? Is it configurable? Or is it bound to real time values? It's often tied to a hardware timer's input clock frequency (most tickless kernels) or period (tickful kernels such as an old version of Linux).

    The tick frequency could be fixed at a predetermined value having a simple ratio to a real time unit, such as 1kHz. This can reduce or eliminate the time conversion cost, but it can be tricky to get a hardware timer to operate at a desired frequency in some clock configurations.

  3. Are timed events associated with absolute time values that can be adjusted globally? (I.e., the current absolute time can be adjusted at runtime, and that affects the relative arrival times of all existing timed events.) This can be useful for compensating for clock drift, but breaks monotonicity.

    This functionality is tricky to implement because an embedded operating system often only tracks some LSBs of an absolute time value (in other words, using modulo-2ⁿ arithmetic), and the interpretation of a value could change if the system time was changed inadvertently. A specific example might further clarify the problem: Let's say you have an alarm scheduled to fire in 49 days. You request the system to move the system time backward by 1 day. The alarm would now be scheduled to fire in 50 days, which can't be represented by a 32-bit integer with millisecond precision, so the request should be rejected.

  4. Is the timer rate variable? This is another way to compensate for clock drift. This is usually done by kernel software as the hardware might not provide sufficient adjustment granularity.

    If this feature is supported, it would be pointless for item 1 to be “ticks” because when this feature is in use, the ticks aren't actual hardware ticks anymore.

  5. How many bits does a relative time value have? Sixty-four bits would be sufficient to represent any practically meaningful time interval (as far as an earthling civilization is concerned), but that would be overkill for most use cases. Present-day embedded operating systems mainly target 16- or 32-bit microcontrollers with a tight memory constraint. This means unnecessarily wide data types (especially those wider than 32 bits) should be avoided.

    Thirty-two bits can be restrictive, but it's trivial for an application to work around. It could be made configurable to accommodate the rare use cases where 64-bit time intervals are needed, but then 64-bit arithmetic would increase the processing cost of all timed events, not just the ones needing 64 bits.

  6. How many bits does the operating system use to track the arrival time for a timed event? This should be greater than or equal to the value of item 5. This interacts with item 3.

Based on these considerations, we decided to take the TOPPERS 3rd generation kernel specification as a model for the kernel timing system, with a few changes.

The following table summarizes the choices made by each specification, including ours (columns 1–6 correspond to the numbered items above):

Specification     | 1                       | 2                   | 3     | 4   | 5                | 6
------------------|-------------------------|---------------------|-------|-----|------------------|--------------
AUTOSAR OS 4.3.1  | ticks                   | N/A                 | ?     | ?   | ?                | N/A
FreeRTOS V10      | ticks                   | port-specific       | no    | no  | 16 or 32 bits    | 16 or 32 bits
OSEK/VDX 2.2.3    | ticks                   | N/A                 | no    | no  | unspecified bits | N/A
POSIX             | nanoseconds             | N/A                 | mixed | no  | ≥ 62 bits        | N/A
RTEMS Classic     | ticks                   | user-configurable   | yes   | no  | 32 bits          | ?
TI-RTOS           | ticks                   | user-configurable   | no    | no  | ?                | ?
TOPPERS New Gen.  | milliseconds            | fixed, milliseconds | no    | no  | 16 bits          | 32 bits
TOPPERS 3rd Gen.  | microseconds            | fixed, microseconds | yes   | yes | 32 bits          | 32 bits
Zephyr 2.3        | ticks                   | user-configurable   | no    | no  | 32 or 64 bits    | 32 or 64 bits
μITRON4.0         | unspecified             | N/A                 | no    | no  | unspecified bits | N/A
μT-Kernel 3.0     | milli- or micro-seconds | ?                   | no    | no  | 32 or 64 bits    | ?
R3                | microseconds            | fixed, microseconds | yes   | no  | 31 bits          | 32 bits

Rationale: As explained in Relation to Other Specifications, using real time values at an API boundary abstracts away the underlying hardware and leads to better portability of software written for the operating system. The microsecond precision should be practically sufficient to deal with non-integral timing requirements.

Arrival times are tracked by 32-bit modulo-2³² integers. Relative timestamps are limited to 31 bits (excluding the sign bit) to ensure plenty of headroom is always available for global time adjustment. Setting the upper bound in terms of binary digits also lowers the processing cost marginally and avoids the use of an arbitrarily chosen number.

Introspection

The entire kernel state can be dumped for inspection by applying debug formatting ({:?}, Debug) on the object returned by Kernel::debug. Note that this might consume a large amount of stack space.
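
For instance, the kernel state might be dumped like this (using the log crate for output is an assumption; any core::fmt sink works):

This example is not tested
use r3::kernel::Kernel;

fn dump_kernel_state<System: Kernel>() {
    // `System::debug()` returns an opaque object whose `Debug` implementation
    // traverses the entire kernel state. Formatting it may consume a large
    // amount of stack space.
    log::trace!("{:?}", System::debug());
}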

An example output of the kernel debug printing. This was captured during the mutex_protect_priority_by_ceiling test case.
Kernel {
    state: State {
        running_task: CpuLockCell(Some(
            TaskCb {
                self: 0x0000000108686058,
                port_task_state: TaskState {
                    tsm: TryLock {
                        value: Running(
                            ThreadId(
                                PoolPtr(
                                    4,
                                ),
                            ),
                        ),
                    },
                },
                attr: TaskAttr {
                    entry_point: 0x00000001082d8d10,
                    entry_param: 0,
                    stack: StackHunk(
                        0x0000000108691a08,
                    ),
                    priority: 1,
                },
                base_priority: CpuLockCell(1),
                effective_priority: CpuLockCell(1),
                st: CpuLockCell(Running),
                link: CpuLockCell(None),
                wait: TaskWait {
                    current_wait: CpuLockCell(None),
                    wait_result: CpuLockCell(Ok(
                        (),
                    )),
                },
                last_mutex_held: CpuLockCell(None),
                park_token: CpuLockCell(false),
            },
        )),
        task_ready_bitmap: CpuLockCell([]),
        task_ready_queue: CpuLockCell([
            ListHead(None),
            ListHead(None),
            ListHead(None),
            ListHead(None),
        ]),
        priority_boost: false,
        timeout: TimeoutGlobals {
            last_tick_count: CpuLockCell(12648432),
            last_tick_time: CpuLockCell(0),
            last_tick_sys_time: CpuLockCell(0),
            frontier_gap: CpuLockCell(0),
            heap: CpuLockCell([
                TimeoutRef(
                    0x0000700009988bf8,
                ),
            ]),
            handle_tick_in_progress: CpuLockCell(false),
        },
    },
    task_cb_pool: {
        0: TaskCb {
            self: 0x0000000108686000,
            port_task_state: TaskState {
                tsm: TryLock {
                    value: Running(
                        ThreadId(
                            PoolPtr(
                                3,
                            ),
                        ),
                    ),
                },
            },
            attr: TaskAttr {
                entry_point: 0x00000001082d8bc0,
                entry_param: 0,
                stack: StackHunk(
                    0x0000000108691608,
                ),
                priority: 0,
            },
            base_priority: CpuLockCell(0),
            effective_priority: CpuLockCell(0),
            st: CpuLockCell(Waiting),
            link: CpuLockCell(None),
            wait: TaskWait {
                current_wait: CpuLockCell(Some(
                    Mutex(0x7000097860d0),
                )),
                wait_result: CpuLockCell(Ok(
                    (),
                )),
            },
            last_mutex_held: CpuLockCell(None),
            park_token: CpuLockCell(false),
        },
        1: TaskCb {
            self: 0x0000000108686058,
            port_task_state: TaskState {
                tsm: TryLock {
                    value: Running(
                        ThreadId(
                            PoolPtr(
                                4,
                            ),
                        ),
                    ),
                },
            },
            attr: TaskAttr {
                entry_point: 0x00000001082d8d10,
                entry_param: 0,
                stack: StackHunk(
                    0x0000000108691a08,
                ),
                priority: 1,
            },
            base_priority: CpuLockCell(1),
            effective_priority: CpuLockCell(1),
            st: CpuLockCell(Running),
            link: CpuLockCell(None),
            wait: TaskWait {
                current_wait: CpuLockCell(None),
                wait_result: CpuLockCell(Ok(
                    (),
                )),
            },
            last_mutex_held: CpuLockCell(None),
            park_token: CpuLockCell(false),
        },
        2: TaskCb {
            self: 0x00000001086860b0,
            port_task_state: TaskState {
                tsm: TryLock {
                    value: Running(
                        ThreadId(
                            PoolPtr(
                                1,
                            ),
                        ),
                    ),
                },
            },
            attr: TaskAttr {
                entry_point: 0x00000001082d8f20,
                entry_param: 0,
                stack: StackHunk(
                    0x0000000108691e08,
                ),
                priority: 2,
            },
            base_priority: CpuLockCell(2),
            effective_priority: CpuLockCell(0),
            st: CpuLockCell(Waiting),
            link: CpuLockCell(None),
            wait: TaskWait {
                current_wait: CpuLockCell(Some(
                    Sleep,
                )),
                wait_result: CpuLockCell(Ok(
                    (),
                )),
            },
            last_mutex_held: CpuLockCell(Some(
                0x0000000108686108,
            )),
            park_token: CpuLockCell(false),
        },
    },
    event_group_cb_pool: {},
    mutex_cb_pool: {
        0: MutexCb {
            self: 0x0000000108686108,
            ceiling: Some(
                0,
            ),
            inconsistent: CpuLockCell(false),
            wait_queue: WaitQueue {
                waits: [
                    { task: 0x108686000, payload: Mutex(0x7000097860d0) },
                ],
                order: TaskPriority,
            },
            prev_mutex_held: CpuLockCell(None),
            owning_task: CpuLockCell(Some(
                0x00000001086860b0,
            )),
        },
    },
    semaphore_cb_pool: {},
    timer_cb_pool: {},
}

Cargo Features

  • chrono: Enables conversion between our duration and timestamp types and ::chrono's types.
  • inline_syscall: Allows (but does not force) inlining for all application-facing methods. Enabling this feature might lower the latency of system calls but there are the following downsides: (1) The decision of inlining is driven by the compiler's built-in heuristics, which takes many factors into consideration. Therefore, the performance improvement (or deterioration) varies unpredictably depending on the global structure of your application and the compiler version used, making it harder to design the system to meet real-time requirements. (2) Inlining increases the code working set size and can make the code run even slower. This is especially likely to happen on an execute-in-place (XIP) system with low-speed code memory such as an SPI flash.

Kernel Features

Enabling the following features might affect the kernel's runtime performance and memory usage.

  • priority_boost: Enables Priority Boost.
  • system_time: Enables the tracking of a global system time.

Modules

hunk

Type-safe hunks

kernel

The RTOS kernel

prelude

The prelude module.

sync

Safe synchronization primitives.

time

Temporal quantification for the R3 kernel.

utils

Utility

Macros

build

Attach a configuration function to a "system" type by implementing KernelCfg2.