
The core API definition for R3-OS.

The core API represents the low-level interface between the kernel and application code. It includes the raw and raw_cfg traits implemented by kernel implementations as well as common data types such as Time. Because breaking changes in the core API would cause an ecosystem split, it’s designed to be more stable than the façade API (r3).

Note to Application Developers

The implementation code heavily relies on constant propagation, dead code elimination, and “zero-cost” abstractions. Without optimization, it might exhibit massive code bloat and excessive stack consumption. To change the optimization level for debug builds, add the following lines to your Cargo workspace’s Cargo.toml:

[profile.dev]
opt-level = 2

Defining a System

System Type

R3-OS utilizes Rust’s trait system to allow system designers to construct a system in a modular way. An R3 application is built around a marker type called a system type, which implements various traits whose implementations are provided by a kernel implementation, such as r3_kernel. These trait implementations provide a basic kernel API and encapsulate the kernel’s configuration, such as the resource allocation for kernel objects. The application and higher-level wrappers interact with the kernel through these trait implementations.


A system type implements at least r3::kernel::raw::KernelBase (see the parent module for a full listing). The exact way of instantiating a system type is specific to each kernel implementation; the usual approach is for the kernel to provide a generic type whose type parameter takes references to the product of static configuration as well as other customization options.

Static Configuration

An embedded system is designed to serve a particular purpose, so it’s often possible to predict many aspects of its operation, such as how many kernel objects of each kind are going to be used and how much memory is allocated in ROM and RAM. The process of collecting such information and specializing a kernel for a particular application at compile time is called static configuration.

Kernel objects are defined (we use this specific word for static creation) in a configuration function having the signature for<C> const fn (&mut r3::kernel::Cfg<C>, ...) -> T where C: ~const r3::kernel::raw_cfg::CfgBase. For each system type, an application provides a top-level configuration function, which defines all kernel objects belonging to that system type. A kernel-provided build macro processes its output (which mainly involves defining static items to store the state of all defined kernel objects) and associates it with the system type in an implementation-specific way.


The configuration process assigns handles, such as StaticTask<System>, to the defined kernel objects. These handles can be returned by a configuration function and passed up to the build macro, which returns them to the caller. By storing the returned value in a const item, application code can access the defined kernel objects from anywhere.

use r3::kernel::{traits, Cfg, LocalTask, StaticTask};

type System = r3_kernel::System<SystemTraits>;
r3_port_std::use_port!(unsafe struct SystemTraits);

struct Objects { task: StaticTask<System> }

// Does the following things:
//  - Creates control blocks for the kernel objects defined in `configure_app`.
//  - Associates them to `SystemTraits` using trait implementations.
//  - Assigns the output of `configure_app`, containing the defined object
//    handles, to `COTTAGE`, making it available globally.
const COTTAGE: Objects = r3_kernel::build!(SystemTraits, configure_app => Objects);

// This is the top-level configuration function
const fn configure_app<C>(b: &mut Cfg<C>) -> Objects
where
    C: ~const traits::CfgBase<System = System>
     + ~const traits::CfgTask,
{
    b.num_task_priority_levels(4);
    let task = StaticTask::define()
        .start(task_body).priority(3).active(true).finish(b);
    Objects { task }
}

fn task_body() {
    assert_eq!(LocalTask::current().unwrap(), COTTAGE.task);
}

Configuration functions are highly composable as they can make nested calls to other configuration functions. In some sense, this is a way to attach a particular semantics to a group of kernel objects, encapsulate them, and expose a higher-level interface. For example, a mutex object similar to std::sync::Mutex can be created by combining kernel::Mutex<System> (a low-level mutex object) with a hunk::Hunk<System, UnsafeCell<T>> (a typed hunk), which in turn is built on top of kernel::Hunk<System> (a low-level untyped hunk).

use r3::kernel::{traits, Cfg, StaticTask};

struct Objects<System: traits::KernelBase> {
    my_module: m::MyModule<System>,
}

// Top-level configuration function
const fn configure_app<C>(b: &mut Cfg<C>) -> Objects<C::System>
where
    C: ~const traits::CfgBase + ~const traits::CfgTask,
{
    b.num_task_priority_levels(4);
    let my_module = m::configure(b);
    Objects { my_module }
}

mod m {
    use super::*;

    pub struct MyModule<System: traits::KernelBase> {
        pub task: StaticTask<System>,
    }

    pub const fn configure<C>(b: &mut Cfg<C>) -> MyModule<C::System>
    where
        C: ~const traits::CfgBase + ~const traits::CfgTask,
    {
        let task = StaticTask::define()
            .start(task_body).priority(3).active(true).finish(b);
        MyModule { task }
    }

    fn task_body() {}
}

Object Handles

Object handles are opaque wrappers of raw kernel object IDs and provide methods to interact with the represented objects. There are three types of object handles:

  • An owned handle represents a dynamically created object. Dropping an owned handle deletes that object.
  • A borrowed handle is a reference to an object owned by a different part of the program and takes a lifetime parameter.
  • A static handle is a subtype of borrowed handle referencing a statically created object.

The following table lists the provided handle types for each kind of kernel object (the five kinds match the control-block pools visible in the debug output shown later in this page):

| Kernel object | Owned | Borrowed | Static |
| --- | --- | --- | --- |
| Task | Task | TaskRef<'a, _> | StaticTask |
| Event group | EventGroup | EventGroupRef<'a, _> | StaticEventGroup |
| Mutex | Mutex | MutexRef<'a, _> | StaticMutex |
| Semaphore | Semaphore | SemaphoreRef<'a, _> | StaticSemaphore |
| Timer | Timer | TimerRef<'a, _> | StaticTimer |

Object Safety

R3 incorporates an idea inspired by I/O safety (RFC 3128), which we call object safety here, to enforce the ownership rules on kernel object handles. Object safety allows a high-level object to encapsulate its internally held kernel objects, free of interference from other code.

This property is paramount for constructing sound, meaningful abstractions out of raw primitives. For example, a mutex (e.g., r3::kernel::Mutex) is used to protect data from concurrent accesses, but the mere existence of a mutex doesn’t protect anything; only after it’s bundled with the protected data does it become a sound abstraction (e.g., r3::sync::StaticMutex, std::sync::Mutex). If we didn’t have object safety, i.e., if arbitrary code could fabricate any random mutex handle and use it, the encapsulation of this mutex abstraction would be broken, and it would no longer be sound.
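
The following minimal sketch shows the kind of encapsulation this enables. MyMutex is a hypothetical type written for illustration (r3::sync::StaticMutex is the real-world counterpart), and the trait bound is assumed:

use core::cell::UnsafeCell;
use r3::{hunk::Hunk, kernel::traits, kernel::StaticMutex};

// Because the inner handles are private and object safety prevents safe
// code from fabricating equivalent handles, no outside code can lock the
// mutex or reach the data except through methods provided by `MyMutex`.
struct MyMutex<System: traits::KernelMutex, T: 'static> {
    mutex: StaticMutex<System>,
    data: Hunk<System, UnsafeCell<T>>,
}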

R3 implements object safety in the following way:

  • All functions that operate on raw object IDs, forming the majority of the kernel-side interface in r3::kernel::raw, are marked as unsafe. Object handles represent permission for safe code to access the represented object and provide non-unsafe methods. Creating one from a raw ID (HandleType::from_id) is unsafe.
  • The owner of owned handles (e.g., Mutex) controls the objects’ lifetimes.
  • It’s possible to take a reference to an object as a normal reference (e.g., &'a Mutex) or a more efficient “borrowed” handle (e.g., MutexRef<'a, _>). In either case, Rust’s region-based memory management mechanism guarantees the absence of use-after-free at compile time.
  • Static configuration makes the defined objects available as borrowed handles with 'static lifetime (e.g., MutexRef<'static, _>). There are type aliases for them (e.g., StaticMutex).

When an invalid object ID is passed to a kernel function, it may cause undefined behavior. If the kernel implementation can detect the use of an invalid ID, it’s recommended that it indicate the error condition by returning Err(NoAccess).

Creation

Static object creation refers to creating kernel objects in a configuration function. The object handles created in this way are static handles (e.g., StaticMutex), which are borrowed handles with 'static lifetime (e.g., MutexRef<'static, _>).

Dynamic object creation refers to creating and destroying kernel objects at runtime. The object handles created in this way are owned handles (e.g., Mutex). Dropping an owned handle destroys the object.

Unimplemented: Dynamic object creation is not supported yet. The owned kernel object handle types (e.g., Task) are currently just placeholders defined to make provision for supporting dynamic object creation in the future.

Rationale: Each mode of object creation has its strengths. Static object creation allows for highly predictable operation and computational efficiency in terms of both processing and memory, but it wastes memory if the usage spans of the objects are dynamic, and it requires a level of cooperation between a compile-time mechanism and runtime code that is not supported out-of-the-box by most compiled languages. General-purpose OS abstractions, such as those in the C++11 and Rust standard libraries, require dynamic object creation for a straightforward implementation. Dynamic object creation provides maximum flexibility but is prone to runtime errors and places more data in RAM (instead of ROM) than necessary.

Although R3 started with static creation only, it’s envisioned to eventually support dynamic creation as well to cover wider use cases.

Relation to Other Specifications: Dynamic creation is prevalent in modern OS standards and abstractions. Despite the demand for predictable execution in embedded systems, only a handful of specifications support true static object creation, including OSEK/VDX, RTIC, TOPPERS (all versions so far), and μITRON4.0. Those supporting both are even rarer.

System States

At any point, a system can be in any combination of the system states described in this section.

CPU Lock disables all managed interrupts and dispatching. On a uniprocessor system, this is a convenient way to create a critical section to protect a shared resource from concurrent accesses. Most system services are unavailable when CPU Lock is active and will return BadContext. Application code can use acquire_cpu_lock to activate CPU Lock.

Like a mutex’s lock guard, CPU Lock can be thought of as something “owned” by the current thread. This notion integrates it seamlessly with Rust’s vocabulary and mental model built around the ownership model.

Priority Boost temporarily raises the effective priority of the current task to higher than any values possible in normal circumstances. Priority Boost can only be activated or deactivated in a task context. Potentially blocking system services are disallowed when Priority Boost is active and will return BadContext. Application code can use boost_priority to activate Priority Boost.
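
As a minimal sketch, assuming a system type implementing r3::kernel::Kernel, a critical section and a Priority Boost section might look like this (release_cpu_lock and unboost_priority are assumed to be the deactivating counterparts, unsafe because they can prematurely end a section established by the caller’s caller):

use r3::kernel::Kernel;

fn example<System: Kernel>() {
    // Activate CPU Lock, creating a critical section. Fails with
    // `BadContext` if CPU Lock is already active.
    System::acquire_cpu_lock().unwrap();
    // ... access a shared resource; most system services are unavailable ...
    unsafe { System::release_cpu_lock() }.unwrap();

    // Activate Priority Boost (allowed only in a task context).
    System::boost_priority().unwrap();
    // ... potentially blocking system services now return `BadContext` ...
    unsafe { System::unboost_priority() }.unwrap();
}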

Relation to Other Specifications: Inspired by the μITRON4.0 specification. CPU Lock and Priority Boost correspond to the CPU locked state and the dispatching state from μITRON4.0, respectively. In contrast to that specification, both concepts are denoted by proper nouns in R3-OS, so phrases like “when the CPU is locked” are not allowed.

CPU Lock corresponds to SuspendOSInterrupts and ResumeOSInterrupts from the OSEK/VDX specification.

Threads

An (execution) thread is a logical sequence of instructions executed by a processor. Multiple threads can exist at the same time, and the kernel is responsible for deciding which thread to run at any point on a processor (this process is called scheduling)¹. The location in a program where a thread starts execution is called the thread’s entry point function. A thread exits when it returns from its entry point function² or calls exit_task (valid only for tasks).

¹ The execution of code responsible for scheduling is not considered to pertain to any thread.

² More precisely, a thread starts execution with a hypothetical function call to its entry point function, and it exits when it returns from this hypothetical function call.

The properties of threads such as how and when they are created and whether they can block or not are specific to each thread type.

The initial thread that starts up the kernel is called the main thread. The part of the kernel’s lifecycle where the main thread executes is called the kernel’s boot phase, and this is where the initialization of kernel structures takes place. An application can register one or more startup hooks to execute user code here. Startup hooks execute with CPU Lock active and should never deactivate it. The main thread exits (i.e., the boot phase completes) when the kernel dispatches the first task and starts multi-tasking. The following diagram depicts the kernel’s lifecycle during and after the boot phase.

[Diagram: the kernel’s lifecycle during and after the boot phase]
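
For illustration, a sketch of registering a startup hook during static configuration (the builder method names follow the same define/finish pattern used elsewhere in this page; the hook body is hypothetical):

use r3::kernel::{traits, Cfg, StartupHook};

const fn configure_hooks<C>(b: &mut Cfg<C>)
where
    C: ~const traits::CfgBase,
{
    StartupHook::define().start(startup_hook_body).finish(b);
}

fn startup_hook_body() {
    // Runs on the main thread during the boot phase, with CPU Lock active.
}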

A first-level interrupt handler starts execution in its own thread in response to asynchronous external events (interrupts). This type of thread always runs to completion but can be preempted by other interrupt handlers. No blocking system calls are allowed in an interrupt handler. A first-level interrupt handler calls the associated application-provided second-level interrupt handlers (StaticInterruptHandler) as well as the callback functions of timers (Timer) through a timer driver and the kernel timing core.

A task (Task) is the kernel object that can create a thread whose execution is controlled by application code. Each task encapsulates a variety of state data necessary for the execution and scheduling of the associated thread, such as a stack region to store local variables and activation frames, the current priority, the parking state of the task, and a memory region used to save the state of CPU registers when the task is blocked or preempted. The associated thread can be started by activating that task. A task-based thread can make blocking system calls, which will temporarily block the execution of the thread until certain conditions are met. Task-based threads can be preempted by any kind of thread.
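
For instance, the thread associated with a task can be started by activating the task, as in this sketch (assuming a handle obtained from static configuration and a system type implementing the necessary kernel traits):

use r3::kernel::{traits, StaticTask};

fn start_worker<System: traits::KernelBase>(task: StaticTask<System>) {
    // Start the thread associated with the task. This fails if the task
    // is already active.
    task.activate().unwrap();
}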

Relation to Other Specifications: Not many kernel designs use the word “thread” to describe a concept that applies to both interrupts and tasks (one notable exception being TI-RTOS), most likely because “thread” refers to a specific concept in general-purpose operating systems or is simply considered synonymous with “task”. For example, the closest concept in the μITRON4.0 specification is processing units. Despite that, it was decided that “thread” was an appropriate term for this concept. The primary factors behind this decision include: (1) the need for a conceptual entity that can “own” locks, and (2) the importance of this concept for discussing thread safety without substituting every mention of “thread” with “task or interrupt handler”.

Contexts

A context is a general term often used to describe the “environment” a function executes in. Terms like task context specify the type of thread the calling thread is expected to be. The following list shows the terms we use to describe contexts throughout this documentation:

  • Being in a task context means the current thread pertains to a task.
  • Being in an interrupt context means the current thread pertains to an interrupt handler.
  • Being in a boot context means the current thread is the main thread. Startup hooks allow user code to execute in this context.
  • Being in a waitable context means that the current context is a task context and Priority Boost is inactive.

Relation to Other Specifications: The μITRON4.0 specification, the AUTOSAR OS specification, and RTEMS’s user manuals use the term “context” in a similar way.

Interrupt Handling Framework

A kernel implementation may support managing interrupt lines and interrupt handlers through an interface defined by the kernel. When it’s supported, an application can use this facility to configure interrupt lines and attach interrupt handlers. The interrupt management interface is provided by the optional trait raw::KernelInterruptLine. Even if a system type implements this trait, it’s implementation-defined whether it actually supports interrupt management (i.e., the functions may fail with NotSupported for all possible inputs).

The benefits of providing a standardized interface for interrupts include: (1) increased portability of applications and libraries across target platforms, (2) well-defined semantics of system calls inside an interrupt handler, and (3) decoupling of hardware driver components on a system with a non-vectorized interrupt controller or multiplexed interrupt lines. The downsides include: (1) obscuring non-standard hardware features, (2) interference with other ways of managing interrupts (e.g., board support packages, IDEs), and (3) an additional layer of abstraction that can make the system’s mechanism unclear.

Implementation Note: System calls can provide well-defined semantics inside an interrupt handler only if the kernel adheres to this interrupt handling framework. If a kernel developer chooses not to follow it, they are responsible for properly explaining the interaction between interrupts and the kernel.

An interrupt request is delivered to a processor by sending a hardware signal to an interrupt controller through an interrupt line. More than one interrupt source may be connected to a single interrupt line. Upon receiving an interrupt request, the interrupt controller translates the interrupt line to an interrupt number and transfers control to the first-level interrupt handler associated with that interrupt number.

Each interrupt line has configurable attributes such as an interrupt priority. An application can instruct the kernel to configure them at boot time by InterruptLineDefiner or at runtime by InterruptLine. The interpretation of interrupt priority values is up to a kernel, but they are usually used to define precedence among interrupt lines in some way, such as favoring one over another when multiple interrupt requests are received at the same time or allowing a higher-priority interrupt handler to preempt another.

The kernel occasionally disables interrupts by activating CPU Lock. The additional interrupt latency introduced by this can pose a problem for time-sensitive applications. To resolve this problem, a kernel may implement CPU Lock in a way that doesn’t disable interrupt lines with a certain priority value and higher. Such priority values and the first-/second-level interrupt handlers for such interrupt lines are said to be unmanaged. The behavior of system calls inside unmanaged interrupt handlers is undefined. Interrupt handlers that aren’t unmanaged are said to be managed.

An application can register one or more (second-level) interrupt handlers to an interrupt number. They execute in a serial fashion inside a first-level interrupt handler for the interrupt number. The static configuration system automatically combines multiple second-level interrupt handlers into one (thus taking care of the “execute in a serial fashion” part). It’s up to a kernel to generate a first-level interrupt handler that executes in an appropriate situation, takes care of low-level tasks such as saving and restoring registers, and calls the (combined) second-level interrupt handler.
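
For illustration, here is a sketch of configuring an interrupt line and attaching a second-level interrupt handler (INT_TIMER is a hypothetical, target-specific interrupt number, and the trait bound on C is assumed):

use r3::kernel::{traits, Cfg, InterruptLine, StaticInterruptHandler};

// A hypothetical, target-specific interrupt number.
const INT_TIMER: r3::kernel::InterruptNum = 16;

const fn configure_interrupts<C>(b: &mut Cfg<C>)
where
    C: ~const traits::CfgBase + ~const traits::CfgInterruptLine,
{
    InterruptLine::define()
        .line(INT_TIMER)
        // Must fall within a managed range because the handler below is
        // not marked as unmanaged-safe (see the discussion that follows).
        .priority(1)
        .enabled(true)
        .finish(b);
    StaticInterruptHandler::define()
        .line(INT_TIMER)
        .start(interrupt_handler_body)
        .finish(b);
}

fn interrupt_handler_body() {
    // Second-level interrupt handler.
}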

Interrupt handlers execute with CPU Lock inactive and may return with CPU Lock either active or inactive. Some system calls are not allowed in this context and will return BadContext.

The behavior of system calls is undefined inside an unmanaged interrupt handler. The property of being protected from programming errors caused by making system calls inside an unmanaged interrupt handler is called unmanaged safety. Most system services are not marked as unsafe, so to ensure unmanaged safety, safe code must not be able to register an interrupt handler that could potentially execute as an unmanaged interrupt handler. On the other hand, the number of unsafe blocks in application code should be minimized in common use cases. To meet these goals, this framework employs several safeguards:

  • Interrupt handlers can be explicitly marked as unmanaged-safe (safe to use as an unmanaged interrupt handler), but this requires an unsafe block.
  • An interrupt line must be initialized with a priority value that falls within a managed range if it has a non-unmanaged-safe interrupt handler.
  • When changing the priority of an interrupt line, the new priority must be in a managed range. It’s possible to bypass this check, but doing so requires an unsafe block.

Relation to Other Specifications: The division between managed and unmanaged interrupt handlers can be seen in FreeRTOS (some ports), μITRON4.0, and OSEK/VDX. The method of leveraging Rust’s unsafe system to ensure unmanaged safety is obviously Rust-specific and novel.

Interrupt handlers and interrupt service routines (terms from μITRON4.0) have been renamed to first-level interrupt handlers and (second-level) interrupt handlers, respectively, because “interrupt service routine” was way too long to type and abbreviating it would result in a set of type names which is either excessively inconsistent (InterruptLine, Irq) or bizarre (InterruptLine, InterruptRq). Removing the term “interrupt service routine” should also remove a source of confusion because interrupt handlers and interrupt service routines are often regarded as synonymous with each other (as evident in the Wikipedia article on interrupt handler), whereas there is a clear sequential relationship between first-level and second-level.

Kernel Timing

R3-OS provides a timing system to enable tracking timed events such as wait operations with timeout.

The kernel uses microseconds as the system time unit. A span of time (Duration) is represented by a 32-bit signed integer (the negative part is only used by the clock adjustment API), which can hold up to 35′47.483647″.

The system clock is a feature of the kernel that manages and exposes a global system time (Time), which is represented by a 64-bit integer. The system time starts at zero, thus behaving like uptime, but an application can update it to represent a real calendar time. The method set_time updates the global system time with a new value.

Another way to update the system time is to move it forward or back by a specified delta by calling adjust_time. This update method preserves the absolute (w.r.t. the system time) arrival times of existing timed events. This means that if you have an event scheduled to occur in 10 seconds and you move the system time forward by 2 seconds, the event is now scheduled to occur in 8 seconds.
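
A short sketch of both update methods, assuming a system type implementing r3::kernel::Kernel (the constructor names on Time and Duration are assumed):

use r3::kernel::Kernel;
use r3::time::{Duration, Time};

fn clock_example<System: Kernel>() {
    // Replace the global system time with an absolute value.
    System::set_time(Time::from_millis(1_000)).unwrap();

    // Move the system time forward by two seconds. Existing timed events
    // keep their absolute arrival times: an event that was due in 10
    // seconds is now due in 8 seconds.
    System::adjust_time(Duration::from_secs(2)).unwrap();
}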

The kernel expects timer interrupts to be handled in a timely manner. The tolerance for overdue timer interrupts is kernel-specific, and once it’s exceeded, the kernel timing algorithm will start to exhibit incorrect behavior. The application is responsible for ensuring this limit is not exceeded, e.g., by not holding CPU Lock for a prolonged period of time.

Relation to Other Specifications

There are many major design choices when it comes to kernel timing and timed APIs, and they vary considerably across operating system and kernel specifications. The following list shows some of them:

  1. What time unit does the application-facing API use? In embedded operating systems, it’s very common to expose internal ticks and provide C macros to convert real time values into ticks. The conversion is prone to unexpected rounding and integer overflow, and this sort of error easily goes unnoticed. (For instance, pdMS_TO_TICKS from FreeRTOS uses uint32_t for intermediate value calculation, and gcc won’t report overflow even with -Wall because uint32_t is defined to exhibit wrap-around behavior. z_tmcvt from Zephyr doesn’t detect overflow.) Even if overflow could be detected statically (which is never possible for dynamically calculated values), the range of real time values that doesn’t cause overflow varies between target systems, harming the portability of software components written for the operating system (how often this causes a problem is debatable).

    Specifications emphasizing portability have often adopted real time values. The conversion to an internal tick value still happens, but because it’s done at a lower level, it’s easy to handle out-of-range cases gracefully, e.g., by dividing a long delay request into shorter ones. The downside of this approach is that the runtime overhead of the conversion process is hard to avoid. Supporting fractional times would require special treatment in this approach. The conversion overhead, however, can be avoided by matching internal ticks to real time values (see item 2).

    The overflow issue can be assuaged by using 64-bit time values.

  2. Who specifies the frequency of internal ticks? Is it configurable? Or is it bound to real time values? It’s often tied to a hardware timer’s input clock frequency (most tickless kernels) or period (tickful kernels such as an old version of Linux).

    The tick frequency could be fixed at a predetermined value having a simple ratio to a real time unit, such as 1 kHz. This can reduce or eliminate the time conversion cost, but it can be tricky to get a hardware timer to operate at a desired frequency in some clock configurations.

  3. Are timed events associated with absolute time values that can be adjusted globally? (I.e., the current absolute time can be adjusted at runtime, and that affects the relative arrival times of all existing timed events.) This can be useful for compensating for clock drift, but it breaks monotonicity.

    This functionality is tricky to implement because an embedded operating system often tracks only some LSBs of an absolute time value (in other words, it uses modulo-2ⁿ arithmetic), and the interpretation of a value could change if the system time was changed inadvertently. A specific example might further clarify the problem: let’s say you have an alarm scheduled to fire in 49 days, and you request the system to move the system time backward by 1 day. The alarm would now be scheduled to fire in 50 days, which can’t be represented by a 32-bit integer with millisecond precision, so the request should be rejected.

  4. Is the timer rate variable? This is another way to compensate for clock drift. This is usually done by kernel software because the hardware might not provide sufficient adjustment granularity.

    If this feature is supported, it would be pointless for the answer to item 1 to be “ticks” because when this feature is in use, the ticks aren’t actual hardware ticks anymore.

  5. How many bits does a relative time value have? Sixty-four bits would be sufficient to represent any practically meaningful time interval (as far as an earthling civilization is concerned), but that would be overkill for most use cases. Present-day embedded operating systems mainly target 16- or 32-bit microcontrollers with tight memory constraints. This means unnecessarily wide data types (especially those wider than 32 bits) should be avoided.

    Thirty-two bits can be restrictive, but it’s trivial for an application to work around. The width could be made configurable to accommodate the rare use cases where 64-bit time intervals are needed, but then 64-bit arithmetic would increase the processing cost of all timed events, not just the ones needing 64 bits.

  6. How many bits does the operating system use to track the arrival time for a timed event? This should be greater than or equal to the value of item 5. This interacts with item 3.

Based on these considerations, we decided to take the TOPPERS 3rd generation kernel specification as a model for the kernel timing system, with a few changes.

The following table summarizes the choices made by each specification, including ours (the numbered columns correspond to the list items above):

| Specification | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| AUTOSAR OS 4.3.1 | ticks | N/A | ? | ? | ? | N/A |
| FreeRTOS V10 | ticks | port-specific | no | no | 16 or 32 bits | 16 or 32 bits |
| OSEK/VDX 2.2.3 | ticks | N/A | no | no | unspecified bits | N/A |
| POSIX | nanoseconds | N/A | mixed | no | ≥ 62 bits | N/A |
| RTEMS Classic | ticks | user-configurable | yes | no | 32 bits | ? |
| TI-RTOS | ticks | user-configurable | no | no | ? | ? |
| TOPPERS New Gen. | milliseconds | fixed, milliseconds | no | no | 16 bits | 32 bits |
| TOPPERS 3rd Gen. | microseconds | fixed, microseconds | yes | yes | 32 bits | 32 bits |
| Zephyr 2.3 | ticks | user-configurable | no | no | 32 or 64 bits | 32 or 64 bits |
| μITRON4.0 | unspecified | N/A | no | no | unspecified bits | N/A |
| μT-Kernel 3.0 | milli- or microseconds | ? | no | no | 32 or 64 bits | ? |
| R3 | microseconds | fixed, microseconds | yes | no | 31 bits | 32 bits |

Rationale: As explained in Relation to Other Specifications, using real time values at an API boundary abstracts away the underlying hardware and leads to better portability of software written for the operating system. The microsecond precision should be practically sufficient to deal with non-integral timing requirements.

Arrival times are tracked by 32-bit modulo-2³² integers. Relative timestamps are limited to 31 bits (excluding the sign bit) to ensure plenty of headroom is always available for global time adjustment. Setting the upper bound in terms of binary digits also lowers the processing cost marginally and avoids the use of an arbitrarily chosen number.
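
As an illustration of the underlying idea (not the kernel’s actual implementation), modulo-2³² arithmetic resolves event ordering unambiguously as long as differences stay within 31 bits:

// Both parameters are microsecond counts tracked modulo 2³².
fn is_due(now: u32, arrival: u32) -> bool {
    // The wrapping difference, reinterpreted as signed, is unambiguous
    // because relative timestamps never exceed 31 bits.
    (arrival.wrapping_sub(now) as i32) <= 0
}

fn main() {
    assert!(is_due(100, 90));        // arrival 90 is at or before now 100
    assert!(!is_due(u32::MAX, 10));  // 10 lies just past the wrap-around
}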

Introspection

The entire kernel state can be dumped for inspection by applying debug formatting ({:?}, Debug) on the object returned by Kernel::debug. Note that this might consume a large amount of stack space.
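
For instance, a sketch of dumping the state through a log-crate-style logger (the logging macro is an assumption; any Debug-capable sink works):

use r3::kernel::Kernel;

fn dump_kernel_state<System: Kernel>() {
    // `Kernel::debug` returns an opaque object whose `Debug` implementation
    // prints the entire kernel state.
    log::trace!("{:?}", System::debug());
}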

An example output of the kernel debug printing, captured during the mutex_protect_priority_by_ceiling test case:
Kernel {
    state: State {
        running_task: CpuLockCell(Some(
            TaskCb {
                self: 0x0000000108686058,
                port_task_state: TaskState {
                    tsm: TryLock {
                        value: Running(
                            ThreadId(
                                PoolPtr(
                                    4,
                                ),
                            ),
                        ),
                    },
                },
                attr: TaskAttr {
                    entry_point: 0x00000001082d8d10,
                    entry_param: 0,
                    stack: StackHunk(
                        0x0000000108691a08,
                    ),
                    priority: 1,
                },
                base_priority: CpuLockCell(1),
                effective_priority: CpuLockCell(1),
                st: CpuLockCell(Running),
                link: CpuLockCell(None),
                wait: TaskWait {
                    current_wait: CpuLockCell(None),
                    wait_result: CpuLockCell(Ok(
                        (),
                    )),
                },
                last_mutex_held: CpuLockCell(None),
                park_token: CpuLockCell(false),
            },
        )),
        task_ready_bitmap: CpuLockCell([]),
        task_ready_queue: CpuLockCell([
            ListHead(None),
            ListHead(None),
            ListHead(None),
            ListHead(None),
        ]),
        priority_boost: false,
        timeout: TimeoutGlobals {
            last_tick_count: CpuLockCell(12648432),
            last_tick_time: CpuLockCell(0),
            last_tick_sys_time: CpuLockCell(0),
            frontier_gap: CpuLockCell(0),
            heap: CpuLockCell([
                TimeoutRef(
                    0x0000700009988bf8,
                ),
            ]),
            handle_tick_in_progress: CpuLockCell(false),
        },
    },
    task_cb_pool: {
        0: TaskCb {
            self: 0x0000000108686000,
            port_task_state: TaskState {
                tsm: TryLock {
                    value: Running(
                        ThreadId(
                            PoolPtr(
                                3,
                            ),
                        ),
                    ),
                },
            },
            attr: TaskAttr {
                entry_point: 0x00000001082d8bc0,
                entry_param: 0,
                stack: StackHunk(
                    0x0000000108691608,
                ),
                priority: 0,
            },
            base_priority: CpuLockCell(0),
            effective_priority: CpuLockCell(0),
            st: CpuLockCell(Waiting),
            link: CpuLockCell(None),
            wait: TaskWait {
                current_wait: CpuLockCell(Some(
                    Mutex(0x7000097860d0),
                )),
                wait_result: CpuLockCell(Ok(
                    (),
                )),
            },
            last_mutex_held: CpuLockCell(None),
            park_token: CpuLockCell(false),
        },
        1: TaskCb {
            self: 0x0000000108686058,
            port_task_state: TaskState {
                tsm: TryLock {
                    value: Running(
                        ThreadId(
                            PoolPtr(
                                4,
                            ),
                        ),
                    ),
                },
            },
            attr: TaskAttr {
                entry_point: 0x00000001082d8d10,
                entry_param: 0,
                stack: StackHunk(
                    0x0000000108691a08,
                ),
                priority: 1,
            },
            base_priority: CpuLockCell(1),
            effective_priority: CpuLockCell(1),
            st: CpuLockCell(Running),
            link: CpuLockCell(None),
            wait: TaskWait {
                current_wait: CpuLockCell(None),
                wait_result: CpuLockCell(Ok(
                    (),
                )),
            },
            last_mutex_held: CpuLockCell(None),
            park_token: CpuLockCell(false),
        },
        2: TaskCb {
            self: 0x00000001086860b0,
            port_task_state: TaskState {
                tsm: TryLock {
                    value: Running(
                        ThreadId(
                            PoolPtr(
                                1,
                            ),
                        ),
                    ),
                },
            },
            attr: TaskAttr {
                entry_point: 0x00000001082d8f20,
                entry_param: 0,
                stack: StackHunk(
                    0x0000000108691e08,
                ),
                priority: 2,
            },
            base_priority: CpuLockCell(2),
            effective_priority: CpuLockCell(0),
            st: CpuLockCell(Waiting),
            link: CpuLockCell(None),
            wait: TaskWait {
                current_wait: CpuLockCell(Some(
                    Sleep,
                )),
                wait_result: CpuLockCell(Ok(
                    (),
                )),
            },
            last_mutex_held: CpuLockCell(Some(
                0x0000000108686108,
            )),
            park_token: CpuLockCell(false),
        },
    },
    event_group_cb_pool: {},
    mutex_cb_pool: {
        0: MutexCb {
            self: 0x0000000108686108,
            ceiling: Some(
                0,
            ),
            inconsistent: CpuLockCell(false),
            wait_queue: WaitQueue {
                waits: [
                    { task: 0x108686000, payload: Mutex(0x7000097860d0) },
                ],
                order: TaskPriority,
            },
            prev_mutex_held: CpuLockCell(None),
            owning_task: CpuLockCell(Some(
                0x00000001086860b0,
            )),
        },
    },
    semaphore_cb_pool: {},
    timer_cb_pool: {},
}

Stability

r3_core defines two levels of API stability:

  • The application-side API stability applies to the API used by application code.

  • The kernel-side API stability applies to the API used by a kernel implementation. Its coverage is strictly larger than that of the application-side API because a kernel implementation may use the application-side API as well, e.g., to install an interrupt handler for its own kernel timing mechanism.

All public and documented items are covered under the application-side API stability unless noted otherwise.

This release of R3-OS is considered a preview version. During the preview period, r3_core follows Semantic Versioning 2.0.0 and considers breaking either level of stability a breaking change. There are plans to introduce a versioning scheme that maps each stability level to a distinct version component so that the kernel-side API can be extended substantially while maintaining compatibility with existing application and library code.

Increasing the minimum supported Rust version (MSRV) is not considered a breaking change.

Cargo Features

  • chrono_0p4: Enables conversion between our duration and timestamp types and chrono ^0.4’s types.

Modules

_changelog — Changelog

bag — A heterogeneous collection to store property values.

bind — Bindings (Bind), a static storage with runtime initialization and configuration-time borrow checking.

closure — Provides Closure, a light-weight closure type.

hunk — Type-safe hunks

kernel — The kernel interface.

prelude — The prelude module.

time — Temporal quantification for R3-OS.

utils — Utility