//! uefi-async
//! ================================
//! A lightweight, zero-cost asynchronous executor designed for UEFI environments and bare-metal Rust. It provides a simple task scheduler based on an intrusive linked list and a procedural macro to simplify task registration.
//!
//! --------------------------------
//! Work in Progress
//! --------------------------------
//! Currently, only the `nano_alloc` feature is supported.
//!
//! --------------------------------
//! Features
//! --------------------------------
//! * **No-Std Compatible**: Designed for environments without a standard library (requires `alloc`).
//! * **Intrusive Linked List**: No additional collection overhead for managing tasks.
//! * **Frequency-Based Scheduling**: Define tasks to run at specific frequencies (Hz), automatically converted to hardware ticks.
//! * **Macro-Driven Syntax**: A clean, declarative DSL to assign tasks to executors.
//! * **Tiny Control Primitives**: Support for timeouts, joins, and hardware-precise timing.
//! * **Safe Signaling**: Cross-core event notification with atomic state transitions.
//! * **Multicore-Ready**: Thread-safe primitives for cross-core signaling and data synchronization.
//!
//! --------------------------------
//! Tiny Async Control Flow
//! --------------------------------
//!
//! ### 1. High-Precision Timing
//!
//! Support for human-readable time units and hardware-aligned synchronization.
//!
//! ```rust
//! async fn timer() {
//!     WaitTimer::from_ms(500).await; // Explicit timer
//!     2.year().await;                // Natural-language units
//!     1.mins().await;
//!     80.ps().await;                 // Picosecond precision (CPU frequency dependent)
//!     20.fps().await;                // Framerate-locked synchronization
//!     Yield.await;                   // Voluntary cooperative yield
//!     Skip(2).await;                 // Skip N executor cycles
//! }
//! ```
//!
//! ### 2. Task Completion & Concurrency
//!
//! Powerful macros and traits to combine multiple futures.
//!
//! * **`join!`**: Runs multiple tasks concurrently; returns `()`.
//! * **`try_join!`**: Short-circuits and returns `Err` if any task fails.
//! * **`join_all!`**: Collects results from all tasks into a flattened tuple.
//! * **Trait-based Joins**: Call `.join().await` or `.try_join().await` directly on tuples, arrays, or `Vec`s.
//!
//! ```rust
//! async fn async_task() {
//!     // Join tasks into a single state machine on the stack
//!     join!(calc_1(), calc_2(), ...).await;
//!
//!     // Flattened result collection
//!     let (a, b, c, ..) = join_all!(init_fs(), check_mem(), init_net()).await;
//! }
//! ```
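//!
//! The fallible variants follow the same shape. A minimal sketch, assuming `init_disk`
//! and `init_gop` are hypothetical tasks returning `Result`, and `report`, `blink_led`,
//! and `beep` are likewise placeholders:
//!
//! ```rust
//! async fn fallible_startup() {
//!     // Short-circuits: resolves to Err as soon as any task fails
//!     if let Err(e) = try_join!(init_disk(), init_gop()).await {
//!         report(e);
//!     }
//!
//!     // Trait-based form: join an array of futures directly
//!     [blink_led(), beep()].join().await;
//! }
//! ```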
//!
//! ### 3. Timeouts and Guarding
//!
//! ```rust
//! async fn timeout_example() {
//!     // Built-in timeout support for any Future
//!     match my_task().timeout(500).await {
//!         Ok(val) => handle(val),
//!         Err(_) => handle_timeout(),
//!     }
//! }
//! ```
//!
//! ### 4. Advanced Execution Pacing
//!
//! The `Pacer` lets you strictly control the "rhythm" of your loops, which is essential for smooth 3D rendering or UI animations.
//!
//! ```rust
//! async fn paced_loop() {
//!     let mut pacer = Pacer::new(60); // Target 60 FPS
//!     loop {
//!         pacer.burst(20).await;      // Allow a burst of 20 cycles
//!         pacer.throttle().await;     // Slow down to match the target frequency
//!         pacer.step(10, true).await; // Step-based pacing
//!     }
//! }
//! ```
//!
//! ### 5. Oneshot, Channel, and Signal
//!
//! ```rust
//! static ASSET_LOADED: Signal<TextureHandle> = Signal::new();
//!
//! async fn background_loader() {
//!     let texture = load_texture_gop("logo.bmp").await;
//!     // Notify the renderer that the texture is ready
//!     ASSET_LOADED.signal(texture);
//! }
//!
//! async fn renderer_task() {
//!     // Suspend execution until the signal is triggered
//!     let texture = ASSET_LOADED.wait().await;
//!     draw_to_screen(texture);
//! }
//! ```
//!
//! ```rust
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     // 1. Create a channel for keyboard events with a capacity of 32
//!     let (tx, mut rx) = bounded_channel::<Key>(32);
//!
//!     add!(
//!         executor => {
//!             // Task A: Producer - polls hardware at a high frequency (e.g., 100 Hz)
//!             100 -> async move {
//!                 loop {
//!                     if let Some(key) = poll_keyboard() {
//!                         tx.send(key); // Non-blocking send
//!                     }
//!                     Yield.await;
//!                 }
//!             },
//!
//!             // Task B: Consumer - processes game logic
//!             0 -> async move {
//!                 loop {
//!                     // The await point suspends the task if the queue is empty.
//!                     // Execution resumes as soon as the producer sends data
//!                     // and the executor polls this task again.
//!                     let key = (&mut rx).await;
//!                     process_game_logic(key);
//!                 }
//!             }
//!         }
//!     );
//! }
//! ```
//!
//! --------------------------------
//! Multicore & Multi-Scheduler Concurrency
//! --------------------------------
//! `uefi-async` enables safe parallel execution across multiple cores and schedulers. It provides a suite of synchronization and control primitives designed to handle the complexities of asynchronous multicore tasking.
//!
//! ### Thread-Safe Asynchronous Primitives
//!
//! To ensure data integrity and prevent race conditions during parallel execution, the framework provides three specialized pillars:
//!
//! * **Event-Based Futures (Event Listening)**: Designed for non-blocking coordination, these futures allow tasks to react to external signals or hardware interrupts across different cores without polling.
//! * **Synchronization Primitives (Data Integrity)**: Reliable data sharing is critical when multiple schedulers access the same memory. The crate provides thread-safe containers and locks, such as **async mutexes** and **atomic shared states**, tuned for UEFI.
//! * **Task Control Futures (Execution Management)**: Granular control over the lifecycle of parallel tasks, including **structured concurrency** to spawn, join, or cancel tasks across different schedulers, and **priority steering** to direct critical tasks to specific cores.
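//!
//! The "atomic shared state" idea can be sketched with `core` atomics alone. This is an
//! illustrative model of the publish/consume pattern, not the crate's actual API:
//!
//! ```rust
//! use core::sync::atomic::{AtomicBool, AtomicU32, Ordering};
//!
//! struct SharedState {
//!     ready: AtomicBool,
//!     value: AtomicU32,
//! }
//!
//! impl SharedState {
//!     /// Producer core: store the payload, then publish it with Release ordering.
//!     fn publish(&self, v: u32) {
//!         self.value.store(v, Ordering::Relaxed);
//!         self.ready.store(true, Ordering::Release);
//!     }
//!
//!     /// Consumer core: the Acquire swap pairs with the Release store above,
//!     /// so the payload is guaranteed visible once `ready` reads true.
//!     fn try_take(&self) -> Option<u32> {
//!         if self.ready.swap(false, Ordering::Acquire) {
//!             Some(self.value.load(Ordering::Relaxed))
//!         } else {
//!             None
//!         }
//!     }
//! }
//! ```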
//!
//! --------------------------------
//! Installation
//! --------------------------------
//! Add this to your `Cargo.toml`:
//!
//! ```toml
//! [dependencies]
//! uefi-async = "*"
//! ```
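//!
//! Since only the `nano_alloc` path is currently supported, you will likely want to
//! enable that feature explicitly (the feature name matches this crate's `cfg` gates;
//! the wildcard version is illustrative):
//!
//! ```toml
//! [dependencies]
//! uefi-async = { version = "*", features = ["nano-alloc"] }
//! ```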
//!
//! --------------------------------
//! Usage
//! --------------------------------
//!
//! ### 1. Define your tasks
//!
//! Tasks are standard Rust `async` functions or closures.
//!
//! ### 2. Initialize and Run
//!
//! Create an executor and register task nodes; for more complex setups, the `add!` macro offers a declarative syntax.
//!
//! ```rust
//! extern crate alloc;
//! use alloc::boxed::Box;
//! use core::ffi::c_void;
//! use uefi_async::nano_alloc::{Executor, TaskNode};
//!
//! async fn calc_1() {}
//! async fn calc_2() {}
//!
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     // 1. Create the executor
//!     Executor::new()
//!         // 2. Register tasks
//!         .add(&mut TaskNode::new(Box::pin(calc_1()), 0))
//!         .add(&mut TaskNode::new(Box::pin(calc_2()), 60))
//!         // 3. Run the event loop
//!         .run_forever();
//! }
//! ```
//!
//! or more advanced usage:
//!
//! ```rust
//! extern crate alloc;
//! use core::ffi::c_void;
//! use uefi_async::nano_alloc::{Executor, add};
//! use uefi_async::common::tick;
//!
//! async fn af1() {}
//! async fn af2(_: usize) {}
//! async fn af3(_: usize, _: usize) {}
//!
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     if arg.is_null() { return }
//!     let ctx = unsafe { &mut *arg.cast::<Context>() };
//!     let core = ctx.mp.who_am_i().expect("Failed to get core ID");
//!
//!     // 1. Create executors
//!     let mut executor1 = Executor::new();
//!     let mut executor2 = Executor::new();
//!     let mut cx = Executor::init_step();
//!
//!     let offset = 20;
//!     // 2. Use the macro to register tasks
//!     //    Syntax: executor => { frequency -> future }
//!     add!(
//!         executor1 => {
//!             0 -> af1(),      // Runs on every tick
//!             60 -> af2(core), // Runs at 60 Hz
//!         },
//!         executor2 => {
//!             10u64.saturating_sub(offset) -> af3(core, core),
//!             30 + 10 -> af1(),
//!         },
//!     );
//!
//!     loop {
//!         calc_sync(core);
//!
//!         // 3. Drive the event loop manually
//!         executor1.run_step(tick(), &mut cx);
//!         executor2.run_step(tick(), &mut cx);
//!     }
//! }
//! ```
//!
//! ### 3. Using control flows, signals, and channels
//!
//! ```rust
//! // Example: producer task on core 1, consumer task on core 0
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     if arg.is_null() { return }
//!     let ctx = unsafe { &mut *arg.cast::<Context>() };
//!     let core = ctx.mp.who_am_i().expect("Failed to get core ID");
//!
//!     let (tx, rx) = unbounded_channel::channel::<PhysicsResult>();
//!     let mut executor = Executor::new();
//!     if core == 1 {
//!         add!(executor => { 20 -> producer(tx) });
//!         executor.run_forever();
//!     }
//!     if core == 0 {
//!         add!(executor => { 0 -> consumer(rx) });
//!         executor.run_forever();
//!     }
//! }
//!
//! // Task running on core 1's executor
//! async fn producer(tx: ChannelSender<PhysicsResult>) {
//!     let result = heavy_physics_calculation();
//!     tx.send(result); // Non-blocking atomic push
//! }
//!
//! // Task running on core 0's executor
//! async fn consumer(rx: ChannelReceiver<PhysicsResult>) {
//!     // This returns Poll::Pending and yields the CPU if the queue is empty,
//!     // allowing the executor to run other tasks (like UI rendering).
//!     let data = rx.await;
//!     update_gpu_buffer(data);
//! }
//! ```
//!
//! --------------------------------
//! Why use `uefi-async`?
//! --------------------------------
//! In UEFI development, manually managing multiple periodic tasks (such as polling keyboard input while updating a UI or handling network packets) can lead to "spaghetti code." `uefi-async` lets you write clean, linear `async`/`await` code while the executor ensures timing constraints are met, without a heavy OS-like scheduler.
//!
//! --------------------------------
//! License
//! --------------------------------
//! MIT or Apache-2.0.

#![warn(unreachable_pub)]
#![no_std]
#![cfg_attr(docsrs, feature(doc_cfg))]

#[cfg(any(feature = "nano-alloc", feature = "alloc"))]
extern crate alloc;

/// Utility functions for hardware timing and platform-specific operations.
///
/// Includes the TSC-based tick counter and frequency calibration.
pub mod common;
pub use common::*;

/// Static task management module.
///
/// This module provides a mechanism for running the executor without
/// a dynamic memory allocator, utilizing static memory or stack-allocated
/// task nodes. Useful for highly constrained environments.
#[cfg(feature = "static")]
pub mod no_alloc;

/// Standard asynchronous executor implementation using `alloc`.
///
/// Provides the Executor and TaskNode types that rely on
/// `Box` and `Pin` for flexible task management.
/// Requires a global allocator to be defined.
#[cfg(feature = "alloc")]
pub mod dynamic;

/// Helper module for setting up a global allocator in UEFI.
///
/// When enabled, this module provides a bridge between the Rust
/// memory allocation API and the UEFI Boot Services memory allocation functions.
#[cfg(feature = "global-allocator")]
pub mod global_allocator;

/// Specialized, lightweight memory allocator for constrained systems.
///
/// A minimal allocator implementation designed to have a very small
/// footprint, specifically optimized for managing asynchronous task nodes.
#[cfg(feature = "nano-alloc")]
pub mod nano_alloc;