//! uefi-async
//! ================================
//! A lightweight, zero-cost asynchronous executor designed for UEFI environments and bare-metal Rust. It provides a simple task scheduler based on an intrusive linked list and a procedural macro to simplify task registration.
//!
//! --------------------------------
//! Work in Progress
//! --------------------------------
//! Currently, only the `nano_alloc` feature is supported.
//!
//! --------------------------------
//! Features
//! --------------------------------
//! * **No-Std Compatible**: Designed for environments without a standard library (requires `alloc`).
//! * **Intrusive Linked-List**: No additional collection overhead for managing tasks.
//! * **Frequency-Based Scheduling**: Define tasks to run at specific frequencies (Hz), automatically converted to hardware ticks.
//! * **Macro-Driven Syntax**: A clean, declarative DSL to assign tasks to executors.
//! * **Tiny Control Primitives**: Comprehensive support for timeouts, joins, and hardware-precise timing.
//! * **Safe Signaling**: Cross-core event notification with atomic state transitions.
//! * **Multicore-Ready**: Thread-safe primitives for cross-core signaling and data synchronization.
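//!
//! The frequency-to-tick conversion mentioned above can be sketched roughly as follows. This is an illustrative model only; the function name `hz_to_ticks` and the tick rate are assumptions for the sketch, not the crate's actual API:
//!
//! ```rust
//! /// Convert a task frequency in Hz into a period measured in hardware ticks
//! /// (illustrative sketch; not the crate's actual implementation).
//! /// A frequency of 0 conventionally means "run on every executor pass".
//! fn hz_to_ticks(hz: u64, tick_rate: u64) -> u64 {
//!     if hz == 0 { 0 } else { tick_rate / hz }
//! }
//!
//! fn main() {
//!     // Assume a 3 GHz timestamp counter: a 60 Hz task fires every 50_000_000 ticks.
//!     let tick_rate: u64 = 3_000_000_000;
//!     assert_eq!(hz_to_ticks(60, tick_rate), 50_000_000);
//!     assert_eq!(hz_to_ticks(0, tick_rate), 0); // every tick
//! }
//! ```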
//!
//! --------------------------------
//! Tiny Async Control Flow
//! --------------------------------
//!
//! ### 1. High-Precision Timing
//!
//! Support for human-readable time units and hardware-aligned synchronization.
//!
//! ```rust
//! async fn timer() {
//!     WaitTimer::from_ms(500).await; // Explicit timer
//!     2.year().await;                // Natural-language units
//!     1.mins().await;
//!     80.ps().await;                 // Picosecond precision (CPU frequency dependent)
//!     20.fps().await;                // Framerate-locked synchronization
//!     Yield.await;                   // Voluntary cooperative yield
//!     Skip(2).await;                 // Skip N executor cycles
//! }
//! ```
//! ### 2. Task Completion & Concurrency
//!
//! Powerful macros and traits to combine multiple futures.
//!
//! * **`join!`**: Runs multiple tasks concurrently; returns `()`.
//! * **`try_join!`**: Short-circuits and returns `Err` if any task fails.
//! * **`join_all!`**: Collects results from all tasks into a flattened tuple.
//! * **Trait-based Joins**: Call `.join().await` or `.try_join().await` directly on Tuples, Arrays, or Vectors.
//!
//! ```rust
//! async fn async_task() {
//!     // Join tasks into a single state machine on the stack
//!     join!(calc_1(), calc_2(), ...).await;
//!
//!     // Flattened result collection
//!     let (a, b, c, ..) = join_all!(init_fs(), check_mem(), init_net()).await;
//! }
//! ```
//!
//! ### 3. Timeouts and Guarding
//!
//! ```rust
//! async fn timeout_example() {
//!     // Built-in timeout support for any Future
//!     match my_task().timeout(500).await {
//!         Ok(val) => handle(val),
//!         Err(_) => handle_timeout(),
//!     }
//! }
//! ```
//!
//! ### 4. Advanced Execution Pacing
//!
//! The `Pacer` allows you to strictly control the "rhythm" of your loops, essential for smooth 3D rendering or UI animations.
//!
//! ```rust
//! async fn paced_loop() {
//!     let mut pacer = Pacer::new(60); // Target 60 FPS
//!     loop {
//!         pacer.burst(20).await;      // Allow a burst of 20 cycles
//!         pacer.throttle().await;     // Slow down to match target frequency
//!         pacer.step(10, true).await; // Step-based pacing
//!     }
//! }
//! ```
//!
//! ### 5. Oneshot, Channel, and Signal
//!
//! ```rust
//! static ASSET_LOADED: Signal<TextureHandle> = Signal::new();
//!
//! async fn background_loader() {
//!     let texture = load_texture_gop("logo.bmp").await;
//!     // Notify the renderer that the texture is ready
//!     ASSET_LOADED.signal(texture);
//! }
//!
//! async fn renderer_task() {
//!     // Suspend execution until the signal is triggered
//!     let texture = ASSET_LOADED.wait().await;
//!     draw_to_screen(texture);
//! }
//! ```
//!
//! ```rust
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     // 1. Create a channel for keyboard events with a capacity of 32
//!     let (tx, mut rx) = bounded_channel::<Key>(32);
//!
//!     add!(
//!         executor => {
//!             // Task A: Producer - polls hardware at a high frequency (e.g., 100 Hz)
//!             100 -> async move {
//!                 loop {
//!                     if let Some(key) = poll_keyboard() {
//!                         tx.send(key); // Non-blocking send
//!                     }
//!                     Yield.await;
//!                 }
//!             },
//!
//!             // Task B: Consumer - processes game logic
//!             0 -> async move {
//!                 loop {
//!                     // The await point suspends the task if the queue is empty.
//!                     // Execution resumes as soon as the producer sends data
//!                     // and the executor polls this task again.
//!                     let key = (&mut rx).await;
//!                     process_game_logic(key);
//!                 }
//!             }
//!         }
//!     );
//! }
//! ```
//!
//! --------------------------------
//! Multicore & Multi-Scheduler Concurrency
//! --------------------------------
//! `uefi-async` enables seamless and safe parallel execution across multiple cores and schedulers, providing a robust suite of synchronization and control primitives designed to handle the complexities of asynchronous multicore tasking.
//!
//! ### Thread-Safe Asynchronous Primitives
//!
//! To ensure data integrity and prevent race conditions during parallel execution, the framework provides three specialized pillars:
//!
//! * **Event-based Futures (Event Listening):** Designed for non-blocking coordination, these futures allow tasks to react to external signals or hardware interrupts across different cores without polling.
//! * **Synchronization Primitives (Data Integrity):** Reliable data sharing is critical when multiple schedulers access the same memory space. We provide thread-safe containers and locks like **Async Mutexes** and **Atomic Shared States** specifically tuned for UEFI.
//! * **Task Control Futures (Execution Management):** Granular control over the lifecycle of parallel tasks. This includes **Structured Concurrency** to spawn, join, or cancel tasks across different schedulers, and **Priority Steering** to direct critical tasks to specific cores.
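//!
//! As a rough illustration of the atomic state transitions behind cross-core signaling, the following standalone sketch shows a one-shot flag built on `compare_exchange`. It is a simplified model with an assumed two-state encoding, not the crate's actual `Signal` type:
//!
//! ```rust
//! use core::sync::atomic::{AtomicU8, Ordering};
//!
//! const EMPTY: u8 = 0;
//! const SET: u8 = 1;
//!
//! /// A minimal one-shot flag: racing cores transition it exactly once.
//! struct RawSignal(AtomicU8);
//!
//! impl RawSignal {
//!     const fn new() -> Self { RawSignal(AtomicU8::new(EMPTY)) }
//!
//!     /// Returns true only for the single caller that wins the EMPTY -> SET transition.
//!     fn signal(&self) -> bool {
//!         self.0
//!             .compare_exchange(EMPTY, SET, Ordering::AcqRel, Ordering::Acquire)
//!             .is_ok()
//!     }
//!
//!     fn is_set(&self) -> bool {
//!         self.0.load(Ordering::Acquire) == SET
//!     }
//! }
//!
//! fn main() {
//!     static S: RawSignal = RawSignal::new();
//!     assert!(S.signal());  // first signal wins the transition
//!     assert!(!S.signal()); // later attempts observe the SET state
//!     assert!(S.is_set());
//! }
//! ```
//!
//! A real awaitable signal would additionally store a waker so the executor can resume the waiting task, but the cross-core ownership guarantee comes from exactly this kind of compare-and-swap.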
//!
//!
//!
//! --------------------------------
//! Installation
//! --------------------------------
//! Add this to your `Cargo.toml`:
//!
//! ```toml
//! [dependencies]
//! uefi-async = "*"
//! ```
//!
//! --------------------------------
//! Usage
//! --------------------------------
//!
//! ### 1. Define your tasks
//!
//! Tasks are standard Rust `async` functions or closures.
//!
//! ### 2. Initialize and Run
//!
//! Use the `add!` macro to set up your executor.
//!
//! ```rust
//! extern crate alloc;
//! use alloc::boxed::Box;
//! use core::ffi::c_void;
//! use uefi_async::nano_alloc::{Executor, TaskNode};
//!
//! async fn calc_1() {}
//! async fn calc_2() {}
//!
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     // 1. Create executor
//!     Executor::new()
//!         // 2. Register tasks
//!         .add(&mut TaskNode::new(Box::pin(calc_1()), 0))
//!         .add(&mut TaskNode::new(Box::pin(calc_2()), 60))
//!         // 3. Run the event loop
//!         .run_forever();
//! }
//! ```
//!
//! Or, for more advanced usage:
//!
//! ```rust
//! extern crate alloc;
//! use core::ffi::c_void;
//! use uefi_async::nano_alloc::{Executor, add};
//! use uefi_async::util::tick;
//!
//! async fn af1() {}
//! async fn af2(_: usize) {}
//! async fn af3(_: usize, _: usize) {}
//!
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     if arg.is_null() { return }
//!     let ctx = unsafe { &mut *arg.cast::<Context>() };
//!     let core = ctx.mp.who_am_i().expect("Failed to get core ID");
//!
//!     // 1. Create executors
//!     let mut executor1 = Executor::new();
//!     let mut executor2 = Executor::new();
//!     let mut cx = Executor::init_step();
//!
//!     let offset = 20;
//!     // 2. Use the macro to register tasks
//!     //    Syntax: executor => { frequency -> future }
//!     add!(
//!         executor1 => {
//!             0 -> af1(),      // Runs on every tick
//!             60 -> af2(core), // Runs at 60 Hz
//!         },
//!         executor2 => {
//!             10u64.saturating_sub(offset) -> af3(core, core),
//!             30 + 10 -> af1(),
//!         },
//!     );
//!
//!     loop {
//!         calc_sync(core);
//!
//!         // 3. Run the event loop manually
//!         executor1.run_step(tick(), &mut cx);
//!         executor2.run_step(tick(), &mut cx);
//!     }
//! }
//! ```
//!
//! ### 3. Using various control flows, signals, and channels
//!
//! ```rust
//! // Example: producer task on Core 1, consumer task on Core 0
//! extern "efiapi" fn process(arg: *mut c_void) {
//!     if arg.is_null() { return }
//!     let ctx = unsafe { &mut *arg.cast::<Context>() };
//!     let core = ctx.mp.who_am_i().expect("Failed to get core ID");
//!
//!     let (tx, rx) = unbounded_channel::channel::<PhysicsResult>();
//!     let mut executor = Executor::new();
//!     if core == 1 {
//!         add!(executor => { 20 -> producer(tx) });
//!         executor.run_forever();
//!     }
//!     if core == 0 {
//!         add!(executor => { 0 -> consumer(rx) });
//!         executor.run_forever();
//!     }
//! }
//!
//! // Task running on Core 1's executor
//! async fn producer(tx: ChannelSender<PhysicsResult>) {
//!     let result = heavy_physics_calculation();
//!     tx.send(result); // Non-blocking atomic push
//! }
//!
//! // Task running on Core 0's executor
//! async fn consumer(rx: ChannelReceiver<PhysicsResult>) {
//!     // This will return Poll::Pending and yield the CPU if the queue is empty,
//!     // allowing the executor to run other tasks (like UI rendering).
//!     let data = rx.await;
//!     update_gpu_buffer(data);
//! }
//! ```
//!
//! --------------------------------
//! Why use `uefi-async`?
//! --------------------------------
//! In UEFI development, manually managing multiple periodic tasks (polling keyboard input while updating a UI, or handling network packets) quickly leads to spaghetti code. `uefi-async` lets you write clean, linear `async/await` code while the executor ensures timing constraints are met, without a heavy OS-like scheduler.
//!
//! --------------------------------
//! License
//! --------------------------------
//! MIT or Apache-2.0.
extern crate alloc;
/// Utility functions for hardware timing and platform-specific operations.
///
/// Includes the TSC-based tick counter and frequency calibration.
pub mod util;
/// Static task management module.
///
/// This module provides a mechanism for running the executor without
/// a dynamic memory allocator, utilizing static memory or stack-allocated
/// task nodes. Useful for highly constrained environments.
/// Standard asynchronous executor implementation using `alloc`.
///
/// Provides the Executor and TaskNode types that rely on
/// `Box` and `Pin` for flexible task management.
/// Requires a global allocator to be defined.
/// Helper module for setting up a global allocator in UEFI.
///
/// When enabled, this module provides a bridge between the Rust
/// memory allocation API and the UEFI Boot Services memory allocation functions.
/// Specialized, lightweight memory allocator for constrained systems.
///
/// A minimal allocator implementation designed to have a very small
/// footprint, specifically optimized for managing asynchronous task nodes.