# preemptive-threads
A `#![no_std]` preemptive multithreading library for Rust, built from scratch for OS kernels and embedded systems.
[](https://crates.io/crates/preemptive-threads)
[](https://docs.rs/preemptive-threads)
[](LICENSE)
## Features
- **No standard library** - Built for embedded systems and OS kernels
- **Lock-free scheduling** - O(1) priority queue with atomic operations
- **Thread-safe** - No race conditions, proper synchronization
- **Priority scheduling** - 8 priority levels with round-robin within levels
- **Stack protection** - Guard pages with canary values and overflow detection
- **Static allocation** - No heap required, deterministic memory usage
- **Safe API** - High-level abstractions alongside low-level control
- **Platform timers** - Cross-platform preemption support (where available)
## What's New in v0.1.2
**Major architectural improvements:**
- **Lock-free atomic scheduler** replacing unsafe global singleton
- **Full CPU context saving** including FPU/SSE state
- **Signal-safe preemption** handler with deferred scheduling
- **Enhanced stack protection** with guard pages and watermark tracking
- **Safe API abstractions** - ThreadBuilder, Mutex, ThreadPool
- **Platform timer support** for cross-platform compatibility
## Installation
Add to your `Cargo.toml`:
```toml
[dependencies]
preemptive-threads = "0.1.2"
```
## Quick Start
### Safe API (Recommended)
```rust
#![no_std]
use preemptive_threads::{
ThreadBuilder, ATOMIC_SCHEDULER, Mutex, yield_now, preemption_checkpoint
};
static COUNTER: Mutex<u32> = Mutex::new(0);
fn worker_thread() {
for i in 0..100 {
// Safe mutex access with RAII
{
let mut counter = COUNTER.lock();
*counter += 1;
}
// Cooperative yield point
if i % 10 == 0 {
preemption_checkpoint();
}
}
}
fn main() {
// Create threads with builder pattern
let builder = ThreadBuilder::new()
.stack_size(128 * 1024) // 128KB stack
.priority(5) // Medium priority (0-7)
.name("worker");
// Note: Full thread spawning requires platform-specific features
// For now, use the atomic scheduler directly:
let stack = Box::leak(Box::new([0u8; 65536]));
ATOMIC_SCHEDULER.spawn_thread(stack, worker_thread, 5).unwrap();
// Run scheduler
for _ in 0..100 {
if let Some(thread_id) = ATOMIC_SCHEDULER.schedule() {
println!("Scheduled thread {}", thread_id);
}
preemption_checkpoint();
}
}
```
### Low-Level API
```rust
#![no_std]
use preemptive_threads::{ATOMIC_SCHEDULER, yield_thread};
static mut STACK1: [u8; 64 * 1024] = [0; 64 * 1024];
static mut STACK2: [u8; 64 * 1024] = [0; 64 * 1024];
fn high_priority_task() {
for i in 0..10 {
println!("High priority: {}", i);
yield_thread();
}
}
fn low_priority_task() {
for i in 0..10 {
println!("Low priority: {}", i);
yield_thread();
}
}
fn main() {
unsafe {
// Spawn threads with different priorities (0-7, higher = more priority)
ATOMIC_SCHEDULER.spawn_thread(&mut STACK1, high_priority_task, 7).unwrap();
ATOMIC_SCHEDULER.spawn_thread(&mut STACK2, low_priority_task, 3).unwrap();
// Manual scheduling loop
for i in 0..20 {
if let Some(thread_id) = ATOMIC_SCHEDULER.schedule() {
println!("Iteration {}: Running thread {}", i, thread_id);
// In a real implementation, you'd switch context here
}
}
}
}
```
### Protected Stacks
```rust
use preemptive_threads::{ProtectedStack, StackGuard, StackStatus};
fn main() {
static mut MEMORY: [u8; 8192] = [0; 8192];
unsafe {
let stack = ProtectedStack::new(&mut MEMORY, StackGuard::default());
match stack.check_overflow() {
StackStatus::Ok { used_bytes, free_bytes } => {
println!("Stack OK: {}B used, {}B free", used_bytes, free_bytes);
}
StackStatus::NearOverflow { bytes_remaining } => {
println!("WARNING: Only {}B remaining!", bytes_remaining);
}
StackStatus::Overflow { overflow_bytes } => {
println!("ERROR: Stack overflow by {}B!", overflow_bytes);
}
StackStatus::Corrupted { corrupted_bytes, .. } => {
println!("ERROR: Stack corrupted, {}B affected", corrupted_bytes);
}
}
// Get detailed statistics
let stats = stack.get_stats();
println!("Peak usage: {}B / {}B", stats.peak_usage, stats.usable_size);
}
}
```
### Platform Timer Integration
```rust
use preemptive_threads::{init_preemption_timer, stop_preemption_timer, preemption_checkpoint};
fn main() {
// Try to enable hardware preemption
match init_preemption_timer(10) { // 10ms intervals
Ok(()) => {
println!("Hardware preemption enabled");
// Your application code here
// The timer will automatically trigger scheduling
stop_preemption_timer();
}
Err(msg) => {
println!("Hardware preemption unavailable: {}", msg);
println!("Using cooperative scheduling with checkpoints");
// Insert preemption checkpoints manually
for i in 0..1000000 {
do_work(i);
if i % 1000 == 0 {
preemption_checkpoint(); // Yield control periodically
}
}
}
}
}
fn do_work(_i: usize) {
// CPU intensive work
}
```
## Architecture
### Lock-Free Atomic Scheduler
v0.1.2 introduces a completely rewritten scheduler using lock-free atomic operations:
- **Per-priority circular buffers** for O(1) enqueue/dequeue
- **Atomic bitmap** for instant highest-priority lookup
- **Compare-and-swap** operations for thread-safe access
- **Exponential backoff** for lock acquisition when needed
- **Per-CPU scheduling** support for future multi-core expansion
### Full Context Switching
Context switches now save complete CPU state:
- **All general-purpose registers** (rax, rbx, rcx, rdx, rsi, rdi, r8-r15)
- **Stack and base pointers** (rsp, rbp)
- **Flags register** (rflags) in correct order
- **FPU/SSE state** via FXSAVE/FXRSTOR instructions
- **Future AVX support** with runtime CPU feature detection
### Enhanced Stack Protection
Multiple layers of stack overflow protection:
- **Guard pages** filled with canary values
- **Bounds checking** against stack pointer
- **Watermark tracking** for high water mark analysis
- **Red zone support** for x86_64 ABI compliance
- **Multiple detection methods** for comprehensive coverage
## Performance
- **Context switching**: ~50-100 CPU cycles (depending on FPU state)
- **Scheduling**: O(1) with priority queues
- **Memory overhead**: ~64KB per thread (configurable)
- **Lock-free operations**: No mutex contention in scheduler
- **Cache-friendly**: Per-CPU local queues for better locality
## Platform Support
| Linux x86_64 | ✅ | ✅ | 🚧 Planned |
| macOS x86_64 | ✅ | ✅ | 🚧 Planned |
| Windows x86_64 | ✅ | ✅ | ❌ |
| Other architectures | ❌ | ❌ | ❌ |
## Requirements
- **Architecture**: x86_64 only (ARM64 planned)
- **Memory**: ~64KB per thread stack (configurable)
- **Threads**: Maximum 32 concurrent threads
- **Rust**: 1.70+ (for const generics and advanced features)
## Safety
This library provides both safe and unsafe APIs:
### Safe API
- `ThreadBuilder` - Safe thread creation patterns
- `Mutex<T>` - RAII mutex with automatic unlock
- `ThreadHandle` - Automatic cleanup on drop
- `ProtectedStack` - Bounds-checked stack operations
### Unsafe API
- Direct scheduler access for maximum performance
- Manual stack management for embedded use cases
- Raw context switching for OS kernel integration
## Limitations
- **Single-core only** (multi-core support planned)
- **x86_64 only** (ARM64 port in progress)
- **Static thread limit** (32 threads maximum)
- **No thread-local storage** (TLS planned)
- **Platform-specific preemption** (cooperative fallback available)
## Examples
See the `examples/` directory for complete working examples:
- `safe_threading.rs` - Safe API demonstration
- `platform_timer_demo.rs` - Timer integration
- `example.rs` - Basic low-level usage
- `stress_test.rs` - Multi-thread stress testing
## Contributing
Contributions welcome! Areas of interest:
- ARM64 architecture support
- Multi-core scheduling
- Platform-specific timer implementations
- Thread-local storage
- Real-time scheduling policies
## License
Licensed under either of:
- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE))
- MIT License ([LICENSE-MIT](LICENSE-MIT))
at your option.