# Syncopate

A hierarchical, power-aware task scheduler for Rust applications requiring precise timing control.

## Overview
Syncopate provides a flexible scheduler for managing periodic tasks with configurable execution windows. It's designed for applications that need:
- Deterministic timing: Schedule tasks to run at specific intervals
- Execution windows: Define acceptable time ranges for task execution (early/on-time/late detection)
- Power efficiency: Idle durations calculated to minimize CPU wakeups
- Flexible contexts: Share state between tasks using custom context types
- Runtime flexibility: Support both single-threaded and multi-threaded async runtimes
## Quick Start

Add `syncopate` to your `Cargo.toml`:

```toml
[dependencies]
syncopate = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Examples
### Simple Callback-Based Usage

The callback-based API provides a clean, minimal interface where tasks execute automatically:

```rust
// Illustrative sketch; see the crate docs for exact item names.
use std::time::Duration;
use syncopate::{PeriodicTask, SchedulerLoop};
use tokio::time::sleep;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (mut scheduler, handle) = SchedulerLoop::new().build();

    handle.add_task(
        PeriodicTask::new(Duration::from_millis(100))
            .on_execute(|_execution, _ctx: &()| println!("tick")),
    )?;

    // poll() executes any due tasks and returns how long to sleep.
    loop {
        sleep(scheduler.poll()).await;
    }
}
```
Key improvements:

- `poll()` returns just a `Duration` (how long to sleep)
- No `mark_completed()` calls needed
- No `WakeupPlan` struct to inspect
- Simple loop: `loop { sleep(scheduler.poll()).await; }`
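The poll contract described above can be illustrated with a minimal std-only sketch (the `ToyScheduler` type here is hypothetical and not the crate's implementation):

```rust
use std::time::{Duration, Instant};

/// Toy poll-based scheduler: tracks a single periodic deadline and
/// returns how long the caller should sleep before polling again.
struct ToyScheduler {
    period: Duration,
    next_deadline: Instant,
}

impl ToyScheduler {
    fn new(period: Duration) -> Self {
        Self { period, next_deadline: Instant::now() + period }
    }

    /// Run any due work, then report the time remaining until the
    /// next deadline. The caller's only job is to sleep that long.
    fn poll(&mut self) -> Duration {
        let now = Instant::now();
        while now >= self.next_deadline {
            // A real scheduler would fire task callbacks here.
            self.next_deadline += self.period;
        }
        self.next_deadline - now
    }
}

fn main() {
    let mut scheduler = ToyScheduler::new(Duration::from_millis(10));
    // The entire driving loop: sleep for whatever poll() returns.
    for _ in 0..3 {
        std::thread::sleep(scheduler.poll());
    }
    println!("three wakeups completed");
}
```

Because `poll()` both fires due work and computes the idle duration, the caller needs no other bookkeeping.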
### Using Context for Shared State

Define a custom context to share state between tasks and your application:

```rust
// Illustrative sketch; see the crate docs for exact item names.
use std::sync::{Arc, Mutex};
use std::time::Duration;
use syncopate::{PeriodicTask, SchedulerLoop};
use tokio::time::sleep;

// Define your application context
#[derive(Default)]
struct AppContext {
    executions: Arc<Mutex<u64>>,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (mut scheduler, handle) = SchedulerLoop::new()
        .with_context(AppContext::default())
        .build();

    handle.add_task(
        PeriodicTask::new(Duration::from_millis(100))
            .on_execute(|_execution, ctx: &AppContext| {
                *ctx.executions.lock().unwrap() += 1;
            }),
    )?;

    loop {
        sleep(scheduler.poll()).await;
    }
}
```
### Single-Threaded (Local) Usage

For single-threaded async runtimes (like `tokio::task::spawn_local`), you can use `Rc<RefCell<T>>` instead of `Arc<Mutex<T>>`:

```rust
// Illustrative sketch; see the crate docs for exact item names.
use std::cell::RefCell;
use std::rc::Rc;
use std::time::Duration;
use syncopate::{PeriodicTask, SchedulerLoop};
use tokio::time::sleep;

// Context with Rc/RefCell for single-threaded use
#[derive(Default)]
struct LocalContext {
    executions: Rc<RefCell<u64>>,
}

async fn run_scheduler() -> Result<(), Box<dyn std::error::Error>> {
    let mut scheduler = SchedulerLoop::new()
        .with_context(LocalContext::default())
        .build_local();

    scheduler.add_task_local(
        PeriodicTask::new(Duration::from_millis(100))
            .on_execute(|_execution, ctx: &LocalContext| {
                *ctx.executions.borrow_mut() += 1;
            }),
    )?;

    loop {
        sleep(scheduler.poll()).await;
    }
}
```
## Core Concepts

### Callback-Based Execution

Tasks execute automatically via callbacks during `poll()`:

- `on_execute`: Called when the task is executed (receives `TaskExecution` with drift info and `&Context`)
- `on_miss`: Called when the task misses its window (receives `TaskMiss` with miss count and `&Context`)
- Callbacks are synchronous; keep them fast to avoid blocking the scheduler
- For async work, spawn tasks from within the callback
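The "spawn from the callback" pattern can be sketched with std primitives (the `spawn_offload_worker` helper is hypothetical; a real application would use its async runtime's spawn instead of a thread):

```rust
use std::sync::mpsc::{self, Sender};
use std::thread::{self, JoinHandle};

/// Spawn a worker thread and return a sender that a scheduler callback
/// can use to enqueue work without blocking. The worker sums the jobs
/// it receives and returns the total once the channel closes.
fn spawn_offload_worker() -> (Sender<u64>, JoinHandle<u64>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || rx.iter().sum());
    (tx, handle)
}

fn main() {
    let (tx, worker) = spawn_offload_worker();

    // The synchronous callback stays cheap: it only enqueues the job.
    let on_execute = move |tick: u64| tx.send(tick).expect("worker alive");

    for tick in 1..=3 {
        on_execute(tick);
    }
    drop(on_execute); // drops the sender, letting the worker finish

    println!("worker total: {}", worker.join().unwrap());
}
```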
### Custom Contexts

Define your own context type to share state:

- Multi-threaded: Use `Arc<Mutex<T>>` or `Arc<RwLock<T>>`; requires `Send + Sync`
- Single-threaded: Use `Rc<RefCell<T>>`; no `Send + Sync` required
- Flexible structure: Define any fields you need (counters, queues, configuration, etc.)
- Zero overhead: Unit context `()` when no shared state is needed
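The multi-threaded flavor can be sketched in plain std Rust (the `AppContext` shape and `record_execution` helper are hypothetical; a real `on_execute` body would do the same thing with the `&Context` it receives):

```rust
use std::sync::{Arc, Mutex};

/// Example application context shared between tasks.
#[derive(Default)]
struct AppContext {
    executions: u64,
}

/// What an `on_execute` callback body might do with the shared context.
fn record_execution(ctx: &Arc<Mutex<AppContext>>) {
    ctx.lock().unwrap().executions += 1;
}

fn main() {
    // Arc<Mutex<T>> satisfies the Send + Sync requirement.
    let ctx = Arc::new(Mutex::new(AppContext::default()));
    record_execution(&ctx);
    record_execution(&ctx);
    println!("executions: {}", ctx.lock().unwrap().executions); // 2
}
```

The single-threaded flavor is identical in shape, with `Rc<RefCell<AppContext>>` and `borrow_mut()` in place of the lock.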
### Periodic Tasks

Tasks are defined with:

- `period`: How often the task should execute
- `window_before`: How early the task can execute before its ideal time
- `window_after`: How late the task can execute after its ideal time
- `priority`: Lower values = higher priority for conflict resolution
- `on_execute`: Optional callback for automatic execution (receives `TaskExecution` and `&Context`)
- `on_miss`: Optional callback for deadline violations (receives `TaskMiss` and `&Context`)
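The timing fields above can be sketched as a plain struct (the `PeriodicTaskSpec` type is hypothetical, not the crate's task type; it only illustrates how the fields relate):

```rust
use std::time::Duration;

/// Sketch of the timing fields a periodic task carries.
struct PeriodicTaskSpec {
    period: Duration,        // how often the task should execute
    window_before: Duration, // how early it may run before the ideal time
    window_after: Duration,  // how late it may run after the ideal time
    priority: u8,            // lower value = higher priority
}

impl PeriodicTaskSpec {
    /// Total span of the acceptable execution window around an ideal
    /// time t: [t - window_before, t + window_after].
    fn window_len(&self) -> Duration {
        self.window_before + self.window_after
    }
}

fn main() {
    let task = PeriodicTaskSpec {
        period: Duration::from_millis(100),
        window_before: Duration::from_millis(5),
        window_after: Duration::from_millis(10),
        priority: 0,
    };
    println!("acceptable window spans {:?}", task.window_len());
}
```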
### Scheduler Bounds

The scheduler enforces minimum and maximum periods:

- Tasks with periods below `min_period` are rejected
- Tasks with periods above `max_period` are rejected
- When no tasks are scheduled, the scheduler sleeps for `max_period`
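The rejection rule reduces to a simple range check (the `validate_period` helper is hypothetical, shown only to make the rule concrete):

```rust
use std::time::Duration;

/// Reject task periods outside the scheduler's configured bounds.
fn validate_period(
    period: Duration,
    min_period: Duration,
    max_period: Duration,
) -> Result<(), String> {
    if period < min_period {
        Err(format!("period {:?} below minimum {:?}", period, min_period))
    } else if period > max_period {
        Err(format!("period {:?} above maximum {:?}", period, max_period))
    } else {
        Ok(())
    }
}

fn main() {
    let (min, max) = (Duration::from_millis(1), Duration::from_secs(60));
    assert!(validate_period(Duration::from_millis(100), min, max).is_ok());
    assert!(validate_period(Duration::from_micros(10), min, max).is_err());
    println!("bounds enforced");
}
```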
### Execution Categories

Tasks are classified based on actual vs. ideal timing:

- Early: Executed before `ideal - window_before`
- On-Time: Executed within `[ideal - window_before, ideal + window_after]`
- Late: Executed after `ideal + window_after`
- Missed: Never executed within the window
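The classification of executed tasks can be written out directly (a sketch using millisecond offsets for simplicity; the `classify` function and `ExecutionCategory` enum here are illustrative, not the crate's types):

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum ExecutionCategory {
    Early,
    OnTime,
    Late,
}

/// Classify an execution by comparing actual vs. ideal time offsets
/// (in milliseconds) against the task's window.
fn classify(
    ideal_ms: i64,
    actual_ms: i64,
    window_before: Duration,
    window_after: Duration,
) -> ExecutionCategory {
    let drift = actual_ms - ideal_ms;
    if drift < -(window_before.as_millis() as i64) {
        ExecutionCategory::Early
    } else if drift > window_after.as_millis() as i64 {
        ExecutionCategory::Late
    } else {
        ExecutionCategory::OnTime
    }
}

fn main() {
    let (before, after) = (Duration::from_millis(5), Duration::from_millis(10));
    println!("{:?}", classify(100, 98, before, after));  // OnTime
    println!("{:?}", classify(100, 90, before, after));  // Early
    println!("{:?}", classify(100, 120, before, after)); // Late
}
```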
## API Design

### Multi-Threaded Usage (with Handle)

```rust
let (mut scheduler, handle) = SchedulerLoop::new()
    .with_context(context)
    .build();

// Add tasks from any thread via the handle
handle.add_task(task)?;

// Run scheduler loop
loop {
    sleep(scheduler.poll()).await;
}
```
Requirements:

- Context must implement `Send + Sync + 'static`
- Callbacks must be `Send + Sync`
- Use `Arc<Mutex<T>>` for shared state
### Single-Threaded Usage (no Handle)

```rust
let mut scheduler = SchedulerLoop::new()
    .with_context(context)
    .build_local();

// Add tasks directly
scheduler.add_task_local(task)?;

// Run scheduler loop
loop {
    sleep(scheduler.poll()).await;
}
```
Benefits:

- No `Send + Sync` requirements
- Can use `Rc<RefCell<T>>`
- Simpler for single-threaded runtimes
## Architecture

Syncopate uses a poll-based design:

- `SchedulerLoop`: Core scheduling logic, single-threaded owner
- `SchedulerHandle`: Cloneable handle for adding tasks from any thread (optional)
- Context: User-defined type shared between tasks and application
- `BinaryHeap`: Tasks ordered by deadline for efficient scheduling
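The deadline-ordered heap is the standard earliest-deadline-first building block; a minimal std sketch (the `next_deadlines` helper is illustrative) shows why `Reverse` is needed with Rust's max-heap:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Std's BinaryHeap is a max-heap, so wrap entries in `Reverse` to pop
/// the earliest deadline first. Returns deadlines in pop order.
fn next_deadlines(deadlines_ms: &[u64]) -> Vec<u64> {
    let mut heap: BinaryHeap<Reverse<u64>> =
        deadlines_ms.iter().copied().map(Reverse).collect();
    let mut order = Vec::new();
    while let Some(Reverse(deadline)) = heap.pop() {
        order.push(deadline);
    }
    order
}

fn main() {
    // Whatever order tasks were added in, the earliest deadline pops first.
    println!("{:?}", next_deadlines(&[300, 100, 200])); // [100, 200, 300]
}
```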
## Benchmarks

Run the benchmark example to measure timing accuracy; it demonstrates using context to collect execution statistics.
## Planned Features

The current implementation provides core callback-based scheduling with context support. Future enhancements are planned based on the High-Level Design document:

### Core Enhancements
- **One-Shot Tasks**: Single-execution tasks with monotonic or wall-clock deadlines
  - `TaskType::OneShot` with `Deadline::Monotonic(Instant)` and `Deadline::WallClock(SystemTime)`
  - Automatic removal after execution
  - Clock-jump detection for wall-clock deadlines
- **Priority Lanes**: Multi-level priority queues with EDF within each level
  - Configurable number of priority levels
  - Optional priority aging to prevent starvation
  - Priority-first, deadline-second scheduling
- **Task Lifecycle Management**: Enhanced task control
  - Pause/resume tasks without removing them
  - Modify task configuration via `TaskPatch`
  - Task removal by ID
- **Task Dependencies**: Express "task B must run after task A"
  - DAG-based scheduling within a single poll cycle
  - Dependency validation at task creation
- **Task Groups**: Atomic execution
  - All tasks in a group fire together or not at all
  - Useful for coordinated multi-task operations
### Advanced Scheduling

- **Two-Tier Architecture**: Separate precision and efficiency modes
  - Precision Tier: O(log n) EDF peek for sub-millisecond scheduling (10-100 μs periods)
  - Efficiency Tier: O(n log n) weighted interval coalescing for power saving (1 ms+ periods)
  - Tier selection at scheduler construction
- **Task Coalescing (Efficiency Tier)**: Batch tasks within overlapping windows
  - Weighted interval sweep algorithm (O(n log n))
  - Priority-aware coalescing decisions
  - Minimize wakeups for power management
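The core of such a sweep can be sketched in a few lines (a simplified version: it merges overlapping windows only, ignoring the priority weighting the planned algorithm would add; `coalesce_windows` is a hypothetical helper):

```rust
/// Coalesce overlapping execution windows into single wakeups: sort by
/// window start, then merge any window that begins before the current
/// merged window ends. Windows are (start_ms, end_ms) pairs.
fn coalesce_windows(mut windows: Vec<(u64, u64)>) -> Vec<(u64, u64)> {
    windows.sort_by_key(|w| w.0);
    let mut merged: Vec<(u64, u64)> = Vec::new();
    for (start, end) in windows {
        match merged.last_mut() {
            // Overlaps the previous window: extend it instead of waking again.
            Some(last) if start <= last.1 => last.1 = last.1.max(end),
            _ => merged.push((start, end)),
        }
    }
    merged
}

fn main() {
    // Three task windows (ms); the first two overlap, so one wakeup serves both.
    let wakeups = coalesce_windows(vec![(95, 110), (100, 120), (200, 210)]);
    println!("{:?}", wakeups); // [(95, 120), (200, 210)]
}
```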
- **Hierarchical Sub-Schedulers**: Parent-child scheduler relationships
  - Tree of schedulers with period constraints (child period ≥ parent period)
  - Task isolation between scheduler levels
  - Multi-level hierarchy support
- **Period Negotiation**: Sub-schedulers can request parent period changes
  - Request/response protocol via channels
  - Global period recomputation using GCD on integer nanoseconds
  - Per-child allow/deny policies
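The GCD recomputation mentioned above is straightforward on integer nanoseconds (the `global_period_ns` helper is illustrative):

```rust
/// Euclid's GCD: the recomputed global period is the largest tick that
/// divides every task's period exactly.
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd(b, a % b) }
}

/// Fold GCD over all task periods; None when no tasks are registered.
fn global_period_ns(periods_ns: &[u64]) -> Option<u64> {
    periods_ns.iter().copied().reduce(gcd)
}

fn main() {
    // Tasks at 100 ms, 250 ms, and 1 s share a 50 ms base tick.
    let periods = [100_000_000, 250_000_000, 1_000_000_000];
    println!("{:?}", global_period_ns(&periods)); // prints Some(50000000)
}
```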
### Observability & Integration

- **Tracing Integration**: `tracing` crate for observability
  - Scheduling spans for each poll cycle
  - Task execution and miss events
  - Performance instrumentation
- **Metrics Export**: Production monitoring
  - Prometheus metrics endpoint
  - Grafana dashboards
  - Key metrics: task execution counts, miss rates, coalescing efficiency, idle duration
- **Statistics API**: Runtime performance visibility
  - `SchedulerStats` type with total tasks, polls, misses
  - Average tasks per wakeup (coalescing efficiency)
  - Current computed period (GCD of task periods)
- **Energy Profiling**: Measure actual power consumption
  - Per-coalescing-strategy power usage
  - Hardware-specific optimizations
  - Integration with system power APIs
### Performance & Platform Support

- **no_std Support**: Target embedded systems
  - Remove the `std::time` dependency
  - Generic clock abstraction
  - Support for Cortex-M and other embedded platforms
  - Arena allocator to avoid heap allocation
- **Runtime Abstraction**: `Clock` and `Sleeper` traits
  - `Clock` trait for time sources (testing, embedded)
  - `Sleeper` trait for async/blocking sleep strategies
  - Feature flags: `tokio-sleep`, `std-sleep`
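A clock abstraction of this kind might look as follows (a sketch under assumptions: the trait shape and the `SystemClock`/`MockClock` types are guesses at the planned design, not existing crate items):

```rust
use std::time::{Duration, Instant};

/// Swappable time source: real clock in production, deterministic
/// clock in tests, hardware timer on embedded targets.
trait Clock {
    /// Time elapsed since some fixed epoch.
    fn now(&self) -> Duration;
}

/// Production clock backed by std's monotonic Instant.
struct SystemClock {
    epoch: Instant,
}

impl Clock for SystemClock {
    fn now(&self) -> Duration {
        self.epoch.elapsed()
    }
}

/// Deterministic clock for tests: advances only when told to.
struct MockClock {
    now: Duration,
}

impl Clock for MockClock {
    fn now(&self) -> Duration {
        self.now
    }
}

fn main() {
    let system = SystemClock { epoch: Instant::now() };
    let mut mock = MockClock { now: Duration::ZERO };
    mock.now += Duration::from_millis(5);
    println!("system: {:?}, mock: {:?}", system.now(), mock.now());
}
```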
- **SIMD Coalescing**: Vectorize the interval sweep
  - For very large task sets (10,000+ tasks)
  - Platform-specific optimizations (ARM NEON, x86 AVX2)
- **Arena Allocator**: Avoid per-task heap allocation
  - Indexed storage with generation counters
  - Improved cache locality
  - Reduced memory fragmentation
- **Distributed Scheduling**: Multi-process coordination
  - Shared-memory communication
  - Distributed coalescing algorithms
  - Process-level hierarchy
### Research & Verification

- **Dynamic Priority Adjustment**: Adaptive scheduling
  - User-defined priority functions
  - Adaptive priority based on miss rates
  - Machine learning integration for workload prediction
- **Configuration Files**: YAML/JSON task definitions
  - Declarative task specification
  - Hot-reload support
  - Schema validation
- **Formal Verification**: Mathematical correctness proofs
  - TLA+ model of the coalescing algorithm
  - Prove the no-starvation property with aging enabled
  - Model checking for deadlock freedom
For the detailed design and implementation roadmap, see `scheduler-hld.md`.
## License

MIT OR Apache-2.0