# Clockworker
Clockworker, loosely inspired by Seastar, is a single-threaded async executor
with fair scheduling across multiple queues. It is agnostic to the underlying
async runtime and can sit on top of any runtime, such as Tokio, Monoio, or Smol.

**⚠️ Early/Alpha Release**: This project is in early development. APIs may change in breaking ways between versions. Use at your own risk.
## What is Clockworker for?
There is a class of settings where single-threaded async runtimes are a great fit.
Several such runtimes exist in the Rust ecosystem (Tokio, Monoio, Glommio, etc.),
but with the exception of Glommio, none of them let you run multiple configurable
work queues with different priorities. This matters for many real-world
single-threaded systems, at minimum to separate foreground and background work.
Clockworker aims to solve this problem.

It does so via work queues with configurable time shares onto which tasks can be
spawned. Clockworker uses an EEVDF-inspired scheduler: it chooses which queue to
poll based on its fair time share (inspired by Linux CFS/EEVDF), and then
executes tasks from that queue in FIFO order.

Note that Clockworker itself is just an executor loop, not a full async runtime,
and is designed to sit on top of any other runtime.
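To build intuition for how fair time shares drive queue selection, here is a minimal, self-contained sketch of virtual-runtime scheduling in the CFS/EEVDF style. This is an illustration only, not Clockworker's actual internals: the scheduler always picks the queue with the smallest virtual runtime, and each slice a queue runs is charged to it inversely to its weight, so heavier queues accrue virtual time more slowly and get picked more often.

```rust
use std::collections::HashMap;

// A queue with a scheduling weight and its accumulated virtual runtime.
struct SimQueue {
    name: &'static str,
    weight: u64,
    vruntime: u64,
}

// Run `slices` fixed-size time slices and return how often each queue ran.
// Each slice charges `SLICE / weight` virtual time, so a queue with weight 8
// accrues virtual runtime 8x slower than a queue with weight 1 and is
// therefore selected 8x more often in the long run.
fn simulate(slices: u32) -> HashMap<&'static str, u32> {
    const SLICE: u64 = 8; // divisible by both weights, keeps arithmetic exact
    let mut queues = vec![
        SimQueue { name: "foreground", weight: 8, vruntime: 0 },
        SimQueue { name: "background", weight: 1, vruntime: 0 },
    ];
    let mut runs = HashMap::new();
    for _ in 0..slices {
        // Pick the queue that has received the least virtual time so far.
        let q = queues.iter_mut().min_by_key(|q| q.vruntime).unwrap();
        *runs.entry(q.name).or_insert(0) += 1;
        q.vruntime += SLICE / q.weight;
    }
    runs
}

fn main() {
    // Over 900 slices the split is exactly 8:1 — 800 foreground, 100 background.
    println!("{:?}", simulate(900));
}
```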
## Features
- **EEVDF-based queue scheduling**: Fair CPU time distribution between queues using virtual runtime (inspired by Linux CFS/EEVDF)
- **FIFO task ordering**: Tasks within each queue execute in FIFO order
- **Task cancellation**: Abort running tasks via `JoinHandle::abort()`
- **Panic handling**: Configurable panic behavior (propagate or catch as `JoinError::Panic`)
- **Statistics**: Built-in metrics for monitoring executor and queue performance
## Quick Start
Add to your `Cargo.toml`:
```toml
[dependencies]
clockworker = "0.1.0"
```
## Examples
### Basic Usage
The simplest example: spawn a task and wait for it:
```rust
use clockworker::ExecutorBuilder;
use tokio::task::LocalSet;
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let local = LocalSet::new();
    local.run_until(async {
        // Create an executor with a single queue
        let executor = ExecutorBuilder::new()
            .with_queue(0, 1)
            .build()
            .unwrap();

        // Get a handle to the queue
        let queue = executor.queue(0).unwrap();

        // Run the executor until our task completes
        let result = executor.run_until(async {
            // Spawn a task
            let handle = queue.spawn(async {
                println!("Hello from clockworker!");
                42
            });

            // Wait for the task to complete
            handle.await
        }).await;

        println!("Task result: {:?}", result); // Ok(42)
    }).await;
}
```
### Multiple Queues with Different Weights
Allocate CPU time proportionally between queues:
```rust
use clockworker::{ExecutorBuilder, yield_maybe};
use tokio::task::LocalSet;
use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};
#[derive(Debug, PartialEq, Eq, Hash, Copy, Clone)]
enum Queue {
    Foreground,
    Background,
}
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let local = LocalSet::new();
    local.run_until(async {
        // Create an executor with two queues:
        // - Foreground: weight 8 (gets 8/9 of the CPU time)
        // - Background: weight 1 (gets 1/9 of the CPU time)
        // Note: queue IDs can be enums (as shown here), integers, strings, or
        // any type implementing QueueKey.
        let executor = ExecutorBuilder::new()
            .with_queue(Queue::Foreground, 8) // high-priority queue
            .with_queue(Queue::Background, 1) // low-priority queue
            .build()
            .unwrap();

        let high_count = Arc::new(AtomicU32::new(0));
        let low_count = Arc::new(AtomicU32::new(0));

        // Spawn tasks in both queues
        let high_queue = executor.queue(Queue::Foreground).unwrap();
        let low_queue = executor.queue(Queue::Background).unwrap();
        let high_clone = high_count.clone();
        let low_clone = low_count.clone();

        executor.run_until(async {
            high_queue.spawn({
                let count = high_clone;
                async move {
                    loop {
                        count.fetch_add(1, Ordering::Relaxed);
                        yield_maybe().await;
                    }
                }
            });
            low_queue.spawn({
                let count = low_clone;
                async move {
                    loop {
                        count.fetch_add(1, Ordering::Relaxed);
                        yield_maybe().await;
                    }
                }
            });

            // Run for a bit
            tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
        }).await;

        // After running for a bit, high_count should be ~8x low_count
        println!("High: {}, Low: {}",
            high_count.load(Ordering::Relaxed),
            low_count.load(Ordering::Relaxed));
    }).await;
}
```
### Task Cancellation
Cancel tasks using `JoinHandle::abort()`:
```rust
use clockworker::ExecutorBuilder;
use tokio::task::LocalSet;
use tokio::time::{sleep, Duration};
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let local = LocalSet::new();
    local.run_until(async {
        let executor = ExecutorBuilder::new()
            .with_queue(0, 1)
            .build()
            .unwrap();
        let queue = executor.queue(0).unwrap();

        executor.run_until(async {
            // Spawn a long-running task
            let handle = queue.spawn(async {
                loop {
                    println!("Running...");
                    sleep(Duration::from_millis(100)).await;
                }
            });

            // Cancel it after 500ms
            sleep(Duration::from_millis(500)).await;
            handle.abort();

            // Wait for the cancellation to complete
            let result = handle.await;
            assert!(result.is_err());
            println!("Task cancelled: {:?}", result);
        }).await;
    }).await;
}
```
### Panic Handling
By default, the executor itself panics when any of its tasks panics (the same
behavior as Tokio's single-threaded runtime). However, this can be configured:
```rust
use clockworker::{ExecutorBuilder, JoinError};
use tokio::task::LocalSet;
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let local = LocalSet::new();
    local.run_until(async {
        // Configure the executor to catch panics instead of crashing
        let executor = ExecutorBuilder::new()
            .with_queue(0, 1)
            .with_panic_on_task_panic(false) // catch panics
            .build()
            .unwrap();
        let queue = executor.queue(0).unwrap();

        let result = executor.run_until(async {
            let handle = queue.spawn(async {
                panic!("Something went wrong!");
            });
            handle.await
        }).await;

        // The panic is caught and returned as JoinError::Panic
        match result {
            Err(JoinError::Panic(err)) => println!("Task panicked: {}", err),
            _ => {}
        }
    }).await;
}
```
## Architecture
Clockworker uses a two-level scheduling approach based on EEVDF:
1. **Queue-level scheduling (EEVDF)**: Fairly distributes CPU time between queues based on their weights using virtual runtime
2. **Task-level scheduling (FIFO)**: Within each queue, tasks execute in FIFO order
This design allows you to:
- Allocate CPU resources between different workload classes (via queue weights)
- Ensure predictable task ordering within each queue
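The weight arithmetic is straightforward: a queue's long-run CPU share is its weight divided by the sum of all weights. The hypothetical helper below (not part of Clockworker's API) makes this concrete:

```rust
// Hypothetical helper (not part of Clockworker's API): map queue weights to
// the long-run CPU share each queue receives under fair scheduling.
fn cpu_shares(weights: &[u64]) -> Vec<f64> {
    let total: u64 = weights.iter().sum();
    weights.iter().map(|&w| w as f64 / total as f64).collect()
}

fn main() {
    // Weights 8 and 1, as in the multi-queue example above: shares 8/9 and 1/9.
    let shares = cpu_shares(&[8, 1]);
    println!("{:?}", shares);
}
```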
## Benchmarks
Clockworker includes several benchmarks to evaluate performance:
- **overhead**: Measures executor overhead
- **tail_latency**: Measures tail latency under various loads
- **poll_profile**: Profiles polling behavior
- **priority**: Tests priority queue behavior (Linux-specific)
### Running Benchmarks on Linux (via Docker)
Some benchmarks use Linux-specific features (e.g., `libc::setpriority`, io_uring). To run these on macOS or any platform, use Docker:
```bash
# Build the Docker image
make docker-build
# Run the priority benchmark
make docker-run-priority
# Run all benchmarks
make docker-run-all
# Or use docker directly
docker run clockworker-bench priority
```
See [README_DOCKER.md](README_DOCKER.md) for detailed Docker usage instructions.
## Requirements
- Rust 1.70+
- Works with any async runtime (tokio, smol, monoio, etc.) via `LocalSet` or similar
## License
Licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for details.