# Clockworker

Clockworker, loosely inspired by Seastar, is a single-threaded async executor with powerful, pluggable scheduling. Clockworker is agnostic to the underlying async runtime and can sit on top of any runtime, such as Tokio, Monoio, or Smol.

> ⚠️ **Early/alpha release:** This project is in early development. APIs may change in breaking ways between versions. Use at your own risk.
## What is Clockworker for?

There is a class of settings where single-threaded async runtimes are a great fit, and several such runtimes exist in the Rust ecosystem: Tokio, Monoio, Glommio, and others. But few of them (Glommio being the notable exception) can run multiple configurable work queues with different priorities. This matters for many real-world single-threaded systems, at minimum to separate foreground work from background work. Clockworker aims to solve this problem.
It does so via work queues with configurable time-shares onto which tasks can be spawned. Clockworker has a two-level scheduler: the top-level scheduler chooses the queue to poll based on its fair time share (inspired by Linux CFS/EEVDF), and then a task is chosen from that queue based on a queue-specific scheduler, which is fully pluggable—you can use one of the built-in schedulers or write your own by implementing a simple trait.
Note that Clockworker itself is just an executor loop, not a full async runtime, and is designed to sit on top of any other runtime.
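The "simple trait" for a per-queue scheduler can be pictured as two operations: record that a task became runnable, and pick the next task to run. The trait below is a toy stand-in to illustrate that shape (the crate's real trait and names will differ), together with a minimal FIFO implementation:

```rust
use std::collections::VecDeque;

// Stand-in for the crate's task handle type.
type TaskId = u64;

// Toy sketch of a pluggable per-queue scheduler trait; illustrative only.
trait TaskScheduler {
    fn on_runnable(&mut self, task: TaskId); // a task became ready to poll
    fn pick_next(&mut self) -> Option<TaskId>; // choose the next task to run
}

// Minimal FIFO policy: run tasks in the order they became runnable.
struct Fifo {
    ready: VecDeque<TaskId>,
}

impl TaskScheduler for Fifo {
    fn on_runnable(&mut self, task: TaskId) {
        self.ready.push_back(task);
    }
    fn pick_next(&mut self) -> Option<TaskId> {
        self.ready.pop_front()
    }
}

fn main() {
    let mut s = Fifo { ready: VecDeque::new() };
    s.on_runnable(1);
    s.on_runnable(2);
    assert_eq!(s.pick_next(), Some(1)); // first runnable runs first
    assert_eq!(s.pick_next(), Some(2));
    assert_eq!(s.pick_next(), None);
    println!("ok");
}
```

A custom scheduler would implement the same two hooks with a different ordering policy (priority heap, deadline order, and so on).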
## Features

- **EEVDF-based queue scheduling**: Fair CPU time distribution between queues using virtual runtime (inspired by Linux CFS/EEVDF)
- **Pluggable task schedulers**: Choose how tasks are ordered within each queue
- **Task cancellation**: Abort running tasks via `JoinHandle::abort()`
- **Panic handling**: Configurable panic behavior (propagate or catch as `JoinError::Panic`)
- **Statistics**: Built-in metrics for monitoring executor and queue performance
### Pre-written schedulers

- **LAS (Least Attained Service)**: Recommended for latency-sensitive workloads; good at minimizing tail latencies.
- **RunnableFifo**: Simple FIFO ordering based on when tasks become runnable; good for throughput.
- **ArrivalFifo**: FIFO ordering based on task arrival time.
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
clockworker = "0.1.0"
```
## Examples
### Basic Usage

The simplest example: spawn a task and wait for it. The snippet below is an illustrative sketch (import paths and the `spawn` method signature are assumptions; check the crate docs for the exact API):

```rust
use clockworker::Executor; // illustrative path
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Drive the executor inside a Tokio LocalSet so futures stay
    // on the current thread.
    let local = LocalSet::new();
    local
        .run_until(async {
            let exec = Executor::new().build(); // single default queue
            let handle = exec.spawn(async { 1 + 1 });
            let result = handle.await.unwrap();
            assert_eq!(result, 2);
        })
        .await;
}
```
### Multiple Queues with Different Weights

Allocate CPU time proportionally between queues. Again an illustrative sketch; the queue-name/weight builder arguments and `spawn_on` are assumptions:

```rust
use clockworker::{Executor, RunnableFifo}; // illustrative paths
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    LocalSet::new()
        .run_until(async {
            // "foreground" should receive ~3x the CPU time of "background".
            let exec = Executor::new()
                .with_queue("foreground", 3, RunnableFifo::new())
                .with_queue("background", 1, RunnableFifo::new())
                .build();

            let done = Arc::new(AtomicU64::new(0));
            let counter = done.clone();
            // Hypothetical per-queue spawn method.
            exec.spawn_on("background", async move {
                counter.fetch_add(1, Ordering::Relaxed);
            });
            exec.spawn_on("foreground", async { /* latency-sensitive work */ });
        })
        .await;
}
```
### Task Cancellation

Cancel tasks using `JoinHandle::abort()`. Sketch (method names other than `abort` are assumptions):

```rust
use clockworker::Executor; // illustrative path
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    LocalSet::new()
        .run_until(async {
            let exec = Executor::new().build();
            let handle = exec.spawn(async {
                loop {
                    tokio::task::yield_now().await; // never finishes on its own
                }
            });
            handle.abort(); // request cancellation
            assert!(handle.await.is_err()); // join resolves to a cancellation error
        })
        .await;
}
```
### Panic Handling

By default, the executor also panics when any of the tasks panic (same behavior as Tokio's single-threaded runtime). However, this can be configured. Sketch (the builder option name is an assumption):

```rust
use clockworker::Executor; // illustrative path
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    LocalSet::new()
        .run_until(async {
            // Hypothetical option: catch panics and surface them as
            // JoinError::Panic instead of aborting the executor.
            let exec = Executor::new().catch_panics(true).build();
            let handle = exec.spawn(async { panic!("boom") });
            assert!(handle.await.is_err()); // JoinError::Panic
        })
        .await;
}
```
### Task Grouping

Group related tasks together for better scheduling. This can be useful for policies across tenants, gRPC streams, noisy clients, or cases where a task spawns many child tasks and you want to make lineage-aware scheduling choices. Sketch (the group-related method names are assumptions):

```rust
use clockworker::Executor; // illustrative path
use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    LocalSet::new()
        .run_until(async {
            let exec = Executor::new().build();
            // Hypothetical grouping API: tasks in the same group are
            // scheduled as a unit relative to other groups.
            let tenant_a = exec.new_group();
            exec.spawn_in(&tenant_a, async { /* request 1 */ });
            exec.spawn_in(&tenant_a, async { /* request 2 */ });
        })
        .await;
}
```
## Choosing a Scheduler

### LAS (Least Attained Service) - Recommended for Latency

Use LAS when you need low latency and fair scheduling (sketch; the scheduler's import path and constructor are assumptions):

```rust
use clockworker::{Executor, Las}; // illustrative paths

let exec = Executor::new()
    .with_queue("interactive", 1, Las::new())
    .build();
```
LAS prioritizes tasks that have received the least CPU time, which helps ensure:
- Low tail latencies
- Fair CPU distribution within groups
- Better responsiveness for interactive workloads
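The LAS rule described above fits in a few lines: among the runnable tasks, always run the one that has consumed the least CPU time so far. A toy, self-contained illustration (not the crate's implementation):

```rust
// Toy LAS policy: pick the runnable task with the least attained service.
// tasks: (task_id, attained service in nanoseconds)
fn pick_las(tasks: &[(u64, u64)]) -> Option<u64> {
    tasks.iter().min_by_key(|&&(_, ns)| ns).map(|&(id, _)| id)
}

fn main() {
    // A freshly arrived task (0 ns attained) immediately wins over
    // long-running ones, which keeps short interactive tasks responsive.
    let tasks = [(1, 5_000_000), (2, 0), (3, 900_000)];
    assert_eq!(pick_las(&tasks), Some(2));
    println!("ok");
}
```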
### RunnableFifo

Use RunnableFifo for simple FIFO ordering (sketch; the import path is an assumption):

```rust
use clockworker::{Executor, RunnableFifo}; // illustrative paths

let exec = Executor::new()
    .with_queue("default", 1, RunnableFifo::new())
    .build();
```

Tasks are ordered by when they become runnable (not arrival time). If a task goes to sleep and wakes up, it goes to the back of the queue.
### ArrivalFifo

Use ArrivalFifo for strict arrival-time ordering (sketch; the import path is an assumption):

```rust
use clockworker::{ArrivalFifo, Executor}; // illustrative paths

let exec = Executor::new()
    .with_queue("default", 1, ArrivalFifo::new())
    .build();
```

Tasks maintain their position based on when they were first spawned, even if they go to sleep.
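The difference between the two FIFO policies comes down to the sort key: first-arrival time versus last-became-runnable time. A toy comparison (not the crate's implementation), where task A arrived first but slept and woke after B became runnable:

```rust
// Each task records when it was spawned and when it last became runnable.
#[derive(Clone, Copy)]
struct Task {
    id: char,
    arrival: u64,
    last_runnable: u64,
}

// Order tasks by the given key and return their ids.
fn order_by<K: Ord>(mut tasks: Vec<Task>, key: fn(&Task) -> K) -> Vec<char> {
    tasks.sort_by_key(|t| key(t));
    tasks.iter().map(|t| t.id).collect()
}

fn main() {
    let tasks = vec![
        Task { id: 'A', arrival: 0, last_runnable: 10 }, // slept, woke late
        Task { id: 'B', arrival: 5, last_runnable: 5 },
    ];
    // ArrivalFifo: A keeps its spot because it was spawned first.
    assert_eq!(order_by(tasks.clone(), |t| t.arrival), vec!['A', 'B']);
    // RunnableFifo: A goes to the back because it became runnable last.
    assert_eq!(order_by(tasks, |t| t.last_runnable), vec!['B', 'A']);
    println!("ok");
}
```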
## Architecture

Clockworker uses a two-level scheduling approach:

1. **Queue-level scheduling (EEVDF)**: Fairly distributes CPU time between queues based on their weights, using virtual runtime
2. **Task-level scheduling (pluggable)**: Within each queue, a scheduler (LAS, RunnableFifo, etc.) chooses which task to run next
This design allows you to:
- Allocate CPU resources between different workload classes (via queue weights)
- Control latency and fairness within each class (via task schedulers)
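The queue-level idea can be sketched with a few lines of virtual-runtime accounting: each queue's vruntime advances by (actual runtime / weight), and the queue with the smallest vruntime runs next, so a weight-3 queue receives roughly 3x the CPU of a weight-1 queue. A toy, self-contained illustration (not Clockworker's actual scheduler):

```rust
use std::collections::HashMap;

// Each queue tracks a weight and a virtual runtime.
struct Queue {
    name: &'static str,
    weight: u64,
    vruntime_ns: u64,
}

// Charge a queue for the time it actually ran, scaled down by its weight.
fn charge(q: &mut Queue, ran_ns: u64) {
    q.vruntime_ns += ran_ns / q.weight;
}

// The queue with the smallest virtual runtime runs next.
fn pick(queues: &mut [Queue]) -> &mut Queue {
    queues.iter_mut().min_by_key(|q| q.vruntime_ns).unwrap()
}

fn main() {
    let mut queues = vec![
        Queue { name: "fg", weight: 3, vruntime_ns: 0 },
        Queue { name: "bg", weight: 1, vruntime_ns: 0 },
    ];
    let mut runs: HashMap<&str, u64> = HashMap::new();
    for _ in 0..400 {
        let q = pick(&mut queues);
        *runs.entry(q.name).or_insert(0) += 1;
        charge(q, 1_000); // each slice runs for 1 microsecond
    }
    // fg should accumulate roughly 3x as many slices as bg.
    assert!(runs["fg"] > 2 * runs["bg"]);
    println!("fg={} bg={}", runs["fg"], runs["bg"]);
}
```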
## Requirements

- Rust 1.70+
- Works with any async runtime (Tokio, Smol, Monoio, etc.) via `LocalSet` or similar
## License

Licensed under the Apache License, Version 2.0. See LICENSE for details.