# Tokio Actors

Zero-ceremony, Tokio-native actors with strong typing and production-ready edge-case handling.
Tokio Actors is a lightweight actor framework built for Rust developers who want predictable concurrency without the complexity. Every actor runs as a dedicated tokio::task on your multi-threaded runtime -- no custom schedulers, no hidden magic.
## Why Tokio Actors?
### Strongly Typed

Message and response types are enforced at compile time. No runtime type casting, no `Any` trait abuse.
### Bounded Mailboxes = Natural Backpressure

Every actor has a bounded mailbox (default: 64). When full, senders wait automatically -- no OOM crashes from runaway queues.
### Timer Drift Handling (MissPolicy)

Recurring timers have three drift strategies to handle system lag:

- `Skip`: Jump to the next aligned tick
- `CatchUp`: Send all missed messages immediately
- `Delay`: Reset the timer from now
This is the kind of edge-case thinking production systems need.
### OTP-Style Supervision

Supervisors automatically restart failed children using restart strategies from Erlang/OTP:

- `OneForOne`: Restart only the failed child
- `OneForAll`: Restart all children when any one fails
- `RestForOne`: Restart the failed child and all children started after it
- `SimpleOneForOne`: Dynamic children sharing a single factory

Each child has a `RestartType` (`Permanent`/`Transient`/`Temporary`) and a sliding-window restart budget to prevent restart storms.
### Perfect for AI/LLM Applications

Actors naturally fit AI/LLM architectures:
- Multi-Agent Systems: Each LLM agent is an actor with isolated state
- API Orchestration: Coordinate multiple LLM API calls with backpressure
- Conversation State: Bounded mailboxes prevent memory bloat from chat history
- Tool Calling: Actors model tool execution with type-safe request/response
- Async Workflows: Chain LLM calls without callback hell
### Lifecycle Observability

Query actor status anytime via the system channel -- even when the mailbox is full:

```rust
let status = handle.get_status().await?;
println!("{status:?}"); // format string is illustrative
```
## Quick Start
### Counter: The Basics

A minimal counter sketch. The `Actor` trait shape and import path shown here are assumptions; the `spawn()`/`notify`/`send` calls match the API described below:

```rust
use tokio_actors::prelude::*; // illustrative import path

struct Counter { count: u64 }

enum CounterMsg { Increment, Get }

impl Actor for Counter {
    type Message = CounterMsg;  // assumed associated types
    type Response = u64;

    async fn handle(&mut self, msg: CounterMsg) -> u64 {
        match msg {
            CounterMsg::Increment => { self.count += 1; self.count }
            CounterMsg::Get => self.count,
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let counter = Counter { count: 0 }.spawn().named("counter").await?;
    counter.notify(CounterMsg::Increment).await?; // fire-and-forget
    let n = counter.send(CounterMsg::Get).await?; // request-response
    println!("count = {n}");
    Ok(())
}
```
### Spawning Actors

Every spawn starts with `.spawn()` and chains options via `SpawnBuilder`:
```rust
use tokio_actors::prelude::*; // illustrative import; names and arguments below are examples

// Anonymous (UUID auto-id)
let h = my_actor.spawn().await?;

// Named (registered in default system)
let h = my_actor.spawn().named("worker").await?;

// Named with custom mailbox
let config = ActorConfig::default().with_mailbox_capacity(128);
let h = my_actor.spawn().named("worker").with_config(config).await?;

// On a specific system
let sys = ActorSystem::create("jobs")?;
let h = my_actor.spawn().named("worker").on_system(&sys).await?;

// Supervised parent (default: OneForOne, 3 restarts / 5s)
let h = my_actor.spawn().named("worker").supervised().await?;

// Supervised with custom strategy (`SupervisorConfig` is an assumed name)
let sup = SupervisorConfig::one_for_all().max_restarts(5);
let h = my_actor.spawn().named("worker").with_supervision(sup).await?;
```
### Actor Registry (ActorSystem)

```rust
use tokio_actors::ActorSystem; // illustrative import path

// Default system (lazy singleton)
let sys = ActorSystem::default();

// Custom system with config
let sys = ActorSystem::create_with("jobs", config)?;

// Named lookup (OTP whereis/1)
let handle = sys.get::<MyActor>("worker");

// Stop/kill by name
sys.stop("worker").await?; // Graceful (vetoable)
sys.kill("worker").await?; // Force (bypasses all hooks)

// Coordinated shutdown with timeout escalation
sys.shutdown().await;
```
## Core Concepts

### Message Passing: notify vs send

```rust
// `Msg::Ping` is a placeholder message type

// Fire-and-forget (async until mailbox accepts)
handle.notify(Msg::Ping).await?;

// Request-response (wait for actor to process)
let response = handle.send(Msg::Ping).await?;

// Non-blocking attempt (returns immediately)
handle.try_notify(Msg::Ping)?;
```
**Error Handling Nuance:**

- `notify` errors -> the actor calls `handle_failure()` and continues processing
- `send` errors -> the actor stops (the caller expects a response, so failure is critical)

This asymmetry reflects real-world semantics.
### Supervision

Supervisors spawn children through their `ActorContext` and automatically handle restarts:

```rust
// Sketch: factory closure, names, and `SupervisorConfig` are illustrative

// Inside the supervisor, children are spawned via the context:
ctx.spawn_child(|| MyWorker::default()).named("worker").await?;

// Launch with OneForAll strategy and custom budget
let sup = SupervisorConfig::one_for_all().max_restarts(5);
let handle = MySupervisor::default().spawn().named("sup").with_supervision(sup).await?;
```
The runtime handles the restart loop: evaluate strategy, check budget, invoke the factory, wire the new child in -- all non-blocking. If the budget is exhausted, the supervisor itself stops.
### Timers with Drift Control

```rust
use std::time::Duration;
use tokio_actors::MissPolicy; // illustrative import path; `Msg::Tick` is a placeholder

// One-shot after delay
ctx.schedule(Msg::Tick).after(Duration::from_secs(5)).await?;

// Recurring -- default MissPolicy::Skip
ctx.schedule(Msg::Tick).every(Duration::from_secs(1)).await?;

// Recurring with explicit drift strategy
ctx.schedule(Msg::Tick)
    .every(Duration::from_secs(1))
    .on_miss(MissPolicy::CatchUp)
    .await?;
```
Edge Case: Scheduling in the past? The message fires immediately. No panics, no silent failures.
### 3-Tier Termination

```rust
use tokio_actors::StopReason; // illustrative path; variant names are assumptions

handle.stop(StopReason::Graceful).await?; // Tier 1: pre_stop can veto
handle.stop(StopReason::Shutdown).await?; // Tier 2: non-vetoable, on_stopped runs
handle.stop(StopReason::Kill).await?;     // Tier 3: bypasses ALL lifecycle hooks
```
### Lifecycle Hooks

```rust
// Hook names come from this README (pre_stop, on_stopped);
// exact signatures are assumptions.
async fn pre_stop(&mut self) -> bool { true } // return false to veto a Tier-1 stop
async fn on_stopped(&mut self) {}             // runs on termination (Kill bypasses it)
```
### Mailbox Monitoring

```rust
if handle.mailbox_available() < 10 {
    // Nearly full: back off or shed load
}

if !handle.is_alive() {
    // The actor has stopped
}

// System channel bypasses the mailbox, works even when full
let status = handle.get_status().await?;
```
## Deep Rust Patterns

### Why Sync is Required for Recurring Timer Messages

Recurring timers clone the message each tick via an internal `move || msg.clone()` closure held in an `Arc` across tasks. Rust's `Send` future rules require the captured `msg` to be `Sync`.

In practice this is a non-issue. Plain enum message types are `Sync` by default. Only types with unsynchronized interior mutability (`Cell`, `RefCell`) aren't `Sync`, and such types rarely make sense as messages that get cloned across tasks anyway.
### ActorHandle Equality

Handles implement `PartialEq` based on `ActorId`, not channel identity:

```rust
let actor1 = MyActor::default().spawn().named("a").await?; // names are illustrative
let actor2 = actor1.clone();
assert_eq!(actor1, actor2); // Same actor ID

let actor3 = MyActor::default().spawn().named("b").await?;
assert_ne!(actor1, actor3); // Different actor ID
```
This allows handles to be used in `HashSet` and `HashMap` for deduplication and routing.
### Bounded Mailbox Backpressure

When the mailbox is full:

- `notify().await` blocks until space is available
- `try_notify()` returns `TrySendError::Full` immediately
- `send().await` blocks (same as `notify`, just with a response)

During timer catch-up (`MissPolicy::CatchUp`), we use `try_notify` to avoid blocking the timer task on a full mailbox. If the mailbox is full, we stop the catch-up. Better to skip than deadlock.
## API at a Glance

### SpawnBuilder Chain

```rust
actor.spawn()              // Start the builder
    .named("name")         // Optional: assign a name/ID
    .on_system(&sys)       // Optional: target a specific ActorSystem
    .with_config(config)   // Optional: custom ActorConfig
    .supervised()          // Optional: enable supervision (default config)
    .with_supervision(sup) // Optional: enable supervision (custom config)
    .await?;               // Finalize: spawns the actor
```
### ActorHandle Methods

| Method | Description |
|---|---|
| `notify(msg)` | Fire-and-forget (awaits mailbox space) |
| `try_notify(msg)` | Non-blocking fire-and-forget |
| `send(msg)` | Request-response (awaits processing) |
| `stop(reason)` | Stop via system channel (bypasses full mailbox) |
| `get_status()` | Introspection snapshot via system channel |
| `is_alive()` | Check if actor is still running |
| `mailbox_len()` | Current queue depth |
| `mailbox_available()` | Free space in mailbox |
| `mailbox_capacity()` | Total mailbox capacity |
| `id()` | Get actor ID |
### ActorContext Methods

| Method | Description |
|---|---|
| `spawn_child(factory)` | Returns a `ChildSpawnBuilder`: chain `.named()`, `.restart_type()`, `.shutdown()`, `.with_config()` |
| `children()` | Introspection info for all supervised children |
| `stop_child(id)` | Manually stop a supervised child |
| `schedule(msg)` | Returns a `ScheduleBuilder`: chain `.at(instant)`, `.after(delay)`, or `.every(interval)` |
| `cancel_timer(id)` | Cancel a specific timer |
| `cancel_all_timers()` | Cancel all active timers |
| `active_timer_count()` | Number of active timers |
| `add_stream(stream)` | Attach an external stream to the mailbox |
| `cancel_stream(id)` | Cancel a specific stream |
| `cancel_all_streams()` | Cancel all active streams |
| `active_stream_count()` | Number of active streams |
| `self_handle()` | Get a handle to this actor |
| `actor_id()` | This actor's ID |
| `actor_name()` | This actor's registered name |
| `status()` | Current lifecycle status |
### ActorSystem Methods

| Method | Description |
|---|---|
| `ActorSystem::default()` | Lazy default system singleton |
| `ActorSystem::create(name)` | New named system |
| `ActorSystem::create_with(name, config)` | New system with custom config |
| `get::<A>(name)` | Typed actor lookup (OTP `whereis`) |
| `stop(name)` | Graceful stop by name |
| `kill(name)` | Force kill by name |
| `shutdown()` | Coordinated shutdown with escalation |
| `registered()` | List all registered actor names |
### ActorConfig Builder

```rust
ActorConfig::default()
    .with_mailbox_capacity(128) // illustrative capacity
    .supervised()               // OneForOne, 3 restarts / 5s
    .with_supervision(sup)      // Custom strategy + budget
```
## Testing

Tests cover:
- Request-response and fire-and-forget messaging
- Timer drift policies (Skip, CatchUp, Delay)
- Mailbox backpressure and bounded capacity
- Handle equality and hashing
- Lifecycle hooks and 3-tier termination (Kill bypass)
- ActorSystem registry, spawn methods, and shutdown
- Supervision strategies, restart budget, child lifecycle
- Stream integration (add_stream, StreamEvent, cancellation)
- SpawnBuilder chain (all combinations)
- Error propagation and type preservation
## Examples

| Example | Description |
|---|---|
| `simple_counter` | Basic notify/send usage |
| `ping_pong` | Bidirectional actor communication |
| `timers` | Recurring timers with MissPolicy |
| `cross_comm` | Multiple actors coordinating |
| `stream_counter` | External stream integration |
| `supervision` | Parent-child supervision with restart |
Run with:

```shell
cargo run --example simple_counter
```
## Future Enhancements

### Planned
- Telemetry hooks: Metrics and tracing integration
- Priority messages: Typed channel abstraction mapping to OTP EEP 76
### Non-Goals
- Remote messaging: Tokio Actors is explicitly local (in-process)
- Distributed systems: Use Akka/Orleans/Proto.Actor for that
- Proc macros: We keep it simple, just traits
## Architecture
Every actor is a dedicated tokio::task. No shared executor, no fancy scheduling, just Tokio doing what it does best.
Stop signals and status queries flow through a dedicated system channel that is polled with `biased;` `select!` priority ahead of the user mailbox. This means `stop()` and `get_status()` work even when the mailbox is full.
## License
MIT OR Apache-2.0
Built for Rust developers who value predictability over magic.
For implementation details and edge cases, see examples/ and tests/.
## Author
Saddam Uwejan (Sam) - Rust systems engineer specializing in concurrent systems and production infrastructure.
Building high-performance, production-ready Rust libraries for real-world problems.