# Tokio Actors

Zero-ceremony, Tokio-native actors with strong typing and production-ready edge-case handling.
Tokio Actors is a lightweight actor framework built for Rust developers who want predictable concurrency without the complexity. Every actor runs as a dedicated `tokio::task` on your multi-threaded runtime: no custom schedulers, no hidden magic.
## ✨ Why Tokio Actors?

### 🎯 Strongly Typed

Message and response types are enforced at compile time. No runtime type casting, no `Any` trait abuse.
### 🔒 Bounded Mailboxes = Natural Backpressure

Every actor has a bounded mailbox (default capacity: 64). When the mailbox is full, senders wait automatically, so runaway queues cannot cause OOM crashes.
### ⏱️ Timer Drift Handling (MissPolicy)

Recurring timers have three drift strategies to handle system lag:

- `Skip`: jump to the next aligned tick
- `CatchUp`: send all missed messages immediately
- `Delay`: reset the timer from now

This is the kind of edge-case thinking production systems need.
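The three policies differ only in what they do with the ticks that elapsed while the task was stalled. A std-only sketch of the arithmetic (not the crate's implementation; the `missed_ticks` helper is illustrative):

```rust
use std::time::Duration;

// How many ticks elapsed while the timer task was stalled?
fn missed_ticks(lag: Duration, interval: Duration) -> u64 {
    (lag.as_millis() / interval.as_millis()) as u64
}

fn main() {
    let interval = Duration::from_millis(100);
    let lag = Duration::from_millis(350); // woke up 350ms after the last tick

    let missed = missed_ticks(lag, interval);
    // Skip: drop the 3 missed ticks, fire on the next aligned boundary (t = 400ms)
    // CatchUp: send 3 messages immediately, then resume the schedule
    // Delay: ignore alignment, next tick is one interval from now
    assert_eq!(missed, 3);
    println!("missed ticks: {missed}");
}
```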
### 🤖 Perfect for AI/LLM Applications

Actors naturally fit AI/LLM architectures:

- **Multi-Agent Systems**: each LLM agent is an actor with isolated state
- **API Orchestration**: coordinate multiple LLM API calls with backpressure
- **Conversation State**: bounded mailboxes prevent memory bloat from chat history
- **Tool Calling**: actors model tool execution with type-safe request/response
- **Async Workflows**: chain LLM calls without callback hell
### 🚦 Lifecycle Observability

Query actor status anytime: Initializing → Running → Stopping → Stopped. Perfect for health checks and graceful degradation.
## 🚀 Quick Start

### Ping-Pong: Request-Response Pattern

A minimal sketch (the crate path and actor definitions are assumed; `spawn()` and `send()` are the documented handle methods):

```rust
use tokio_actors::ActorExt; // crate path assumed

// Pong actor - simply responds to pings (trait impl elided)
let pong = PongActor.spawn().await?;

// Ping actor - request-response: send() waits for Pong's reply
let reply = pong.send(Ping).await?;
```
### Spawning Actors

The names, capacities, and ids below are illustrative (crate path assumed):

```rust
use tokio_actors::{ActorExt, ActorConfig, ActorSystem}; // crate path assumed

// Anonymous (UUID auto-id)
let h = my_actor.spawn().await?;

// Named (registers in default system)
let h = my_actor.spawn_named("worker").await?;

// Named with custom mailbox
let config = ActorConfig::default().with_mailbox_capacity(128);
let h = my_actor.spawn_named_with("worker", &config).await?;

// On a specific system
let sys = ActorSystem::create("my-system")?;
let h = my_actor.spawn_on(&sys).await?;
let h = my_actor.spawn_on_named(&sys, "worker").await?;

// Legacy API still works
let h = my_actor.spawn_actor(id, config).await?;
```
### Actor Registry (ActorSystem)

```rust
use tokio_actors::ActorSystem; // crate path assumed

// Default system (lazy singleton)
let sys = ActorSystem::default();

// Custom system with config
let sys = ActorSystem::create_with("my-system", config)?;

// Named lookup (OTP whereis/1)
let handle = sys.get::<MyActor>("worker");

// Stop/kill by name
sys.stop("worker").await?; // Graceful (vetoable)
sys.kill("worker").await?; // Force (bypasses all hooks)

// Coordinated shutdown with timeout escalation
sys.shutdown().await;
```
## 🎭 Core Concepts

### Message Passing: `notify` vs `send`

```rust
// Fire-and-forget (async until mailbox accepts)
handle.notify(msg).await?;

// Request-response (wait for actor to process)
let response = handle.send(msg).await?;

// Non-blocking attempt (returns immediately)
handle.try_notify(msg)?;
```
**Error Handling Nuance:**

- `notify` errors → the actor calls `handle_failure()` and continues processing
- `send` errors → the actor stops (the caller expects a response, so failure is critical)

This asymmetry reflects real-world semantics.
### Timers with Drift Control

A sketch with illustrative messages and intervals (crate path assumed):

```rust
use std::time::Duration;
use tokio_actors::MissPolicy; // crate path assumed

ctx.schedule_after(Tick, Duration::from_secs(1))?; // One-shot
ctx.schedule_recurring(Tick, Duration::from_secs(5), MissPolicy::Skip)?;
```

**Edge Case:** Scheduling in the past? The message fires immediately. No panics, no silent failures.
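The no-panic behavior falls out of clamping arithmetic in std. A sketch of the idea (not the crate's code; `delay_until` is an illustrative helper):

```rust
use std::time::{Duration, Instant};

// A deadline in the past yields a zero sleep duration instead of a panic:
// saturating_duration_since clamps "negative" durations to zero.
fn delay_until(deadline: Instant, now: Instant) -> Duration {
    deadline.saturating_duration_since(now)
}

fn main() {
    let now = Instant::now();
    let past = now - Duration::from_secs(5); // deadline already elapsed
    assert_eq!(delay_until(past, now), Duration::ZERO); // fires immediately
    println!("past deadline -> zero delay");
}
```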
### 3-Tier Termination

The `StopReason` variant names below are assumed, except `Kill`, which the test suite references:

```rust
use tokio_actors::StopReason; // crate path assumed

handle.stop(StopReason::Graceful).await?; // Tier 1: pre_stop can veto
handle.stop(StopReason::Stop).await?;     // Tier 2: non-vetoable, on_stopped runs
handle.stop(StopReason::Kill).await?;     // Tier 3: bypasses ALL lifecycle hooks
```
### Lifecycle Hooks

The hook signatures below are assumed from the termination tiers above:

```rust
// Tier 1: return false to veto a graceful stop
async fn pre_stop(&mut self) -> bool { true }

// Runs after the actor stops (skipped by Kill)
async fn on_stopped(&mut self) {}
```
### Mailbox Monitoring

```rust
if handle.mailbox_available() < 10 {
    // Mailbox nearly full: shed load or slow down
}
if !handle.is_alive() {
    // Actor has stopped: re-spawn or fail over
}
```
## 🧠 Deep Rust Patterns

### Why Sync is Required for Timer Factories

Recurring timers use closures that are held across `.await` points in a spawned task:

```rust
ctx.schedule_recurring_with(factory, interval, policy)?;
```

The closure lives in an `Arc` that is shared across tasks; Rust's `Send` future rules require this. For `schedule_recurring(msg, ...)` where `msg: Clone`, we require `msg: Sync` for the same reason: the closure `move || msg.clone()` captures `msg`.

**Workaround:** If your message isn't `Sync`, use `schedule_recurring_with` with a factory that doesn't capture state.
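Why the bound exists can be seen with plain std: a factory held behind an `Arc` and invoked from another thread must be `Send + Sync`, or the compiler rejects the spawn. A runnable analogy (the `Tick` type and `make_factory` helper are illustrative, not part of the crate):

```rust
use std::sync::Arc;
use std::thread;

#[derive(Clone, Debug, PartialEq)]
struct Tick(u64);

// The factory is stored behind an Arc; anything it captures must be Send + Sync
// because the Arc is cloned into a spawned task (here, a plain thread).
fn make_factory() -> Arc<dyn Fn() -> Tick + Send + Sync> {
    Arc::new(|| Tick(1))
}

fn main() {
    let factory = make_factory();
    let f = Arc::clone(&factory);
    let handle = thread::spawn(move || f()); // same shape as a spawned timer task
    assert_eq!(handle.join().unwrap(), Tick(1));
    println!("factory produced a Tick across threads");
}
```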
### ActorHandle Equality

Handles implement `PartialEq` based on `ActorId`, not channel identity:

```rust
let actor1 = MyActor.spawn().await?;
let actor2 = actor1.clone();
assert_eq!(actor1, actor2); // ✅ Same actor ID

let actor3 = MyActor.spawn().await?;
assert_ne!(actor1, actor3); // ✅ Different actor ID
```

This allows handles to be used in `HashSet` and `HashMap` for deduplication and routing.
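The eq-by-id pattern is easy to see in plain std. In this sketch the `Handle` struct is a stand-in (the real handle holds an `ActorId` and a channel sender; here both are simplified to integers):

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Equality and hashing delegate to the id alone, so two handles to the
// same actor compare equal even if their channel halves differ.
#[derive(Clone, Debug, Eq)]
struct Handle {
    id: u64,          // stands in for ActorId
    channel_tag: u32, // stands in for the mpsc sender (ignored by Eq/Hash)
}

impl PartialEq for Handle {
    fn eq(&self, other: &Self) -> bool {
        self.id == other.id
    }
}

impl Hash for Handle {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.id.hash(state);
    }
}

fn main() {
    let a = Handle { id: 7, channel_tag: 1 };
    let b = Handle { id: 7, channel_tag: 2 }; // different channel, same actor
    let c = Handle { id: 8, channel_tag: 3 };

    let mut set = HashSet::new();
    set.insert(a.clone());
    set.insert(b); // deduplicated: same id
    set.insert(c);
    assert_eq!(set.len(), 2);
    println!("set holds {} unique actors", set.len());
}
```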
### Bounded Mailbox Backpressure

When the mailbox is full:

- `notify().await` blocks until space is available
- `try_notify()` returns `TrySendError::Full` immediately
- `send().await` blocks (same as notify, just with a response)

During timer catch-up (`MissPolicy::CatchUp`), we use `try_notify` to avoid blocking the timer task on a full mailbox. If the mailbox is full, we stop the catch-up; better to skip than deadlock.
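These semantics mirror std's bounded `sync_channel`, which makes for a runnable, std-only analogy (the crate's mailbox is a tokio mpsc channel, not this one):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // A bounded channel of capacity 2 models the mailbox.
    let (tx, rx) = sync_channel::<u32>(2);
    tx.send(1).unwrap(); // like notify().await: would block once full
    tx.send(2).unwrap();

    // Like try_notify(): full mailbox -> immediate Full error, no blocking.
    assert!(matches!(tx.try_send(3), Err(TrySendError::Full(3))));

    // Draining one message frees a slot, and try_send succeeds again.
    assert_eq!(rx.recv().unwrap(), 1);
    assert!(tx.try_send(3).is_ok());
    println!("backpressure demo ok");
}
```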
## 📊 API at a Glance

### ActorHandle Methods

| Method | Description |
|---|---|
| `notify(msg)` | Fire-and-forget (awaits mailbox space) |
| `try_notify(msg)` | Non-blocking fire-and-forget |
| `send(msg)` | Request-response (awaits processing) |
| `stop(reason)` | Request actor to stop |
| `is_alive()` | Check if actor is still running |
| `mailbox_len()` | Current queue depth |
| `mailbox_available()` | Free space in mailbox |
| `id()` | Get actor ID |
### ActorContext Methods

| Method | Description |
|---|---|
| `schedule_once(msg, when)` | Fire message at a specific `Instant` |
| `schedule_after(msg, delay)` | Fire message after a `Duration` |
| `schedule_recurring(msg, interval, policy)` | Recurring timer |
| `schedule_recurring_with(factory, interval, policy)` | Recurring timer with a message factory |
| `cancel_timer(id)` | Cancel a specific timer |
| `cancel_all_timers()` | Cancel all active timers |
| `active_timer_count()` | Number of active timers |
| `self_handle()` | Get a handle to this actor |
| `status()` | Current lifecycle status |
### Spawn Methods (ActorExt)

| Method | Description |
|---|---|
| `spawn()` | Anonymous actor (UUID auto-id) |
| `spawn_named(name)` | Named, registered in the default system |
| `spawn_named_with(name, &config)` | Named with custom mailbox config |
| `spawn_on(&system)` | Anonymous on a specific system |
| `spawn_on_named(&system, name)` | Named on a specific system |
| `spawn_on_named_with(&system, name, &config)` | Full parameters |
| `spawn_actor(id, config)` | Legacy API (still works) |
### ActorSystem Methods

| Method | Description |
|---|---|
| `ActorSystem::default()` | Lazy default system singleton |
| `ActorSystem::create(name)` | New named system |
| `ActorSystem::create_with(name, config)` | New system with custom config |
| `get::<A>(name)` | Typed actor lookup (OTP `whereis`) |
| `stop(name)` | Graceful stop by name |
| `kill(name)` | Force kill by name |
| `shutdown()` | Coordinated shutdown with escalation |
| `registered()` | List all registered actor names |
### ActorConfig Builder

```rust
// Capacity value is illustrative
let config = ActorConfig::default()
    .with_mailbox_capacity(128);
```
## 🧪 Testing

Tests cover:

- Ping-pong bidirectional messaging
- Timer drift policies
- Mailbox backpressure
- Handle equality and hashing
- Lifecycle hooks and 3-tier termination (Kill bypass)
- ActorSystem registry, spawn methods, and shutdown
- Error propagation and type preservation
## 📦 Examples

| Example | Description |
|---|---|
| `simple_counter` | Basic notify/send usage |
| `ping_pong` | Bidirectional actor communication |
| `timers` | Recurring timers with `MissPolicy` |
| `cross_comm` | Multiple actors coordinating |

Run with:

```sh
cargo run --example simple_counter  # or any example name above
```
## 🔮 Future Enhancements

### Planned

- **Supervision trees**: declarative parent-child relationships
- **Telemetry hooks**: metrics and tracing integration

### Non-Goals

- **Remote messaging**: Tokio Actors is explicitly local (in-process)
- **Distributed systems**: use Akka/Orleans/Proto.Actor for that
- **Proc macros**: we keep it simple, just traits
## 🏗️ Architecture

Every actor is a dedicated `tokio::task`. No shared executor, no fancy scheduling: just Tokio doing what it does best.
## 📄 License

MIT OR Apache-2.0

Built with ❤️ for Rust developers who value predictability over magic.

For implementation details and edge cases, see `examples/` and `tests/`.
## 👤 Author

Saddam Uwejan (Sam) - Rust systems engineer specializing in concurrent systems and production infrastructure. Building high-performance, production-ready Rust libraries for real-world problems.