Tokio Actors
Zero-ceremony, Tokio-native actors with strong typing and production-ready edge case handling.
Tokio Actors is a lightweight actor framework built for Rust developers who want predictable concurrency without the complexity. Every actor runs as a dedicated tokio::task on your multi-threaded runtime—no custom schedulers, no hidden magic.
✨ Why Tokio Actors?
🎯 Strongly Typed
Message and response types are enforced at compile time. No runtime type casting, no Any trait abuse.
🔒 Bounded Mailboxes = Natural Backpressure
Every actor has a bounded mailbox (default: 64). When full, senders wait automatically—no OOM crashes from runaway queues.
⏱️ Timer Drift Handling (MissPolicy)
Recurring timers have three drift strategies to handle system lag:
- Skip: Jump to next aligned tick
- CatchUp: Send all missed messages immediately
- Delay: Reset timer from now
This is the kind of edge-case thinking production systems need.
🤖 Perfect for AI/LLM Applications
Actors naturally fit AI/LLM architectures:
- Multi-Agent Systems: Each LLM agent is an actor with isolated state
- API Orchestration: Coordinate multiple LLM API calls with backpressure
- Conversation State: Bounded mailboxes prevent memory bloat from chat history
- Tool Calling: Actors model tool execution with type-safe request/response
- Async Workflows: Chain LLM calls without callback hell
🚦 Lifecycle Observability
Query actor status anytime: Initializing → Running → Stopping → Stopped. Perfect for health checks and graceful degradation.
🚀 Quick Start
Ping-Pong: Request-Response Pattern
```rust
// Pong actor - simply responds to pings
struct Pong;

// Ping actor - sends pings and awaits typed responses
struct Ping;

// The Actor impls are omitted here; see examples/ping_pong.rs for the full listing.
```
Configuration is Completely Optional
```rust
// No config needed - uses defaults
// (exact spawn_actor argument shape assumed; see the crate docs)
actor.spawn_actor(None).await?;

// Or customize with builder pattern
let config = ActorConfig::default().with_mailbox_capacity(128);

// Reference to config works too
actor.spawn_actor(Some(&config)).await?;

// Or hand the config over by value
actor.spawn_actor(Some(config)).await?;
```
🎭 Core Concepts
Message Passing: notify vs send
```rust
// Fire-and-forget (async until mailbox accepts)
handle.notify(msg.clone()).await?;

// Request-response (wait for actor to process)
let response = handle.send(msg.clone()).await?;

// Non-blocking attempt (returns immediately)
handle.try_notify(msg)?;
```
Error Handling Nuance:
- `notify` errors → the actor calls `handle_failure()` and continues processing
- `send` errors → the actor stops (the caller expects a response, so failure is critical)
This asymmetry reflects real-world semantics.
Timers with Drift Control
```rust
use std::time::Duration;
use tokio_actors::MissPolicy; // crate path assumed

// Msg variants are illustrative
ctx.schedule_after(Msg::Warmup, Duration::from_secs(5))?; // One-shot
ctx.schedule_recurring(Msg::Tick, Duration::from_secs(1), MissPolicy::Skip)?;
```
Edge Case: Scheduling in the past? The message fires immediately. No panics, no silent failures.
Lifecycle Hooks
```rust
// Hook names illustrative - see the Actor trait for the exact signatures
async fn on_start(&mut self, ctx: &mut ActorContext<Self>) {
    // Acquire resources, schedule timers
}

async fn on_stop(&mut self, ctx: &mut ActorContext<Self>) {
    // Flush state, release resources
}
```
Mailbox Monitoring
```rust
if handle.mailbox_available() < 10 {
    // Nearly full - shed load or slow producers down
}

if !handle.is_alive() {
    // Actor has stopped - re-spawn or fail over
}
```
🧠 Deep Rust Patterns
Why Sync is Required for Timer Factories
Recurring timers use closures that are held across .await points in a spawned task:
```rust
// The factory closure builds each message; Msg variant illustrative
ctx.schedule_recurring_with(|| Msg::Tick, Duration::from_secs(1), MissPolicy::Skip)?;
```
The closure lives in an Arc that's shared across tasks. Rust's Send future rules require this. For schedule_recurring(msg, ...) where msg: Clone, we require msg: Sync for the same reason—the closure move || msg.clone() captures msg.
Workaround: If your message isn't Sync, use schedule_recurring_with with a factory that doesn't capture state.
ActorHandle Equality
Handles implement PartialEq based on ActorId, not channel identity:
```rust
// spawn_actor argument shape assumed
let actor1 = MyActor.spawn_actor(None).await?;
let actor2 = actor1.clone();
assert_eq!(actor1, actor2); // ✅ Same actor ID

let actor3 = MyActor.spawn_actor(None).await?;
assert_ne!(actor1, actor3); // ✅ Different actor ID
```
This allows handles to be used in HashSet and HashMap for deduplication and routing.
Bounded Mailbox Backpressure
When the mailbox is full:
- `notify().await` blocks until space is available
- `try_notify()` returns `TrySendError::Full` immediately
- `send().await` blocks (same as `notify`, just with a response)
During timer catch-up (MissPolicy::CatchUp), we use try_notify to avoid blocking the timer task on a full mailbox. If the mailbox is full, we stop the catch-up—better to skip than deadlock.
📊 API at a Glance
ActorHandle Methods
| Method | Description |
|---|---|
| `notify(msg)` | Fire-and-forget (awaits mailbox space) |
| `try_notify(msg)` | Non-blocking fire-and-forget |
| `send(msg)` | Request-response (awaits processing) |
| `stop(reason)` | Request actor to stop |
| `is_alive()` | Check if actor is still running |
| `mailbox_len()` | Current queue depth |
| `mailbox_available()` | Free space in mailbox |
| `id()` | Get actor ID |
ActorContext Methods
| Method | Description |
|---|---|
| `schedule_once(msg, when)` | Fire message at a specific `Instant` |
| `schedule_after(msg, delay)` | Fire message after a `Duration` |
| `schedule_recurring(msg, interval, policy)` | Recurring timer |
| `schedule_recurring_with(factory, interval, policy)` | Recurring timer with a message factory |
| `cancel_timer(id)` | Cancel a specific timer |
| `cancel_all_timers()` | Cancel all active timers |
| `active_timer_count()` | Number of active timers |
| `self_handle()` | Get a handle to this actor |
| `status()` | Current lifecycle status |
ActorConfig Builder
```rust
let config = ActorConfig::default()
    .with_mailbox_capacity(128); // default capacity is 64
```
🧪 Testing
Tests cover:
- Ping-pong bidirectional messaging
- Timer drift policies
- Mailbox backpressure
- Handle equality and hashing
- Lifecycle hooks
- Error propagation
📦 Examples
| Example | Description |
|---|---|
| `simple_counter` | Basic notify/send usage |
| `ping_pong` | Bidirectional actor communication |
| `timers` | Recurring timers with `MissPolicy` |
| `cross_comm` | Multiple actors coordinating |
Run with:

```sh
cargo run --example ping_pong
```
🔮 Future Enhancements
Planned
- Supervision trees: Declarative parent-child relationships
- Actor registry: Named global actor lookup
- Graceful shutdown coordination: Drain mailboxes before stopping
- Telemetry hooks: Metrics and tracing integration
Non-Goals
- Remote messaging: Tokio Actors is explicitly local (in-process)
- Distributed systems: Use Akka/Orleans/Proto.Actor for that
- Proc macros: We keep it simple—just traits
🏗️ Architecture
Every actor is a dedicated tokio::task. No shared executor, no fancy scheduling—just Tokio doing what it does best.
📄 License
MIT OR Apache-2.0
Built with ❤️ for Rust developers who value predictability over magic.
For implementation details and edge cases, see examples/ and tests/.
👤 Author
Saddam Uwejan (Sam) - Rust systems engineer specializing in concurrent systems and production infrastructure.
Building high-performance, production-ready Rust libraries for real-world problems.