# rakka
A native Rust runtime for actor-based concurrent and distributed systems, with first-class Python bindings. rakka gives you a single mental model — addressable units of state plus behavior, communicating by asynchronous messages — that scales from a single core to a cluster, and increasingly from a CPU to a GPU.
## Why an actor runtime, in Rust, now
The actor model is the same idea wherever it runs: a small, addressable unit of state plus behavior, talking to other actors by asynchronous message passing. That model is a good fit for two converging trends.
**Agentic systems.** Long-lived, autonomous, collaborating processes that reason, call tools, and coordinate are exactly what supervised, addressable actors describe. Each agent is an actor; conversations are mailboxes; tool calls are typed messages; failure is supervised, not silently swallowed. rakka gives that model a runtime that doesn't trade performance for safety.
**Unified compute.** Modern workloads no longer live entirely on the CPU. Inference, embedding, scoring, simulation — they want a GPU. Coordination, control flow, I/O, persistence — they want a CPU. Today's stacks force you to glue the two with ad-hoc batching layers, queues, and serialization boundaries. The actor model already encodes the right boundary: a message is the dispatch unit. rakka is built so that the same `actor_ref.tell(msg)` can target a CPU mailbox today and a CUDA-backed dispatcher tomorrow — with the same supervision, the same backpressure, the same observability. The runtime is explicit about where work runs without forcing the developer to write two programs.
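The boundary above can be sketched in a few lines. The `route` function below is hypothetical — it is not rakka's API — but it shows the idea that the sender's contract never changes while the runtime decides the execution target:

```rust
// Illustrative sketch, NOT rakka's real API: the message itself carries
// enough information for the runtime to pick an execution target, so the
// caller's `tell` stays identical when work moves from CPU to GPU.
#[derive(Debug, PartialEq)]
enum Dispatcher {
    CpuMailbox,
    GpuStream, // stands in for a CUDA-stream-backed dispatcher
}

enum Msg {
    Coordinate(&'static str), // control flow, I/O: wants a CPU
    Score(Vec<f32>),          // inference / scoring batch: wants a GPU
}

// The runtime, not the sender, decides where a message runs.
fn route(msg: &Msg) -> Dispatcher {
    match msg {
        Msg::Coordinate(_) => Dispatcher::CpuMailbox,
        Msg::Score(_) => Dispatcher::GpuStream,
    }
}

fn main() {
    // The caller-side contract is identical for both messages.
    assert_eq!(route(&Msg::Coordinate("rebalance")), Dispatcher::CpuMailbox);
    assert_eq!(route(&Msg::Score(vec![0.1, 0.9])), Dispatcher::GpuStream);
}
```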
**Granular efficiency.** Rust gives us deterministic resource use, zero-cost abstractions, and ownership-as-concurrency-safety. Per-message cost stays low. Per-actor footprint stays small. The scheduler can hand work to a `tokio` worker, a dedicated dispatcher, or — by design — a GPU stream, without changing the message contract. That same precision lets the runtime push backpressure, mailboxes, and supervision down to a level where they don't need to be rebuilt at every layer above.
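One way to see what "pushed down" means: with only the standard library, a bounded channel already gives a producer immediate backpressure. This is a sketch of the mechanism, not rakka's mailbox implementation:

```rust
// Bounded-mailbox sketch using only std (illustrative of the idea;
// rakka's real mailboxes are richer). A fixed-capacity channel refuses
// new messages when full, so producers see backpressure at the source
// instead of every layer above rebuilding its own throttling.
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // "Mailbox" with capacity 2.
    let (tx, rx) = sync_channel::<&str>(2);

    tx.try_send("msg-1").unwrap();
    tx.try_send("msg-2").unwrap();

    // The third send is rejected rather than growing memory without bound.
    assert!(matches!(tx.try_send("msg-3"), Err(TrySendError::Full(_))));

    // Draining the mailbox frees capacity again.
    assert_eq!(rx.recv().unwrap(), "msg-1");
    tx.try_send("msg-3").unwrap();
}
```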
A longer argument is in `docs/actors-and-agentic-computing.md`.
## What's in the box
| Crate | What it does |
|---|---|
| `rakka` | Umbrella facade re-exporting the core types |
| `rakka-core` | Actors, supervision, dispatch, mailboxes, FSMs, event stream, coordinated shutdown |
| `rakka-config` | HOCON-style layered configuration |
| `rakka-macros` | Ergonomic derives and helpers |
| `rakka-testkit` | Probes, virtual time, deterministic test scaffolding |
| `rakka-remote` | Location-transparent messaging across processes (TCP + framed PDU + reliable delivery) |
| `rakka-cluster` | Membership, gossip, reachability, split-brain resolution |
| `rakka-cluster-tools` | Singleton, pub/sub, cluster-client patterns |
| `rakka-cluster-sharding` | Shard regions, rebalance, remember-entities, persistent coordinator |
| `rakka-cluster-metrics` | Adaptive load balancing |
| `rakka-distributed-data` | Convergent replicated data types (CRDTs) over the cluster |
| `rakka-persistence` | Event sourcing — journals, snapshots, recovery, async snapshotting |
| `rakka-persistence-query` | Tagged event streams over journals |
| `rakka-persistence-{sql,redis,mongodb,cassandra,aws,azure}` | Storage adapters |
| `rakka-persistence-tck` | Conformance suite for journal + snapshot implementations |
| `rakka-streams` | Typed reactive streams (sources, flows, sinks, junctions, hubs, kill switches) |
| `rakka-coordination` | Lease-based leadership primitives |
| `rakka-discovery` | Pluggable service discovery |
| `rakka-di` | Dependency-injection container |
| `rakka-hosting` | Builder API for wiring system + config + DI together |
| `rakka-telemetry` | Tracing, metrics, exporters |
| `rakka-dashboard` | Live web UI over the running system |
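For a flavor of what the CRDT layer (`rakka-distributed-data`) converges on, here is a minimal grow-only counter — illustrative only, not the crate's actual types. Its merge is an element-wise maximum, which is commutative, associative, and idempotent, so replicas agree regardless of delivery order:

```rust
// Grow-only counter (G-Counter) sketch — illustrative, not the types
// exported by rakka-distributed-data. Each replica tracks per-node
// increment counts; merging takes the per-node maximum.
use std::collections::HashMap;

#[derive(Clone, Default)]
struct GCounter {
    counts: HashMap<&'static str, u64>, // node id -> increments seen
}

impl GCounter {
    fn incr(&mut self, node: &'static str) {
        *self.counts.entry(node).or_insert(0) += 1;
    }
    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
    // Commutative, associative, idempotent: element-wise max.
    fn merge(&mut self, other: &GCounter) {
        for (node, &n) in &other.counts {
            let e = self.counts.entry(*node).or_insert(0);
            *e = (*e).max(n);
        }
    }
}

fn main() {
    let mut a = GCounter::default();
    let mut b = GCounter::default();
    a.incr("node-a");
    a.incr("node-a");
    b.incr("node-b");

    // Merging in either order converges to the same value.
    let mut ab = a.clone(); ab.merge(&b);
    let mut ba = b.clone(); ba.merge(&a);
    assert_eq!(ab.value(), 3);
    assert_eq!(ab.value(), ba.value());
}
```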
Plus a Python facade — `pip install rakka` — that exposes the same actor model with GIL-isolated interpreter pools for CPU-bound work and async-native `tell` / `ask`.
## Quick start (Rust)
The umbrella crate is published on crates.io as `rakka-rs` (the short name `rakka` is already taken by an unrelated, dormant crate). Cargo's `package` alias keeps the import name `rakka`:
```toml
[dependencies]
rakka = { package = "rakka-rs", version = "0.2", features = ["cluster", "persistence"] }
```
Or pull in subsystem crates directly — `rakka-core`, `rakka-cluster`, `rakka-persistence`, `rakka-streams`, etc. are all on crates.io.
```rust
// Sketch only — identifiers below are illustrative, not the exact API;
// see the crate docs for the real types.
use rakka::prelude::*;

#[tokio::main]
async fn main() {
    // Create a system, spawn an actor, then communicate via `tell` / `ask`.
    // ...
}
```
## Quick start (Python)
```python
# Sketch only — the class and method names below are illustrative, not the
# actual rakka Python API; see docs/python.md for the real surface.
import asyncio
import rakka  # pip install rakka

class Greeter(rakka.Actor):          # hypothetical base class
    async def receive(self, name):
        return f"hello, {name}"

async def main():
    system = rakka.ActorSystem()     # hypothetical constructor
    greeter = system.spawn(Greeter)  # hypothetical spawn call
    reply = await greeter.ask("world")
    # -> "hello, world"

asyncio.run(main())
```
See `docs/python.md` for the GIL-strategy guide (`python-pinned`, `python-subinterpreter-pool` per PEP 684, `python-nogil` per PEP 703, `python-subprocess`) and the C-extension compatibility registry.
## Building from source
```sh
# Commands below are the typical toolchain invocations (illustrative);
# see the repository's xtask for the exact workflow.

# Rust
cargo build --workspace

# Python bindings (requires maturin + a Python dev toolchain)
maturin develop

# Docs (optional)
mkdocs serve
```
## Profiling

rakka ships with a cross-runtime profiler that measures the same four scenarios (`tell`, `ask`, `fanout`, `cpu`) in Rust and Python and emits a shared JSON schema so the two paths can be compared directly. See `docs/profiler.md`.
## Layout

```
crates/              Rust workspace
crates/py-bindings/  PyO3 bridge crates
python/rakka/        Python package
python/tests/        Python integration tests
examples/            Runnable Rust examples
benches/             Criterion benches
scripts/             Cross-runtime tooling
docs/                mkdocs-material source
xtask/               Cargo xtask (audit, profile, bump, verify)
```
## Learn more

- `docs/actors-and-agentic-computing.md` — the case for actors as the substrate for agentic + heterogeneous compute.
- `docs/architecture.md` — runtime structure.
- `docs/idiomatic-rust.md` — design choices.
- `docs/python.md` — Python bindings + GIL strategies.
- `docs/remoting.md` — cross-process actor remoting.
- `docs/persistence-providers.md` — storage adapters.
- `docs/dashboard.md` — live system UI.
- `docs/observability.md` — tracing + metrics exporters.
- `docs/profiler.md` — cross-runtime profiler.
- `PORTING.md` — alignment with prior-art runtimes.
- `PORTING_TODO.md` — depth roadmap.