tokioraft
tokioraft is a small Rust library for leader election in Tokio-based distributed systems.
It focuses on one job: run a Raft-inspired election state machine based on incoming messages from peer nodes, local timers, and explicit cluster membership updates. It does not try to implement a full Raft log, replication pipeline, storage layer, membership discovery, or transport.
The library is primarily meant for projects that already use Tokio as their async runtime.
What it is
- A minimal election state machine with candidate / leader / follower roles
- Async-first and built around Tokio
- Transport-agnostic: your code delivers messages between nodes
- Useful when you only need leader election and role tracking, not a full replicated log
Tokio integration expectations
tokioraft is designed primarily for integration into Tokio-based applications.
That means the host project should avoid creating blocking execution paths that starve or block Tokio worker threads while tokioraft tasks are running.
In practical terms:
- do not block Tokio runtime threads with long synchronous work
- do not park the runtime with blocking waits in code paths that interact with tokioraft
- if blocking work is unavoidable, move it to dedicated blocking facilities such as tokio::task::spawn_blocking
If the surrounding application blocks Tokio worker threads, the election loop, timers, and message processing inside tokioraft may stop making progress.
What it is not
- Not a full Raft implementation
- No log replication
- No persistent storage
- No built-in networking or node discovery
- No automatic cluster membership management
- No automatic node join / leave detection
Relationship to tinyraft
This project is a spiritual successor to tinyraft.
Similarities
- Very small surface area
- Focus on leader election instead of a full consensus stack
- External transport and message routing
- Intended to be easy to embed into application code
Differences
- Written in Rust and built for Tokio
- Uses typed messages and Rust enums instead of JavaScript objects
- Leans on channels and async tasks internally
- Cluster membership changes are explicit via set_nodes(...)
- The current implementation is intentionally narrow: election only, without replicated-log semantics
How it works
Each node runs its own TinyRaft instance:
- You create a node with the list of known peers
- The node starts internal election timers and an async event loop
- The node emits outgoing Raft messages and local state updates through get_receiver_from_raft()
- Your transport sends ToNode(...) messages to other nodes
- When a remote message arrives, you call on_receive(...)
- The library updates local election state and emits new events as the election evolves
The library produces two kinds of events:
- MsgRouter::ToNode(...) — an outgoing election message that your transport should deliver to another node
- MsgRouter::ToSelf(...) — a local state update, such as candidate / leader / follower changes
The library itself does not send network traffic. It only tells you what should be sent.
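To make that event flow concrete, here is a self-contained sketch using a stand-in MsgRouter enum; the real crate's variants carry its own message and state types, so only the shape of the dispatch loop carries over:

```rust
// Stand-in for the crate's event type; the real variants carry
// tokioraft's own message and state types.
#[derive(Debug, PartialEq)]
enum MsgRouter {
    ToNode(u64, String), // (destination node id, payload for your transport)
    ToSelf(String),      // local state update, e.g. a role change
}

/// Route one event: network-bound messages go to the transport's outbox,
/// local updates change the node's tracked role.
fn route(event: MsgRouter, outbox: &mut Vec<(u64, String)>, role: &mut String) {
    match event {
        MsgRouter::ToNode(dest, payload) => outbox.push((dest, payload)),
        MsgRouter::ToSelf(update) => *role = update,
    }
}

fn main() {
    let mut outbox = Vec::new();
    let mut role = String::from("follower");
    route(MsgRouter::ToNode(2, "vote-request".into()), &mut outbox, &mut role);
    route(MsgRouter::ToSelf("candidate".into()), &mut outbox, &mut role);
    // The library decided *what* to send; delivering `outbox` is your job.
    println!("outbox = {outbox:?}, role = {role}");
}
```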
Installation
Add the crate to your project:
```toml
[dependencies]
tokioraft = "0.1"
```
Or use it from the local workspace while developing.
Basic usage
```rust
use std::time::Duration;
// Illustrative import path; adjust to the crate's actual exports.
use tokioraft::TinyRaft;

#[tokio::main]
async fn main() {
    // Sketch only: the constructor signature shown here is an assumption.
    let mut raft = TinyRaft::new(/* local node id, peer list, election timeout: Duration */);

    // Drain events from the election loop (see "Receiving network messages").
    let mut rx = raft.get_receiver_from_raft();
    while let Some(event) = rx.recv().await {
        // Deliver MsgRouter::ToNode(...) via your transport;
        // apply MsgRouter::ToSelf(...) as local role changes.
        let _ = event;
    }
}
```
Receiving network messages
When your transport receives a message from another node, pass it back into the local election state machine:
```rust
// Sketch: `msg` is the election message your transport decoded from a peer;
// the exact message type comes from the crate.
raft.on_receive(msg);
```
Changing cluster membership
Membership updates are explicit and external:
```rust
// Sketch: apply an externally decided membership list; the element type
// is whatever the crate uses to identify nodes.
raft.set_nodes(new_membership);
```
This is important:
- node addition is not automatic
- node removal is not automatic
- your orchestration layer is responsible for deciding membership
- the library only applies the membership you pass in
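Since membership is your responsibility, the orchestration side can be as simple as normalizing a discovery snapshot into a stable list before handing it to the library. A std-only sketch (the `membership_from_snapshot` helper is hypothetical, not part of tokioraft):

```rust
use std::collections::BTreeSet;

/// Turn an external discovery snapshot into a membership list.
/// Deduplicates and sorts so every node applies the same list.
fn membership_from_snapshot(snapshot: &[&str]) -> Vec<String> {
    snapshot
        .iter()
        .map(|s| s.to_string())
        .collect::<BTreeSet<_>>() // dedupe + deterministic order
        .into_iter()
        .collect()
}

fn main() {
    let nodes = membership_from_snapshot(&["node-b", "node-a", "node-b"]);
    println!("membership = {nodes:?}");
    // The library only applies what you pass in, e.g. (method named in this README):
    // raft.set_nodes(nodes);
}
```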
Examples
Two examples are included:
- examples/standart_usage.rs — minimal single-node usage
- examples/nodes_communication.rs — simulated multi-node message routing
Run them with:
```shell
cargo run --example standart_usage
cargo run --example nodes_communication
```
To control log verbosity:
```shell
TEST_TINYRAFT=info cargo run --example standart_usage
```
Testing
The project contains integration tests inspired by the original tinyraft scenarios:
```shell
cargo test
```
These tests cover:
- leader election
- leader replacement after node removal
- adding nodes
- explicit membership changes
- leadership expiration
- restart of follower nodes
- leader priority
- simple partition scenarios
Design philosophy
tokioraft is meant for applications that need:
- a leader
- simple coordination
- explicit control over transport and cluster orchestration
If you need a full replicated state machine, durable consensus log, or automatic cluster discovery, this project is intentionally too small for that job.