tokioraft 0.1.0

A minimal Tokio-based leader election library inspired by tinyraft

tokioraft

tokioraft is a small Rust library for leader election in Tokio-based distributed systems.

It focuses on one job: running a Raft-inspired election state machine driven by incoming messages from peer nodes, local timers, and explicit cluster membership updates. It does not try to implement a full Raft log, replication pipeline, storage layer, membership discovery, or transport.

The library is primarily meant for projects that already use Tokio as their async runtime.

What it is

  • A minimal election state machine with candidate / leader / follower roles
  • Async-first and built around tokio
  • Transport-agnostic: your code delivers messages between nodes
  • Useful when you only need leader election and role tracking, not a full replicated log

Tokio integration expectations

tokioraft is designed primarily for integration into Tokio-based applications.

That means the host project should avoid creating blocking execution paths that starve or block Tokio worker threads while tokioraft tasks are running.

In practical terms:

  • do not block Tokio runtime threads with long synchronous work
  • do not park the runtime with blocking waits in code paths that interact with tokioraft
  • if blocking work is unavoidable, move it to dedicated blocking facilities such as tokio::task::spawn_blocking

If the surrounding application blocks Tokio worker threads, the election loop, timers, and message processing inside tokioraft may stop making progress.

What it is not

  • Not a full Raft implementation
  • No log replication
  • No persistent storage
  • No built-in networking or node discovery
  • No automatic cluster membership management
  • No automatic node join / leave detection

Relationship to tinyraft

This project is a spiritual successor to tinyraft.

Similarities

  • Very small surface area
  • Focus on leader election instead of a full consensus stack
  • External transport and message routing
  • Intended to be easy to embed into application code

Differences

  • Written in Rust and built for Tokio
  • Uses typed messages and Rust enums instead of JavaScript objects
  • Leans on channels and async tasks internally
  • Cluster membership changes are explicit via set_nodes(...)
  • The current implementation is intentionally narrow: election only, without replicated-log semantics

How it works

Each node runs its own TinyRaft instance:

  1. You create a node with the list of known peers
  2. The node starts internal election timers and an async event loop
  3. The node emits outgoing Raft messages and local state updates through get_receiver_from_raft()
  4. Your transport sends ToNode(...) messages to other nodes
  5. When a remote message arrives, you call on_receive(...)
  6. The library updates local election state and emits new events as the election evolves

The library produces two kinds of events:

  • MsgRouter::ToNode(...) — an outgoing election message that your transport should deliver to another node
  • MsgRouter::ToSelf(...) — a local state update, such as candidate / leader / follower changes

The library itself does not send network traffic. It only tells you what should be sent.
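The resulting routing pattern can be sketched with stand-in types. The enum below only mirrors the two event shapes described above; it is not the crate's actual MsgRouter definition, and the payload types are placeholders:

```rust
// Stand-in types mirroring the event shapes described above; the real
// crate's `MsgRouter`, node id, and message types may differ in detail.
type NodeId = String;

#[derive(Debug, Clone, PartialEq)]
enum MsgRouter {
    ToNode(NodeId, String), // outgoing message your transport must deliver
    ToSelf(String),         // local role/state change to react to
}

// Returns `Some((peer, msg))` when the event must leave this node,
// `None` when it is a purely local state update.
fn route(event: MsgRouter) -> Option<(NodeId, String)> {
    match event {
        // Hand outgoing messages to the transport layer.
        MsgRouter::ToNode(peer, msg) => Some((peer, msg)),
        // Local state updates stay on this node.
        MsgRouter::ToSelf(state) => {
            println!("role changed: {state}");
            None
        }
    }
}

fn main() {
    let out = route(MsgRouter::ToNode("node2".into(), "RequestVote".into()));
    assert_eq!(out, Some(("node2".to_string(), "RequestVote".to_string())));

    let local = route(MsgRouter::ToSelf("leader".into()));
    assert!(local.is_none());
}
```

In a real deployment the `Some` branch would hand the pair to whatever transport the host application uses (TCP, gRPC, an in-process channel in tests), which is exactly the part tokioraft leaves to you.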

Installation

Add the crate to your project:

[dependencies]
tokioraft = "0.1"

Or use it from the local workspace while developing.

Basic usage

use std::time::Duration;
use tokioraft::{send::MsgRouter, TinyRaft};

#[tokio::main]
async fn main() {
    let raft = TinyRaft::start(
        vec!["node1", "node2", "node3"],
        "node1",
        Duration::from_millis(5000),
        Duration::from_millis(1000),
        None,
        None,
        None,
        0,
    )
    .await;

    let rx = raft.get_receiver_from_raft();

    tokio::spawn(async move {
        while let Ok(event) = rx.recv().await {
            match event {
                MsgRouter::ToNode(peer, message) => {
                    // deliver `message` to `peer` using your transport
                    let _ = (peer, message);
                }
                MsgRouter::ToSelf(state) => {
                    // react to local role changes
                    let _ = state;
                }
            }
        }
    });
}

Receiving network messages

When your transport receives a message from another node, pass it back into the local election state machine:

use tokioraft::{send::Message, NodeId, TinyRaft};

async fn example(raft: TinyRaft, from: NodeId, message: Message) {
    raft.on_receive(from, message).await.unwrap();
}

Changing cluster membership

Membership updates are explicit and external:

use tokioraft::TinyRaft;

async fn example(raft: TinyRaft) {
    raft.set_nodes(vec!["node1", "node2", "node3"]).await.unwrap();
}

This is important:

  • node addition is not automatic
  • node removal is not automatic
  • your orchestration layer is responsible for deciding membership
  • the library only applies the membership you pass in

Examples

Two examples are included:

  • examples/standart_usage.rs — minimal single-node usage
  • examples/nodes_communication.rs — simulated multi-node message routing

Run them with:

cargo run --example standart_usage
cargo run --example nodes_communication

To control log verbosity:

TEST_TINYRAFT=info cargo run --example nodes_communication

Testing

The project contains integration tests inspired by the original tinyraft scenarios:

cargo test

These tests cover:

  • leader election
  • leader replacement after node removal
  • adding nodes
  • explicit membership changes
  • leadership expiration
  • restart of follower nodes
  • leader priority
  • simple partition scenarios

Design philosophy

tokioraft is meant for applications that need:

  • a leader
  • simple coordination
  • explicit control over transport and cluster orchestration

If you need a full replicated state machine, a durable consensus log, or automatic cluster discovery, this project is intentionally too small for the job; reach for a complete Raft implementation instead.