# Korium

*Batteries-included adaptive networking fabric*
Korium is a high-performance, secure, and adaptive networking library written in Rust. It provides a robust foundation for building decentralized applications, scale-out fabrics, and distributed services with built-in NAT traversal, efficient PubSub, and a cryptographic identity system.
## Why Korium?
- Zero Configuration — Self-organizing mesh with automatic peer discovery
- NAT Traversal — Built-in relay infrastructure and path probing via SmartSock
- Secure by Default — Ed25519 identities with mutual TLS on every connection
- Adaptive Performance — Latency-tiered DHT with automatic path optimization
- Complete Stack — PubSub messaging, request-response, direct connections, and membership management
## Quick Start

Add Korium to your `Cargo.toml`:

```toml
[dependencies]
korium = "0.1"
tokio = { version = "1", features = ["full"] }
```
### Create a Node

```rust
// Sketch: the exact import path and bind signature may differ slightly.
use korium::Node;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let node = Node::bind("0.0.0.0:0").await?; // bind address is illustrative
    println!("identity: {}", node.identity);
    Ok(())
}
```
### PubSub Messaging

```rust
// Topic name and payload shown are illustrative.

// Subscribe to a topic
node.subscribe("chat").await?;

// Publish messages (signed with your identity)
node.publish("chat", b"hello".to_vec()).await?;

// Receive messages
let mut rx = node.messages("chat").await?;
while let Some(msg) = rx.recv().await {
    println!("{}: {:?}", msg.from, msg.data);
}
```
### Request-Response

```rust
// Handler and argument shapes are illustrative.

// Set up a request handler (echo server)
node.set_request_handler(|request| async move { request }).await?;

// Send a request and get a response
let response = node.send(&peer_identity, b"ping".to_vec()).await?;
println!("{:?}", response);

// Or use the low-level API for async handling
let mut requests = node.incoming_requests().await?;
while let Some(request) = requests.recv().await {
    // inspect the request and reply asynchronously
}
```
### Peer Discovery

```rust
// Argument shapes are illustrative.

// Find peers near a target identity
let peers = node.find_peers(&target_identity).await?;

// Resolve a peer's published contact record
let contact = node.resolve(&peer_identity).await?;

// Publish your address for others to discover
node.publish_address().await?;
```
### NAT Traversal

```rust
// Return and signal shapes are illustrative.

// Automatic NAT configuration (helper is a known peer identity in the DHT)
let helper_identity = "abc123..."; // hex-encoded peer identity
let status = node.configure_nat(&helper_identity).await?;
if status.is_public {
    // directly reachable, no relay needed
} else {
    // behind NAT, relay path configured
}

// Alternative: Enable mesh-mediated signaling (no dedicated relay connection)
let mut rx = node.enable_mesh_signaling().await;
while let Some(signal) = rx.recv().await {
    // handle the incoming relay signal
}
```
## Architecture

```text
┌─────────────────────────────────────────────────────────────────────┐
│                                Node                                 │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌────────────┐  │
│  │  GossipSub  │  │   Crypto    │  │     DHT     │  │   Relay    │  │
│  │  (PubSub)   │  │ (Identity)  │  │ (Discovery) │  │  (Client)  │  │
│  └──────┬──────┘  └─────────────┘  └──────┬──────┘  └─────┬──────┘  │
│         │                                 │               │         │
│  ┌──────┴─────────────────────────────────┴───────────────┴──────┐  │
│  │                            RpcNode                             │  │
│  │              (Connection pooling, request routing)             │  │
│  └───────────────────────────────┬───────────────────────────────┘  │
│  ┌───────────────────────────────┴───────────────────────────────┐  │
│  │                           SmartSock                            │  │
│  │  (Path probing, relay tunnels, virtual addressing, QUIC mux)  │  │
│  └───────────────────────────────┬───────────────────────────────┘  │
│  ┌───────────────────────────────┴───────────────────────────────┐  │
│  │                         QUIC (Quinn)                           │  │
│  └───────────────────────────────┬───────────────────────────────┘  │
│  ┌───────────────────────────────┴───────────────────────────────┐  │
│  │                   UDP Socket + Relay Server                    │  │
│  └────────────────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────────────┘
```
## Module Overview

| Module | Description |
|---|---|
| `node` | High-level facade exposing the complete public API |
| `transport` | SmartSock with path probing, relay tunnels, and virtual addresses |
| `rpc` | Connection pooling, RPC dispatch, and actor-based state management |
| `dht` | Kademlia-style DHT with latency tiering, adaptive parameters, and peer discovery |
| `gossipsub` | GossipSub v1.1/v1.2 epidemic broadcast with peer scoring |
| `relay` | UDP relay server and client with mesh-mediated signaling for NAT traversal |
| `crypto` | Ed25519 certificates, identity verification, custom TLS |
| `identity` | Keypairs, endpoint records, and signed address publication |
| `protocols` | Protocol trait definitions (`DhtNodeRpc`, `GossipSubRpc`, `RelayRpc`, `PlainRpc`) |
| `messages` | Protocol message types and bounded serialization |
## Core Concepts

### Identity (Ed25519 Public Keys)

Every node has a cryptographic identity derived from an Ed25519 keypair:

```rust
// Bind address is illustrative.
let node = Node::bind("0.0.0.0:0").await?;
let identity: String = node.identity; // 64 hex characters (32 bytes)
let keypair = node.keypair;           // Access for signing
```
Identities are:
- Self-certifying — The identity IS the public key
- Collision-resistant — 256-bit space makes collisions infeasible
- Verifiable — Every connection verifies peer identity via mTLS
### Contact

A `Contact` represents a reachable peer.
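Based on the surrounding sections, a contact record bundles a peer's identity with its published addresses and designated relays. A minimal sketch (field names here are assumptions, not Korium's actual definition):

```rust
use std::net::SocketAddr;

// Hypothetical shape of a contact record; the real `Contact` in Korium's
// `identity` module may differ.
#[derive(Debug, Clone)]
pub struct Contact {
    pub identity: String,           // hex-encoded Ed25519 public key (64 chars)
    pub addresses: Vec<SocketAddr>, // published direct addresses
    pub relays: Vec<String>,        // identities of designated relay peers
}
```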
### SmartAddr (Virtual Addressing)

SmartSock maps identities to virtual IPv6 addresses in the `fd00:c0f1::/32` range:

```text
Identity (32 bytes) → blake3 hash → fd00:c0f1:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
```
This enables:
- Transparent path switching — QUIC sees stable addresses while SmartSock handles path changes
- Relay abstraction — Applications use identity-based addressing regardless of NAT status
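The mapping above can be sketched with just the standard library. The blake3 step is factored out and the digest taken as input to keep the example dependency-free; which digest bytes fill the remaining 96 bits is an assumption:

```rust
use std::net::Ipv6Addr;

// Sketch: place an identity's 32-byte hash digest into the fd00:c0f1::/32
// virtual range. Korium hashes the identity with blake3 first; the byte
// layout after the fixed /32 prefix is illustrative.
fn smart_addr(digest: &[u8; 32]) -> Ipv6Addr {
    let mut octets = [0u8; 16];
    octets[0] = 0xfd; // fd00:...
    octets[1] = 0x00;
    octets[2] = 0xc0; // ...c0f1: fixed /32 prefix
    octets[3] = 0xf1;
    octets[4..].copy_from_slice(&digest[..12]); // low 96 bits come from the hash
    Ipv6Addr::from(octets)
}
```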
### SmartConnect

Automatic connection establishment with fallback:

1. Try a direct connection to the peer's published addresses
2. If direct fails, use the peer's designated relays
3. Configure a relay tunnel and establish the QUIC connection through it

```rust
// SmartConnect handles all this internally (argument shape is illustrative)
let conn = node.connect(&peer_identity).await?;
```
## NAT Traversal

### Mesh-First Relay Model
Korium uses a mesh-first relay model where any reachable mesh peer can act as a relay:
- No dedicated relay servers — Any publicly reachable node serves as a relay
- Mesh-mediated signaling — Relay signals forwarded through GossipSub mesh
- Opportunistic relaying — Connection attempts try mesh peers as relays
- Zero configuration — Works automatically when mesh peers are available
### How SmartSock Works
SmartSock implements transparent NAT traversal:
- Path Probing — Periodic probes measure RTT to all known paths
- Path Selection — Best path chosen (direct preferred, relay as fallback)
- Relay Tunnels — UDP packets wrapped in CRLY frames through relay
- Automatic Upgrade — Switch from relay to direct when hole-punch succeeds
### Protocol Headers

#### Path Probe (SMPR)

```text
┌──────────┬──────────┬──────────┬──────────────┐
│  Magic   │   Type   │  Tx ID   │  Timestamp   │
│ 4 bytes  │  1 byte  │ 8 bytes  │   8 bytes    │
└──────────┴──────────┴──────────┴──────────────┘
```
#### Relay Frame (CRLY)

```text
┌──────────┬──────────────┬──────────────────────┐
│  Magic   │  Session ID  │     QUIC Payload     │
│ 4 bytes  │   16 bytes   │      (variable)      │
└──────────┴──────────────┴──────────────────────┘
```
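The relay-frame layout lends itself to a direct encoder. A sketch, assuming the 4-byte magic is the ASCII bytes `"CRLY"` (the table above fixes only the field widths):

```rust
// Sketch of the relay-frame layout above. The magic value b"CRLY" is an
// assumption; the document specifies only a 4-byte magic field.
fn encode_relay_frame(session_id: &[u8; 16], quic_payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + 16 + quic_payload.len());
    frame.extend_from_slice(b"CRLY");      // 4-byte magic
    frame.extend_from_slice(session_id);   // 16-byte session ID
    frame.extend_from_slice(quic_payload); // variable-length QUIC payload
    frame
}
```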
### Path Selection Algorithm

```text
if direct_path.rtt + 10ms < current_path.rtt:
    switch to direct_path
elif relay_path.rtt + 50ms < direct_path.rtt:
    switch to relay_path (relay gets a 50ms handicap)
```
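The same rule in Rust form, as a sketch (the real SmartSock state machine tracks more than RTT):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum PathKind {
    Current,
    Direct,
    Relay,
}

// Sketch of the selection rule above: a direct path must beat the current
// path by 10 ms, and a relay path carries a 50 ms handicap against direct.
fn select_path(current_rtt: u32, direct_rtt: u32, relay_rtt: u32) -> PathKind {
    if direct_rtt + 10 < current_rtt {
        PathKind::Direct
    } else if relay_rtt + 50 < direct_rtt {
        PathKind::Relay
    } else {
        PathKind::Current
    }
}
```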
## DHT (Distributed Hash Table)

### Kademlia Implementation
The DHT is used internally for peer discovery and address publication:
- 256 k-buckets with configurable k (default: 20, adaptive: 10-30)
- Iterative lookups with configurable α (default: 3, adaptive: 2-5)
- S/Kademlia PoW: Identity generation requires Proof-of-Work for Sybil resistance
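A 24-bit proof-of-work check of the usual leading-zero-bits form can be sketched as follows; whether Korium applies exactly this predicate, and to which digest, is an assumption:

```rust
// Sketch: require the first `bits` bits of a hash digest to be zero — the
// standard S/Kademlia-style difficulty test. The digest and predicate that
// Korium actually checks are not specified here.
fn meets_difficulty(digest: &[u8], bits: u32) -> bool {
    let mut remaining = bits;
    for &byte in digest {
        if remaining == 0 {
            return true;
        }
        if remaining >= 8 {
            if byte != 0 {
                return false;
            }
            remaining -= 8;
        } else {
            // check only the top `remaining` bits of this byte
            return byte >> (8 - remaining) == 0;
        }
    }
    remaining == 0
}
```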
### Key Operations

```rust
// Argument shapes are illustrative.

// Find peers near a target identity
let peers = node.find_peers(&target_identity).await?;

// Resolve a peer's published contact record
let contact = node.resolve(&peer_identity).await?;

// Publish your address for discovery
node.publish_address().await?;
```
### Latency Tiering
The DHT implements Coral-inspired latency tiering:
- RTT samples collected per /16 IP prefix (IPv4) or /32 prefix (IPv6)
- K-means clustering groups prefixes into 1-7 latency tiers
- Tiered lookups prefer faster prefixes for lower latency
- LRU-bounded — tracks up to 10,000 active prefixes (~1MB memory)
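The prefix bucketing above can be sketched as a key function; the internal key representation Korium uses is an assumption:

```rust
use std::net::IpAddr;

// Sketch: aggregate RTT samples per network prefix rather than per peer,
// using the /16 prefix for IPv4 and the /32 prefix for IPv6 as described above.
fn tier_key(ip: IpAddr) -> u32 {
    match ip {
        IpAddr::V4(v4) => {
            let o = v4.octets();
            (u32::from(o[0]) << 8) | u32::from(o[1]) // top 16 bits
        }
        IpAddr::V6(v6) => {
            let o = v6.octets();
            (u32::from(o[0]) << 24)
                | (u32::from(o[1]) << 16)
                | (u32::from(o[2]) << 8)
                | u32::from(o[3]) // top 32 bits
        }
    }
}
```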
## Scalability (10M+ Nodes)
Korium is designed to scale to millions of concurrent peers. Key design decisions enable efficient operation at scale:
### Memory Efficiency (Per-Node at 10M Network)
Each node uses constant memory regardless of network size:
| Component | Memory | Design |
|---|---|---|
| Routing table | ~640 KB | 256 buckets × 20 contacts |
| RTT tiering | ~1 MB | /16 prefix-based (not per-peer) |
| Passive view | ~13 KB | 100 recovery candidates |
| Connection cache | ~200 KB | 1,000 LRU connections |
| Peer scoring | ~1 MB | 10K active peers scored |
| Message dedup | ~2 MB | 10K source sequence windows |
| Total | ~5 MB | Bounded, scales to 10M+ nodes |
### DHT Performance
| Metric | Value | Notes |
|---|---|---|
| Lookup hops | O(log₂ N) ≈ 23 | Standard Kademlia complexity |
| Parallel queries (α) | 2-5 adaptive | Reduces under congestion |
| Bucket size (k) | 10-30 adaptive | Increases with churn |
| Routing contacts | ~5,120 max | 256 buckets × 20 |
### Korium vs Standard Kademlia
| Feature | Standard Kademlia | Korium | Benefit |
|---|---|---|---|
| Bucket size | Fixed k=20 | Adaptive 10-30 | Handles churn spikes |
| Concurrency | Fixed α=3 | Adaptive 2-5 | Load shedding |
| RTT optimization | ❌ None | /16 prefix tiering | Lower latency paths |
| Sybil protection | ❌ Basic | S/Kademlia PoW + per-peer limits | Eclipse resistant |
| Gossip layer | ❌ None | GossipSub v1.1/v1.2 | Fast broadcast, scoring |
| NAT traversal | ❌ None | SmartSock + mesh relays | Works behind NAT |
| Identity | SHA-1 node IDs | Ed25519 + PoW | Self-certifying, Sybil-resistant |
### Scaling Boundaries (Per-Node)
These limits are per-node, not network-wide. With 10M nodes, the network's aggregate capacity scales linearly:
| Parameter | Per-Node Limit | At 10M Nodes | Notes |
|---|---|---|---|
| Routing contacts | ~5,120 | N/A | O(log N) = 23 hops at 10M |
| Contact records | 100K entries | 1 trillion | Distributed across DHT |
| Scored peers | 10,000 | 100 billion | Per-node active peer set |
| PubSub topics | 10,000 | 100 billion | Topics span multiple nodes |
| Peers per topic | 1,000 | N/A | Gossip efficiency bound |
| Relay sessions | 10,000 | 100 billion | Per-relay server |
### Key Design Decisions

- Prefix-based RTT — Tracking RTT per /16 IP prefix instead of per peer reduces memory from O(N) to O(65K) while maintaining routing quality through statistical sampling.
- Adaptive parameters — k and α adjust automatically based on the observed churn rate, preventing cascade failures during network instability.
- Bounded data structures — All caches use LRU eviction with fixed caps, ensuring memory stays constant regardless of network size.
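The bounded-cache pattern can be sketched with a small LRU over a `VecDeque`; Korium uses the `lru` crate, so this stand-in only illustrates the fixed-cap eviction behavior:

```rust
use std::collections::VecDeque;

// Sketch of LRU eviction with a hard capacity: inserts beyond `cap`
// evict the least-recently-used entry, so memory stays bounded.
struct BoundedCache<K: PartialEq, V> {
    cap: usize,
    entries: VecDeque<(K, V)>, // front = most recently used
}

impl<K: PartialEq, V> BoundedCache<K, V> {
    fn new(cap: usize) -> Self {
        Self { cap, entries: VecDeque::new() }
    }

    fn insert(&mut self, key: K, value: V) {
        if let Some(i) = self.entries.iter().position(|(k, _)| *k == key) {
            self.entries.remove(i); // refresh an existing key
        }
        self.entries.push_front((key, value));
        if self.entries.len() > self.cap {
            self.entries.pop_back(); // evict the least recently used entry
        }
    }

    fn contains(&self, key: &K) -> bool {
        self.entries.iter().any(|(k, _)| k == key)
    }
}
```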
## GossipSub (PubSub)

### GossipSub v1.1/v1.2 Implementation
Korium implements the full GossipSub v1.1 specification with v1.2 extensions:
- Peer Scoring (P1-P7): Time in mesh, message delivery, invalid messages, IP colocation
- Adaptive Gossip: D_score mesh quotas, Opportunistic Grafting, Flood Publishing
- IDontWant (v1.2): Bandwidth optimization for large messages
- Mesh Management: D, D_lo, D_hi, D_out, D_score parameters
- Prune Backoff: Exponential backoff for pruned peers
### Epidemic Broadcast
GossipSub implements efficient topic-based publish/subscribe:
- Mesh overlay — Each topic maintains a mesh of connected peers
- Eager push — Messages forwarded immediately to mesh peers
- Flood publishing — Publishers send to all peers above publish threshold
- Gossip protocol — IHave/IWant metadata exchange for reliability
- Relay signaling — NAT traversal signals forwarded through mesh peers
### Message Flow

```text
Publisher → Mesh Push → Subscribers
                ↓
          Gossip (IHave)
                ↓
          IWant requests
                ↓
         Message delivery
```
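The IHave/IWant step reduces to a set difference: on receiving an IHAVE, a peer requests only the message IDs it has not yet seen. A sketch, with message IDs shown as strings for simplicity:

```rust
use std::collections::HashSet;

// Sketch of the gossip pull step above: an IHAVE advertises message IDs,
// and the receiver replies with an IWANT for just the IDs it lacks.
fn iwant(seen: &HashSet<String>, ihave: &[String]) -> Vec<String> {
    ihave
        .iter()
        .filter(|id| !seen.contains(*id))
        .cloned()
        .collect()
}
```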
### Message Authentication

All published messages include Ed25519 signatures:

```rust
// Topic and payload are illustrative.

// Messages are signed with the publisher's keypair
node.publish("chat", b"hello".to_vec()).await?;

// Signatures are verified on receipt (invalid messages rejected)
while let Some(msg) = rx.recv().await {
    // msg.from is the verified sender
}
```
### Rate Limiting
| Limit | Value |
|---|---|
| Publish rate | 100/sec |
| Per-peer receive rate | 50/sec |
| Max message size | 64 KB |
| Max topics | 10,000 |
| Max peers per topic | 1,000 |
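Caps like the 100/sec publish rate are commonly enforced with a token bucket; a sketch of that mechanism (the limiter Korium actually uses is not specified here):

```rust
// Token-bucket sketch: `rate` tokens refill per second up to `rate` capacity;
// each message spends one token, and sends fail once the bucket is empty.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(rate: f64) -> Self {
        Self { capacity: rate, tokens: rate, refill_per_sec: rate }
    }

    /// `elapsed_secs` is the time since the previous call.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens =
            (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```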
## Security

### Defense Layers
| Layer | Protection |
|---|---|
| Identity | Ed25519 keypairs, identity = public key |
| Transport | Mutual TLS on all QUIC connections |
| RPC | Identity verification on every request |
| Storage | Per-peer quotas, rate limiting, content validation |
| Routing | Rate-limited insertions, ping verification, S/Kademlia PoW |
| PubSub | Message signatures, replay detection, peer scoring (P1-P7), IP colocation (P6) |
### Security Constants

| Constant | Value | Purpose |
|---|---|---|
| `MAX_VALUE_SIZE` | 1 MB | DHT value limit |
| `MAX_RESPONSE_SIZE` | 1 MB | RPC response limit |
| `MAX_SESSIONS` | 10,000 | Relay session limit |
| `MAX_SESSIONS_PER_IP` | 50 | Per-IP relay rate limit |
| `PER_PEER_STORAGE_QUOTA` | 1 MB | DHT storage per peer |
| `PER_PEER_ENTRY_LIMIT` | 100 | DHT entries per peer |
| `MAX_CONCURRENT_STREAMS` | 64 | QUIC streams per connection |
| `POW_DIFFICULTY` | 24 bits | Identity PoW (Sybil resistance) |
## CLI Usage

### Running a Node

```bash
# Start a node on a random port
# Start with specific bind address
# Bootstrap from existing peer
# With debug logging
RUST_LOG=debug
```
### Chatroom Example

```bash
# Terminal 1: Start first node
# Terminal 2: Join with bootstrap (copy the bootstrap string from Terminal 1)
```

The chatroom demonstrates:

- PubSub messaging (`/room` messages)
- Direct messaging (`/dm <identity> <message>`)
- Peer discovery (`/peers`)
## Testing

```bash
# Run all tests
# Run with logging
RUST_LOG=debug
# Run specific test
# Run integration tests
# Run relay tests
# Spawn local cluster (7 nodes)
```
## Dependencies

| Crate | Purpose |
|---|---|
| `quinn` | QUIC implementation |
| `tokio` | Async runtime |
| `ed25519-dalek` | Ed25519 signatures |
| `blake3` | Fast cryptographic hashing |
| `rustls` | TLS implementation |
| `bincode` | Binary serialization |
| `lru` | LRU caches |
| `tracing` | Structured logging |
| `rcgen` | X.509 certificate generation |
| `x509-parser` | Certificate parsing |
## References

### NAT Traversal with QUIC

- Liang, J., et al. (2024). *Implementing NAT Hole Punching with QUIC.* VTC2024-Fall. arXiv:2408.01791. Demonstrates QUIC hole-punching advantages and connection migration saving 2 RTTs.

### Distributed Hash Tables

- Freedman, M. J., et al. (2004). *Democratizing Content Publication with Coral.* NSDI '04. Introduced the "sloppy" DHT with latency-based clustering, the inspiration for Korium's tiering system.
- Baumgart, I. & Mies, S. (2007). *S/Kademlia: A Practicable Approach Towards Secure Key-Based Routing.* ICPP '07. The S/Kademlia specification that Korium implements for Sybil-resistant identity generation via Proof-of-Work.

### GossipSub / PlumTree

- Vyzovitis, D., et al. (2020). *GossipSub: Attack-Resilient Message Propagation in the Filecoin and ETH2.0 Networks.* The GossipSub v1.1 specification that Korium's PubSub implementation follows, including peer scoring (P1-P7), Adaptive Gossip, and mesh management.
- Leitão, J., Pereira, J., & Rodrigues, L. (2007). *Epidemic Broadcast Trees.* SRDS '07. The PlumTree paper that influenced GossipSub's design, combining gossip reliability with efficient message propagation.
## License
MIT License - see LICENSE for details.