# Netabase
A peer-to-peer networking layer built on libp2p with integrated type-safe storage, enabling distributed applications with automatic data synchronization across native and WASM environments.
This crate is still in early development and will change frequently as it stabilises. It is not advised to use it in a production environment until then.
## Roadmap

### Version 1.0
- Complete WASM support (WebRTC, IndexedDB)
- Connection profiles (local/global/hybrid modes)
- Data synchronization with conflict resolution
- Relay support for NAT traversal
- Advanced query API
- Metrics and monitoring
- Migration tools
## Paxos Consensus Integration

Netabase has begun integrating paxakos for distributed consensus. Implementation roadmap:

### Phase 1: Core Trait Implementation

- [ ] Implement `LogEntry` trait for netabase_store definitions
  - Add `id()` method returning a unique identifier for each entry
  - Ensure serialization compatibility with bincode
  - Update `NetabaseDefinitionTrait` to require a `LogEntry` bound
- [ ] Create `State` implementation for distributed state management
  - `apply(&mut self, entry: &LogEntry)` - process entries and update state
  - `freeze(&self)` - create an immutable snapshot of the current state
  - `cluster_at(&self, round: RoundNum)` - return cluster membership at a round
  - `concurrency(&self)` - return the parallelism level for round processing
- [ ] Implement `NodeInfo` trait for peer identification
  - Define node identity compatible with libp2p `PeerId`
  - Add serialization for network transmission
  - Integrate with existing peer discovery mechanisms
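As a rough illustration of what the planned `State` implementation needs to do, here is a self-contained sketch of `apply` and `freeze` over a key-value state. These are not the real paxakos traits; the types, fields, and `id` handling are illustrative assumptions.

```rust
use std::collections::BTreeMap;

// Hypothetical log entry: a key-value write with a unique id,
// mirroring the planned `LogEntry::id()` requirement.
#[derive(Clone, Debug)]
struct KvEntry {
    id: u64,
    key: String,
    value: String,
}

#[derive(Clone, Debug, Default)]
struct KvState {
    map: BTreeMap<String, String>,
    last_applied: Option<u64>,
}

impl KvState {
    // `apply` processes an entry and updates the state.
    fn apply(&mut self, entry: &KvEntry) {
        self.map.insert(entry.key.clone(), entry.value.clone());
        self.last_applied = Some(entry.id);
    }

    // `freeze` creates an immutable snapshot of the current state.
    fn freeze(&self) -> KvState {
        self.clone()
    }
}

fn main() {
    let mut state = KvState::default();
    state.apply(&KvEntry { id: 1, key: "a".into(), value: "1".into() });
    let snapshot = state.freeze();
    state.apply(&KvEntry { id: 2, key: "a".into(), value: "2".into() });

    // The snapshot is unaffected by entries applied after it was taken.
    assert_eq!(snapshot.map.get("a"), Some(&"1".to_string()));
    assert_eq!(state.map.get("a"), Some(&"2".to_string()));
}
```

The key property the sketch demonstrates is that frozen snapshots stay stable while the live state keeps applying log entries.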
### Phase 2: libp2p Communicator

- [ ] Create `PaxosCommunicator` for libp2p integration
  - Implement the `Communicator` trait with 12 associated types:
    - `Node`, `RoundNum`, `CoordNum`, `LogEntry`, `Error`
    - Future types: `SendPrepare`, `SendProposal`, `SendCommit`, `SendCommitById`
    - Vote metadata: `Abstain`, `Yea`, `Nay`
  - Implement the 4 required message-sending methods:
    - `send_prepare(coord, round, receivers)` - broadcast prepare messages
    - `send_proposal(coord, round, entry, receivers)` - propose a log entry
    - `send_commit(coord, round, entry, receivers)` - commit with the full entry
    - `send_commit_by_id(coord, round, entry_id, receivers)` - commit by ID only
- [ ] Create custom libp2p protocol handler (`/paxos/1.0.0`)
  - Define request/response message types using serde
  - Integrate with libp2p's `request_response` behaviour
  - Handle protocol message routing through the swarm
  - Implement timeout and retry logic
- [ ] Add `PaxosBehaviour` to the libp2p swarm
  - Create a behaviour struct wrapping the paxakos `Node`
  - Implement the `NetworkBehaviour` trait
  - Handle protocol events and route them to paxakos
  - Integrate with existing Kademlia and Identify behaviours
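To make the four send methods concrete, here is a toy sketch of the wire messages they would carry over the `/paxos/1.0.0` protocol and a fan-out helper. The enum, field types, and receiver representation are illustrative assumptions, not the real paxakos or libp2p types.

```rust
// Sketch of the messages the four `Communicator` send methods carry.
#[derive(Clone, Debug, PartialEq)]
enum PaxosMessage {
    Prepare { coord: u64, round: u64 },
    Proposal { coord: u64, round: u64, entry: Vec<u8> },
    Commit { coord: u64, round: u64, entry: Vec<u8> },
    CommitById { coord: u64, round: u64, entry_id: u64 },
}

// Toy "communicator": fans a prepare message out to each receiver,
// returning one (receiver, message) pair per recipient.
fn send_prepare(coord: u64, round: u64, receivers: &[&str]) -> Vec<(String, PaxosMessage)> {
    receivers
        .iter()
        .map(|r| (r.to_string(), PaxosMessage::Prepare { coord, round }))
        .collect()
}

fn main() {
    let out = send_prepare(1, 42, &["nodeA", "nodeB"]);
    assert_eq!(out.len(), 2);
    assert_eq!(out[0].1, PaxosMessage::Prepare { coord: 1, round: 42 });
}
```

In the real implementation each pair would become a `request_response` request through the swarm, and the returned futures would resolve to the `Abstain`/`Yea`/`Nay` vote metadata.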
### Phase 3: Node Lifecycle Integration

- [ ] Integrate the paxakos node with the netabase lifecycle
  - Initialize the paxakos `Node` in `start_swarm()`
  - Use `NodeBuilder` with the custom `Communicator` and `State`
  - Handle graceful shutdown in `stop_swarm()`
  - Expose the node handle through the netabase API
- [ ] Implement consensus-backed operations
  - `put_record_consensus(&mut self, record: D)` - append via paxakos
  - Handle `Commit<S, R, P>` futures and apply outcomes
  - Propagate consensus results through the event system
  - Provide a fallback to the DHT for non-consensus operations
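The planned split between consensus-backed writes and plain DHT writes amounts to a per-operation dispatch. A minimal sketch, with invented names (`Consistency`, `route_put`) standing in for the real API:

```rust
// Illustrative consistency selector: strong writes go through the Paxos
// log, eventual writes go to the Kademlia DHT. Not the real netabase API.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Consistency {
    Strong,   // append via consensus (put_record_consensus)
    Eventual, // publish to the DHT (put_record)
}

// Returns which subsystem would handle the write.
fn route_put(consistency: Consistency) -> &'static str {
    match consistency {
        Consistency::Strong => "paxos::append",
        Consistency::Eventual => "dht::put_record",
    }
}

fn main() {
    assert_eq!(route_put(Consistency::Strong), "paxos::append");
    assert_eq!(route_put(Consistency::Eventual), "dht::put_record");
}
```

This mirrors the design decision noted below that users choose a consistency model per operation rather than globally.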
### Phase 4: Optional Features & Optimizations

- [ ] Add paxakos decorations
  - `heartbeats` - node liveness monitoring
  - `autofill` - automatic log gap filling
  - `catch-up` - synchronize lagging nodes
  - `master-leases` - optimize read-only operations
- [ ] Implement cluster management
  - Dynamic cluster membership changes
  - Node addition/removal protocols
  - Quorum reconfiguration
- [ ] Performance optimization
  - Batch log entry applications
  - Concurrent round processing
  - Message compression and deduplication
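For the quorum reconfiguration work, the standard majority rule applies: a cluster of n nodes needs floor(n/2) + 1 acceptors per round. A tiny helper makes the arithmetic explicit:

```rust
// Majority quorum size for a cluster of `n` nodes: floor(n/2) + 1.
// With 5 nodes a round needs 3 acceptors; with 4 nodes it still needs 3,
// which is why odd cluster sizes are preferred.
fn majority_quorum(n: usize) -> usize {
    n / 2 + 1
}

fn main() {
    assert_eq!(majority_quorum(3), 2);
    assert_eq!(majority_quorum(4), 3);
    assert_eq!(majority_quorum(5), 3);
}
```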
### Phase 5: Testing & Documentation

- [ ] Comprehensive testing
  - Multi-node consensus tests with nextest
  - Network partition tolerance tests
  - Leader election and failover tests
  - Performance benchmarks with criterion
- [ ] Documentation & examples
  - Paxos API documentation
  - Consensus vs. DHT operation guide
  - Example: distributed counter with strong consistency
  - Example: replicated state machine
### Implementation Notes

**Key Design Decisions:**

- Paxos consensus is optional via a `paxos` feature flag (native-only)
- Coexists with existing DHT-based operations (eventual consistency)
- Users choose a consistency model per operation
- WebRTC/WASM compatibility requires an alternative consensus implementation (Paxos uses threading)

**Dependencies:**

- `paxakos = "0.13.0"` (already added to netabase_store)
- libp2p request-response protocol for message transport
- Separate feature flag to avoid WASM compilation issues
## Features

### Current Features

- **P2P Networking:**
  - Built on libp2p for robust peer-to-peer communication
  - mDNS for automatic local peer discovery
  - Kademlia DHT for distributed record storage and discovery
  - Identify protocol for peer information exchange
  - Connection limits and management
- **Cross-Platform Support:**
  - Native (TCP, QUIC, mDNS)
  - WASM (WebRTC, WebSocket) - coming soon
  - Unified API across platforms
- **Integrated Storage:**
  - Built on `netabase_store` for type-safe data management
  - Multiple backend support (Sled, Redb)
  - Automatic data persistence with secondary key indexing
  - libp2p `RecordStore` integration
- **Record Distribution:**
  - Publish records to the DHT network
  - Query records from remote peers
  - Automatic record replication
  - Provider advertisement and discovery
- **Type-Safe Operations:**
  - Compile-time verification of network operations
  - Schema-based data models with macros
  - Type-safe record keys and queries
- **Event System:**
  - Broadcast channels for network events
  - Multiple concurrent subscribers
  - Real-time peer discovery notifications
  - Connection and behaviour events
## Installation

Add to your `Cargo.toml` (dependency names below are reconstructed from context; verify against the published crates):

```toml
[dependencies]
netabase = "0.0.3"
netabase_store = "0.0.3"
netabase_macros = "0.0.3"

# Required for the macros to work
bincode = { version = "2.0", features = ["serde"] }
serde = { version = "1.0", features = ["derive"] }
strum = { version = "0.27.2", features = ["derive"] }
derive_more = { version = "2.0.1", features = ["from", "try_into", "into"] }

# Runtime dependencies
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
```
## Quick Start

### 1. Define Your Data Model

```rust
use netabase::netabase_definition_module;

// The macro generates typed keys and store glue for the models inside
// the module. Struct fields and attribute names here are illustrative.
#[netabase_definition_module]
mod models {
    pub struct Message {
        #[primary_key]
        pub id: String,
        pub author: String,
        pub content: String,
    }
}

use models::*;
```

### 2. Initialize Netabase

```rust
use netabase::Netabase;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create an instance with defaults and start the networking swarm
    let mut netabase = Netabase::new()?;
    netabase.start_swarm().await?;
    Ok(())
}
```

### 3. Store and Publish Records

```rust
// Create a message (fields are illustrative)
let message = Message {
    id: "msg-1".to_string(),
    author: "alice".to_string(),
    content: "hello, netabase".to_string(),
};

// Store locally and publish to the DHT
let result = netabase.put_record(message).await?;
println!("Published: {result:?}");
```

### 4. Query Records

```rust
// Query a specific record by key (the key type is generated by the macro)
let key = MessageKey::Primary("msg-1".to_string());
let result = netabase.get_record(key).await?;

// Query local records (with an optional result limit)
let local_messages = netabase.query_local_records(Some(100)).await?;
println!("Found {} local messages", local_messages.len());
```

### 5. Provider Management

```rust
// Advertise as a provider for a key
let key = MessageKey::Primary("msg-1".to_string());
netabase.start_providing(key.clone()).await?;
println!("Now providing {key:?}");

// Find providers for a key
let providers = netabase.get_providers(key).await?;
println!("Providers: {providers:?}");
```

### 6. Listen for Network Events

```rust
use netabase::NetabaseSwarmEvent;

// Subscribe to network events
let mut event_receiver = netabase.subscribe_to_broadcasts();

// Spawn a background task to handle events
tokio::spawn(async move {
    while let Ok(event) = event_receiver.recv().await {
        println!("Network event: {event:?}");
    }
});
```
## Advanced Usage

### Multi-Model Networks

Netabase supports multiple data models in a single network:

```rust
let mut app = Netabase::new_with_path("./app_data")?;
app.start_swarm().await?;

// Each model type is independently managed
app.put_record(user).await?;
app.put_record(message).await?;
```

### Custom Storage Backend

```rust
// Type names are illustrative; see the crate docs for the exact API
use netabase::{DatabaseBackend, Netabase, NetabaseConfig};

// Use Redb instead of the default Sled backend
let config = NetabaseConfig::with_backend(DatabaseBackend::Redb);
let netabase = Netabase::new_with_config(config)?;

// Or specify both path and backend
let netabase = Netabase::new_with_path_and_backend("./data", DatabaseBackend::Redb)?;
```

### DHT Mode Management

```rust
// Get the current DHT mode
let mode = netabase.get_mode().await?;
println!("Current mode: {mode:?}");

// Switch to client mode (read-only, lower resource usage)
netabase.set_mode(Mode::Client).await?;

// Switch to server mode (full participation)
netabase.set_mode(Mode::Server).await?;
```

### Bootstrap and Peer Management

```rust
use libp2p::{Multiaddr, PeerId};

// Add a known peer
let peer_id: PeerId = "12D3KooW...".parse()?;
let address: Multiaddr = "/ip4/192.168.1.100/tcp/4001".parse()?;
netabase.add_address(peer_id.clone(), address).await?;

// Bootstrap to join the DHT network
let result = netabase.bootstrap().await?;
println!("Bootstrap result: {result:?}");

// Remove a peer
netabase.remove_peer(peer_id).await?;
```
## Architecture

### Components

- **Netabase Struct**: Main API entry point
  - Manages lifecycle (start/stop swarm)
  - Provides typed record operations
  - Handles event subscriptions
- **Network Layer** (internal):
  - `NetabaseBehaviour`: libp2p network behaviour
  - `NetabaseStore`: Unified storage backend for the DHT
  - Swarm handlers for command and event processing
- **Storage Layer** (`netabase_store`):
  - Type-safe key-value stores
  - Backend abstraction (Sled/Redb/IndexedDB)
  - Secondary key indexing
- **Event System**:
  - Broadcast channels for network events
  - Multiple concurrent subscribers
  - Zero-cost resubscribe
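The "multiple concurrent subscribers" pattern can be sketched in plain std as a fan-out broadcaster (the real event system uses broadcast channels; `Broadcaster` and its methods here are illustrative stand-ins):

```rust
use std::sync::mpsc;

// Minimal fan-out broadcaster: every subscriber gets its own channel,
// and `send` clones the event to each of them.
struct Broadcaster<T: Clone> {
    subscribers: Vec<mpsc::Sender<T>>,
}

impl<T: Clone> Broadcaster<T> {
    fn new() -> Self {
        Self { subscribers: Vec::new() }
    }

    // Each call hands back an independent receiver.
    fn subscribe(&mut self) -> mpsc::Receiver<T> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.push(tx);
        rx
    }

    fn send(&self, event: T) {
        for sub in &self.subscribers {
            // Ignore subscribers that have been dropped.
            let _ = sub.send(event.clone());
        }
    }
}

fn main() {
    let mut bus = Broadcaster::new();
    let rx1 = bus.subscribe();
    let rx2 = bus.subscribe();

    bus.send("peer_discovered".to_string());

    // Both subscribers observe the same event.
    assert_eq!(rx1.recv().unwrap(), "peer_discovered");
    assert_eq!(rx2.recv().unwrap(), "peer_discovered");
}
```

A real broadcast channel additionally bounds the buffer and lets lagging receivers detect missed events, which an mpsc fan-out does not.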
### Data Flow

```text
Application
    ↓ put_record()
Netabase
    ├─→ Command Channel → Swarm Handler
    │                         ↓
    │                  NetabaseStore (local)
    │                         ↓
    │                    Kademlia DHT
    │                         ↓
    │                    Remote Peers
    │
    └─→ Broadcast Channel ← Swarm Events
              ↓
        Event Subscribers
```
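The command-channel leg of the diagram can be sketched with a std mpsc channel and a handler thread standing in for the swarm handler (the `Command` enum and its variants are illustrative, not the internal netabase types):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Commands sent from the API side to the handler, mirroring the
// "Command Channel → Swarm Handler" hop (greatly simplified).
enum Command {
    Put(String, String),
    Get(String, mpsc::Sender<Option<String>>),
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Command>();

    // The "swarm handler": owns the local store and processes commands.
    let handler = thread::spawn(move || {
        let mut store: HashMap<String, String> = HashMap::new();
        while let Ok(cmd) = rx.recv() {
            match cmd {
                Command::Put(k, v) => {
                    store.insert(k, v);
                }
                Command::Get(k, reply) => {
                    let _ = reply.send(store.get(&k).cloned());
                }
                Command::Shutdown => break,
            }
        }
    });

    tx.send(Command::Put("greeting".into(), "hello".into())).unwrap();

    // Reads travel the same channel, with a reply channel for the result.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Command::Get("greeting".into(), reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), Some("hello".to_string()));

    tx.send(Command::Shutdown).unwrap();
    handler.join().unwrap();
}
```

The channel is what keeps API calls non-blocking: the caller enqueues a command and awaits the reply instead of holding a lock on the swarm.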
## Performance Considerations

- **Local-first**: All operations start with local storage (fast)
- **Async operations**: Network operations don't block
- **Efficient encoding**: Uses bincode for compact serialization
- **Channel-based**: Non-blocking communication between layers
- **Secondary key indexing**: O(m) queries, where m is the number of matching records
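The O(m) bound comes from the index mapping each secondary key directly to the primary keys of its matching records, so a lookup touches only the m matches rather than scanning the whole store. A toy sketch (field and type names are illustrative):

```rust
use std::collections::HashMap;

// Toy secondary index: secondary key (author) → primary keys of matches.
struct SecondaryIndex {
    by_author: HashMap<String, Vec<u64>>,
}

impl SecondaryIndex {
    fn new() -> Self {
        Self { by_author: HashMap::new() }
    }

    // Maintained on every insert so queries stay cheap.
    fn insert(&mut self, primary: u64, author: &str) {
        self.by_author.entry(author.to_string()).or_default().push(primary);
    }

    // Returns the m matching primary keys in O(m).
    fn query(&self, author: &str) -> &[u64] {
        self.by_author.get(author).map(|v| v.as_slice()).unwrap_or(&[])
    }
}

fn main() {
    let mut idx = SecondaryIndex::new();
    idx.insert(1, "alice");
    idx.insert(2, "bob");
    idx.insert(3, "alice");

    assert_eq!(idx.query("alice"), &[1, 3]);
    assert_eq!(idx.query("carol"), &[] as &[u64]);
}
```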
### Abstraction Overhead
Netabase builds on netabase_store for its storage layer, which provides excellent type safety and multi-backend support. However, this abstraction does come with some performance overhead (typically 5-10%). For applications where maximum performance is critical and you don't need the networking features, consider using netabase_store directly.
The main overhead sources are:
- Type conversions for DHT record storage
- libp2p's RecordStore trait implementation
- Channel-based async communication between layers
We're actively working to reduce this overhead while maintaining type safety and the clean API.
## Future Plans
**UniFFI Integration**: We're planning to add UniFFI support to enable using netabase from other languages (Python, Kotlin/Swift, etc.):
- Export generated model code to UniFFI
- Create FFI-safe API wrappers for all major operations
- Enable cross-language distributed applications
- Support for callbacks and async operations across language boundaries
This will make it possible to build distributed applications in Python, Swift, or Kotlin that can seamlessly communicate with Rust-based netabase nodes.
**P2P Network Profiles**: Planned features for easier distributed application development:
- Configurable connection profiles (local-only, DHT-backed, full mesh, etc.)
- Protocol abstraction for easier integration with different transport layers
- Automatic conflict resolution strategies (CRDT-based, last-write-wins, custom)
- Built-in data synchronization patterns
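Of the planned conflict-resolution strategies, last-write-wins is the simplest: on merge, the value with the newer timestamp survives. A minimal sketch (a real system would add a node-id tiebreaker and a reliable clock):

```rust
// Last-write-wins register: merging keeps the value with the newer
// timestamp; on a tie, the local value wins in this sketch.
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister {
    timestamp: u64,
    value: String,
}

impl LwwRegister {
    fn merge(self, other: LwwRegister) -> LwwRegister {
        if other.timestamp > self.timestamp { other } else { self }
    }
}

fn main() {
    let local = LwwRegister { timestamp: 10, value: "draft".into() };
    let remote = LwwRegister { timestamp: 12, value: "final".into() };

    // Merge is order-independent for distinct timestamps.
    let merged = local.merge(remote);
    assert_eq!(merged.value, "final");
}
```

CRDT-based strategies generalize this idea: merges are commutative and associative, so replicas converge regardless of delivery order.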
## Platform Support
| Feature | Native | WASM |
|---|---|---|
| TCP | ✅ | ❌ |
| QUIC | ✅ | ❌ |
| mDNS | ✅ | ❌ |
| Kad DHT | ✅ | 🚧 |
| Sled Backend | ✅ | ❌ |
| Redb Backend | ✅ | ❌ |
| IndexedDB | ❌ | 🚧 |
🚧 = Planned for future release
## Examples

See the `examples/` directory:

- `simple_mdns_chat.rs`: Complete chat application using mDNS discovery

```sh
cargo run --example simple_mdns_chat

# In another terminal
cargo run --example simple_mdns_chat
```
## Testing

```sh
# Run all tests (native)
cargo test

# Run a specific test
cargo test <test_name>

# Build with release optimizations
cargo build --release
```
## API Reference

### Main Methods

- `new()` - Create with defaults
- `new_with_path(path)` - Custom database path
- `new_with_config(config)` - Custom configuration
- `start_swarm()` - Start networking
- `stop_swarm()` - Shut down gracefully
- `subscribe_to_broadcasts()` - Get an event receiver

### Record Operations

- `put_record(model)` - Store and publish
- `get_record(key)` - Query the network
- `remove_record(key)` - Remove locally
- `query_local_records(limit)` - Query the local store

### Provider Operations

- `start_providing(key)` - Advertise as a provider
- `stop_providing(key)` - Stop advertising
- `get_providers(key)` - Find providers

### Network Management

- `bootstrap()` - Join the DHT network
- `add_address(peer_id, addr)` - Add a peer address
- `remove_address(peer_id, addr)` - Remove an address
- `remove_peer(peer_id)` - Remove a peer
- `get_mode()` - Query the DHT mode
- `set_mode(mode)` - Change the DHT mode
- `get_protocol_names()` - Get protocol info
## Testing

Netabase includes a comprehensive test suite to ensure reliability and correctness.

### Test Suite Overview

- **Unit Tests**: Core functionality tests
- **Integration Tests**: Multi-node P2P tests using `std::process::Command` (basic P2P, advanced DHT, and chat application tests)
- **Build Verification**: Ensures examples, doctests, and benchmarks compile
- **Network Topology Tests**: Inter-process P2P communication tests with various network configurations. Available tests:
  - `test_two_node_basic`: Simple two-node communication (5 messages)
  - `test_two_node_many_messages`: Two nodes exchanging 20 messages
  - `test_multi_sender_single_receiver`: 3 senders, 1 receiver
  - `test_message_content_integrity`: Verifies message content is preserved
- **WASM Compilation Tests**: Verifies WASM target compilation
- **Benchmarks**: Performance benchmarking
### Comprehensive Test Runner

Run all tests systematically using the provided Nushell script (make it executable first).
### Test Coverage
The test suite covers:
- ✅ mDNS Peer Discovery: Automatic local network peer discovery
- ✅ DHT Record Operations: Put/get records across nodes
- ✅ Provider Records: Advertising and querying content providers
- ✅ Bootstrap: Joining the DHT network
- ✅ Cross-Node Communication: Message passing between nodes
- ✅ Concurrent Operations: Multiple simultaneous operations
- ✅ Network Scalability: Tests with 2-15 nodes
- ✅ Record Replication: Data distribution across the network
- ✅ Local Storage: Query and persistence operations
- ✅ Event Subscription: Network event broadcasting
- ✅ Build Verification: Examples and doctests compilation
- ✅ Network Topology Tests: Inter-process P2P communication
- ⚠️ WASM Compilation: Currently failing (see WASM Support section)
### CI/CD Integration

For continuous integration, use:

```sh
# Quick test suite (no integration tests)
cargo test --lib

# Full test suite including integration tests
cargo test -- --test-threads=1
```

Note: Integration tests use `--test-threads=1` to avoid port conflicts when spawning multiple test nodes.
## Troubleshooting

### Peers Not Discovered

- mDNS only works on local networks
- Check firewall settings
- Ensure both peers are in server mode
- Try adding peers manually with `add_address()`

### Records Not Found

- Bootstrap to join the DHT network first
- Check whether you're in client or server mode
- Verify the record was published successfully
- Allow time for DHT propagation
## WASM Support

### Current Status

WASM support is under active development. The `wasm` feature exists but requires additional work to fully function.
### Known WASM Compilation Issues

The following issues prevent successful WASM compilation and need to be resolved:

#### 1. Storage Backend Abstraction Issue

**Problem**: The `IndexedDBStore` and `MemoryStore` implementations use sled-specific methods (`to_ivec()` and `from_ivec()`) that don't exist in the WASM context.

**Location**:

- `netabase_store/src/databases/indexeddb_store.rs:204`
- `netabase_store/src/databases/memory_store.rs:334, 376, 607`

**Error**:

```text
error[E0599]: no method named `to_ivec` found for type parameter `D`
error[E0599]: no function or associated item named `from_ivec` found for type parameter `D`
```

**Resolution Needed**:

- Create a platform-agnostic serialization trait that provides `to_vec()` and `from_vec()` for all backends
- Update the WASM backends to use the generic serialization methods
- Properly feature-gate sled-specific `IVec` usage
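The proposed platform-agnostic trait could look roughly like this sketch. The trait name and the `u32` impl are assumptions for illustration; the real fix would route the model types through bincode and keep the `IVec` conversions behind the `sled` feature gate.

```rust
// Sketch of a backend-neutral serialization trait: every stored type
// exposes `to_vec`/`from_vec`, so no backend needs sled's `IVec`.
trait StoreSerialize: Sized {
    fn to_vec(&self) -> Vec<u8>;
    fn from_vec(bytes: &[u8]) -> Option<Self>;
}

// Trivial example impl; real models would go through bincode.
impl StoreSerialize for u32 {
    fn to_vec(&self) -> Vec<u8> {
        self.to_le_bytes().to_vec()
    }

    fn from_vec(bytes: &[u8]) -> Option<Self> {
        // Fails cleanly on wrong-length input instead of panicking.
        bytes.try_into().ok().map(u32::from_le_bytes)
    }
}

fn main() {
    let bytes = 42u32.to_vec();
    assert_eq!(u32::from_vec(&bytes), Some(42));
}
```

With this in place, the sled backend would implement its `IVec` conversions in terms of `to_vec`/`from_vec` under `#[cfg(feature = "sled")]`, and the WASM backends would use the trait directly.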
#### 2. Feature Gating (Resolved)

**Previous Issue**: Some features were referenced but not properly defined.

**Resolution**: Feature gating has been fixed:

- The `sled`, `redb`, and `libp2p` features are now properly gated
- The `ToIVec` trait methods are now correctly gated on `feature = "sled"` instead of `feature = "native"`
- The `RecordStoreExt` trait is properly gated on `all(feature = "libp2p", not(target_arch = "wasm32"))`
### WASM TODO List

To complete WASM support, the following tasks must be completed:

- [ ] Fix the storage backend serialization abstraction (see issue #1 above)
- [x] Fix feature gating for the `sled`, `redb`, and `libp2p` features (resolved)
- [ ] Properly implement the IndexedDB storage backend for WASM
- [ ] Test the WebRTC transport in a WASM environment
- [ ] Test the WebSocket-WebSys transport
- [ ] Test WebTransport-WebSys functionality
- [ ] Create WASM-specific examples
- [ ] Add browser-based integration tests
- [ ] Document browser storage limitations
- [ ] Add a WASM-specific configuration guide
- [ ] Benchmark WASM performance vs. native
### Testing WASM Compilation

To test WASM compilation:

```sh
# Install the WASM target
rustup target add wasm32-unknown-unknown

# Attempt to build for WASM
cargo build --target wasm32-unknown-unknown --features wasm

# Run WASM-specific tests
cargo test --target wasm32-unknown-unknown --features wasm
```
### Workaround for Development

Until WASM support is fully implemented, you can:

- Use the `native` feature for desktop/server applications
- Implement your own browser-specific storage using IndexedDB directly
- Use a native backend as a bridge for WASM applications
- Consider using the library in client mode only (no local storage)
## Documentation

- **Getting Started**: Step-by-step tutorial for building your first distributed application
- **Architecture**: Deep dive into netabase's design and internal architecture
- **Macro Guide**: Learn what happens behind the scenes with the `netabase_definition_module` macro
## License

This project is licensed under the GPL 3 License.

## Related Projects

- `netabase_store` - Type-safe storage layer
- `gdelt_fetcher` - GDELT data source integration

## Contributing

Contributions are welcome! Please ensure that:

- Code passes all tests
- New features include tests and documentation
- You follow the existing code style