# Netabase Store
A type-safe, multi-backend key-value storage library for Rust with support for native (Sled, Redb) and WASM (IndexedDB) environments.
> ⚠️ **Early Development**: This crate is under active development and its API will change frequently. It is not recommended for production use until it stabilizes.
## Features

### Core Features

- **Multi-Backend Support**:
  - **Sled**: High-performance embedded database for native platforms
  - **Redb**: Memory-efficient embedded database with ACID guarantees
  - **RedbZeroCopy**: Zero-copy variant for maximum performance (10-54x faster for bulk ops)
  - **IndexedDB**: Browser-based storage for WASM applications
  - **In-Memory**: Fast in-memory storage for testing and caching
- **Unified Configuration API**:
  - `FileConfig`, `MemoryConfig`, and `IndexedDBConfig` with a builder pattern
  - Consistent initialization across all backends
  - Switch backends by changing one line of code
  - Type-safe configuration with sensible defaults
- **Type-Safe Schema Definition**:
  - Derive macros for automatic schema generation
  - Primary and secondary key support
  - Compile-time type checking for all database operations
  - Zero-cost abstractions with trait-based design
- **Cross-Platform**:
  - Unified API across native and WASM targets
  - Feature flags for platform-specific backends
  - Seamless switching between backends with the same configuration
- **High Performance**:
  - Transaction API with a type-state pattern (10-100x faster for bulk ops)
  - Batch operations for bulk inserts/updates
  - Efficient secondary key indexing
  - Minimal overhead (5-10%) over raw backend operations
  - Zero-copy deserialization where possible
- **Secondary Key Indexing**:
  - Fast lookups using secondary keys
  - Multiple secondary keys per model
  - Automatic index management
- **Iteration Support**:
  - Efficient iteration over stored data
  - Type-safe iterators with proper error handling
- **libp2p Integration (Optional)**:
  - Record store implementation for distributed systems
  - Compatible with the libp2p DHT
  - Enabled via the `record-store` feature
- **Testing Utilities**:
  - Comprehensive test suite
  - Benchmarking tools included
  - WASM test support via wasm-pack

### Extensibility

- **Unified Trait-Based API**:
  - `NetabaseTreeSync` for synchronous operations (native)
  - `NetabaseTreeAsync` for asynchronous operations (WASM)
  - Easy to implement custom backends
  - Full compatibility with existing code
- **Batch Processing**:
  - `Batchable` trait for atomic bulk operations
  - Significantly faster than individual operations
  - Backend-specific optimizations
## Installation

Add to your `Cargo.toml` (crate names below are reconstructed from their versions and features; check crates.io for the current releases):

```toml
[dependencies]
netabase_store = "0.0.3"

# Required dependencies for the macros to work
bincode = { version = "2.0", features = ["serde"] }
serde = { version = "1.0", features = ["derive"] }
strum = { version = "0.27.2", features = ["derive"] }
derive_more = { version = "2.0.1", features = ["from", "try_into", "into"] }
anyhow = "1.0" # Optional, for error handling

# For WASM support
[target.'cfg(target_arch = "wasm32")'.dependencies]
netabase_store = { version = "0.0.2", default-features = false, features = ["wasm"] }
```
## Feature Flags

- `native` (default): Enable the Sled and Redb backends
- `sled`: Enable the Sled backend only
- `redb`: Enable the Redb backend only
- `redb-zerocopy`: Enable the zero-copy Redb backend (high-performance variant)
- `wasm`: Enable the IndexedDB backend for WASM
- `libp2p`: Enable libp2p integration
- `record-store`: Enable the RecordStore trait (requires `libp2p`)
## Quick Start

### 1. Define Your Schema

```rust
use netabase_store::netabase_definition_module;
use netabase_store::NetabaseModelTrait;

// Define your models inside a definition module; the derive macros generate
// the schema, key enums, and serialization glue. The shape below is
// illustrative - see the crate documentation for the exact attribute syntax.
#[netabase_definition_module]
mod schema {
    pub struct User {
        pub id: u64,       // primary key
        pub email: String, // secondary key
        pub name: String,
    }
}
```

### 2. Use with NetabaseStore (Recommended)

The unified `NetabaseStore` provides a consistent API across all backends:

```rust
use netabase_store::NetabaseStore;

// Tree-opening method name is illustrative; see the crate docs.
let store = NetabaseStore::temp()?;
let user_tree = store.open_tree::<User>();
user_tree.put(user)?;
```

### 3. Direct Backend Usage (Advanced)

You can also use backends directly for backend-specific features:

```rust
use netabase_store::{SledStore, RedbStore};

// Direct Sled usage
let sled_store = SledStore::temp()?;
let user_tree = sled_store.open_tree::<User>(); // method name illustrative

// Direct Redb usage
let redb_store = RedbStore::new(config)?;
let user_tree = redb_store.open_tree::<User>();

// Both have identical APIs via the NetabaseTreeSync trait
```

### 4. Use with IndexedDB (WASM)

```rust
use netabase_store::{IndexedDBStore, NetabaseTreeAsync};

async fn run() -> Result<(), netabase_store::NetabaseError> {
    let store = IndexedDBStore::new(config).await?; // config: IndexedDBConfig
    let user_tree = store.open_tree::<User>();      // method name illustrative
    user_tree.put(user).await?;
    Ok(())
}
```
## Advanced Usage

### Configuration API

The unified configuration system provides consistent backend initialization across all database types:

#### FileConfig - For File-Based Backends

```rust
use netabase_store::{FileConfig, SledStore};

// Paths and values below are illustrative.

// Method 1: Builder pattern (recommended)
let config = FileConfig::builder()
    .path("my_database")
    .cache_size_mb(256)
    .truncate(false)
    .build();
let store = SledStore::new(config)?;

// Method 2: Simple constructor
let config = FileConfig::new("my_database");
let store = SledStore::open(config)?;

// Method 3: Temporary database
let store = SledStore::temp()?;
```

#### Switching Backends with the Same Config

The power of the configuration API is that you can switch backends without changing your code:

```rust
use netabase_store::{FileConfig, SledStore, RedbStore, RedbZeroCopyStore};

let config = FileConfig::builder()
    .path("my_database")
    .cache_size_mb(256)
    .build();

// Try different backends - same config!
let store = SledStore::new(config.clone())?;
let store = RedbStore::new(config.clone())?;
let store = RedbZeroCopyStore::new(config)?;

// All have the same API from this point on!
let user_tree = store.open_tree::<User>(); // method name illustrative
```

#### Configuration Options Reference

`FileConfig` (for Sled, Redb, RedbZeroCopy):

- `path: PathBuf` - Database file/directory path
- `cache_size_mb: usize` - Cache size in megabytes (default: 256)
- `create_if_missing: bool` - Create the database if it doesn't exist (default: true)
- `truncate: bool` - Delete existing data on open (default: false)
- `read_only: bool` - Open read-only (default: false)
- `use_fsync: bool` - Fsync for durability (default: true)

`MemoryConfig` (for the in-memory backend):

- `capacity: Option<usize>` - Optional capacity hint

`IndexedDBConfig` (for WASM):

- `database_name: String` - IndexedDB database name
- `version: u32` - Schema version (default: 1)
### Batch Operations & Bulk Methods

For high-performance bulk operations, use the convenient bulk methods:

```rust
use netabase_store::NetabaseStore;

let store = NetabaseStore::temp()?;
let user_tree = store.open_tree::<User>(); // method name illustrative

// Bulk insert - 8-9x faster than a loop!
let users: Vec<User> = (0..1000)
    .map(make_user) // construct your models here (helper is illustrative)
    .collect();
user_tree.put_many(users)?; // Single transaction

// Bulk read
let keys: Vec<UserKeys> = (0..1000).map(make_key).collect();
let users: Vec<Option<User>> = user_tree.get_many(keys)?;

// Bulk secondary key queries
let email_keys = vec![email_key_a, email_key_b];
let results: Vec<Vec<User>> = user_tree.get_many_by_secondary_keys(email_keys)?;
```

**Bulk Methods:**

- `put_many(Vec<M>)` - Insert multiple models in one transaction
- `get_many(Vec<M::Keys>)` - Read multiple models in one transaction
- `get_many_by_secondary_keys(Vec<SecondaryKey>)` - Query multiple secondary keys in one transaction

Or use the batch API for more control:

```rust
use netabase_store::Batchable;

// Create a batch
let mut batch = user_tree.create_batch()?;

// Add many operations (batch method name illustrative)
for i in 0..1000 {
    batch.put(make_user(i))?;
}

// Commit atomically - all or nothing
batch.commit()?;
```

Bulk operations are:

- **Faster**: 8-10x faster than individual operations
- **Atomic**: All succeed or all fail
- **Efficient**: A single transaction reduces overhead
### Transactions (New!)

For maximum performance and atomicity, use the transaction API to reuse a single transaction across multiple operations:

```rust
use netabase_store::NetabaseStore;

let store = NetabaseStore::sled(config)?;

// Read-only transaction - multiple concurrent reads allowed
let txn = store.read();
let user_tree = txn.open_tree::<User>(); // method name illustrative
let user = user_tree.get(key)?;
// The transaction auto-closes on drop

// Read-write transaction - exclusive access, atomic commit
let mut txn = store.write()?;
let mut user_tree = txn.open_tree::<User>();

// All operations share the same transaction
for i in 0..1000 {
    user_tree.put(make_user(i))?;
}

// Bulk helpers also work within transactions
user_tree.put_many(more_users)?;

// Commit all changes atomically
txn.commit()?;
// Or drop without committing to roll back
```

**Transaction Benefits:**

- **10-100x Faster**: A single transaction for many operations eliminates per-operation overhead
- **Type-Safe**: Compile-time enforcement of read-only vs read-write access
- **Zero-Cost**: Phantom types compile away completely
- **ACID**: Full atomicity for write transactions (Redb)

**Compile-Time Safety:**

```rust
let txn = store.read(); // ReadOnly transaction
let tree = txn.open_tree::<User>();
tree.put(user)?; // ❌ Compile error: put() is not available on ReadOnly!
```
### Secondary Keys

Secondary keys enable efficient lookups on non-primary fields:

```rust
// Query by a single secondary key (key values are illustrative)
let tech_articles = article_tree.get_by_secondary_key(tech_key)?;

// Bulk query multiple secondary keys (2-3x faster!)
let keys = vec![tech_key, science_key];
let results: Vec<Vec<Article>> = article_tree.get_many_by_secondary_keys(keys)?;
// results[0] = tech articles, results[1] = science articles
```
### Multiple Models in One Store

```rust
use netabase_store::NetabaseStore;

let store = NetabaseStore::sled(config)?;

// Different trees for different models (method name illustrative)
let user_tree = store.open_tree::<User>();
let post_tree = store.open_tree::<Post>();

// Each tree is independent but shares the same underlying database
user_tree.put(user)?;
post_tree.put(post)?;
```

### Temporary Store for Testing

```rust
use netabase_store::NetabaseStore;

// Perfect for unit tests - no I/O, no cleanup needed
let store = NetabaseStore::temp()?;
let user_tree = store.open_tree::<User>();
user_tree.put(user)?;
```
### Custom Backend Implementation

Netabase Store's trait-based design makes it easy to implement custom storage backends. Here's what you need to know:

#### Required Traits

To create a custom backend, implement one of these traits depending on your backend's characteristics:

**1. `NetabaseTreeSync` - For Synchronous Backends**

Use this for native, blocking I/O backends (like SQLite, file systems, etc.):

```rust
use netabase_store::{NetabaseTreeSync, NetabaseModelTrait, NetabaseDefinitionTrait, NetabaseError};

// impl NetabaseTreeSync for your tree type; see the crate docs
// for the exact method signatures.
```

**2. `NetabaseTreeAsync` - For Asynchronous Backends**

Use this for async backends (remote databases, web APIs, etc.):

```rust
use netabase_store::{NetabaseTreeAsync, NetabaseError};
use std::future::Future;

// impl NetabaseTreeAsync for your tree type.
```

**3. `OpenTree` - For the Store-Level API**

Implement this on your store type to allow opening trees:

```rust
use netabase_store::OpenTree;

// impl OpenTree for your store type.
```

**4. `Batchable` (Optional) - For Batch Operations**

If your backend supports atomic batching:

```rust
use netabase_store::Batchable;

// impl Batchable for your tree type.
```

#### Implementation Tips

- **Serialization**: Use `bincode` for efficient serialization:

  ```rust
  let bytes = bincode::encode_to_vec(&model, bincode::config::standard())?;
  let (model, _len) = bincode::decode_from_slice(&bytes, bincode::config::standard())?;
  ```

- **Secondary Key Indexing**: Store composite keys:

  ```rust
  // Create a composite key: secondary_key_bytes + primary_key_bytes
  let mut composite = secondary_key_bytes;
  composite.extend_from_slice(&primary_key_bytes);
  ```

- **Error Handling**: Convert your backend errors to `NetabaseError`:

  ```rust
  use netabase_store::NetabaseError;

  // Conversion shown is illustrative; use your backend's error mapping.
  my_backend_op().map_err(NetabaseError::from)?;
  ```

- **Iterator Support**: Implement iterators for efficient traversal.

#### Complete Example

See the existing backends for reference:

- `sled_store.rs`: A sync backend with batch support
- `redb_store.rs`: A transactional backend
- `indexeddb_store.rs`: An async WASM backend
- `memory_store.rs`: A simple in-memory implementation
All existing code will work with your custom backend once you implement the traits!
## Performance

Netabase Store is designed for high performance while maintaining type safety. The library provides multiple APIs optimized for different use cases, with comprehensive benchmarking and profiling support.

### API Options for Performance

The library offers three APIs with different performance characteristics:

- **Standard Wrapper API**: Simple, ergonomic API with an auto-transaction per operation
- **Bulk Methods**: `put_many()`, `get_many()`, `get_many_by_secondary_keys()` - a single transaction for multiple items
- **ZeroCopy API**: Explicit transaction management for maximum control

### Benchmark Results

Comprehensive benchmarks compare all implementations across multiple dataset sizes (10, 100, 500, 1000, and 5000 items):

#### Insert Performance (1000 items)

| Implementation | Time | vs Raw | Notes |
|---|---|---|---|
| Raw Redb (baseline) | 1.42 ms | 0% | Single transaction, manual index management |
| Wrapper Redb (bulk) | 3.10 ms | +118% | put_many() - single transaction |
| Wrapper Redb (loop) | 27.3 ms | +1,822% | Individual put() calls - creates N transactions |
| ZeroCopy (bulk) | 3.51 ms | +147% | put_many() with explicit transaction |
| ZeroCopy (loop) | 4.34 ms | +206% | Loop with single explicit transaction |

**Key Insights:**

- Bulk methods provide an 8-9x speedup over loop-based insertion (27.3 ms → 3.10 ms)
- The bulk wrapper API approaches raw performance (118% overhead vs 1,822% for loops)
- Transaction overhead dominates when creating N transactions instead of one

#### Read Performance (1000 items)

| Implementation | Time | vs Raw | Notes |
|---|---|---|---|
| Raw Redb (baseline) | 164 µs | 0% | Single transaction |
| Wrapper Redb (bulk) | 382 µs | +133% | get_many() - single transaction |
| Wrapper Redb (loop) | 895 µs | +446% | Individual get() calls - creates N transactions |
| ZeroCopy (single txn) | 692 µs | +322% | Explicit read transaction |

**Key Insights:**

- Bulk `get_many()` provides a 2.3x speedup over individual gets (895 µs → 382 µs)
- Transaction reuse is critical for read performance
- Even bulk methods carry some overhead from transaction setup and deserialization

#### Secondary Key Queries (10 queries)

| Implementation | Time | vs Raw | Notes |
|---|---|---|---|
| Raw Redb (baseline) | 291 µs | 0% | 10 transactions, manual index traversal |
| Wrapper Redb (bulk) | 470 µs | +61% | get_many_by_secondary_keys() - single transaction |
| Wrapper Redb (loop) | 1.02 ms | +248% | 10 separate get_by_secondary_key() calls |
| ZeroCopy (single txn) | 5.41 µs | -98% | Single transaction, optimized index access |

**Key Insights:**

- The ZeroCopy API is 54x faster than raw redb for secondary queries (291 µs → 5.4 µs)
- The bulk secondary-query method provides a 2.2x speedup over loops
- A single transaction plus efficient index access yields dramatic performance gains
### Performance Optimization Guide

#### 1. Use Bulk Methods for the Standard API (8-9x faster)

```rust
// ❌ Slow: creates 1000 transactions
for user in users {
    tree.put(user)?;
}

// ✅ Fast: single transaction
tree.put_many(users)?; // 8-9x faster!
```

**Available Bulk Methods:**

- `put_many(Vec<M>)` - Bulk insert
- `get_many(Vec<M::Keys>)` - Bulk read
- `get_many_by_secondary_keys(Vec<SecondaryKey>)` - Bulk secondary queries

#### 2. Use Explicit Transactions for Maximum Control

```rust
// For write-heavy workloads
let mut txn = store.write()?;
let mut tree = txn.open_tree::<User>(); // method name illustrative
for user in users {
    tree.put(user)?;
}
txn.commit()?; // Single atomic commit
```
#### 3. Choose the Right API for Your Use Case
| Use Case | Recommended API | Reason |
|---|---|---|
| Simple CRUD, few operations | Standard wrapper | Simplest API, auto-commit |
| Bulk inserts/reads (100+ items) | Bulk methods | 8-9x faster than loops |
| Complex transactions | Explicit transactions | Full control, atomic commits |
| Read-heavy queries | ZeroCopy API | Up to 54x faster for secondary queries |
### Profiling Support

The benchmarks include full profiling support via pprof and flamegraphs:

```bash
# Run benchmarks with profiling
cargo bench

# Flamegraphs are written as SVG files under target/criterion/
```
Flamegraphs show:
- Function call stacks and time distribution
- Serialization overhead (bincode operations)
- Transaction costs (redb internal operations)
- Memory allocation patterns
- Lock contention (if any)
Running Benchmarks
# Cross-store comparison (all backends, multiple sizes)
# Generate visualizations
# View results
### Backend Comparison

#### Redb

- **Best for**: Write-heavy workloads, ACID guarantees
- **Wrapper overhead**: 118-133% for bulk operations
- **Strengths**: Excellent write performance, full ACID compliance, efficient storage
- **Use when**: Data integrity is critical and write performance matters

#### Sled

- **Best for**: Read-heavy workloads
- **Wrapper overhead**: ~20% for read operations
- **Strengths**: Very low read overhead, battle-tested
- **Use when**: Read performance is critical and the workload is read-heavy
### Technical Notes

#### Why Transaction Overhead Matters

Creating a new transaction has fixed costs:

- Lock acquisition
- MVCC snapshot creation
- Internal state setup

When you call put() in a loop, you pay these costs N times. With put_many() or an explicit transaction, you pay them once.
#### Type Safety vs Performance

The wrapper APIs prioritize type safety and ergonomics. For applications where the overhead is significant:

- Use bulk methods first - this often solves the problem
- Use explicit transactions - full control with the same safety
- Profile your workload - measure before optimizing
- Consider the ZeroCopy API - for specialized high-performance scenarios

#### Serialization Overhead

The read-path overhead in Redb comes from type-system limitations with Generic Associated Types (GATs); we prioritize safety over unsafe transmutes. For applications where this matters:

- Use bulk methods to amortize the overhead
- Use explicit transactions for better performance
- Consider the Sled backend for read-heavy workloads
See benchmark results and visualizations in docs/benchmarks/ for detailed performance analysis.
## Testing

```bash
# Run all native tests
cargo test

# Run WASM tests (requires wasm-pack and Firefox)
wasm-pack test --headless --firefox
```
## Architecture

See ARCHITECTURE.md for a deep dive into the library's design.

### High-Level Overview

```text
┌─────────────────────────────────────────────────┐
│              Your Application Code              │
│          (Type-safe models with macros)         │
└────────────────────────┬────────────────────────┘
                         │
┌────────────────────────▼────────────────────────┐
│            NetabaseStore<D, Backend>            │
│        (Unified API layer - Recommended)        │
└────────────────────────┬────────────────────────┘
            ┌────────────┼─────────────┐
            ▼            ▼             ▼
    ┌────────────┐ ┌────────────┐ ┌────────────────┐
    │ SledStore  │ │ RedbStore  │ │ IndexedDBStore │
    │    <D>     │ │    <D>     │ │      <D>       │
    └─────┬──────┘ └─────┬──────┘ └───────┬────────┘
          │              │                │
┌─────────▼──────────────▼────────────────▼───────┐
│                   Trait Layer                   │
│      (NetabaseTreeSync, NetabaseTreeAsync)      │
│         (OpenTree, Batchable, StoreOps)         │
└─────────┬──────────────┬────────────────┬───────┘
          ▼              ▼                ▼
      ┌──────┐       ┌──────┐      ┌───────────┐
      │ Sled │       │ Redb │      │ IndexedDB │
      └──────┘       └──────┘      └───────────┘
       Native         Native           WASM
```
## Roadmap

### For 1.0.0

- Transaction support across multiple operations (COMPLETED)
- Zero-copy reads for the redb backend via the `redb-zerocopy` feature (Phase 1 complete)
- Allow modules to define more than one definition for flexible organization
- Migration utilities for schema changes
- Query builder for complex queries
- Range queries on ordered keys
- Compression support
- Encryption at rest
- Improved documentation and examples

### Future Plans

- Distributed systems support with automatic sync
- CRDT-based conflict resolution
- WebRTC backend for peer-to-peer storage
- SQL-like query language
- GraphQL integration
## Examples

See the `test_netabase_store_usage` crate for a complete working example.

Additional examples in the repository:

- `examples/basic_store.rs` - Basic CRUD operations
- `examples/unified_api.rs` - Working with multiple backends
- `tests/wasm_tests.rs` - WASM usage patterns
## Why Netabase Store?

### Problem

Working with different database backends in Rust typically means:

- Learning a different API for each backend
- No type safety for keys and values
- Manual serialization/deserialization
- Difficulty switching backends
- Complex secondary indexing

### Solution

Netabase Store provides:

- ✅ A single unified API across all backends
- ✅ Compile-time type safety for everything
- ✅ Automatic serialization with bincode
- ✅ Seamless backend switching
- ✅ Automatic secondary key management
- ✅ Cross-platform support (native + WASM)
## Contributing

Contributions are welcome! Please:

- Open an issue to discuss major changes
- Follow the existing code style
- Add tests for new features
- Update documentation

## License

This project is licensed under the GPL-3.0-only License - see the LICENSE file for details.

## Links

## Acknowledgments

Built with: