§Integration Modes: Embedded vs Standalone
Decision time: 30 seconds
Choose your integration mode based on your primary programming language and latency requirements.
§30-Second Decision Guide
┌─────────────────────────────────────┐
│ Is your application written in Rust?│
└──────────────┬──────────────────────┘
               │
         ┌─────┴─────┐
         │    YES    │──────────────────────────────┐
         └─────┬─────┘                              │
               │                                    │
┌──────────────┴────────────────────┐               │
│ Do you need <0.1ms KV operations? │               │
└──────────────┬────────────────────┘               │
               │                                    │
         ┌─────┴─────┐                              │
         │    YES    │                              │
         └─────┬─────┘                              │
               │                                    │
               ▼                                    ▼
     ┌──────────────────┐                 ┌─────────────────┐
     │  EMBEDDED MODE   │                 │ STANDALONE MODE │
     │                  │                 │                 │
     │ • In-process     │                 │ • Separate      │
     │ • Zero overhead  │                 │   server        │
     │ • <0.1ms latency │                 │ • gRPC API      │
     │ • Rust-only      │                 │ • 1-2ms latency │
     └──────────────────┘                 │ • Any language  │
                                          └─────────────────┘
§Embedded Mode
§What is it?
d-engine runs inside your Rust application process:
- Direct memory access (no serialization)
- Function call latency (<0.1ms)
- Single binary deployment
- Rust API only
§When to use?
✅ Use Embedded Mode if:
- Your application is written in Rust
- You need ultra-low latency (<0.1ms for KV operations)
- You want zero serialization overhead
- You prefer single-binary deployment
- You’re building latency-sensitive systems (trading, gaming, real-time analytics)
§Quick Start
use std::time::Duration;

use d_engine::EmbeddedEngine;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Start with config file
    let engine = EmbeddedEngine::start_with("d-engine.toml").await?;

    // Wait for leader election
    engine.wait_ready(Duration::from_secs(5)).await?;

    // Get zero-overhead KV client
    let client = engine.client();
    client.put(b"key".to_vec(), b"value".to_vec()).await?;

    engine.stop().await?;
    Ok(())
}
§Performance Characteristics
| Operation | Latency |
|---|---|
| put() (write) | <0.1ms (single node), 1-5ms (3-node) |
| get() (read) | <0.1ms (local) |
| Leader election | <100ms (single node), 1-2s (3-node) |
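The read path in the table can be exercised the same way as the write path. A minimal sketch, assuming the embedded client exposes a get counterpart to put that takes the key bytes (check the KV client docs for the exact signature and return type):
use std::time::Duration;

use d_engine::EmbeddedEngine;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = EmbeddedEngine::start_with("d-engine.toml").await?;
    engine.wait_ready(Duration::from_secs(5)).await?;

    let client = engine.client();
    // Write, then read back through the same in-process client (no serialization).
    client.put(b"mode".to_vec(), b"embedded".to_vec()).await?;
    // Assumption: a `get` counterpart to `put`; see the KV client docs for the
    // exact signature and return type.
    let value = client.get(b"mode".to_vec()).await?;
    println!("read back: {value:?}");

    engine.stop().await?;
    Ok(())
}
Because both calls stay in-process, neither crosses a serialization or network boundary.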
§Standalone Mode
§What is it?
d-engine runs as a separate server process:
- gRPC-based communication
- Language-agnostic client libraries
- Network latency (1-2ms typical)
- Polyglot ecosystem support
§When to use?
✅ Use Standalone Mode if:
- Your application is not written in Rust (Go, Python, Java, etc.)
- You need language-agnostic deployment
- You prefer microservices architecture
- 1-2ms latency is acceptable
- You want to share one d-engine cluster across multiple applications
§Quick Start
1. Start d-engine server:
# Start single-node server
d-engine-server --config d-engine.toml
2. Connect from any language:
Rust Client:
use d_engine::{ClientBuilder, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to the standalone server over gRPC
    let client = ClientBuilder::new(vec!["http://127.0.0.1:9083".to_string()])
        .build()
        .await?;

    // Issue KV operations through the gRPC client
    client.kv().put(b"key".to_vec(), b"value".to_vec()).await?;
    Ok(())
}
Go Client (coming soon):
// Future: Go client library
client := dengine.NewClient("127.0.0.1:9083")
client.Put([]byte("key"), []byte("value"))
Python Client (coming soon):
# Future: Python client library
client = dengine.Client("127.0.0.1:9083")
client.put(b"key", b"value")
§Performance Characteristics
| Operation | Latency |
|---|---|
| put() (write) | 1-2ms (local), 2-10ms (remote) |
| get() (read) | 1-2ms (gRPC) |
| Leader election | 1-2s |
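Since ClientBuilder::new takes a list of endpoint strings, a client for a multi-node cluster can simply list every server. A hedged sketch; the three ports and the assumption that the client discovers the current leader from that list are illustrative, not confirmed API behavior:
use d_engine::ClientBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumption: listing every node lets the client locate the current leader;
    // the ports are placeholders for your own cluster layout.
    let client = ClientBuilder::new(vec![
        "http://127.0.0.1:9081".to_string(),
        "http://127.0.0.1:9082".to_string(),
        "http://127.0.0.1:9083".to_string(),
    ])
    .build()
    .await?;

    client.kv().put(b"key".to_vec(), b"value".to_vec()).await?;
    Ok(())
}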
§Comparison Table
| Feature | Embedded Mode | Standalone Mode |
|---|---|---|
| Language | Rust only | Any (via gRPC) |
| Deployment | In-process | Separate server |
| Latency (KV ops) | <0.1ms | 1-2ms |
| Communication | Direct memory | gRPC network |
| Serialization | Zero | Protocol Buffers |
| Use Case | Ultra-low latency | Polyglot microservices |
| Single Binary | ✅ Yes | ❌ No (server + app) |
| Cross-Language | ❌ No | ✅ Yes |
| Overhead | Minimal | Network + serialization |
| Complexity | Simple (1 process) | Moderate (2+ processes) |
§Architecture Differences
§Embedded Mode Architecture
┌─────────────────────────────────────────┐
│ Your Rust Application Process           │
│                                         │
│ ┌──────────────┐    ┌──────────────┐    │
│ │ Business     │◄───┤ LocalKvClient│    │
│ │ Logic        │    │ (memory)     │    │
│ └──────────────┘    └───────┬──────┘    │
│                             │           │
│                    ┌────────▼───────┐   │
│                    │  Raft Engine   │   │
│                    │ (d-engine-core)│   │
│                    └────────────────┘   │
└─────────────────────────────────────────┘
§Standalone Mode Architecture
┌────────────────────┐          ┌────────────────────┐
│ Your Application   │          │ d-engine Server    │
│ (Any Language)     │          │                    │
│                    │          │  ┌──────────────┐  │
│  ┌──────────────┐  │   gRPC   │  │ Raft Engine  │  │
│  │ Business     │◄─┼──────────┼─►│              │  │
│  │ Logic        │  │          │  │ (d-engine-   │  │
│  └──────────────┘  │          │  │  core)       │  │
│                    │          │  └──────────────┘  │
└────────────────────┘          └────────────────────┘
(Go, Python, Java, etc.)          (Separate process)
§FAQ
§Q: When should I consider switching modes?
Switch to Standalone if:
- You need to support non-Rust applications
- Multiple services need to share one cluster
Switch to Embedded if:
- You’re building a Rust-only application
- You need <0.1ms KV latency
Note: Migration only requires changing client initialization; business logic stays the same (see the sketch below).
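One way to keep that migration to a one-line change is to route all KV calls through a thin trait your application owns, so business logic never names a concrete client. A minimal sketch, assuming only the put signatures shown in the Quick Start examples above; KvStore and record_event are hypothetical names, not part of d-engine:
// Hypothetical abstraction owned by your application; not part of d-engine.
trait KvStore {
    // Error type is an assumption; native async-fn-in-trait requires Rust 1.75+.
    async fn put(&self, key: Vec<u8>, value: Vec<u8>) -> Result<(), Box<dyn std::error::Error>>;
}

// Business logic depends only on the trait, never on a concrete client.
async fn record_event<S: KvStore>(store: &S, id: &str) -> Result<(), Box<dyn std::error::Error>> {
    store.put(id.as_bytes().to_vec(), b"seen".to_vec()).await
}
Implement KvStore once for the embedded client (engine.client()) and once for the standalone client (client.kv()); switching modes then only changes which implementation is constructed at startup.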
§Q: Which mode is more production-ready?
A: Both are production-ready. Choose based on your language and latency requirements.
§Q: Does Embedded Mode support clustering?
A: Yes! Embedded nodes communicate over the network (gRPC) for the Raft protocol, while your application keeps local, in-memory access.
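In practice, the only visible difference from the single-node Quick Start is the config file (which would list the peer nodes) and a longer election wait. A hedged sketch; the config file name is a placeholder and its peer-list schema is not shown here:
use std::time::Duration;

use d_engine::EmbeddedEngine;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "d-engine-node1.toml" is a placeholder for a config that lists all three peers.
    let engine = EmbeddedEngine::start_with("d-engine-node1.toml").await?;

    // 3-node elections take 1-2s per the tables above, so wait longer than
    // in the single-node example before serving traffic.
    engine.wait_ready(Duration::from_secs(10)).await?;

    let client = engine.client();
    client.put(b"key".to_vec(), b"value".to_vec()).await?;

    engine.stop().await?;
    Ok(())
}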
§Q: Can I switch modes later?
A: Yes. Your business logic code remains the same, only connection setup changes.
§Next Steps
§For Embedded Mode Users
§For Standalone Mode Users
Created: 2025-12-25
Last Updated: 2025-12-25