# dakera-rs

Rust client SDK for Dakera — high-performance vector database for AI agent memory.

## Features

- HTTP and gRPC transports — choose the protocol that fits your workload
- Async/await — built on Tokio for non-blocking I/O
- Type-safe API — fully typed request/response models with serde
- Memory management — store, recall, and forget agent memories
- Knowledge graphs — build and query knowledge graphs from memories
- Agent management — list agents, sessions, and statistics
- Vector operations — upsert, query, delete, batch query, hybrid search
- Full-text search — BM25-based search with hybrid vector+text support
- Admin & analytics — cluster management, cache control, backups, quotas
- gRPC connection pooling — HTTP/2 multiplexing with round-robin load balancing
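
Hybrid search blends vector similarity with BM25 text relevance. As an illustration of the idea only (the weighting scheme below is an assumption, not Dakera's actual fusion code), a linear score blend looks like:

```rust
/// Blend a vector-similarity score with a BM25 text score.
/// `alpha` = 1.0 means pure vector search, 0.0 means pure text search.
fn hybrid_score(vector_score: f64, bm25_score: f64, alpha: f64) -> f64 {
    alpha * vector_score + (1.0 - alpha) * bm25_score
}

fn main() {
    // A document that matches the query text strongly but the vector weakly
    // still ranks well under an even blend.
    let blended = hybrid_score(0.3, 0.9, 0.5);
    assert!((blended - 0.6).abs() < 1e-9);
}
```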

## Installation

```sh
cargo add dakera-client
```

Or add to your `Cargo.toml`:

```toml
[dependencies]
dakera-client = "0.3"
```

## Quick Start

```rust
use dakera_client::{DakeraClient, UpsertRequest, QueryRequest, Vector};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DakeraClient::new("http://localhost:3000")?;

    // Check server health
    let health = client.health().await?;
    println!("Server healthy: {}", health.healthy);

    // Upsert a vector
    let request = UpsertRequest {
        vectors: vec![Vector {
            id: "vec1".to_string(),
            values: vec![0.1, 0.2, 0.3, 0.4],
            metadata: None,
        }],
    };
    client.upsert("my-namespace", request).await?;

    // Query for the nearest neighbors
    let query = QueryRequest {
        vector: vec![0.1, 0.2, 0.3, 0.4],
        top_k: 10,
        filter: None,
        include_metadata: true,
    };
    let results = client.query("my-namespace", query).await?;
    for match_ in results.matches {
        println!("ID: {}, Score: {}", match_.id, match_.score);
    }

    Ok(())
}
```
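
Query results come back ranked by similarity score, best match first. As a standalone illustration of what that ranking means (not Dakera's server-side code, which lives in the database itself), cosine-similarity top-k selection can be sketched as:

```rust
/// Cosine similarity between two equal-length vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Return the ids of the `k` most similar stored vectors, best first.
fn top_k(query: &[f32], store: &[(&str, Vec<f32>)], k: usize) -> Vec<String> {
    let mut scored: Vec<(String, f32)> = store
        .iter()
        .map(|(id, v)| (id.to_string(), cosine(query, v)))
        .collect();
    // Sort descending by score.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(id, _)| id).collect()
}

fn main() {
    let store = vec![
        ("vec1", vec![0.1, 0.2, 0.3, 0.4]),
        ("vec2", vec![-0.4, 0.3, -0.2, 0.1]),
    ];
    let hits = top_k(&[0.1, 0.2, 0.3, 0.4], &store, 1);
    assert_eq!(hits, vec!["vec1"]);
}
```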

## Agent Memory

```rust
use dakera_client::{DakeraClient, memory::{StoreMemoryRequest, RecallRequest}};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DakeraClient::new("http://localhost:3000")?;

    // Store a memory for an agent
    let request = StoreMemoryRequest::new("agent-1", "The user prefers dark mode")
        .with_importance(0.8)
        .with_tags(vec!["preferences".to_string()]);
    let stored = client.store_memory(request).await?;
    println!("Stored: {}", stored.memory_id);

    // Recall memories relevant to a query
    let request = RecallRequest::new("agent-1", "user preferences")
        .with_top_k(5);
    let recalled = client.recall(request).await?;
    for memory in recalled.memories {
        println!("{}: {} (score: {})", memory.id, memory.content, memory.score);
    }

    Ok(())
}
```

## gRPC Client

Enable the `grpc` feature for high-performance gRPC communication with connection pooling:

```toml
[dependencies]
dakera-client = { version = "0.3", features = ["grpc"] }
```

```rust
use dakera_client::grpc::{GrpcClient, GrpcClientConfig, GrpcConnectionPool};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A single connection
    let config = GrpcClientConfig::default()
        .with_endpoint("http://localhost:50051")
        .with_concurrency_limit(100);
    let client = GrpcClient::connect(config).await?;
    let health = client.health().await?;
    println!("Healthy: {}", health.healthy);

    // Or a pool of 4 connections; `get()` picks one round-robin
    let pool = GrpcConnectionPool::new(GrpcClientConfig::default(), 4).await?;
    let pooled = pool.get();
    pooled.health().await?;

    Ok(())
}
```
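
The pool hands out connections round-robin so requests spread evenly across the underlying HTTP/2 channels. A minimal sketch of that pattern using only the standard library (illustrative; `GrpcConnectionPool`'s real implementation lives in the crate):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Round-robin selection over a fixed set of pooled items
/// (e.g. HTTP/2 channels).
struct Pool<T> {
    items: Vec<T>,
    next: AtomicUsize,
}

impl<T> Pool<T> {
    fn new(items: Vec<T>) -> Self {
        Pool { items, next: AtomicUsize::new(0) }
    }

    /// Each call hands out the next item in rotation; `fetch_add`
    /// keeps the counter correct even under concurrent callers.
    fn get(&self) -> &T {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.items.len();
        &self.items[i]
    }
}

fn main() {
    let pool = Pool::new(vec!["conn-0", "conn-1", "conn-2", "conn-3"]);
    let picks: Vec<&str> = (0..5).map(|_| *pool.get()).collect();
    // The fifth pick wraps back around to the first connection.
    assert_eq!(picks, ["conn-0", "conn-1", "conn-2", "conn-3", "conn-0"]);
}
```

Multiplexing many in-flight requests over each channel is what HTTP/2 provides; the pool's job is only to spread load across channels.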

## Feature Flags

| Feature | Default | Description |
|---------|---------|-------------|
| `http-client` | Yes | HTTP client via reqwest with rustls |
| `grpc` | No | gRPC client with connection pooling via tonic |
| `full` | No | Enables both `http-client` and `grpc` |
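
To enable both transports at once, use the `full` feature:

```toml
[dependencies]
dakera-client = { version = "0.3", features = ["full"] }
```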

## License

MIT License - see LICENSE for details.