# mubit-sdk (Rust)
Canonical Rust SDK for MuBit. Durable memory + continual learning for AI agents.
**Full documentation:** https://docs.mubit.ai
## Install
```bash
cargo add mubit-sdk
```
## Two Layers
The SDK offers two integration depths. Start at the top and drop down only when you need more control.
| Layer | Entry point | What it provides | When to use it |
|---|---|---|---|
| **Learn** | `learn` | Explicit `enrich_messages()` / `record()` / `end()` for closed-loop memory with lesson injection | Closed-loop learning with full async control |
| **Flat client** | `Client` | All control-plane ops live directly on `Client` — `client.create_project(...)`, `client.set_prompt(...)`, etc. 17 typed helpers (`remember`, `recall`, `checkpoint`, …) wrap the identically-named raw ops with `session_id` resolution and richer defaults | Fine-grained control over what gets remembered, queried, and when reflection triggers; direct access to Projects/Prompts/Skills/Sessions |
Admin and low-level storage ops still live under `client.auth` and `client.core` for clarity.
## Learn (Closed-Loop Memory)
The `learn` module provides explicit methods for closed-loop memory. Unlike the Python/JS SDKs, the Rust SDK does not monkey-patch your LLM client — you call `enrich_messages()` before your LLM call and `record()` after it.
```rust
use mubit_sdk::learn::{LearnConfig, LearnSession};
use serde_json::json;

#[tokio::main]
async fn main() {
    let session = LearnSession::new(
        LearnConfig::from_env()
            .agent_id("my-agent")
            .inject_lessons(true)
            .auto_reflect(true),
    )
    .await;

    let messages = vec![
        json!({"role": "user", "content": "Fix the auth bug"}),
    ];

    // Enrich with lessons before the LLM call
    let enriched = session.enrich_messages(&messages).await;

    // Make your LLM call with the enriched messages...

    // Record the interaction
    session.record("response text", "gpt-4o", 1500.0).await;

    // End the run (triggers reflection)
    session.end().await;
}
```
Key config: `inject_lessons`, `injection_position`, `max_token_budget`, `auto_reflect`, `cache_ttl_seconds`, `fail_open`.
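As a sketch, the full option set might be wired up as below. Builder methods for the fields not shown in the example above (`injection_position`, `max_token_budget`, `cache_ttl_seconds`, `fail_open`) are assumed to mirror the field names, and every value is illustrative rather than a default:

```rust
use mubit_sdk::learn::LearnConfig;

// Builder calls beyond agent_id/inject_lessons/auto_reflect are assumed
// to exist under these names; values are illustrative, not defaults.
let config = LearnConfig::from_env()
    .agent_id("my-agent")
    .inject_lessons(true)
    .injection_position("system")  // where lessons land in the message list (assumed)
    .max_token_budget(2_000)       // cap on injected-lesson tokens
    .auto_reflect(true)            // reflect automatically on end()
    .cache_ttl_seconds(300)        // how long fetched lessons are cached
    .fail_open(true);              // on MuBit errors, proceed without lessons
```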
## Client Helpers
### Quickstart
```rust
use mubit_sdk::{Client, ClientConfig, RecallOptions, RememberOptions};
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = env::var("MUBIT_API_KEY")?;
    let client = Client::new(
        ClientConfig::default()
            .run_id("sdk-rust-demo")
            .api_key(api_key),
    )?;

    let mut remember = RememberOptions::new(
        "Checkpoint before replaying recovery after token signer rotation.",
    );
    remember.run_id = Some("sdk-rust-demo".to_string());
    remember.intent = Some("lesson".to_string());
    remember.lesson_type = Some("success".to_string());
    client.remember(remember).await?;

    let mut recall = RecallOptions::new("What should I do before replaying recovery?");
    recall.run_id = Some("sdk-rust-demo".to_string());
    recall.entry_types = vec!["lesson".to_string(), "rule".to_string()];
    let answer = client.recall(recall).await?;
    println!("{}", answer["final_answer"]);

    Ok(())
}
```
### Methods by Use Case
| Use case | Methods | What they provide |
|---|---|---|
| Basic memory | `remember`, `recall` | Ingest content with intent classification; semantic query with evidence scoring |
| Prompt context | `get_context` | Token-budgeted, pre-assembled context block for LLM prompt injection (rules → lessons → facts) |
| Exact artifacts | `archive`, `dereference` | Bit-exact storage with stable reference IDs; retrieval without semantic search |
| Run lifecycle | `checkpoint`, `reflect`, `record_outcome` | Durable pre-compaction state; LLM-powered lesson extraction; reinforcement feedback |
| Multi-agent | `register_agent`, `list_agents`, `handoff`, `feedback` | Scoped read/write access per agent; task transfer between agents |
| Diagnostics | `memory_health`, `diagnose`, `surface_strategies`, `forget` | Staleness metrics; contextual error debugging; lesson clustering; deletion |
Helper calls take typed option structs instead of raw `serde_json::Value` payload bags.
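For instance, a minimal `get_context` call might look like the sketch below. `GetContextOptions` and the `"context"` response key are assumptions patterned on `RecallOptions` and the recall response, not confirmed API — see the methods reference for the real shapes:

```rust
// GetContextOptions and the "context" response key are assumptions
// patterned on RecallOptions and the recall response shape.
use mubit_sdk::{Client, ClientConfig, GetContextOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(ClientConfig::default().run_id("sdk-rust-demo"))?;

    let mut ctx = GetContextOptions::new("replaying recovery after signer rotation");
    ctx.run_id = Some("sdk-rust-demo".to_string());

    // Returns a token-budgeted block ordered rules → lessons → facts,
    // ready to inject into the next LLM prompt.
    let block = client.get_context(ctx).await?;
    println!("{}", block["context"]);
    Ok(())
}
```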
### The Learning Loop
```
1. remember() → ingest facts, traces, lessons into MuBit
2. reflect() → LLM extracts lessons from run evidence
(auto-promotes recurring lessons: run → session → global)
3. get_context() → retrieve relevant lessons for the next LLM call
4. record_outcome() → reinforce what worked, adjust confidence scores
```
With `LearnSession`, steps 1 and 3 are handled by `record()` and `enrich_messages()`. With helpers, you orchestrate them yourself.
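Orchestrated by hand, one pass through the loop might look like this sketch. Only `remember` and `RememberOptions` appear in the examples above; `ReflectOptions` and `RecordOutcomeOptions` are hypothetical struct names patterned on them, so consult the methods reference for the real signatures:

```rust
use mubit_sdk::{Client, ClientConfig, RememberOptions};
// ReflectOptions and RecordOutcomeOptions are hypothetical names
// patterned on RememberOptions; check the methods reference.
use mubit_sdk::{RecordOutcomeOptions, ReflectOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(ClientConfig::default().run_id("loop-demo"))?;

    // 1. remember(): ingest run evidence.
    let mut remember = RememberOptions::new("Retry with backoff resolved the flaky deploy.");
    remember.run_id = Some("loop-demo".to_string());
    remember.intent = Some("trace".to_string());
    client.remember(remember).await?;

    // 2. reflect(): extract lessons from the evidence above (shape assumed).
    let mut reflect = ReflectOptions::new();
    reflect.run_id = Some("loop-demo".to_string());
    client.reflect(reflect).await?;

    // 3. get_context() would now feed those lessons into the next LLM call.

    // 4. record_outcome(): reinforce what worked (shape assumed).
    let mut outcome = RecordOutcomeOptions::new("success");
    outcome.run_id = Some("loop-demo".to_string());
    client.record_outcome(outcome).await?;
    Ok(())
}
```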
### Exact References
```rust
use mubit_sdk::{ArchiveOptions, Client, ClientConfig, DereferenceOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(ClientConfig::default().run_id("sdk-rust-demo"))?;

    let mut archive = ArchiveOptions::new(
        "--- a/query.py\n+++ b/query.py\n@@ ...",
        "patch_fragment",
    );
    archive.run_id = Some("sdk-rust-demo".to_string());
    archive.labels = vec!["django".to_string(), "retry".to_string()];
    let archived = client.archive(archive).await?;

    let mut dereference = DereferenceOptions::new(
        archived["reference_id"].as_str().unwrap_or_default(),
    );
    dereference.run_id = Some("sdk-rust-demo".to_string());
    let exact = client.dereference(dereference).await?;
    println!("{}", exact["evidence"]["content"]);
    Ok(())
}
```
## Managed MuBit Resources
For teams and hosted deployments, configure agents declaratively as **Projects** + **Agent Cards** with versioned prompts and skills. Full guide: [Projects, Agents, Skills, Prompts](https://docs.mubit.ai/sdk/projects-and-agents).
Managed resources are accessed directly on `Client` with `serde_json::json!` payloads — typed option structs (like `RememberOptions`) exist only for the 17 high-level helpers.
### Projects
```rust
use serde_json::json;
let resp = client.create_project(json!({
"name": "triage-demo",
"description": "Customer-support triage pilot",
})).await?;
let project_id = resp["project"]["project_id"].as_str().unwrap();
let projects = client.list_projects(json!({})).await?;
```
### Agent Definitions
```rust
client.create_agent_definition(json!({
"project_id": project_id,
"agent_id": "triage",
"role": "customer triage agent",
"system_prompt_content": "You are a concise, empathetic triage agent...",
})).await?;
```
### Prompt Version Lifecycle
Every agent has exactly one `active` prompt version and any number of `candidate` versions awaiting review.
```rust
// Manual edit — activates immediately.
client.set_prompt(json!({
"agent_id": "triage",
"content": "...",
"activate": true,
})).await?;
// Ask the control plane to propose a candidate from recent outcomes.
let resp = client.optimize_prompt(json!({
"agent_id": "triage",
"project_id": project_id,
})).await?;
let candidate_id = resp["candidate"]["version_id"].as_str().unwrap();
// Diff + approve. `active_version_id` is the ID of the currently
// active version, captured earlier in your workflow.
let diff = client.get_prompt_diff(json!({
"agent_id": "triage",
"version_a_id": active_version_id,
"version_b_id": candidate_id,
})).await?;
client.activate_prompt_version(json!({
"agent_id": "triage",
"version_id": candidate_id,
})).await?;
```
See [Prompt Optimization Lifecycle](https://docs.mubit.ai/recipes/prompt-optimization) for the full capture → optimize → review → activate workflow.
### Skills
Same lifecycle — `create_skill`, `optimize_skill`, `activate_skill_version`, `get_skill_diff` all accept `json!{...}` payloads and return the same version shape as prompts.
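A sketch of that lifecycle for skills — the method names come from the list above, but the payload fields are assumed to mirror the prompt payloads and are not confirmed:

```rust
use serde_json::json;

// Payload fields are assumed to mirror the prompt payloads above;
// "name" and the skill content are illustrative.
client.create_skill(json!({
    "project_id": project_id,
    "agent_id": "triage",
    "name": "summarize-ticket",
    "content": "Summarize the ticket in two sentences...",
})).await?;

// Ask the control plane to propose a candidate skill version.
let resp = client.optimize_skill(json!({
    "agent_id": "triage",
    "project_id": project_id,
})).await?;
let candidate_id = resp["candidate"]["version_id"].as_str().unwrap();

// Activate it once reviewed.
client.activate_skill_version(json!({
    "agent_id": "triage",
    "version_id": candidate_id,
})).await?;
```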
## Examples
Runnable public examples:
```bash
cargo run --example 01_remember_recall --features public-examples
cargo run --example 13_learn_module_smoke --features public-examples
```
Internal smoke tests for wire-level verification live under `examples/internal/`.
## Related
- **Full documentation:** https://docs.mubit.ai
- **SDK methods reference:** https://docs.mubit.ai/sdk/sdk-methods
- **API reference (HTTP + gRPC):** https://docs.mubit.ai/api-reference/control-http
- **GitHub:** https://github.com/mubit-ai/ricedb