# mubit-sdk (Rust)
Canonical Rust SDK for MuBit. Durable memory + continual learning for AI agents.
Full documentation: https://docs.mubit.ai
## Install
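A minimal manifest entry to get started. This is a sketch: the crate name is assumed from the project title, the version is a placeholder, and Tokio is assumed as the async runtime; verify all three on crates.io.

```toml
# Cargo.toml sketch. Crate name and versions are assumptions; verify on crates.io.
[dependencies]
mubit-sdk = "0.1"
tokio = { version = "1", features = ["full"] }  # the SDK's APIs are async
serde_json = "1"                                # payloads use serde_json::json!
```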
## Two Layers
The SDK offers two integration depths. Start at the top and drop down only when you need more control.
| Layer | Module | What it does | When to use |
|---|---|---|---|
| Learn | `learn` | Explicit `enrich_messages()` / `record()` / `end()` for closed-loop memory with lesson injection | Closed-loop learning with full async control |
| Flat client | `Client` | All control-plane ops live directly on `Client` (`client.create_project(...)`, `client.set_prompt(...)`, etc.); 17 typed helpers (`remember`, `recall`, `checkpoint`, …) wrap the identically named raw ops with `session_id` resolution and richer defaults | Fine-grained control over what gets remembered, queried, and when reflection triggers; direct access to Projects/Prompts/Skills/Sessions |
Admin and low-level storage ops still live under `client.auth` and `client.core` for clarity.
## Learn (Closed-Loop Memory)
The `learn` module provides explicit methods for closed-loop memory. Unlike Python/JS, Rust does not use monkey-patching: call `enrich_messages()` before your LLM call and `record()` after.
```rust
// Paths and signatures below are illustrative; see the SDK methods reference.
use mubit_sdk::learn::LearnSession;
use serde_json::json;

let session = LearnSession::new(&client, Default::default()).await?;
let messages = vec![json!({ "role": "user", "content": "..." })];

// Enrich with lessons before LLM call
let enriched = session.enrich_messages(&messages).await?;

// Make your LLM call with enriched messages...

// Record the interaction (argument shape illustrative)
session.record(&enriched, &response).await?;

// End run (triggers reflection)
session.end().await?;
```
Key config: `inject_lessons`, `injection_position`, `max_token_budget`, `auto_reflect`, `cache_ttl_seconds`, `fail_open`.
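For a rough picture of that surface, the options can be sketched as a plain struct. This is a local sketch only: the field names come from the list above, but the types, example values, and the `LearnConfig` name itself are assumptions, not the SDK's real type.

```rust
// Local sketch only; the real config type ships with the SDK.
// Field names mirror the keys listed above; types are assumptions.
#[derive(Debug, Clone)]
struct LearnConfig {
    inject_lessons: bool,       // inject stored lessons into outgoing messages
    injection_position: String, // where in the message list lessons are placed
    max_token_budget: usize,    // cap on tokens spent on injected lessons
    auto_reflect: bool,         // trigger reflection automatically on end()
    cache_ttl_seconds: u64,     // how long enrichment results are cached
    fail_open: bool,            // on backend errors, pass messages through unenriched
}

fn main() {
    let config = LearnConfig {
        inject_lessons: true,
        injection_position: "system".to_string(),
        max_token_budget: 1024,
        auto_reflect: true,
        cache_ttl_seconds: 300,
        fail_open: true,
    };
    println!("{config:?}");
}
```

`fail_open` is worth calling out: with it set, a memory-backend outage degrades to "no lessons injected" rather than failing the agent's LLM call.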
## Client Helpers
### Quickstart
```rust
// Constructor and helper signatures are illustrative; see the SDK methods reference.
use mubit_sdk::Client;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(&env::var("MUBIT_API_KEY")?)?;
    // e.g. client.remember(...), client.recall(...), client.get_context(...)
    Ok(())
}
```
### Methods by Use Case
| Use case | Methods | What they do |
|---|---|---|
| Basic memory | `remember`, `recall` | Ingest content with intent classification; semantic query with evidence scoring |
| Prompt context | `get_context` | Token-budgeted, pre-assembled context block for LLM prompt injection (rules → lessons → facts) |
| Exact artifacts | `archive`, `dereference` | Bit-exact storage with stable reference IDs; retrieval without semantic search |
| Run lifecycle | `checkpoint`, `reflect`, `record_outcome` | Durable pre-compaction state; LLM-powered lesson extraction; reinforcement feedback |
| Multi-agent | `register_agent`, `list_agents`, `handoff`, `feedback` | Scoped read/write access per agent; task transfer between agents |
| Diagnostics | `memory_health`, `diagnose`, `surface_strategies`, `forget` | Staleness metrics; contextual error debugging; lesson clustering; deletion |
Helper calls take typed option structs instead of raw `serde_json::Value` payload bags.
## The Learning Loop
1. `remember()` → ingest facts, traces, lessons into MuBit
2. `reflect()` → LLM extracts lessons from run evidence (auto-promotes recurring lessons: run → session → global)
3. `get_context()` → retrieve relevant lessons for the next LLM call
4. `record_outcome()` → reinforce what worked, adjust confidence scores

With `LearnSession`, steps 1 and 3 are handled by `record()` and `enrich_messages()`. With helpers, you orchestrate them yourself.
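The four steps can be mimicked with a self-contained toy that stands in for the real client. Everything here (`MockMemory`, its fields, its methods) is invented for illustration and is not the SDK's API; it only shows the shape of the loop.

```rust
// Toy stand-in for the real client: lessons live in a Vec instead of MuBit.
struct MockMemory {
    lessons: Vec<(String, f32)>, // (lesson text, confidence score)
}

impl MockMemory {
    // 1. remember(): ingest a lesson with a neutral starting confidence.
    fn remember(&mut self, fact: &str) {
        self.lessons.push((fact.to_string(), 0.5));
    }
    // 2. reflect(): in the real SDK an LLM extracts lessons here; no-op in the toy.
    fn reflect(&mut self) {}
    // 3. get_context(): assemble stored lessons for the next prompt.
    fn get_context(&self) -> String {
        self.lessons
            .iter()
            .map(|(l, _)| l.as_str())
            .collect::<Vec<_>>()
            .join("\n")
    }
    // 4. record_outcome(): nudge the latest lesson's confidence up or down.
    fn record_outcome(&mut self, success: bool) {
        if let Some(last) = self.lessons.last_mut() {
            last.1 += if success { 0.1 } else { -0.1 };
        }
    }
}

fn main() {
    let mut mem = MockMemory { lessons: vec![] };
    mem.remember("retry on HTTP 429");   // step 1
    mem.reflect();                       // step 2
    let ctx = mem.get_context();         // step 3: inject into the next prompt
    assert!(ctx.contains("429"));
    mem.record_outcome(true);            // step 4: reinforce what worked
    println!("confidence = {:.1}", mem.lessons[0].1);
}
```

The real loop replaces the `Vec` with MuBit's durable store, and `reflect()` with LLM-powered lesson extraction and promotion.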
## Exact References
```rust
// Types, fields, and signatures are illustrative; see the SDK methods reference.
use mubit_sdk::{ArchiveOptions, Client, DereferenceOptions};

let client = Client::new(&api_key)?;

// Bit-exact storage with a stable reference ID.
let mut archive = ArchiveOptions::new(payload);
archive.run_id = Some("run-123".to_string());
archive.labels = vec!["invoice".to_string()];
let archived = client.archive(archive).await?;

// Retrieval without semantic search, byte-for-byte.
let mut dereference = DereferenceOptions::new(&archived.reference_id);
dereference.run_id = Some("run-123".to_string());
let exact = client.dereference(dereference).await?;
println!("{exact:?}");
```
## Managed MuBit resources
For teams and hosted deployments, configure agents declaratively as Projects + Agent Cards with versioned prompts and skills. Full guide: Projects, Agents, Skills, Prompts.
Managed resources are accessed directly on `Client` with `serde_json::json!` payloads; typed option structs (like `RememberOptions`) exist only for the 17 high-level helpers.
### Projects
```rust
// Payload and response field names are illustrative; see the API reference.
use serde_json::json;

let resp = client
    .create_project(json!({ "name": "support-bot" }))
    .await?;
let project_id = resp["project_id"].as_str().unwrap();

let projects = client.list_projects().await?;
```
### Agent Definitions
```rust
// Payload fields are illustrative; see the API reference.
client
    .create_agent_definition(json!({
        "project_id": project_id,
        "name": "triage-agent"
    }))
    .await?;
```
### Prompt version lifecycle
Every agent has exactly one active prompt version and any number of candidate versions awaiting review.
```rust
// Payloads below are sketches; exact fields are in the API reference.

// Manual edit — activates immediately.
client
    .set_prompt(json!({ "agent_id": agent_id, "prompt": "You are ..." }))
    .await?;

// Ask the control plane to propose a candidate from recent outcomes.
let resp = client
    .optimize_prompt(json!({ "agent_id": agent_id }))
    .await?;
let candidate_id = resp["version_id"].as_str().unwrap();

// Diff + approve.
let diff = client
    .get_prompt_diff(json!({ "agent_id": agent_id, "version_id": candidate_id }))
    .await?;
client
    .activate_prompt_version(json!({ "agent_id": agent_id, "version_id": candidate_id }))
    .await?;
```
See Prompt Optimization Lifecycle for the full capture → optimize → review → activate workflow.
### Skills
Same lifecycle: `create_skill`, `optimize_skill`, `activate_skill_version`, and `get_skill_diff` all accept `json!({...})` payloads and return the same version shape as prompts.
## Examples
Public adoption scenarios:
Internal smoke tests for wire-level verification live under `examples/internal/`.
## Related
- Full documentation: https://docs.mubit.ai
- SDK methods reference: https://docs.mubit.ai/sdk/sdk-methods
- API reference (HTTP + gRPC): https://docs.mubit.ai/api-reference/control-http
- GitHub: https://github.com/mubit-ai/ricedb