mubit-sdk 0.5.1

Umbrella Rust SDK for Mubit core/control planes
# mubit-sdk (Rust)

Canonical Rust SDK for MuBit.

## Install

```bash
cargo add mubit-sdk
```

## Three Layers

The SDK offers three integration depths. Start at the top and drop down only when you need more control.

| Layer | Module | What it does | When to use |
| --- | --- | --- | --- |
| **Learn** | `learn` | Explicit `enrich_messages()` / `record()` / `end()` for closed-loop memory with lesson injection | Closed-loop learning with full async control |
| **Helpers** | `Client` | 17 explicit methods for memory write/read, context assembly, reflection, multi-agent coordination | Fine-grained control over what gets remembered, queried, and when reflection triggers |
| **Raw** | `client.auth`, `client.core`, `client.control` | Direct 1:1 mappings to every gRPC/HTTP endpoint | Wire debugging, async ingest job polling, compatibility routes |

## Layer 1: Learn (Closed-Loop Memory)

The `learn` module provides explicit methods for closed-loop memory. Unlike the Python and JS SDKs, the Rust SDK does not monkey-patch your LLM client: call `enrich_messages()` before your LLM call and `record()` after it.

```rust
use mubit_sdk::learn::{LearnSession, LearnConfig};
use serde_json::json;

let session = LearnSession::new(
    LearnConfig::from_env()
        .agent_id("my-agent")
        .inject_lessons(true)
        .auto_reflect(true)
).await;

let messages = vec![
    json!({"role": "user", "content": "Fix the auth bug"}),
];

// Enrich with lessons before LLM call
let enriched = session.enrich_messages(&messages).await;

// Make your LLM call with enriched messages...

// Record the interaction
session.record("response text", "gpt-4o", 1500.0).await;

// End run (triggers reflection)
session.end().await;
```

Key config: `inject_lessons`, `injection_position`, `max_token_budget`, `auto_reflect`, `cache_ttl_seconds`, `fail_open`.
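A fuller configuration might look like the sketch below. Only `agent_id`, `inject_lessons`, and `auto_reflect` appear in the quickstart above; the remaining builder methods are assumed to mirror the config keys one-to-one, and the specific values (such as the `injection_position` string) are illustrative, not confirmed:

```rust
use mubit_sdk::learn::{LearnConfig, LearnSession};

// Sketch only: builder methods beyond agent_id/inject_lessons/auto_reflect
// are assumptions based on the config key names listed above.
let config = LearnConfig::from_env()
    .agent_id("my-agent")
    .inject_lessons(true)           // inject relevant lessons into messages
    .injection_position("system")   // hypothetical value: where lessons land
    .max_token_budget(2000)         // cap tokens spent on injected lessons
    .auto_reflect(true)             // trigger reflection on `end()`
    .cache_ttl_seconds(300)         // reuse enrichment results for 5 minutes
    .fail_open(true);               // on MuBit errors, pass messages through unchanged

let session = LearnSession::new(config).await;
```

With `fail_open` enabled, a MuBit outage degrades to plain (un-enriched) messages instead of failing the LLM call, which is usually what you want in production.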

## Layer 2: Client Helpers

### Quickstart

```rust
use mubit_sdk::{Client, ClientConfig, RecallOptions, RememberOptions};
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = env::var("MUBIT_API_KEY")?;
    let client = Client::new(
        ClientConfig::default()
            .run_id("sdk-rust-demo")
            .api_key(api_key),
    )?;

    let mut remember = RememberOptions::new(
        "Checkpoint before replaying recovery after token signer rotation.",
    );
    remember.run_id = Some("sdk-rust-demo".to_string());
    remember.intent = Some("lesson".to_string());
    remember.lesson_type = Some("success".to_string());
    client.remember(remember).await?;

    let mut recall = RecallOptions::new("What should I do before replaying recovery?");
    recall.run_id = Some("sdk-rust-demo".to_string());
    recall.entry_types = vec!["lesson".to_string(), "rule".to_string()];
    let answer = client.recall(recall).await?;

    println!("{}", answer["final_answer"]);
    Ok(())
}
```

### Methods by Use Case

| Use case | Methods | What they do |
| --- | --- | --- |
| Basic memory | `remember`, `recall` | Ingest content with intent classification; semantic query with evidence scoring |
| Prompt context | `get_context` | Token-budgeted, pre-assembled context block for LLM prompt injection (rules → lessons → facts) |
| Exact artifacts | `archive`, `dereference` | Bit-exact storage with stable reference IDs; retrieval without semantic search |
| Run lifecycle | `checkpoint`, `reflect`, `record_outcome` | Durable pre-compaction state; LLM-powered lesson extraction; reinforcement feedback |
| Multi-agent | `register_agent`, `list_agents`, `handoff`, `feedback` | Scoped read/write access per agent; task transfer between agents |
| Diagnostics | `memory_health`, `diagnose`, `surface_strategies`, `forget` | Staleness metrics; contextual error debugging; lesson clustering; deletion |

Helper calls take typed option structs instead of raw `serde_json::Value` payload bags.
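For example, a `get_context` call might look like the following sketch. The `ContextOptions` name and its fields are assumptions that follow the `RememberOptions`/`RecallOptions` pattern from the quickstart, not a confirmed signature:

```rust
use mubit_sdk::{Client, ClientConfig};

let client = Client::new(ClientConfig::default().run_id("sdk-rust-demo"))?;

// Hypothetical options type, modeled on RememberOptions/RecallOptions.
let mut context = ContextOptions::new("replaying recovery after signer rotation");
context.run_id = Some("sdk-rust-demo".to_string());
context.max_tokens = Some(1500); // hypothetical field: token budget for the block

let block = client.get_context(context).await?;

// The pre-assembled block (rules → lessons → facts) is ready for prompt injection.
println!("{}", block["context"]);
```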

### The Learning Loop

```
1. remember()         → ingest facts, traces, lessons into MuBit
2. reflect()          → LLM extracts lessons from run evidence
                         (auto-promotes recurring lessons: run → session → global)
3. get_context()      → retrieve relevant lessons for the next LLM call
4. record_outcome()   → reinforce what worked, adjust confidence scores
```

With `LearnSession`, steps 1 and 3 are handled by `record()` and `enrich_messages()`. With helpers, you orchestrate them yourself.
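Orchestrated by hand with helpers, the four steps might look like this sketch. The `ReflectOptions`, `ContextOptions`, and `OutcomeOptions` names are hypothetical, following the option-struct pattern shown earlier; only the method names come from the table above:

```rust
use mubit_sdk::{Client, ClientConfig, RememberOptions};

let client = Client::new(ClientConfig::default().run_id("sdk-rust-demo"))?;

// 1. Ingest a lesson from this run.
let mut remember = RememberOptions::new("Retrying ingest with backoff fixed the flakiness.");
remember.intent = Some("lesson".to_string());
client.remember(remember).await?;

// 2. Extract lessons from the run's evidence (hypothetical options type).
client.reflect(ReflectOptions::new()).await?;

// 3. Pull relevant lessons for the next LLM call (hypothetical options type).
let ctx = client.get_context(ContextOptions::new("next ingest attempt")).await?;

// 4. Reinforce what worked (hypothetical options type and constructor).
client.record_outcome(OutcomeOptions::success()).await?;
```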

### Exact References

```rust
use mubit_sdk::{ArchiveOptions, Client, ClientConfig, DereferenceOptions};

let client = Client::new(ClientConfig::default().run_id("sdk-rust-demo"))?;

let mut archive = ArchiveOptions::new(
    "--- a/query.py\n+++ b/query.py\n@@ ...",
    "patch_fragment",
);
archive.run_id = Some("sdk-rust-demo".to_string());
archive.labels = vec!["django".to_string(), "retry".to_string()];
let archived = client.archive(archive).await?;

let mut dereference = DereferenceOptions::new(
    archived["reference_id"].as_str().unwrap_or_default(),
);
dereference.run_id = Some("sdk-rust-demo".to_string());
let exact = client.dereference(dereference).await?;
println!("{}", exact["evidence"]["content"]);
```

## Layer 3: Raw Domains

Low-level, 1:1 mappings to every API endpoint, exposed on `client.auth`, `client.core`, and `client.control`.
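As a sketch of the layer's main use case, async ingest job polling, something like the loop below. Every method and field name here is an illustrative assumption: the raw surface mirrors the gRPC/HTTP service definitions exactly, so consult those for the real names:

```rust
// Hypothetical method/field names throughout; the raw layer tracks the
// wire API one-to-one rather than these examples.
let job = client.core.start_ingest(payload).await?;

loop {
    let status = client.core.get_ingest_job(&job.job_id).await?;
    if status.done {
        break;
    }
    // Back off between polls to avoid hammering the control plane.
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;
}
```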