# mentra
Mentra is an agent runtime for building tool-using LLM applications.
MSRV: Rust 1.85.
## Current Features
- streaming model response handling
- custom tool execution through the async `ExecutableTool` trait
- builtin `shell`, `background_run`, `check_background`, and `files` tools
- builtin `task` subagents with isolated child context and parent-side tracking
- persistent agent teams with `team_spawn`, `team_send`, `broadcast`, `team_read_inbox`, and generic request-response protocols via `team_request`, `team_respond`, and `team_list_requests`
- three-layer context compaction with silent tool-result shrinking, auto-summary compaction, and a builtin `compact` tool
- agent events and snapshots for CLI or UI watchers
- Anthropic provider support
- Gemini Developer API provider support
- OpenAI provider support via the Responses API
- image inputs for OpenAI and Anthropic, plus inline image bytes for Gemini
## Quickstart Example
Clone the repository and run the workspace quickstart example:
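A minimal invocation might look like the following; the repository URL and the exact example name are not given here, so both appear as placeholders:

```shell
# <repository-url> is a placeholder — substitute the Mentra repository URL.
git clone <repository-url> mentra
cd mentra
# "quickstart" is assumed to be the example name; pass the prompt as an argument.
cargo run --example quickstart -- "What does this crate do?"
```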
The quickstart example accepts a prompt from CLI args or stdin. Set `MENTRA_MODEL` to skip model discovery and force a specific OpenAI model.
## Building A Runtime
Use `Runtime::builder()` when you want Mentra's builtin runtime tools, or `Runtime::empty_builder()` when you want to opt into every tool explicitly.
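A hedged sketch of the two entry points; `Runtime::builder()` and `Runtime::empty_builder()` come from the text above, while the `mentra` import path and the `build()` finisher are assumptions:

```rust
// Assumes the `mentra` crate; import path and `build()` are not confirmed API.
use mentra::Runtime;

fn main() {
    // Builder with Mentra's builtin runtime tools pre-registered:
    let _with_builtins = Runtime::builder().build();

    // Empty builder — every tool must be registered explicitly:
    let _explicit = Runtime::empty_builder().build();
}
```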
## Coding Agent Setup
`Runtime::builder()` registers Mentra's builtin tools, including `shell`, `background_run`, `check_background`, `files`, and the runtime/task/team intrinsics. Shell and background execution remain disabled by default, so coding-agent setups must opt in with a runtime policy.
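A sketch of opting in via a runtime policy. Only the existence of a runtime policy is stated above — the `RuntimePolicy` type and the `with_policy`, `allow_shell`, and `allow_background` names are hypothetical:

```rust
// Hypothetical names throughout; the docs only say shell and background
// execution are disabled by default and must be enabled via a runtime policy.
use mentra::{Runtime, RuntimePolicy};

fn main() {
    let _runtime = Runtime::builder()
        .with_policy(RuntimePolicy {
            allow_shell: true,      // opt in to the `shell` tool
            allow_background: true, // opt in to `background_run` / `check_background`
            ..Default::default()
        })
        .build();
}
```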
Registering a skills directory also makes the builtin `load_skill` tool available:
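A sketch of the registration; the `with_skills_dir` method name and the directory path are illustrative assumptions — the source only says that registering a skills directory enables `load_skill`:

```rust
// `with_skills_dir` is a hypothetical builder method name.
use mentra::Runtime;

fn main() {
    let _runtime = Runtime::builder()
        .with_skills_dir("./skills") // path is illustrative
        .build();
}
```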
## Sending Images
You can attach image blocks alongside text when sending a user turn:
For already-hosted assets, use `ContentBlock::image_url(...)` instead. Gemini currently supports inline `image_bytes(...)` inputs only and rejects `image_url(...)`.
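A hedged sketch of a mixed text-and-image user turn. `ContentBlock::image_bytes(...)` and `ContentBlock::image_url(...)` come from the text above; the `text` constructor, the `send_user_turn` method, and the `Agent` type are assumptions:

```rust
// `Agent`, `send_user_turn`, and `ContentBlock::text` are assumed names.
use mentra::{Agent, ContentBlock};

async fn ask_about_diagram(agent: &mut Agent) -> anyhow::Result<()> {
    let png = std::fs::read("diagram.png")?;
    agent
        .send_user_turn(vec![
            ContentBlock::text("What does this diagram show?"),
            // Inline bytes work across OpenAI, Anthropic, and Gemini:
            ContentBlock::image_bytes("image/png", png),
            // Hosted URL variant — not accepted by Gemini:
            // ContentBlock::image_url("https://example.com/diagram.png"),
        ])
        .await?;
    Ok(())
}
```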
## Context Compaction
Agents compact context by default:
- old tool results are micro-compacted in outbound requests
- when estimated request context exceeds roughly 50k tokens, Mentra writes the full transcript to the default transcript directory and replaces older history with a model-generated summary
- the model can also call the builtin `compact` tool explicitly
You can tune or disable this per-agent with `ContextCompactionConfig`:
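A sketch of a per-agent override. `AgentConfig`, `ContextCompactionConfig`, and the `context_compaction` / `transcript_dir` fields appear elsewhere in this README; the use of `Default` and any other field is an assumption:

```rust
// Only `context_compaction` and `transcript_dir` are documented field names;
// the `Default` impls are assumed.
use mentra::{AgentConfig, ContextCompactionConfig};

fn make_config() -> AgentConfig {
    AgentConfig {
        context_compaction: ContextCompactionConfig {
            // Redirect where full transcripts are written before summarization:
            transcript_dir: "/tmp/mentra-transcripts".into(),
            ..Default::default()
        },
        ..Default::default()
    }
}
```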
## Data And Persistence Defaults
For non-test builds, Mentra keeps all default persisted state under a workspace-scoped app-data directory:
- store: `<platform data dir>/mentra/workspaces/<workspace-hash>/runtime.sqlite`
- runtime-scoped stores: `<platform data dir>/mentra/workspaces/<workspace-hash>/runtime-<runtime-id>.sqlite`
- team state: `<platform data dir>/mentra/workspaces/<workspace-hash>/team/`
- task state: `<platform data dir>/mentra/workspaces/<workspace-hash>/tasks/`
- transcripts: `<platform data dir>/mentra/workspaces/<workspace-hash>/transcripts/`
If the platform data directory cannot be resolved, Mentra falls back to `.mentra/workspaces/<workspace-hash>/...` inside the current workspace.
Override these defaults when needed:
- use `Runtime::builder().with_store(...)` for the SQLite store
- customize `AgentConfig::task.tasks_dir`, `AgentConfig::team.team_dir`, and `AgentConfig::context_compaction.transcript_dir` for task, team, and transcript storage
## Interactive Repo Example
Clone the repository when you want the richer interactive demo with provider selection, persisted runtime inspection, skills loading, and team/task visibility.
Set `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY`, then run. The example lets you choose a provider and shows up to 10 models from that provider ordered newest to oldest.
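A sketch of the invocation; the example name is not given in this section, so it appears as a placeholder:

```shell
export ANTHROPIC_API_KEY=...   # or OPENAI_API_KEY / GEMINI_API_KEY
# "interactive" is a placeholder for the actual example name in the repo.
cargo run --example interactive
```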
## Run Checks
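The check commands are not listed in this section; the standard Rust workspace checks below are an assumption about what the project runs:

```shell
cargo fmt --all -- --check            # formatting
cargo clippy --workspace --all-targets # lints
cargo test --workspace                 # tests
```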