LocalGPT
A local device focused AI assistant built in Rust — persistent memory, autonomous tasks. Inspired by and compatible with OpenClaw.
cargo install localgpt
Why LocalGPT?
- Single binary — no Node.js, Docker, or Python required
- Local device focused — runs entirely on your machine, your memory data stays yours
- Persistent memory — markdown-based knowledge store with full-text and semantic search
- Hybrid web search — native provider search passthrough plus client-side fallback providers
- Autonomous heartbeat — delegate tasks and let it work in the background
- Multiple interfaces — CLI, web UI, desktop GUI, Telegram bot
- Defense-in-depth security — signed policy files, kernel-enforced sandbox, prompt injection defenses
- Multiple LLM providers — Anthropic (Claude), OpenAI, xAI (Grok), Ollama, GLM (Z.AI), Google Vertex AI, CLI providers (claude-cli, gemini-cli, codex-cli)
- OpenClaw compatible — works with SOUL, MEMORY, HEARTBEAT markdown files and skills format
Install
# From crates.io (includes desktop GUI)
# Headless (no desktop GUI — for servers, Docker, CI)
# From source checkout
Quick Start
# Initialize configuration
# Start interactive chat
# Ask a single question
# Inspect resolved config/data/state/cache paths
# Run as a daemon with heartbeat, HTTP API and web ui
How It Works
LocalGPT uses XDG-compliant directories (or platform equivalents) for config/data/state/cache. Run localgpt paths to see your resolved paths.
Workspace memory layout:
<data_dir>/workspace/
├── MEMORY.md # Long-term knowledge (auto-loaded each session)
├── HEARTBEAT.md # Autonomous task queue
├── SOUL.md # Personality and behavioral guidance
└── knowledge/ # Structured knowledge bank (optional)
├── finance/
├── legal/
└── tech/
Files are indexed with SQLite FTS5 for fast keyword search, and sqlite-vec for semantic search with local embeddings.
Configuration
Stored at <config_dir>/config.toml (run localgpt config path or localgpt paths):
[]
= "claude-cli/opus"
[]
= "${ANTHROPIC_API_KEY}"
[]
= true
= "30m"
= { = "09:00", = "22:00" }
[]
= "~/.local/share/localgpt/workspace" # optional override
# Optional: Telegram bot
[]
= true
= "${TELEGRAM_BOT_TOKEN}"
Using a local OpenAI-compatible server (LM Studio, llamafile, etc.)
If you run a local server that speaks the OpenAI API (e.g., LM Studio, llamafile, vLLM), point LocalGPT at it and pick an openai/* model ID so it does not try to spawn the claude CLI:
- Start your server (LM Studio default port:
1234; llamafile default:8080) and note its model name. - Edit your config file (
localgpt config path):[] = "openai/<your-model-name>" [] # Many local servers accept a dummy key = "not-needed" = "http://127.0.0.1:8080/v1" # or http://127.0.0.1:1234/v1 for LM Studio - Run
localgpt chat(orlocalgpt daemon start) and requests will go to your local server.
Tip: If you see Failed to spawn Claude CLI, change agent.default_model away from claude-cli/* or install the claude CLI.
Web Search
Configure web search providers under [tools.web_search] and validate with:
Full setup guide: docs/web-search.md
Telegram Bot
Access LocalGPT from Telegram with full chat, tool use, and memory support.
- Create a bot via @BotFather and get the API token
- Set
TELEGRAM_BOT_TOKENor add the token toconfig.toml - Start the daemon:
localgpt daemon start - Message your bot — enter the 6-digit pairing code shown in the daemon logs
Once paired, use /help in Telegram to see available commands.
Security
LocalGPT ships with layered security to keep the agent confined and your data safe — no cloud dependency required.
Kernel-Enforced Sandbox
Every shell command the agent runs is executed inside an OS-level sandbox:
| Platform | Mechanism | Capabilities |
|---|---|---|
| Linux | Landlock LSM + seccomp-bpf | Filesystem allow-listing, network denial, syscall filtering |
| macOS | Seatbelt (SBPL) | Filesystem allow-listing, network denial |
| All | rlimits | 120s timeout, 1MB output cap, 50MB file size, 64 process limit |
The sandbox denies access to sensitive directories including ~/.ssh, ~/.aws, ~/.gnupg, ~/.docker, ~/.kube, and credential files (~/.npmrc, ~/.pypirc, ~/.netrc). It blocks all network syscalls by default. Configure extra paths as needed:
[]
= true
= "auto" # auto | full | standard | minimal | none
[]
= ["/opt/data"]
= ["/tmp/scratch"]
:::note Claude CLI Backend
If using the Claude CLI as your LLM backend (agent.default_model = "claude-cli/*"), the sandbox does not apply to Claude CLI subprocess calls — only to tool-executed shell commands. The subprocess itself runs outside the sandbox with access to your system.
:::
Signed Custom Rules (LocalGPT.md)
Place a LocalGPT.md in your workspace to add custom rules (e.g. "never execute rm -rf"). The file is HMAC-SHA256 signed with a device-local key so tampering will be detected:
Verification runs at every session start. If the file is unsigned, missing, or tampered with, LocalGPT falls back to its hardcoded security suffix — it never operates with a compromised LocalGPT.md.
Prompt Injection Defenses
- Marker stripping — known LLM control tokens (
<|im_start|>,[INST],<<SYS>>, etc.) are stripped from tool outputs - Pattern detection — regex scanning for injection phrases ("ignore previous instructions", "you are now a", etc.) with warnings surfaced to the user
- Content boundaries — all external content is wrapped in XML delimiters (
<tool_output>,<memory_context>,<external_content>) so the model can distinguish data from instructions - Protected files — the agent is blocked from writing to
LocalGPT.md,.localgpt_manifest.json,IDENTITY.md,localgpt.device.key, andlocalgpt.audit.jsonl
Audit Chain
All security events (signing, verification, tamper detection, blocked writes) are logged to an append-only, hash-chained audit file at <state_dir>/localgpt.audit.jsonl. Each entry contains the SHA-256 of the previous entry, making retroactive modification detectable.
CLI Commands
# Chat
# Desktop GUI (default build)
# Daemon
# Memory
# Web search
# Security
# Config
# Paths
HTTP API
When the daemon is running:
| Endpoint | Description |
|---|---|
GET / |
Embedded web UI |
GET /health |
Health check |
GET /api/status |
Server status |
GET /api/config |
Effective config summary |
GET /api/heartbeat/status |
Last heartbeat status/event |
POST /api/sessions |
Create session |
GET /api/sessions |
List active in-memory sessions |
GET /api/sessions/{session_id} |
Session status |
DELETE /api/sessions/{session_id} |
Delete session |
GET /api/sessions/{session_id}/messages |
Session transcript/messages |
POST /api/sessions/{session_id}/compact |
Compact session history |
POST /api/sessions/{session_id}/clear |
Clear session history |
POST /api/sessions/{session_id}/model |
Switch model for session |
POST /api/chat |
Chat with the assistant |
POST /api/chat/stream |
SSE streaming chat |
GET /api/ws |
WebSocket chat endpoint |
GET /api/memory/search?q=<query> |
Search memory |
GET /api/memory/stats |
Memory statistics |
POST /api/memory/reindex |
Trigger memory reindex |
GET /api/saved-sessions |
List persisted sessions |
GET /api/saved-sessions/{session_id} |
Get persisted session |
GET /api/logs/daemon |
Tail daemon logs |
Gen Mode (World Generation)
Gen is a separate binary (localgpt-gen) in the workspace — not a localgpt gen subcommand.
# Install from crates.io
# Install from this repo
# Or run directly from the workspace
# Start interactive Gen mode
# Start with an initial prompt
# Load an existing glTF/GLB scene
# Verbose logging
# Combine options
# Custom agent ID (default: "gen")
localgpt-gen runs a Bevy window (1280x720) on the main thread and an agent loop on a background tokio runtime. The agent gets safe tools (memory, web) plus Gen-specific tools (spawn/modify entities, scene inspection, glTF export). Type /quit or /exit in the terminal to close.
Built something cool with Gen? Share your creation on Discord!
Blog
Why I Built LocalGPT in 4 Nights — the initial story with commit-by-commit breakdown.
Built With
Rust, Tokio, Axum, SQLite (FTS5 + sqlite-vec), fastembed, eframe
Contributors
Stargazers
License
Your contributions
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed under the Apache-2.0 license, without any additional terms or conditions.