# LocalGPT

A local-first AI assistant built in Rust — persistent memory, autonomous tasks, a ~27MB binary. Inspired by and compatible with OpenClaw.

```bash
cargo install localgpt
```
## Why LocalGPT?

- Single binary — no Node.js, Docker, or Python required
- Local-first — runs entirely on your machine; your data stays yours
- Persistent memory — markdown-based knowledge store with full-text and semantic search
- Autonomous heartbeat — delegate tasks and let it work in the background
- Multiple interfaces — CLI, web UI, desktop GUI
- Multiple LLM providers — Anthropic (Claude), OpenAI, Ollama
- OpenClaw compatible — works with SOUL.md, MEMORY.md, and HEARTBEAT.md files and the OpenClaw skills format
## Install

```bash
# Full install (includes desktop GUI)
cargo install localgpt

# Headless (no desktop GUI — for servers, Docker, CI)
cargo install localgpt --no-default-features
```
## Quick Start

```bash
# Initialize configuration
localgpt init

# Start interactive chat
localgpt chat

# Ask a single question
localgpt chat "What's in my task queue?"

# Run as a daemon with heartbeat, HTTP API, and web UI
localgpt daemon
```
## How It Works

LocalGPT uses plain markdown files as its memory:

```
~/.localgpt/workspace/
├── MEMORY.md        # Long-term knowledge (auto-loaded each session)
├── HEARTBEAT.md     # Autonomous task queue
├── SOUL.md          # Personality and behavioral guidance
└── knowledge/       # Structured knowledge bank (optional)
    ├── finance/
    ├── legal/
    └── tech/
```
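These are ordinary markdown files you can open and edit yourself. For instance, a HEARTBEAT.md task queue might be a simple checklist (illustrative only; the format is free-form):

```markdown
## Heartbeat tasks
- [ ] Summarize yesterday's notes into MEMORY.md
- [ ] File any new receipts under knowledge/finance/
- [x] Rotate the weekly log
```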
Files are indexed with SQLite FTS5 for fast keyword search, and with sqlite-vec for semantic search using local embeddings.
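The keyword side of this is ordinary SQLite. A minimal sketch of the FTS5 mechanics using the `sqlite3` CLI (the table name and file paths here are illustrative, not LocalGPT's actual schema):

```shell
# Index two "memory files" in an FTS5 table, then run a keyword search.
db=/tmp/localgpt-fts-demo.db
rm -f "$db"
sqlite3 "$db" <<'SQL'
CREATE VIRTUAL TABLE notes USING fts5(path, body);
INSERT INTO notes VALUES ('MEMORY.md', 'User prefers concise answers.');
INSERT INTO notes VALUES ('knowledge/finance/invoices.md', 'Domain renewal invoice filed 2024-03.');
-- MATCH does tokenized full-text search, not substring matching
SELECT path FROM notes WHERE notes MATCH 'invoice';
SQL
```

Only the finance note matches, since FTS5's default tokenizer does not stem ("invoices" in a path is a different token than "invoice").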
## Configuration

Stored at `~/.localgpt/config.toml`:

```toml
[agent]
model = "claude-cli/opus"

[anthropic]
api_key = "${ANTHROPIC_API_KEY}"

[heartbeat]
enabled = true
interval = "30m"
active_hours = { start = "09:00", end = "22:00" }

[memory]
workspace = "~/.localgpt/workspace"
```
## CLI Commands

```bash
# Chat
localgpt chat

# Daemon
localgpt daemon

# Memory
localgpt memory search "query"
localgpt memory stats

# Config
localgpt config
```
## HTTP API

When the daemon is running:

| Endpoint | Description |
|---|---|
| `GET /health` | Health check |
| `GET /api/status` | Server status |
| `POST /api/chat` | Chat with the assistant |
| `GET /api/memory/search?q=<query>` | Search memory |
| `GET /api/memory/stats` | Memory statistics |
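With the daemon up, the endpoints can be exercised from any HTTP client. A curl sketch (the address `127.0.0.1:3000` and the chat request body shape are assumptions; adjust to what your daemon actually reports):

```shell
# Health check
curl http://127.0.0.1:3000/health

# Search memory for a keyword
curl "http://127.0.0.1:3000/api/memory/search?q=invoice"

# Chat (the JSON field name is an assumption)
curl -X POST http://127.0.0.1:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is on my heartbeat queue?"}'
```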
## Blog

Why I Built LocalGPT in 4 Nights — the full story, with a commit-by-commit breakdown.
## Built With

Rust, Tokio, Axum, SQLite (FTS5 + sqlite-vec), fastembed, eframe