# LocalGPT
Build explorable 3D worlds with natural language — geometry, materials, lighting, audio, and behaviors. Open source, runs locally.
## Install

```sh
# World Building
# AI Assistant (chat, memory, daemon)
```
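If you build from source with Cargo, installation might look like this (the crate names are assumptions; check the project's releases page for prebuilt binaries):

```sh
# World Building
cargo install localgpt-gen

# AI Assistant (chat, memory, daemon)
cargo install localgpt
```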
## Gen Mode (World Building)
`localgpt-gen` is a standalone binary for AI-driven 3D world creation built on the Bevy game engine.
```sh
# Start interactive mode
# Start with an initial prompt
# Load an existing scene
# Verbose logging
```
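Illustrative invocations for the modes above (the flag and argument names are assumptions; run `localgpt-gen --help` for the real ones):

```sh
localgpt-gen                            # interactive mode
localgpt-gen "a misty harbor at dawn"   # start with an initial prompt
localgpt-gen --load harbor.scene.json   # load an existing scene
localgpt-gen --verbose                  # verbose logging
```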
### Features
- Parametric shapes — box, sphere, cylinder, capsule, plane, torus, pyramid, tetrahedron, icosahedron, wedge
- PBR materials — color, metalness, roughness, emissive, alpha, double-sided
- Lighting — point, spot, directional lights with color and intensity
- Behaviors — orbit, spin, bob, look_at, pulse, path_follow, bounce
- Audio — ambient sounds (wind, rain, forest, ocean, cave) and spatial emitters
- Export — glTF/GLB, HTML (browser-viewable), screenshots
- World skills — save/load complete worlds as reusable skills
### Headless Mode & Experiment Queue
Queue world experiments and generate without a window — useful for overnight batch runs, CI pipelines, or scripted variations:
```sh
# Generate a single world (no window)
# With style hint
```
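A sketch of headless generation (the subcommand and flag names are assumptions):

```sh
# Generate a single world (no window)
localgpt-gen gen "floating islands over a sea of clouds"

# With style hint
localgpt-gen gen "floating islands over a sea of clouds" --style "low-poly, warm dusk palette"
```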
The memory system learns your creative style across sessions — palettes, lighting preferences, entity templates — and applies them automatically in future generations.
Full docs: Headless Mode & Experiment Queue
### MCP Server
Use Gen from any MCP-compatible tool (Claude CLI, Codex CLI, Gemini CLI, VS Code, Zed, Cursor):
Add to your `.mcp.json`:
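A minimal entry might look like this (the `args` value is an assumption; the outer `mcpServers` shape follows the common MCP client convention):

```json
{
  "mcpServers": {
    "localgpt-gen": {
      "command": "localgpt-gen",
      "args": ["mcp"]
    }
  }
}
```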
When using Gen interactively with a CLI backend, use `--connect` to route tool calls to your existing window instead of spawning a new one. See CLI Mode (MCP Relay).
Full docs: LocalGPT Gen | MCP Server
Built something cool? Share on Discord or YouTube!
## AI Assistant
`localgpt` is a local-first AI assistant with persistent memory, autonomous tasks, and multiple interfaces.
```sh
# Interactive chat
# Single question
# Run as daemon with HTTP API and web UI
```
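Possible invocations for the modes above (the subcommand names are assumptions; see the CLI reference below):

```sh
localgpt                                        # interactive chat
localgpt "where did I save the harbor scene?"   # single question
localgpt daemon                                 # HTTP API and web UI
```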
### Why LocalGPT?
- Single binary — no Node.js, Docker, or Python required
- Local device focused — runs entirely on your machine, your data stays yours
- Persistent memory — markdown-based knowledge store with full-text and semantic search
- Hybrid web search — native provider search passthrough plus client-side fallback
- Autonomous heartbeat — delegate tasks and let it work in the background
- Multiple interfaces — CLI, web UI, desktop GUI, Telegram bot
- Defense-in-depth security — signed policy files, kernel-enforced sandbox, prompt injection defenses
- Multiple LLM providers — Anthropic, OpenAI, xAI, Ollama, GLM, Vertex AI, CLI providers
### How It Works
LocalGPT uses XDG-compliant directories for config/data/state/cache. Run `localgpt paths` to see the resolved paths.
Workspace memory layout:

```text
<workspace>/
├── MEMORY.md     # Long-term knowledge (auto-loaded each session)
├── HEARTBEAT.md  # Autonomous task queue
├── SOUL.md       # Personality and behavioral guidance
└── knowledge/    # Structured knowledge bank
```
Files are indexed with SQLite FTS5 for keyword search and sqlite-vec for semantic search with local embeddings.
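A minimal sketch of the keyword side of that index, using Python's built-in `sqlite3` (the table schema here is illustrative, not LocalGPT's actual layout):

```python
import sqlite3

# In-memory FTS5 index mirroring the workspace files described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [
        ("MEMORY.md", "user prefers warm lighting and low-poly palettes"),
        ("HEARTBEAT.md", "nightly task: export the island scene to GLB"),
    ],
)

# MATCH performs full-text keyword search; rank orders by BM25 relevance.
hits = conn.execute(
    "SELECT path FROM notes WHERE notes MATCH 'lighting' ORDER BY rank"
).fetchall()
print(hits)  # [('MEMORY.md',)]
```

Semantic search works the same way in spirit, but matches on embedding distance via sqlite-vec rather than on keywords.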
### Configuration
Stored at `<config_dir>/config.toml`. The section and key names below are representative; see the full reference for the exact names:

```toml
[agent]
model = "claude-cli/opus"

[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"

[heartbeat]
enabled = true
interval = "30m"

[telegram]
enabled = true
token = "${TELEGRAM_BOT_TOKEN}"
```
Full config reference: website/docs/configuration.md
### Security
- Kernel-enforced sandbox — Landlock/seccomp on Linux, Seatbelt on macOS
- Signed policy files — HMAC-SHA256 signed `LocalGPT.md` with tamper detection
- Prompt injection defenses — marker stripping, pattern detection, content boundaries
- Audit chain — hash-chained security event log
Security docs: website/docs/sandbox.md | website/docs/localgpt.md
### HTTP API
| Endpoint | Description |
|---|---|
| `GET /` | Embedded web UI |
| `POST /api/chat` | Chat with assistant |
| `POST /api/chat/stream` | SSE streaming chat |
| `GET /api/memory/search?q=<query>` | Search memory |
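For example, chatting with a running daemon via curl (the port and request body shape are assumptions; check the API reference for the actual schema):

```sh
curl -s http://localhost:8080/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"message": "What is in MEMORY.md?"}'
```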
Full API reference: website/docs/http-api.md
### CLI Commands
Full CLI reference: website/docs/cli-commands.md
## Blog
## Built With
Rust, Tokio, Axum, Bevy, SQLite (FTS5 + sqlite-vec), fastembed, eframe
