Intent-Engine
Persistent memory for AI coding assistants.
AI Forgets. Every Time.
Without Intent-Engine:
Day 1: "Build authentication"
AI works brilliantly...
[session ends]
Day 2: "Continue auth"
AI: "What authentication?"
With Intent-Engine:
Day 1: "Build authentication"
AI works, saves progress...
[session ends]
Day 2: "Continue auth"
AI: "Resuming #42: JWT auth.
Done: token generation.
Next: refresh tokens."
One command restores everything: ie status
Visual Dashboard
See your entire task structure at a glance:

Features:
- Task Navigator — Hierarchical tree view with search
- Task Detail — Full spec with markdown rendering (mermaid diagrams, code blocks)
- Decision Timeline — Chronological log of all decisions and notes
- Multi-project Support — Switch between projects via tabs
Not Just Memory — Infrastructure
What actually happens when things go wrong:
- Session ends → ✓ Persisted
- Tool crashes → ✓ Recoverable
- Week later → ✓ Full history
- Multiple agents → ✓ Isolated
- Complex project → ✓ Focus-driven
Why It Works
Minimal Footprint — ~200 tokens overhead, single binary, no daemons
Battle-Tested Stack — Rust + SQLite + FTS5, GB-scale in milliseconds, local-only
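As a rough illustration of why SQLite + FTS5 keeps retrieval fast and local, here is a minimal sketch using Python's stdlib `sqlite3` (the table and column names are invented for the example; Intent-Engine's actual schema may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # local-only: no server, no daemon
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(task, body)")
conn.executemany("INSERT INTO notes VALUES (?, ?)", [
    ("#42 JWT auth", "token generation done; refresh tokens next"),
    ("#15 Schema migration", "added users table"),
])

# Full-text MATCH with relevance ranking — this is what scales to GBs
rows = conn.execute(
    "SELECT task FROM notes WHERE notes MATCH ? ORDER BY rank", ("refresh",)
).fetchall()
print(rows[0][0])  # the JWT auth note is the only match for "refresh"
```

The same index structure is what lets focus-driven retrieval pull only the relevant slice of history instead of replaying everything.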
The Bigger Picture
The unsolved problem in AI agents: tasks that span days or weeks.
Intent-Engine provides the foundation:
Week-long refactoring:
├── Agent A (session: "api") → focus: #12 REST endpoints
├── Agent B (session: "db") → focus: #15 Schema migration
└── Agent C (session: "test") → focus: #18 Integration tests
                                depends_on: [#12, #15]
- Interruptions → Persistent memory
- Multi-agent → Session isolation
- Scheduling → Dependency graph (depends_on)
- Context explosion → Focus-driven retrieval
Result: Reliable multi-day, multi-agent workflows.
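The `depends_on` relationship above is an ordinary dependency graph, so ready work can be ordered with a standard topological sort. A sketch using Python's stdlib `graphlib`, with the task IDs from the tree above (illustrative only, not Intent-Engine's scheduler):

```python
from graphlib import TopologicalSorter

# Map each task to its prerequisites: #18 depends on #12 and #15
deps = {"#12": set(), "#15": set(), "#18": {"#12", "#15"}}

order = list(TopologicalSorter(deps).static_order())
# #18 (integration tests) is scheduled only after both prerequisites
assert order[-1] == "#18"
```

Agents A and B can run in parallel because #12 and #15 have no edge between them; Agent C blocks until both complete.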
Get Started
Claude Code (recommended)
OpenCode
Manual Install
# Choose one
# Or use the install script
Core Commands
LLM-Powered Features (Optional)
Event-to-Task Synthesis — Automatically generate structured task summaries from event history:
# Configure LLM (one-time setup)
# Test connection
# Now when completing tasks, synthesis happens automatically for AI-owned tasks
Cost Awareness:
- ~1,500 tokens per synthesis ($0.003 with GPT-3.5-turbo)
- 20 tasks/day ≈ $22/year with GPT-3.5, or use local models (free)
- Synthesis only happens when an LLM is configured (graceful degradation)
- See LLM Use Cases for full details
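The yearly figure follows directly from the per-synthesis cost:

```python
# Assumed numbers from the note above: ~$0.003 per synthesis at 20 tasks/day
cost_per_synthesis = 0.003   # ~1,500 tokens at GPT-3.5-turbo rates
tasks_per_day = 20

yearly = cost_per_synthesis * tasks_per_day * 365
print(f"${yearly:.2f}/year")  # $21.90/year, i.e. roughly $22
```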
How It Works
Session Start → ie status → Full context restored
↓
Working → ie plan → Tasks tracked
→ ie log → Decisions recorded
↓
Interruption → Auto-persisted
↓
Next Session → ie status → Continue where you left off
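The flow above boils down to writing state to a local SQLite file so the next session can read it back. A toy sketch of that round trip (invented schema, not Intent-Engine's):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "tasks.db")

# Session 1: record progress, then end (or crash) — commit makes it durable
conn = sqlite3.connect(db)
conn.execute(
    "CREATE TABLE IF NOT EXISTS tasks("
    "id INTEGER PRIMARY KEY, title TEXT, next_step TEXT)"
)
conn.execute("INSERT INTO tasks VALUES (42, 'JWT auth', 'refresh tokens')")
conn.commit()
conn.close()

# Session 2 (days later): the state is still on disk
conn = sqlite3.connect(db)
row = conn.execute(
    "SELECT title, next_step FROM tasks WHERE id = 42"
).fetchone()
print(f"Resuming #42: {row[0]}. Next: {row[1]}")
```

Because the commit happens as work progresses, an interruption at any point loses nothing already recorded.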
Documentation
- Quick Start — Get running in 5 minutes
- Dashboard Guide — Visual interface walkthrough
- CLAUDE.md — AI integration guide
- Commands — Full reference
MIT OR Apache-2.0 · GitHub
Give your AI the memory it deserves.