# do_it
An autonomous coding agent that runs local LLMs via Ollama to read, write, and fix code in your repositories. Works on Windows and Linux with no shell dependency, no Python, no cloud APIs.
## Features
- Local-first — runs entirely on your machine via Ollama
- Cross-platform — Windows (MSVC) and Linux, no shell operators
- Agent roles — restrict tools and prompts per task type (`developer`, `navigator`, `qa`, `boss`, `research`, `memory`)
- Persistent memory — `.ai/` hierarchy: plan, last session notes, knowledge base
- Rich tool set — filesystem, git, web search, code intelligence (AST), Telegram notifications
- Model routing — use different models per role (e.g. a large coder model for `developer`, a small fast model for `navigator`)
## Quick Start
```sh
# 1. Pull a model
ollama pull qwen3.5:9b

# 2. Install (assumes a source checkout)
cargo install --path .

# 3. Run
do_it run --task "fix the flaky test"

# With a role (recommended for smaller models)
do_it run --task "describe the module layout" --role navigator
```
## Roles
Each role restricts the agent to a focused set of tools and a role-specific system prompt. This is critical for smaller models — 6–8 tools instead of 20+ significantly improves output quality.
| Role | Purpose | Key tools |
|---|---|---|
| `developer` | Write and edit code | read/write file, str_replace, run_command, git, AST |
| `navigator` | Explore codebase structure | tree, find_files, search, outline, find_references |
| `research` | Find information | web_search, fetch_url, memory |
| `qa` | Run tests, verify changes | run_command, diff_repo, git_log, search |
| `boss` | Plan and orchestrate | memory, tree, web_search, ask_human |
| `memory` | Manage `.ai/` state | memory_read, memory_write |
## Tools

- Filesystem: `read_file`, `write_file`, `str_replace`, `list_dir`, `find_files`, `search_in_files`, `tree`
- Execution: `run_command`, `diff_repo`
- Git: `git_status`, `git_commit`, `git_log`, `git_stash`
- Internet: `web_search` (DuckDuckGo, no API key), `fetch_url`
- Code intelligence (Rust, TypeScript, JavaScript, Python, C++, Kotlin): `get_symbols`, `outline`, `get_signature`, `find_references`
- Memory (`.ai/` hierarchy): `memory_read`, `memory_write`
- Communication: `ask_human` (Telegram or console), `finish`
## Configuration
```toml
# config.toml
= "http://localhost:11434"
= "qwen3.5:9b"
= 0.0
= 4096
= 8
= 6000

# Optional: different models per role
[]
= "qwen3-coder-next"
= "qwen3.5:4b"
= "qwen3.5:4b"

# Optional: Telegram notifications for ask_human
# telegram_token = "..."
# telegram_chat_id = "..."
```
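As a sketch, a fully keyed config might look like the following. Every key name here is an assumption — including the `[models]` table name and its role keys — since the actual schema is not shown above; inspect the real keys with `do_it config`. The values mirror those in the section above (the remaining numeric settings are left out because their keys cannot be inferred).

```toml
# Sketch only — key names are assumptions, not the confirmed schema.
ollama_url = "http://localhost:11434"  # Ollama HTTP endpoint
model = "qwen3.5:9b"                   # default model for all roles
temperature = 0.0

# Hypothetical per-role routing table (role names come from the Roles section;
# the table name and the role-to-model mapping are assumed)
[models]
developer = "qwen3-coder-next"  # large coder model for code edits
navigator = "qwen3.5:4b"        # small, fast model for exploration
qa = "qwen3.5:4b"

# Optional: Telegram notifications for ask_human (these key names appear in the doc)
# telegram_token = "..."
# telegram_chat_id = "..."
```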
## Memory hierarchy
The agent maintains persistent state in `.ai/` at the repository root:
```
.ai/
├── prompts/                 ← custom role prompts (override built-ins)
├── state/
│   ├── current_plan.md
│   ├── last_session.md      ← agent reads this on startup
│   └── session_counter.txt
├── logs/history.md
└── knowledge/               ← agent-written notes about the project
```
At session start, `last_session.md` is automatically injected into context, so the agent remembers what it did before.

Custom role prompts: create `.ai/prompts/developer.md` to override the built-in developer prompt for a specific project.
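A project-specific override can be dropped in like this. The file path follows the convention above; the prompt text itself is just an illustrative example — any Markdown body works:

```shell
# Override the built-in developer prompt for this repository.
mkdir -p .ai/prompts
cat > .ai/prompts/developer.md <<'EOF'
You are the developer agent for this repository.
Run `cargo test` before calling finish, and prefer small, focused edits.
EOF
```

The agent picks the file up on its next run; delete it to fall back to the built-in prompt.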
## CLI
```
do_it run --task <text|file|image>
          --repo <path>               (default: .)
          --role <role>               (default: unrestricted)
          --config <path>             (default: config.toml)
          --system-prompt <text|file>
          --max-steps <n>             (default: 30)

do_it config [--config <path>]
do_it roles
```
## Roadmap
- `spawn_agent` — boss delegates subtasks to role-specific sub-agents
- `git_push` / `git_pull` structured tools
- Web search providers beyond DuckDuckGo
- Tree-sitter backend for more accurate AST analysis
## Authors
Built by Claude Sonnet 4.6 with Oleksandr.
## License
MIT