Koda Core — the engine library for the Koda AI coding agent.
This crate contains the pure engine logic with zero terminal dependencies.
It communicates exclusively through EngineEvent (output) and
EngineCommand (input) enums.
See DESIGN.md in the repository root for the full architectural rationale.
Modules
- agent
- Sub-agent configuration, discovery, and invocation; KodaAgent holds shared, immutable agent resources.
- approval
- Tool approval modes, safety gates, and shared mode state.
- bash_path_lint
- Heuristic path-escape detection for bash commands that leave the project root.
- bash_safety
- Bash command safety classification (destructive, mutating, read-only).
- compact
- Session compaction — summarize old messages to reclaim context token budget.
- config
- Global configuration: provider, model, model settings, CLI flags.
- context
- Context window management and token budgeting.
- db
- SQLite persistence layer — sessions, messages, usage tracking.
- engine
- Engine protocol: the EngineEvent/EngineCommand enums — the boundary between Koda's core and any client.
- git
- Git helpers for context injection — status, diff, blame, log.
- inference
- The main inference loop — streaming responses, tool execution, and sub-agent delegation.
- inference_helpers
- Shared helpers for the inference loop — context estimation, message assembly, error classification.
- keystore
- Secure API key storage in the OS keychain via keyring.
- loop_guard
- Guardrail against runaway tool-call loops — loop detection and a hard-cap user prompt.
- mcp
- MCP (Model Context Protocol) server connections and tool routing.
- memory
- Project memory — MEMORY.md/CLAUDE.md read/write; project context injected into the system prompt.
- model_context
- Hardcoded context-window lookup table (fallback when the API doesn't report one).
- output_caps
- Centralized tool output caps, scaled to the model's context window.
- persistence
- The Persistence trait — the storage contract for koda.
- preview
- Pre-confirmation diff previews for destructive tool operations.
- progress
- Structured progress reporting for long-running operations.
- prompt
- System prompt construction.
- providers
- LLM provider abstraction — Anthropic, Gemini, OpenAI-compatible.
- runtime_env
- Thread-safe environment variable access for API keys and config (mockable for tests).
- session
- Session lifecycle — create, resume, list, delete; KodaSession holds per-conversation state.
- settings
- User settings persistence (~/.config/koda/settings.json).
- skills
- Skill discovery and activation (project, user, built-in).
- sub_agent_cache
- Cache for sub-agent provider/model config across invocations.
- tool_dispatch
- Tool dispatch — routes tool calls from inference to the registry (sequential, parallel, and sub-agent execution).
- tools
- Tool registry, definitions, execution, and path safety.
- truncate
- Token-safe output truncation for display.
- undo
- Undo stack for file mutations.
- version
- Version string and update-check helpers (non-blocking startup check for newer crate versions).