koda-core 0.2.23

Core engine for the Koda AI coding agent (macOS and Linux only)


Pure logic with zero terminal dependencies — communicates exclusively through EngineEvent (output) and EngineCommand (input) enums over async channels.

What's inside

  • LLM providers — 14 providers (Anthropic, OpenAI, Gemini, Groq, Ollama, LM Studio, etc.)
  • Tool system — 20+ built-in tools (file ops, shell, search, memory, agents)
  • Per-tool approval — three trust modes (Plan/Safe/Auto) with effect-based safety classification
  • Inference loop — streaming tool-use loop with parallel execution
  • SQLite persistence — sessions, messages, compaction
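To illustrate how effect-based safety classification can interact with the three trust modes, here is a minimal, hypothetical sketch. The type and variant names (Effect, Decision, classify) are illustrative assumptions, not koda-core's actual API; only Plan/Safe/Auto come from the list above.

```rust
// Hypothetical sketch: each tool declares an Effect, and the active
// TrustMode maps that effect to a decision. Names are illustrative,
// not koda-core's real types.

#[derive(Debug, Clone, Copy)]
enum Effect { ReadOnly, Write, Execute }

#[derive(Debug, Clone, Copy)]
enum TrustMode { Plan, Safe, Auto }

#[derive(Debug, PartialEq)]
enum Decision { Allow, AskUser, Deny }

fn classify(mode: TrustMode, effect: Effect) -> Decision {
    match (mode, effect) {
        // Plan mode: only read-only inspection is permitted.
        (TrustMode::Plan, Effect::ReadOnly) => Decision::Allow,
        (TrustMode::Plan, _) => Decision::Deny,
        // Safe mode: reads run freely, mutations require approval.
        (TrustMode::Safe, Effect::ReadOnly) => Decision::Allow,
        (TrustMode::Safe, _) => Decision::AskUser,
        // Auto mode: everything runs without prompting.
        (TrustMode::Auto, _) => Decision::Allow,
    }
}

fn main() {
    println!("safe+write  -> {:?}", classify(TrustMode::Safe, Effect::Write));
    println!("plan+execute -> {:?}", classify(TrustMode::Plan, Effect::Execute));
}
```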

Rust edition: 2024 (MSRV 1.88)

Stability note: koda-core is published primarily to support the koda-cli binary distribution. Its public API is not stable pre-1.0 and may change in any release, including patch versions. If you embed koda-core directly, pin a specific version and review CHANGELOG.md before upgrading.
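If you do embed it, Cargo's `=` version requirement gives you the exact pin the note recommends:

```toml
[dependencies]
# Exact-version pin: cargo will not upgrade past 0.2.23,
# even for patch releases.
koda-core = "=0.2.23"
```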

Usage

koda-core is a channel-driven engine. Create async channels, spawn the inference loop, and drive the engine through EngineCommand/EngineEvent pairs:

use koda_core::engine::{EngineCommand, EngineEvent};
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // The engine communicates exclusively through async channels:
    // EngineEvents flow out (streaming text, tool calls, approvals);
    // EngineCommands flow in (approval responses, cancellation).
    let (cmd_tx, mut cmd_rx) = mpsc::channel::<EngineCommand>(64);
    let (evt_tx, mut evt_rx) = mpsc::channel::<EngineEvent>(64);

    // Spawn the inference loop, then select over evt_rx for streaming
    // output and use cmd_tx to send approval decisions back.
    // See koda-cli for a complete implementation.
}
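The command/event pattern itself can be shown dependency-free. The sketch below uses std channels and a thread in place of tokio's async channels and the real inference loop, and its enum variants (Text, ApprovalRequest, Done, Approve) are simplified placeholders, not koda-core's actual API:

```rust
use std::sync::mpsc;
use std::thread;

enum EngineEvent { Text(String), ApprovalRequest { tool: String }, Done }

enum EngineCommand { Approve }

// Wires up a toy "engine" thread and drives it from the frontend side,
// returning a transcript of the exchange.
fn run_session() -> Vec<String> {
    let (cmd_tx, cmd_rx) = mpsc::channel::<EngineCommand>();
    let (evt_tx, evt_rx) = mpsc::channel::<EngineEvent>();

    // Stand-in for the inference loop: streams text, asks for approval,
    // and finishes once the frontend approves.
    let engine = thread::spawn(move || {
        evt_tx.send(EngineEvent::Text("hello".into())).unwrap();
        evt_tx.send(EngineEvent::ApprovalRequest { tool: "shell".into() }).unwrap();
        if let Ok(EngineCommand::Approve) = cmd_rx.recv() {
            evt_tx.send(EngineEvent::Done).unwrap();
        }
    });

    // Frontend side: drain events, answer approval requests.
    let mut transcript = Vec::new();
    for evt in evt_rx {
        match evt {
            EngineEvent::Text(t) => transcript.push(format!("text:{t}")),
            EngineEvent::ApprovalRequest { tool } => {
                transcript.push(format!("approved:{tool}"));
                cmd_tx.send(EngineCommand::Approve).unwrap();
            }
            EngineEvent::Done => {
                transcript.push("done".into());
                break;
            }
        }
    }
    engine.join().unwrap();
    transcript
}

fn main() {
    println!("{:?}", run_session());
}
```

The key design property survives the simplification: the engine never touches a terminal, it only sends events and receives commands, so any frontend (TUI, editor plugin, test harness) can drive it.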

See DESIGN.md for architectural decisions.

License

MIT