# Kkachi
High-performance LLM prompt optimization library with composable pipelines.
## Features
- Composable Pipelines — Chain steps: refine, best_of, ensemble, reason, extract, map
- Concurrent Execution — Run multiple pipelines concurrently with shared LLM and rate limiting
- Step Combinators — then, race, par, retry, fallback, when
- DSPy-Style Modules — Chain of Thought, Best of N, Ensemble, Program of Thought, ReAct Agent
- Jinja2 Templates — Dynamic prompt generation
- CLI Validators — External tool validation with composition (.and_(), .or_())
- Memory & RAG — Persistent vector store with DuckDB
- Pattern Validation — Regex, substring, length checks
- LLM-as-Judge — Semantic validation
- Multi-Objective Optimization — Pareto-optimal prompt tuning
- Skills & Defaults — Reusable instruction injection and runtime substitution
- Zero-Copy Core — GATs over async/await, lifetimes over Arc, minimal cloning
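The pattern-validation feature above (regex, substring, length checks) boils down to a few structural constraints on model output. A minimal plain-Python sketch of the idea — not the kkachi validator API, whose names are not shown here:

```python
import re

def validate(output, *, pattern=None, must_contain=None, max_len=None):
    """Check a model output against simple structural constraints."""
    if pattern is not None and not re.search(pattern, output):
        return False
    if must_contain is not None and must_contain not in output:
        return False
    if max_len is not None and len(output) > max_len:
        return False
    return True

print(validate("Score: 8/10", pattern=r"\d+/10", must_contain="Score"))  # True
```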
## Python Installation
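Assuming the Python bindings are published on PyPI under the crate's name (the package name is not stated in this section), installation would be:

```shell
pip install kkachi
```

Alternatively, build from source with maturin, as noted under Repository Structure (the manifest path below is a guess based on the layout shown there):

```shell
pip install maturin
maturin develop -m crates/kkachi-python/Cargo.toml
```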
## Quick Start (Python)
```python
import kkachi

# Arguments and most identifiers were elided in the source; the method
# names below mirror the Rust example further down.
llm = kkachi.auto()

# Simple pipeline
result = kkachi.pipeline(...) \
    .refine_with(llm) \
    .extract(...) \
    .go()

# Concurrent pipelines
results = kkachi.concurrent(...).go()
```
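The step combinators listed under Features (`then`, `retry`, `fallback`, …) follow a common pattern: a step is a function from text to text, and combinators build new steps out of existing ones. A self-contained plain-Python sketch of that idea — not the kkachi API:

```python
# A "step" is any callable str -> str; combinators compose steps.

def then(f, g):
    """Run f, then feed its output to g."""
    return lambda x: g(f(x))

def retry(f, attempts):
    """Re-run f until it succeeds or attempts run out."""
    def step(x):
        last_err = None
        for _ in range(attempts):
            try:
                return f(x)
            except Exception as err:
                last_err = err
        raise last_err
    return step

def fallback(f, g):
    """Try f; if it raises, run g instead."""
    def step(x):
        try:
            return f(x)
        except Exception:
            return g(x)
    return step

print(then(str.strip, str.upper)("  hello "))  # HELLO
```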
## Quick Start (Rust)
```rust
use kkachi::*;

// Arguments and some type paths were elided in the source;
// the method names are as given.
let llm = auto()?;

// Composable pipeline
let result = pipeline(/* ... */)
    .refine_with(/* ... */)
    .extract(/* ... */)
    .go();

// Concurrent execution
let results = new(/* ... */)
    .task(/* ... */)
    .task(/* ... */)
    .max_concurrency(/* ... */)
    .go();
```
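The `max_concurrency` cap above is the classic bounded-concurrency pattern: run many tasks at once, but never more than N in flight. A minimal asyncio sketch of that pattern in plain Python — not the kkachi API, and `fake_llm_call` is a stand-in for a real request:

```python
import asyncio

async def run_all(factories, max_concurrency=4):
    """Run coroutine factories concurrently, at most max_concurrency at once."""
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(make):
        async with sem:
            return await make()

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(guarded(m) for m in factories))

async def fake_llm_call(prompt):
    await asyncio.sleep(0)  # stand-in for a real LLM request
    return prompt.upper()

tasks = [lambda p=p: fake_llm_call(p) for p in ("draft", "review")]
print(asyncio.run(run_all(tasks, max_concurrency=2)))  # ['DRAFT', 'REVIEW']
```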
## Repository Structure
- `crates/kkachi` — Core Rust library
- `crates/kkachi-python` — Python bindings (PyO3 + maturin)
- `examples/` — Rust and Python usage examples
- `benches/` — Benchmarks
## License
Dual-licensed: AGPL-3.0-or-later for open-source use, with a commercial license available. See LICENSE for details.