Kernex is a composable Rust framework for building AI agent systems. It provides sandboxed execution, multi-provider AI backends, persistent memory with reward-based learning, skill loading, and topology-driven multi-agent pipelines — all as independent, embeddable crates.
## Prerequisites
- Rust 1.74+ — Install from rustup.rs
- Cargo — Comes with Rust
For running examples:
- Ollama (optional) — For local AI without API keys
- Node.js 18+ (optional) — For MCP-based skills
## Features
- **Sandbox-first execution** — OS-level protection via Seatbelt (macOS) and Landlock (Linux), combined with highly configurable `SandboxProfile` allow/deny lists
- **6 AI providers** — Claude Code CLI, Anthropic, OpenAI, Ollama, OpenRouter, Gemini
- **OpenAI-compatible base URL** — works with LiteLLM, Cerebras, DeepSeek, Hugging Face, and any compatible endpoint
- **Dynamic instantiation** — instantiate any provider from a config map at runtime via `ProviderFactory`
- **Agentic run loop** — `Runtime::run()` with configurable turn limits; providers handle tool dispatch internally
- **Hook system** — intercept every tool call with `HookRunner`: allow, block, audit, or rate-limit before execution
- **Typed tool schemas** — auto-generated JSON Schema for tool parameters via `schemars`
- **MCP client** — stdio-based Model Context Protocol for external tool integration
- **Persistent memory** — SQLite-backed conversations, facts, reward-based learning, scheduled tasks
- **Skills.sh compatible** — load skills from `SKILL.md` files with TOML/YAML frontmatter; 12 builtin agent personas included
- **Multi-agent pipelines** — TOML-defined topologies with corrective loops and file-mediated handoffs
- **Trait-based composition** — implement `Provider` or `Store` to plug in your own backends
- **Secure by default** — API keys protected in memory via `secrecy::SecretString`; prompt caching support for Anthropic
## Architecture
Kernex is a Cargo workspace with 7 composable crates:
```mermaid
graph TD
    classDef facade fill:#2B6CB0,stroke:#2C5282,stroke-width:2px,color:#fff
    classDef core fill:#4A5568,stroke:#2D3748,stroke-width:2px,color:#fff
    classDef impl fill:#319795,stroke:#285E61,stroke-width:2px,color:#fff
    R[kernex-runtime]:::facade
    C[kernex-core]:::core
    S[kernex-sandbox]:::impl
    P[kernex-providers]:::impl
    M[kernex-memory]:::impl
    K[kernex-skills]:::impl
    PL[kernex-pipelines]:::impl
    R --> C
    R --> S
    R --> P
    R --> M
    R --> K
    R --> PL
    P --> C
    M --> C
    K --> C
    PL --> C
    S -.->|OS Protection| P
```
| Crate | Description |
|---|---|
| `kernex-core` | Shared types, traits, config, sanitization |
| `kernex-sandbox` | OS-level sandbox (Seatbelt + Landlock) |
| `kernex-providers` | 6 AI providers, tool executor, MCP client |
| `kernex-memory` | SQLite memory, FTS5 search, reward learning |
| `kernex-skills` | Skill/project loader, trigger matching |
| `kernex-pipelines` | TOML topology, multi-agent orchestration |
| `kernex-runtime` | Facade crate with `RuntimeBuilder` |
## Quick Start
Add Kernex to your project:
```toml
[dependencies]
kernex-runtime = "0.3"
kernex-core = "0.3"
kernex-providers = "0.3"
tokio = { version = "1", features = ["full"] }
```
Send a message and get a response with persistent memory:
```rust
// Module paths are illustrative; check each crate's docs for the exact re-exports.
use kernex_core::{Provider, ProviderConfig, Request};
use kernex_providers::ProviderFactory;
use kernex_runtime::RuntimeBuilder;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let runtime = RuntimeBuilder::new().build().await?;
    // Instantiate any provider from a config map via ProviderFactory,
    // then send a single context-enriched turn:
    // let response = runtime.complete(&provider, &request).await?;
    Ok(())
}
```
`runtime.complete()` handles the full pipeline: build context from memory → enrich with skills → send to provider → save exchange.
Use individual crates for fine-grained control:
```rust
use kernex_providers::OpenAiProvider;
use kernex_memory::Store;
use kernex_skills::load_skills;
use kernex_pipelines::load_topology;
```
## Runtime API

`RuntimeBuilder` assembles all subsystems. All options are optional — defaults work out of the box:

```rust
use std::sync::Arc;
use kernex_runtime::RuntimeBuilder;

// Argument values are placeholders.
let runtime = RuntimeBuilder::new()
    .data_dir("/srv/kernex")            // persistent data root, default: ~/.kernex
    .system_prompt("You are helpful.")  // base system prompt prepended every turn
    .channel("support")                 // channel ID for memory scoping
    .project("my-project")              // project scope for facts and lessons
    .hook_runner(Arc::new(MyHooks))     // lifecycle hook runner (see Hooks section)
    .build()
    .await?;
```
Two completion methods:
| Method | When to use |
|---|---|
| `runtime.complete(&provider, &request)` | Single context-enriched turn. Memory is built, provider runs its internal loop. |
| `runtime.run(&provider, &request, &config)` | Explicit turn-limit control. Sets `max_turns` on the context and fires `on_stop` after completion. |
```rust
use kernex_runtime::RunConfig;

let config = RunConfig { max_turns: 8, ..Default::default() };
let response = runtime.run(&provider, &request, &config).await?;
// match on the result's stop reason as needed
```
## Hooks

Implement `HookRunner` to intercept every tool call across all providers:

```rust
use kernex_core::{HookDecision, HookRunner}; // type names are illustrative
use async_trait::async_trait;
use serde_json::Value;

struct AuditHooks;

#[async_trait]
impl HookRunner for AuditHooks {
    async fn pre_tool(&self, tool: &str, args: &Value) -> HookDecision {
        eprintln!("tool call: {tool} {args}"); // audit every dispatch
        HookDecision::Allowed                  // or a Blocked variant with a reason
    }
}

let runtime = RuntimeBuilder::new()
    .hook_runner(Arc::new(AuditHooks))
    .build()
    .await?;
```
`pre_tool` runs before dispatch; returning `Blocked` cancels the tool and returns the reason as a tool error. `post_tool` fires after completion. `on_stop` fires at the end of `Runtime::run()`.
## Providers
Kernex ships with 6 built-in AI providers:
| Provider | Module | API Key Required |
|---|---|---|
| Claude Code CLI | `claude_code` | No (uses local CLI) |
| Anthropic | `anthropic` | Yes |
| OpenAI | `openai` | Yes |
| Ollama | `ollama` | No (local) |
| OpenRouter | `openrouter` | Yes |
| Gemini | `gemini` | Yes |
### Prompt caching (Anthropic)

Place `KERNEX_CACHE_BOUNDARY` in your system prompt to split it into a cached stable prefix and a dynamic per-turn suffix. Anthropic caches the stable prefix across turns, reducing token costs on long sessions.
```rust
use kernex_core::CACHE_BOUNDARY;

// stable_instructions and dynamic_context are your own strings.
let system = format!("{stable_instructions}\n{CACHE_BOUNDARY}\n{dynamic_context}");

let runtime = RuntimeBuilder::new()
    .system_prompt(system)
    .build()
    .await?;
```
Text before the marker gets `cache_control: ephemeral`. Text after is sent as a plain block each turn. The `anthropic-beta: prompt-caching-2024-07-31` header is added automatically when the boundary is present.
### Using any OpenAI-compatible endpoint

The OpenAI provider accepts a custom `base_url`, making it work with any compatible service:
```rust
use kernex_providers::OpenAiProvider;

// Each call passes a config whose base_url points at the service (configs elided).

// LiteLLM proxy
let provider = OpenAiProvider::from_config(/* config with LiteLLM base_url */)?;

// DeepSeek
let provider = OpenAiProvider::from_config(/* config with DeepSeek base_url */)?;

// Cerebras
let provider = OpenAiProvider::from_config(/* config with Cerebras base_url */)?;
```
### Implementing a custom provider

```rust
use kernex_core::{Context, Provider, Response};

// Implement the Provider trait for your backend, then pass it anywhere
// the runtime accepts a provider.
```
## Project Structure

```text
~/.kernex/                  # Default data directory
├── config.toml             # Runtime configuration
├── memory.db               # SQLite persistent memory
├── skills/                 # Skill definitions
│   └── my-skill/
│       └── SKILL.md        # TOML/YAML frontmatter + instructions
├── projects/               # Project definitions
│   └── my-project/
│       └── AGENTS.md       # Project instructions + skills (or ROLE.md)
└── topologies/             # Pipeline definitions
    └── my-pipeline/
        ├── TOPOLOGY.toml   # Phase definitions
        └── agents/         # Agent .md files
```
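For illustration, a `SKILL.md` might look like the sketch below. The frontmatter delimiters and field names (`name`, `description`, `triggers`) are assumptions, not the loader's documented schema; see `examples/skills/` for the real templates:

```markdown
+++
# Hypothetical TOML frontmatter; field names are assumptions.
name = "webhook"
description = "Send HTTP webhooks"
triggers = ["webhook", "notify"]
+++

# Webhook skill

When asked to send a webhook, POST the JSON payload to the configured
URL and report the HTTP status back to the user.
```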
## Examples

| Example | Description | Prerequisites | Run |
|---|---|---|---|
| `simple_chat` | Interactive chat with local LLM | Ollama running | `cargo run --example simple_chat` |
| `memory_agent` | Persistent facts and lessons | None | `cargo run --example memory_agent` |
| `skill_loader` | Load skills and match triggers | None | `cargo run --example skill_loader` |
| `pipeline_loader` | Multi-agent topology demo | None | `cargo run --example pipeline_loader` |

All examples are in `crates/kernex-runtime/examples/`.
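The `pipeline_loader` demo reads a topology definition. As a sketch of what one might contain (every key name below is an assumption, not the real `TOPOLOGY.toml` schema):

```toml
# Hypothetical sketch; key names are assumptions, not the documented schema.
name = "build-and-review"

[[phases]]
name = "implement"
agent = "agents/backend-architect.md"

[[phases]]
name = "review"
agent = "agents/senior-developer.md"
on_fail = "implement"  # corrective loop back to the implement phase
```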
## Skills

Kernex supports Skills.sh-compatible skills. 21 ready-to-use skills are included: 9 tool-integration skills and 12 builtin agent personas.
### Tool skills
| Skill | Backend | Description |
|---|---|---|
| filesystem | MCP | Secure file operations |
| git | MCP | Repository operations |
| playwright | MCP | Browser automation |
| github | MCP | GitHub API integration |
| postgres | MCP | PostgreSQL read-only access |
| sqlite | MCP | SQLite read/write access |
| brave-search | MCP | Web search via Brave API |
| | CLI | Extract text from PDFs |
| webhook | CLI | Send HTTP webhooks |
See examples/skills/ for documentation and templates.
### Builtin agent skills

12 agent persona skills ship in `examples/skills/builtin/`; copy them into `~/.kernex/skills/` to install them all at once.
| Skill | Purpose |
|---|---|
| `frontend-developer` | UI/UX, component architecture |
| `backend-architect` | APIs, databases, system design |
| `security-engineer` | Threat modeling, secure code review |
| `devops-automator` | CI/CD, infrastructure, containers |
| `reality-checker` | Assumptions audit, edge case analysis |
| `api-tester` | API contract and integration testing |
| `performance-benchmarker` | Profiling and optimization |
| `senior-developer` | Cross-domain code review |
| `ai-engineer` | ML/AI integration patterns |
| `accessibility-auditor` | WCAG, a11y review |
| `agents-orchestrator` | Multi-agent workflow design |
| `project-manager` | Planning, scope, delivery |
Skills activate automatically when a message matches their `triggers` frontmatter. No configuration is needed beyond placing the file.
### Creating custom skills

```sh
# Copy the template
# Edit SKILL.md with your triggers and MCP config
# See examples/skills/README.md for full guide
```
## Common Errors

### "unknown provider type: xyz"

The provider name must match exactly. Valid values: `openai`, `anthropic`, `ollama`, `gemini`, `openrouter`, `claude-code`.
### "config error: failed to create data dir"

Ensure `~/.kernex/` is writable:

```sh
mkdir -p ~/.kernex && chmod u+rwx ~/.kernex
```
### "provider error: timeout"

The provider took longer than the configured timeout (default 120s). Check your internet connection or increase the timeout in `ProviderConfig`.
### "Ollama not available"

The Ollama server isn't running. Start it in a separate terminal:

```sh
ollama serve
```
### "model not found" (Ollama)

Pull the model first:

```sh
ollama pull <model-name>
```
## Development

```sh
# Build all crates
cargo build --workspace

# Run all tests
cargo test --workspace

# Lint
cargo clippy --workspace -- -D warnings

# Format
cargo fmt --all
```
## Versioning
This project follows Semantic Versioning. All crates in the workspace share the same version number.
- MAJOR — breaking API changes
- MINOR — new features, backward compatible
- PATCH — bug fixes, backward compatible
See CHANGELOG.md for release history.
## Contributing
Contributions are welcome. Please:
- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Ensure all checks pass: `cargo build && cargo clippy -- -D warnings && cargo test && cargo fmt --check`
- Commit with conventional commits (`feat:`, `fix:`, `refactor:`, `docs:`, `test:`)
- Open a Pull Request
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.