# Kernex

Kernex is a composable Rust framework for building AI agent systems. It provides sandboxed execution, multi-provider AI backends, persistent memory with reward-based learning, skill loading, and topology-driven multi-agent pipelines — all as independent, embeddable crates.
## Features

- Sandbox-first execution — OS-level protection via Seatbelt (macOS) and Landlock (Linux), combined with configurable `SandboxProfile` allow/deny lists
- 6 AI providers — Claude Code CLI, Anthropic, OpenAI, Ollama, OpenRouter, Gemini
- OpenAI-compatible base URL — works with LiteLLM, Cerebras, DeepSeek, Hugging Face, and any compatible endpoint
- Dynamic instantiation — build providers at runtime from configuration maps via `ProviderFactory`
- MCP client — stdio-based Model Context Protocol for external tool integration
- Persistent memory — SQLite-backed conversations, facts, reward-based learning, scheduled tasks
- Skills.sh compatible — load skills from `SKILL.md` files with TOML/YAML frontmatter
- Multi-agent pipelines — TOML-defined topologies with corrective loops and file-mediated handoffs
- Trait-based composition — implement `Provider` or `Store` to plug in your own backends
- Secure by default — all API keys are held in memory as `secrecy::SecretString`
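The dynamic-instantiation idea above can be sketched with a toy factory. The `Provider` trait, the `Ollama` struct, and the config keys below are illustrative stand-ins for this sketch, not Kernex's actual API:

```rust
use std::collections::HashMap;

// Illustrative stand-in for Kernex's `Provider` trait.
trait Provider {
    fn id(&self) -> String;
}

// One concrete backend the factory knows how to build.
struct Ollama {
    model: String,
}

impl Provider for Ollama {
    fn id(&self) -> String {
        format!("ollama/{}", self.model)
    }
}

// Toy factory in the spirit of `ProviderFactory`: pick and configure a
// provider implementation from a plain string map.
fn from_config(cfg: &HashMap<String, String>) -> Result<Box<dyn Provider>, String> {
    match cfg.get("provider").map(String::as_str) {
        Some("ollama") => Ok(Box::new(Ollama {
            model: cfg.get("model").cloned().unwrap_or_else(|| "llama3".into()),
        })),
        other => Err(format!("unsupported provider: {other:?}")),
    }
}
```

Returning `Box<dyn Provider>` is what lets the caller stay generic over whichever backend the configuration names.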
## Architecture
Kernex is a Cargo workspace with 7 composable crates:
```mermaid
graph TD
    classDef facade fill:#2B6CB0,stroke:#2C5282,stroke-width:2px,color:#fff
    classDef core fill:#4A5568,stroke:#2D3748,stroke-width:2px,color:#fff
    classDef impl fill:#319795,stroke:#285E61,stroke-width:2px,color:#fff

    R[kernex-runtime]:::facade
    C[kernex-core]:::core
    S[kernex-sandbox]:::impl
    P[kernex-providers]:::impl
    M[kernex-memory]:::impl
    K[kernex-skills]:::impl
    PL[kernex-pipelines]:::impl

    R --> C
    R --> S
    R --> P
    R --> M
    R --> K
    R --> PL
    P --> C
    M --> C
    K --> C
    PL --> C
    S -.-o|OS Protection| P
```
| Crate | crates.io | Description |
|---|---|---|
| `kernex-core` | | Shared types, traits, config, sanitization |
| `kernex-sandbox` | | OS-level sandbox (Seatbelt + Landlock) |
| `kernex-providers` | | 6 AI providers, tool executor, MCP client |
| `kernex-memory` | | SQLite memory, FTS5 search, reward learning |
| `kernex-skills` | | Skill/project loader, trigger matching |
| `kernex-pipelines` | | TOML topology, multi-agent orchestration |
| `kernex-runtime` | | Facade crate with `RuntimeBuilder` |
## Quick Start
Add Kernex to your project:
```toml
[dependencies]
kernex-runtime = "0.3"
kernex-core = "0.3"
kernex-providers = "0.3"
tokio = { version = "1", features = ["full"] }
```
Send a message and get a response with persistent memory:
```rust
use kernex_runtime::RuntimeBuilder;
use kernex_core::{Provider, Request};
use kernex_providers::{ProviderConfig, ProviderFactory};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Build a runtime, send a Request, and print the Response.
    // ...
}
```
`runtime.complete()` handles the full pipeline: build context from memory → enrich with skills → send to provider → save exchange.
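That four-step flow can be modeled in plain Rust. Everything below (`Memory`, `complete`, the echo provider) is a simplified stand-in for illustration, not Kernex's real types:

```rust
// Toy model of the complete() pipeline: memory -> skills -> provider -> memory.
struct Memory {
    exchanges: Vec<(String, String)>, // (user message, assistant reply)
}

impl Memory {
    // Step 1: build context from prior exchanges.
    fn build_context(&self) -> String {
        self.exchanges
            .iter()
            .map(|(u, a)| format!("user: {u}\nassistant: {a}\n"))
            .collect()
    }

    // Step 4: persist the new exchange.
    fn save(&mut self, user: &str, reply: &str) {
        self.exchanges.push((user.to_string(), reply.to_string()));
    }
}

fn complete(memory: &mut Memory, skills: &[&str], user_msg: &str) -> String {
    // 1. Context from memory.
    let mut prompt = memory.build_context();
    // 2. Enrich with matching skills.
    for s in skills {
        prompt.push_str(&format!("skill: {s}\n"));
    }
    prompt.push_str(&format!("user: {user_msg}\n"));
    // 3. Send `prompt` to the provider (stubbed here as an echo).
    let reply = format!("echo: {user_msg}");
    // 4. Save the exchange so it becomes context for the next call.
    memory.save(user_msg, &reply);
    reply
}
```

The point of the sketch is the ordering: memory is read before the provider call and written after it, so each turn sees every previous turn.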
Use individual crates for fine-grained control:
```rust
use kernex_providers::OpenAiProvider;
use kernex_memory::Store;
use kernex_skills::load_skills;
use kernex_pipelines::load_topology;
```
## Providers
Kernex ships with 6 built-in AI providers:
| Provider | Module | API Key Required |
|---|---|---|
| Claude Code CLI | `claude_code` | No (uses local CLI) |
| Anthropic | `anthropic` | Yes |
| OpenAI | `openai` | Yes |
| Ollama | `ollama` | No (local) |
| OpenRouter | `openrouter` | Yes |
| Gemini | `gemini` | Yes |
### Using any OpenAI-compatible endpoint

The OpenAI provider accepts a custom `base_url`, making it work with any compatible service:

```rust
use kernex_providers::OpenAiProvider;

// LiteLLM proxy
let provider = OpenAiProvider::from_config(/* config with your proxy's base_url */)?;

// DeepSeek
let provider = OpenAiProvider::from_config(/* base_url = "https://api.deepseek.com" */)?;

// Cerebras
let provider = OpenAiProvider::from_config(/* base_url = "https://api.cerebras.ai/v1" */)?;
```
### Implementing a custom provider

```rust
use kernex_core::{Context, Provider, Response};
```
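A custom provider amounts to implementing the `Provider` trait against `Context` and `Response`. The sketch below is self-contained and uses simplified, synchronous stand-in types — the real trait in `kernex-core` is async and carries more fields:

```rust
// Simplified stand-ins for kernex-core's Context / Response / Provider.
struct Context {
    prompt: String,
}

struct Response {
    text: String,
}

trait Provider {
    fn complete(&self, ctx: &Context) -> Result<Response, String>;
}

// A toy custom backend: it just upper-cases the prompt.
struct ShoutingProvider;

impl Provider for ShoutingProvider {
    fn complete(&self, ctx: &Context) -> Result<Response, String> {
        Ok(Response {
            text: ctx.prompt.to_uppercase(),
        })
    }
}
```

Because the runtime only depends on the trait, a backend like this can be dropped in anywhere a built-in provider is accepted.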
## Project Structure
```text
~/.kernex/                    # Default data directory
├── config.toml               # Runtime configuration
├── memory.db                 # SQLite persistent memory
├── skills/                   # Skill definitions
│   └── my-skill/
│       └── SKILL.md          # TOML/YAML frontmatter + instructions
├── projects/                 # Project definitions
│   └── my-project/
│       └── AGENTS.md         # Project instructions + skills (or ROLE.md)
└── topologies/               # Pipeline definitions
    └── my-pipeline/
        ├── TOPOLOGY.toml     # Phase definitions
        └── agents/           # Agent .md files
```
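For reference, a minimal `SKILL.md` might look like the fragment below. The frontmatter keys shown (`name`, `description`) are illustrative assumptions; the exact schema comes from kernex-skills and the Skills.sh format:

```md
---
name: my-skill
description: Summarize log files and suggest fixes
---

Instructions the agent receives when this skill's triggers match.
```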
## Examples

Runnable examples in `crates/kernex-runtime/examples/`:

- Interactive chat with Ollama (local, no API key)
- Persistent memory: facts, lessons, outcomes
- Load skills and match triggers
- Load and inspect a multi-agent pipeline topology

Reference skills for common MCP servers are in `examples/skills/`.
## Development

```sh
# Build all crates
cargo build

# Run all tests
cargo test

# Lint
cargo clippy -- -D warnings

# Format
cargo fmt
```
## Versioning
This project follows Semantic Versioning. All crates in the workspace share the same version number.
- MAJOR — breaking API changes
- MINOR — new features, backward compatible
- PATCH — bug fixes, backward compatible
See CHANGELOG.md for release history.
## Contributing

Contributions are welcome. Please:

- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Ensure all checks pass: `cargo build && cargo clippy -- -D warnings && cargo test && cargo fmt --check`
- Commit with conventional commits (`feat:`, `fix:`, `refactor:`, `docs:`, `test:`)
- Open a Pull Request
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.