# Kernex

Kernex is a composable Rust framework for building AI agent systems. It provides sandboxed execution, multi-provider AI backends, persistent memory with reward-based learning, skill loading, and topology-driven multi-agent pipelines — all as independent, embeddable crates.
## Features
- Sandbox-first execution — OS-level protection via Seatbelt (macOS) and Landlock (Linux)
- 6 AI providers — Claude Code CLI, Anthropic, OpenAI, Ollama, OpenRouter, Gemini
- OpenAI-compatible base URL — works with LiteLLM, Cerebras, DeepSeek, Hugging Face, and any compatible endpoint
- MCP client — stdio-based Model Context Protocol for external tool integration
- Persistent memory — SQLite-backed conversations, facts, reward-based learning, scheduled tasks
- Skills.sh compatible — load skills from `SKILL.md` files with TOML/YAML frontmatter
- Multi-agent pipelines — TOML-defined topologies with corrective loops and file-mediated handoffs
- Trait-based composition — implement `Provider` or `Store` to plug in your own backends
## Architecture

Kernex is a Cargo workspace with 7 composable crates:

```
kernex-runtime        Facade — composes all crates into a RuntimeBuilder
├── kernex-core       Shared types, traits (Provider, Store), config, error handling
├── kernex-sandbox    OS-level protection (Seatbelt/Landlock)
├── kernex-providers  AI backends + tool executor + MCP client
├── kernex-memory     SQLite storage, conversations, learning, tasks
├── kernex-skills     Skill/project loader, trigger matching, MCP activation
└── kernex-pipelines  Topology-driven multi-agent pipelines
```
| Crate | Description |
|---|---|
| `kernex-core` | Shared types, traits, config, sanitization |
| `kernex-sandbox` | OS-level sandbox (Seatbelt + Landlock) |
| `kernex-providers` | 6 AI providers, tool executor, MCP client |
| `kernex-memory` | SQLite memory, FTS5 search, reward learning |
| `kernex-skills` | Skill/project loader, trigger matching |
| `kernex-pipelines` | TOML topology, multi-agent orchestration |
| `kernex-runtime` | Facade crate with `RuntimeBuilder` |
## Quick Start
Add Kernex to your project:

```toml
[dependencies]
kernex-runtime = "0.3"
kernex-core = "0.3"
kernex-providers = "0.3"
tokio = { version = "1", features = ["full"] }
```
Send a message and get a response with persistent memory (a minimal sketch; builder and method shapes beyond the imported types are illustrative):

```rust
use kernex_runtime::RuntimeBuilder;
use kernex_core::{Provider, Request};
use kernex_providers::OllamaProvider;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // NOTE: exact builder method names are illustrative, not verbatim API.
    let runtime = RuntimeBuilder::new()
        .provider(OllamaProvider::default())
        .build()
        .await?;

    let response = runtime.complete(Request::new("Hello, Kernex!")).await?;
    println!("{response:?}");
    Ok(())
}
```
`runtime.complete()` handles the full pipeline: build context from memory → enrich with skills → send to provider → save the exchange.
Use individual crates for fine-grained control:

```rust
use kernex_providers::OpenAiProvider;
use kernex_memory::Store;
use kernex_skills::load_skills;
use kernex_pipelines::load_topology;
```
## Providers
Kernex ships with 6 built-in AI providers:
| Provider | Module | API Key Required |
|---|---|---|
| Claude Code CLI | `claude_code` | No (uses local CLI) |
| Anthropic | `anthropic` | Yes |
| OpenAI | `openai` | Yes |
| Ollama | `ollama` | No (local) |
| OpenRouter | `openrouter` | Yes |
| Gemini | `gemini` | Yes |
### Using any OpenAI-compatible endpoint
The OpenAI provider accepts a custom `base_url`, making it work with any compatible service:

```rust
use kernex_providers::OpenAiProvider;

// The from_config arguments were lost from this snippet; an assumed
// (base_url, api_key, model) shape is shown — adjust to the crate's config type.

// LiteLLM proxy
let provider = OpenAiProvider::from_config("http://localhost:4000", "sk-...", "gpt-4o")?;

// DeepSeek
let provider = OpenAiProvider::from_config("https://api.deepseek.com/v1", "sk-...", "deepseek-chat")?;

// Cerebras
let provider = OpenAiProvider::from_config("https://api.cerebras.ai/v1", "csk-...", "llama3.1-8b")?;
```
### Implementing a custom provider
```rust
use kernex_core::{Provider, Context, Response};

// Sketch only: the trait's exact method signatures were not preserved in this
// snippet, so names and return types here are illustrative.
struct MyProvider;

#[async_trait::async_trait]
impl Provider for MyProvider {
    async fn complete(&self, context: Context) -> kernex_core::Result<Response> {
        // Call your backend and map its output into a Response.
        todo!()
    }
}
```
## Project Structure
```
~/.kernex/                   # Default data directory
├── config.toml              # Runtime configuration
├── memory.db                # SQLite persistent memory
├── skills/                  # Skill definitions
│   └── my-skill/
│       └── SKILL.md         # TOML/YAML frontmatter + instructions
├── projects/                # Project definitions
│   └── my-project/
│       └── AGENTS.md        # Project instructions + skills (or ROLE.md)
└── topologies/              # Pipeline definitions
    └── my-pipeline/
        ├── TOPOLOGY.toml    # Phase definitions
        └── agents/          # Agent .md files
```
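For illustration, a minimal `SKILL.md` might pair frontmatter with plain-Markdown instructions. The field names below are assumptions based on the feature list, not a documented schema:

```markdown
---
name: my-skill
description: Summarize recent git history
---

# My Skill

When the user asks about recent changes, run `git log --oneline -20`
and summarize the output.
```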
## Examples
Runnable examples in `crates/kernex-runtime/examples/`:

- Interactive chat with Ollama (local, no API key)
- Persistent memory: facts, lessons, outcomes
- Load skills and match triggers
- Load and inspect a multi-agent pipeline topology

Run them with `cargo run -p kernex-runtime --example <name>`.

Reference skills for common MCP servers live in `examples/skills/`.
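As an illustration of the topology format, a minimal `TOPOLOGY.toml` could look like the sketch below. The phase and field names are assumptions inferred from the feature description (corrective loops, file-mediated handoffs), not the crate's documented schema:

```toml
# Hypothetical two-phase pipeline with a corrective loop back to "draft".
name = "my-pipeline"

[[phase]]
name = "draft"
agent = "agents/writer.md"
output = "draft.md"            # file-mediated handoff to the next phase

[[phase]]
name = "review"
agent = "agents/reviewer.md"
input = "draft.md"
on_fail = "draft"              # corrective loop: re-run drafting on rejection
```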
## Development
```sh
# Build all crates
cargo build

# Run all tests
cargo test

# Lint
cargo clippy -- -D warnings

# Format
cargo fmt --check
```
## Versioning
This project follows Semantic Versioning. All crates in the workspace share the same version number.
- MAJOR — breaking API changes
- MINOR — new features, backward compatible
- PATCH — bug fixes, backward compatible
See CHANGELOG.md for release history.
## Contributing
Contributions are welcome. Please:

1. Fork the repository
2. Create a feature branch (`git checkout -b feat/my-feature`)
3. Ensure all checks pass: `cargo build && cargo clippy -- -D warnings && cargo test && cargo fmt --check`
4. Commit with conventional commits (`feat:`, `fix:`, `refactor:`, `docs:`, `test:`)
5. Open a Pull Request
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.