kernex-runtime 0.3.0

The Rust runtime for AI agents — composable engine with sandbox, providers, learning, and pipelines

Kernex is a composable Rust framework for building AI agent systems. It provides sandboxed execution, multi-provider AI backends, persistent memory with reward-based learning, skill loading, and topology-driven multi-agent pipelines — all as independent, embeddable crates.

Features

  • Sandbox-first execution — OS-level protection via Seatbelt (macOS) and Landlock (Linux)
  • 6 AI providers — Claude Code CLI, Anthropic, OpenAI, Ollama, OpenRouter, Gemini
  • OpenAI-compatible base URL — works with LiteLLM, Cerebras, DeepSeek, Hugging Face, and any compatible endpoint
  • MCP client — stdio-based Model Context Protocol for external tool integration
  • Persistent memory — SQLite-backed conversations, facts, reward-based learning, scheduled tasks
  • Skills.sh compatible — load skills from SKILL.md files with TOML/YAML frontmatter
  • Multi-agent pipelines — TOML-defined topologies with corrective loops and file-mediated handoffs
  • Trait-based composition — implement Provider or Store to plug in your own backends

Architecture

Kernex is a Cargo workspace with 7 composable crates:

kernex-runtime          Facade — composes all crates into a RuntimeBuilder
  ├── kernex-core       Shared types, traits (Provider, Store), config, error handling
  ├── kernex-sandbox    OS-level protection (Seatbelt/Landlock)
  ├── kernex-providers  AI backends + tool executor + MCP client
  ├── kernex-memory     SQLite storage, conversations, learning, tasks
  ├── kernex-skills     Skill/project loader, trigger matching, MCP activation
  └── kernex-pipelines  Topology-driven multi-agent pipelines
Crate             Description
kernex-core       Shared types, traits, config, sanitization
kernex-sandbox    OS-level sandbox (Seatbelt + Landlock)
kernex-providers  6 AI providers, tool executor, MCP client
kernex-memory     SQLite memory, FTS5 search, reward learning
kernex-skills     Skill/project loader, trigger matching
kernex-pipelines  TOML topology, multi-agent orchestration
kernex-runtime    Facade crate with RuntimeBuilder

Quick Start

Add Kernex to your project:

[dependencies]
kernex-runtime = "0.3"
kernex-core = "0.3"
kernex-providers = "0.3"
tokio = { version = "1", features = ["full"] }

Send a message and get a response with persistent memory:

use kernex_runtime::RuntimeBuilder;
use kernex_core::traits::Provider;
use kernex_core::message::Request;
use kernex_providers::ollama::OllamaProvider;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let runtime = RuntimeBuilder::new()
        .data_dir("~/.my-agent")
        .system_prompt("You are a helpful assistant.")
        .channel("cli")
        .build()
        .await?;

    let provider = OllamaProvider::from_config(
        "http://localhost:11434".into(),
        "llama3.2".into(),
        None,
    )?;

    let request = Request::text("user-1", "What is Rust?");
    let response = runtime.complete(&provider, &request).await?;
    println!("{}", response.text);

    Ok(())
}

runtime.complete() handles the full pipeline: build context from memory → enrich with skills → send to provider → save exchange.
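Those four stages can be sketched with self-contained stand-ins; the types and helper names below are illustrative only, not the actual kernex internals:

```rust
// Hypothetical stand-in for kernex_core::context::Context, for illustration.
struct Context {
    prompt: String,
}

// Stage 1: build context from memory (prior conversation turns).
fn build_context(history: &[&str], user_msg: &str) -> Context {
    let mut prompt = history.join("\n");
    if !prompt.is_empty() {
        prompt.push('\n');
    }
    prompt.push_str(user_msg);
    Context { prompt }
}

// Stage 2: enrich with instructions from skills whose triggers matched.
fn enrich_with_skills(mut ctx: Context, skill_instructions: &[&str]) -> Context {
    if !skill_instructions.is_empty() {
        ctx.prompt = format!("{}\n{}", skill_instructions.join("\n"), ctx.prompt);
    }
    ctx
}

// Stage 3: send to the provider (stubbed here; a real provider is async).
fn send_to_provider(ctx: &Context) -> String {
    format!("response to: {}", ctx.prompt.lines().last().unwrap_or(""))
}

// Stage 4: save the exchange back to memory for the next turn.
fn save_exchange(history: &mut Vec<String>, user_msg: &str, reply: &str) {
    history.push(user_msg.to_string());
    history.push(reply.to_string());
}
```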

Use individual crates for fine-grained control:

use kernex_providers::openai::OpenAiProvider;
use kernex_memory::Store;
use kernex_skills::load_skills;
use kernex_pipelines::load_topology;

Providers

Kernex ships with 6 built-in AI providers:

Provider          Module        API Key Required
Claude Code CLI   claude_code   No (uses local CLI)
Anthropic         anthropic     Yes
OpenAI            openai        Yes
Ollama            ollama        No (local)
OpenRouter        openrouter    Yes
Gemini            gemini        Yes

Using any OpenAI-compatible endpoint

The OpenAI provider accepts a custom base_url, making it work with any compatible service:

use kernex_providers::openai::OpenAiProvider;

// LiteLLM proxy
let provider = OpenAiProvider::from_config(
    "http://localhost:4000/v1".into(),
    "sk-...".into(),
    "gpt-4".into(),
    None,
)?;

// DeepSeek
let provider = OpenAiProvider::from_config(
    "https://api.deepseek.com/v1".into(),
    "sk-...".into(),
    "deepseek-chat".into(),
    None,
)?;

// Cerebras
let provider = OpenAiProvider::from_config(
    "https://api.cerebras.ai/v1".into(),
    "csk-...".into(),
    "llama3.1-70b".into(),
    None,
)?;

Implementing a custom provider

use kernex_core::traits::Provider;
use kernex_core::context::Context;
use kernex_core::message::Response;

// Any type can act as a backend by implementing Provider.
struct MyProvider;

#[async_trait::async_trait]
impl Provider for MyProvider {
    fn name(&self) -> &str { "my-provider" }
    fn requires_api_key(&self) -> bool { true }
    async fn is_available(&self) -> bool { true }

    async fn complete(&self, context: &Context) -> kernex_core::error::Result<Response> {
        // Your implementation here
        todo!()
    }
}

Project Structure

~/.kernex/                  # Default data directory
├── config.toml             # Runtime configuration
├── memory.db               # SQLite persistent memory
├── skills/                 # Skill definitions
│   └── my-skill/
│       └── SKILL.md        # TOML/YAML frontmatter + instructions
├── projects/               # Project definitions
│   └── my-project/
│       └── AGENTS.md       # Project instructions + skills (or ROLE.md)
└── topologies/             # Pipeline definitions
    └── my-pipeline/
        ├── TOPOLOGY.toml   # Phase definitions
        └── agents/         # Agent .md files
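A minimal SKILL.md could look like the sketch below. The frontmatter field names (name, description, triggers) are illustrative assumptions; check the kernex-skills documentation for the supported schema:

```markdown
---
name: git-helper
description: Helps with common git workflows
triggers: [git, commit, rebase]
---

# Git Helper

When the user asks about git, prefer small, reversible commands
and explain each flag you use.
```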

Examples

Runnable examples in crates/kernex-runtime/examples/:

# Interactive chat with Ollama (local, no API key)
cargo run --example simple_chat

# Persistent memory: facts, lessons, outcomes
cargo run --example memory_agent

# Load skills and match triggers
cargo run --example skill_loader

# Load and inspect a multi-agent pipeline topology
cargo run --example pipeline_loader

Reference skills for common MCP servers are provided in examples/skills/.

Development

# Build all crates
cargo build --workspace

# Run all tests
cargo test --workspace

# Lint
cargo clippy --workspace -- -D warnings

# Format
cargo fmt --check

Versioning

This project follows Semantic Versioning. All crates in the workspace share the same version number.

  • MAJOR — breaking API changes
  • MINOR — new features, backward compatible
  • PATCH — bug fixes, backward compatible
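Note that the workspace is currently pre-1.0, where Cargo's default caret requirement treats the minor version as the compatibility boundary:

```toml
# For a 0.x crate, "0.3" means >= 0.3.0, < 0.4.0;
# a 0.4 release may contain breaking changes.
kernex-runtime = "0.3"
```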

See CHANGELOG.md for release history.

Contributing

Contributions are welcome. Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feat/my-feature)
  3. Ensure all checks pass: cargo build && cargo clippy -- -D warnings && cargo test && cargo fmt --check
  4. Commit with conventional commits (feat:, fix:, refactor:, docs:, test:)
  5. Open a Pull Request

License

Licensed under either of the two licenses included in the repository, at your option.