# Aether Core
Aether Core is a Rust library for building AI agents (LLM + prompt + tools, running in a loop).
## What makes Aether unique?
Aether has the following design principles:
- A truly minimal harness: By default, Aether agents have no system prompt and no tools. Thus every token in the context window is yours to control.
- Tools from MCP: Aether takes the stance that “MCP is all you need”. Agents get tools exclusively from MCP servers. This makes it easy to extend agents using any language. And if you’re using Rust, this library provides an in-memory transport.
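To see why “tools from MCP only” keeps the harness small, note that an agent ultimately needs just two operations from a tool server: list the available tools and call one. The sketch below models that shape with plain Rust types over an in-process registry; `InMemoryServer`, `list_tools`, and `call_tool` are invented for this sketch and are not the Aether or MCP wire API.

```rust
use std::collections::HashMap;

// A tool is just a function from arguments to a result.
type Tool = fn(&str) -> Result<String, String>;

// An in-process stand-in for an MCP server: it can list tools and call one.
struct InMemoryServer {
    tools: HashMap<&'static str, Tool>,
}

impl InMemoryServer {
    fn list_tools(&self) -> Vec<&'static str> {
        let mut names: Vec<_> = self.tools.keys().copied().collect();
        names.sort();
        names
    }

    fn call_tool(&self, name: &str, args: &str) -> Result<String, String> {
        match self.tools.get(name) {
            Some(f) => f(args),
            None => Err(format!("unknown tool: {name}")),
        }
    }
}

fn main() {
    let mut tools: HashMap<&'static str, Tool> = HashMap::new();
    tools.insert("echo", |args: &str| Ok(args.to_string()));
    tools.insert("upper", |args: &str| Ok(args.to_uppercase()));
    let server = InMemoryServer { tools };

    println!("{:?}", server.list_tools());
    println!("{}", server.call_tool("upper", "hello").unwrap());
}
```

Because the agent only ever sees this two-operation surface, swapping a stdio MCP server for an in-memory one changes the transport, not the agent.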
## Why Aether?
AI agents are simple: just an LLM + prompt + tools, running in a loop. Yet many frameworks over-abstract this into oblivion.
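That loop can be made concrete in a few lines. Below is a dependency-free sketch with a stubbed model and a stubbed tool; `call_llm`, `run_tool`, and `agent_loop` are illustrative names for this sketch, not part of the Aether API.

```rust
// What the model can come back with: plain text, or a tool request.
enum LlmReply {
    Text(String),
    ToolCall { name: String, args: String },
}

// Stub LLM: requests a tool on the first turn, then answers once it
// sees a tool result in the history.
fn call_llm(history: &[String]) -> LlmReply {
    if history.iter().any(|m| m.starts_with("tool:")) {
        LlmReply::Text("The file contains 42 lines.".to_string())
    } else {
        LlmReply::ToolCall { name: "read_file".into(), args: "README.md".into() }
    }
}

// Stub tool: pretends to read a file and reports a result.
fn run_tool(name: &str, args: &str) -> String {
    format!("tool:{name}({args}) -> 42 lines")
}

// The whole "agent": call the model, execute any requested tool,
// append the result, and repeat until the model emits plain text.
fn agent_loop(user_msg: &str) -> String {
    let mut history = vec![format!("user: {user_msg}")];
    loop {
        match call_llm(&history) {
            LlmReply::ToolCall { name, args } => history.push(run_tool(&name, &args)),
            LlmReply::Text(answer) => break answer,
        }
    }
}

fn main() {
    println!("{}", agent_loop("How long is README.md?"));
}
```

Everything a framework adds — streaming, concurrency, tool discovery — is layered on top of this loop, which is why Aether keeps the loop itself minimal.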
Aether aims to give you a great developer experience via a simple API that exposes a powerful set of composable primitives:
- Agents: Aether agents run in dedicated tokio tasks and communicate via async message passing (i.e. they’re actors). Hardware permitting, you can run hundreds of agents in a single process.
- LLMs: Aether supports models from Anthropic, OpenAI, OpenRouter, Llama.cpp, and Ollama out of the box. You can implement your own provider via the `StreamableModelProvider` trait and combine multiple models from different providers into an “alloyed” model via `AlloyedModelProvider`.
- Prompts: Prompts are just strings, but Aether provides helpers to do things like recursively load `AGENTS.md` files into your agent’s system prompt and compose prompts from multiple sources.
- Tools: “MCP is all you need”. Agents get tools exclusively via MCP servers. You can easily configure your agent’s MCP servers with an `mcp.json` file and run custom “in-memory” (Rust) MCP servers in dedicated tokio tasks.
- Tests: Aether provides a built-in set of test helpers that make it trivial to write robust unit and integration tests for your agents.
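As an illustration of the actor pattern the Agents bullet describes, here is a dependency-free sketch using std threads and channels: each agent owns its receiver, and callers interact with it only through message passing. The real library uses tokio tasks and async channels instead of threads, and `Msg`/`spawn_agent` are invented for this sketch.

```rust
use std::sync::mpsc;
use std::thread;

// Messages a caller can send to an agent.
enum Msg {
    User(String),
    Shutdown,
}

// Spawn an "agent" actor: it runs in its own thread, owns its inbox,
// and replies on its own output channel — no shared mutable state.
fn spawn_agent(
    id: usize,
) -> (mpsc::Sender<Msg>, mpsc::Receiver<String>, thread::JoinHandle<()>) {
    let (tx_in, rx_in) = mpsc::channel::<Msg>();
    let (tx_out, rx_out) = mpsc::channel::<String>();
    let handle = thread::spawn(move || {
        for msg in rx_in {
            match msg {
                Msg::User(text) => {
                    // A real agent would call an LLM here.
                    let _ = tx_out.send(format!("agent {id} got: {text}"));
                }
                Msg::Shutdown => break,
            }
        }
    });
    (tx_in, rx_out, handle)
}

fn main() {
    // Many independent agents in one process, each an isolated actor.
    let agents: Vec<_> = (0..100).map(spawn_agent).collect();
    for (tx, _, _) in &agents {
        tx.send(Msg::User("ping".into())).unwrap();
    }
    for (tx, rx, _) in &agents {
        println!("{}", rx.recv().unwrap());
        tx.send(Msg::Shutdown).unwrap();
    }
    for (_, _, handle) in agents {
        handle.join().unwrap();
    }
}
```

Because each agent is reached only through its sender, scaling to hundreds of agents is a matter of spawning more tasks, not of coordinating shared state.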
## Installation
Add Aether to your `Cargo.toml`:

```toml
[dependencies]
aether-agent-core = "0.1"
```

## Examples
### Minimal Agent (No Tools)
```rust
use aether_core::core::{AgentMessage, Prompt, UserMessage, agent};
use llm::providers::openrouter::OpenRouterProvider;
use std::io::{self, Write};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Choose your LLM. Alternatively use AnthropicProvider, LlamaCppProvider, etc.
    // For this example, OPENROUTER_API_KEY needs to be set in your environment.
    let llm = OpenRouterProvider::default("z-ai/glm-4.6")?;

    // 2. Create an agent
    let (tx, mut rx, _handle) = agent(llm) // <-- Give it an LLM
        .system_prompt(Prompt::text("You are a helpful assistant.")) // <-- Give it a system prompt
        .spawn() // <-- Spawn it into a tokio task
        .await?;

    // 3. Send the agent a message
    tx.send(UserMessage::text("Explain async Rust in one paragraph"))
        .await?;

    // 4. Stream the agent's response back
    loop {
        use AgentMessage::*;
        match rx.recv().await {
            Some(Text { chunk, is_complete, .. }) => {
                if !is_complete {
                    print!("{chunk}");
                    io::stdout().flush().unwrap();
                } else {
                    println!("\n");
                }
            }
            Some(ToolCall { .. })
            | Some(ToolResult { .. })
            | Some(ToolError { .. })
            | Some(ToolProgress { .. }) => {
                // Tools are not used in this minimal example
            }
            Some(Done) => break,
            Some(Error { message }) => {
                eprintln!("Error: {message}");
                break;
            }
            Some(Cancelled { .. }) => {
                eprintln!("Agent cancelled");
                break;
            }
            Some(_) => {}
            None => break,
        }
    }
    Ok(())
}
```

### Agent with Tools and an AGENTS.md System Prompt
Create an `mcp.json` file in the current working directory:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@executeautomation/playwright-mcp-server"]
    }
  }
}
```

Then create an `AGENTS.md` file with a system prompt:
```markdown
# BotBot

You are Mr. BotBot, a kickass coding agent equipped with SOTA filesystem and web browsing tools...
```

And bring Mr. BotBot to life:
```rust
use aether_core::core::{AgentMessage, UserMessage, Prompt, agent};
use aether_core::mcp::{mcp, McpSpawnResult};
use llm::providers::openrouter::OpenRouterProvider;
use std::io::{self, Write};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = OpenRouterProvider::default("z-ai/glm-4.6")?;

    // 1. Connect to MCP servers
    let McpSpawnResult {
        tool_definitions: tools,
        instructions: _,
        command_tx: mcp_tx,
        elicitation_rx: _,
        handle: _mcp_handle,
        ..
    } = mcp()
        .from_json_file("mcp.json") // <-- Load MCP servers from JSON
        .await?
        .spawn() // <-- Spawn the MCP client into a tokio task (multiple agents can use it)
        .await?;

    // 2. Create an agent
    let (tx, mut rx, _handle) = agent(llm)
        .system_prompt(Prompt::from_globs(vec!["AGENTS.md".into()], ".".into())) // <-- Load system prompt from AGENTS.md
        .tools(mcp_tx, tools) // <-- Give the agent MCP tools
        .spawn()
        .await?;

    // 3. Send the agent a message and stream the results back
    tx.send(UserMessage::text("Read the README.md file and summarize it"))
        .await?;

    loop {
        use AgentMessage::*;
        match rx.recv().await {
            Some(Text { chunk, is_complete, .. }) => {
                if !is_complete {
                    print!("{chunk}");
                    io::stdout().flush().unwrap();
                } else {
                    println!();
                }
            }
            Some(ToolCall { request, .. }) => {
                println!("\nCalling tool: {}", request.name);
            }
            Some(ToolResult { result, .. }) => {
                println!("Tool '{}' completed", result.name);
            }
            Some(ToolError { error, .. }) => {
                eprintln!("Tool '{}' failed: {}", error.name, error.error);
            }
            Some(ToolProgress { .. }) => {
                // Tool progress updates (can be used to show progress bars, etc.)
            }
            Some(Done) => {
                println!("\nAgent finished");
                break;
            }
            Some(Error { message }) => {
                eprintln!("Error: {message}");
                break;
            }
            Some(Cancelled { .. }) => {
                eprintln!("Agent cancelled");
                break;
            }
            Some(_) => {}
            None => break,
        }
    }
    Ok(())
}
```

## License
MIT
## Re-exports

```rust
pub use agent_spec::AgentSpec;
pub use agent_spec::AgentSpecExposure;
```