# Awaken

Production AI agent runtime for Rust — type-safe state, multi-protocol serving, plugin extensibility.
## 30-second mental model
- Tools — typed functions your agent can call; JSON schema is generated at compile time
- Agents — each agent has a system prompt, a model, and a set of allowed tools; the LLM drives orchestration through natural language — no predefined graphs
- State — typed and scoped (thread/run), with merge strategies for safe concurrent writes and immutable snapshots
- Plugins — lifecycle hooks for permissions, observability, context management, skills, MCP, and more
Your agent picks tools, calls them, reads and updates state, and repeats — all orchestrated by the runtime through 8 typed phases. Every state change is committed atomically after the gather phase.
## Try it in 5 minutes
Prerequisites — add these dependencies to your `Cargo.toml`:

```toml
[dependencies]
awaken = "0.1"
tokio = { version = "1", features = ["full"] }
async-trait = "0.1"
serde_json = "1"  # crate name inferred; the key was garbled in the original
```
Copy this into `src/main.rs` and run `cargo run`:
```rust
// NOTE: the import paths and the body of this example were garbled in the
// original; this reconstruction keeps the same type names, but the exact
// module paths are assumptions — see the repository's examples/ directory
// for a complete, working version.
use std::sync::Arc;

use async_trait::async_trait;
use awaken::{
    AgentEvent, AgentSpec, GenaiExecutor, Message, ModelEntry, VecEventSink,
};

#[tokio::main]
async fn main() {
    // The original main body (agent spec, model entry, executor wiring,
    // and the run loop) did not survive extraction.
}
```
## Serve over any protocol
Start the built-in server and connect from React, Next.js, or another agent — no code changes:
```rust
// Reconstruction — module paths and constructor arguments were garbled in
// the original. The shape is: build a store, a runtime, and a mailbox,
// assemble the server state, and call `serve`.
use std::sync::Arc;
use awaken::server::*; // path assumed; lost in the original

let store = Arc::new(/* thread/run store, e.g. the in-memory backend */);
let runtime = Arc::new(/* runtime built over the store */);
let mailbox = Arc::new(/* mailbox for background execution */);
let state = /* server state assembled from runtime + mailbox */;
serve(state).await?;
```
### Frontend protocols
| Protocol | Endpoint | Frontend |
|---|---|---|
| AI SDK v6 | `POST /v1/ai-sdk/chat` | React `useChat()` |
| AG-UI | `POST /v1/ag-ui/run` | CopilotKit `<CopilotKit>` |
| A2A | `POST /v1/a2a/tasks/send` | Other agents |
React + AI SDK v6:
```ts
import { useChat } from "ai/react";

const { messages, input, handleSubmit } = useChat({
  api: "http://localhost:3000/v1/ai-sdk/chat",
});
```
Next.js + CopilotKit:
```tsx
import { CopilotKit } from "@copilotkit/react-core";

<CopilotKit runtimeUrl="http://localhost:3000/v1/ag-ui/run">
  <YourApp />
</CopilotKit>
```
## Built-in plugins
All features are enabled by default via the `full` feature. Use `default-features = false` to opt out.
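For example, a build that only needs a subset of plugins might declare the facade like this — a sketch, assuming each plugin in the table below is gated behind a feature flag of the same name:

```toml
[dependencies]
awaken = { version = "0.1", default-features = false, features = ["permission", "mcp"] }
```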
| Plugin | What it does | Feature flag |
|---|---|---|
| Permission | Firewall-style tool access control with Deny/Allow/Ask rules, glob/regex matching, and HITL suspension via mailbox. | permission |
| Reminder | Injects system or conversation-level context messages when tool calls match configured patterns. | reminder |
| Observability | OpenTelemetry telemetry aligned with GenAI Semantic Conventions; supports OTLP, file, and in-memory export. | observability |
| MCP | Connects to external MCP servers and registers their tools as native Awaken tools. | mcp |
| Skills | Discovers skill packages and injects a catalog before inference so the LLM can activate skills on demand. | skills |
| Generative UI | Streams declarative UI components to frontends via the A2UI protocol. | generative-ui |
`awaken-ext-deferred-tools` provides lazy tool loading; add it as a direct dependency if needed — it is not included in the `full` feature.
## Why Awaken
- One backend serves every frontend protocol — React (AI SDK v6), Next.js (AG-UI), other agents (A2A), and tool servers (MCP) from the same binary.
- The LLM orchestrates — define each agent's identity and tool access; no hand-coded DAGs or state machines.
- Type-safe state with compile-time checks, scoped lifetimes, and merge strategies for safe concurrent writes.
- Production-ready: circuit breaker, exponential backoff, graceful shutdown, Prometheus metrics, and health probes included.
- Zero `unsafe` — the entire workspace forbids `unsafe` and relies on the Rust compiler for memory safety.
## When to use Awaken
- You want a Rust backend for AI agents with compile-time safety
- You need to serve multiple frontend or agent protocols from one backend
- Your tools need to safely share state during concurrent execution
- You need auditable thread history, checkpoints, and resumable control paths
- You are comfortable wiring your own tools, providers, and model registry instead of relying on batteries-included defaults
## When NOT to use Awaken
- You need built-in file/shell/web tools out of the box — consider OpenAI Agents SDK, Dify, or CrewAI
- You want a visual workflow builder — consider Dify, LangGraph Studio
- You want Python and rapid prototyping — consider LangGraph, AG2, PydanticAI
- You need a stable, slow-moving surface area more than an evolving runtime platform
- You need LLM-managed memory (agent decides what to remember) — consider Letta
## Architecture
Awaken is split into three runtime layers. awaken-contract defines the shared contracts: agent specs, model/provider specs, tools, events, transport traits, and the typed state model. awaken-runtime resolves an AgentSpec into a ResolvedAgent, builds an ExecutionEnv from plugins, executes the phase loop, and manages active runs plus external control such as cancellation and HITL decisions. awaken-server exposes that same runtime through HTTP routes, SSE replay, mailbox-backed background execution, and protocol adapters for AI SDK v6, AG-UI, A2A, and MCP.
Around those layers sit storage and extensions. awaken-stores provides memory, file, and PostgreSQL backends for threads and runs. awaken-ext-* crates extend the runtime at phase and tool boundaries.
```text
awaken                  Facade crate with feature flags
├─ awaken-contract      Contracts: specs, tools, events, transport, state model
├─ awaken-runtime       Resolver, phase engine, loop runner, runtime control
├─ awaken-server        Routes, mailbox, SSE transport, protocol adapters
├─ awaken-stores        Memory, file, and PostgreSQL persistence
├─ awaken-tool-pattern  Glob/regex matching used by extensions
└─ awaken-ext-*         Optional runtime extensions
```
## Examples and learning paths
| Example | What it shows |
|---|---|
| `live_test` | Basic LLM integration |
| `multi_turn` | Multi-turn with persistent threads |
| `tool_call_live` | Tool calling with calculator |
| `ai-sdk-starter` | React + AI SDK v6 full-stack |
| `copilotkit-starter` | Next.js + CopilotKit full-stack |
| Goal | Start with | Then |
|---|---|---|
| Build your first agent | First Agent tutorial | Build an Agent guide |
| See a full-stack app | AI SDK starter | CopilotKit starter |
| Explore the API | Reference docs | `cargo doc --workspace --no-deps --open` |
| Migrate from tirea | Migration guide | |
## Contributing
See CONTRIBUTING.md and DEVELOPMENT.md for setup details.
Good first issues are a great entry point. Quick contribution flow: fork → create a branch → write tests → open a PR.
Areas where contributions are especially welcome:
- Additional storage backends (Redis, SQLite)
- Built-in tool implementations (file read/write, web search)
- Token cost tracking and budget enforcement
- Model fallback/degradation chains
Join the conversation on GitHub Discussions.
Awaken is a ground-up rewrite of tirea; it is not backwards-compatible. The tirea 0.5 codebase is archived on the tirea-0.5 branch.
## License
Dual-licensed under MIT or Apache-2.0.