Oris
A programmable execution runtime for AI agents.
Oris is not a prompt framework.
It is a runtime layer that lets software systems execute reasoning, not just generate text.
Modern LLM applications are no longer single requests. They are long-running processes: planning, tool use, memory updates, retries, and human approval.
Today this logic lives in ad-hoc code, background jobs, and fragile queues.
Oris turns that into a first-class execution system.
What Oris actually provides
Oris is closer to Temporal / Ray than to a chat SDK.
It provides a persistent execution environment for agentic workloads:
- Stateful execution graphs
- Durable checkpoints
- Interruptible runs (human-in-the-loop)
- Tool calling as system actions
- Multi-step planning loops
- Deterministic replay
- Recovery after crash or deploy
Instead of writing:
"call LLM → parse → call tool → retry → store memory → schedule task"
you define an execution graph, and the runtime runs it.
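Conceptually (in plain Rust, not Oris's actual API), an execution graph is just named steps plus edges that a runtime walks, threading state through each node:

```rust
use std::collections::HashMap;

// Toy illustration of the execution-graph idea, NOT Oris's API:
// each node transforms shared state, and edges tell the runtime
// which node runs next.
type Node = fn(&mut String);

pub fn run_graph(
    start: &str,
    nodes: &HashMap<&str, Node>,
    edges: &HashMap<&str, &str>,
    state: &mut String,
) {
    let mut current = start;
    loop {
        (nodes[current])(state); // execute the step
        match edges.get(current) {
            Some(next) => current = *next, // follow the edge
            None => break,                 // no outgoing edge: done
        }
    }
}

pub fn demo() -> String {
    let mut nodes: HashMap<&str, Node> = HashMap::new();
    nodes.insert("plan", |s: &mut String| s.push_str("plan→"));
    nodes.insert("act", |s: &mut String| s.push_str("act→"));
    nodes.insert("store", |s: &mut String| s.push_str("store"));
    let mut edges = HashMap::new();
    edges.insert("plan", "act");
    edges.insert("act", "store");
    let mut state = String::new();
    run_graph("plan", &nodes, &edges, &mut state);
    state
}

fn main() {
    println!("{}", demo()); // the runtime walked plan → act → store
}
```

The point is that retries, persistence, and interrupts attach to the runtime loop, not to each call site.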
Why this exists
LLMs changed backend architecture.
We are moving from:
request → response
to:
goal → process → decisions → actions → memory → continuation
This is not an API problem anymore.
It is an execution problem.
Oris is an attempt to build the execution layer for software that thinks before it acts.
Mental model
If databases manage data and message queues manage communication,
then Oris manages reasoning processes.
What you can build with it
- autonomous coding agents
- long-running research agents
- human-approval workflows
- retrieval-augmented systems
- operational copilots
- AI operations pipelines
Status
Early but functional. The runtime, graph execution, and agent loop are implemented and usable today.
Quick start (30 seconds)
Add the crate and set your API key:
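Assuming the crate is published as `oris` and using the OpenAI variable from the provider table below:

```shell
# Add the dependency (crate name assumed from this package's listing)
cargo add oris

# Set a provider key, e.g. OpenAI
export OPENAI_API_KEY=sk-...
```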
Minimal LLM call (the module paths and method names below are illustrative; see the crate docs for the exact API):

```rust
// Illustrative sketch only: paths and names may differ from the real API.
use oris::llm::openai::OpenAI; // hypothetical path

#[tokio::main]
async fn main() {
    let llm = OpenAI::default();
    let reply = llm.invoke("Say hello").await.unwrap();
    println!("{reply}");
}
```
Hello-world state graph (no API key needed; names below are illustrative, see the examples directory for the real API):

```rust
// Illustrative sketch only: exact builder and type names may differ.
use oris::graph::StateGraph; // hypothetical path
use oris::schemas::Message;  // hypothetical path

#[tokio::main]
async fn main() {
    // Build a one-node graph that appends a greeting message, then run it.
    let graph = StateGraph::new()
        .add_node("hello", |mut state: Vec<Message>| async move {
            state.push(Message::new_ai_message("Hello, world!"));
            state
        })
        .set_entry_point("hello")
        .compile();
    let result = graph.invoke(vec![]).await.unwrap();
    println!("{result:?}");
}
```
Architecture
```mermaid
flowchart TB
    User[User Request]
    Runtime[Runtime: Graph or Agent]
    Tools[Tools]
    LLM[LLM Provider]
    Memory[Memory or State]
    User --> Runtime
    Runtime --> Tools
    Runtime --> LLM
    Runtime --> Memory
    Tools --> Runtime
    LLM --> Runtime
    Memory --> Runtime
```
Key concepts
- State graphs — Define workflows as directed graphs; run, stream, and optionally persist state (e.g. SQLite or in-memory).
- Agents and tools — Give agents tools (search, filesystem, custom); use multi-agent routers and subagents.
- Persistence and interrupts — Checkpoint state, resume runs, and pause for human approval or review.
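The persistence-and-interrupt idea can be sketched in plain Rust (again illustrative, not the crate's API): record the step index and state after each node, so a run can pause for approval and resume exactly where it stopped:

```rust
// Toy checkpointing sketch, NOT Oris's API: after each step we carry
// (step index, accumulated state); resuming replays from the saved index.
#[derive(Clone, Debug, PartialEq)]
pub struct Checkpoint {
    pub step: usize,
    pub state: Vec<String>,
}

pub fn run_from(checkpoint: Checkpoint, steps: &[&str], interrupt_at: Option<usize>) -> Checkpoint {
    let mut cp = checkpoint;
    while cp.step < steps.len() {
        if interrupt_at == Some(cp.step) {
            return cp; // pause for human approval; caller resumes later
        }
        cp.state.push(format!("did:{}", steps[cp.step]));
        cp.step += 1; // a durable runtime would persist the checkpoint here
    }
    cp
}

fn main() {
    let steps = ["plan", "approve", "act"];
    // First run pauses before the "approve" step (human-in-the-loop).
    let paused = run_from(Checkpoint { step: 0, state: vec![] }, &steps, Some(1));
    assert_eq!(paused.step, 1);
    // After approval, resume from the checkpoint with no interrupt.
    let done = run_from(paused, &steps, None);
    println!("{:?}", done.state); // all three steps executed exactly once
}
```

Because the checkpoint is plain data, the same mechanism covers crash recovery and deploys: reload the last checkpoint and call the runtime again.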
See the examples directory for runnable code.
Install and config
The feature names below are assumptions; check the crate's Cargo.toml for the real flags:

```shell
# With a vector store (e.g. PostgreSQL); feature name is illustrative:
cargo add oris --features postgres

# With Ollama (local); feature name is illustrative:
cargo add oris --features ollama
```
Common environment variables:
| Provider | Variable |
|---|---|
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Ollama | OLLAMA_HOST (optional, default http://localhost:11434) |
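In application code these are read from the environment; for example, the documented Ollama default could be resolved like this (the helper name is illustrative, not part of the crate):

```rust
use std::env;

// Resolve the Ollama host, falling back to the documented default.
// `ollama_host` is a hypothetical helper for illustration.
pub fn ollama_host() -> String {
    env::var("OLLAMA_HOST").unwrap_or_else(|_| "http://localhost:11434".to_string())
}

fn main() {
    println!("connecting to {}", ollama_host());
}
```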
Examples and docs
API documentation · Examples directory
License and attribution
MIT. This project includes code derived from langchain-rust; see LICENSE.