//! # Daimon
//!
//! A Rust-native AI agent framework for building LLM-powered agents with tool use,
//! memory, and streaming. Daimon implements the ReAct (Reason-Act-Observe) pattern:
//! the agent calls a model, optionally invokes tools, observes results, and repeats
//! until it produces a final response.
//!
//! ## Quick Start
//!
//! ```ignore
//! use daimon::prelude::*;
//!
//! #[tokio::main]
//! async fn main() -> daimon::Result<()> {
//!     let agent = Agent::builder()
//!         .model(daimon::model::openai::OpenAi::new("gpt-4o"))
//!         .system_prompt("You are a helpful assistant.")
//!         .build()?;
//!
//!     let response = agent.prompt("What is Rust?").await?;
//!     println!("{}", response.text());
//!     Ok(())
//! }
//! ```
//!
//! ## Feature Flags
//!
//! | Feature | Description |
//! |---------|-------------|
//! | `openai` | OpenAI API provider (default) |
//! | `anthropic` | Anthropic Claude API provider (default) |
//! | `macros` | `#[tool_fn]` proc macro (default) |
//! | `gemini` | Google Gemini / Vertex AI provider (via `daimon-provider-gemini`) |
//! | `azure` | Azure OpenAI Service provider (via `daimon-provider-azure`) |
//! | `bedrock` | AWS Bedrock provider (via `daimon-provider-bedrock`) |
//! | `ollama` | Ollama local model provider |
//! | `sqlite` | SQLite memory backend |
//! | `redis` | Redis memory backend + task broker |
//! | `nats` | NATS JetStream task broker |
//! | `amqp` | RabbitMQ (AMQP) task broker |
//! | `sqs` | AWS SQS task broker (via `daimon-provider-bedrock`) |
//! | `pubsub` | Google Cloud Pub/Sub task broker (via `daimon-provider-gemini`) |
//! | `servicebus` | Azure Service Bus task broker (via `daimon-provider-azure`) |
//! | `mcp` | Model Context Protocol client & server |
//! | `otel` | OpenTelemetry OTLP span export |
//! | `qdrant` | Qdrant vector store retriever |
//! | `pgvector` | pgvector-backed vector store (via `daimon-plugin-pgvector`) |
//! | `opensearch` | OpenSearch k-NN vector store (via `daimon-plugin-opensearch`) |
//! | `grpc` | gRPC transport for distributed execution |
//! | `full` | All providers + macros + MCP + SQLite + Redis + NATS + AMQP + gRPC + OTel + SQS + Pub/Sub + Service Bus + pgvector |
//!
//! The core framework compiles with no features; enable providers as needed.
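//!
//! For example, a consumer that wants only the Anthropic provider and the
//! SQLite memory backend might disable default features in its manifest
//! (feature names as in the table above; the version below is a placeholder):
//!
//! ```toml
//! [dependencies]
//! daimon = { version = "*", default-features = false, features = ["anthropic", "sqlite", "macros"] }
//! ```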
//!
//! ## Plugin Interface
//!
//! The [`Model`] trait (from [`daimon_core`]) is the plugin interface. To create
//! a new LLM provider, depend on `daimon-core` and implement `Model`. See the
//! `daimon-provider-*` crates for examples.
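//!
//! A provider might be sketched as follows. Note that the exact `Model` trait
//! signature is defined by `daimon-core`; the method name, request/response
//! types, and `async_trait` usage here are illustrative assumptions, not the
//! real API — consult the `daimon-provider-*` crates for the actual shape:
//!
//! ```ignore
//! use daimon_core::model::{Model, ModelRequest, ModelResponse};
//!
//! /// Hypothetical provider holding an HTTP client and credentials.
//! struct MyProvider {
//!     api_key: String,
//!     model_name: String,
//! }
//!
//! #[async_trait::async_trait]
//! impl Model for MyProvider {
//!     async fn complete(&self, request: ModelRequest) -> daimon_core::Result<ModelResponse> {
//!         // Translate `request` into the provider's wire format, call the
//!         // remote API, then map the reply (text, tool calls, token usage)
//!         // back into a `ModelResponse`.
//!         todo!()
//!     }
//! }
//! ```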
//!
//! ## Module Overview
//!
//! - [`agent`] — Agent builder, ReAct loop, multi-agent patterns, resumable runs
//! - [`model`] — LLM provider trait and implementations
//! - [`tool`] — Tool trait, registry, and execution
//! - [`memory`] — Conversation memory implementations
//! - [`stream`] — Streaming response types
//! - [`hooks`] — Lifecycle hooks for observability and control
//! - [`orchestration`] — Chain, graph, DAG, and workflow orchestration
//! - [`retriever`] — RAG retriever trait and tool adapter
//! - [`checkpoint`] — Checkpointing and state persistence
//! - [`a2a`] — Google Agent-to-Agent protocol support
//! - [`distributed`] — Distributed agent execution across processes
//! - [`mcp`] — Model Context Protocol client and server (stdio, HTTP, WebSocket)
//! - [`telemetry`] — OpenTelemetry OTLP export (feature = "otel")
// Re-export the `#[tool_fn]` proc macro when the `macros` feature is enabled.
// (The source path assumes the macro lives in a companion `daimon-macros`
// crate, following the `daimon-*` naming used elsewhere in this workspace.)
#[cfg(feature = "macros")]
pub use daimon_macros::tool_fn;