// src/lib.rs
//! # Ambi: A Flexible, Multi-Backend AI Agent Framework
//!
//! `ambi` is a fast, modular, and customizable AI agent framework
//! written entirely in Rust. It acts as the bridge between Large Language Models (LLMs) and
//! your Rust application, providing a robust execution loop, tool-calling capabilities, and
//! deterministic context management.
//!
//! ## Core Features
//!
//! - **Multi-Backend Support**: Seamlessly switch between cloud APIs (OpenAI format, DeepSeek, etc.)
//!   and optimized local inference via `llama.cpp`, selected with static Cargo features.
//! - **Deterministic Tool Calling**: Expose your Rust functions to the LLM. Features strict
//! timeout controls, distinction between idempotent and non-idempotent operations, and
//! graceful JSON truncation recovery.
//! - **Robust Context Eviction**: Never worry about `max_tokens` overflow again. Ambi uses a
//! deterministic FIFO algorithm to prune conversation history while preserving critical context.
//! - **Multimodal (Vision) Ready**: Built-in support for processing images, whether through
//! native integrated models (e.g., Qwen2-VL) or external vision projectors (e.g., LLaVA).
//!
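//! The FIFO context-eviction strategy above can be illustrated independently of the
//! framework. The sketch below is a minimal example of the general idea, not `ambi`'s
//! actual implementation; in particular, the 4-characters-per-token heuristic and the
//! keep-the-preamble-at-index-0 convention are illustrative assumptions.
//!
//! ```rust
//! /// Oldest-first pruning: drop leading messages until the history fits the
//! /// budget, always preserving the system preamble at index 0 (assumed convention).
//! fn evict_fifo(history: &mut Vec<String>, max_tokens: usize) {
//!     // Crude heuristic: roughly 4 characters per token (illustrative only).
//!     let tokens = |s: &str| s.len() / 4 + 1;
//!     while history.len() > 1
//!         && history.iter().map(|m| tokens(m)).sum::<usize>() > max_tokens
//!     {
//!         // Remove the oldest non-preamble message.
//!         history.remove(1);
//!     }
//! }
//! ```
//!
//! Because the policy is a pure function of the history, eviction is deterministic:
//! the same conversation always prunes to the same state.
//!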
//! ## Quick Start
//!
//! ```rust,no_run
//! use ambi::{Agent, AgentState, ChatRunner};
//! use ambi::llm::providers::openai_api::config::OpenAIEngineConfig;
//! use std::sync::Arc;
//! use tokio::sync::RwLock;
//!
//! #[tokio::main]
//! async fn main() -> ambi::error::Result<()> {
//! // 1. Initialize the configuration
//! let config = OpenAIEngineConfig {
//! api_key: "your-api-key".to_string(),
//! base_url: "https://api.openai.com/v1".to_string(),
//! model_name: "gpt-4o".to_string(),
//! temp: 0.7,
//! top_p: 0.95,
//! };
//!
//! // 2. Build the Agent
//! let agent = Agent::make(ambi::LLMEngineConfig::OpenAI(config))
//! .await?
//! .preamble("You are a helpful and concise assistant.")
//! .with_standard_formatting();
//!
//! // 3. Initialize Conversation State
//! let state = Arc::new(RwLock::new(AgentState::new()));
//! let runner = ChatRunner;
//!
//! // 4. Execute the pipeline
//! let response = runner.chat(&agent, &state, "Hello, world!").await?;
//! println!("Assistant: {}", response);
//!
//! Ok(())
//! }
//! ```
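//!
//! ## Tool Calling, Conceptually
//!
//! At its core, tool calling is a name-to-function dispatch table: the model emits a
//! tool name plus arguments, and the runtime looks the tool up and invokes it. The
//! sketch below is a framework-independent illustration; the `ToolFn` alias, `Tool`
//! struct, and `idempotent` flag are assumptions for this example, not `ambi`'s API.
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! // A tool is a plain function from an argument string to a result string.
//! type ToolFn = fn(&str) -> String;
//!
//! // Registry entry: the callable plus whether it is safe to retry (idempotent).
//! struct Tool {
//!     call: ToolFn,
//!     idempotent: bool,
//! }
//!
//! // Look up a tool by the name the model emitted and invoke it with its arguments.
//! fn dispatch(tools: &HashMap<&str, Tool>, name: &str, args: &str) -> Option<String> {
//!     tools.get(name).map(|t| (t.call)(args))
//! }
//! ```
//!
//! Distinguishing idempotent from non-idempotent tools matters for recovery: an
//! idempotent tool can be re-invoked after a timeout, while a non-idempotent one
//! must not be blindly retried.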
//!
//! # Runtime Requirements
//!
//! Ambi requires the **Tokio** async runtime with the `rt-multi-thread` feature.
//! The following is the minimum setup in `Cargo.toml`:
//!
//! ```toml
//! [dependencies]
//! tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
//! ```
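//!
//! Backend selection happens at compile time via Cargo features (see Core Features).
//! The fragment below shows the shape of such a configuration; the feature names
//! `openai` and `llama-cpp` are illustrative assumptions — consult the crate's
//! `Cargo.toml` for the real feature list.
//!
//! ```toml
//! [dependencies]
//! # Enable exactly one backend feature; names here are illustrative.
//! ambi = { version = "0.1", default-features = false, features = ["openai"] }
//! ```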
// Enforce documentation across the crate.
#![deny(missing_docs)]
#![warn(rustdoc::broken_intra_doc_links)]
/// Agent Framework Core
pub mod agent;
/// Configuration
pub mod config;
/// Error Handling
pub mod error;
/// LLM Engine
pub mod llm;
/// Types
pub mod types;
/// Cross-platform compilation adaptation
pub mod platform;

// Crate-root re-exports, matching the paths used in the Quick Start example.
// NOTE: the module paths below are inferred and may need adjusting to the
// crate's actual layout.
pub use agent::tool;
pub use agent::{Agent, AgentState};
pub use agent::ChatRunner;
pub use config::LLMEngineConfig;
pub use error::{Error, Result};