//! LLM Provider Clients and Abstractions
//!
//! This module provides a unified interface for interacting with various Large Language
//! Model (LLM) providers. It abstracts away provider-specific implementations behind
//! common traits, allowing the rest of the application to work with any supported LLM.
//!
//! # Architecture
//!
//! The module follows a factory pattern:
//! - [`LLMClient`] - The core trait that all providers implement
//! - [`LLMClientFactory`] - Factory trait for creating provider clients
//! - [`ProviderRegistry`] - Registry for managing multiple providers
//! - [`ConfigBasedLLMFactory`] - Creates clients based on `ares.toml` configuration
//! - [`ToolCoordinator`](crate::llm::coordinator::ToolCoordinator) - Generic multi-turn tool calling coordinator
//! - [`ClientPool`](crate::llm::pool::ClientPool) - Connection pooling for efficient client reuse (DIR-44)
//!
//! # Supported Providers
//!
//! Enable providers via Cargo features:
//! - `openai` - OpenAI API (GPT-4, GPT-3.5, etc.)
//! - `anthropic` - Anthropic API (Claude 3, Claude 3.5, etc.)
//! - `ollama` - Local Ollama server
//! - `llamacpp` - llama.cpp server
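//!
//! For example, to enable the OpenAI and Anthropic backends (a minimal sketch
//! of the dependency entry; the version shown is a placeholder):
//!
//! ```toml
//! [dependencies]
//! ares = { version = "*", features = ["openai", "anthropic"] }
//! ```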
//!
//! # Example
//!
//! ```ignore
//! use ares::llm::{ConfigBasedLLMFactory, LLMClientFactory, Provider};
//!
//! let factory = ConfigBasedLLMFactory::new(&config);
//! let client = factory.create_client(Provider::OpenAI)?;
//!
//! let response = client.generate("What is 2+2?", None).await?;
//! println!("{}", response.content);
//! ```
//!
//! # Connection Pooling (DIR-44)
//!
//! Use the [`ClientPool`](crate::llm::pool::ClientPool) for efficient connection reuse:
//!
//! ```ignore
//! use ares::llm::pool::{ClientPool, PoolConfig};
//!
//! let pool = ClientPool::new(PoolConfig::default());
//! pool.register_provider("openai", provider);
//!
//! // Get a pooled client - automatically returned when guard is dropped
//! let guard = pool.get("openai").await?;
//! let response = guard.generate("Hello!").await?;
//! ```
//!
//! # Tool Calling
//!
//! Use the [`ToolCoordinator`](crate::llm::coordinator::ToolCoordinator) for multi-turn tool calling with any provider:
//!
//! ```ignore
//! use ares::llm::coordinator::{ToolCoordinator, ToolCallingConfig};
//!
//! let coordinator = ToolCoordinator::new(client, registry, ToolCallingConfig::default());
//! let result = coordinator.execute(Some("System prompt"), "User query").await?;
//! ```
//!
//! # Streaming
//!
//! All providers support streaming responses via the `generate_stream` method,
//! which returns a `Pin<Box<dyn Stream<Item = Result<String>>>>`.
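//!
//! For example, consuming the stream chunk by chunk (this sketch assumes
//! `generate_stream` takes the same prompt arguments as `generate`):
//!
//! ```ignore
//! use futures::StreamExt;
//!
//! // Each item is a Result<String> per the signature above; print tokens
//! // as they arrive instead of waiting for the full response.
//! let mut stream = client.generate_stream("Tell me a story", None).await?;
//! while let Some(chunk) = stream.next().await {
//!     print!("{}", chunk?);
//! }
//! ```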
/// Model capabilities and requirement matching (DIR-43).
pub use ;
/// Core LLM client trait and streaming response types.
pub use ;
/// Generic tool coordinator for multi-turn tool calling.
pub use ;
/// Connection pooling for LLM clients (DIR-44).
pub use ;
/// Registry for managing multiple LLM provider instances.
pub use ;