//! # rivven-llm — LLM Provider Facade
//!
//! Unified async API for Large Language Model providers.
//!
//! This crate provides a provider-agnostic interface for:
//! - **Chat completions** — send messages, get structured responses
//! - **Text embeddings** — generate vector representations of text
//!
//! ## Supported Providers
//!
//! | Provider | Feature | Chat | Embeddings |
//! |:---------|:--------|:-----|:-----------|
//! | OpenAI | `openai` (default) | ✓ | ✓ |
//! | AWS Bedrock | `bedrock` | ✓ | ✓ |
//!
//! ## Quick Start
//!
//! ```rust,no_run
//! use rivven_llm::{LlmProvider, ChatRequest, ChatMessage, Role};
//! use rivven_llm::openai::OpenAiProvider;
//!
//! # async fn example() -> Result<(), rivven_llm::LlmError> {
//! let provider = OpenAiProvider::builder()
//!     .api_key("sk-...")
//!     .model("gpt-4o-mini")
//!     .build()?;
//!
//! let request = ChatRequest::builder()
//!     .message(ChatMessage::user("Summarize this text: ..."))
//!     .temperature(0.3)
//!     .max_tokens(256)
//!     .build();
//!
//! let response = provider.chat(&request).await?;
//! println!("{}", response.content());
//! # Ok(())
//! # }
//! ```
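//!
//! ## Embeddings
//!
//! A minimal sketch of generating embeddings with the same provider. The
//! `embed` method name and the slice-of-texts signature below are assumptions
//! for illustration, not a confirmed API; adjust to the crate's actual
//! embeddings interface:
//!
//! ```rust,no_run
//! use rivven_llm::openai::OpenAiProvider;
//!
//! # async fn example() -> Result<(), rivven_llm::LlmError> {
//! let provider = OpenAiProvider::builder()
//!     .api_key("sk-...")
//!     .model("text-embedding-3-small") // assumed embedding model name
//!     .build()?;
//!
//! // Hypothetical call: one vector back per input text.
//! let vectors = provider.embed(&["alpha", "beta"]).await?;
//! assert_eq!(vectors.len(), 2);
//! # Ok(())
//! # }
//! ```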
// Re-export core types at crate root.
// NOTE: the internal module paths (`error`, `provider`, `types`) are assumed
// from the crate layout; the re-exported names match the Quick Start above.
pub use crate::error::LlmError;
pub use crate::provider::LlmProvider;
pub use crate::types::{ChatMessage, ChatRequest, ChatResponse, Role};