§rivven-llm — LLM Provider Facade
Unified async API for Large Language Model providers.
This crate provides a provider-agnostic interface for:
- Chat completions — send messages, get structured responses
- Text embeddings — generate vector representations of text
§Supported Providers
| Provider | Feature | Chat | Embeddings |
|---|---|---|---|
| OpenAI | openai (default) | ✓ | ✓ |
| AWS Bedrock | bedrock | ✓ | ✓ |
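
The Feature column above maps to Cargo features. For example, to use only the Bedrock backend, a `Cargo.toml` entry might look like the following sketch (the version is a placeholder, not taken from this page):

```toml
[dependencies]
# Disable the default `openai` feature and enable `bedrock` instead.
rivven-llm = { version = "*", default-features = false, features = ["bedrock"] }
```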
§Quick Start
```rust
use rivven_llm::{LlmProvider, ChatRequest, ChatMessage, Role};
use rivven_llm::openai::OpenAiProvider;

// The `?` and `.await` below require an async context; this example
// assumes a Tokio runtime, but any async executor works.
#[tokio::main]
async fn main() -> rivven_llm::LlmResult<()> {
    let provider = OpenAiProvider::builder()
        .api_key("sk-...")
        .model("gpt-4o-mini")
        .build()?;

    let request = ChatRequest::builder()
        .message(ChatMessage::user("Summarize this text: ..."))
        .temperature(0.3)
        .max_tokens(256)
        .build();

    let response = provider.chat(&request).await?;
    println!("{}", response.content());
    Ok(())
}
```

§Re-exports
pub use error::LlmError;
pub use error::LlmResult;
pub use provider::LlmProvider;
pub use types::ChatChoice;
pub use types::ChatMessage;
pub use types::ChatRequest;
pub use types::ChatRequestBuilder;
pub use types::ChatResponse;
pub use types::Embedding;
pub use types::EmbeddingRequest;
pub use types::EmbeddingRequestBuilder;
pub use types::EmbeddingResponse;
pub use types::EmbeddingUsage;
pub use types::FinishReason;
pub use types::Role;
pub use types::Usage;
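
The re-exported embedding types suggest an embeddings flow parallel to the chat one. A minimal sketch, assuming the provider exposes an `embed` method and that `EmbeddingRequest`'s builder accepts an `input` text (these names, the model name, and the response field names are assumptions, not confirmed by this page):

```rust
use rivven_llm::EmbeddingRequest;
use rivven_llm::openai::OpenAiProvider;

// Sketch only: `embed`, `input`, and the response fields below are
// assumed names; consult the `types` module docs for the real API.
#[tokio::main]
async fn main() -> rivven_llm::LlmResult<()> {
    let provider = OpenAiProvider::builder()
        .api_key("sk-...")
        .model("text-embedding-3-small") // hypothetical model choice
        .build()?;

    let request = EmbeddingRequest::builder()
        .input("The quick brown fox")
        .build();

    let response = provider.embed(&request).await?;
    // Each `Embedding` carries a vector representation of its input.
    println!("dimensions: {}", response.embeddings[0].vector.len());
    Ok(())
}
```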