rivven-llm
LLM provider facade for Rivven — unified async API for Large Language Model providers.
Features
| Provider | Feature | Chat | Embeddings |
|---|---|---|---|
| OpenAI | `openai` (default) | ✓ | ✓ |
| AWS Bedrock | `bedrock` | ✓ | ✓ |
Feature gates are additive. The `openai` feature pulls in `reqwest` + `secrecy`;
the `bedrock` feature pulls in `aws-config` + `aws-sdk-bedrockruntime`.
Enabling only one provider avoids compiling the other's dependency tree.
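For example, a Bedrock-only build might declare the dependency like this (version number illustrative):

```toml
[dependencies]
# Disable default features (which include `openai`) and opt in to Bedrock only.
rivven-llm = { version = "0.1", default-features = false, features = ["bedrock"] }
```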
Quick Start
```rust
use rivven_llm::ChatRequest;
use rivven_llm::openai::OpenAiProvider;

// Values, item paths, and accessor names are illustrative;
// see the crate docs for the exact API.
let provider = OpenAiProvider::builder()
    .api_key("sk-...")
    .model("gpt-4o")
    .build()?;

let request = ChatRequest::builder()
    .system("You are a helpful assistant.")
    .user("Hello!")
    .temperature(0.7)
    .max_tokens(256)
    .build();

let response = provider.chat(request).await?;
println!("{}", response.text());
```
Embeddings
```rust
use rivven_llm::EmbeddingRequest;
use rivven_llm::openai::OpenAiProvider;

// Values, item paths, and accessor names are illustrative;
// see the crate docs for the exact API.
let provider = OpenAiProvider::builder()
    .api_key("sk-...")
    .embedding_model("text-embedding-3-small")
    .build()?;

let request = EmbeddingRequest::single("Hello, world!");
let response = provider.embed(request).await?;
let vector = response.first_embedding().unwrap();
println!("{} dimensions", vector.len());
```
AWS Bedrock
```rust
use rivven_llm::bedrock::BedrockProvider;

// Region and model ID are illustrative; any Bedrock chat model ID works here.
let provider = BedrockProvider::builder()
    .region("us-east-1")
    .chat_model("anthropic.claude-3-5-sonnet-20240620-v1:0")
    .build()
    .await?;
```
Credentials are resolved from the standard AWS credential chain (env vars, profiles, IMDS, ECS).
Uses the official `aws-sdk-bedrockruntime` — SigV4 signing, credential refresh, and retries are handled automatically by the SDK.
Error Handling
All operations return `LlmError`, classified so callers can react appropriately:

```rust
// Arm bodies and the exact variant shape are illustrative.
match provider.chat(request).await {
    Ok(response) => println!("{}", response.text()),
    Err(LlmError::RateLimited { retry_after_secs, .. }) => {
        // Back off, honoring retry_after_secs when present.
    }
    Err(e) => eprintln!("chat failed: {e}"),
}
```

Error variants: `Config`, `Auth`, `RateLimited` (with optional `retry_after_secs`), `Timeout`, `ModelNotFound`, `ContentFiltered`, `TokenLimitExceeded`, `Provider`, `Connection`, `Serialization`, `Transient`, `Internal`.
Security
- API keys stored with `secrecy` — never appear in Debug/Display output
- Error messages truncated to prevent log amplification from malicious API responses
- UTF-8 safe truncation — never panics on multi-byte characters
- Retry-After headers parsed from 429 responses (OpenAI)
- SigV4 signing for Bedrock — handled automatically by the official AWS SDK; no credentials in request URLs
- Transport error classification — SDK-level timeouts, network failures, and construction errors are mapped to the correct `LlmError` variants (not lost as opaque internal errors)
- Capability checks — providers declare chat/embedding support; callers get clear errors
- Minimal dependency surface — `reqwest`/`secrecy` gated on `openai`; AWS SDK gated on `bedrock`
Architecture
```text
┌─────────────────────────────────────────┐
│            rivven-llm facade            │
│                                         │
│  ┌──────────────────────────────────┐   │
│  │        LlmProvider trait         │   │
│  │  chat() → ChatResponse           │   │
│  │  embed() → EmbeddingResponse     │   │
│  └──────────────────────────────────┘   │
│         ▲              ▲                │
│         │              │                │
│  ┌──────┴──┐    ┌──────┴──────┐         │
│  │ OpenAI  │    │  Bedrock    │         │
│  │ reqwest │    │  aws-sdk-   │         │
│  │ REST    │    │  bedrockrt  │         │
│  └─────────┘    └─────────────┘         │
│  (openai feat)  (bedrock feat)          │
└─────────────────────────────────────────┘
```
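The shape above can be sketched as one trait with per-feature implementations. The types and signatures below are simplified assumptions, not the crate's real definitions; the mock provider and tiny single-poll executor exist only to make the sketch self-contained and runnable (real callers would use tokio or another runtime):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Simplified stand-ins for the crate's types (assumptions for illustration).
#[derive(Debug)]
struct ChatResponse {
    text: String,
}

#[derive(Debug)]
#[allow(dead_code)]
enum LlmError {
    Provider(String),
}

// The facade trait: one async chat interface, implemented per backend.
// Uses async-fn-in-trait (Rust 1.75+).
trait LlmProvider {
    async fn chat(&self, prompt: &str) -> Result<ChatResponse, LlmError>;
}

// A mock backend standing in for the OpenAI/Bedrock implementations.
struct MockProvider;

impl LlmProvider for MockProvider {
    async fn chat(&self, prompt: &str) -> Result<ChatResponse, LlmError> {
        Ok(ChatResponse { text: format!("echo: {prompt}") })
    }
}

// Minimal executor for a future that completes without awaiting (demo only).
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(out) => out,
        Poll::Pending => unreachable!("mock future completes immediately"),
    }
}

fn main() {
    let provider = MockProvider;
    let response = block_on(provider.chat("hi")).expect("mock never fails");
    println!("{}", response.text);
}
```

Backends stay swappable because callers only name the trait; which implementations exist is decided at compile time by the `openai`/`bedrock` feature gates.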
License
Apache-2.0. See LICENSE.