# oxify-connect-llm

The Ecosystem: LLM Provider Integrations for OxiFY
## Overview

`oxify-connect-llm` provides a unified, type-safe interface to multiple LLM providers. It is part of OxiFY's Connector Strategy, which mass-produces integrations through macros and trait abstractions.
**Status:** ✅ Phase 2 Complete, OpenAI & Anthropic production-ready

**Roadmap:** AWS Bedrock, Google Gemini, Cohere, Mistral, Ollama (Phase 3)

**Part of:** OxiFY Enterprise Architecture (Codename: Absolute Zero)
## Supported Providers (Production)
- ✅ OpenAI: GPT-3.5, GPT-4, GPT-4-turbo
- ✅ Anthropic: Claude 3 (Opus, Sonnet, Haiku)
## Planned Providers (Phase 3)
- 🚧 AWS Bedrock: Claude, Llama, Titan, Mistral
- 🚧 Google Gemini: Gemini Pro, Gemini Ultra
- 🚧 Cohere: Command, Command-R
- 🚧 Mistral: Mistral Large, Mixtral
- 🚧 Local Models: Ollama, llama.cpp, vLLM
## Architecture
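The core abstraction described in the Overview (one provider trait shared by all backends) might look like the following sketch. Every name here is an assumption rather than the crate's verified API, and `#[async_trait]` presumes the `async-trait` crate:

```rust
use async_trait::async_trait;

// Hypothetical request/response shapes; the crate's real types may differ.
pub struct CompletionRequest {
    pub prompt: String,
    pub max_tokens: Option<u32>,
    pub temperature: Option<f32>,
}

pub struct TokenUsage {
    pub prompt_tokens: u32,
    pub completion_tokens: u32,
    pub total_tokens: u32,
}

pub struct CompletionResponse {
    pub content: String,
    pub usage: Option<TokenUsage>,
}

// Minimal error type; see "Error Handling" below.
pub enum LlmError {
    RateLimited,
    Http(u16),
    Network(String),
}

// One trait implemented by every provider, so callers stay
// independent of the concrete backend (OpenAI, Anthropic, ...).
#[async_trait]
pub trait LlmProvider {
    async fn complete(
        &self,
        request: CompletionRequest,
    ) -> Result<CompletionResponse, LlmError>;
}
```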
## Usage Examples
### OpenAI

A minimal sketch, assuming the module path, constructor, and response fields shown here; the crate's real API may differ:

```rust
use oxify_connect_llm::openai::OpenAiProvider;

// Illustrative: key/model arguments and field names are assumptions.
let provider = OpenAiProvider::new(std::env::var("OPENAI_API_KEY")?, "gpt-4");
let response = provider.complete(request).await?; // `request` as sketched under Architecture
println!("{}", response.content);
```
### Anthropic Claude

The same sketch for Claude, with the same caveats about assumed names:

```rust
use oxify_connect_llm::anthropic::AnthropicProvider;

// Illustrative constructor; the model ID follows Anthropic's naming.
let provider = AnthropicProvider::new(std::env::var("ANTHROPIC_API_KEY")?, "claude-3-opus-20240229");
let response = provider.complete(request).await?;
println!("{}", response.content);
```
### Local Models (Planned)

```rust
// Sketch only: the Ollama constructor shape is an assumption.
let provider = OllamaProvider::new("http://localhost:11434", "llama3");
```
## Features
### Retry Logic

Automatic retry with exponential backoff (sketched after this list) for:
- Rate limiting (429)
- Temporary failures (500, 502, 503)
- Network errors
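A minimal sketch of that strategy, assuming a `tokio` runtime; the helper names, delay constants, and the status-code error type are all illustrative:

```rust
use std::time::Duration;

// Statuses the list above calls retryable; network errors would be
// classified the same way.
fn is_retryable(status: u16) -> bool {
    matches!(status, 429 | 500 | 502 | 503)
}

// Retry `call` with exponentially growing delays between attempts.
async fn with_backoff<F, Fut, T>(mut call: F, max_attempts: u32) -> Result<T, u16>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, u16>>,
{
    let mut delay = Duration::from_millis(250);
    let mut attempt = 0;
    loop {
        attempt += 1;
        match call().await {
            Ok(value) => return Ok(value),
            Err(status) if is_retryable(status) && attempt < max_attempts => {
                tokio::time::sleep(delay).await;
                delay *= 2; // exponential backoff: 250ms, 500ms, 1s, ...
            }
            Err(status) => return Err(status),
        }
    }
}
```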
### Streaming Support (Planned)

A sketch of the planned streaming API; `complete_stream` and the chunk's `delta` field are assumptions:

```rust
use futures::StreamExt;

let mut stream = provider.complete_stream(request).await?;
while let Some(chunk) = stream.next().await {
    print!("{}", chunk?.delta); // print tokens as they arrive
}
```
### Token Counting

```rust
// Field names follow common usage-reporting conventions; assumed, not verified.
let usage = response.usage.unwrap();
println!("Prompt tokens: {}", usage.prompt_tokens);
println!("Completion tokens: {}", usage.completion_tokens);
println!("Total tokens: {}", usage.total_tokens);
```
### Caching (Planned)

Cache responses to reduce cost and latency on repeated prompts. A sketch; the builder method and config type are assumptions:

```rust
// Hypothetical builder-style cache opt-in.
let provider = OpenAiProvider::new(api_key, "gpt-4")
    .with_cache(CacheConfig::default());
```
## Configuration
### OpenAI
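A plausible builder-style configuration, with every method name an assumption:

```rust
// Hypothetical configuration surface; real names may differ.
let provider = OpenAiProvider::new(std::env::var("OPENAI_API_KEY")?, "gpt-4")
    .with_base_url("https://api.openai.com/v1")
    .with_timeout(std::time::Duration::from_secs(30))
    .with_max_retries(3);
```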
### Request Options
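Typical per-request options for a completion call, using the assumed `CompletionRequest` from the Architecture sketch:

```rust
// All fields illustrative; see the Architecture sketch above.
let request = CompletionRequest {
    prompt: "Summarize this document.".into(),
    max_tokens: Some(512),
    temperature: Some(0.2),
};
```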
## Error Handling
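A sketch of handling the error classes named under Retry Logic, using the assumed `LlmError` from the Architecture sketch:

```rust
// Variant names are illustrative, mirroring the retryable cases above.
match provider.complete(request).await {
    Ok(response) => println!("{}", response.content),
    Err(LlmError::RateLimited) => eprintln!("rate limited (429), backing off"),
    Err(LlmError::Http(status)) => eprintln!("HTTP error {status}"),
    Err(LlmError::Network(msg)) => eprintln!("network error: {msg}"),
}
```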
## Testing

Mock provider for testing; the mock's method names are assumptions:

```rust
use oxify_connect_llm::testing::MockProvider;

// A canned response lets tests run without network access.
let provider = MockProvider::new()
    .with_response("Hello from mock!");
let response = provider.complete(request).await?;
assert_eq!(response.content, "Hello from mock!");
```
## Future Enhancements
- Function calling support
- Vision model support (GPT-4V, Claude 3)
- Embedding generation
- Fine-tuned model support
- Cost tracking per request
- Prompt template library
## See Also

- oxify-model: `LlmConfig` definition
- oxify-engine: Workflow execution