FlyLLM is a Rust library that provides a load-balanced, multi-provider client for Large Language Models.
It lets developers work with multiple LLM providers (OpenAI, Anthropic, Google, Mistral, and more) through a unified API with request routing, load balancing, and failure handling built in.
§Features
- Multi-provider support: Integrate with OpenAI, Anthropic, Google, and Mistral
- Load balancing: Distribute requests across multiple providers
- Automatic retries: Handle provider failures with configurable retry policies
- Task routing: Route specific tasks to the most suitable providers (see the sketch after the example below)
- Metrics tracking: Monitor response times, error rates, and token usage
§Example
```rust
use flyllm::{LlmManager, ProviderType, GenerationRequest, TaskDefinition};

async fn example() {
    // Create a manager
    let mut manager = LlmManager::new();

    // Add providers
    manager.add_provider(
        ProviderType::OpenAI,
        "api-key".to_string(),
        "gpt-4-turbo".to_string(),
        vec![],
        true,
    );

    // Generate a response
    let request = GenerationRequest {
        prompt: "Explain Rust in one paragraph".to_string(),
        task: None,
        params: None,
    };
    let responses = manager.generate_sequentially(vec![request]).await;
    println!("{}", responses[0].content);
}
```
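Task routing builds on the same pieces. The sketch below is a minimal illustration, not the crate's documented API: it assumes the `vec![]` argument to `add_provider` is a list of supported task names (`Vec<String>`) and that `GenerationRequest::task` is an `Option<String>` matched against those names; the `"summary"` task name is hypothetical. Check `TaskDefinition` and the `load_balancer` types for the exact signatures.

```rust
use flyllm::{LlmManager, ProviderType, GenerationRequest};

async fn routed_example() {
    let mut manager = LlmManager::new();

    // Assumption: the fourth argument lists the task names this provider supports.
    manager.add_provider(
        ProviderType::Anthropic,
        "api-key".to_string(),
        "claude-3-sonnet".to_string(),
        vec!["summary".to_string()], // hypothetical task name
        true,
    );

    // Assumption: `task` names a task so the manager can pick a provider that supports it.
    let request = GenerationRequest {
        prompt: "Summarize the Rust borrow checker in two sentences".to_string(),
        task: Some("summary".to_string()),
        params: None,
    };

    let responses = manager.generate_sequentially(vec![request]).await;
    println!("{}", responses[0].content);
}
```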
§Re-exports
pub use providers::ProviderType;
pub use providers::LlmRequest;
pub use providers::LlmResponse;
pub use providers::LlmProvider;
pub use providers::create_provider;
pub use providers::AnthropicProvider;
pub use providers::OpenAIProvider;
pub use providers::ModelInfo;
pub use providers::ModelDiscovery;
pub use errors::LlmError;
pub use errors::LlmResult;
pub use load_balancer::LlmManager;
pub use load_balancer::GenerationRequest;
pub use load_balancer::LlmManagerResponse;
pub use load_balancer::TaskDefinition;
§Modules
§Functions
- use_logging - Initialize the logging system
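A minimal sketch of enabling logging before building the manager, assuming `use_logging` takes no arguments:

```rust
use flyllm::{use_logging, LlmManager};

fn main() {
    // Assumption: `use_logging` takes no arguments and wires up the crate's logger.
    use_logging();

    let _manager = LlmManager::new();
    // ... add providers and issue requests as in the examples above.
}
```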