# ferrous-llm
A unified, cross-platform Rust library for interacting with multiple Large Language Model providers. Ferrous-LLM provides a modular, type-safe, and performant abstraction layer that lets developers switch between LLM providers while keeping a consistent API.
## 🚀 Features
- Multi-Provider Support: Unified interface for OpenAI, Anthropic, and Ollama providers
- Modular Architecture: Separate crates for core functionality and each provider
- Type Safety: Leverages Rust's type system for safe LLM interactions
- Streaming Support: Real-time streaming capabilities for chat completions
- Memory Management: Dedicated memory crate for conversation context handling
- Async/Await: Full async support with tokio runtime
- Comprehensive Examples: Working examples for all supported providers
- Extensible Design: Easy to add new providers and capabilities
## 📦 Installation

Add `ferrous-llm` to your `Cargo.toml`:

```toml
[dependencies]
ferrous-llm = "0.2.0"
```
### Feature Flags

By default, no providers are enabled. Enable the providers you need:

```toml
[dependencies]
ferrous-llm = { version = "0.2.0", features = ["openai", "anthropic", "ollama"] }
```
Available features:

- `openai` - OpenAI provider support
- `anthropic` - Anthropic Claude provider support
- `ollama` - Ollama local model provider support
- `full` - All providers (equivalent to enabling all individual features)
## 🏗️ Architecture
Ferrous-LLM is organized as a workspace with the following crates:
- `ferrous-llm-core` - Core traits, types, and error handling
- `ferrous-llm-openai` - OpenAI provider implementation
- `ferrous-llm-anthropic` - Anthropic provider implementation
- `ferrous-llm-ollama` - Ollama provider implementation
- `ferrous-llm-memory` - Memory and context management utilities
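Assuming a conventional Cargo workspace layout (directory names inferred from the crate names and the contributing guide below), the repository looks roughly like:

```text
ferrous-llm/
├── Cargo.toml
├── crates/
│   ├── ferrous-llm-core/
│   ├── ferrous-llm-openai/
│   ├── ferrous-llm-anthropic/
│   ├── ferrous-llm-ollama/
│   └── ferrous-llm-memory/
└── examples/
```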
## 🔧 Quick Start

### Basic Chat Example

A minimal sketch (type and module names are illustrative; see `examples/openai_chat.rs` for a complete, working version):

```rust
use ferrous_llm::openai::{OpenAIConfig, OpenAIProvider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads OPENAI_API_KEY (and optional settings) from the environment.
    let config = OpenAIConfig::from_env()?;
    let provider = OpenAIProvider::new(config)?;

    // Send a chat request through the ChatProvider trait; see
    // examples/openai_chat.rs for the full request/response handling.
    Ok(())
}
```
### Streaming Chat Example

A minimal sketch (again illustrative; see `examples/openai_chat_streaming.rs` for a working version):

```rust
use ferrous_llm::openai::{OpenAIConfig, OpenAIProvider};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::from_env()?;
    let provider = OpenAIProvider::new(config)?;

    // The StreamingProvider trait yields response chunks as a Stream;
    // consume it with StreamExt combinators such as `next().await`.
    Ok(())
}
```
## 🌐 Supported Providers
### OpenAI

```rust
use ferrous_llm::openai::{OpenAIConfig, OpenAIProvider};

// From environment variables
let config = OpenAIConfig::from_env()?;

// Or configure manually
let config = OpenAIConfig {
    // api_key, model, base_url, ... (see the crate docs for all fields)
};

let provider = OpenAIProvider::new(config)?;
```
Environment Variables:

- `OPENAI_API_KEY` - Your OpenAI API key (required)
- `OPENAI_MODEL` - Model to use (default: `gpt-3.5-turbo`)
- `OPENAI_BASE_URL` - API base URL (default: `https://api.openai.com/v1`)
### Anthropic

```rust
use ferrous_llm::anthropic::{AnthropicConfig, AnthropicProvider};

let config = AnthropicConfig::from_env()?;
let provider = AnthropicProvider::new(config)?;
```
Environment Variables:

- `ANTHROPIC_API_KEY` - Your Anthropic API key (required)
- `ANTHROPIC_MODEL` - Model to use (default: `claude-3-sonnet-20240229`)
- `ANTHROPIC_BASE_URL` - API base URL (default: `https://api.anthropic.com`)
### Ollama

```rust
use ferrous_llm::ollama::{OllamaConfig, OllamaProvider};

let config = OllamaConfig::from_env()?;
let provider = OllamaProvider::new(config)?;
```
Environment Variables:

- `OLLAMA_MODEL` - Model to use (default: `llama2`)
- `OLLAMA_BASE_URL` - Ollama server URL (default: `http://localhost:11434`)
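Environment-driven configuration with defaults, as listed above, typically boils down to a pattern like the following sketch. Only the variable names and defaults come from this README; the field names and the real `OllamaConfig` (which has more fields and proper error handling) are assumptions:

```rust
use std::env;

// Sketch of an env-driven config with the documented defaults.
pub struct OllamaConfig {
    pub model: String,
    pub base_url: String,
}

impl OllamaConfig {
    pub fn from_env() -> Self {
        Self {
            // Fall back to the documented defaults when a variable is unset.
            model: env::var("OLLAMA_MODEL").unwrap_or_else(|_| "llama2".to_string()),
            base_url: env::var("OLLAMA_BASE_URL")
                .unwrap_or_else(|_| "http://localhost:11434".to_string()),
        }
    }
}

fn main() {
    let config = OllamaConfig::from_env();
    println!("model={} base_url={}", config.model, config.base_url);
}
```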
## 🎯 Core Traits
Ferrous-LLM follows the Interface Segregation Principle with focused traits:
### ChatProvider

Core chat functionality that most LLM providers support.
### StreamingProvider

Extends `ChatProvider` with streaming capabilities.
### Additional Traits

- `CompletionProvider` - Text completion (non-chat)
- `ToolProvider` - Function/tool calling
- `EmbeddingProvider` - Text embeddings
- `ImageProvider` - Image generation
- `SpeechToTextProvider` - Speech transcription
- `TextToSpeechProvider` - Speech synthesis
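To illustrate how the trait layering composes, here is a deliberately simplified, synchronous sketch. The real `ferrous-llm-core` traits are async and use richer request/response types; everything below except the `ChatProvider`/`StreamingProvider` names is a placeholder:

```rust
pub struct ChatRequest {
    pub messages: Vec<String>,
}

pub struct ChatResponse {
    pub content: String,
}

/// Core chat functionality that most providers support.
pub trait ChatProvider {
    type Error: std::fmt::Debug;
    fn chat(&self, request: &ChatRequest) -> Result<ChatResponse, Self::Error>;
}

/// Streaming extends ChatProvider rather than replacing it
/// (interface segregation: providers opt in to capabilities).
pub trait StreamingProvider: ChatProvider {
    fn chat_stream(
        &self,
        request: &ChatRequest,
    ) -> Result<Box<dyn Iterator<Item = String>>, Self::Error>;
}

/// A toy provider that echoes the last message, showing how the traits compose.
pub struct EchoProvider;

impl ChatProvider for EchoProvider {
    type Error = String;
    fn chat(&self, request: &ChatRequest) -> Result<ChatResponse, String> {
        let last = request.messages.last().cloned().unwrap_or_default();
        Ok(ChatResponse { content: last })
    }
}

impl StreamingProvider for EchoProvider {
    fn chat_stream(
        &self,
        request: &ChatRequest,
    ) -> Result<Box<dyn Iterator<Item = String>>, String> {
        // "Stream" the response word by word.
        let words: Vec<String> = self
            .chat(request)?
            .content
            .split_whitespace()
            .map(|w| w.to_string())
            .collect();
        Ok(Box::new(words.into_iter()))
    }
}

fn main() {
    let provider = EchoProvider;
    let request = ChatRequest { messages: vec!["hello streaming world".to_string()] };
    println!("{}", provider.chat(&request).unwrap().content);
    for chunk in provider.chat_stream(&request).unwrap() {
        println!("chunk: {chunk}");
    }
}
```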
## 📚 Examples

The `examples/` directory contains comprehensive examples:
- `openai_chat.rs` - Basic OpenAI chat
- `openai_chat_streaming.rs` - OpenAI streaming chat
- `anthropic_chat.rs` - Basic Anthropic chat
- `anthropic_chat_streaming.rs` - Anthropic streaming chat
- `ollama_chat.rs` - Basic Ollama chat
- `ollama_chat_streaming.rs` - Ollama streaming chat
Run examples with:

```bash
# Set up environment variables
export OPENAI_API_KEY=your-key-here

# Run specific examples (enable the matching provider feature)
cargo run --example openai_chat --features openai
```
## 🧪 Testing

Run tests for all crates:

```bash
# Run all tests
cargo test --workspace

# Run tests for a specific provider
cargo test -p ferrous-llm-openai

# Run integration test targets
cargo test --tests
```
## 🤝 Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
### Development Setup

1. Clone the repository

2. Install Rust (if not already installed):

   ```bash
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   ```

3. Set up environment variables:

   ```bash
   # Edit .env with your API keys
   ```

4. Run tests:

   ```bash
   cargo test --workspace
   ```
### Adding a New Provider

1. Create a new crate in `crates/ferrous-llm-{provider}/`
2. Implement the required traits from `ferrous-llm-core`
3. Add integration tests
4. Update the main crate's feature flags
5. Add examples demonstrating usage
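The skeleton for steps 1-2 might look like the following sketch. The `MyProvider*` names are hypothetical, and the `new(config)` constructor simply mirrors the provider constructors shown earlier in this README:

```rust
// Skeleton for a hypothetical crates/ferrous-llm-myprovider crate.
pub struct MyProviderConfig {
    pub base_url: String,
    pub model: String,
}

pub struct MyProvider {
    config: MyProviderConfig,
}

impl MyProvider {
    /// Validate the config up front, matching the `Provider::new(config)?` pattern.
    pub fn new(config: MyProviderConfig) -> Result<Self, String> {
        if config.base_url.is_empty() {
            return Err("base_url must not be empty".to_string());
        }
        Ok(Self { config })
    }

    pub fn model(&self) -> &str {
        &self.config.model
    }

    // Next: implement ChatProvider (and optionally StreamingProvider, etc.)
    // from ferrous-llm-core for MyProvider.
}

fn main() {
    let provider = MyProvider::new(MyProviderConfig {
        base_url: "http://localhost:8080".to_string(),
        model: "test-model".to_string(),
    })
    .unwrap();
    println!("{}", provider.model());
}
```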
## 📄 License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## 🔗 Links
## 🙏 Acknowledgments
- The Rust community for excellent async and HTTP libraries
- OpenAI, Anthropic, and Ollama for their APIs and documentation
- Contributors and users of this library
**Note**: This library is in active development. APIs may change before the 1.0 release.