LLM (Large Language Model) integration layer.
This module provides a unified interface for interacting with various LLM providers including OpenAI, Anthropic, Ollama, Google, and many others.
§Supported Providers
Ceylon supports 13+ LLM providers through the UniversalLLMClient, each addressed by a provider::model string (see the sketch after this list):
- OpenAI - GPT-4, GPT-3.5-turbo, etc.
- Anthropic - Claude 3 Opus, Sonnet, Haiku
- Ollama - Local models (Llama, Mistral, Gemma, etc.)
- Google - Gemini Pro
- DeepSeek - DeepSeek Chat, DeepSeek Coder
- X.AI - Grok
- Groq - High-speed inference
- Azure OpenAI - Enterprise OpenAI deployment
- Cohere - Command models
- Mistral - Mistral AI models
- Phind - CodeLlama variants
- OpenRouter - Multi-provider routing
- ElevenLabs - Voice/audio generation
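Every provider is reached through the same constructor, with the provider name as the prefix of the model string. A minimal sketch using model identifiers that appear elsewhere on this page:

use runtime::llm::UniversalLLMClient;

// The prefix before "::" selects the provider; the rest names the model.
let openai = UniversalLLMClient::new("openai::gpt-4", None)?;
let anthropic = UniversalLLMClient::new("anthropic::claude-3-opus-20240229", None)?;
let local = UniversalLLMClient::new("ollama::llama2", None)?;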
§Configuration
Use LLMConfig for comprehensive configuration:
use runtime::llm::LLMConfig;

let config = LLMConfig::new("openai::gpt-4")
    .with_api_key("sk-...")
    .with_temperature(0.7)
    .with_max_tokens(2048)
    .with_resilience(true, 3);
§API Key Detection
Ceylon automatically detects API keys from environment variables:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY
- DEEPSEEK_API_KEY
- XAI_API_KEY
- GROQ_API_KEY
- MISTRAL_API_KEY
- COHERE_API_KEY
- PHIND_API_KEY
- OPENROUTER_API_KEY
- And more…
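A minimal sketch of relying on that detection, assuming the None key below is what triggers the environment lookup (as the examples further down suggest):

use runtime::llm::UniversalLLMClient;

// No explicit key: assumed to be picked up from OPENAI_API_KEY.
let client = UniversalLLMClient::new("openai::gpt-4", None)?;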
§Tool Calling
Ceylon supports native tool calling for compatible models and falls back to text-based tool invocation for others.
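A hedged sketch of the flow follows. The slice argument to complete is assumed to be the tool list, and the commented accessor at the end is hypothetical, inferred from the ToolCall and LLMResponseTrait items below rather than confirmed API:

use runtime::llm::{UniversalLLMClient, LLMClient, LLMResponse};
use runtime::llm::types::Message;

let client = UniversalLLMClient::new("openai::gpt-4", None)?;
let messages = vec![Message {
    role: "user".to_string(),
    content: "What is the weather in Paris?".to_string(),
}];

// An empty slice offers no tools; a populated slice would let compatible
// models respond with ToolCall requests instead of plain text.
let response: LLMResponse<String> = client
    .complete::<LLMResponse<String>, String>(&messages, &[])
    .await?;

// Hypothetical (accessor name assumed): read back any tool requests.
// for call in response.tool_calls() { /* dispatch the tool here */ }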
§Examples
§Basic Usage
use runtime::llm::{UniversalLLMClient, LLMClient, LLMResponse};
use runtime::llm::types::Message;

let client = UniversalLLMClient::new("openai::gpt-4", None)?;
let messages = vec![Message {
    role: "user".to_string(),
    content: "Hello!".to_string(),
}];

let response: LLMResponse<String> = client
    .complete::<LLMResponse<String>, String>(&messages, &[])
    .await?;
§Advanced Configuration
use runtime::llm::{LLMConfig, UniversalLLMClient};

let llm_config = LLMConfig::new("anthropic::claude-3-opus-20240229")
    .with_api_key(std::env::var("ANTHROPIC_API_KEY").unwrap())
    .with_temperature(0.8)
    .with_max_tokens(4096)
    .with_reasoning(true);

let client = UniversalLLMClient::new_with_config(llm_config)?;
§Local Models with Ollama
use runtime::llm::UniversalLLMClient;

// No API key needed for local models
let client = UniversalLLMClient::new("ollama::llama2", None)?;
Re-exports§
pub use llm_agent::LlmAgent;
pub use llm_agent::LlmAgentBuilder;
pub use react::FinishReason;
pub use react::ReActConfig;
pub use react::ReActEngine;
pub use react::ReActResult;
pub use react::ReActStep;
Modules§
- types
Structs§
- LLMConfig: Configuration for LLM providers with all builder options.
- LLMProviderConfig: Legacy config for backward compatibility.
- LLMResponse: Response from an LLM including generated content and tool calls.
- MockLLMClient: Mock LLM client for testing; doesn't make real API calls.
- ToolCall: A request from the LLM to call a tool.
- UniversalLLMClient: Unified client for all supported LLM providers.
Traits§
- LLMClient
- LLMResponseTrait: Trait for LLM response types with tool calling support.