Module llm

LLM (Large Language Model) integration layer.

This module provides a unified interface for interacting with various LLM providers including OpenAI, Anthropic, Ollama, Google, and many others.

§Supported Providers

Ceylon supports 13+ LLM providers through the UniversalLLMClient:

  • OpenAI - GPT-4, GPT-3.5-turbo, etc.
  • Anthropic - Claude 3 Opus, Sonnet, Haiku
  • Ollama - Local models (Llama, Mistral, Gemma, etc.)
  • Google - Gemini Pro
  • DeepSeek - DeepSeek Chat, DeepSeek Coder
  • X.AI - Grok
  • Groq - High-speed inference
  • Azure OpenAI - Enterprise OpenAI deployment
  • Cohere - Command models
  • Mistral - Mistral AI models
  • Phind - CodeLlama variants
  • OpenRouter - Multi-provider routing
  • ElevenLabs - Voice/audio generation
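
Each provider is selected with a provider::model string when constructing a client. A minimal sketch using provider strings that appear elsewhere on this page; passing None for the key relies on the environment-variable detection described under API Key Detection below:

use runtime::llm::UniversalLLMClient;

// The prefix before "::" picks the provider; the rest names the model.
let openai = UniversalLLMClient::new("openai::gpt-4", None)?;
let claude = UniversalLLMClient::new("anthropic::claude-3-opus-20240229", None)?;
let local = UniversalLLMClient::new("ollama::llama2", None)?; // local model, no API key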

§Configuration

Use LLMConfig for comprehensive configuration:

use runtime::llm::LLMConfig;

let config = LLMConfig::new("openai::gpt-4")
    .with_api_key("sk-...")
    .with_temperature(0.7)
    .with_max_tokens(2048)
    .with_resilience(true, 3);
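
Hand the finished config to the client constructor; the same constructor is used in the Advanced Configuration example below:

use runtime::llm::UniversalLLMClient;

let client = UniversalLLMClient::new_with_config(config)?;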

§API Key Detection

Ceylon automatically detects API keys from environment variables:

  • OPENAI_API_KEY
  • ANTHROPIC_API_KEY
  • GOOGLE_API_KEY
  • DEEPSEEK_API_KEY
  • XAI_API_KEY
  • GROQ_API_KEY
  • MISTRAL_API_KEY
  • COHERE_API_KEY
  • PHIND_API_KEY
  • OPENROUTER_API_KEY
  • And more…
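
When no key is passed explicitly, detection from these variables is automatic; you can also read a variable yourself and hand it to the config builder, as the Advanced Configuration example does. A brief sketch of both approaches:

use runtime::llm::{LLMConfig, UniversalLLMClient};

// Automatic: with OPENAI_API_KEY exported, None is all that is needed.
let auto = UniversalLLMClient::new("openai::gpt-4", None)?;

// Explicit: read the variable and pass it through the builder.
let config = LLMConfig::new("anthropic::claude-3-opus-20240229")
    .with_api_key(std::env::var("ANTHROPIC_API_KEY").unwrap());
let explicit = UniversalLLMClient::new_with_config(config)?;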

§Tool Calling

Ceylon supports native tool calling for compatible models and falls back to text-based tool invocation for others.
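
Tool definitions are supplied through the second argument to complete (the Basic Usage example below passes an empty slice), and requested invocations come back on the response as ToolCall values. A rough sketch; the tool_calls, name, and arguments fields in the commented-out loop are illustrative assumptions, not the crate's actual field names:

use runtime::llm::{UniversalLLMClient, LLMClient, LLMResponse};
use runtime::llm::types::Message;

let client = UniversalLLMClient::new("openai::gpt-4", None)?;
let messages = vec![Message {
    role: "user".to_string(),
    content: "What's the weather in Colombo?".to_string(),
}];

// Pass tool definitions here instead of an empty slice to enable tool calling.
let response: LLMResponse<String> = client
    .complete::<LLMResponse<String>, String>(&messages, &[/* tool definitions */])
    .await?;

// Hypothetical field names, for illustration only:
// for call in &response.tool_calls {
//     println!("tool {} requested with args {}", call.name, call.arguments);
// }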

§Examples

§Basic Usage

use runtime::llm::{UniversalLLMClient, LLMClient, LLMResponse};
use runtime::llm::types::Message;

let client = UniversalLLMClient::new("openai::gpt-4", None)?;
let messages = vec![Message {
    role: "user".to_string(),
    content: "Hello!".to_string(),
}];

let response: LLMResponse<String> = client
    .complete::<LLMResponse<String>, String>(&messages, &[])
    .await?;

§Advanced Configuration

use runtime::llm::{LLMConfig, UniversalLLMClient};

let llm_config = LLMConfig::new("anthropic::claude-3-opus-20240229")
    .with_api_key(std::env::var("ANTHROPIC_API_KEY").unwrap())
    .with_temperature(0.8)
    .with_max_tokens(4096)
    .with_reasoning(true);

let client = UniversalLLMClient::new_with_config(llm_config)?;

§Local Models with Ollama

use runtime::llm::UniversalLLMClient;

// No API key needed for local models
let client = UniversalLLMClient::new("ollama::llama2", None)?;

Re-exports§

pub use llm_agent::LlmAgent;
pub use llm_agent::LlmAgentBuilder;
pub use react::FinishReason;
pub use react::ReActConfig;
pub use react::ReActEngine;
pub use react::ReActResult;
pub use react::ReActStep;

Modules§

llm_agent
react
types

Structs§

LLMConfig
Configuration for LLM providers with all builder options.
LLMProviderConfig
Legacy configuration retained for backward compatibility.
LLMResponse
Response from an LLM including generated content and tool calls.
MockLLMClient
Mock LLM client for testing - doesn’t make real API calls.
ToolCall
A request from the LLM to call a tool.
UniversalLLMClient
Unified client covering all supported LLM providers.

Traits§

LLMClient
LLMResponseTrait
Trait for LLM response types with tool calling support.