
Crate adk_model


§adk-model

LLM model integrations for ADK (Gemini, OpenAI, Anthropic, DeepSeek, Groq, Ollama, Fireworks AI, Together AI, Mistral AI, Perplexity, Cerebras, SambaNova, Amazon Bedrock, Azure AI Inference).

§Overview

This crate provides LLM implementations for ADK agents. Currently supports:

  • GeminiModel - Google’s Gemini models (3 Pro, 2.5 Flash, etc.)
  • OpenAIClient - OpenAI models (GPT-5, GPT-5-mini, o3, etc.) — requires openai feature
  • AzureOpenAIClient - Azure OpenAI Service — requires openai feature
  • OpenAICompatible - Any OpenAI-compatible API (xAI, Fireworks, Together, Mistral, Perplexity, Cerebras, SambaNova, or custom) — requires openai feature, use OpenAICompatibleConfig presets
  • AnthropicClient - Anthropic Claude models — requires anthropic feature
  • DeepSeekClient - DeepSeek models — requires deepseek feature
  • GroqClient - Groq ultra-fast inference — requires groq feature
  • OllamaModel - Local LLMs via Ollama — requires ollama feature
  • BedrockClient - Amazon Bedrock via AWS SDK — requires bedrock feature
  • AzureAIClient - Azure AI Inference endpoints — requires azure-ai feature
  • MockLlm - Mock LLM for testing

§Quick Start

§Gemini

use adk_model::GeminiModel;

let api_key = std::env::var("GOOGLE_API_KEY").unwrap();
let model = GeminiModel::new(&api_key, "gemini-2.5-flash").unwrap();

§OpenAI

use adk_model::openai::{OpenAIClient, OpenAIConfig};

let model = OpenAIClient::new(OpenAIConfig::new(
    std::env::var("OPENAI_API_KEY").unwrap(),
    "gpt-5-mini",
)).unwrap();
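AzureOpenAIClient (listed in the overview) is behind the same openai feature. A minimal sketch follows, but note that everything beyond the type name is an assumption: the argument list (endpoint, API key, deployment name) is modeled on the other provider constructors, so verify the exact signature against the openai module docs.

```rust
use adk_model::openai::AzureOpenAIClient;

// Hypothetical constructor shape (endpoint, API key, deployment name);
// the real signature may differ. Check the openai module documentation.
let model = AzureOpenAIClient::new(
    "https://my-resource.openai.azure.com",
    std::env::var("AZURE_OPENAI_API_KEY").unwrap(),
    "gpt-5-mini",
).unwrap();
```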

§Anthropic

use adk_model::anthropic::{AnthropicClient, AnthropicConfig};

let model = AnthropicClient::new(AnthropicConfig::new(
    std::env::var("ANTHROPIC_API_KEY").unwrap(),
    "claude-sonnet-4-5-20250929",
)).unwrap();

§DeepSeek

use adk_model::deepseek::{DeepSeekClient, DeepSeekConfig};

// Chat model
let chat = DeepSeekClient::chat(std::env::var("DEEPSEEK_API_KEY").unwrap()).unwrap();

// Reasoner with thinking mode
let reasoner = DeepSeekClient::reasoner(std::env::var("DEEPSEEK_API_KEY").unwrap()).unwrap();

§OpenAI-Compatible Providers (Fireworks, Together, Mistral, Perplexity, Cerebras, SambaNova, xAI)

All OpenAI-compatible providers use OpenAICompatible with provider presets:

use adk_model::openai_compatible::{OpenAICompatible, OpenAICompatibleConfig};

// Fireworks AI
let model = OpenAICompatible::new(OpenAICompatibleConfig::fireworks(
    std::env::var("FIREWORKS_API_KEY").unwrap(),
    "accounts/fireworks/models/llama-v3p1-8b-instruct",
)).unwrap();

// Together AI
let model = OpenAICompatible::new(OpenAICompatibleConfig::together(
    std::env::var("TOGETHER_API_KEY").unwrap(),
    "meta-llama/Llama-3.3-70B-Instruct-Turbo",
)).unwrap();

// Or any custom OpenAI-compatible endpoint
let model = OpenAICompatible::new(
    OpenAICompatibleConfig::new("your-api-key", "your-model")
        .with_base_url("https://your-endpoint.com/v1")
        .with_provider_name("my-provider"),
).unwrap();

§Amazon Bedrock

use adk_model::bedrock::{BedrockClient, BedrockConfig};

// Uses AWS IAM credentials from the environment (no API key needed)
let config = BedrockConfig::new("us-east-1", "anthropic.claude-sonnet-4-20250514-v1:0");
let model = BedrockClient::new(config).await.unwrap();

§Azure AI Inference

use adk_model::azure_ai::{AzureAIClient, AzureAIConfig};

let model = AzureAIClient::new(AzureAIConfig::new(
    "https://my-endpoint.eastus.inference.ai.azure.com",
    std::env::var("AZURE_AI_API_KEY").unwrap(),
    "meta-llama-3.1-8b-instruct",
)).unwrap();

§Ollama (Local)

use adk_model::ollama::{OllamaModel, OllamaConfig};

// Default: localhost:11434
let model = OllamaModel::new(OllamaConfig::new("llama3.2")).unwrap();

§Supported Models

§Gemini

| Model | Description |
|-------|-------------|
| `gemini-3-pro-preview` | Most intelligent, complex agentic workflows (1M context) |
| `gemini-3-flash-preview` | Frontier intelligence at Flash speed (1M context) |
| `gemini-2.5-pro` | Advanced reasoning and multimodal (1M context) |
| `gemini-2.5-flash` | Balanced speed and capability, recommended (1M context) |
| `gemini-2.5-flash-lite` | Ultra-fast for high-volume tasks (1M context) |

§OpenAI

| Model | Description |
|-------|-------------|
| `gpt-5` | Strongest coding and agentic model with adaptive reasoning |
| `gpt-5-mini` | Efficient variant for most tasks |
| `o3` | Advanced reasoning model for complex problem solving |
| `o4-mini` | Efficient reasoning model (200K context) |
| `gpt-4.1` | General purpose model with 1M context |

§Anthropic

| Model | Description |
|-------|-------------|
| `claude-opus-4-5-20251101` | Most capable for complex autonomous tasks |
| `claude-sonnet-4-5-20250929` | Best balance of intelligence, speed, and cost |
| `claude-haiku-4-5-20251001` | Ultra-efficient for high-volume workloads |
| `claude-opus-4-20250514` | Hybrid model with extended thinking |
| `claude-sonnet-4-20250514` | Balanced model with extended thinking |

§DeepSeek

| Model | Description |
|-------|-------------|
| `deepseek-chat` | V3.2 non-thinking mode for fast general-purpose tasks |
| `deepseek-reasoner` | V3.2 thinking mode with chain-of-thought reasoning |

§Groq

| Model | Description |
|-------|-------------|
| `meta-llama/llama-4-scout-17b-16e-instruct` | Llama 4 Scout via Groq LPU |
| `llama-3.3-70b-versatile` | Versatile large model |
| `llama-3.1-8b-instant` | Ultra-fast at 560 T/s |
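Groq has no snippet in the Quick Start above. A minimal sketch, assuming GroqClient takes a GroqConfig with the same (API key, model) constructor shape as the other provider clients; the exact signature is an assumption, so check the groq module docs.

```rust
use adk_model::groq::{GroqClient, GroqConfig};

// Assumed constructor shape, modeled on the OpenAI/Anthropic clients.
let model = GroqClient::new(GroqConfig::new(
    std::env::var("GROQ_API_KEY").unwrap(),
    "llama-3.3-70b-versatile",
)).unwrap();
```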

§OpenAI-Compatible Providers (via openai feature)

Use OpenAICompatibleConfig presets — one client, one feature flag:

| Provider | Preset | Env Var |
|----------|--------|---------|
| Fireworks AI | `OpenAICompatibleConfig::fireworks()` | `FIREWORKS_API_KEY` |
| Together AI | `OpenAICompatibleConfig::together()` | `TOGETHER_API_KEY` |
| Mistral AI | `OpenAICompatibleConfig::mistral()` | `MISTRAL_API_KEY` |
| Perplexity | `OpenAICompatibleConfig::perplexity()` | `PERPLEXITY_API_KEY` |
| Cerebras | `OpenAICompatibleConfig::cerebras()` | `CEREBRAS_API_KEY` |
| SambaNova | `OpenAICompatibleConfig::sambanova()` | `SAMBANOVA_API_KEY` |
| xAI (Grok) | `OpenAICompatibleConfig::xai()` | `XAI_API_KEY` |
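Each preset plugs into the same OpenAICompatible::new pattern shown in the Quick Start; only the preset constructor and environment variable change. A sketch for two presets not demonstrated earlier, where the model ids are illustrative placeholders rather than names taken from this crate:

```rust
use adk_model::openai_compatible::{OpenAICompatible, OpenAICompatibleConfig};

// Mistral AI via its preset (model id is illustrative).
let mistral = OpenAICompatible::new(OpenAICompatibleConfig::mistral(
    std::env::var("MISTRAL_API_KEY").unwrap(),
    "mistral-small-latest",
)).unwrap();

// xAI (Grok) via its preset (model id is illustrative).
let grok = OpenAICompatible::new(OpenAICompatibleConfig::xai(
    std::env::var("XAI_API_KEY").unwrap(),
    "grok-3-mini",
)).unwrap();
```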

§Other Providers

| Provider | Feature Flag | Env Var |
|----------|--------------|---------|
| Amazon Bedrock | `bedrock` | AWS IAM credentials |
| Azure AI Inference | `azure-ai` | `AZURE_AI_API_KEY` |

§Features

  • Async streaming with backpressure
  • Tool/function calling support
  • Multimodal input (text, images, audio, video, PDF)
  • Generation configuration (temperature, top_p, etc.)
  • OpenAI-compatible APIs (Ollama, vLLM, etc.)

Re-exports§

pub use gemini::GeminiModel;
pub use mock::MockLlm;
pub use provider::ModelProvider;
pub use retry::RetryConfig;
pub use retry::ServerRetryHint;

Modules§

gemini
mock
provider
retry