# adk-gemini

Rust client library for Google's Gemini API — content generation, streaming, function calling, embeddings, image/speech generation, batch processing, caching, and Vertex AI.
## Overview

adk-gemini is a comprehensive Rust client for the Google Gemini API, maintained as part of the ADK-Rust project. It provides full coverage of the Gemini API surface:
- Content generation (text, images, audio)
- Real-time streaming responses
- Function calling and tool integration (including Google Search and URL Context)
- Thinking mode (Gemini 2.5 / Gemini 3)
- Thought signatures for multi-turn thinking context
- Text embeddings
- Image generation and editing
- Text-to-speech (single and multi-speaker)
- Batch processing
- Content caching
- File upload and management
- Structured JSON output
- Grounding with Google Search
- Vertex AI (Google Cloud) support with ADC, service accounts, and WIF
- Multimodal input (images, video, PDF, audio)
## Installation

```toml
[dependencies]
adk-gemini = "0.4"
```

Or through adk-model:

```toml
[dependencies]
adk-model = { version = "0.4", features = ["gemini"] }
```
## Quick Start

```rust
use adk_gemini::Gemini;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Gemini::new(std::env::var("GEMINI_API_KEY")?)?;

    let response = client
        .generate_content()
        .with_user_message("Explain Rust's ownership model in one sentence.")
        .execute()
        .await?;

    println!("{}", response.text());
    Ok(())
}
```
## Client Constructors

| Constructor | Description |
|---|---|
| `Gemini::new(api_key)` | Default model (`gemini-2.5-flash`) via v1beta |
| `Gemini::pro(api_key)` | Gemini 2.5 Pro |
| `Gemini::with_model(api_key, model)` | Specific model |
| `Gemini::with_v1(api_key)` | Stable v1 API |
| `Gemini::with_model_v1(api_key, model)` | Specific model on v1 |
| `Gemini::with_base_url(api_key, url)` | Custom endpoint |
| `Gemini::with_google_cloud(api_key, project, location)` | Vertex AI |
| `Gemini::with_google_cloud_adc(project, location)` | Vertex AI with ADC |
| `Gemini::with_service_account_json(json)` | Service account (auto-detects project) |
| `Gemini::with_google_cloud_wif_json(json, project, location, model)` | Workload Identity Federation |
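For example, pinning a specific model at construction time (a minimal sketch; the model id and string-typed argument are illustrative):

```rust
use adk_gemini::Gemini;

// Model id shown for illustration; any supported Gemini model id works here.
let client = Gemini::with_model(
    std::env::var("GEMINI_API_KEY")?,
    "models/gemini-2.5-pro".to_string(),
)?;
```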
## Examples

### Streaming

```rust
use adk_gemini::Gemini;
use futures::TryStreamExt;

let client = Gemini::new(std::env::var("GEMINI_API_KEY")?)?;

let mut stream = client
    .generate_content()
    .with_system_prompt("You are a helpful assistant.")
    .with_user_message("Write a short story about a robot learning Rust.")
    .execute_stream()
    .await?;

while let Some(chunk) = stream.try_next().await? {
    print!("{}", chunk.text());
}
```
### Function Calling

```rust
use adk_gemini::prelude::*;
use schemars::JsonSchema;
use serde::Deserialize;

// Argument schema for the callable function, derived via schemars.
#[derive(Debug, Deserialize, JsonSchema)]
struct WeatherParams {
    /// The city and country, e.g. "Tokyo, Japan".
    location: String,
}

// Declare the function the model may call. The declaration-builder shape
// shown here is a sketch; see the `tools` module for the exact API.
let get_weather = FunctionDeclaration::new("get_weather", "Get the current weather")
    .with_parameters::<WeatherParams>();

let response = client
    .generate_content()
    .with_user_message("What's the weather like in Tokyo right now?")
    .with_function(get_weather)
    .with_function_calling_mode(FunctionCallingMode::Any)
    .execute()
    .await?;

if let Some(call) = response.function_calls().first() {
    println!("Function: {}, args: {}", call.name, call.args);
}
```
### Google Search Grounding

```rust
use adk_gemini::Tool;

let response = client
    .generate_content()
    .with_user_message("What are the latest developments in quantum computing?")
    .with_tool(Tool::google_search())
    .execute()
    .await?;

println!("{}", response.text());

// Access grounding metadata
if let Some(metadata) = response
    .candidates
    .first()
    .and_then(|candidate| candidate.grounding_metadata.as_ref())
{
    println!("Sources: {:?}", metadata.grounding_chunks);
}
```
### URL Context

```rust
use adk_gemini::Tool;

let response = client
    .generate_content()
    .with_user_message("Summarize the article at https://www.rust-lang.org/")
    .with_tool(Tool::url_context())
    .execute()
    .await?;
```
### Thinking Mode (Gemini 2.5 / Gemini 3)

#### Gemini 3: Level-Based Thinking

```rust
use adk_gemini::ThinkingLevel;

let response = client
    .generate_content()
    .with_user_message("If all bloops are razzies and all razzies are lazzies, are all bloops lazzies?")
    .with_thinking_level(ThinkingLevel::High)
    .with_thoughts_included(true)
    .execute()
    .await?;
```

Available levels: `Minimal`, `Low`, `Medium`, `High`.
#### Gemini 2.5: Budget-Based Thinking

```rust
let response = client
    .generate_content()
    .with_user_message("Plan the architecture for a URL-shortener service.")
    .with_thinking_budget(1024)
    .with_thoughts_included(true)
    .execute()
    .await?;

// Access the model's reasoning
for thought in response.thoughts() {
    println!("Thought: {}", thought);
}
```
### Thought Signatures (Multi-Turn Thinking Context)
When using thinking mode with function calling, Gemini 2.5+ returns thoughtSignature values that preserve the model's reasoning context across conversation turns. Include these signatures in subsequent requests to maintain coherent multi-turn thinking.
```rust
let response = client
    .generate_content()
    .with_user_message("Check the weather in Tokyo, then suggest an outfit.")
    .with_tool(weather_tool)
    .with_thinking_config(thinking_config)
    .execute()
    .await?;

// Extract function calls with their thought signatures
for (call, signature) in response.function_calls_with_thoughts() {
    // Echo `signature` back in the follow-up request to keep the model's
    // reasoning context intact across turns.
}
```
Note: `thoughtSignature` is a Part-level field, not part of the `functionCall` object. The `FunctionCall` struct carries the signature internally for convenience during deserialization, but it is serialized only at the Part level when sending requests to the API.
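For reference, a sketch of the wire shape this note describes (the signature value is illustrative):

```json
{
  "role": "model",
  "parts": [
    {
      "functionCall": { "name": "get_weather", "args": { "location": "Tokyo" } },
      "thoughtSignature": "EqQBChg..."
    }
  ]
}
```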
### Structured JSON Output

```rust
use serde_json::{json, Value};

let schema = json!({
    "type": "object",
    "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer" }
    },
    "required": ["name", "age"]
});

let response = client
    .generate_content()
    .with_user_message("Generate a fictional person.")
    .with_response_mime_type("application/json")
    .with_response_schema(schema)
    .execute()
    .await?;

let parsed: Value = serde_json::from_str(&response.text())?;
```
### Text Embeddings

```rust
use adk_gemini::{Gemini, TaskType};

let client = Gemini::with_model(
    std::env::var("GEMINI_API_KEY")?,
    "models/text-embedding-004".to_string(),
)?;

let response = client
    .embed_content()
    .with_text("What is the meaning of life?")
    .with_task_type(TaskType::RetrievalDocument)
    .execute()
    .await?;

println!("Embedding length: {}", response.embedding.values.len());
```
### Image Generation

```rust
// Model id shown for illustration; use any image-capable Gemini model.
let client = Gemini::with_model(
    std::env::var("GEMINI_API_KEY")?,
    "models/gemini-2.5-flash-image".to_string(),
)?;

let response = client
    .generate_content()
    .with_user_message("Generate an image of a rusty robot reading a book.")
    .execute()
    .await?;

// Response contains inline image data (base64-encoded)
for candidate in &response.candidates {
    // Walk the candidate's content parts and decode any inline image blobs
    // (see the `models` module for the Part and Blob types).
}
```
### Text-to-Speech

```rust
use adk_gemini::prelude::*;

// Model id shown for illustration; use a TTS-capable Gemini model.
let client = Gemini::with_model(
    std::env::var("GEMINI_API_KEY")?,
    "models/gemini-2.5-flash-preview-tts".to_string(),
)?;

// Request audio output; set speech and multi-speaker voice options here
// (see the `generation` module).
let speech_config = GenerationConfig::default();

let response = client
    .generate_content()
    .with_user_message("Say cheerfully: have a wonderful day!")
    .with_generation_config(speech_config)
    .execute()
    .await?;
```
### Content Caching

```rust
use std::time::Duration;

let cache = client
    .create_cache()
    .with_display_name("support-docs-cache")?
    .with_system_instruction("You are an expert on the attached documentation.")
    .with_user_message("<large document text to cache>")
    .with_ttl(Duration::from_secs(3600))
    .execute()
    .await?;

// Reuse the cache across multiple queries
let response = client
    .generate_content()
    .with_cached_content(&cache)
    .with_user_message("Summarize the key points.")
    .execute()
    .await?;
```
### Batch Processing

```rust
let request1 = client
    .generate_content()
    .with_user_message("What is the capital of France?")
    .build();

let request2 = client
    .generate_content()
    .with_user_message("What is the capital of Japan?")
    .build();

let batch = client
    .batch_generate_content()
    .with_request(request1)
    .with_request(request2)
    .execute()
    .await?;
```
### Multi-Turn Conversation

```rust
let response1 = client
    .generate_content()
    .with_system_prompt("You are a helpful travel assistant.")
    .with_user_message("I want to visit Japan in spring.")
    .execute()
    .await?;

// Replay the history, appending the model's reply and the next user turn.
let response2 = client
    .generate_content()
    .with_system_prompt("You are a helpful travel assistant.")
    .with_user_message("I want to visit Japan in spring.")
    .with_model_message(response1.text())
    .with_user_message("Which cities should I prioritize?")
    .execute()
    .await?;
```
### Vertex AI (Google Cloud)

```rust
// API key auth (regional endpoint)
let client = Gemini::with_google_cloud(api_key, "my-project", "us-central1")?;

// Application Default Credentials (regional endpoint)
let client = Gemini::with_google_cloud_adc("my-project", "us-central1")?;

// Global endpoint: uses https://aiplatform.googleapis.com
// No custom base URL workaround needed for Gemini 3 models.
let client = Gemini::with_google_cloud_adc("my-project", "global")?;

// Service account
let sa_json = std::fs::read_to_string("service-account.json")?;
let client = Gemini::with_google_cloud_service_account_json(sa_json)?;

// Workload Identity Federation
let wif_json = std::fs::read_to_string("wif-credentials.json")?;
let client = Gemini::with_google_cloud_wif_json(wif_json, "my-project", "us-central1", "gemini-2.5-flash")?;
```
When `location` is `"global"`, the endpoint resolves to `https://aiplatform.googleapis.com`. For any other location (e.g., `"us-central1"`, `"europe-west4"`), the regional format `https://{location}-aiplatform.googleapis.com` is used.
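The resolution rule is simple enough to sketch. The helper below is a hypothetical illustration of the behavior described above, not a function exported by adk-gemini:

```rust
/// Hypothetical helper illustrating Vertex AI endpoint resolution;
/// adk-gemini performs this internally.
fn vertex_endpoint(location: &str) -> String {
    if location == "global" {
        "https://aiplatform.googleapis.com".to_string()
    } else {
        // e.g. "us-central1" -> "https://us-central1-aiplatform.googleapis.com"
        format!("https://{location}-aiplatform.googleapis.com")
    }
}
```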
### Generation Config

```rust
use adk_gemini::GenerationConfig;

let response = client
    .generate_content()
    .with_user_message("Write a haiku about autumn.")
    .with_generation_config(GenerationConfig {
        temperature: Some(0.7),
        max_output_tokens: Some(256),
        ..Default::default()
    })
    .execute()
    .await?;
```
## Backend Architecture

adk-gemini uses a pluggable backend trait to support both AI Studio and Vertex AI:

| Backend | Transport | Auth | Use Case |
|---|---|---|---|
| `StudioBackend` | REST | API key | Default — Google AI Studio |
| `VertexBackend` | REST SSE + gRPC fallback | ADC / Service Account / WIF | Google Cloud Vertex AI |

The `GeminiBackend` trait defines `send_request()` and `send_streaming_request()` methods. All `Gemini::new()` / `Gemini::with_model()` constructors use `StudioBackend` by default. Vertex AI constructors (`with_google_cloud`, `with_google_cloud_adc`, etc.) use `VertexBackend`.
```rust
use adk_gemini::Gemini;

// Studio backend (default)
let client = Gemini::new(std::env::var("GEMINI_API_KEY")?)?;

// Vertex AI backend
let client = Gemini::with_google_cloud_adc("my-project", "us-central1")?;

// Explicit backend via builder (builder type name illustrative)
let client = GeminiBuilder::new()
    .with_api_key(std::env::var("GEMINI_API_KEY")?)
    .with_model("gemini-2.5-flash")
    .build()?;
```
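For orientation, here is a sketch of the trait's shape, assuming async methods and placeholder request/response types (the exact signatures live in the `backend` module):

```rust
use async_trait::async_trait;
use futures::stream::BoxStream;

// Placeholder aliases: the real request/response/error types live in adk-gemini.
type Request = serde_json::Value;
type Response = serde_json::Value;
type BoxError = Box<dyn std::error::Error + Send + Sync>;

// Sketch of the backend trait named above; the method names come from the
// description in this section, while the signatures are assumptions.
#[async_trait]
pub trait GeminiBackend: Send + Sync {
    async fn send_request(&self, request: Request) -> Result<Response, BoxError>;

    async fn send_streaming_request(
        &self,
        request: Request,
    ) -> Result<BoxStream<'static, Result<Response, BoxError>>, BoxError>;
}
```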
## API Modules

| Module | Description |
|---|---|
| `generation` | Content generation (text, images, audio) |
| `embedding` | Text embedding generation |
| `batch` | Batch processing for multiple requests |
| `files` | File upload and management |
| `cache` | Content caching for reusable contexts |
| `safety` | Content moderation and safety settings |
| `tools` | Function calling and tool integration |
| `models` | Core primitive types (Content, Part, Role, Blob) |
| `backend` | Pluggable backend trait (StudioBackend, VertexBackend) |
| `prelude` | Convenient re-exports of commonly used types |
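Most examples in this README assume the commonly used types are in scope; the `prelude` module exists for exactly that (the specific re-exports are listed in the crate docs):

```rust
// Glob-import the crate's common types (client, tools, config types, ...).
use adk_gemini::prelude::*;
```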
## Environment Variables

```bash
# Gemini API
GEMINI_API_KEY=your-api-key
# or
GOOGLE_API_KEY=your-api-key

# Vertex AI (Google Cloud)
GOOGLE_CLOUD_PROJECT=my-project
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```
## Running Examples
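With an API key exported, examples can be run with Cargo in the usual way. The example name below is illustrative; see the crate's `examples/` directory for the real list:

```bash
# Substitute any example from examples/.
GEMINI_API_KEY=your-api-key cargo run --example streaming
```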
## Related Crates

- adk-rust - Meta-crate with all components
- adk-model - Multi-provider LLM integrations (uses adk-gemini internally)
- adk-core - Core `Llm` trait
- adk-agent - Agent implementations
## License

MIT

Original work Copyright (c) 2024 @flachesis
Modifications Copyright (c) 2024 Zavora AI
## Part of ADK-Rust
This crate is part of the ADK-Rust framework for building AI agents in Rust.
## Attribution
This crate is a fork of the excellent gemini-rust library by @flachesis. We are deeply grateful for their work in creating and maintaining this high-quality Gemini API client.
### Upstream Project
- Repository: github.com/flachesis/gemini-rust
- Crates.io: crates.io/crates/gemini-rust
- Original Author: @flachesis
### Why a Fork?
The ADK-Rust project requires certain extensions for deep integration with the Agent Development Kit — exporting additional types (e.g., GroundingMetadata, GroundingChunk) for grounding support, future ADK-specific extensions for agent workflows, and workspace-level version management. We regularly sync with upstream to incorporate improvements and fixes.
### Our Commitment
- Staying aligned with the upstream gemini-rust project as much as possible
- Contributing back any general improvements that benefit the broader community
- Maintaining attribution and respecting the original MIT license
- Minimizing divergence — only adding ADK-specific extensions when necessary
### Acknowledgments
- @flachesis — Creator and maintainer of the original gemini-rust library
- @npatsakula — Major contributions to the upstream project