# Gemini Crate Usage Guide
A comprehensive guide to using the `gemini_crate` library in your Rust projects to interact with Google's Gemini AI API.
## Table of Contents
- [Installation](#installation)
- [Setup](#setup)
- [Basic Usage](#basic-usage)
- [Advanced Usage](#advanced-usage)
- [Error Handling](#error-handling)
- [Configuration](#configuration)
- [Examples](#examples)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Installation
Add `gemini_crate` to your `Cargo.toml`:
```toml
[dependencies]
gemini_crate = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
dotenvy = "0.15"
futures = "0.3" # used by the concurrent examples below
```
## Setup
### 1. Get a Gemini API Key
1. Go to [Google AI Studio](https://aistudio.google.com/app/apikey)
2. Create a new API key
3. Copy the generated key
### 2. Environment Configuration
Create a `.env` file in your project root:
```env
GEMINI_API_KEY=your_actual_api_key_here
```
**Important**: Add `.env` to your `.gitignore` file to keep your API key secure:
```gitignore
.env
```
### 3. Import the Crate
```rust
use gemini_crate::client::GeminiClient;
use gemini_crate::errors::Error;
```
## Basic Usage
### Simple Text Generation
```rust
use gemini_crate::client::GeminiClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Load environment variables
dotenvy::dotenv().ok();
// Create a client
let client = GeminiClient::new()?;
// Generate text
let response = client
.generate_text("gemini-2.5-flash", "What is the capital of France?")
.await?;
println!("Response: {}", response);
Ok(())
}
```
### Listing Available Models
```rust
use gemini_crate::client::GeminiClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let client = GeminiClient::new()?;
// Get all available models
let models = client.list_models().await?;
println!("Available models:");
for model in models.models {
println!("- {} ({})", model.name, model.display_name);
println!(" Description: {}", model.description);
println!(" Supported methods: {:?}", model.supported_generation_methods);
}
Ok(())
}
```
## Advanced Usage
### Using Different Models
The library supports various Gemini models. Choose based on your needs:
```rust
use gemini_crate::client::GeminiClient;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let client = GeminiClient::new()?;
// Fast model for quick responses
let quick_response = client
.generate_text("gemini-2.5-flash", "Quick question: What's 2+2?")
.await?;
// Pro model for complex tasks
let detailed_response = client
.generate_text("gemini-2.5-pro", "Explain quantum computing in detail")
.await?;
// Latest model (automatically updated)
let latest_response = client
.generate_text("gemini-flash-latest", "What are the latest AI trends?")
.await?;
println!("Quick: {}", quick_response);
println!("Detailed: {}", detailed_response);
println!("Latest: {}", latest_response);
Ok(())
}
```
### Batch Processing
```rust
use gemini_crate::client::GeminiClient;
use futures::future::try_join_all;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let client = GeminiClient::new()?;
let questions = vec![
"What is the capital of Japan?",
"Explain photosynthesis briefly",
"What's the largest planet in our solar system?",
];
// Process multiple questions concurrently
let tasks = questions.into_iter().map(|question| {
    // Reborrow so each `async move` future captures a shared reference
    // instead of taking ownership of the client.
    let client = &client;
    async move {
        client.generate_text("gemini-2.5-flash", question).await
    }
});
let responses = try_join_all(tasks).await?;
for (i, response) in responses.iter().enumerate() {
println!("Response {}: {}", i + 1, response);
}
Ok(())
}
```
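`try_join_all` launches every request at once and fails fast on the first error. For larger batches, a buffered stream caps how many requests are in flight and preserves per-question errors. A sketch using the `futures` stream combinators (the concurrency limit of 2 is arbitrary):
```rust
use futures::stream::{self, StreamExt};
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;
    let questions = vec![
        "What is the capital of Japan?",
        "Explain photosynthesis briefly",
        "What's the largest planet in our solar system?",
    ];
    // At most 2 requests in flight at any time; each item keeps its own Result.
    let responses: Vec<_> = stream::iter(questions)
        .map(|q| client.generate_text("gemini-2.5-flash", q))
        .buffer_unordered(2)
        .collect()
        .await;
    for result in responses {
        match result {
            Ok(text) => println!("OK: {}", text),
            Err(e) => eprintln!("Failed: {}", e),
        }
    }
    Ok(())
}
```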
### Custom Configuration
```rust
use gemini_crate::{client::GeminiClient, config::Config};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create custom configuration
let config = Config::from_api_key("your_custom_api_key".to_string());
let client = GeminiClient::with_config(config);
let response = client
.generate_text("gemini-2.5-flash", "Hello, world!")
.await?;
println!("Response: {}", response);
Ok(())
}
```
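Hardcoding a key as above is only for illustration. In real code, read the key from the environment (or a secrets manager) and pass it through; a sketch using only the APIs shown in this guide:
```rust
use gemini_crate::{client::GeminiClient, config::Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    // Read the key at startup instead of embedding it in source.
    let api_key = std::env::var("GEMINI_API_KEY")?;
    let client = GeminiClient::with_config(Config::from_api_key(api_key));
    let response = client.generate_text("gemini-2.5-flash", "Hello!").await?;
    println!("{}", response);
    Ok(())
}
```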
## Error Handling
The library provides comprehensive error handling through the `Error` enum:
```rust
use gemini_crate::{client::GeminiClient, errors::Error};
#[tokio::main]
async fn main() {
dotenvy::dotenv().ok();
let client = match GeminiClient::new() {
Ok(c) => c,
Err(Error::Config(msg)) => {
eprintln!("Configuration error: {}", msg);
eprintln!("Make sure GEMINI_API_KEY is set in your .env file");
return;
}
Err(e) => {
eprintln!("Failed to create client: {}", e);
return;
}
};
match client.generate_text("gemini-2.5-flash", "Hello!").await {
Ok(response) => println!("Success: {}", response),
Err(Error::Network(e)) => {
eprintln!("Network error: {}", e);
eprintln!("Check your internet connection");
}
Err(Error::Api(msg)) => {
eprintln!("API error: {}", msg);
eprintln!("This might be a rate limit or invalid model");
}
Err(Error::Json(e)) => {
eprintln!("JSON parsing error: {}", e);
eprintln!("The API response format might have changed");
}
Err(e) => eprintln!("Other error: {}", e),
}
}
```
## Configuration
### Environment Variables
The library supports the following environment variables:
- `GEMINI_API_KEY` (required): Your Google Gemini API key
### .env File Example
```env
# Required: Your Gemini API key
GEMINI_API_KEY=AIzaSyBYour_API_Key_Here
# Optional: Set logging level (if using env_logger)
RUST_LOG=info
```
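If the key is essential to your application, it helps to fail fast at startup with a clear message rather than at the first request. A minimal sketch:
```rust
fn main() {
    // Abort early with a clear message if the key is missing.
    dotenvy::dotenv().ok();
    if std::env::var("GEMINI_API_KEY").is_err() {
        eprintln!("GEMINI_API_KEY is not set; see the Setup section");
        std::process::exit(1);
    }
    // ... continue with normal startup
}
```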
## Examples
### CLI Application
```rust
// main.rs
use gemini_crate::client::GeminiClient;
use std::env;
use std::io::{self, Write};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let args: Vec<String> = env::args().collect();
if args.len() < 2 {
eprintln!("Usage: {} <prompt>", args[0]);
std::process::exit(1);
}
let prompt = args[1..].join(" ");
let client = GeminiClient::new()?;
print!("Thinking... ");
io::stdout().flush()?;
let response = client
.generate_text("gemini-2.5-flash", &prompt)
.await?;
println!("\n{}", response);
Ok(())
}
```
### Web Service Integration
```rust
// For use with axum or warp
use gemini_crate::client::GeminiClient;
use serde::{Deserialize, Serialize};
#[derive(Deserialize)]
struct ChatRequest {
message: String,
model: Option<String>,
}
#[derive(Serialize)]
struct ChatResponse {
response: String,
}
async fn chat_endpoint(
client: GeminiClient,
request: ChatRequest,
) -> Result<ChatResponse, Box<dyn std::error::Error>> {
let model = request.model.as_deref().unwrap_or("gemini-2.5-flash");
let response = client
.generate_text(model, &request.message)
.await?;
Ok(ChatResponse { response })
}
```
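Wiring `chat_endpoint` into a server depends on the framework. Below is a sketch for axum 0.7 that shares one client across handlers via `Arc` state, reusing the `ChatRequest`/`ChatResponse` types above; it assumes `GeminiClient` is `Send + Sync`, and the route path and error mapping are illustrative:
```rust
use std::sync::Arc;
use axum::{extract::State, http::StatusCode, routing::post, Json, Router};
use gemini_crate::client::GeminiClient;

// Handler version of chat_endpoint for axum.
async fn chat(
    State(client): State<Arc<GeminiClient>>,
    Json(req): Json<ChatRequest>,
) -> Result<Json<ChatResponse>, StatusCode> {
    let model = req.model.as_deref().unwrap_or("gemini-2.5-flash");
    let response = client
        .generate_text(model, &req.message)
        .await
        .map_err(|_| StatusCode::BAD_GATEWAY)?;
    Ok(Json(ChatResponse { response }))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = Arc::new(GeminiClient::new()?);
    let app = Router::new().route("/chat", post(chat)).with_state(client);
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```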
### Interactive Chat Loop
```rust
use gemini_crate::client::GeminiClient;
use std::io::{self, Write};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let client = GeminiClient::new()?;
println!("Gemini Chat - Type 'quit' to exit");
loop {
print!("You: ");
io::stdout().flush()?;
let mut input = String::new();
io::stdin().read_line(&mut input)?;
let input = input.trim();
if input == "quit" {
break;
}
if input.is_empty() {
continue;
}
print!("Gemini: ");
io::stdout().flush()?;
match client.generate_text("gemini-2.5-flash", input).await {
Ok(response) => println!("{}\n", response),
Err(e) => eprintln!("Error: {}\n", e),
}
}
println!("Goodbye!");
Ok(())
}
```
## Best Practices
### 1. Network Reliability
The library includes built-in retry logic with exponential backoff, which is especially useful on high-latency or intermittent connections (satellite links such as Starlink, for example):
```rust
use gemini_crate::client::GeminiClient;
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let client = GeminiClient::new()?;
// The client automatically handles network drops and retries
// You can wrap in additional timeout if needed
let response = tokio::time::timeout(
Duration::from_secs(30),
client.generate_text("gemini-2.5-flash", "Your prompt here")
).await??;
println!("Response: {}", response);
Ok(())
}
```
### 2. Model Selection
Choose the right model for your use case:
```rust
// For quick, simple tasks
let model = "gemini-2.5-flash";
// For complex reasoning and analysis
let model = "gemini-2.5-pro";
// For always getting the latest version
let model = "gemini-flash-latest";
// For specific versions (more predictable)
let model = "gemini-2.0-flash-001";
```
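If you select models dynamically, a small helper keeps the choice in one place. A minimal sketch; the mapping is illustrative, not a recommendation:
```rust
/// Pick a model name for a task. Adjust the heuristic to your workload.
fn pick_model(needs_deep_reasoning: bool) -> &'static str {
    if needs_deep_reasoning {
        "gemini-2.5-pro"
    } else {
        "gemini-2.5-flash"
    }
}
```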
### 3. Rate Limiting
Implement your own request limiting for production use; a semaphore is a simple way to bound the number of in-flight requests:
```rust
use std::sync::Arc;
use tokio::sync::Semaphore;
use gemini_crate::client::GeminiClient;
struct RateLimitedClient {
client: GeminiClient,
semaphore: Arc<Semaphore>,
}
impl RateLimitedClient {
fn new(client: GeminiClient, max_concurrent: usize) -> Self {
Self {
client,
semaphore: Arc::new(Semaphore::new(max_concurrent)),
}
}
async fn generate_text(&self, model: &str, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
let _permit = self.semaphore.acquire().await?;
self.client.generate_text(model, prompt).await.map_err(Into::into)
}
}
```
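Usage mirrors the plain client, assuming the `RateLimitedClient` from the block above is in scope:
```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    // Sketch: allow at most 4 requests in flight at once.
    let limited = RateLimitedClient::new(GeminiClient::new()?, 4);
    let answer = limited.generate_text("gemini-2.5-flash", "Hello!").await?;
    println!("{}", answer);
    Ok(())
}
```
Note that a semaphore caps *concurrency*, not requests per second; for per-second quotas, pair it with a delay or a token bucket.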
### 4. Error Recovery
```rust
use gemini_crate::{client::GeminiClient, errors::Error};
use tokio::time::{sleep, Duration};
async fn resilient_generate(
client: &GeminiClient,
model: &str,
prompt: &str,
max_retries: u32,
) -> Result<String, Error> {
let mut retries = 0;
loop {
match client.generate_text(model, prompt).await {
Ok(response) => return Ok(response),
Err(Error::Network(_)) if retries < max_retries => {
retries += 1;
let delay = Duration::from_secs(2_u64.pow(retries));
println!("Network error, retrying in {} seconds...", delay.as_secs());
sleep(delay).await;
}
Err(e) => return Err(e),
}
}
}
```
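Calling it looks like the plain method, with an added retry budget:
```rust
// Inside an async context, with `client` built as usual:
// retry network failures up to 3 times before giving up.
let answer = resilient_generate(&client, "gemini-2.5-flash", "Hello!", 3).await?;
println!("{}", answer);
```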
## Troubleshooting
### Common Issues
1. **"GEMINI_API_KEY must be set" Error**
- Make sure your `.env` file is in the project root
- Verify the API key is correct
- Check that `dotenvy::dotenv().ok()` is called before creating the client (the diagnostic sketch after this list verifies the key is visible)
2. **"Model not found" Error**
- Use `client.list_models()` to see available models
- Update to a current model name (avoid deprecated ones like `gemini-pro`)
3. **Network Timeouts**
- The library has built-in retry logic
- On slow or intermittent connections (Starlink and other satellite links, for example), consider adding application-level timeouts
- Check your internet connection
4. **Rate Limiting**
- Implement delays between requests
- Use the batch processing pattern for multiple requests
- Consider upgrading your API quota
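The following sketch combines the first two checks: it confirms the key is visible to the process and lists the models your key can actually access (output varies by account):
```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    // Check 1: is the key visible to this process?
    match std::env::var("GEMINI_API_KEY") {
        Ok(key) => println!("GEMINI_API_KEY is set ({} characters)", key.len()),
        Err(_) => eprintln!("GEMINI_API_KEY is NOT set - check your .env file"),
    }
    // Check 2: which models can this key use?
    let client = GeminiClient::new()?;
    for model in client.list_models().await?.models {
        println!("{}", model.name);
    }
    Ok(())
}
```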
### Debug Mode
Enable debug logging to troubleshoot issues:
Add `env_logger` to `Cargo.toml`:
```toml
[dependencies]
env_logger = "0.10"
```
Then initialize it early in your program:
```rust
fn main() {
    env_logger::init();
    // ... rest of your code
}
```
Set the environment variable:
```env
RUST_LOG=debug
```
### Testing
Create tests that don't hit the actual API:
```rust
#[cfg(test)]
mod tests {
    use gemini_crate::{client::GeminiClient, config::Config};

    #[tokio::test]
    async fn test_client_creation() {
        // Constructing a client from an explicit config should not hit the network.
        let config = Config::from_api_key("test_key".to_string());
        let _client = GeminiClient::with_config(config);
    }
}
```
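For testing your own logic, a common pattern is to hide the client behind a small trait so tests can substitute a stub. The `TextGen` trait below is hypothetical, defined by your application rather than by `gemini_crate`, and `async fn` in traits requires Rust 1.75 or newer:
```rust
use gemini_crate::client::GeminiClient;

// Hypothetical application-side abstraction; not part of gemini_crate.
trait TextGen {
    async fn generate(&self, model: &str, prompt: &str) -> Result<String, String>;
}

// The real client delegates to generate_text.
impl TextGen for GeminiClient {
    async fn generate(&self, model: &str, prompt: &str) -> Result<String, String> {
        self.generate_text(model, prompt).await.map_err(|e| e.to_string())
    }
}

// A stub for tests: no network, fixed output.
struct MockGen;

impl TextGen for MockGen {
    async fn generate(&self, _model: &str, _prompt: &str) -> Result<String, String> {
        Ok("canned response".to_string())
    }
}

// Application code is written against the trait, not the concrete client.
async fn summarize<G: TextGen>(generator: &G, text: &str) -> Result<String, String> {
    generator.generate("gemini-2.5-flash", &format!("Summarize: {}", text)).await
}

#[cfg(test)]
mod mock_tests {
    use super::*;

    #[tokio::test]
    async fn summarize_uses_the_stub() {
        let out = summarize(&MockGen, "long text").await.unwrap();
        assert_eq!(out, "canned response");
    }
}
```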
## Support
For issues and questions:
1. Check this documentation first
2. Look at the examples in the repository
3. File an issue on the project's GitHub repository
4. Check the [Google AI documentation](https://ai.google.dev/docs) for API-specific questions
## License
This library is provided as-is. Make sure to comply with Google's Gemini API terms of service when using this crate.