# Gemini Crate


A robust Rust client library for Google's Gemini AI API with built-in error handling, retry logic, and comprehensive model support.

[![Crates.io](https://img.shields.io/crates/v/gemini_crate.svg)](https://crates.io/crates/gemini_crate)
[![Documentation](https://docs.rs/gemini_crate/badge.svg)](https://docs.rs/gemini_crate)
[![License](https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg)](LICENSE)

## Features


- 🚀 **Simple API** - Easy-to-use client for Gemini AI models
- 🔄 **Automatic Retries** - Built-in exponential backoff for network reliability
- 🌐 **Starlink Optimized** - Designed for satellite internet connections with dropout handling
- 📦 **Model Discovery** - List and discover available Gemini models
- 🛡️ **Comprehensive Error Handling** - Detailed error types for robust applications
- ⚡ **Async/Await Support** - Fully asynchronous with Tokio
- 🔧 **Configurable** - Flexible configuration options

## Quick Start


### 1. Add to your project


```toml
[dependencies]
gemini_crate = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
dotenvy = "0.15"
```

### 2. Set up your API key


Create a `.env` file in your project root:

```env
GEMINI_API_KEY=your_gemini_api_key_here
```

Get your API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
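
Before wiring up the client, it can help to fail fast when the key is missing. Below is a hypothetical preflight check (not part of `gemini_crate`) using only the standard library; the `key_present` helper name is an assumption for illustration:

```rust
use std::env;

/// Hypothetical helper (not part of gemini_crate): report whether an
/// environment variable is set.
fn key_present(name: &str) -> bool {
    env::var(name).is_ok()
}

fn main() {
    // If you keep the key in a .env file, load it first with dotenvy::dotenv().ok().
    if key_present("GEMINI_API_KEY") {
        println!("GEMINI_API_KEY found");
    } else {
        eprintln!("GEMINI_API_KEY is not set; get one from Google AI Studio");
    }
}
```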

### 3. Basic usage


```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load environment variables
    dotenvy::dotenv().ok();
    
    // Create client
    let client = GeminiClient::new()?;
    
    // Generate text
    let response = client
        .generate_text("gemini-2.5-flash", "What is the capital of France?")
        .await?;
    
    println!("Response: {}", response);
    
    Ok(())
}
```

## Usage Examples


### List Available Models


```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;
    
    let models = client.list_models().await?;
    
    for model in models.models {
        println!("- {} ({})", model.name, model.display_name);
        println!("  Methods: {:?}", model.supported_generation_methods);
    }
    
    Ok(())
}
```

### Error Handling


```rust
use gemini_crate::{client::GeminiClient, errors::Error};

#[tokio::main]
async fn main() {
    dotenvy::dotenv().ok();
    
    let client = match GeminiClient::new() {
        Ok(c) => c,
        Err(Error::Config(msg)) => {
            eprintln!("Configuration error: {}", msg);
            eprintln!("Make sure GEMINI_API_KEY is set in your .env file");
            return;
        }
        Err(e) => {
            eprintln!("Failed to create client: {}", e);
            return;
        }
    };
    
    match client.generate_text("gemini-2.5-flash", "Hello!").await {
        Ok(response) => println!("Success: {}", response),
        Err(Error::Network(e)) => eprintln!("Network error: {}", e),
        Err(Error::Api(msg)) => eprintln!("API error: {}", msg),
        Err(e) => eprintln!("Other error: {}", e),
    }
}
```

### Batch Processing


```rust
use gemini_crate::client::GeminiClient;
use futures::future::try_join_all;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;
    
    let questions = vec![
        "What is the capital of Japan?",
        "Explain photosynthesis briefly",
        "What's the largest planet?",
    ];
    
    let tasks = questions.into_iter().map(|question| {
        client.generate_text("gemini-2.5-flash", question)
    });
    
    let responses = try_join_all(tasks).await?;
    
    for (i, response) in responses.iter().enumerate() {
        println!("Response {}: {}", i + 1, response);
    }
    
    Ok(())
}
```
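
Note that `try_join_all` starts every request at once, which can trip API rate limits for large batches. One simple mitigation is to process prompts in fixed-size batches and await each batch before starting the next. A minimal sketch of the batching logic (standard library only; the client calls are shown as comments, since running them requires an API key):

```rust
fn main() {
    let questions = vec!["q1", "q2", "q3", "q4", "q5"];
    let batch_size = 2;

    // `chunks` yields at most `batch_size` prompts per iteration.
    for batch in questions.chunks(batch_size) {
        // In real code, await the whole batch before moving on, e.g.:
        //   let responses = try_join_all(
        //       batch.iter().map(|q| client.generate_text("gemini-2.5-flash", q)),
        //   ).await?;
        println!("processing batch of {}: {:?}", batch.len(), batch);
    }
}
```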

## Available Models


The library works with any model the Gemini API exposes; commonly used models include:

| Model | Best For | Speed | Context |
|-------|----------|--------|---------|
| `gemini-2.5-flash` | General tasks | Fast | 1M tokens |
| `gemini-2.5-pro` | Complex reasoning | Medium | 1M tokens |
| `gemini-flash-latest` | Latest features | Fast | Variable |
| `gemini-pro-latest` | Latest pro features | Medium | Variable |

Use `client.list_models()` to see all available models and their capabilities.

## Examples


Run the included examples:

```bash
# Interactive chat
cargo run --example simple_chat

# List all models
cargo run --example list_models

# Batch processing demo
cargo run --example batch_processing
```

## Error Types


The library provides comprehensive error handling:

- `Error::Network` - Network connectivity issues
- `Error::Api` - Gemini API errors (rate limits, invalid requests)
- `Error::Json` - Response parsing errors
- `Error::Config` - Configuration issues (missing API key)

## Best Practices


### 1. Environment Setup

```env
# .env file
GEMINI_API_KEY=your_api_key_here
RUST_LOG=info  # Optional: for debugging
```

### 2. Rate Limiting

```rust
use std::time::Duration;
use tokio::time::sleep;

// Add delays between requests
for prompt in prompts {
    let response = client.generate_text("gemini-2.5-flash", prompt).await?;
    println!("{}", response);
    sleep(Duration::from_millis(500)).await; // Be nice to the API
}
```

### 3. Model Selection

```rust
// For quick responses
let model = "gemini-2.5-flash";

// For complex reasoning
let model = "gemini-2.5-pro"; 

// For latest features
let model = "gemini-flash-latest";
```
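
If the choice recurs throughout an application, a small helper keeps it in one place. The function below is hypothetical (not part of `gemini_crate`), just a sketch of centralizing the model-name decision:

```rust
/// Hypothetical helper (not part of gemini_crate): pick a model name
/// based on the kind of task.
fn pick_model(complex_reasoning: bool, want_latest: bool) -> &'static str {
    match (complex_reasoning, want_latest) {
        (true, _) => "gemini-2.5-pro",
        (false, true) => "gemini-flash-latest",
        (false, false) => "gemini-2.5-flash",
    }
}

fn main() {
    println!("{}", pick_model(false, false)); // quick responses
    println!("{}", pick_model(true, false)); // complex reasoning
}
```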

## Network Reliability


The library is designed for unreliable connections (like Starlink):

- ✅ Automatic retry with exponential backoff
- ✅ Transient error detection
- ✅ Timeout handling
- ✅ Network dropout recovery
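
The retry behavior happens inside the client, so you don't need to implement it yourself. For readers curious what exponential backoff looks like, here is a generic, synchronous sketch (standard library only; the crate's actual retry policy, attempt counts, and delays may differ):

```rust
use std::time::Duration;

/// Retry a fallible operation, doubling the delay after each failure:
/// base, 2*base, 4*base, ... (illustrative sketch only).
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // Out of attempts: surface the last error.
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                // Wait base * 2^attempt before retrying.
                std::thread::sleep(base_delay * 2u32.pow(attempt));
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate an operation that fails twice, then succeeds.
    let mut calls = 0;
    let result: Result<u32, &str> = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok(42) }
        },
        5,
        Duration::from_millis(1),
    );
    println!("{:?} after {} calls", result, calls); // Ok(42) after 3 calls
}
```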

## Configuration


### Environment Variables

- `GEMINI_API_KEY` (required) - Your Gemini API key

### Custom Configuration

```rust
use gemini_crate::{client::GeminiClient, config::Config};

let config = Config::from_api_key("your_api_key".to_string());
let client = GeminiClient::with_config(config);
```

## Documentation


- [Full Usage Guide](USAGE.md) - Comprehensive examples and patterns
- [API Documentation](https://docs.rs/gemini_crate) - Complete API reference
- [Examples](examples/) - Ready-to-run example applications

## Requirements


- Rust 2024 edition
- Tokio async runtime
- Valid Google Gemini API key

## Contributing


1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass: `cargo test`
5. Run clippy: `cargo clippy`
6. Submit a pull request

## License


Licensed under either of:

- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE))
- MIT License ([LICENSE-MIT](LICENSE-MIT))

at your option.

## Troubleshooting


### Common Issues


**"GEMINI_API_KEY must be set"**
- Ensure your `.env` file is in the project root
- Verify the API key is correct
- Call `dotenvy::dotenv().ok()` before creating the client

**"Model not found"**
- Use `client.list_models()` to see available models
- Update to current model names (avoid deprecated ones like `gemini-pro`)

**Network timeouts**
- The library has built-in retry logic
- For Starlink connections, consider application-level timeouts
- Check internet connectivity

For more help, see the [full troubleshooting guide](USAGE.md#troubleshooting).