
genai - Multi-AI Providers Library for Rust

Currently natively supports: OpenAI, Anthropic, Gemini, xAI, Ollama, Groq, DeepSeek, Cohere, Together, Fireworks, Nebius, Mimo, Zai (Zhipu AI), BigModel.

Also allows a custom endpoint URL via ServiceTargetResolver (see examples/c06-target-resolver.rs)
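
For reference, below is a condensed sketch in the spirit of examples/c06-target-resolver.rs, routing requests to a fixed OpenAI-compatible endpoint. The URL and environment-variable name are placeholders, and the resolver names follow that example; check the example file for the authoritative version.

use genai::adapter::AdapterKind;
use genai::resolver::{AuthData, Endpoint, ServiceTargetResolver};
use genai::{Client, ModelIden, ServiceTarget};

fn build_custom_client() -> Client {
	// Route every request to a custom OpenAI-compatible endpoint
	let target_resolver = ServiceTargetResolver::from_resolver_fn(
		|service_target: ServiceTarget| -> Result<ServiceTarget, genai::resolver::Error> {
			let ServiceTarget { model, .. } = service_target;
			// Placeholder endpoint and key env name -- adjust for your service
			let endpoint = Endpoint::from_static("https://my-gateway.example.com/v1/");
			let auth = AuthData::from_env_name("MY_GATEWAY_API_KEY");
			// Keep the requested model name, but talk the OpenAI protocol to the target
			let model = ModelIden::new(AdapterKind::OpenAI, model.model_name);
			Ok(ServiceTarget { endpoint, auth, model })
		},
	);

	Client::builder()
		.with_service_target_resolver(target_resolver)
		.build()
}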

Provides a single, ergonomic API to many generative AI providers, such as Anthropic, OpenAI, Gemini, xAI, Ollama, Groq, and more.

NOTE: Big update with v0.5.0 - New adapters (BigModel, MIMO), Gemini Thinking support, Anthropic Reasoning Effort, and a more robust internal streaming engine.

v0.5.0 - (2026-01-09)

  • What's new:
    • New Adapters: BigModel.cn and MIMO (thanks to Akagi201).
    • zai: New namespace strategy (zai:: for the default API, zai-coding:: for the coding-plan subscription; same adapter).
    • Gemini Thinking & Thought: Full support for Gemini Thought signatures (thanks to Himmelschmidt) and thinking levels.
    • Reasoning Effort Control: Support for ReasoningEffort for Anthropic (Claude 3.7/4.5) and Gemini (thinking levels), including ReasoningEffort::None; see the sketch after this list.
    • Content & Binary Improvements: Enhanced binary/PDF API and size tracking.
    • Internal Stream Refactor: Switched to a unified EventSourceStream and WebStream for better reliability and performance across all providers.
    • Dependency Upgrade: Now using reqwest 0.13.
  • What's still awesome:
    • Normalized and ergonomic Chat API across all major providers.
    • Native protocol support for Gemini and Anthropic (Reasoning/Thinking controls).
    • PDF, Image, and Embedding support.
    • Custom Auth, Endpoint, and Header overrides.
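
To make the new options concrete, here is a minimal sketch of the v0.5.0 reasoning and namespace features. It assumes the with_reasoning_effort builder and ReasoningEffort enum from genai::chat; the model names are illustrative.

use genai::chat::{ChatMessage, ChatOptions, ChatRequest, ReasoningEffort};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
	let client = Client::default();

	let chat_req = ChatRequest::new(vec![
		ChatMessage::system("Answer in one sentence"),
		ChatMessage::user("Why is the sky blue?"),
	]);

	// Reasoning effort maps to Anthropic thinking budgets and Gemini thinking
	// levels; ReasoningEffort::None (new in v0.5.0) disables thinking where supported.
	let options = ChatOptions::default().with_reasoning_effort(ReasoningEffort::Low);
	let res = client
		.exec_chat("gemini-2.0-flash", chat_req.clone(), Some(&options))
		.await?;
	println!("{}", res.first_text().unwrap_or("NO ANSWER"));

	// Namespaced model names select the service target: zai:: uses the default
	// Zai API, zai-coding:: the coding-plan endpoint (same adapter).
	let res = client.exec_chat("zai::glm-4-plus", chat_req, None).await?;
	println!("{}", res.first_text().unwrap_or("NO ANSWER"));

	Ok(())
}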

See the CHANGELOG.

Big Thanks to

Contributors to this release include Akagi201 (BigModel and MIMO adapters) and Himmelschmidt (Gemini thought signatures).

Usage examples

  • Check out AIPACK, which wraps this genai library into an agentic runtime to run, build, and share AI Agent Packs. See pro@coder for a simple example of how I use AIPACK/genai for production coding.

Note: Feel free to send me a short description and a link to your application or library using genai.


Examples | Thanks | Library Focus | Changelog | Provider Mapping: ChatOptions | Usage

Examples

examples/c00-readme.rs

//! Base examples demonstrating the core capabilities of genai

use genai::chat::printer::{print_chat_stream, PrintChatStreamOptions};
use genai::chat::{ChatMessage, ChatRequest};
use genai::Client;

const MODEL_OPENAI: &str = "gpt-4o-mini"; // o1-mini, gpt-4o-mini
const MODEL_ANTHROPIC: &str = "claude-3-haiku-20240307";
// or namespaced with a simple name ("fireworks::qwen3-30b-a3b") or the full path ("fireworks::accounts/fireworks/models/qwen3-30b-a3b")
const MODEL_FIREWORKS: &str = "accounts/fireworks/models/qwen3-30b-a3b";
const MODEL_TOGETHER: &str = "together::openai/gpt-oss-20b";
const MODEL_GEMINI: &str = "gemini-2.0-flash";
const MODEL_GROQ: &str = "llama-3.1-8b-instant";
const MODEL_OLLAMA: &str = "gemma:2b"; // sh: `ollama pull gemma:2b`
const MODEL_XAI: &str = "grok-3-mini";
const MODEL_DEEPSEEK: &str = "deepseek-chat";
const MODEL_ZAI: &str = "glm-4-plus";
const MODEL_COHERE: &str = "command-r7b-12-2024";

// NOTE: These are the default environment keys for each AI Adapter Type.
//       They can be customized; see `examples/c02-auth.rs`
const MODEL_AND_KEY_ENV_NAME_LIST: &[(&str, &str)] = &[
	// -- De/activate models/providers
	(MODEL_OPENAI, "OPENAI_API_KEY"),
	(MODEL_ANTHROPIC, "ANTHROPIC_API_KEY"),
	(MODEL_GEMINI, "GEMINI_API_KEY"),
	(MODEL_FIREWORKS, "FIREWORKS_API_KEY"),
	(MODEL_TOGETHER, "TOGETHER_API_KEY"),
	(MODEL_GROQ, "GROQ_API_KEY"),
	(MODEL_XAI, "XAI_API_KEY"),
	(MODEL_DEEPSEEK, "DEEPSEEK_API_KEY"),
	(MODEL_OLLAMA, ""),
	(MODEL_ZAI, "ZAI_API_KEY"),
	(MODEL_COHERE, "COHERE_API_KEY"),
];

// NOTE: Model to AdapterKind (AI Provider) type mapping rule
//  - starts_with "gpt"      -> OpenAI
//  - starts_with "claude"   -> Anthropic
//  - starts_with "command"  -> Cohere
//  - starts_with "gemini"   -> Gemini
//  - model in Groq models   -> Groq
//  - starts_with "glm"      -> ZAI
//  - For anything else      -> Ollama
//
// This can be customized; see `examples/c03-mapper.rs` and the sketch after this example.

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
	let question = "Why is the sky red?";

	let chat_req = ChatRequest::new(vec![
		// -- Messages (de/activate to see the differences)
		ChatMessage::system("Answer in one sentence"),
		ChatMessage::user(question),
	]);

	let client = Client::default();

	let print_options = PrintChatStreamOptions::from_print_events(false);

	for (model, env_name) in MODEL_AND_KEY_ENV_NAME_LIST {
		// Skip this provider if its API key environment variable is not set
		if !env_name.is_empty() && std::env::var(env_name).is_err() {
			println!("===== Skipping model: {model} (env var not set: {env_name})");
			continue;
		}

		let adapter_kind = client.resolve_service_target(model).await?.model.adapter_kind;

		println!("\n===== MODEL: {model} ({adapter_kind}) =====");

		println!("\n--- Question:\n{question}");

		println!("\n--- Answer:");
		let chat_res = client.exec_chat(model, chat_req.clone(), None).await?;
		println!("{}", chat_res.first_text().unwrap_or("NO ANSWER"));

		println!("\n--- Answer: (streaming)");
		let chat_res = client.exec_chat_stream(model, chat_req.clone(), None).await?;
		print_chat_stream(chat_res, Some(&print_options)).await?;

		println!();
	}

	Ok(())
}
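
The adapter mapping above can also be overridden programmatically. Below is a minimal sketch in the spirit of examples/c03-mapper.rs, assuming the ModelMapper API from genai::resolver; the "super-fast" alias is made up for illustration, so check that example file for the authoritative version.

use genai::adapter::AdapterKind;
use genai::resolver::ModelMapper;
use genai::{Client, ModelIden};

fn client_with_alias() -> Client {
	// Map a custom alias to a concrete provider/model; pass everything else through
	let model_mapper = ModelMapper::from_mapper_fn(|model_iden: ModelIden| {
		if model_iden.model_name.starts_with("super-fast") {
			Ok(ModelIden::new(AdapterKind::Groq, "llama-3.1-8b-instant"))
		} else {
			Ok(model_iden)
		}
	});

	Client::builder().with_model_mapper(model_mapper).build()
}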

More Examples

Library Focus:

  • Focuses on standardizing chat completion APIs across major AI services.

  • Native implementation, meaning no per-service SDKs.

    • Reason: While there are some variations across the various APIs, they all follow the same pattern, high-level flow, and constructs. Managing the differences at a lower layer is actually simpler, and the benefit compounds across services, compared to doing per-SDK gymnastics.
  • Prioritizes ergonomics and commonality, with depth being secondary. (If you require a complete client API, consider using async-openai and ollama-rs; they are both excellent and easy to use.)

  • Initially, this library mostly focused on text chat APIs; image and PDF support have since been added (see above), with function calling still on the roadmap (see below).

ChatOptions

| Property    | OpenAI Compatibles (1) | Anthropic                 | Gemini generationConfig. | Cohere      |
|-------------|------------------------|---------------------------|--------------------------|-------------|
| temperature | temperature            | temperature               | temperature              | temperature |
| max_tokens  | max_tokens             | max_tokens (default 1024) | maxOutputTokens          | max_tokens  |
| top_p       | top_p                  | top_p                     | topP                     | p           |

  • (1) - OpenAI compatibles: OpenAI, DeepSeek, Groq, Ollama, xAI, Mimo, Together, Fireworks, Nebius, Zai
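
As a quick illustration, the normalized options can be set once and genai maps them to each provider's native property names per the table above. A minimal sketch, assuming the ChatOptions builder methods from genai::chat:

use genai::chat::{ChatMessage, ChatOptions, ChatRequest};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
	let client = Client::default();
	let chat_req = ChatRequest::new(vec![ChatMessage::user("Why is the sky blue?")]);

	// Mapped per provider (e.g., max_tokens -> maxOutputTokens for Gemini,
	// top_p -> p for Cohere)
	let options = ChatOptions::default()
		.with_temperature(0.7)
		.with_max_tokens(512)
		.with_top_p(0.95);

	let res = client
		.exec_chat("claude-3-haiku-20240307", chat_req, Some(&options))
		.await?;
	println!("{}", res.first_text().unwrap_or("NO ANSWER"));

	Ok(())
}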

Usage

| Property                  | OpenAI Compatibles (1)    | Anthropic usage.      | Gemini usageMetadata.    | Cohere meta.tokens. |
|---------------------------|---------------------------|-----------------------|--------------------------|---------------------|
| prompt_tokens             | prompt_tokens             | input_tokens (added)  | promptTokenCount (2)     | input_tokens        |
| completion_tokens         | completion_tokens         | output_tokens (added) | candidatesTokenCount (2) | output_tokens       |
| total_tokens              | total_tokens              | (computed)            | totalTokenCount (2)      | (computed)          |
| prompt_tokens_details     | prompt_tokens_details     | cached/cache_creation | N/A for now              | N/A for now         |
| completion_tokens_details | completion_tokens_details | N/A for now           | N/A for now              | N/A for now         |
  • (1) - OpenAI compatibles: same list as in the ChatOptions note above.

  • (2) - Gemini tokens

    • Right now, with the Gemini stream API, it is not clear whether the usage reported on each event is cumulative or must be summed. It appears to be cumulative, meaning the last event carries the total input, output, and total token counts, so that is the current assumption. See the possible tweet answer for more info.
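
Reading the normalized usage back looks like the following minimal sketch. It assumes the usage field on ChatResponse carries the normalized names from the table above; the counts are printed with {:?} since not every provider reports every value.

use genai::chat::{ChatMessage, ChatRequest};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
	let client = Client::default();
	let chat_req = ChatRequest::new(vec![ChatMessage::user("1 + 1 = ?")]);

	let res = client.exec_chat("gpt-4o-mini", chat_req, None).await?;

	// Normalized usage (see mapping table above). For Gemini streams, these
	// reflect the cumulative counts from the last event, per note (2).
	let usage = &res.usage;
	println!("prompt_tokens:     {:?}", usage.prompt_tokens);
	println!("completion_tokens: {:?}", usage.completion_tokens);
	println!("total_tokens:      {:?}", usage.total_tokens);

	Ok(())
}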

Notes on Possible Direction

  • Will add more data on ChatResponse and ChatStream, especially metadata about usage.
  • Add vision/image support to chat messages and responses.
  • Add function calling support to chat messages and responses.
  • Add embed and embed_batch.
  • Add the AWS Bedrock variants (e.g., Mistral and Anthropic). Most of the work will be on the "interesting" token signature scheme; since this library avoids bringing in large SDKs, it might be a lower-priority feature.
  • Add the Google Vertex AI variants.
  • May add the Azure OpenAI variant (not sure yet).

Links