grok_api 0.1.71

A Rust client library for the Grok AI API (xAI). Simple, robust, and production-ready.

Features

  • 🚀 Easy to use: simple async API with builder pattern
  • 🔄 Automatic retries: built-in retry logic for transient failures
  • 🛡️ Robust error handling: comprehensive error types with detailed messages
  • 🌐 Network resilient: optimised for challenging conditions (Starlink, satellite connections)
  • 🔧 Flexible configuration: customise timeouts, retries, and more
  • 🛠️ Tool / function calling: full support for function calling and agentic workflows
  • 🧠 Reasoning-aware: capability helpers guard against sending unsupported params to reasoning models
  • 📦 Rust 2024 edition: built on the latest edition with MSRV 1.85

Quick Start

Add this to your Cargo.toml:

[dependencies]
grok_api = "0.1"
tokio = { version = "1", features = ["full"] }

Simple Example

use grok_api::{GrokClient, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let client = GrokClient::new("your-api-key")?;

    let response = client
        .chat("What is Rust?", None)
        .await?;

    println!("Response: {}", response);
    Ok(())
}

Conversation with History

use grok_api::{GrokClient, ChatMessage, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let client = GrokClient::new("your-api-key")?;

    let messages = vec![
        ChatMessage::system("You are a helpful Rust expert."),
        ChatMessage::user("How do I create a Vec?"),
    ];

    let response = client
        .chat_with_history(&messages)
        .temperature(0.7)
        .max_tokens(1000)
        .model("grok-4.3")   // recommended default (replaces grok-4-1-fast-reasoning)
        .send()
        .await?;

    println!("Response: {}", response.content().unwrap_or(""));
    println!("Tokens used: {}", response.usage.total_tokens);

    Ok(())
}

Advanced Configuration

use grok_api::{GrokClient, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let client = GrokClient::builder()
        .api_key("your-api-key")
        .timeout_secs(60)
        .max_retries(5)
        .base_url("https://custom-endpoint.com")  // optional
        .build()?;

    // Use client...

    Ok(())
}

API Key

Get your API key from x.ai/api.

Set it as an environment variable:

export GROK_API_KEY="your-api-key-here"

Or pass it directly to the client:

let client = GrokClient::new("your-api-key")?;
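
If you want to support both sources, a small helper can prefer the environment variable and fall back to an explicit key. `resolve_api_key` below is an illustrative sketch, not part of the grok_api API:

```rust
use std::env;

// Illustrative helper (not part of grok_api): prefer the environment
// variable's value, falling back to an explicitly supplied key.
fn resolve_api_key(env_val: Option<String>, explicit: Option<&str>) -> Option<String> {
    env_val.or_else(|| explicit.map(str::to_owned))
}

fn main() {
    let key = resolve_api_key(env::var("GROK_API_KEY").ok(), Some("your-api-key"));
    // Always true here, because an explicit fallback is supplied.
    println!("resolved: {}", key.is_some());
}
```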

Available Models

Last synced with the xAI API in May 2026. Call GET https://api.x.ai/v1/models with your key to see your account's live list.

โš ๏ธ Retirements effective 2026-05-15 12:00 PT โ€” the following API strings will stop working: grok-4-1-fast-reasoning, grok-4-1-fast-non-reasoning, grok-4-0709, grok-code-fast-1, grok-3, grok-imagine-image-pro. See the Migration Guide below.

🌟 Grok 4.3 – New Flagship (1 M token context, May 2026)

| API string | Model variant | Best for |
|---|---|---|
| grok-4.3 | Model::Grok4_3 | Recommended default: fastest and most intelligent; replaces all retiring 4.1/4/3 models |

Grok 4.3 highlights:

  • 1,000,000 token context window
  • Configurable reasoning_effort: "low" / "medium" / "high"
  • $1.25 / 1 M input · $2.50 / 1 M output
  • For lower latency without reasoning, set reasoning_effort = "low"
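
Since reasoning_effort accepts only the three values listed above, it can be worth validating user-supplied config before building a request. A minimal sketch (the helper name is hypothetical, not a crate API):

```rust
// Validate a reasoning_effort value against the options listed above.
// (Illustrative free function; grok_api's own API may differ.)
fn is_valid_reasoning_effort(value: &str) -> bool {
    matches!(value, "low" | "medium" | "high")
}

fn main() {
    assert!(is_valid_reasoning_effort("medium"));
    assert!(!is_valid_reasoning_effort("max"));
    println!("ok");
}
```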

๐Ÿ† Grok 4.20 โ€” Flagship (2 M token context)

| API string | Model variant | Best for |
|---|---|---|
| grok-4.20-0309-reasoning | Model::Grok4_20_0309Reasoning | Complex reasoning, maths, science, multi-step agentic tasks |
| grok-4.20-non-reasoning | Model::Grok4_20NonReasoning | Fast non-reasoning, high-throughput, lower latency |
| grok-4.20-0309-non-reasoning | Model::Grok4_20_0309NonReasoning | Dated variant of the 4.20 standard model |
| grok-4.20-multi-agent-0309 | Model::Grok4_20MultiAgent0309 | Deep research, complex workflows, multi-agent pipelines |

Note: Grok4_20_0309Reasoning / Grok4_20MultiAgent0309 do not support presence_penalty, frequency_penalty, stop, or reasoning_effort. logprobs is silently ignored by all Grok 4.20 models.
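
One way to honour these restrictions in application code is to strip the offending parameters before sending. The sketch below is purely illustrative; in practice the crate's capability helpers (e.g. is_reasoning_model()) serve the same purpose:

```rust
// Sketch: drop request params that the dated 4.20 reasoning / multi-agent
// models reject, per the note above. (Illustrative, not part of grok_api.)
fn strip_unsupported(model: &str, params: &mut Vec<(&str, String)>) {
    let restricted = matches!(
        model,
        "grok-4.20-0309-reasoning" | "grok-4.20-multi-agent-0309"
    );
    if restricted {
        params.retain(|(name, _)| {
            !matches!(
                *name,
                "presence_penalty" | "frequency_penalty" | "stop" | "reasoning_effort"
            )
        });
    }
}

fn main() {
    let mut params = vec![
        ("temperature", "0.7".to_string()),
        ("stop", "\n".to_string()),
    ];
    strip_unsupported("grok-4.20-0309-reasoning", &mut params);
    // Only "temperature" survives for the restricted model.
    println!("{}", params.len());
}
```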

๐Ÿ—„๏ธ Legacy (Grok 3 Mini โ€” still active)

| API string | Model variant | Notes |
|---|---|---|
| grok-3-mini | Model::Grok3Mini | Efficient smaller model; 131 K context; not retiring May 2026 |

๐Ÿ–ผ๏ธ Image Generation

| API string | Model variant | Notes |
|---|---|---|
| grok-imagine-image | Model::GrokImagineImage | Standard image generation |

🎥 Video Generation

| API string | Model variant | Notes |
|---|---|---|
| grok-imagine-video | Model::GrokImagineVideo | Video generation |

🚨 Deprecated models (retire 2026-05-15)

These variants still compile but emit a Rust deprecation warning. Migrate before 2026-05-15.

| Deprecated API string | Model variant | Use instead |
|---|---|---|
| grok-4-1-fast-reasoning | Model::Grok4_1FastReasoning | Model::Grok4_3 |
| grok-4-1-fast-non-reasoning | Model::Grok4_1FastNonReasoning | Model::Grok4_20NonReasoning |
| grok-4-0709 | Model::Grok4_0709 | Model::Grok4_3 |
| grok-3 | Model::Grok3 | Model::Grok4_3 |
| grok-code-fast-1 | Model::GrokCodeFast1 | Model::Grok4_3 |
| grok-imagine-image-pro | Model::GrokImagineImagePro | Model::GrokImagineImage |

Recommended models at a glance

| Use case | Recommended model |
|---|---|
| Everyday / CLI default | grok-4.3 |
| Heavy coding tasks | grok-4.3 |
| Maximum reasoning | grok-4.20-0309-reasoning |
| Speed + high volume | grok-4.20-non-reasoning |
| Agentic / multi-step | grok-4.20-multi-agent-0309 |
| Image generation | grok-imagine-image |
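
If model selection is driven by application config, the table above can be encoded as a simple lookup. The helper and its use-case keys below are hypothetical, purely to illustrate the mapping:

```rust
// The "at a glance" table as a lookup (illustrative helper, not in the crate).
fn recommended_model(use_case: &str) -> &'static str {
    match use_case {
        "max-reasoning" => "grok-4.20-0309-reasoning",
        "high-volume" => "grok-4.20-non-reasoning",
        "agentic" => "grok-4.20-multi-agent-0309",
        "image" => "grok-imagine-image",
        // Everyday / CLI default and heavy coding tasks.
        _ => "grok-4.3",
    }
}

fn main() {
    println!("{}", recommended_model("agentic")); // grok-4.20-multi-agent-0309
}
```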

Using the Model enum

use grok_api::models::Model;

// Type-safe โ€” no typos
let model = Model::Grok4_3;  // new recommended default
println!("{}", model.as_str()); // "grok-4.3"

// Guard pure reasoning models against unsupported params
if model.is_reasoning_model() {
    // Do NOT set frequency_penalty or presence_penalty
}

// grok-4.3 supports configurable reasoning effort
if model.supports_reasoning_effort() {
    println!("Can set reasoning_effort = low | medium | high");
}

// Check context window
if let Some(ctx) = model.context_window() {
    println!("Context: {} tokens", ctx);  // 1_000_000 for grok-4.3
}

// Check capabilities
println!("Language model:    {}", model.is_language_model());
println!("Image model:       {}", model.is_image_model());
println!("Video model:       {}", model.is_video_model());
println!("Supports logprobs: {}", model.supports_logprobs());

// Parse from a string (e.g. from config / env var)
let m = Model::parse("grok-4.3").expect("unknown model");

// All active (non-deprecated) models
for m in Model::all() {
    println!("{}", m);
}

// All models including deprecated ones (for migration tooling)
for m in Model::all_including_deprecated() {
    println!("{}", m);
}

Error Handling

use grok_api::{GrokClient, Error};

match client.chat("Hello", None).await {
    Ok(response)               => println!("Success: {}", response),
    Err(Error::Authentication) => eprintln!("Invalid API key"),
    Err(Error::RateLimit)      => eprintln!("Rate limit exceeded; back off and retry"),
    Err(Error::Network(msg))   => eprintln!("Network error: {}", msg),
    Err(Error::Timeout(secs))  => eprintln!("Timeout after {} seconds", secs),
    Err(e)                     => eprintln!("Other error: {}", e),
}

Retry Logic

Network errors are automatically retried with exponential backoff:

let client = GrokClient::builder()
    .api_key("your-api-key")
    .max_retries(5)   // retry up to 5 times
    .build()?;

Retryable errors include:

  • Network timeouts
  • Connection failures
  • Server errors (5xx)
  • Starlink / satellite network drops
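
The crate does not document its exact backoff schedule; the sketch below only shows the general shape of capped exponential backoff. Jitter is omitted, and the 250 ms base and 8 s cap are assumptions, not the crate's actual values:

```rust
// Deterministic sketch of a capped exponential backoff schedule. The real
// client also adds jitter; base delay and cap here are illustrative.
fn backoff_ms(attempt: u32, base_ms: u64, cap_ms: u64) -> u64 {
    // Double the delay each attempt, saturating rather than overflowing,
    // and never exceed the cap.
    base_ms.saturating_mul(1u64 << attempt.min(16)).min(cap_ms)
}

fn main() {
    let schedule: Vec<u64> = (0..5).map(|a| backoff_ms(a, 250, 8_000)).collect();
    println!("{:?}", schedule); // [250, 500, 1000, 2000, 4000]
}
```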

Function Calling / Tools

Full support for Grok's function calling and agentic tool use:

use grok_api::{GrokClient, ChatMessage};
use serde_json::json;

let tools = vec![
    json!({
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        }
    })
];

let mut messages = vec![
    ChatMessage::user("What's the weather in San Francisco?")
];

let response = client
    .chat_with_history(&messages)
    .model("grok-4.3")
    .tools(tools)
    .send()
    .await?;

if response.has_tool_calls() {
    for call in response.tool_calls().unwrap() {
        println!("Tool: {}", call.function.name);
        println!("Args: {}", call.function.arguments);
        // parse args โ†’ call your function โ†’ feed result back
        let result = "Sunny, 18 °C";
        messages.push(ChatMessage::tool(result, &call.id));
    }
}
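
After receiving tool calls, the application runs the requested functions locally and feeds the results back as tool messages. Below is a minimal dispatcher sketch; the handler and its output are illustrative, and real code would parse `args_json` with serde_json rather than treating it as an opaque string:

```rust
// Sketch of a local dispatcher for tool calls: route the function name the
// model requested to a handler and return a result string to feed back.
fn dispatch_tool(name: &str, args_json: &str) -> Option<String> {
    match name {
        "get_weather" => Some(format!("weather for {}", args_json)),
        // Unknown tool: return None so the caller can surface an error
        // message to the model instead of a result.
        _ => None,
    }
}

fn main() {
    println!("{:?}", dispatch_tool("get_weather", r#"{"location":"San Francisco"}"#));
    println!("{:?}", dispatch_tool("unknown_tool", "{}"));
}
```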

Starlink Optimisation

The library includes special handling for Starlink and other satellite connections:

  • Automatic detection of connection drops
  • Exponential backoff with jitter
  • Extended timeout handling

use grok_api::{GrokClient, Error};

let client = GrokClient::builder()
    .api_key("your-api-key")
    .timeout_secs(60)   // longer timeout for satellite latency
    .max_retries(5)     // more retries for intermittent drops
    .build()?;

match client.chat("Hello", None).await {
    Ok(response) => println!("Success: {}", response),
    Err(e) if e.is_starlink_drop() => {
        eprintln!("Starlink connection dropped; all retries exhausted");
    }
    Err(e) => eprintln!("Error: {}", e),
}

Examples

# Set your API key
export GROK_API_KEY="your-api-key"

# Simple single-turn chat
cargo run --example simple_chat

# Multi-turn conversation
cargo run --example conversation

# Streaming responses
cargo run --example streaming

# Function / tool calling
cargo run --example tools_example

# Video / multimodal
cargo run --example video_chat

Cargo Features

| Feature | Default | Description |
|---|---|---|
| retry | ✅ yes | Automatic retry with exponential backoff |
| starlink | ❌ no | Extra optimisations for satellite connections |

Enable optional features in Cargo.toml:

[dependencies]
grok_api = { version = "0.1", features = ["starlink"] }


Testing

# Unit + doc tests (no API key needed)
cargo test

# With debug logging
RUST_LOG=debug cargo test

# Clippy
cargo clippy -- -D warnings


Minimum Supported Rust Version (MSRV)

This crate requires Rust 1.85 or later (Rust 2024 edition).

Update your toolchain with:

rustup update stable


Documentation

Full API documentation is available at docs.rs/grok_api.


Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under either of:

  • Apache License, Version 2.0
  • MIT License

at your option.


Migration Guide: 0.1.6 → 0.1.7

Six models retire on 2026-05-15 12:00 PT. After that date, the xAI API will return an error for any request that names a retired model.

Quick replacement table

| Old model string | Use this instead |
|---|---|
| grok-4-1-fast-reasoning | grok-4.3 |
| grok-4-1-fast-non-reasoning | grok-4.20-non-reasoning |
| grok-4-0709 | grok-4.3 |
| grok-code-fast-1 | grok-4.3 |
| grok-3 | grok-4.3 |
| grok-imagine-image-pro | grok-imagine-image |

Rust code migration

// Before (0.1.6)
let model = Model::Grok4_1FastReasoning; // ⚠️ deprecated in 0.1.7
// After (0.1.7)
let model = Model::Grok4_3;              // ✅ new recommended default

// Before (0.1.6)
let model = Model::Grok4_1FastNonReasoning; // ⚠️ deprecated in 0.1.7
// After (0.1.7)
let model = Model::Grok4_20NonReasoning;    // ✅ stable 4.20 non-reasoning alias

// Before (0.1.6)
let model = Model::GrokImagineImagePro; // ⚠️ deprecated in 0.1.7
// After (0.1.7)
let model = Model::GrokImagineImage;    // ✅ standard image generation

Deprecated variants still compile (with a warning) and still work until 2026-05-15. This gives you time to migrate at your own pace.
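
For deployments that keep model names as strings in config files, the quick replacement table can also be applied mechanically. The helper below is hypothetical, not part of grok_api:

```rust
// Apply the quick replacement table above to a stored model string.
// (Hypothetical migration helper, not part of grok_api.)
fn migrate_model(old: &str) -> &str {
    match old {
        "grok-4-1-fast-reasoning" | "grok-4-0709" | "grok-code-fast-1" | "grok-3" => "grok-4.3",
        "grok-4-1-fast-non-reasoning" => "grok-4.20-non-reasoning",
        "grok-imagine-image-pro" => "grok-imagine-image",
        // Anything else is already current; pass it through unchanged.
        other => other,
    }
}

fn main() {
    println!("{}", migrate_model("grok-3")); // grok-4.3
}
```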


Disclaimer

This is an unofficial library and is not affiliated with, endorsed by, or sponsored by xAI or X Corp.


Changelog

See CHANGELOG.md for full release notes and version history.


Support

For bugs and feature requests, please open an issue.


Made with ❤️ and Rust by John McConnell