# grok_api
[Crates.io](https://crates.io/crates/grok_api) · [Documentation](https://docs.rs/grok_api) · [License](https://github.com/microtech/grok-api#license) · [Buy Me a Coffee](https://www.buymeacoffee.com/Cobble)
A Rust client library for the **Grok AI API** (xAI). Simple, robust, and production-ready.
## Features
- 🚀 **Easy to use** — Simple async API with builder pattern
- 🔄 **Automatic retries** — Built-in retry logic for transient failures
- 🛡️ **Robust error handling** — Comprehensive error types with detailed messages
- 🌐 **Network resilient** — Optimised for challenging conditions (Starlink, satellite connections)
- 🔧 **Flexible configuration** — Customise timeouts, retries, and more
- 🛠️ **Tool / Function calling** — Full support for function calling and agentic workflows
- 🧠 **Reasoning-aware** — Capability helpers guard against sending unsupported params to reasoning models
- 📦 **Rust 2024 edition** — Built on the latest edition with MSRV 1.85
---
## Quick Start
Add this to your `Cargo.toml`:
```toml
[dependencies]
grok_api = "0.1"
tokio = { version = "1", features = ["full"] }
```
### Simple Example
```rust
use grok_api::{GrokClient, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let client = GrokClient::new("your-api-key")?;

    let response = client
        .chat("What is Rust?", None)
        .await?;

    println!("Response: {}", response);
    Ok(())
}
```
### Conversation with History
```rust
use grok_api::{GrokClient, ChatMessage, Result};
#[tokio::main]
async fn main() -> Result<()> {
    let client = GrokClient::new("your-api-key")?;

    let messages = vec![
        ChatMessage::system("You are a helpful Rust expert."),
        ChatMessage::user("How do I create a Vec?"),
    ];

    let response = client
        .chat_with_history(&messages)
        .temperature(0.7)
        .max_tokens(1000)
        .model("grok-4-1-fast-reasoning") // recommended default
        .send()
        .await?;

    println!("Response: {}", response.content().unwrap_or(""));
    println!("Tokens used: {}", response.usage.total_tokens);
    Ok(())
}
```
### Advanced Configuration
```rust
use grok_api::{GrokClient, Result};
#[tokio::main]
async fn main() -> Result<()> {
    let client = GrokClient::builder()
        .api_key("your-api-key")
        .timeout_secs(60)
        .max_retries(5)
        .base_url("https://custom-endpoint.com") // optional
        .build()?;

    // Use client...
    Ok(())
}
```
---
## API Key
Get your API key from [x.ai/api](https://x.ai/api).
Set it as an environment variable:
```bash
export GROK_API_KEY="your-api-key-here"
```
Or pass it directly to the client:
```rust
let client = GrokClient::new("your-api-key")?;
```
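Whether the key comes from the environment or is passed explicitly is a decision your application makes before constructing the client. A minimal pure-std sketch of that resolution logic (the `resolve_api_key` helper is ours, not part of this crate's API):

```rust
use std::env;

/// Use an explicitly supplied key if present, otherwise fall back to the
/// `GROK_API_KEY` environment variable. (Helper name is illustrative only.)
fn resolve_api_key(explicit: Option<&str>) -> Option<String> {
    explicit
        .map(str::to_owned)
        .or_else(|| env::var("GROK_API_KEY").ok())
}

fn main() {
    // An explicit key always wins over the environment.
    assert_eq!(
        resolve_api_key(Some("sk-direct")).as_deref(),
        Some("sk-direct")
    );
    println!("key resolution ok");
}
```

The resolved key can then be handed to `GrokClient::new(&key)` as shown above.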
---
## Available Models
> Last synced with xAI API — **April 2026**
> Call `GET https://api.x.ai/v1/models` with your key to see your account's live list.
### 🏆 Grok 4.20 — Flagship (2 M token context)
| Model ID | Enum variant | Best for |
|---|---|---|
| `grok-4.20-0309-reasoning` | `Model::Grok4_20_0309Reasoning` | Complex reasoning, maths, science, multi-step agentic tasks |
| `grok-4.20-0309-non-reasoning` | `Model::Grok4_20_0309NonReasoning` | General tasks, high-throughput, lower latency |
| `grok-4.20-multi-agent-0309` | `Model::Grok4_20MultiAgent0309` | Deep research, complex workflows, multi-agent pipelines |
> **Note:** Grok 4.20 reasoning models do **not** support `presence_penalty`,
> `frequency_penalty`, `stop`, or `reasoning_effort`. Sending these fields returns an API error.
> `logprobs` is silently ignored by all Grok 4.20 models.
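One way to honour this restriction is to drop the rejected fields before building a request whenever the target is a reasoning model. A standalone sketch of that guard, using plain key/value pairs (the helper and constant names are ours, not part of this crate's API):

```rust
/// Sampling params that Grok 4.20 reasoning models reject with an API error.
/// (Constant and helper are illustrative, not part of the crate.)
const REASONING_UNSUPPORTED: [&str; 4] =
    ["presence_penalty", "frequency_penalty", "stop", "reasoning_effort"];

fn strip_unsupported(
    is_reasoning: bool,
    params: Vec<(String, String)>,
) -> Vec<(String, String)> {
    params
        .into_iter()
        .filter(|(k, _)| !(is_reasoning && REASONING_UNSUPPORTED.contains(&k.as_str())))
        .collect()
}

fn main() {
    let params = vec![
        ("temperature".to_string(), "0.7".to_string()),
        ("presence_penalty".to_string(), "0.5".to_string()),
    ];
    // For a reasoning model, only `temperature` survives.
    let kept = strip_unsupported(true, params);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].0, "temperature");
    println!("kept: {:?}", kept);
}
```

In practice you would pair this with `Model::is_reasoning_model()` (shown below under the `Model` enum) rather than a hard-coded flag.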
### ⚡ Grok 4.1 Fast (2 M token context)
| Model ID | Enum variant | Best for |
|---|---|---|
| `grok-4-1-fast-reasoning` | `Model::Grok4_1FastReasoning` | **Recommended default** — great balance of speed, cost, and reasoning |
| `grok-4-1-fast-non-reasoning` | `Model::Grok4_1FastNonReasoning` | Fast standard completions, high volume |
### 🧑‍💻 Code-Optimised
| Model ID | Enum variant | Best for |
|---|---|---|
| `grok-code-fast-1` | `Model::GrokCodeFast1` | Code generation, debugging, file editing, agentic coding |
### 📦 Grok 4 (Previous Flagship)
| Model ID | Enum variant | Notes |
|---|---|---|
| `grok-4-0709` | `Model::Grok4_0709` | Still active; reasoning-only model |
### 🗄️ Legacy (Grok 3)
| Model ID | Enum variant | Notes |
|---|---|---|
| `grok-3` | `Model::Grok3` | Previous flagship; 131 K context |
| `grok-3-mini` | `Model::Grok3Mini` | Efficient smaller model; 131 K context |
### 🖼️ Image Generation
| Model ID | Enum variant | Best for |
|---|---|---|
| `grok-imagine-image-pro` | `Model::GrokImagineImagePro` | High-quality image generation |
| `grok-imagine-image` | `Model::GrokImagineImage` | Standard image generation |
### 🎬 Video Generation
| Model ID | Enum variant | Best for |
|---|---|---|
| `grok-imagine-video` | `Model::GrokImagineVideo` | Video generation |
### Recommended models at a glance
| Use case | Recommended model |
|---|---|
| Everyday / CLI default | `grok-4-1-fast-reasoning` |
| Heavy coding tasks | `grok-code-fast-1` |
| Maximum intelligence | `grok-4.20-0309-reasoning` |
| Speed + high volume | `grok-4.20-0309-non-reasoning` |
| Agentic / multi-step | `grok-4.20-multi-agent-0309` |
| Image generation | `grok-imagine-image-pro` |
### Using the `Model` enum
```rust
use grok_api::models::Model;
// Type-safe — no typos
let model = Model::Grok4_1FastReasoning;
println!("{}", model.as_str()); // "grok-4-1-fast-reasoning"

// Guard reasoning models against unsupported params
if model.is_reasoning_model() {
    // Do NOT set frequency_penalty or presence_penalty
}

// Check context window
if let Some(ctx) = model.context_window() {
    println!("Context: {} tokens", ctx);
}

// Check capabilities
println!("Language model: {}", model.is_language_model());
println!("Image model: {}", model.is_image_model());
println!("Video model: {}", model.is_video_model());
println!("Supports logprobs: {}", model.supports_logprobs());

// Parse from a string (e.g. from config / env var)
let m = Model::parse("grok-4-1-fast-reasoning").expect("unknown model");

// All active models
for m in Model::all() {
    println!("{}", m);
}
```
---
## Error Handling
```rust
use grok_api::{GrokClient, Error};
match client.chat("Hello", None).await {
    Ok(response) => println!("Success: {}", response),
    Err(Error::Authentication) => eprintln!("Invalid API key"),
    Err(Error::RateLimit) => eprintln!("Rate limit exceeded — back off and retry"),
    Err(Error::Network(msg)) => eprintln!("Network error: {}", msg),
    Err(Error::Timeout(secs)) => eprintln!("Timeout after {} seconds", secs),
    Err(e) => eprintln!("Other error: {}", e),
}
```
### Retry Logic
Network errors are automatically retried with exponential backoff:
```rust
let client = GrokClient::builder()
    .api_key("your-api-key")
    .max_retries(5) // retry up to 5 times
    .build()?;
```
Retryable errors include:
- Network timeouts
- Connection failures
- Server errors (5xx)
- Starlink / satellite network drops
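The delay schedule for those retries can be sketched as capped exponential backoff with full jitter: the wait envelope doubles per attempt up to a ceiling, and a random factor spreads concurrent clients apart. This is a standalone illustration of the technique, not the crate's actual retry internals:

```rust
/// Full-jitter exponential backoff delay in milliseconds.
/// `rand01` is any value in [0, 1); a real client draws it from an RNG.
/// (Sketch only — function name and signature are ours.)
fn backoff_delay_ms(attempt: u32, base_ms: u64, cap_ms: u64, rand01: f64) -> u64 {
    // Double the envelope each attempt, saturating, then clamp to the cap.
    let envelope = base_ms
        .saturating_mul(2u64.saturating_pow(attempt))
        .min(cap_ms);
    // Jitter: pick a point anywhere inside the envelope.
    (envelope as f64 * rand01) as u64
}

fn main() {
    // With rand01 near 1.0 the upper envelope is visible:
    // 500, 1000, 2000, 4000, then capped at 8000 ms.
    for attempt in 0..5 {
        let d = backoff_delay_ms(attempt, 500, 8_000, 0.999);
        println!("attempt {attempt}: up to {d} ms");
    }
}
```

The cap keeps worst-case waits bounded even after many failures, which matters on high-latency links where retries are routine.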
---
## Function Calling / Tools
Full support for Grok's function calling and agentic tool use:
```rust
use grok_api::{GrokClient, ChatMessage};
use serde_json::json;
let tools = vec![
    json!({
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        }
    })
];

// `mut` so tool results can be appended for the follow-up request
let mut messages = vec![
    ChatMessage::user("What's the weather in San Francisco?")
];

let response = client
    .chat_with_history(&messages)
    .model("grok-4-1-fast-reasoning")
    .tools(tools)
    .send()
    .await?;

if response.has_tool_calls() {
    for call in response.tool_calls().unwrap() {
        println!("Tool: {}", call.function.name);
        println!("Args: {}", call.function.arguments);

        // parse args → call your function → feed result back
        let result = "Sunny, 18 °C";
        messages.push(ChatMessage::tool(result, &call.id));
    }
}
```
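The "call your function" step above is usually a dispatcher that routes each tool call to a local handler by name and returns the result (or an error for unknown tools) as the string fed back to the model. A minimal standalone sketch, where `get_weather` stands in for your own implementation:

```rust
/// Route a tool call to a local handler by name.
/// (Dispatcher and handlers are illustrative, not part of the crate.)
fn dispatch_tool(name: &str, arguments: &str) -> Result<String, String> {
    match name {
        // In a real client you would deserialize `arguments` (a JSON string)
        // before calling the handler; this stub just echoes it.
        "get_weather" => Ok(format!("Sunny, 18 C (args: {arguments})")),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    let out = dispatch_tool("get_weather", r#"{"location":"San Francisco"}"#)
        .expect("known tool");
    println!("{out}");

    // Unknown tool names should surface an error, not panic —
    // the model may hallucinate a tool that was never declared.
    assert!(dispatch_tool("launch_rocket", "{}").is_err());
}
```

Returning `Err` for unknown names (rather than panicking) lets you report the failure back to the model as a tool message and continue the conversation.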
---
## Starlink Optimisation
The library includes special handling for Starlink and other satellite connections:
- Automatic detection of connection drops
- Exponential backoff with jitter
- Extended timeout handling
```rust
use grok_api::{GrokClient, Error};
let client = GrokClient::builder()
    .api_key("your-api-key")
    .timeout_secs(60) // longer timeout for satellite latency
    .max_retries(5)   // more retries for intermittent drops
    .build()?;

match client.chat("Hello", None).await {
    Ok(response) => println!("Success: {}", response),
    Err(e) if e.is_starlink_drop() => {
        eprintln!("Starlink connection dropped — all retries exhausted");
    }
    Err(e) => eprintln!("Error: {}", e),
}
```
---
## Examples
```bash
# Set your API key
export GROK_API_KEY="your-api-key"
# Simple single-turn chat
cargo run --example simple_chat
# Multi-turn conversation
cargo run --example conversation
# Streaming responses
cargo run --example streaming
# Function / tool calling
cargo run --example tools_example
# Video / multimodal
cargo run --example video_chat
```
---
## Cargo Features
| Feature | Default | Description |
|---|---|---|
| `retry` | ✅ yes | Automatic retry with exponential backoff |
| `starlink` | ❌ no | Extra optimisations for satellite connections |
```toml
[dependencies]
grok_api = { version = "0.1", features = ["starlink"] }
```
---
## Testing
```bash
# Unit + doc tests (no API key needed)
cargo test
# With debug logging
RUST_LOG=debug cargo test
# Clippy
cargo clippy -- -D warnings
```
---
## Minimum Supported Rust Version (MSRV)
This crate requires **Rust 1.85** or later (Rust 2024 edition).
Update your toolchain with:
```bash
rustup update stable
```
---
## Documentation
Full API documentation is available at [docs.rs/grok_api](https://docs.rs/grok_api).
---
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
---
## License
This project is licensed under either of:
- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or <http://www.apache.org/licenses/LICENSE-2.0>)
- MIT License ([LICENSE-MIT](LICENSE-MIT) or <http://opensource.org/licenses/MIT>)
at your option.
---
## Disclaimer
This is an **unofficial** library and is not affiliated with, endorsed by, or sponsored by xAI or X Corp.
---
## Links
- [Crates.io](https://crates.io/crates/grok_api)
- [Documentation](https://docs.rs/grok_api)
- [GitHub Repository](https://github.com/microtech/grok-api)
- [xAI Official Site](https://x.ai)
- [Buy Me a Coffee ☕](https://www.buymeacoffee.com/Cobble)
---
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for full release notes and version history.
---
## Support
For bugs and feature requests, please [open an issue](https://github.com/microtech/grok-api/issues).
---
*Made with ❤️ and Rust — [John McConnell](mailto:john.microtech@gmail.com)*