# xAI SDK
A comprehensive Rust SDK for xAI's API, providing type-safe gRPC clients for all xAI services including Grok language models, embeddings, image generation, and more.
## Features

- **Complete API Coverage**: Full gRPC client implementation for all xAI services
- **Type Safety**: Auto-generated Rust types from Protocol Buffers
- **Async/Await**: Built on Tokio for high-performance asynchronous operations
- **Multiple Models**: Support for all xAI language models (Grok-2, Grok-3, etc.)
- **Streaming Support**: Real-time streaming for chat completions and text generation
- **Response Assembly**: Convert streaming chunks into complete responses
- **Secure**: TLS encryption with automatic certificate validation
- **Production Ready**: Comprehensive error handling and connection management
## Quick Start

### Prerequisites
- Rust 1.70+ installed
- xAI API key
### Installation

Add to your `Cargo.toml` (the crate and dependency names below are illustrative; check crates.io for the exact names):

```toml
[dependencies]
xai-sdk = "0.3.0"
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
```
### Running the Examples

The example names below are illustrative; see the `examples/` directory for the exact binaries.

1. Set your API key as an environment variable:

   ```bash
   export XAI_API_KEY="your-api-key-here"
   ```

2. Run the authentication info example:

   ```bash
   cargo run --example auth_info
   ```

3. Run the raw text sampling example:

   ```bash
   cargo run --example sample_text
   ```

4. Run the chat completion example (supports multiple modes):

   ```bash
   # Blocking completion
   cargo run --example chat_completion
   # Streaming completion
   cargo run --example chat_completion -- stream
   # Streaming with assembly
   cargo run --example chat_completion -- stream-assemble
   ```

5. Run the multi-client example (demonstrates using multiple services over a shared channel):

   ```bash
   cargo run --example multi_client
   ```

6. Run the interceptor composition example:

   ```bash
   cargo run --example interceptor_composition
   ```
## Usage Examples

### Modular Client Architecture (Recommended)

The SDK uses a modular architecture where each service has its own client module with automatic authentication:

```rust
// Note: constructor and request signatures here are illustrative;
// see the examples/ directory and crate docs for the exact API.
use xai_sdk::sample::{self, Request};

// Create an authenticated client - no manual auth needed!
let mut client = sample::client::new(&api_key).await?;
let request = Request::new("grok-3", "Write a haiku about Rust.");

// Authentication is handled automatically by the client
let response = client.sample_text(request).await?;
println!("{:?}", response.into_inner());
```
### Using Multiple Services

```rust
use xai_sdk::{chat, embed, models, sample};

// Each service has its own authenticated client
let chat_client = chat::client::new(&api_key).await?;
let sample_client = sample::client::new(&api_key).await?;
let models_client = models::client::new(&api_key).await?;
let embed_client = embed::client::new(&api_key).await?;

// All clients handle authentication automatically
// No need to manually add auth headers to requests!
```
### Chat Completion

```rust
use xai_sdk::chat::{self, Message, Request};

// Create an authenticated chat client
let mut client = chat::client::new(&api_key).await?;

// Field names are illustrative; see the generated types for the exact shape
let message = Message {
    role: "user".into(),
    content: "Hello, Grok!".into(),
};
let request = Request::new("grok-3", vec![message]);

// Authentication is automatic - no manual auth needed!
let response = client.get_completion(request).await?;
println!("{:?}", response.into_inner());
```
### Streaming Chat Completion

```rust
use xai_sdk::chat::{self, Message, Request};
use xai_sdk::chat::stream::{self, Consumer};

// Create an authenticated chat client
let mut client = chat::client::new(&api_key).await?;

let message = Message {
    role: "user".into(),
    content: "Tell me a story.".into(),
};
let request = Request::new("grok-3", vec![message]);

// Process the streaming response with automatic authentication
let stream = client.get_completion_chunk(request).await?.into_inner();
let consumer = Consumer::with_stdout();
let chunks = stream::process(stream, consumer).await?;

// Assemble the collected chunks into a complete response
if let Some(response) = stream::assemble(&chunks) {
    println!("{:?}", response);
}
```
## API Services
The SDK provides clients for all xAI services:
### Chat Service

- `GetCompletion` - Blocking chat completion
- `GetCompletionChunk` - Streaming chat completion
- `StartDeferredCompletion` - Async completion with polling
- `GetDeferredCompletion` - Retrieve async results
- `GetStoredCompletion` - Retrieve a stored chat completion
- `DeleteStoredCompletion` - Delete a stored chat completion
### Sample Service

- `SampleText` - Raw text generation
- `SampleTextStreaming` - Streaming text generation
### Models Service

- `ListLanguageModels` - List available language models
- `ListEmbeddingModels` - List embedding models
- `ListImageGenerationModels` - List image generation models
### Embed Service

- `Embed` - Generate embeddings from text or images
### Image Service

- `GenerateImage` - Create images from text prompts
### Auth Service

- `get_api_key_info` - Get API key information
## Client Modules
The SDK is organized into focused modules, each providing easy client creation:
### Available Modules

- `auth` - Authentication service
- `chat` - Chat completions and streaming
- `documents` - Document processing
- `embed` - Text and image embeddings
- `image` - Image generation
- `models` - Model listing and information
- `sample` - Text sampling and generation
- `tokenize` - Text tokenization
### Client Creation

Each module provides a `client` submodule with automatic authentication:

```rust
use xai_sdk::{chat, embed, models, sample};
use xai_sdk::chat::{Message, Request};

// Create authenticated clients for different services
let mut chat_client = chat::client::new(&api_key).await?;
let sample_client = sample::client::new(&api_key).await?;
let models_client = models::client::new(&api_key).await?;
let embed_client = embed::client::new(&api_key).await?;

// All requests are automatically authenticated - no manual auth needed!
let request = Request::new("grok-3", vec![Message {
    role: "user".into(),
    content: "Hello!".into(),
}]);
let response = chat_client.get_completion(request).await?;
```
### Complete Example

Here's a complete example showing multiple services working together through the modular architecture (the constructor and request signatures are illustrative; see the `examples/` directory for a full program):

```rust
use xai_sdk::{chat, models};
use xai_sdk::chat::{Message, Request};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("XAI_API_KEY")?;

    // List the available language models
    let mut models_client = models::client::new(&api_key).await?;
    let listing = models_client.list_language_models().await?;
    println!("{:?}", listing.into_inner());

    // Run a chat completion
    let mut chat_client = chat::client::new(&api_key).await?;
    let request = Request::new("grok-3", vec![Message {
        role: "user".into(),
        content: "Hello, Grok!".into(),
    }]);
    let response = chat_client.get_completion(request).await?;
    println!("{:?}", response.into_inner());

    Ok(())
}
```
## Streaming Utilities
The SDK provides powerful utilities for working with streaming responses:
### Stream Consumer

A flexible callback system for processing streaming data:

- `on_content_token(total_choices, choice_idx, token)` - Called for each piece of response content
- `on_reason_token(total_choices, choice_idx, token)` - Called for each piece of reasoning content
- `on_chunk(chunk)` - Called for each complete chunk received
### Stream Processing Functions

- `chat::stream::process` - Process streaming responses with custom callbacks
- `chat::stream::assemble` - Convert collected chunks into a complete response
- `chat::stream::Consumer::with_stdout()` - Pre-configured consumer for single-choice real-time output
- `chat::stream::Consumer::with_buffered_stdout()` - Pre-configured consumer for multi-choice buffered output
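Conceptually, assembly folds each choice's streamed deltas back into a complete message. The sketch below illustrates that idea with simplified, hypothetical types (`Chunk` and `assemble` here are not the SDK's actual structs or functions):

```rust
// Simplified, hypothetical chunk type illustrating how streamed deltas
// can be folded into complete per-choice responses. The SDK's real
// chunk types are richer (roles, finish reasons, usage, etc.).
struct Chunk {
    choice_idx: usize,
    content_delta: String,
}

/// Concatenate each choice's deltas in arrival order.
fn assemble(chunks: &[Chunk], total_choices: usize) -> Vec<String> {
    let mut out = vec![String::new(); total_choices];
    for chunk in chunks {
        out[chunk.choice_idx].push_str(&chunk.content_delta);
    }
    out
}
```

This is why the consumer callbacks receive `(total_choices, choice_idx, token)`: with multiple completions in flight, the choice index is what keeps interleaved deltas separated.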
## Configuration

The SDK supports comprehensive configuration options:

- **Temperature**: Controls randomness (0.0 to 2.0)
- **Top-p**: Nucleus sampling parameter (0.0 to 1.0)
- **Max tokens**: Maximum number of tokens to generate
- **Log probabilities**: Enable detailed token probability logging
- **Multiple completions**: Generate multiple responses per request
- **Stop sequences**: Custom stop conditions
- **Frequency/Presence penalties**: Control repetition and topic diversity
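To make the documented ranges concrete, here is a minimal, self-contained sketch of a sampling-parameter struct with validation; the struct and field names are hypothetical and do not mirror the SDK's actual request builder:

```rust
// Hypothetical sampling-parameter struct illustrating the documented
// ranges; the SDK's real request types may be shaped differently.
#[derive(Debug, Clone)]
struct SamplingParams {
    temperature: f32,  // valid range: 0.0 to 2.0
    top_p: f32,        // valid range: 0.0 to 1.0
    max_tokens: u32,   // maximum tokens to generate
    n: u32,            // number of completions per request
    stop: Vec<String>, // custom stop sequences
}

impl SamplingParams {
    /// Reject out-of-range values before a request is ever sent.
    fn validate(&self) -> Result<(), String> {
        if !(0.0..=2.0).contains(&self.temperature) {
            return Err(format!("temperature {} outside 0.0..=2.0", self.temperature));
        }
        if !(0.0..=1.0).contains(&self.top_p) {
            return Err(format!("top_p {} outside 0.0..=1.0", self.top_p));
        }
        Ok(())
    }
}
```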
## Security

- **TLS Encryption**: Automatic HTTPS with certificate validation
- **Authentication**: Bearer token support for API key authentication
- **Secure by Default**: No manual TLS configuration required
## Error Handling
Comprehensive error handling for:
- Connection errors and timeouts
- Authentication failures
- API rate limiting
- Invalid parameters
- Network issues
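A common pattern with gRPC clients is to branch on the status code to decide whether a call is worth retrying: rate limits and transient network failures usually are, while bad parameters and auth failures are not. The sketch below uses a simplified, self-defined enum modeled loosely on gRPC status codes; it is not the SDK's actual error type:

```rust
// Simplified status codes modeled on gRPC's, for illustration only.
#[derive(Debug, PartialEq)]
enum Code {
    Unauthenticated,   // authentication failure: fix the API key, don't retry
    ResourceExhausted, // API rate limiting: retry after backing off
    InvalidArgument,   // invalid parameters: fix the request, don't retry
    Unavailable,       // transient connection/network failure: retry
}

/// Decide whether a failed call should be retried (with backoff).
fn should_retry(code: &Code) -> bool {
    matches!(code, Code::ResourceExhausted | Code::Unavailable)
}
```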
## Development

This SDK is built using:

- **Protocol Buffers**: Auto-generated Rust types from xAI's `.proto` definitions
- **Tonic**: Modern gRPC framework for Rust with async/await support
- **Prost**: High-performance Protocol Buffer implementation
- **Tokio**: Async runtime for Rust
The code is generated from xAI's official Protocol Buffer definitions, ensuring compatibility and type safety.
## Changelog

See [CHANGELOG.md](CHANGELOG.md) for a detailed list of changes and new features.
## License
This project is licensed under the MIT License.