# LLM Kit xAI

xAI (Grok) provider for LLM Kit - complete integration with xAI's Grok models featuring reasoning capabilities, integrated search, and image generation.

> **Note:** This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
## Features
- Text Generation: Full support for Grok models with streaming and tool calling
- Streaming: Real-time response streaming with Server-Sent Events
- Tool Calling: Complete function calling support with tool execution
- Image Generation: Create images with grok-2-image model
- Reasoning Mode: Access model reasoning with configurable effort levels
- Integrated Search: Web, X (Twitter), news, and RSS search capabilities
- Citations: Automatic citation extraction from search results
- Response Format: JSON mode and structured outputs with JSON schema
## Installation

Add this to your `Cargo.toml`:

```toml
# Crate names below are a best-effort reconstruction; verify the exact names on crates.io.
[dependencies]
llm-kit = "0.1"
llm-kit-core = "0.1"
llm-kit-xai = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

### Using the Client Builder (Recommended)

```rust
// Sketch of the recommended flow; import paths, the GenerateText constructor,
// and the prompt setter name are assumptions, not confirmed by this README.
use llm_kit_xai::XaiClient;
use llm_kit_core::GenerateText;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the provider (see the Configuration section for all builder options)
    let provider = XaiClient::new()
        .api_key("your-api-key")
        .build();

    // Create a chat model and generate text
    let model = provider.chat_model("grok-4");
    let result = GenerateText::new(model)
        .prompt("Hello, Grok!")
        .execute()
        .await?;

    println!("{:?}", result.content);
    Ok(())
}
```
### Using Settings Directly (Alternative)

```rust
// A sketch only: the settings type and constructor names below are assumptions,
// not confirmed by this README. Prefer the builder shown above.
use llm_kit_xai::{XaiProvider, XaiProviderSettings};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let settings = XaiProviderSettings {
        api_key: Some("your-api-key".to_string()),
        ..Default::default()
    };
    let provider = XaiProvider::new(settings);
    // Use `provider` the same way as in the builder example above.
    Ok(())
}
```
## Configuration

### Environment Variables

Set your xAI API key as an environment variable:
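```bash
# XAI_API_KEY is the conventional variable name; confirm it matches what the provider reads.
export XAI_API_KEY="your-api-key"
```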
### Using the Client Builder

```rust
use llm_kit_xai::XaiClient; // crate path is an assumption

let provider = XaiClient::new()
    .api_key("your-api-key")
    .base_url("https://api.x.ai/v1")    // example value; only needed for a custom endpoint
    .header("X-Custom-Header", "value") // example header
    .name("xai")                        // example provider name
    .build();
```
### Builder Methods

The `XaiClient` builder supports:

- `.api_key(key)` - Set the API key
- `.base_url(url)` - Set custom base URL
- `.name(name)` - Set provider name
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers (see the sketch below)
- `.build()` - Build the provider
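A minimal sketch of attaching several headers at once with `.headers(map)`; the map type accepted by the builder is an assumption (a `HashMap<String, String>` is shown here):

```rust
use std::collections::HashMap;
use llm_kit_xai::XaiClient;

let mut extra_headers = HashMap::new();
extra_headers.insert("X-Request-Source".to_string(), "docs-example".to_string());
extra_headers.insert("X-Environment".to_string(), "staging".to_string());

let provider = XaiClient::new()
    .api_key("your-api-key")
    .headers(extra_headers) // accepted map type is an assumption
    .build();
```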
## Supported Models

### Chat Models

- `grok-4` - Latest Grok-4 model with advanced capabilities
- `grok-4-fast-reasoning` - Fast model with reasoning capabilities
- `grok-4-fast-non-reasoning` - Fast model without reasoning
- `grok-code-fast-1` - Optimized for code generation
- `grok-3` - Grok-3 base model
- `grok-3-fast` - Faster Grok-3 variant
- `grok-3-mini` - Smaller, efficient model
- `grok-2-vision-1212` - Vision-capable model
- `grok-2-1212` - Grok-2 model with December 2024 updates
- `grok-beta` - Beta model with latest features

```rust
// Create a chat model (the model id argument is an example; exact signature assumed)
let model = provider.chat_model("grok-4");
```
### Image Models

- `grok-2-image` - Image generation model

```rust
// Create an image model (the model id argument is an example; exact signature assumed)
let model = provider.image_model("grok-2-image");
```
## Provider-Specific Options
xAI supports advanced features through provider options that can be passed using the llm-kit-core API.
### Reasoning Mode
Control the model's reasoning effort level:
```rust
use llm_kit_core::GenerateText; // import path is an assumption
use serde_json::json;

let result = GenerateText::new(model)
    .prompt("Explain quantum entanglement step by step.") // prompt setter name assumed
    // Exact payload shape (flat vs. namespaced under a provider key) is an assumption.
    .provider_options(json!({ "reasoningEffort": "high" }))
    .execute()
    .await?;
```
Access reasoning content in the response:
```rust
// Reasoning content is automatically extracted to result.content
for content in result.content {
    // Reasoning parts appear alongside regular text parts; inspect each one.
    println!("{:?}", content);
}
```
### Integrated Search
Enable web, X (Twitter), news, or RSS search:
```rust
use llm_kit_core::GenerateText; // import path is an assumption
use serde_json::json;

let result = GenerateText::new(model)
    .prompt("What happened in AI news this week?") // prompt setter name assumed
    .provider_options(json!({
        "searchParameters": {
            "sources": ["web", "x", "news"], // entry format is an assumption; see the options table below
            "recencyFilter": "week"
        }
    }))
    .execute()
    .await?;
```
### Citations
Citations are automatically extracted from search results:
```rust
let result = GenerateText::new(model)
    .prompt("Summarize today's top technology stories.") // prompt setter name assumed
    .execute()
    .await?;

// Citations available in result.content
for content in result.content {
    // Citation parts are emitted alongside the generated text parts.
    println!("{:?}", content);
}
```
### Response Format (JSON Mode)
Force structured JSON outputs:
```rust
// Import paths and the LanguageModelResponseFormat variant names below are assumptions.
use llm_kit_core::{GenerateText, LanguageModelResponseFormat};
use serde_json::json;

// Simple JSON mode
let result = GenerateText::new(model)
    .with_response_format(LanguageModelResponseFormat::Json)
    .execute()
    .await?;

// Structured outputs with JSON schema
let schema = json!({
    "type": "object",
    "properties": {
        "name": { "type": "string" },
        "age": { "type": "number" }
    },
    "required": ["name", "age"]
});

let result = GenerateText::new(model)
    .with_response_format(LanguageModelResponseFormat::JsonSchema { schema })
    .execute()
    .await?;
```
### Parallel Function Calling
Control parallel tool execution:
```rust
use llm_kit_core::GenerateText; // import path is an assumption
use serde_json::json;

let result = GenerateText::new(model)
    .tools(tools) // `tools` is assumed to be built elsewhere with the crate's tool API
    .provider_options(json!({ "parallelFunctionCalling": true }))
    .execute()
    .await?;
```
### Available Provider Options
| Option | Type | Description |
|---|---|---|
| `reasoningEffort` | string | Reasoning effort level: `"low"`, `"medium"`, `"high"` |
| `searchParameters.recencyFilter` | string | Time filter: `"hour"`, `"day"`, `"week"`, `"month"`, `"year"` |
| `searchParameters.sources` | array | Search sources: `web`, `x`, `news`, `rss` |
| `parallelFunctionCalling` | bool | Enable parallel tool execution |
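Several of these options can be combined in a single `provider_options` payload. The sketch below assumes the flat key layout shown in the table and reuses the `GenerateText` call shape from the sections above (constructor and payload nesting are assumptions):

```rust
use serde_json::json;

let result = GenerateText::new(model)
    .provider_options(json!({
        "reasoningEffort": "medium",
        "searchParameters": {
            "recencyFilter": "week",
            "sources": ["web", "news"]
        },
        "parallelFunctionCalling": true
    }))
    .execute()
    .await?;
```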
## Examples

See the `examples/` directory for complete examples:

- `chat.rs` - Basic chat completion using `do_generate()` directly
- `stream.rs` - Streaming responses using `do_stream()` directly
- `chat_tool_calling.rs` - Tool calling using `do_generate()` directly
- `stream_tool_calling.rs` - Streaming with tools using `do_stream()` directly
- `image_generation.rs` - Image generation using `do_generate()` directly
Run examples with:
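```bash
# Assumes the examples read the API key from XAI_API_KEY; swap `chat` for any example file name.
XAI_API_KEY="your-api-key" cargo run --example chat
```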
## Documentation
## License
MIT
## Contributing
Contributions are welcome! Please see the Contributing Guide for more details.