# LLM Kit DeepSeek

DeepSeek provider for LLM Kit: complete integration with DeepSeek's chat and reasoning models.

> **Note:** This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
## Features

- **Text Generation**: Generate text using DeepSeek models (`deepseek-chat`, `deepseek-reasoner`)
- **Streaming**: Stream responses in real time for immediate feedback
- **Tool Calling**: Support for function/tool calling with custom tools (see the sketch after this list)
- **Reasoning Models**: Advanced reasoning capabilities with `deepseek-reasoner` (R1)
- **Prompt Caching**: Automatic prompt caching with cache hit/miss token tracking
- **Cache Metadata**: Track prompt-cache efficiency for optimization
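Tool calling follows the same builder flow as plain generation. The sketch below is illustrative only: the `Tool` type, its constructor, the `.tools(...)` setter, and the use of `serde_json` for the parameter schema are assumptions rather than the crate's confirmed API (see `examples/chat_tool_calling.rs` for the real thing).

```rust
use llm_kit_core::{GenerateText, Tool}; // names assumed
use serde_json::json;

// Hypothetical tool definition: a name, a description, and JSON Schema parameters
let weather_tool = Tool::new(
    "get_weather",
    "Get the current weather for a city",
    json!({
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
    }),
);

// `model` comes from `provider.chat_model(...)` as in the Quick Start below
let result = GenerateText::new(&model)
    .prompt("What's the weather in Paris?") // prompt setter assumed
    .tools(vec![weather_tool])              // setter name assumed
    .execute()
    .await?;
```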
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
# Crate names other than llm-kit-deepseek are assumptions; check crates.io for the exact set
llm-kit = "0.1"
llm-kit-core = "0.1"
llm-kit-deepseek = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

### Using the Client Builder (Recommended)

```rust
use llm_kit_deepseek::DeepSeekClient; // module path assumed
use llm_kit_core::GenerateText;       // builder name/path assumed

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = DeepSeekClient::new()
        .load_api_key_from_env()
        .build();

    let model = provider.chat_model("deepseek-chat");

    let result = GenerateText::new(&model)
        .prompt("Why is the sky blue?") // prompt setter assumed
        .execute()
        .await?;

    println!("{}", result.text); // result field name assumed
    Ok(())
}
```
### Using Settings Directly (Alternative)

```rust
use llm_kit_deepseek::{DeepSeekProvider, DeepSeekProviderSettings}; // type names assumed

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let settings = DeepSeekProviderSettings::new()
        .with_api_key("your-api-key");
    let provider = DeepSeekProvider::new(settings);

    // ...then use `provider` exactly as in the builder example above
    Ok(())
}
```
## Configuration

### Environment Variables

Set your DeepSeek API key as an environment variable:

```bash
export DEEPSEEK_API_KEY="your-api-key"

# Optional: a custom base URL can also be supplied via the builder's .base_url() method
```
### Using the Client Builder

```rust
use llm_kit_deepseek::DeepSeekClient; // module path assumed

let provider = DeepSeekClient::new()
    .api_key("your-api-key")
    .base_url("https://api.deepseek.com") // optional: custom endpoint
    .header("X-Custom-Header", "value")   // optional: extra header
    .build();
```
### Using Settings Directly

```rust
// Type names below are assumptions; check the crate docs for the exact items
use llm_kit_deepseek::{DeepSeekProvider, DeepSeekProviderSettings};

let settings = DeepSeekProviderSettings::new()
    .with_api_key("your-api-key")
    .with_base_url("https://api.deepseek.com")
    .add_header("X-Custom-Header", "value");

let provider = DeepSeekProvider::new(settings);
```
### Loading from Environment

```rust
use llm_kit_deepseek::DeepSeekClient; // module path assumed

// Reads from the DEEPSEEK_API_KEY environment variable
let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();
```
### Builder Methods

The `DeepSeekClient` builder supports:

- `.api_key(key)` - Set the API key
- `.base_url(url)` - Set a custom base URL
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers
- `.load_api_key_from_env()` - Load the API key from the `DEEPSEEK_API_KEY` environment variable
- `.build()` - Build the provider
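For instance, several headers can be attached in one call with `.headers(map)`; a minimal sketch, where the concrete map type (`HashMap`) accepted by `.headers(...)` is an assumption:

```rust
use std::collections::HashMap;
use llm_kit_deepseek::DeepSeekClient; // module path assumed

let mut extra = HashMap::new();
extra.insert("X-Request-Source", "readme-example");
extra.insert("X-Team", "platform");

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .headers(extra) // exact map type accepted here is an assumption
    .build();
```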
## Reasoning Models

DeepSeek's reasoner model (R1) provides advanced reasoning capabilities for complex problem-solving:

```rust
use llm_kit_deepseek::DeepSeekClient; // module path assumed
use llm_kit_core::GenerateText;       // builder name/path assumed

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();

let model = provider.chat_model("deepseek-reasoner");

let result = GenerateText::new(&model)
    .prompt("Is 9.11 greater than 9.9? Explain your reasoning.") // prompt setter assumed
    .execute()
    .await?;

// Access reasoning and answer separately
for output in result.experimental_output.iter() {
    println!("{:?}", output); // item type assumed to implement Debug
}
```
## Streaming

Stream responses for real-time output:

```rust
use llm_kit_deepseek::DeepSeekClient; // module path assumed
use llm_kit_core::StreamText;         // builder name/path assumed
use futures::StreamExt;               // crate providing StreamExt assumed

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();

let model = provider.chat_model("deepseek-chat");

let result = StreamText::new(&model)
    .prompt("Tell me a short story about Rust.") // prompt setter assumed
    .temperature(0.7)
    .execute()
    .await?;

let mut text_stream = result.text_stream;
while let Some(chunk) = text_stream.next().await {
    print!("{}", chunk?); // stream item assumed to be Result<String, _>
}
```
## Supported Models

All DeepSeek models are supported, including:

- `deepseek-chat` - Main chat model for general tasks and conversations
- `deepseek-reasoner` - Advanced reasoning model (R1) for complex problem-solving and logic tasks

For a complete list of available models, see the DeepSeek documentation.
## Provider-Specific Features

### Prompt Cache Statistics

DeepSeek provides detailed information about prompt cache hits and misses to help optimize performance:

```rust
use llm_kit_deepseek::DeepSeekClient; // module path assumed
use llm_kit_core::GenerateText;       // builder name/path assumed

let provider = DeepSeekClient::new()
    .load_api_key_from_env()
    .build();

let model = provider.chat_model("deepseek-chat");

let result = GenerateText::new(&model)
    .prompt("Hello!") // prompt setter assumed
    .execute()
    .await?;

// Access cache metadata; the DeepSeek API reports prompt_cache_hit_tokens and
// prompt_cache_miss_tokens, but the exact shape of this metadata is assumed
if let Some(metadata) = result.provider_metadata {
    println!("cache metadata: {:?}", metadata);
}
```
This helps you understand cache efficiency and optimize your prompts for better performance and cost savings.
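Since DeepSeek's cache matches on a shared prompt prefix, one practical pattern is to keep long, static instructions byte-identical across requests and vary only the trailing user content. A minimal sketch, reusing the assumed `GenerateText` builder from above (the `.system(...)` setter is also an assumption):

```rust
// Keep the static prefix identical across requests so it can be served from
// DeepSeek's prefix cache; only the user question changes per call.
const SYSTEM_PROMPT: &str = "You are a support assistant for ACME. [long, stable instructions]";

for question in ["How do I reset my password?", "How do I close my account?"] {
    let result = GenerateText::new(&model)
        .system(SYSTEM_PROMPT) // setter name assumed
        .prompt(question)
        .execute()
        .await?;

    // After the first call, provider_metadata should show nonzero cache-hit tokens
    println!("{:?}", result.provider_metadata);
}
```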
## Examples

See the `examples/` directory for complete examples:

- `chat.rs` - Basic chat completion with DeepSeek
- `stream.rs` - Streaming responses
- `chat_tool_calling.rs` - Tool calling with custom tools
- `stream_tool_calling.rs` - Streaming with tool calls
- `reasoning.rs` - Using the `deepseek-reasoner` model for complex reasoning
Run examples with:
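```bash
# Substitute any example name from the list above
cargo run --example chat
```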
Make sure to set your `DEEPSEEK_API_KEY` environment variable first.
## Documentation
## License

Licensed under:

- MIT license ([LICENSE-MIT](LICENSE-MIT))
## Contributing

Contributions are welcome! Please see the Contributing Guide for more details.