# ds-api
A Rust SDK for building LLM agents on top of DeepSeek (and any OpenAI-compatible API). Define tools in plain Rust, plug them into an agent, and consume a stream of events as the model thinks, calls tools, and responds.
## Quickstart

Set your API key and add the dependencies:
```toml
# Cargo.toml
[dependencies]
ds-api = "0.6.0"
futures = "0.3"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```
```rust
use ds_api::{tool, Agent, AgentEvent, Tool};
use futures::StreamExt;

struct Weather;

/// Look up weather conditions.
#[tool]
impl Tool for Weather {
    /// city: name of the city to check
    async fn current_weather(&self, city: String) -> String {
        format!("22°C and sunny in {city}")
    }
}

#[tokio::main]
async fn main() {
    let mut stream = Agent::new()
        .add_tool(Weather)
        .chat("What's the weather in Tokyo right now?");

    while let Some(event) = stream.next().await {
        if let AgentEvent::Token(text) = event {
            print!("{text}");
        }
    }
}
```
The agent runs the full loop for you: it calls the model, dispatches any tool calls, feeds the results back, and keeps going until the model stops requesting tools.
## Defining tools
Annotate an `impl Tool for YourStruct` block with `#[tool]`. Each method becomes a callable tool:

- Doc comment on the impl block → tool description
- `/// param: description` lines in each method's doc comment → argument descriptions
- Return type just needs to be `serde::Serialize` — the macro handles the JSON schema
```rust
use ds_api::{tool, Tool};
use serde::Serialize;

struct Weather;

#[derive(Serialize)]
struct Forecast {
    summary: String,
    high_c: f32,
}

/// Tools for checking the weather.
#[tool]
impl Tool for Weather {
    /// city: name of the city to check
    async fn current_weather(&self, city: String) -> String {
        format!("22°C and sunny in {city}")
    }

    /// city: name of the city to check
    /// days: how many days ahead to look
    async fn forecast(&self, city: String, days: u8) -> Forecast {
        Forecast {
            summary: format!("Mild in {city} for the next {days} days"),
            high_c: 24.0,
        }
    }
}
```
One struct can have multiple methods — they register as separate tools. Stack as many tools as you need with `.add_tool(...)`.
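For example, several tool structs can be chained onto one agent (`Search` and `Calculator` here are placeholders for your own tool types):

```rust
// Each .add_tool call registers every #[tool] method on that struct.
let agent = Agent::new()
    .add_tool(Weather)
    .add_tool(Search)      // hypothetical second tool struct
    .add_tool(Calculator); // hypothetical third tool struct
```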
## Streaming

Call `.with_streaming()` to get token-by-token output instead of waiting for the full response:
```rust
let mut stream = Agent::new()
    .with_streaming()
    .add_tool(Weather)
    .chat("What's the forecast for Tokyo this week?");

while let Some(event) = stream.next().await {
    if let AgentEvent::Token(text) = event {
        print!("{text}");
    }
}
```
## `AgentEvent` reference
| Variant | When | Notes |
|---|---|---|
| `Token(String)` | Model is speaking | Streaming: one fragment per chunk. Non-streaming: whole reply at once. |
| `ReasoningToken(String)` | Model is thinking | Only from reasoning models (e.g. `deepseek-reasoner`). |
| `ToolCall(ToolCallChunk)` | Tool call in progress | `chunk.id`, `chunk.name`, `chunk.delta`. Streaming: multiple per call. Non-streaming: one per call. |
| `ToolResult(ToolCallResult)` | Tool finished | `result.name`, `result.args`, `result.result`. |
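A loop handling all four variants might look like this (a sketch; here the answer goes to stdout while reasoning and tool traffic go to stderr, but that split is up to you):

```rust
while let Some(event) = stream.next().await {
    match event {
        AgentEvent::Token(text) => print!("{text}"),
        AgentEvent::ReasoningToken(text) => eprint!("{text}"),
        AgentEvent::ToolCall(chunk) => eprintln!("\n[calling {}...]", chunk.name),
        AgentEvent::ToolResult(result) => eprintln!("[{} -> {}]", result.name, result.result),
        _ => {} // covers any variants added later
    }
}
```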
## Using a different model or provider
Any OpenAI-compatible endpoint works:
```rust
// OpenRouter: point the agent at any OpenAI-compatible endpoint.
// (Argument order here is illustrative; see the Agent::custom docs.)
let agent = Agent::custom(
    "https://openrouter.ai/api/v1",
    "openrouter/auto",
    std::env::var("OPENROUTER_API_KEY").expect("set OPENROUTER_API_KEY"),
);

// deepseek-reasoner (thinks before responding)
let agent = Agent::new()
    .with_model("deepseek-reasoner");
```
## Injecting messages mid-run

You can send a message into a running agent loop — useful when the user types something while the agent is still executing tools.

The interrupt channel is attached with `.with_interrupt_channel()` and returns the agent plus a sender you can use from any task. The sender type (`InterruptSender`) is a re-export of `tokio::sync::mpsc::UnboundedSender<String>`, so it is cheap to clone and use concurrently:
```rust
let (agent, tx) = Agent::new()
    .with_streaming()
    .add_tool(Weather)
    .with_interrupt_channel();
```
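Because the sender is an unbounded mpsc sender, a clone can be moved into any task:

```rust
// InterruptSender is an UnboundedSender<String>: clone it freely.
let tx2 = tx.clone();
tokio::spawn(async move {
    // e.g. fired from a UI callback or a timeout
    tx2.send("User pressed stop.".to_string()).unwrap();
});
```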
### Behavior and semantics
- Sending an interrupt: call `tx.send("...".into()).unwrap()` from any task or callback. The message will be delivered into the agent's conversation history.
- During tool execution: the agent actively listens for interrupts while a tool is running. If an interrupt message arrives while a tool is executing, the executor will:
  - Immediately append the interrupt text to the conversation history as a `Role::User` message (and drain any queued interrupt messages in order).
  - Abort the currently running tool (the tool future is cancelled) and stop executing further tools for the current round. Cancellation can take effect only while the tool is at an `.await` point.
  - Record a placeholder result for the aborted tool (the runtime exposes this as an error-shaped JSON result), then proceed to the next API turn so the model sees the injected user message.
- Between turns / idle transition: any queued interrupts are drained before the next API call, so injected messages are always visible to the model on the next turn.
### Example: cancel a running tool and pivot
```rust
// Start the agent and get an interrupt sender.
let (agent, tx) = Agent::new()
    .with_streaming()
    .add_tool(Weather)
    .with_interrupt_channel();

// In another task (e.g. user action), send an interrupt to change the plan.
tx.send("Never mind the weather, find me a good lunch spot instead.".into()).unwrap();

// If the agent is currently executing a tool, that tool will be aborted and the
// interrupt will be pushed into history so the next API turn sees it.
let mut stream = agent.chat("Plan my afternoon in Tokyo.");
```
### Notes

- `InterruptSender` is non-blocking and can be cloned; use it from any async context without awaiting.
- Aborting a tool is implemented by cancelling the tool future (via the runtime). This is effective for most async tools, but if a tool holds on to external, non-cancellable resources you may want to implement cooperative cancellation inside the tool (for example, by checking a cancellation token), as sketched below.
- The agent ensures interrupt message ordering by draining all remaining queued interrupt messages whenever an interrupt is observed.
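A minimal sketch of cooperative cancellation, assuming you share a `tokio_util::sync::CancellationToken` with the tool. The `Downloader` struct and `fetch_chunk` helper are illustrative, not part of ds-api:

```rust
use ds_api::{tool, Tool};
use tokio_util::sync::CancellationToken;

struct Downloader {
    cancel: CancellationToken, // clone of a token held by the caller
}

/// Download a file in chunks.
#[tool]
impl Tool for Downloader {
    /// url: the file to fetch
    async fn download(&self, url: String) -> String {
        let mut bytes = 0usize;
        loop {
            // Check between awaits so the job also stops cleanly when the
            // future can't simply be dropped mid-operation.
            if self.cancel.is_cancelled() {
                return format!("cancelled after {bytes} bytes of {url}");
            }
            match fetch_chunk(&url, bytes).await {
                Some(n) => bytes += n,
                None => return format!("downloaded {bytes} bytes from {url}"),
            }
        }
    }
}

// Illustrative stand-in for real chunked I/O.
async fn fetch_chunk(_url: &str, offset: usize) -> Option<usize> {
    if offset >= 4096 { None } else { Some(1024) }
}
```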
## MCP tools
MCP (Model Context Protocol) lets you use external processes as tools — Node scripts, Python services, anything that speaks MCP over stdio:
```rust
// Requires the `mcp` feature. `mcp_server` stands in for a connected
// MCP server handle; see the crate docs for how to construct one.
let agent = Agent::new()
    .add_tool(mcp_server);
```
## Contributing
PRs welcome. Keep changes focused; update public API docs when behaviour changes.
## License
MIT OR Apache-2.0