# ai
Simple-to-use AI library for Rust with OpenAI-compatible providers and graph-based workflow execution.

This library is a work in progress, and the API is subject to change.
## Table of Contents

- [Using the library](#using-the-library)
- [Cargo Features](#cargo-features)
- [Examples](#examples)
- [Chat Completion API](#chat-completion-api)
- [Embeddings API](#embeddings-api)
- [Graph](#graph)
- [Clients](#clients)
- [LICENSE](#license)
## Using the library

Add `ai` as a dependency, along with `tokio`. For streaming, also add the `futures` crate; for `CancellationToken` support, add `tokio-util`. This library uses `reqwest` as the HTTP client when making requests to the servers.
```sh
cargo add ai
```
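The companion crates mentioned above are added the same way (the `tokio` feature flag below is illustrative, not required by this library):

```sh
cargo add tokio --features full  # async runtime
cargo add futures                # for streaming
cargo add tokio-util             # for CancellationToken support
```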
For comprehensive usage documentation in your projects, download the CLAUDE.md guide:

```sh
# Using wget
# Using curl
```
## Cargo Features

| Feature | Description | Default |
|---|---|---|
| `openai_client` | Enable OpenAI client | ✅ |
| `azure_openai_client` | Enable Azure OpenAI client | ✅ |
| `ollama_client` | Enable Ollama client | |
| `native_tls` | Enable native TLS for the reqwest HTTP client | |
| `rustls_tls` | Enable rustls TLS for the reqwest HTTP client | ✅ |
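Non-default feature combinations use standard Cargo syntax; for example, swapping in native TLS (feature names are from the table above):

```sh
cargo add ai --no-default-features --features openai_client,native_tls
```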
## Examples
| Example Name | Description |
|---|---|
| azure_openai_chat_completions | Basic chat completions using Azure OpenAI API |
| chat_completions_streaming | Chat completions streaming example |
| chat_completions_streaming_with_cancellation_token | Chat completions streaming with cancellation token |
| chat_completions_tool_calling | Tool/Function calling example |
| chat_console | Console chat example |
| clients_dynamic_runtime | Dynamic runtime client selection |
| graph_example | Graph workflow execution with conditional logic |
| openai_chat_completions | Basic chat completions using OpenAI API |
| openai_embeddings | Text embeddings with OpenAI API |
| react_agent | ReAct agent with reasoning and action capabilities |
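Each example can be run with Cargo's standard `--example` flag (assuming the usual `examples/` directory layout):

```sh
cargo run --example openai_chat_completions
```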
## Chat Completion API
A minimal sketch; the builder pattern and tuple messages follow this README, while the exact module paths, the `ChatCompletionRequestBuilder` name, and the `chat_completions` method are assumptions:

```rust
use ai::chat_completions::ChatCompletionRequestBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads the API key from the environment (e.g. OPENAI_API_KEY).
    let openai = ai::clients::openai::Client::from_env()?;

    let request = ChatCompletionRequestBuilder::default()
        .model("gpt-4o-mini".to_string())
        .messages(vec![
            ("system", "You are a helpful assistant.").into(),
            ("user", "Tell me a joke about Rust.").into(),
        ])
        .build()?;

    // `chat_completions` is an assumed method name for sending the request.
    let response = openai.chat_completions(&request).await?;
    println!("{response:?}");
    Ok(())
}
```
## Embeddings API
A minimal sketch; the module path, the `EmbeddingsRequestBuilder` name, and the `embeddings` method are assumptions, with only the builder style taken from this README:

```rust
use ai::embeddings::EmbeddingsRequestBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let openai = ai::clients::openai::Client::from_env()?;

    let request = EmbeddingsRequestBuilder::default()
        .model("text-embedding-3-small".to_string())
        .input(vec!["Hello, world!".to_string()])
        .build()?;

    // `embeddings` and the response shape are assumed names.
    let response = openai.embeddings(&request).await?;
    println!("{} embedding(s) returned", response.data.len());
    Ok(())
}
```
Using tuples for messages. An unrecognized role will cause a panic. (The model name below is illustrative.)

```rust
let request = &ChatCompletionRequestBuilder::default()
    .model("gpt-4o-mini".to_string())
    .messages(vec![
        // The first tuple element must be a recognized role such as
        // "system", "user", or "assistant"; anything else panics.
        ("system", "You are a helpful assistant.").into(),
        ("user", "Write a haiku about Rust.").into(),
    ])
    .build()?;
```
## Graph

Build and execute complex workflows with conditional logic and async node execution.
A minimal sketch; the node names and the `improve`/`polish` routing come from the Mermaid output below, while the `Graph` type and its methods are assumptions:

```rust
use ai::graph::Graph;
use std::collections::HashMap;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut graph = Graph::new();

    // Async nodes that receive and return a shared state map.
    graph.add_node("generate_content", |mut state: HashMap<String, String>| async move {
        state.insert("draft".to_string(), "first draft".to_string());
        Ok(state)
    });
    graph.add_node("improve_content", |state: HashMap<String, String>| async move { Ok(state) });
    graph.add_node("polish_content", |state: HashMap<String, String>| async move { Ok(state) });

    // Conditional edge: route to "improve" or "polish" based on state.
    graph.add_conditional_edge("generate_content", |state: &HashMap<String, String>| {
        if state.contains_key("needs_improvement") { "improve" } else { "polish" }
    });
    graph.add_edge("improve_content", "polish_content");

    graph.set_entry_point("generate_content");
    let final_state = graph.run(HashMap::new()).await?;
    println!("{final_state:?}");

    // draw_mermaid() is named in this README; see the diagram below.
    println!("{}", graph.draw_mermaid());
    Ok(())
}
```
The graph can be visualized using Mermaid syntax. Use `draw_mermaid()` to generate the diagram:
```mermaid
flowchart TD
    __start__([START])
    __end__([END])
    improve_content[improve_content]
    generate_content[generate_content]
    polish_content[polish_content]
    __start__ --> generate_content
    improve_content --> polish_content
    polish_content --> __end__
    generate_content -->|improve| improve_content
    generate_content -->|polish| polish_content
    classDef startEnd fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    class __start__,__end__ startEnd
```
Visit [mermaid.live](https://mermaid.live) to see a live preview of your graph diagrams.
## Clients

### OpenAI
A sketch of the three constructors named in this README; the `Client` type name and argument lists are assumptions:

```rust
// Explicit API key (argument shape is an assumption).
let openai = ai::clients::openai::Client::new("api-key")?;
// Custom base URL, e.g. for an OpenAI-compatible server.
let openai = ai::clients::openai::Client::from_url("api-key", "http://localhost:11434/v1/")?;
// Read configuration from environment variables (e.g. OPENAI_API_KEY).
let openai = ai::clients::openai::Client::from_env()?;
```
#### Gemini API via OpenAI

Set `http1_title_case_headers` on the `reqwest` client for the Gemini API.
A sketch; the builder entry point is an assumption, the `http_client`, `api_key`, and `base_url` methods are from this README, and the base URL is Gemini's OpenAI-compatible endpoint:

```rust
// reqwest's http1_title_case_headers() sends headers in the casing the Gemini API expects.
let http_client = reqwest::Client::builder()
    .http1_title_case_headers()
    .build()?;

let gemini = ai::clients::openai::ClientBuilder::default()
    .http_client(http_client)
    .api_key(std::env::var("GEMINI_API_KEY")?)
    .base_url("https://generativelanguage.googleapis.com/v1beta/openai/".to_string())
    .build()?;
```
### Azure OpenAI

```sh
cargo add ai --features=azure_openai_client
```
A sketch; the builder entry point, the bearer-token variant, and the API version string are assumptions, while `auth`, `api_version`, `base_url`, and the commented-out API-key variant are from this README:

```rust
let azure_openai = ai::clients::azure_openai::ClientBuilder::default()
    .auth(ai::clients::azure_openai::Auth::BearerToken("token".to_string()))
    // .auth(ai::clients::azure_openai::Auth::ApiKey(
    //     std::env::var(ai::clients::azure_openai::AZURE_OPENAI_API_KEY_ENV_VAR)
    //         .map_err(|e| Error::EnvVarError(ai::clients::azure_openai::AZURE_OPENAI_API_KEY_ENV_VAR.to_string(), e))?
    //         .into(),
    // ))
    .api_version("2024-02-15-preview") // illustrative version string
    .base_url("https://my-resource.openai.azure.com".to_string())
    .build()?;
```
Pass the `deployment_id` as the `model` of the `ChatCompletionRequest`.
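For example (the deployment name and builder name below are illustrative assumptions):

```rust
let request = ChatCompletionRequestBuilder::default()
    .model("my-gpt-4o-deployment".to_string()) // the Azure deployment_id
    .messages(vec![("user", "Hello!").into()])
    .build()?;
```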
Use the following command to get a bearer token.
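With the Azure CLI, for example:

```sh
az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken --output tsv
```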
### Ollama

We suggest using the `openai` client instead of `ollama` for maximum compatibility.
A sketch of the two constructors named in this README; the `Client` type name and arguments are assumptions, and the URL is Ollama's default address:

```rust
let ollama = ai::clients::ollama::Client::new()?;
let ollama = ai::clients::ollama::Client::from_url("http://localhost:11434")?;
```
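Alternatively, following the suggestion above (the `from_url` signature is an assumption), point the OpenAI client at Ollama's OpenAI-compatible endpoint:

```rust
// Ollama serves an OpenAI-compatible API under /v1 and ignores the API key value.
let ollama = ai::clients::openai::Client::from_url("ollama", "http://localhost:11434/v1/")?;
```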
## LICENSE
MIT