# LLMY
All-in-one LLM utilities for Rust — plug OpenAI / Azure settings straight into clap, track spend with built-in billing, and replay every request when things go wrong.
## Quick start
Add the dependency (the root crate re-exports everything):
```toml
[dependencies]
llmy = "0.3"
```
## Features
### 1. Clap integration — up to 3 LLM slots
llmy-clap provides three generated arg structs (OpenAISetup, OptOpenAISetup, OptOptOpenAISetup) so you can wire one, two, or three LLMs into any clap-based CLI with zero boilerplate. Each slot is controlled by its own set of env-vars / flags, and can be converted to the core LLM client in one call.
```rust
use clap::Parser;
use llmy_clap::OpenAISetup;    // primary
use llmy_clap::OptOpenAISetup; // optional secondary

// Field attributes here are illustrative; see the llmy-clap docs for the exact setup.
#[derive(Parser)]
struct Args {
    #[command(flatten)]
    llm: OpenAISetup,
    #[command(flatten)]
    opt_llm: OptOpenAISetup,
}
```
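The parsed structs are then converted into core LLM clients in one call. A minimal sketch of the calling side; `into_llm()` is a placeholder name, not the crate's actual method (check the llmy-clap docs for the real conversion):

```rust
#[tokio::main]
async fn main() {
    let args = Args::parse();
    // `into_llm()` is a hypothetical name for the one-call conversion.
    let primary = args.llm.into_llm();
    let secondary = args.opt_llm.into_llm(); // empty unless the OPT_* slot is configured
    // ... use `primary` / `secondary` as regular llmy clients
}
```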
Run it:
```bash
# OpenAI
OPENAI_API_KEY=sk-... cargo run
# Azure
OPENAI_API_KEY=... cargo run
```
Every setting (temperature, timeout, retries, max tokens, reasoning effort, tool choice, …) is exposed as a flag and an env-var:
| Flag | Env var | Default |
|---|---|---|
| `--model` | `OPENAI_API_MODEL` | `o1` |
| `--llm-temperature` | `LLM_TEMPERATURE` | `0.8` |
| `--llm-max-completion-tokens` | `LLM_MAX_COMPLETION_TOKENS` | `16384` |
| `--llm-retry` | `LLM_RETRY` | `5` |
| `--llm-prompt-timeout` | `LLM_PROMPT_TIMEOUT` | `1200` (s) |
| `--llm-stream` | `LLM_STREAM` | `false` |
| `--reasoning-effort` | `LLM_REASONING_EFFORT` | — |
The second and third slots use the prefixes OPT_ and OPT_OPT_ for their env-vars (e.g. OPT_OPENAI_API_KEY, OPT_OPT_OPENAI_API_MODEL).
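For example, you can drive the primary slot from flags and a secondary model purely from its prefixed env-vars (a plain `cargo run` is shown for illustration; substitute your own binary):

```bash
OPENAI_API_KEY=sk-... \
OPT_OPENAI_API_KEY=sk-... \
OPT_OPENAI_API_MODEL=gpt-4o-mini \
cargo run -- --model gpt-4o --llm-temperature 0.2
```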
### 2. Detailed debug logging (`LLM_DEBUG`)
Point LLM_DEBUG at a directory and every LLM round-trip is saved as an XML-like .xml (not strict XML — just an easy-to-skim tagged format) and a raw .json — perfect for post-mortem debugging or dataset building.
```bash
LLM_DEBUG=./debug_logs OPENAI_API_KEY=sk-... cargo run
```
This creates a per-process subfolder with numbered files:
```text
debug_logs/
└── 48291-0-main/
    ├── llm-000000000001.xml
    ├── llm-000000000001.json
    ├── llm-000000000002.xml
    └── llm-000000000002.json
```
The .xml file looks like:
```text
=====================
You are a helpful assistant.
Explain async Rust in one sentence.
{
  "type": "object",
  "properties": { "query": { "type": "string" } }
}
=====================
=====================
Async Rust lets you write concurrent code ...
=====================
```
The .json companion contains the full serialised CreateChatCompletionRequest / CreateChatCompletionResponse objects for programmatic analysis.
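Because the `.json` side is plain serialised data, post-processing is straightforward. A minimal sketch using `serde_json`; the exact keys follow the serialised request/response structs, so print them first to see what was captured:

```rust
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load one captured round-trip (path taken from the tree above).
    let raw = std::fs::read_to_string("debug_logs/48291-0-main/llm-000000000001.json")?;
    let v: Value = serde_json::from_str(&raw)?;

    // Print the top-level keys to see what was recorded for this call.
    if let Some(obj) = v.as_object() {
        for key in obj.keys() {
            println!("{key}");
        }
    }
    Ok(())
}
```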
### 3. Built-in billing with automatic budget enforcement
llmy ships with up-to-date per-token pricing for 30+ models (GPT-4o, o1, o3, the GPT-5 family, Gemini, …). Token usage is tracked in real time, including cached-input and reasoning-token discounts. When spend exceeds the budget cap, the client immediately returns `LLMYError::Billing` — no more surprise bills.
```rust
use llmy::{LLM, LLMYError};
use llmy::OpenAIModel;
use llmy::LLMSettings;

// Illustrative only: the exact constructor and method signatures may differ; see the docs.
let llm = LLM::new(OpenAIModel::O1, LLMSettings::default());
match llm.prompt_once("Explain async Rust in one sentence").await {
    Ok(answer) => println!("{answer}"),
    Err(err) => eprintln!("request failed: {err}"), // budget overruns arrive as LLMYError::Billing
}
```
Via clap, the budget cap defaults to $10 and can be overridden with the corresponding flag or env-var.
A sample of the built-in pricing table (USD per 1 M tokens):
| Model | Input | Output | Cached input |
|---|---|---|---|
| `gpt-4o` | 2.50 | 10.00 | 1.25 |
| `gpt-4o-mini` | 0.15 | 0.60 | 0.075 |
| `o1` | 15.00 | 60.00 | 7.50 |
| `o3` | 2.00 | 8.00 | 0.50 |
| `o4-mini` | 1.10 | 4.40 | 0.275 |
| `gpt-4.1` | 2.00 | 8.00 | 0.50 |
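As a quick sanity check on how these prices translate into spend: a single `gpt-4o` call that consumes 10,000 input tokens and 2,000 output tokens costs 10,000 / 1M × $2.50 + 2,000 / 1M × $10.00 = $0.025 + $0.020 = $0.045.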
For models not in the list, pass pricing inline:
```text
# name, in, out, cached
```
### 4. Offline token estimation
llmy includes a built-in tokenizer with fast, offline BPE token estimation for 110+ models across OpenAI, Anthropic, Google, and more. Encodings and model metadata are baked into the binary at compile time — no network calls, no data files to ship.
Four encodings are supported: cl100k_base, o200k_base, p50k_base (OpenAI / tiktoken) and claude (Anthropic).
```rust
// Import paths, exact signatures, and the example arguments are illustrative.
use llmy::tokenizer::{encode, count_tokens, count_tokens_for_model};

// Encode text into token IDs
let tokens: Vec<u32> = encode("Explain async Rust in one sentence.", "o200k_base");

// Count tokens directly
let n = count_tokens("Explain async Rust in one sentence.", "o200k_base");

// Or let the library resolve the encoding from a model ID
let n = count_tokens_for_model("Hello, world!", "gpt-4o");            // e.g. Some(4)
let n = count_tokens_for_model("Hello, world!", "claude-3-5-sonnet"); // e.g. Some(4)
```
The model registry is generated from the same source-of-truth JSON used by the billing system, so model look-ups, pricing, and token counts always stay in sync.
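One practical consequence: you can estimate what a prompt will cost before sending it, entirely offline. A sketch combining the (assumed) tokenizer call from the previous example with the `gpt-4o` input price from the billing table:

```rust
// Assumes the count_tokens_for_model signature sketched above.
let prompt = "Explain async Rust in one sentence.";
if let Some(tokens) = count_tokens_for_model(prompt, "gpt-4o") {
    // gpt-4o input pricing: $2.50 per 1M tokens (see the table above).
    let estimated = tokens as f64 / 1_000_000.0 * 2.50;
    println!("~{tokens} input tokens, ≈ ${estimated:.6}");
}
```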
## License
MIT