# llm-toolkit

Basic LLM tools for Rust
## Motivation & Philosophy
High-level LLM frameworks like LangChain, while powerful, can be problematic in Rust. Their heavy abstractions and complex type systems often conflict with Rust's strengths, imposing significant constraints and learning curves on developers.
There is a clear need for a different kind of tool: a low-level, unopinionated, and minimalist toolkit that provides robust "last mile" utilities for LLM integration, much like how `candle` provides core building blocks for ML without dictating the entire application architecture.

This document proposes the creation of `llm-toolkit`, a new library crate designed to be the professional's choice for building reliable, high-performance LLM-powered applications in Rust.
## Core Design Principles
- **Minimalist & Unopinionated:** The toolkit will NOT impose any specific application architecture. Developers are free to design their own `UseCase`s and `Service`s; `llm-toolkit` simply provides a set of sharp, reliable "tools" to be called when needed.
- **Focused on the "Last Mile Problem":** The toolkit focuses on solving the most common and frustrating problems that occur at the boundary between a strongly-typed Rust application and the unstructured, often unpredictable string-based responses from LLM APIs.
- **Minimal Dependencies:** The toolkit will have minimal dependencies (primarily `serde` and `minijinja`) to ensure it can be added to any Rust project with negligible overhead and maximum compatibility.
## Features
| Feature Area | Description | Key Components | Status |
|---|---|---|---|
| Content Extraction | Safely extracting structured data (like JSON) from unstructured LLM responses (see the first sketch below). | `extract` module (`FlexibleExtractor`, `extract_json`) | Implemented |
| Prompt Generation | Building complex prompts from Rust data structures with a powerful templating engine. | `prompt!` macro, `#[derive(ToPrompt)]` | Implemented |
| Intent Extraction | Extracting structured intents (e.g., enums) from LLM responses (see the second sketch below). | `intent` module (`IntentExtractor`, `PromptBasedExtractor`) | Implemented |
| Resilient Deserialization | Deserializing LLM responses into Rust types, handling schema variations. | (Planned) | Planned |
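
To give a feel for the Content Extraction feature, here is a minimal sketch. The module path is taken from the table above, but the exact signature and return type of `extract_json` are assumptions; consult the crate documentation for the real API.

```rust
use llm_toolkit::extract::extract_json; // path from the table above; signature assumed

// LLMs often bury structured data inside conversational prose.
let response = r#"Sure! Here is the user you asked for: {"name": "Yui", "role": "Engineer"}. Anything else?"#;

// Pull out just the JSON payload so it can be handed to serde.
let json = extract_json(response);
println!("{:?}", json);
```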
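
Intent Extraction follows the same spirit. The snippet below is a hand-rolled illustration of the pattern rather than the real `IntentExtractor`/`PromptBasedExtractor` API: ask the LLM to reply with a variant name, then map that string back onto a Rust enum.

```rust
// Hand-rolled illustration of the intent-extraction pattern;
// NOT the real `intent` module API.
#[derive(Debug, PartialEq)]
enum UserIntent {
    Search,
    CreateDocument,
}

fn extract_intent(response: &str) -> Option<UserIntent> {
    // A prompt-based extractor asks the LLM to answer with exactly one
    // variant name, then maps that answer back onto the enum.
    match response.trim() {
        "Search" => Some(UserIntent::Search),
        "CreateDocument" => Some(UserIntent::CreateDocument),
        _ => None,
    }
}

assert_eq!(extract_intent("CreateDocument"), Some(UserIntent::CreateDocument));
```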
## Prompt Generation

`llm-toolkit` offers two powerful and convenient ways to generate prompts, powered by the `minijinja` templating engine.
### 1. Ad-hoc Prompts with the `prompt!` macro

For quick prototyping and flexible prompt creation, the `prompt!` macro provides a `println!`-like experience. You can pass any `serde::Serialize`-able data as context.
```rust
use llm_toolkit::prompt;
use serde::Serialize;

#[derive(Serialize)]
struct User {
    name: String,
}

let user = User { name: "Yui".to_string() };
let task = "designing a new macro";

// Any serde::Serialize-able value can be passed as named template context.
let p = prompt!("Hello {{ user.name }}, your current task is {{ task }}.", user = user, task = task).unwrap();
assert_eq!(p, "Hello Yui, your current task is designing a new macro.");
```
### 2. Structured Prompts with `#[derive(ToPrompt)]`

For core application logic, you can derive the `ToPrompt` trait on your structs to generate prompts in a type-safe way.
**Setup:**

First, enable the `derive` feature in your `Cargo.toml`:
```toml
[dependencies]
llm-toolkit = { version = "0.1.0", features = ["derive"] }
serde = { version = "1.0", features = ["derive"] }
```
**Usage:**

Then, use the `#[derive(ToPrompt)]` and `#[prompt(...)]` attributes on your struct. The struct must also derive `serde::Serialize`.
```rust
use llm_toolkit::ToPrompt;
use serde::Serialize;

#[derive(ToPrompt, Serialize)]
#[prompt(template = "USER PROFILE:\nName: {{ name }}\nRole: {{ role }}")]
struct UserProfile {
    name: String,
    role: String,
}

let user = UserProfile {
    name: "Yui".to_string(),
    role: "World-Class Pro Engineer".to_string(),
};
let p = user.to_prompt();
// The following would be printed:
// USER PROFILE:
// Name: Yui
// Role: World-Class Pro Engineer
```
### 3. Enum Documentation with `#[derive(ToPrompt)]`
For enums, the ToPrompt derive macro provides flexible ways to generate prompts that describe your enum variants for LLM consumption. You can use doc comments, custom descriptions, or exclude variants entirely.
#### Basic Usage with Doc Comments

By default, the macro extracts documentation from Rust doc comments (`///`) on both the enum and its variants:
```rust
use llm_toolkit::ToPrompt;

/// Represents different user intents for a chatbot
#[derive(ToPrompt)]
enum UserIntent {
    // Illustrative variants: any doc-commented variants work the same way.
    /// User is asking a question
    AskQuestion,
    /// User wants to end the conversation
    EndConversation,
}
```
#### Advanced Attribute Controls

The `ToPrompt` derive macro supports powerful attribute-based controls for fine-tuning the generated prompts:
- `#[prompt("...")]` - Provide a custom description that overrides the doc comment
- `#[prompt(skip)]` - Exclude a variant from the prompt entirely (useful for internal-only variants)
- No attribute - Variants without doc comments or attributes will show just the variant name
Here's a comprehensive example showcasing all features:
```rust
use llm_toolkit::ToPrompt;

/// Represents different actions a user can take in the system
#[derive(ToPrompt)]
enum UserAction {
    /// User wants to create a new document
    CreateDocument,
    /// User is searching for existing content
    Search,
    #[prompt("Custom: User is updating their profile settings and preferences")]
    UpdateProfile,
    #[prompt(skip)]
    InternalDebugAction,
    DeleteItem,
}

let action = UserAction::CreateDocument;
let p = action.to_prompt();
// The following would be printed:
// UserAction: Represents different actions a user can take in the system
//
// Possible values:
// - CreateDocument: User wants to create a new document
// - Search: User is searching for existing content
// - UpdateProfile: Custom: User is updating their profile settings and preferences
// - DeleteItem
```
Note how in the output:

- `CreateDocument` and `Search` use their doc comments
- `UpdateProfile` uses the custom description from `#[prompt("...")]`
- `InternalDebugAction` is completely excluded due to `#[prompt(skip)]`
- `DeleteItem` appears with just its name since it has no documentation
## Future Directions
### Image Handling Abstraction
A planned feature is to introduce a unified interface for handling image inputs across different LLM providers. This would abstract away the complexities of dealing with various data formats (e.g., Base64, URLs, local file paths) and model-specific requirements, providing a simple and consistent API for multimodal applications.
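
As a rough illustration of the direction, a unified input type might look something like the sketch below. This is purely hypothetical: no such type ships in `llm-toolkit` today.

```rust
use std::path::PathBuf;

// Hypothetical sketch; it only illustrates the kind of unified
// input the planned abstraction could accept.
pub enum ImageSource {
    /// Raw image data, base64-encoded as many provider APIs expect.
    Base64(String),
    /// A remotely hosted image, for providers that fetch URLs themselves.
    Url(String),
    /// A local file to be read and encoded before the request is sent.
    Path(PathBuf),
}
```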