# tiny-agent-rs

A lightweight, type-safe Rust agent library for LLM tool calling with deterministic error handling.
## Features
- **Type-Safe Tool Calling**: Define tools with Rust types and automatic JSON Schema generation
- **Deterministic Error Handling**: Comprehensive error types with structured payloads
- **Async-First**: Built on Tokio for efficient async execution
- **Modular Design**: Clean separation between tools, validation, and agent logic
- **OpenAI Integration**: Native support for OpenAI function calling
- **CLI Tool**: Ready-to-use command-line interface
## Quick Start
### Installation
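Add the dependency with Cargo (assuming the crate is published to crates.io under this name):

```bash
cargo add tiny-agent-rs

# Pull in the bundled CLI binary as well (feature name is an assumption):
cargo add tiny-agent-rs --features cli
```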
### Basic Usage
A minimal, end-to-end sketch. The item paths below (`Agent`, `FunctionFactory`, and the OpenAI `Client`) are assumptions inferred from the Architecture section, not the crate's confirmed API; check the generated docs for the real names:
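```rust
use tiny_agent_rs::{Agent, FunctionFactory};
use tiny_agent_rs::openai::Client; // assumed module path for the OpenAI client

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Constructor shape is assumed; reads the key from the environment.
    let client = Client::new(std::env::var("OPENAI_API_KEY")?);

    // Register tools on the factory; an empty factory still allows plain chat.
    let factory = FunctionFactory::new();

    let agent = Agent::new(client, factory);
    let answer = agent.run("What is 2 + 2?").await?; // `run` is an assumed entry point
    println!("{answer}");
    Ok(())
}
```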
### CLI Usage
The commands below run the binary through Cargo with the `cli` feature; the flag names are illustrative assumptions rather than the CLI's confirmed interface:
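```bash
# Set your OpenAI API key
export OPENAI_API_KEY="sk-..."

# Run the agent
cargo run --features cli -- "What is 2 + 2?"

# Use different model (flag name is an assumption)
cargo run --features cli -- --model gpt-4o "What is 2 + 2?"

# Custom timeout and iterations (flag names are assumptions)
cargo run --features cli -- --timeout 60 --max-iterations 5 "What is 2 + 2?"
```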
## Creating Custom Tools
Tools are plain types implementing the `Tool` trait. The sketch below assumes `serde` for parameter deserialization and `schemars::JsonSchema` for the advertised schema generation; the trait's exact method names and signatures are assumptions, so consult the crate docs:
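```rust
use tiny_agent_rs::Tool;
use async_trait::async_trait;
use schemars::JsonSchema;
use serde::Deserialize;

// Parameters are ordinary Rust types; deriving JsonSchema yields the
// schema that gets advertised to the LLM.
#[derive(Deserialize, JsonSchema)]
struct AddParams {
    a: f64,
    b: f64,
}

struct AddTool;

#[async_trait]
impl Tool for AddTool {
    // Method names and signatures here are assumptions about the trait's shape.
    fn name(&self) -> &str {
        "add"
    }

    fn description(&self) -> &str {
        "Add two numbers and return their sum"
    }

    // Assumes tiny_agent_rs::Error implements From<serde_json::Error>.
    async fn call(&self, params: serde_json::Value) -> Result<serde_json::Value, tiny_agent_rs::Error> {
        let p: AddParams = serde_json::from_value(params)?;
        Ok(serde_json::json!({ "sum": p.a + p.b }))
    }
}
```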
## Architecture
- **Agent**: Main orchestrator handling LLM interactions and tool execution
- **FunctionFactory**: Registry and execution manager for tools
- **Tool**: Trait for implementing callable functions with schema validation
- **Validator**: Parameter validation using serde or JSON Schema
- **Error**: Comprehensive error handling with structured payloads
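These pieces compose roughly as sketched below; the item names are the same assumptions used in the examples above, not the crate's confirmed API:

```rust
use tiny_agent_rs::{Agent, FunctionFactory};
use tiny_agent_rs::openai::Client; // assumed module path

// Hypothetical wiring, reusing AddTool from the previous section: the
// FunctionFactory holds registered tools plus their schemas, the Agent drives
// the LLM loop, arguments are checked by the Validator before each Tool runs,
// and failures surface as structured Error values.
fn build_agent(client: Client) -> Agent {
    let mut factory = FunctionFactory::new();
    factory.register(Box::new(AddTool)); // registration method is assumed
    Agent::new(client, factory)
}
```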
## Examples
See the `examples/` directory for more complete examples:
- Basic calculator tool
- Weather information tool
- Custom tool implementation
- Error handling patterns
## Development
The usual Cargo workflow applies; the example names and the `cli` feature flag below are assumptions, and the repository URL is a placeholder:
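```bash
# Clone the repository (URL placeholder)
git clone <repo-url>
cd tiny-agent-rs

# Run tests
cargo test

# Run example (example names are assumptions; see examples/)
cargo run --example calculator

# Build with CLI (feature name is an assumption)
cargo build --features cli
```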
## Pre-commit Hooks and Secret Scanning

- Point Git hooks at `githooks/`: `git config core.hooksPath githooks`.
- Install gitleaks and ensure it is on your `PATH`; the pre-commit hook fails fast if it cannot run `gitleaks protect --staged --redact`.
- The hook sequence is `gitleaks` → `cargo fmt --all -- --check` → `cargo clippy --all-targets --all-features -- -D warnings` → `cargo test`.
- Run `gitleaks detect --redact --source .` before opening pull requests to scan the full tree.
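Put together, the hook behaves like this sketch (the actual `githooks/pre-commit` script may differ):

```sh
#!/bin/sh
# githooks/pre-commit: sketch reconstructed from the sequence above
set -e

gitleaks protect --staged --redact
cargo fmt --all -- --check
cargo clippy --all-targets --all-features -- -D warnings
cargo test
```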
## Requirements
- Rust 1.70+
- OpenAI API key for LLM integration
- Tokio runtime
## License
MIT License - see the `LICENSE` file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Roadmap
- Additional LLM provider support
- Streaming responses
- Tool result caching
- More built-in tools
- WASM support