# anthropic-async
A production-ready Anthropic API client for Rust with prompt caching support.
## Features
- ✅ Full support for Messages API (create, count tokens)
- ✅ Models API (list, get)
- 🚀 Prompt caching with TTL management
- 🔐 Dual authentication (API key or Bearer token)
- 🔄 Automatic retry with exponential backoff
- 🎛️ Beta feature support
- 📝 Comprehensive examples
- 🦀 100% safe Rust with strong typing
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
anthropic-async = "0.1.0"
```
## Quick Start

```rust
use anthropic_async::{Client, MessagesCreateRequest};

// Assumes a Tokio runtime; request fields are elided here.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads ANTHROPIC_API_KEY from the environment
    let client = Client::new();

    let req = MessagesCreateRequest {
        // model, max_tokens, messages, ...
        ..Default::default()
    };

    let resp = client.messages.create(req).await?;
    println!("{resp:?}");
    Ok(())
}
```
## Authentication

The client supports two authentication methods:

### API Key (Primary)

```rust
// From the environment: uses ANTHROPIC_API_KEY
let client = Client::new();

// Explicit
let config = Config::new()
    .with_api_key("sk-ant-...");
let client = Client::with_config(config);
```

### Bearer Token (OAuth/Enterprise)

```rust
// From the environment: set ANTHROPIC_AUTH_TOKEN
let client = Client::new();

// Explicit
let config = Config::new()
    .with_bearer("my-oauth-token");
let client = Client::with_config(config);
```
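Since the API key is the primary method, it takes precedence when both environment variables are set. That resolution order can be sketched as follows; the `resolve_auth` helper is illustrative (the header names mirror the Anthropic HTTP API, not this crate's internals):

```rust
use std::env;

/// Pick a credential, preferring the API key (primary) over the bearer
/// token. Returns the HTTP header name and value that would be sent.
fn resolve_auth(api_key: Option<&str>, bearer: Option<&str>) -> Option<(String, String)> {
    if let Some(key) = api_key {
        return Some(("x-api-key".to_string(), key.to_string()));
    }
    bearer.map(|token| ("authorization".to_string(), format!("Bearer {token}")))
}

fn main() {
    let header = resolve_auth(
        env::var("ANTHROPIC_API_KEY").ok().as_deref(),
        env::var("ANTHROPIC_AUTH_TOKEN").ok().as_deref(),
    );
    match header {
        Some((name, _)) => println!("authenticating via `{name}` header"),
        None => println!("no credentials found in the environment"),
    }
}
```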
## Prompt Caching

Reduce costs and latency with prompt caching:

```rust
use anthropic_async::CacheControl;

let req = MessagesCreateRequest {
    // model, max_tokens, messages, ...
    // Mark a block (e.g. a large system prompt) as cacheable by
    // attaching a CacheControl value to it.
    ..Default::default()
};
```
### TTL Rules
- Cache entries can have 5-minute or 1-hour TTLs
- When mixing TTLs, 1-hour entries must appear before 5-minute entries
- Minimum cacheable prompt: 1024 tokens (Opus/Sonnet), 2048 (Haiku 3.5)
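The ordering rule for mixed TTLs can be expressed as a small check. The `Ttl` enum below is an illustrative stand-in, not a type exported by this crate:

```rust
/// Illustrative stand-in for a cache entry's TTL setting.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Ttl {
    FiveMinutes,
    OneHour,
}

/// Returns true if every 1-hour entry appears before every 5-minute
/// entry, as required when TTLs are mixed in one request.
fn ttl_order_ok(entries: &[Ttl]) -> bool {
    let mut seen_five_minutes = false;
    for ttl in entries {
        match ttl {
            Ttl::FiveMinutes => seen_five_minutes = true,
            Ttl::OneHour if seen_five_minutes => return false,
            Ttl::OneHour => {}
        }
    }
    true
}

fn main() {
    println!("{}", ttl_order_ok(&[Ttl::OneHour, Ttl::FiveMinutes])); // true
    println!("{}", ttl_order_ok(&[Ttl::FiveMinutes, Ttl::OneHour])); // false
}
```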
## Beta Features

Enable beta features using the configuration:

```rust
use anthropic_async::BetaFeature;

let config = Config::new()
    .with_beta_features(&[BetaFeature::PromptCaching]);
```

Or use custom beta strings:

```rust
let config = Config::new()
    .with_beta("some-beta-feature-2025-01-01");
```
## Error Handling and Retries
The client automatically retries on:
- 408 Request Timeout
- 409 Conflict
- 429 Rate Limited
- 5xx Server Errors (including 529 Overloaded)
Retries use exponential backoff and respect Retry-After headers.
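A minimal sketch of that schedule, assuming a doubling delay with a cap (the base delay, cap, and `retry_delay` function are illustrative, not the crate's actual defaults):

```rust
use std::time::Duration;

/// Compute the wait before the given (0-indexed) retry attempt.
/// A server-supplied Retry-After hint takes precedence; otherwise the
/// delay doubles each attempt, capped at a maximum.
fn retry_delay(attempt: u32, retry_after: Option<Duration>) -> Duration {
    if let Some(hint) = retry_after {
        return hint; // honor the Retry-After header
    }
    let base = Duration::from_millis(500);
    let cap = Duration::from_secs(30);
    base.saturating_mul(2u32.saturating_pow(attempt)).min(cap)
}

fn main() {
    for attempt in 0..5 {
        println!("attempt {attempt}: wait {:?}", retry_delay(attempt, None));
    }
    println!("hinted: wait {:?}", retry_delay(0, Some(Duration::from_secs(7))));
}
```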
Errors that survive the retry policy surface as an ordinary `Result`:

```rust
match client.messages.create(req).await {
    Ok(resp) => println!("{resp:?}"),
    // Reached only after the automatic retries are exhausted
    Err(e) => eprintln!("request failed: {e}"),
}
```
## Examples

See the `examples/` directory for complete examples:

- `01-basic-completion` - Simple message creation
- `04-model-listing` - List available models
- `05-kitchen-sink` - All features demonstration
Run an example:

```sh
export ANTHROPIC_API_KEY="sk-ant-..." && cargo run --example 01-basic-completion
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT OR Apache-2.0 license.