# models-dev
[MIT License](https://opensource.org/licenses/MIT)
[Rust](https://rust-lang.org)
[crates.io](https://crates.io/crates/models-dev)
[docs.rs](https://docs.rs/models-dev)
A Rust client library for the models.dev API with intelligent, ETag-based caching.
## 🚀 Overview
`models-dev` is a high-performance Rust client for the models.dev API that provides comprehensive information about AI model providers and their available models. The library features intelligent caching with ETag-based conditional requests, ensuring optimal performance for repeated API calls while always serving fresh data when available.
### Key Features
- **Smart Conditional Requests**: Uses ETag-based HTTP conditional requests to minimize bandwidth usage
- **Transparent Caching**: Automatic cache management with the same simple API interface
- **Performance Optimized**: Significant speed improvements for repeated requests (up to 10x faster)
- **Comprehensive Error Handling**: Specific error types for better debugging and error recovery
- **Clean Architecture**: Well-structured codebase with clear separation of concerns
- **Idiomatic Rust**: Leverages Rust's type system and async/await for safe, concurrent operations
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
models-dev = "0.1.0"
```
### Feature Flags
The library supports different TLS backends:
```toml
# Default (native TLS)
models-dev = { version = "0.1.0" }
# Rustls TLS
models-dev = { version = "0.1.0", features = ["rustls-tls"] }
# Native TLS
models-dev = { version = "0.1.0", features = ["native-tls"] }
```
### Minimum Rust Version
This library requires **Rust 1.70+**.
## 🎯 Quick Start
Here's a simple example to get you started:
```rust
use models_dev::ModelsDevClient;
#[tokio::main]
async fn main() -> Result<(), models_dev::ModelsDevError> {
    // Create a client with default settings
    let client = ModelsDevClient::new();

    // Fetch providers (first call hits the API)
    let response = client.fetch_providers().await?;
    println!("Found {} providers", response.providers.len());

    // Second call uses conditional request (much faster)
    let response2 = client.fetch_providers().await?;
    println!("Still {} providers", response2.providers.len());

    Ok(())
}
```
## 🔧 Advanced Usage
### Smart Caching
The library automatically handles caching with conditional HTTP requests. When you call `fetch_providers()`, it:
1. Sends a HEAD request with ETag to check if data has changed
2. If data hasn't changed, returns cached response immediately
3. If data has changed, fetches fresh data and updates cache
```rust
use models_dev::ModelsDevClient;
use std::time::Instant;
#[tokio::main]
async fn main() -> Result<(), models_dev::ModelsDevError> {
    let client = ModelsDevClient::new();

    // First call - hits API
    let start = Instant::now();
    let response1 = client.fetch_providers().await?;
    let duration1 = start.elapsed();

    // Second call - uses conditional request
    let start = Instant::now();
    let response2 = client.fetch_providers().await?;
    let duration2 = start.elapsed();

    println!("First call: {:?}", duration1);
    println!("Second call: {:?}", duration2);

    if duration2 < duration1 {
        let speedup = duration1.as_millis() as f64 / duration2.as_millis() as f64;
        println!("Speedup: {:.2}x faster!", speedup);
    }

    Ok(())
}
```
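
For reference, the conditional-request pattern that makes the second call cheap can be reproduced with plain `reqwest`. The sketch below shows the general `If-None-Match` / `304 Not Modified` mechanism and is illustrative only, not the library's internal code:

```rust
use reqwest::{header, Client, StatusCode};

// Illustrative only: fetch a URL, remembering the ETag so a later call can be
// answered with `304 Not Modified` when nothing changed on the server.
async fn fetch_with_etag(
    client: &Client,
    url: &str,
    cached_etag: Option<&str>,
) -> Result<Option<(String, Option<String>)>, reqwest::Error> {
    let mut request = client.get(url);
    if let Some(etag) = cached_etag {
        // Ask the server to skip the body if our cached copy is still current.
        request = request.header(header::IF_NONE_MATCH, etag);
    }

    let response = request.send().await?;
    if response.status() == StatusCode::NOT_MODIFIED {
        // Cached data is still valid; the caller keeps serving it.
        return Ok(None);
    }

    // Fresh data: capture the new ETag alongside the body.
    let etag = response
        .headers()
        .get(header::ETAG)
        .and_then(|value| value.to_str().ok())
        .map(str::to_owned);
    let body = response.text().await?;
    Ok(Some((body, etag)))
}
```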
### Cache Management
You can manually manage the cache:
```rust
use models_dev::ModelsDevClient;

#[tokio::main]
async fn main() -> Result<(), models_dev::ModelsDevError> {
    let client = ModelsDevClient::new();
    // Get cache information
    let cache_info = client.cache_info();
    println!("Cache has metadata: {}", cache_info.has_metadata);
    // Clear cache (forces a fresh API request)
    client.clear_cache()?;
    // Fetch fresh data
    let response = client.fetch_providers().await?;
    println!("Fetched {} providers", response.providers.len());
    Ok(())
}
```
### Custom Configuration
Create a client with custom settings:
```rust
use models_dev::ModelsDevClient;

fn main() {
    // Custom API base URL
    let client = ModelsDevClient::with_base_url("https://custom.api.models.dev");

    // The client uses a 30-second timeout by default
    println!("Timeout: {:?}", client.timeout());
}
```
### Error Handling
The library provides comprehensive error handling:
```rust
use models_dev::{ModelsDevClient, ModelsDevError};
#[tokio::main]
async fn main() {
    let client = ModelsDevClient::new();

    match client.fetch_providers().await {
        Ok(response) => {
            println!("Success: {} providers", response.providers.len());
        }
        Err(ModelsDevError::HttpError(e)) => {
            eprintln!("HTTP error: {}", e);
        }
        Err(ModelsDevError::JsonError(e)) => {
            eprintln!("JSON parsing error: {}", e);
        }
        Err(ModelsDevError::ApiError(msg)) => {
            eprintln!("API error: {}", msg);
        }
        Err(ModelsDevError::Timeout) => {
            eprintln!("Request timed out");
        }
        Err(ModelsDevError::CacheError(msg)) => {
            eprintln!("Cache error: {}", msg);
        }
        Err(e) => {
            eprintln!("Other error: {}", e);
        }
    }
}
```
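
Because the variants are specific, recovery logic can be targeted. Below is a hedged sketch of a retry wrapper that retries only on `Timeout`; the helper name and backoff policy are illustrative, and it assumes `ModelsDevResponse` is exported at the crate root as shown in the API reference:

```rust
use models_dev::{ModelsDevClient, ModelsDevError, ModelsDevResponse};
use std::time::Duration;

// Illustrative helper: retry timeouts a few times, fail fast on everything else.
async fn fetch_with_retry(
    client: &ModelsDevClient,
    max_attempts: u32,
) -> Result<ModelsDevResponse, ModelsDevError> {
    let mut attempt = 0;
    loop {
        match client.fetch_providers().await {
            Err(ModelsDevError::Timeout) if attempt + 1 < max_attempts => {
                attempt += 1;
                // Simple linear backoff before the next attempt.
                tokio::time::sleep(Duration::from_millis(200 * u64::from(attempt))).await;
            }
            result => return result,
        }
    }
}
```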
## 📚 API Reference
### ModelsDevClient
The main client struct for interacting with the models.dev API.
#### Methods
##### `new() -> Self`
Creates a new client with default settings.
##### `with_base_url(api_base_url: impl Into<String>) -> Self`
Creates a client with a custom API base URL.
##### `fetch_providers() -> Result<ModelsDevResponse, ModelsDevError>`
Fetches provider information with smart caching.
##### `clear_cache() -> Result<(), ModelsDevError>`
Clears the cache metadata.
##### `cache_info() -> CacheInfo`
Returns information about the current cache state.
##### `api_base_url() -> &str`
Returns the API base URL.
##### `timeout() -> Duration`
Returns the request timeout.
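
A hedged example that exercises the methods above in one place (the custom base URL is a placeholder):

```rust
use models_dev::{ModelsDevClient, ModelsDevError};

#[tokio::main]
async fn main() -> Result<(), ModelsDevError> {
    // Point the client at a non-default endpoint and inspect its settings.
    let client = ModelsDevClient::with_base_url("https://custom.api.models.dev");
    println!("Base URL: {}", client.api_base_url());
    println!("Timeout: {:?}", client.timeout());

    // Fetch once, then check what the cache layer recorded.
    let response = client.fetch_providers().await?;
    println!("Providers: {}", response.providers.len());
    println!("Cache has metadata: {}", client.cache_info().has_metadata);

    // Drop the cached metadata so the next fetch is unconditional.
    client.clear_cache()?;
    Ok(())
}
```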
### Data Structures
#### ModelsDevResponse
Top-level response containing provider information.
```rust
pub struct ModelsDevResponse {
    pub providers: HashMap<String, Provider>,
}
```
#### Provider
Information about an AI model provider.
```rust
pub struct Provider {
    pub id: String,
    pub name: String,
    pub npm: String,
    pub env: Vec<String>,
    pub doc: String,
    pub api: Option<String>,
    pub models: HashMap<String, Model>,
}
```
#### Model
Information about a specific AI model.
```rust
pub struct Model {
    pub id: String,
    pub name: String,
    pub attachment: bool,
    pub reasoning: bool,
    pub temperature: bool,
    pub tool_call: bool,
    pub knowledge: Option<String>,
    pub release_date: Option<String>,
    pub last_updated: Option<String>,
    pub modalities: Modalities,
    pub open_weights: bool,
    pub cost: Option<ModelCost>,
    pub limit: ModelLimit,
}
```
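
As a hedged illustration of how these types might be consumed (field names taken from the structs above), the snippet below lists models that advertise tool calling:

```rust
use models_dev::{ModelsDevClient, ModelsDevError};

#[tokio::main]
async fn main() -> Result<(), ModelsDevError> {
    let client = ModelsDevClient::new();
    let response = client.fetch_providers().await?;

    // Walk every provider and report the models that support tool calling.
    for (provider_id, provider) in &response.providers {
        let tool_models: Vec<&str> = provider
            .models
            .values()
            .filter(|model| model.tool_call)
            .map(|model| model.name.as_str())
            .collect();

        if !tool_models.is_empty() {
            println!("{} ({}): {}", provider.name, provider_id, tool_models.join(", "));
        }
    }
    Ok(())
}
```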
### Error Types
#### ModelsDevError
Comprehensive error enum for all possible failures:
- `HttpError(reqwest::Error)` - HTTP request failures
- `JsonError(serde_json::Error)` - JSON parsing errors
- `ApiError(String)` - API error responses
- `Timeout` - Network timeout
- `InvalidUrl(String)` - Invalid URL configuration
- `CacheError(String)` - Cache operation failures
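
For orientation, a definition matching the variants above could be written with `thiserror` roughly as follows; this is a sketch reconstructed from the documented variants, not necessarily the crate's exact source:

```rust
use thiserror::Error;

// Sketch of an error enum matching the documented variants.
#[derive(Debug, Error)]
pub enum ModelsDevError {
    #[error("HTTP request failed: {0}")]
    HttpError(#[from] reqwest::Error),

    #[error("failed to parse JSON response: {0}")]
    JsonError(#[from] serde_json::Error),

    #[error("API returned an error: {0}")]
    ApiError(String),

    #[error("request timed out")]
    Timeout,

    #[error("invalid URL: {0}")]
    InvalidUrl(String),

    #[error("cache operation failed: {0}")]
    CacheError(String),
}
```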
## 📝 Examples
The library includes comprehensive examples:
### Basic Usage
```bash
cargo run --example basic_usage
```
Demonstrates basic client usage and caching benefits.
### Smart Caching
```bash
cargo run --example smart_caching_example
```
Shows advanced caching functionality and performance comparisons.
### Integration
```bash
cargo run --example integration_example
```
Demonstrates integration patterns and error handling.
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Run unit tests only
cargo test --lib
# Run integration tests
cargo test --test integration
# Run with output
cargo test -- --nocapture
```
### Test Coverage
The library includes:
- 12 unit tests covering core functionality
- 4 doc tests for API examples
- Integration tests for real API scenarios
- All tests pass; a representative unit test is sketched below
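
A hedged sketch of the kind of unit test in the suite, using only the documented constructor and accessors (no network access; it assumes `with_base_url` stores the URL verbatim and that the 30-second default timeout applies):

```rust
#[cfg(test)]
mod tests {
    use models_dev::ModelsDevClient;
    use std::time::Duration;

    #[test]
    fn with_base_url_overrides_endpoint_and_keeps_default_timeout() {
        let client = ModelsDevClient::with_base_url("https://custom.api.models.dev");
        assert_eq!(client.api_base_url(), "https://custom.api.models.dev");
        // 30-second default documented in the configuration section above.
        assert_eq!(client.timeout(), Duration::from_secs(30));
    }
}
```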
## 🤝 Contributing
We welcome contributions! To gather input, open a new issue to discuss the feature or fix you plan to work on. Please follow these guidelines:
### Development Setup
1. Fork the repository
2. Clone your fork: `git clone https://github.com/your-username/models-dev`
3. Create a feature branch: `git checkout -b feature-name`
4. Install development dependencies: `cargo build`
5. Run tests: `cargo test`
### Code Style Guidelines
- Follow the project's cognitive load reduction philosophy
- Use specific error types with `thiserror` (see the sketch after this list)
- Prefer `Result<T, ModelsDevError>` over generic errors
- Use `?` operator for early returns, avoid nested conditionals
- Leverage serde derive macros for API data structures
- Keep modules cohesive, use `pub(crate)` for implementation details
- Use descriptive variable names to reduce cognitive load
- Prefer async/await with tokio for network operations
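
As a rough illustration of these conventions (the type and function names here are made up for the example, not part of the crate):

```rust
use serde::Deserialize;
use thiserror::Error;

// Serde derive keeps API data structures declarative.
#[derive(Debug, Deserialize)]
pub(crate) struct HealthCheck {
    pub status: String,
}

// Specific error type built with `thiserror`.
#[derive(Debug, Error)]
pub(crate) enum HealthError {
    #[error("HTTP request failed: {0}")]
    Http(#[from] reqwest::Error),
    #[error("failed to parse JSON: {0}")]
    Json(#[from] serde_json::Error),
}

// `?` for early returns, async/await with tokio-backed reqwest for network I/O.
pub(crate) async fn fetch_health(url: &str) -> Result<HealthCheck, HealthError> {
    let body = reqwest::get(url).await?.text().await?;
    let health = serde_json::from_str(&body)?;
    Ok(health)
}
```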
### Pull Request Process
1. Ensure all tests pass: `cargo test`
2. Format code: `cargo fmt`
3. Run clippy: `cargo clippy -- -D warnings`
4. Update documentation if needed
5. Write clear commit messages following conventional commit format
6. Submit a pull request with a clear description of changes
### Commit Message Format
Follow the conventional commit format:
```
type(scope): description
# Examples
feat(client): add custom timeout configuration
fix(caching): resolve etag comparison issue
docs(readme): update installation instructions
test(examples): add integration test coverage
```
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 📋 Changelog
### Version 0.1.0 (Current)
**Initial Release**
- ✅ Smart conditional HTTP requests using ETag-based caching
- ✅ Transparent caching with automatic cache management
- ✅ Performance optimization for repeated requests (up to 10x speedup)
- ✅ Comprehensive error handling with specific error types
- ✅ Clean, idiomatic Rust API design
- ✅ Cache management utilities (`clear_cache()`, `cache_info()`)
- ✅ Comprehensive examples and documentation
- ✅ Full test coverage (12 unit tests + 4 doc tests)
- ✅ Support for multiple TLS backends
- ✅ Zero-cost abstractions leveraging Rust's type system
### Key Milestones Achieved
- **Architecture**: Clean separation between client logic, data types, and error handling
- **Performance**: Smart caching reduces API calls and improves response times
- **Reliability**: Comprehensive error handling and testing
- **Usability**: Simple API interface with advanced features available when needed
- **Documentation**: Comprehensive README with examples and API reference
## 🎯 Design Philosophy
**[Cognitive load is what matters.](https://minds.md/zakirullin/cognitive#long)** This library is designed with the following principles:
1. **Simple Interface**: The same `fetch_providers()` method works for both fresh and cached data
2. **Type Safety**: Leverage Rust's type system to prevent entire classes of bugs
3. **Performance**: Smart caching happens automatically, no manual coordination required
4. **Error Clarity**: Specific error types guide developers to solutions
5. **Zero-Cost Abstractions**: Caching and optimizations don't add runtime overhead when not needed
The library reduces cognitive load by:
- Providing a single method for all provider fetching needs
- Automatically handling cache invalidation and updates
- Using descriptive types that prevent common mistakes
- Offering clear error messages that guide debugging
---
**Built with ❤️ for the Rust community**
*Rust's type system should reduce cognitive load, not increase it. If you're fighting the compiler, redesign.*