# LLM Client - Shared Logic

Runtime-agnostic LLM client logic for building requests and parsing responses. This crate has **no** runtime dependencies: no async executor, no HTTP client.
## Supported Providers

- OpenAI (GPT-4, GPT-3.5, etc.)
- Anthropic (Claude 3, etc.)
- OpenRouter (100+ models)
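Because this crate only builds requests, the provider differences mostly reduce to endpoint URLs and auth headers. The sketch below shows that dispatch under illustrative names (the enum, functions, and exact header set are assumptions, not this crate's actual API); OpenAI and OpenRouter accept an OpenAI-style Bearer token, while Anthropic expects `x-api-key` plus a version header:

```rust
// Sketch: per-provider endpoint and auth-header selection.
// Names and field choices are illustrative, not this crate's real API.
#[derive(Clone, Copy)]
enum Provider {
    OpenAi,
    Anthropic,
    OpenRouter,
}

fn endpoint(p: Provider) -> &'static str {
    match p {
        Provider::OpenAi => "https://api.openai.com/v1/chat/completions",
        Provider::Anthropic => "https://api.anthropic.com/v1/messages",
        Provider::OpenRouter => "https://openrouter.ai/api/v1/chat/completions",
    }
}

fn auth_headers(p: Provider, api_key: &str) -> Vec<(String, String)> {
    match p {
        // OpenAI and OpenRouter both take an OpenAI-style Bearer token.
        Provider::OpenAi | Provider::OpenRouter => {
            vec![("Authorization".into(), format!("Bearer {api_key}"))]
        }
        // Anthropic uses x-api-key plus a required API-version header.
        Provider::Anthropic => vec![
            ("x-api-key".into(), api_key.to_string()),
            ("anthropic-version".into(), "2023-06-01".into()),
        ],
    }
}

fn main() {
    let headers = auth_headers(Provider::Anthropic, "sk-test");
    println!("POST {} with {} headers", endpoint(Provider::Anthropic), headers.len());
}
```

Keeping this dispatch in shared, pure code is what lets every runtime adapter reuse it unchanged.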
## Usage

The example below sketches the builder-style API; the original identifiers were lost in extraction, so treat the type and function names here as illustrative and check the crate docs for the exact API.

```rust
use llm_client::{ChatRequestBuilder, Message, parse_response};

// Build a request
let messages = vec![Message::user("Hello!")];
let request = ChatRequestBuilder::new()
    .model("gpt-4")
    .messages(messages)
    .api_key("sk-...")
    .temperature(0.7)
    .max_tokens(1024)
    .stream(false)
    .build()
    .unwrap();

// Use your runtime's HTTP client to send request.url, request.headers, request.body.
// Then parse the response:
let response_json = r#"{"choices":[{"message":{"content":"Hello!"}}]}"#;
let response = parse_response(response_json).unwrap();
println!("{}", response.content);
```
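The boundary between this crate and your runtime is just plain data: a URL, headers, and a serialized body that any HTTP client (blocking or async) can send. A minimal self-contained sketch of that boundary, with field names that are assumptions rather than the crate's actual types:

```rust
// Sketch: the runtime-agnostic boundary as plain data.
// Field names are assumptions; the crate's real request type may differ.
struct PreparedRequest {
    url: String,
    headers: Vec<(String, String)>,
    body: String, // JSON payload, already serialized by the shared crate
}

// Any HTTP client (reqwest, ureq, hyper, ...) can consume this struct;
// an adapter only needs to copy the fields into its own request type.
fn describe(req: &PreparedRequest) -> String {
    format!(
        "POST {} ({} headers, {} body bytes)",
        req.url,
        req.headers.len(),
        req.body.len()
    )
}

fn main() {
    let req = PreparedRequest {
        url: "https://api.openai.com/v1/chat/completions".into(),
        headers: vec![("Authorization".into(), "Bearer sk-...".into())],
        body: r#"{"model":"gpt-4","messages":[]}"#.into(),
    };
    println!("{}", describe(&req));
}
```

Because nothing here touches the network, the same struct works under tokio, async-std, or a plain blocking thread.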