# llm_adapter

A small Rust library for adapting multiple LLM provider APIs into one internal request/response model.
`llm_adapter` gives you:

- A provider-neutral core model (`CoreRequest`, `CoreResponse`, `StreamEvent`)
- Protocol codecs for OpenAI Chat Completions, OpenAI Responses, Anthropic Messages, and Gemini GenerateContent
- Streaming SSE parsing and cross-protocol stream rewriting helpers
- Optional middleware and fallback routing building blocks for host applications
## Status

Early-stage crate (0.1.0) focused on API shape stability and test coverage.
## Supported Backends

Protocol codecs for:

- OpenAI Chat Completions
- OpenAI Responses
- Gemini GenerateContent
- Anthropic Messages

Endpoint shapes for:

- OpenAI Chat Completions / Responses (`BackendRequestLayer::ChatCompletions`, `BackendRequestLayer::ChatCompletionsNoV1`, `BackendRequestLayer::CloudflareWorkersAi`, `BackendRequestLayer::Responses`)
- Google Gemini (`BackendRequestLayer::GeminiApi`, `BackendRequestLayer::GeminiVertex`)
- Anthropic (`BackendRequestLayer::Anthropic`, `BackendRequestLayer::VertexAnthropic`)
## Add To Your Project

```toml
[dependencies]
llm_adapter = { version = "0.1.0" }
```
## Quick Start

```rust
use std::collections::BTreeMap;
use llm_adapter::backend::dispatch_request;
```
## Streaming

Use `collect_stream_events` for collect-all behavior, or `dispatch_stream_events_with` for incremental processing:

```rust
use llm_adapter::backend::dispatch_stream_events_with;

// inside your function, with `client`, `config`, `request` ready
let mut on_event = |event| {
    // handle each StreamEvent as it arrives
};
dispatch_stream_events_with(&client, &config, request, &mut on_event).await?;
```
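Under the hood, the stream helpers consume Server-Sent Events. The sketch below illustrates the `data:` framing they parse; it is a standalone illustration of the SSE wire format, not the crate's actual parser API.

```rust
// Accumulate SSE "data:" lines into discrete events.
// Per the SSE format, a blank line terminates an event, and multiple
// consecutive "data:" lines within one event are joined with '\n'.
fn sse_data_events(raw: &str) -> Vec<String> {
    let mut events = Vec::new();
    let mut current = String::new();
    for line in raw.lines() {
        if let Some(data) = line.strip_prefix("data:") {
            if !current.is_empty() {
                current.push('\n');
            }
            current.push_str(data.trim_start());
        } else if line.is_empty() && !current.is_empty() {
            // blank line: emit the accumulated event
            events.push(std::mem::take(&mut current));
        }
    }
    events
}

fn main() {
    let raw = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n\n";
    let events = sse_data_events(raw);
    assert_eq!(events.len(), 3);
    assert_eq!(events[2], "[DONE]");
}
```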
## Fallback Routing and Middleware

This crate exposes reusable orchestration helpers:

- `router::dispatch_with_fallback`
- `router::dispatch_stream_with_fallback`
- `middleware::run_request_middleware_chain`
- `middleware::run_stream_middleware_chain`

They are designed for host apps that want custom retry, fallback, and policy pipelines.
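The fallback pattern these helpers build on can be sketched generically: try each backend in order, return the first success, or surface the last error if all fail. This is an illustration of the pattern only; the real `router::dispatch_with_fallback` signature may differ.

```rust
// Try each backend in order; return the first Ok, or the last Err.
// Panics if `backends` is empty.
fn dispatch_with_fallback<T, E>(
    backends: &[&dyn Fn() -> Result<T, E>],
) -> Result<T, E> {
    let mut last_err = None;
    for backend in backends {
        match backend() {
            Ok(value) => return Ok(value),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("at least one backend required"))
}

fn main() {
    // A failing primary and a healthy fallback.
    let flaky = || Err::<&str, &str>("upstream 500");
    let healthy = || Ok::<&str, &str>("response");
    let result = dispatch_with_fallback(&[&flaky, &healthy]);
    assert_eq!(result, Ok("response"));
}
```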
## Benchmark CLI

This repository also ships a benchmark binary, now using `llm_adapter::backend::dispatch_request` instead of manual endpoint requests.

Configuration auto-discovery order:

- `llm-benchmark.toml`
- `benchmark.toml`
- `config.toml`
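The auto-discovery order above amounts to probing each candidate file name in sequence and taking the first that exists. A minimal sketch of that logic (the file names come from this README; the benchmark binary's actual implementation may differ):

```rust
use std::path::{Path, PathBuf};

// Return the first config file in `dir` that exists, following the
// benchmark CLI's auto-discovery order.
fn discover_config(dir: &Path) -> Option<PathBuf> {
    ["llm-benchmark.toml", "benchmark.toml", "config.toml"]
        .iter()
        .map(|name| dir.join(name))
        .find(|path| path.is_file())
}

fn main() {
    match discover_config(Path::new(".")) {
        Some(path) => println!("using {}", path.display()),
        None => println!("no config found"),
    }
}
```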
## Compatibility CLI

`llm_compat` provides provider compatibility checks.
## Development
## License

Licensed under AGPL-3.0-only.