§llmposter
A fixture-driven mock server for LLM APIs. Speaks OpenAI Chat Completions, Anthropic Messages, Gemini generateContent, and OpenAI Responses API — both streaming (SSE) and non-streaming.
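For context on the streaming half: OpenAI-style streaming responses arrive as server-sent events, one `data:` line per chunk, terminated by a `data: [DONE]` sentinel. This is general SSE/OpenAI wire format, not llmposter API; a minimal pure-std sketch of splitting such a body into JSON payloads:

```rust
// Split an OpenAI-style SSE body into its JSON chunk payloads.
// The sample body is illustrative, not a captured server response.
fn sse_payloads(body: &str) -> Vec<&str> {
    body.lines()
        .filter_map(|line| line.strip_prefix("data: ")) // keep only data lines
        .take_while(|payload| *payload != "[DONE]")     // stop at the sentinel
        .collect()
}

fn main() {
    let body = "data: {\"choices\":[{\"delta\":{\"content\":\"Hi\"}}]}\n\n\
                data: {\"choices\":[{\"delta\":{\"content\":\"!\"}}]}\n\n\
                data: [DONE]\n\n";
    let payloads = sse_payloads(body);
    assert_eq!(payloads.len(), 2);
    println!("{} chunks", payloads.len());
}
```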
§Quick Start
```rust
use llmposter::{Fixture, ServerBuilder};

#[tokio::test]
async fn test_llm_client() {
    let server = ServerBuilder::new()
        .fixture(
            Fixture::new()
                .match_user_message("hello")
                .respond_with_content("Hi from the mock!"),
        )
        .build()
        .await
        .unwrap();

    // Point your LLM client at server.url() instead of the real API
    let base_url = server.url();
    // ... your client code here ...
}
```
§Features
- 4 LLM API formats: OpenAI, Anthropic, Gemini, OpenAI Responses
- Streaming & non-streaming: SSE and JSON-array responses
- Fixture-driven: YAML files or programmatic builder API
- Failure simulation: latency, truncation, disconnect, corruption, error codes
- Stateful scenarios: multi-turn matching via named state machines
- Request capture: verify what your client sent with `server.get_requests()`
- Auth simulation: bearer tokens, OAuth2 mock server (optional `oauth` feature)
- Deterministic: fixed IDs, sequential counters, no randomness
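Fixtures can also be loaded from YAML files. The exact on-disk schema is defined by the `fixture` module's loader and is not shown on this page, so the field names below are hypothetical, mirroring the builder calls from the Quick Start:

```yaml
# Hypothetical fixture file -- field names are illustrative, not the
# crate's documented schema; see the `fixture` module for the real format.
fixtures:
  - match:
      user_message: "hello"
    respond:
      content: "Hi from the mock!"
    failure:
      latency_ms: 250   # failure simulation: add artificial latency
```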
§Modules
- `fixture` — fixture types, matching, YAML loading
- `server` — `ServerBuilder`, `MockServer`, `CapturedRequest`
- `auth` — bearer token and OAuth2 middleware
- `cli` — CLI binary entry point
§Re-exports
pub use auth::AuthState;
pub use auth::TokenStatus;
pub use fixture::FailureConfig;
pub use fixture::Fixture;
pub use fixture::Refusal;
pub use fixture::ScenarioConfig;
pub use fixture::StreamingConfig;
pub use fixture::ToolCall;
pub use server::OAuthConfig;
pub use server::CapturedRequest;
pub use server::MockServer;
pub use server::RequestOutcome;
pub use server::ServerBuilder;
§Modules
- auth — Bearer token authentication and OAuth2 middleware.
- cli — CLI binary entry point and argument parsing.
- fixture — Fixture types, matching logic, and YAML loading.
- server — Server builder, mock server, and request capture.
§Enums
- Provider — Provider identifier: which endpoint was hit.