Crate mockforge_http

§MockForge HTTP

HTTP/REST API mocking library for MockForge.

This crate provides HTTP-specific functionality for creating mock REST APIs, including OpenAPI integration, request validation, AI-powered response generation, and management endpoints.

§Overview

MockForge HTTP enables you to:

  • Serve OpenAPI specs: automatically generate mock endpoints from OpenAPI/Swagger documents
  • Validate requests: enforce schema validation with configurable modes
  • Generate AI-powered responses: produce intelligent responses using LLMs
  • Manage at runtime: real-time monitoring, configuration, and control via the management API
  • Log requests: comprehensive HTTP request/response logging
  • Collect metrics: track performance and usage statistics
  • Stream over Server-Sent Events: push logs and metrics to clients

§Quick Start

§Basic HTTP Server from OpenAPI

use axum::Router;
use mockforge_core::openapi_routes::ValidationMode;
use mockforge_core::ValidationOptions;
use mockforge_http::build_router;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build router from OpenAPI specification
    let router = build_router(
        Some("./api-spec.json".to_string()),
        Some(ValidationOptions {
            request_mode: ValidationMode::Enforce,
            ..ValidationOptions::default()
        }),
        None,
    ).await;

    // Start the server
    let addr: std::net::SocketAddr = "0.0.0.0:3000".parse()?;
    let listener = tokio::net::TcpListener::bind(addr).await?;
    axum::serve(listener, router).await?;

    Ok(())
}

§With Management API

Enable real-time monitoring and configuration:

use mockforge_http::{management_router, ManagementState};

let state = ManagementState::new(None, None, 3000);

// Build management router
let mgmt_router = management_router(state);

// Mount under your main router
let app = axum::Router::new()
    .nest("/__mockforge", mgmt_router);
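
The two snippets above compose: a sketch that mounts the management API under /__mockforge on the same server as the OpenAPI-driven routes. The spec path and port are placeholders, and validation options are left at None here (assuming build_router accepts None for its optional arguments, as the Quick Start does for its third parameter).

```rust
use mockforge_http::{build_router, management_router, ManagementState};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Mock endpoints generated from the OpenAPI spec (no validation options).
    let api = build_router(Some("./api-spec.json".to_string()), None, None).await;

    // Management state; 3000 matches the port bound below.
    let state = ManagementState::new(None, None, 3000);

    // Mount the management endpoints alongside the mocked API.
    let app = api.nest("/__mockforge", management_router(state));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```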

§AI-Powered Responses

Generate intelligent responses based on request context:

use mockforge_data::intelligent_mock::{IntelligentMockConfig, ResponseMode};
use mockforge_http::{process_response_with_ai, AiResponseConfig};
use serde_json::json;

let ai_config = AiResponseConfig {
    intelligent: Some(
        IntelligentMockConfig::new(ResponseMode::Intelligent)
            .with_prompt("Generate realistic user data".to_string()),
    ),
    drift: None,
};

let response = process_response_with_ai(
    Some(json!({"name": "Alice"})),
    ai_config
        .intelligent
        .clone()
        .map(serde_json::to_value)
        .transpose()?,
    ai_config
        .drift
        .clone()
        .map(serde_json::to_value)
        .transpose()?,
)
.await?;

§Key Features

§OpenAPI Integration

  • Automatic endpoint generation from specs
  • Request/response validation
  • Schema-based mock data generation

§Management & Monitoring

  • Runtime statistics and health checks via the management API
  • Live log and metrics streaming over Server-Sent Events
  • WebSocket-based management events for real-time control

§Advanced Features

§Middleware

MockForge HTTP includes several middleware layers:

  • Request Tracing: http_tracing_middleware - Distributed tracing integration
  • Metrics Collection: metrics_middleware - Prometheus-compatible metrics
  • Operation Metadata: op_middleware - OpenAPI operation tracking
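
These layers attach with axum's layering API. A minimal sketch, assuming each re-exported function is compatible with axum's middleware::from_fn convention (check the module docs for the exact signatures):

```rust
use axum::{middleware, Router};
use mockforge_http::{collect_http_metrics, http_tracing_middleware};

// Assumption: both functions are from_fn-compatible middleware.
// Layers run in reverse registration order, so tracing wraps metrics.
fn with_observability(router: Router) -> Router {
    router
        .layer(middleware::from_fn(collect_http_metrics))
        .layer(middleware::from_fn(http_tracing_middleware))
}
```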

§Management API Endpoints

When using the management router, these endpoints are available:

  • GET /health - Health check
  • GET /stats - Server statistics
  • GET /logs - Request logs (SSE stream)
  • GET /metrics - Performance metrics
  • GET /fixtures - List available fixtures
  • POST /config/* - Update configuration
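
When the management router is nested under /__mockforge as in the earlier example, each endpoint is reached at that prefix. A small illustrative helper (hypothetical, not part of the crate) that composes the full path:

```rust
// Hypothetical helper, not part of mockforge_http: compose the full
// path for a management endpoint mounted under /__mockforge.
fn mgmt_path(endpoint: &str) -> String {
    format!("/__mockforge{endpoint}")
}

fn main() {
    // e.g. the health check and the SSE log stream:
    println!("GET {}", mgmt_path("/health")); // GET /__mockforge/health
    println!("GET {}", mgmt_path("/logs"));   // GET /__mockforge/logs
}
```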

§Examples

See the examples directory for complete working examples.

Re-exports§

pub use ai_handler::process_response_with_ai;
pub use ai_handler::AiResponseConfig;
pub use ai_handler::AiResponseHandler;
pub use management::management_router;
pub use management::ManagementState;
pub use management::MockConfig;
pub use management::ServerConfig;
pub use management::ServerStats;
pub use management_ws::ws_management_router;
pub use management_ws::MockEvent;
pub use management_ws::WsManagementState;
pub use metrics_middleware::collect_http_metrics;
pub use http_tracing_middleware::http_tracing_middleware;
pub use coverage::calculate_coverage;
pub use coverage::CoverageReport;
pub use coverage::MethodCoverage;
pub use coverage::RouteCoverage;

Modules§

ai_handler
AI-powered response handler for HTTP requests
auth
Authentication middleware for MockForge HTTP server
chain_handlers
Chain management HTTP handlers for MockForge
coverage
Mock Coverage Tracking
http_tracing_middleware
HTTP tracing middleware for distributed tracing
latency_profiles
Operation-aware latency/failure profiles (per operationId and per tag).
management
Management API state, server configuration, and HTTP endpoints
management_ws
WebSocket-based management API and mock events
metrics_middleware
HTTP metrics collection middleware
middleware
HTTP middleware modules
op_middleware
Middleware/utilities to apply latency/failure and overrides per operation.
rag_ai_generator
RAG-based AI generator implementation
replay_listing
Record/replay listing for HTTP/gRPC/WS fixtures.
request_logging
HTTP request logging middleware
sse
Server-Sent Events (SSE) support for MockForge

Structs§

HttpServerState
Shared state for tracking OpenAPI routes
RouteInfo
Route info for storing in state

Functions§

build_router
Build the base HTTP router, optionally from an OpenAPI spec.
build_router_with_auth
Build the base HTTP router with authentication support.
build_router_with_auth_and_latency
Build the base HTTP router with authentication and latency support.
build_router_with_chains
Build the base HTTP router with chaining support.
build_router_with_chains_and_multi_tenant
Build the base HTTP router with chaining and multi-tenant support.
build_router_with_latency
Build the base HTTP router with latency injection support.
build_router_with_multi_tenant
Build the base HTTP router with multi-tenant workspace support.
serve_router
Serve a provided router on the given port.
start
Backwards-compatible start that builds and serves the base router.
start_with_auth_and_injectors
Start the HTTP server with authentication and injector support.
start_with_auth_and_latency
Start the HTTP server with authentication and latency support.
start_with_latency
Start the HTTP server with latency injection support.