MockForge Observability
Comprehensive observability features for MockForge including Prometheus metrics, OpenTelemetry tracing, structured logging, and system monitoring.
This crate provides enterprise-grade observability capabilities to monitor MockForge performance, track system health, and debug issues in production environments. Perfect for understanding how your mock servers behave under load and ensuring reliable testing infrastructure.
Features
- Prometheus Metrics: Comprehensive metrics collection with automatic export
- OpenTelemetry Tracing: Distributed tracing with Jaeger and OTLP support
- Structured Logging: JSON-formatted logs with configurable levels and outputs
- System Metrics: CPU, memory, and thread monitoring
- Flight Recorder: Request/response recording for debugging
- Multi-Protocol Support: Metrics for HTTP, gRPC, WebSocket, and GraphQL
- Performance Monitoring: Response times, throughput, and error rates
- Health Checks: Built-in health endpoints and status monitoring
Quick Start
Basic Metrics Collection
```rust
use mockforge_observability::MetricsRegistry; // crate path assumed

#[tokio::main]
async fn main() {
    // Create a shared registry; recording calls are shown in Core Components below.
    let registry = MetricsRegistry::new();
    let _ = registry;
}
```
Structured Logging
```rust
use mockforge_observability::logging::{init_logging, LoggingConfig}; // paths assumed

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = LoggingConfig::default();
    init_logging(&config)?; // signature illustrative
    tracing::info!(component = "demo", "logging initialized");
    Ok(())
}
```
OpenTelemetry Tracing
```rust
use mockforge_observability::tracing::{init_with_otel, OtelTracingConfig}; // paths assumed

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OtelTracingConfig::default();
    init_with_otel(config).await?; // signature illustrative
    Ok(())
}
```
Core Components
Prometheus Metrics
Comprehensive metrics collection with automatic Prometheus export:
```rust
use mockforge_observability::MetricsRegistry; // crate path assumed

// Argument lists below are illustrative; check the crate docs for exact signatures.
let registry = MetricsRegistry::new();
let duration = std::time::Duration::from_millis(42); // example value

// HTTP metrics
registry.record_http_request("GET", "/users", 200, duration);
registry.record_http_response_size(1024); // bytes

// gRPC metrics
registry.record_grpc_request("GetUser", "OK", duration); // method, status, duration

// WebSocket metrics
registry.record_websocket_connection();
registry.record_websocket_message(512); // message size

// GraphQL metrics
registry.record_graphql_request("getUser", true, duration); // operation, success, duration

// Connection metrics
registry.record_active_connection();
registry.record_connection_closed();
```
Available Metrics
HTTP Metrics
- `mockforge_http_requests_total{method, path, status}` - Total HTTP requests
- `mockforge_http_request_duration_seconds{method, path}` - Request duration histogram
- `mockforge_http_response_size_bytes` - Response size distribution
- `mockforge_http_active_connections` - Current active connections
gRPC Metrics
- `mockforge_grpc_requests_total{method, status}` - Total gRPC requests
- `mockforge_grpc_request_duration_seconds{method}` - gRPC request duration
- `mockforge_grpc_active_streams` - Active gRPC streams
WebSocket Metrics
- `mockforge_websocket_connections_total` - Total WebSocket connections
- `mockforge_websocket_active_connections` - Current active WebSocket connections
- `mockforge_websocket_messages_total{direction}` - WebSocket messages sent/received
- `mockforge_websocket_message_size_bytes` - WebSocket message size distribution
GraphQL Metrics
- `mockforge_graphql_requests_total{operation, success}` - Total GraphQL requests
- `mockforge_graphql_request_duration_seconds{operation}` - GraphQL request duration
- `mockforge_graphql_errors_total{type}` - GraphQL error count
System Metrics
- `mockforge_system_cpu_usage_percent` - CPU usage percentage
- `mockforge_system_memory_usage_bytes` - Memory usage in bytes
- `mockforge_system_threads_total` - Total thread count
Structured Logging
JSON-formatted logging with configurable outputs:
```rust
use mockforge_observability::logging::{init_logging, LoggingConfig}; // paths assumed

// Field names are illustrative; see LoggingConfig for the full set of options.
let config = LoggingConfig {
    json_format: true,
    ..Default::default()
};

// Initialize logging
init_logging(&config)?;

// Structured logs with context
tracing::info!(method = "GET", path = "/users", status = 200, "request completed");
```
OpenTelemetry Tracing
Distributed tracing with multiple backends:
```rust
use mockforge_observability::tracing::{init_with_otel, OtelTracingConfig}; // paths assumed

// Endpoint values and field names are illustrative.
// Jaeger tracing
let jaeger_config = OtelTracingConfig {
    service_name: "mockforge".into(),
    endpoint: "http://localhost:14268/api/traces".into(),
    ..Default::default()
};

// OTLP tracing (generic OpenTelemetry protocol)
let otlp_config = OtelTracingConfig {
    service_name: "mockforge".into(),
    endpoint: "http://localhost:4317".into(),
    ..Default::default()
};

init_with_otel(otlp_config).await?;
```
System Metrics Collection
Automatic system resource monitoring:
```rust
use mockforge_observability::system::{start_system_metrics_collector, SystemMetricsConfig}; // paths assumed

// Field names are illustrative; the interval controls sampling frequency.
let config = SystemMetricsConfig {
    collection_interval_secs: 15,
    ..Default::default()
};

start_system_metrics_collector(config).await?;
```
Configuration
Logging Configuration
```rust
use mockforge_observability::logging::LoggingConfig; // path assumed

// Field names are illustrative; see the crate docs for the full set of options.
let logging_config = LoggingConfig {
    level: "info".into(),
    json_format: true,
    log_to_file: true,
    ..Default::default()
};
```
Tracing Configuration
```rust
use mockforge_observability::tracing::OtelTracingConfig; // path assumed

// Field names and values are illustrative.
let tracing_config = OtelTracingConfig {
    service_name: "mockforge".into(),
    endpoint: "http://localhost:4317".into(),
    sample_rate: 0.1,
    ..Default::default()
};
```
Metrics Configuration
Metrics are automatically configured with sensible defaults. Customize via environment variables:
```bash
# Metrics collection

# System metrics
```
Integration Examples
HTTP Server with Full Observability
The sketch below is illustrative; the handler wiring and exact signatures depend on your HTTP framework and MockForge version.

```rust
use mockforge_observability::{logging::init_logging, MetricsRegistry}; // paths assumed
use prometheus::{Encoder, TextEncoder};
use std::sync::Arc;

async fn metrics_handler() -> String {
    // Export all registered metrics in Prometheus text format
    let encoder = TextEncoder::new();
    let metric_families = prometheus::gather();
    let mut buffer = Vec::new();
    encoder.encode(&metric_families, &mut buffer).unwrap();
    String::from_utf8(buffer).unwrap()
}

async fn handle_request(registry: Arc<MetricsRegistry>) {
    // Record per-request metrics alongside normal request handling
    // (argument list illustrative):
    // registry.record_http_request("GET", "/users", 200, elapsed);
    let _ = registry;
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    init_logging(&Default::default())?;
    let registry = Arc::new(MetricsRegistry::new());
    // Wire `metrics_handler` and `handle_request` into your HTTP server here.
    let _ = registry;
    Ok(())
}
```
gRPC Server with Tracing
A sketch only; the gRPC server glue and config fields are illustrative.

```rust
use mockforge_observability::tracing::{init_with_otel, OtelTracingConfig}; // paths assumed
use mockforge_observability::MetricsRegistry;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize distributed tracing before serving gRPC traffic
    init_with_otel(OtelTracingConfig::default()).await?;

    let registry = MetricsRegistry::new();
    // Inside your service handlers, record per-call metrics
    // (argument list illustrative):
    // registry.record_grpc_request("GetUser", "OK", elapsed);
    let _ = registry;
    Ok(())
}
```
Performance Considerations
- Metrics Overhead: Minimal performance impact with efficient metric collection
- Logging Performance: JSON formatting adds small overhead, file I/O can be async
- Tracing Sampling: Use sampling rates to control tracing volume in production
- System Metrics: Collection interval can be adjusted based on monitoring needs
- Memory Usage: Metrics registries use bounded memory with cleanup mechanisms
Troubleshooting
Common Issues
Metrics not appearing:
- Check Prometheus scrape configuration
- Verify metrics endpoint is accessible
- Ensure metrics are being recorded before scraping
Logs not structured:
- Verify JSON format is enabled in LoggingConfig
- Check log level settings
- Ensure tracing subscriber is properly initialized
Tracing not working:
- Verify Jaeger/OTLP endpoint is accessible
- Check service name configuration
- Ensure sampling rate allows traces through
High memory usage:
- Adjust log file rotation settings
- Reduce system metrics collection frequency
- Check for metric registry leaks
Development
Testing Observability Features
Custom Metrics
```rust
use lazy_static::lazy_static;
use prometheus::{register_histogram, register_int_counter, Histogram, IntCounter};

// Register custom metrics (metric names here are illustrative)
lazy_static! {
    static ref CUSTOM_COUNTER: IntCounter =
        register_int_counter!("myapp_events_total", "Total custom events").unwrap();
    static ref CUSTOM_HISTOGRAM: Histogram =
        register_histogram!("myapp_op_duration_seconds", "Operation duration").unwrap();
}

// Use custom metrics
CUSTOM_COUNTER.inc();
let _timer = CUSTOM_HISTOGRAM.start_timer(); // Measures until dropped
```
Examples
See the examples directory for complete working examples including:
- Full observability stack setup
- Custom metrics implementation
- Distributed tracing configuration
- Log aggregation patterns
- Performance monitoring dashboards
Related Crates
- `mockforge-core`: Core mocking functionality
- `prometheus`: Metrics collection library
- `tracing`: Logging and tracing framework
- `opentelemetry`: Observability standards
License
Licensed under MIT OR Apache-2.0