KotobaDB Benchmarking Suite
A comprehensive performance benchmarking framework for KotobaDB with advanced analytics, trend analysis, and regression detection.
Features
- Comprehensive Workloads: CRUD operations, query performance, transaction throughput, memory usage, storage operations
- Advanced Analytics: Performance trend analysis, bottleneck identification, statistical significance testing
- Regression Detection: Automated performance regression detection with baseline comparisons
- Real-time Monitoring: Live metrics collection during benchmark execution
- Multiple Report Formats: JSON, CSV, HTML reports with charts and detailed analysis
- Workload Generation: Realistic application patterns (YCSB, social network, e-commerce)
Quick Start

// Type and method names below are illustrative; consult the crate docs for the exact APIs.
use kotoba_bench::*;
use kotoba_db::DB;

// Create a database instance
let db = DB::open_lsm("./bench-data").await?;

// Create a benchmark configuration
let config = BenchmarkConfig::default();

// Run a CRUD benchmark
let crud_benchmark = CrudBenchmark::new(db.clone());
let runner = BenchmarkRunner::new(config);
let result = runner.run_benchmark(&crud_benchmark).await?;

// Generate reports
let reporter = Reporter::new("./reports");
reporter.generate_reports(&result)?;
Benchmark Types

CRUD Operations Benchmark

// Operation mix arguments are illustrative (reads, inserts, updates, deletes).
let crud_benchmark = CrudBenchmark::new(db.clone())
    .with_operation_mix(0.4, 0.3, 0.2, 0.1);

Query Performance Benchmark

let query_benchmark = QueryBenchmark::new(db.clone());

Transaction Throughput Benchmark

let tx_benchmark = TransactionBenchmark::new(db.clone(), 10); // 10 operations per transaction

Memory Usage Benchmark

let memory_benchmark = MemoryBenchmark::new(db.clone(), 1024 * 1024); // 1 MB per operation

Storage Operations Benchmark

let storage_benchmark = StorageBenchmark::new(db.clone());
Advanced Analytics

Performance Analysis

// Type, method, and field names are illustrative.
let analyzer = PerformanceAnalyzer::new();
analyzer.add_result(&result);
let analysis = analyzer.analyze();
println!("Throughput: {:.0} ops/sec", analysis.throughput);
println!("p99 latency: {:?}", analysis.p99_latency);
println!("Bottlenecks: {:?}", analysis.bottlenecks);

Trend Analysis

let mut trend_analyzer = TrendAnalyzer::new();
trend_analyzer.add_snapshot(&result);
let trends = trend_analyzer.analyze_trends();
println!("Trends: {:?}", trends);

Regression Detection

let mut comparator = RegressionComparator::new();
comparator.set_baseline(&baseline_result);
if let Some(regression) = comparator.compare(&result) {
    eprintln!("Performance regression detected: {:?}", regression);
}
Configuration

Benchmark Configuration

// Field names and values are illustrative; see BenchmarkConfig in the crate docs.
let config = BenchmarkConfig {
    warmup_duration: Duration::from_secs(10),
    run_duration: Duration::from_secs(60),
    concurrency: 8,
    ..Default::default()
};

Load Patterns

// Generator names and parameters are illustrative.
use kotoba_bench::load::*;

// Ramp up load: 100 -> 10,000 ops/sec over 60 s
let ramp_up_generator = RampUpLoadGenerator::new(100, 10_000, Duration::from_secs(60));

// Bursty load: 1,000 ops/sec baseline with 10,000 ops/sec bursts
let bursty_generator = BurstyLoadGenerator::new(1_000, 10_000, Duration::from_secs(5));

// Spike load: sudden jump from 1,000 to 50,000 ops/sec
let spike_generator = SpikeLoadGenerator::new(1_000, 50_000);
Reports and Output

Console Reports

Benchmark runs print a formatted summary of the key metrics (throughput, latency percentiles, error rate) to the console.
HTML Reports
Interactive HTML reports with charts:
- Throughput over time
- Latency distribution histograms
- Memory usage trends
- Error rate monitoring
- Performance comparison charts
JSON/CSV Export
Structured data export for the following (a minimal JSON sketch follows the list):
- CI/CD integration
- Historical trend analysis
- Custom dashboard creation
- Performance regression tracking
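As a minimal sketch of the CI path, assuming the result type implements serde::Serialize (an assumption; the crate may ship its own exporters):

use std::fs::File;

// Hypothetical: serialize a benchmark result to a JSON file as a CI artifact.
let file = File::create("bench-results.json")?;
serde_json::to_writer_pretty(file, &result)?;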
Performance Profiling

Real-time Profiling

// Names are illustrative.
let mut profiler = Profiler::new();
profiler.start_profiling();

// During benchmark execution
profiler.sample();                   // Collect current metrics
profiler.record_event("checkpoint"); // Record a custom event

// Generate a profiling report
let report = profiler.generate_report();
println!("{report}");

Custom Metrics

// Event names are illustrative.
profiler.record_event("cache_miss");
profiler.record_event("compaction_started");
profiler.record_event("flush_completed");
Baseline Management

Setting Baselines

let baseline_result = runner.run_benchmark(&crud_benchmark).await?;
save_baseline("crud", &baseline_result)?; // arguments are illustrative

Regression Alerts

let current_result = runner.run_benchmark(&crud_benchmark).await?;
let regression = compare_with_baseline("crud", &current_result)?;
if regression.has_regression {
    eprintln!("Regression against baseline: {:?}", regression);
}
Workload Patterns

YCSB Workloads

// Constructor arguments (read/update fractions) are illustrative.
// YCSB-A: 50% reads, 50% updates
let ycsb_a = YcsbWorkload::new(0.50, 0.50);
// YCSB-B: 95% reads, 5% updates
let ycsb_b = YcsbWorkload::new(0.95, 0.05);
// YCSB-C: 100% reads
let ycsb_c = YcsbWorkload::new(1.00, 0.00);

Application Workloads

// Social network patterns
let social_network = SocialNetworkWorkload::new();
// E-commerce patterns
let ecommerce = EcommerceWorkload::new();
Best Practices
Benchmark Setup
- Warmup: Always include adequate warmup periods
- Steady State: Run benchmarks long enough for stable performance
- Isolation: Run benchmarks on dedicated hardware
- Consistency: Use identical configurations for comparisons
Result Interpretation
- Statistical Significance: Check confidence intervals
- Trend Analysis: Look at performance over time
- Bottleneck Identification: Use profiling to find root causes
- Regression Detection: Compare against known good baselines
Performance Optimization
- Iterative Testing: Make one change at a time
- Measurement Accuracy: Use sufficient sample sizes
- Realistic Workloads: Test with production-like patterns
- Resource Monitoring: Track all relevant metrics
Advanced Usage
Custom Workload Implementation
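A minimal sketch of a custom workload, assuming a Workload trait roughly of this shape (the trait, its methods, and the DB calls here are hypothetical; check the crate for the actual interface):

use async_trait::async_trait;

// Hypothetical trait shape; the crate's actual Workload trait may differ.
#[async_trait]
pub trait Workload {
    /// Human-readable workload name used in reports.
    fn name(&self) -> &str;
    /// Execute one logical operation against the database.
    async fn run_operation(&self, db: &DB) -> anyhow::Result<()>;
}

pub struct ReadHeavyWorkload;

#[async_trait]
impl Workload for ReadHeavyWorkload {
    fn name(&self) -> &str {
        "read-heavy"
    }

    async fn run_operation(&self, db: &DB) -> anyhow::Result<()> {
        // Illustrative 90/10 read/write mix.
        if rand::random::<f32>() < 0.9 {
            db.get("user:42").await?;
        } else {
            db.put("user:42", "payload").await?;
        }
        Ok(())
    }
}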
Integration Testing

// Run multiple benchmarks as a suite; names are illustrative.
let mut suite = BenchmarkSuite::new(config);
suite.add_benchmark(Box::new(crud_benchmark));
suite.add_benchmark(Box::new(query_benchmark));
suite.add_benchmark(Box::new(tx_benchmark));
let results = suite.run_all().await?;
let analysis = analyze_suite(&results);
Performance Metrics
Key Metrics
- Throughput: Operations per second
- Latency: Response time percentiles (p50, p95, p99, p999); a percentile sketch follows this list
- Error Rate: Percentage of failed operations
- Memory Usage: Peak and average memory consumption
- Storage I/O: Read/write throughput and efficiency
- CPU Utilization: Core and system CPU usage
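Percentile metrics are read off the sorted latency samples; here is a minimal nearest-rank sketch in plain Rust (not the crate's implementation):

// Nearest-rank percentile over raw latency samples (assumes a non-empty slice).
fn percentile(samples: &mut [u64], p: f64) -> u64 {
    samples.sort_unstable();
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    samples[rank.clamp(1, samples.len()) - 1]
}

let mut latencies_us = vec![120, 95, 300, 210, 180, 2_500, 140];
println!("p95 = {} µs", percentile(&mut latencies_us, 95.0));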
Statistical Analysis
- Confidence Intervals: Statistical significance of results
- Trend Analysis: Performance changes over time
- Regression Detection: Automated performance degradation alerts
- Stability Metrics: Performance variability analysis
Performance Targets
Throughput Goals
- CRUD Operations: > 5,000 ops/sec
- Query Operations: > 10,000 ops/sec
- Transaction Throughput: > 2,000 tx/sec
Latency Goals
- p95 Latency: < 5ms for typical operations
- p99 Latency: < 20ms for all operations
- Error Rate: < 0.1% under normal load
Scalability Goals
- Linear Scaling: Throughput scales with CPU cores
- Memory Efficiency: < 100MB baseline + 1MB per 1000 ops/sec
- Storage Efficiency: > 80% storage utilization
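These targets can be enforced in CI with plain assertions; a hedged sketch, using hypothetical field names on the result type:

// Field names are hypothetical; adapt to the actual result type.
assert!(result.ops_per_sec > 5_000.0, "CRUD throughput below 5,000 ops/sec target");
assert!(result.error_rate < 0.001, "error rate above 0.1% target");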
Remember: Measure, analyze, optimize, repeat!