# System Analysis
A comprehensive Rust library for analyzing system capabilities, workload requirements, and optimal resource allocation. This crate provides tools for determining if a system can run specific workloads, scoring hardware capabilities, and recommending optimal configurations with a focus on AI/ML workloads.
## Features

- Comprehensive System Analysis: Detailed hardware capability assessment including CPU, GPU, memory, storage, and network
- AI/ML Specialization: Built-in support for AI inference and training workloads with model parameter analysis
- Workload Modeling: Flexible framework for defining workload requirements and characteristics
- Compatibility Checking: Determine whether a system can run specific workloads, with detailed compatibility scoring
- Resource Utilization Prediction: Estimate resource usage patterns for workloads
- Bottleneck Detection: Identify system bottlenecks and performance limitations
- Upgrade Recommendations: Suggest specific hardware upgrades for better performance
- Optimal Configuration: Find the best hardware configuration for specific workloads
- Cross-Platform Support: Works on Windows, Linux, and macOS
- Performance Benchmarking: Built-in benchmarking tools for capability validation
- Trend Analysis: Track performance trends over time
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
system-analysis = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
```
## Quick Start

### Basic System Analysis

```rust
use system_analysis::SystemAnalyzer;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let analyzer = SystemAnalyzer::new();

    // Analyze the current system's capabilities
    let system_profile = analyzer.analyze_system().await?;
    println!("System profile: {system_profile:?}");

    Ok(())
}
```
### AI Workload Analysis

```rust
use system_analysis::SystemAnalyzer;
use system_analysis::workloads::AIInferenceWorkload;
use system_analysis::types::ModelParameters;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let analyzer = SystemAnalyzer::new();
    let system_profile = analyzer.analyze_system().await?;

    // Describe the model to run (constructor arguments illustrative;
    // see the ModelParameters documentation for the actual fields)
    let model = ModelParameters::new(/* e.g. parameter count, precision */);
    let workload = AIInferenceWorkload::new(model);

    // Check whether this system can run the workload (signature illustrative)
    let compatibility = analyzer.check_compatibility(&system_profile, &workload)?;
    println!("Compatible: {compatibility:?}");

    Ok(())
}
```
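A useful rule of thumb behind model parameter analysis, independent of this crate: a model's weight memory is roughly its parameter count times the bytes per parameter. A self-contained sketch of that arithmetic (illustration only, not part of the crate's API):

```rust
/// Rough weight-memory estimate in GB: billions of parameters times
/// bytes per parameter. Ignores activations and KV cache, which add more.
/// (Illustration only; not part of the system-analysis API.)
fn model_memory_gb(params_billions: f64, bytes_per_param: f64) -> f64 {
    // billions of params × bytes each = gigabytes (decimal)
    params_billions * bytes_per_param
}

fn main() {
    // A 7B-parameter model in fp16 (2 bytes/param) needs ~14 GB for weights alone.
    let gb = model_memory_gb(7.0, 2.0);
    println!("{gb}"); // 14
}
```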
## Core Concepts

### System Profile

The `SystemProfile` type represents a comprehensive analysis of system capabilities:
```rust
let system_profile = analyzer.analyze_system().await?;

// Access individual capability scores (accessor names illustrative)
println!("CPU: {:?}", system_profile.cpu_score());
println!("GPU: {:?}", system_profile.gpu_score());
println!("Memory: {:?}", system_profile.memory_score());

// Get the AI capabilities assessment
let ai_capabilities = system_profile.ai_capabilities;
println!("AI capabilities: {ai_capabilities:?}");
```
### Workload Requirements

Define what your workload needs to run effectively:

```rust
use system_analysis::WorkloadRequirements;

let mut requirements = WorkloadRequirements::new("my-workload"); // name illustrative

// Add resource requirements; the requirement values are sketched here,
// see the WorkloadRequirements documentation for the concrete types
requirements.add_resource_requirement(/* e.g. 8 GB of ResourceType::Memory */);
requirements.add_resource_requirement(/* e.g. 6 GB of ResourceType::Gpu VRAM */);
```
### Compatibility Analysis

Check if a system can run your workload:

```rust
// Signature illustrative
let compatibility = analyzer.check_compatibility(&system_profile, &requirements)?;

println!("Compatible: {:?}", compatibility.is_compatible());
println!("Score: {:?}", compatibility.score());

// Get missing requirements
for missing in compatibility.missing_requirements() {
    println!("Missing: {missing:?}");
}
```
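Conceptually, a compatibility score can be modeled as the worst-case ratio of available to required resources. A self-contained sketch of that idea (not the crate's actual algorithm):

```rust
/// Toy compatibility score: for each (required, available) pair, take
/// available/required capped at 1.0, then report the minimum across resources,
/// so the scarcest resource dominates the score.
/// (Concept illustration only; not the crate's algorithm.)
fn compatibility_score(pairs: &[(f64, f64)]) -> f64 {
    pairs
        .iter()
        .map(|&(required, available)| (available / required).min(1.0))
        .fold(1.0, f64::min)
}

fn main() {
    // Needs 8 GB RAM (16 available) and 12 GB VRAM (only 8 available):
    let score = compatibility_score(&[(8.0, 16.0), (12.0, 8.0)]);
    println!("{score:.2}"); // limited by VRAM, 8/12 ≈ 0.67
}
```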
### Resource Utilization Prediction

Estimate how your workload will use system resources:

```rust
let utilization = analyzer.predict_utilization(&requirements)?;

// Accessor names illustrative
println!("CPU: {:?}", utilization.cpu());
println!("Memory: {:?}", utilization.memory());
println!("GPU: {:?}", utilization.gpu());
```
### Upgrade Recommendations

Get specific recommendations for improving system performance:

```rust
let upgrades = analyzer.recommend_upgrades(&requirements)?;
for upgrade in upgrades {
    println!("{upgrade:?}");
}
```
## Advanced Usage

### Custom Workloads

Implement the `Workload` trait for custom workload types:

```rust
use system_analysis::workloads::Workload;

struct CustomWorkload;

impl Workload for CustomWorkload {
    // Provide the trait's required methods here; the exact
    // signatures are listed in the `Workload` trait documentation.
}
```
### GPU Detection

Enable GPU detection for NVIDIA GPUs:

```toml
[dependencies]
system-analysis = { version = "0.1.0", features = ["gpu-detection"] }
```
### Performance Benchmarking

Run performance benchmarks to validate system capabilities:

```rust
use system_analysis::utils::BenchmarkRunner;
use std::time::Duration;

// Constructor arguments illustrative
let benchmark = BenchmarkRunner::new(Duration::from_secs(5));

let cpu_result = benchmark.run_cpu_benchmark()?;
let memory_result = benchmark.run_memory_benchmark()?;

println!("CPU benchmark: {cpu_result:?}");
println!("Memory benchmark: {memory_result:?}");
```
## Examples

The `examples/` directory contains comprehensive examples:

- `basic_analysis.rs`: Basic system analysis and workload compatibility
- `ai_workload_analysis.rs`: AI/ML workload analysis with multiple models

Run examples with:

```bash
cargo run --example basic_analysis
cargo run --example ai_workload_analysis
```
## Configuration

Customize the analyzer behavior:

```rust
use system_analysis::{AnalyzerConfig, SystemAnalyzer};

// Field names illustrative; see the AnalyzerConfig docs for the actual options
let config = AnalyzerConfig::default();
let analyzer = SystemAnalyzer::with_config(config);
```
## API Reference

### Main Types

- `SystemAnalyzer`: Main analyzer for system capabilities
- `SystemProfile`: Comprehensive system analysis results
- `WorkloadRequirements`: Specification of workload needs
- `CompatibilityResult`: Results of compatibility analysis
- `ResourceUtilization`: Resource usage predictions
- `UpgradeRecommendation`: Hardware upgrade suggestions
### Resource Types

- `ResourceType`: CPU, GPU, Memory, Storage, Network
- `CapabilityLevel`: VeryLow, Low, Medium, High, VeryHigh, Exceptional
- `ResourceAmount`: Different ways to specify resource amounts
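To make these shapes concrete, here is a self-contained sketch of how such enums are commonly modeled in Rust; the crate's actual definitions may differ:

```rust
/// Hypothetical shape for illustration; the crate's real definition may differ.
/// Deriving PartialOrd on a field-less enum orders variants by declaration,
/// so capability levels compare naturally.
#[derive(Debug, PartialEq, PartialOrd)]
enum CapabilityLevel {
    VeryLow,
    Low,
    Medium,
    High,
    VeryHigh,
    Exceptional,
}

/// Hypothetical shape for "different ways to specify resource amounts".
#[derive(Debug)]
enum ResourceAmount {
    Gigabytes(f64),  // absolute size, e.g. memory or VRAM
    Cores(u32),      // discrete units, e.g. CPU cores
    Percentage(f64), // share of the total resource
}

fn main() {
    assert!(CapabilityLevel::High > CapabilityLevel::Medium);
    let amounts = [
        ResourceAmount::Gigabytes(16.0),
        ResourceAmount::Cores(8),
        ResourceAmount::Percentage(75.0),
    ];
    println!("{amounts:?}");
}
```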
### Workload Types

- `AIInferenceWorkload`: AI model inference workloads
- `ModelParameters`: AI model parameter specifications
- Custom workloads via the `Workload` trait
## Error Handling

The crate uses the `SystemAnalysisError` enum for comprehensive error handling:

```rust
use system_analysis::SystemAnalysisError;

match result {
    Ok(profile) => println!("{profile:?}"),
    // Match individual SystemAnalysisError variants here as needed;
    // the variants are listed in the enum's documentation.
    Err(e) => eprintln!("analysis failed: {e}"),
}
```
## Platform Support

- Windows: Full support with Windows-specific optimizations
- Linux: Full support with Linux-specific hardware detection
- macOS: Full support with macOS-specific APIs
## Performance Considerations
- System analysis results are cached for 5 minutes by default
- GPU detection can be disabled to reduce startup time
- Network testing is disabled by default as it can be slow
- Benchmarking is optional and disabled by default
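The five-minute result cache mentioned above can be sketched with `std` types alone. This is a concept illustration, not the crate's internal implementation:

```rust
use std::time::{Duration, Instant};

/// Minimal time-to-live cache for one analysis result.
/// (Concept sketch; the crate's internal cache may differ.)
struct CachedProfile<T> {
    value: Option<(T, Instant)>,
    ttl: Duration,
}

impl<T: Clone> CachedProfile<T> {
    fn new(ttl: Duration) -> Self {
        Self { value: None, ttl }
    }

    /// Return the cached value if it is still fresh, else recompute and store it.
    fn get_or_analyze(&mut self, analyze: impl FnOnce() -> T) -> T {
        if let Some((v, at)) = &self.value {
            if at.elapsed() < self.ttl {
                return v.clone(); // still fresh: serve from cache
            }
        }
        let v = analyze(); // stale or empty: recompute
        self.value = Some((v.clone(), Instant::now()));
        v
    }
}

fn main() {
    let mut cache = CachedProfile::new(Duration::from_secs(300)); // 5 minutes
    let first = cache.get_or_analyze(|| 42u32);  // computed
    let second = cache.get_or_analyze(|| 99u32); // served from cache, closure not run
    assert_eq!((first, second), (42, 42));
    println!("ok");
}
```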
## Contributing

1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## Changelog

### 0.1.0 (Initial Release)
- Basic system analysis functionality
- AI workload specialization
- Compatibility checking
- Resource utilization prediction
- Upgrade recommendations
- Cross-platform support
- Comprehensive examples and documentation
## Advanced Usage

### Custom Workload Definition

```rust
use system_analysis::workloads::Workload;

// Define a bespoke workload type and implement the Workload trait for it;
// the required methods are listed in the trait's documentation.
struct BatchImagePipeline;

impl Workload for BatchImagePipeline {
    // ...
}
```

### System Capability Profiling

```rust
use system_analysis::SystemAnalyzer;

// Profile the system, then drill into individual assessments
// (accessor names as in the API Reference above).
let analyzer = SystemAnalyzer::new();
let profile = analyzer.analyze_system().await?;
let ai_capabilities = profile.ai_capabilities;
println!("{ai_capabilities:?}");
```

### Advanced Performance Benchmarking

```rust
use system_analysis::utils::BenchmarkRunner;
use std::time::Duration;

// Combine the CPU and memory benchmarks for a fuller picture
// (constructor arguments illustrative).
let runner = BenchmarkRunner::new(Duration::from_secs(5));
let cpu = runner.run_cpu_benchmark()?;
let memory = runner.run_memory_benchmark()?;
```
## Architecture

The crate is organized into several key modules:

- `analyzer`: Main system analysis logic and orchestration
- `capabilities`: Hardware capability assessment and scoring
- `workloads`: Workload definitions and AI/ML specializations
- `resources`: Resource management and requirement modeling
- `types`: Core data types and structures
- `utils`: Utility functions and helper tools
- `error`: Comprehensive error handling
## Testing

Run the comprehensive test suite:

```bash
# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run specific test modules (substitute the module name)
cargo test analyzer

# Run benchmarks
cargo bench
```
## Benchmarking

The crate includes comprehensive benchmarks:

```bash
# Run all benchmarks
cargo bench

# Run specific benchmarks (substitute the benchmark name)
cargo bench <benchmark-name>

# HTML benchmark reports: if the benches use Criterion, reports are
# written to target/criterion/ when you run `cargo bench`
```
## Use Cases

### AI/ML Model Deployment
- Model Compatibility: Check if your system can run specific AI models
- Performance Estimation: Predict inference speed and resource usage
- Hardware Recommendations: Get specific upgrade suggestions for AI workloads
- Batch Processing: Optimize batch sizes based on available resources
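As a back-of-the-envelope illustration of batch sizing (independent of this crate's estimator): the largest batch is bounded by the memory left after loading model weights. A self-contained sketch:

```rust
/// Largest batch that fits in the VRAM left after loading the weights.
/// (Toy arithmetic for illustration; real estimators also account for
/// activations, KV cache, and framework overhead.)
fn max_batch_size(vram_gb: f64, weights_gb: f64, per_item_gb: f64) -> u32 {
    ((vram_gb - weights_gb) / per_item_gb).floor().max(0.0) as u32
}

fn main() {
    // 24 GB GPU, 14 GB of weights, ~1.5 GB per concurrent request:
    println!("{}", max_batch_size(24.0, 14.0, 1.5)); // 6
}
```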
### System Planning
- Hardware Procurement: Make informed decisions about hardware purchases
- Capacity Planning: Understand current and future resource needs
- Bottleneck Analysis: Identify and resolve system limitations
- Cost Optimization: Balance performance and cost requirements
### DevOps and Infrastructure
- Container Sizing: Right-size containers based on workload requirements
- Auto-scaling: Make intelligent scaling decisions based on system capabilities
- Resource Allocation: Optimize resource distribution across workloads
- Performance Monitoring: Track system capability trends over time
## Feature Flags

Enable optional features based on your needs:

```toml
[dependencies]
system-analysis = { version = "0.1.0", features = ["gpu-detection"] }
```

Available features:

- `gpu-detection`: Enable NVIDIA GPU detection and CUDA capabilities
- `advanced-benchmarks`: Include additional benchmarking tools
- `ml-models`: Extended AI/ML model support and optimizations
## Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

### Development Setup

Standard Cargo workflow: clone the repository, then `cargo build` and `cargo test`.
### Contribution Areas
- Hardware Support: Add support for new hardware types
- Workload Types: Implement new workload categories
- Platform Support: Enhance cross-platform compatibility
- Performance: Optimize analysis algorithms
- Documentation: Improve docs and examples
## License

This project is licensed under either of

- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
## Acknowledgments
- The Rust community for excellent system information crates
- Contributors and testers who help improve this project
- AI/ML community for workload insights and requirements
## Roadmap
- Support for ARM and Apple Silicon optimization
- Integration with cloud provider APIs
- Real-time system monitoring capabilities
- Machine learning-based performance prediction
- Web-based dashboard for system analysis
- Integration with container orchestration platforms
- Support for specialized AI hardware (TPUs, FPGAs)
For more detailed documentation, please visit docs.rs/system-analysis.