# torsh-utils

Comprehensive utilities and tools for the ToRSh deep learning framework.
## Overview
This crate provides essential utilities for model development, debugging, optimization, and deployment in the ToRSh ecosystem. It includes benchmarking tools, profiling utilities, TensorBoard integration, mobile optimization, and development environment management.
## Features
- Benchmarking: Performance analysis and model benchmarking tools
- Profiling: Bottleneck detection and performance profiling
- TensorBoard Integration: Logging and visualization support
- Mobile Optimization: Model optimization for mobile deployment
- Environment Collection: Development environment diagnostics
- C++ Extensions: Build system for custom C++ operations
- Model Zoo: Model repository and management utilities
## Modules

- `benchmark`: Model benchmarking and performance analysis
- `bottleneck`: Performance bottleneck detection and profiling
- `tensorboard`: TensorBoard logging and visualization
- `mobile_optimizer`: Mobile deployment optimization
- `collect_env`: Environment and system information collection
- `cpp_extension`: C++ extension building utilities
- `model_zoo`: Model repository management
## Usage

### Benchmarking

```rust
use torsh_utils::benchmark::*;
use torsh_nn::Module;

// Benchmark a model. Configuration fields are elided here; adjust the
// `BenchmarkConfig` to suit your workload.
let config = BenchmarkConfig::default();
let result = benchmark_model(&model, &config)?;
println!("{result:?}");
```
### Profiling Bottlenecks

```rust
use torsh_utils::bottleneck::*;

// Profile model bottlenecks
let report = profile_bottlenecks(&model, &input)?;
println!("{report:?}");
for (op, time) in &report.operation_times {
    println!("{op}: {time:?}");
}
```
### TensorBoard Integration

```rust
use torsh_utils::tensorboard::*;

// Create a TensorBoard writer (log directory and tag names are illustrative)
let mut writer = SummaryWriter::new("runs/experiment")?;

// Log scalars
writer.add_scalar("loss/train", loss, step)?;
writer.add_scalar("accuracy/train", accuracy, step)?;

// Log histograms (parameter name is illustrative)
let weights: Tensor = model.get_parameter("layer1.weight")?;
writer.add_histogram("weights/layer1", &weights, step)?;

writer.close()?;
```
### Mobile Optimization

```rust
use torsh_utils::mobile_optimizer::*;

// Optimize a model for mobile deployment. Configuration fields are elided;
// see `MobileOptimizerConfig` for the available options.
let config = MobileOptimizerConfig::default();
let optimized_model = optimize_for_mobile(&model, &config)?;
```
### Environment Collection

```rust
use torsh_utils::collect_env::*;

// Collect environment information (field names are illustrative)
let env_info = collect_env()?;
println!("OS: {}", env_info.os);
println!("CPU: {}", env_info.cpu);
println!("Memory: {}", env_info.memory);
```
## Feature Flags

### Default Features

- `std`: Standard library support
- `tensorboard`: TensorBoard integration
### Optional Features

- `profiling`: Advanced profiling capabilities
- `mobile`: Mobile optimization tools
- `cpp-extensions`: C++ extension building
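Optional features are enabled through Cargo in the usual way. A sketch (the version requirement is a placeholder; pin a real version in practice):

```toml
[dependencies]
torsh-utils = { version = "*", features = ["profiling", "mobile", "cpp-extensions"] }
```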
## Dependencies

- `torsh-core`: Core types and device abstraction
- `torsh-tensor`: Tensor operations
- `torsh-nn`: Neural network modules
- `torsh-profiler`: Performance profiling
- `reqwest`: HTTP client for model downloads
- `prometheus`: Metrics collection
- `sysinfo`: System information gathering
## Performance
torsh-utils is optimized for:
- Minimal overhead benchmarking with high-resolution timing
- Efficient memory usage tracking and analysis
- Low-latency profiling with minimal instrumentation impact
- Streaming TensorBoard logging for large-scale training
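The warmup-then-measure pattern behind low-overhead benchmarking can be sketched with only the standard library. The helper below is illustrative, not part of the torsh-utils API:

```rust
use std::time::{Duration, Instant};

/// Time a closure: run `warmup` untimed iterations to stabilize caches and
/// branch predictors, then return the mean duration over `iters` timed runs.
fn time_mean<F: FnMut()>(mut f: F, warmup: usize, iters: usize) -> Duration {
    for _ in 0..warmup {
        f();
    }
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    // A single Instant read before and after the loop keeps timer overhead
    // out of the per-iteration cost.
    start.elapsed() / iters as u32
}

fn main() {
    // A trivial workload stands in for a model's forward pass.
    let mut acc = 0u64;
    let mean = time_mean(|| acc = acc.wrapping_add(1), 10, 100);
    println!("mean iteration time: {mean:?}");
}
```

Timing the whole loop once, rather than each iteration, is what keeps instrumentation overhead minimal.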
## Compatibility
Designed to integrate seamlessly with:
- PyTorch TensorBoard logs (compatible format)
- Standard ML development workflows
- CI/CD pipelines for model validation
- Mobile deployment pipelines (iOS/Android)
## Examples

See the `examples/` directory for:
- Comprehensive benchmarking workflows
- Profiling and optimization guides
- TensorBoard integration examples
- Mobile deployment tutorials