torsh-cli
Command-line tools for the ToRSh deep learning framework, with a PyTorch-compatible CLI.
Overview
This crate provides a comprehensive command-line interface for ToRSh, enabling model training, inference, conversion, benchmarking, and management directly from the terminal. It offers a familiar experience for users coming from PyTorch while leveraging Rust's performance and safety.
Features
- Model Training: Train models with configuration files or command-line arguments
- Inference: Run predictions on trained models with various input formats
- Model Conversion: Convert between different model formats (ONNX, TorchScript, etc.)
- Benchmarking: Profile and benchmark model performance
- Model Hub: Download and manage pre-trained models
- Dataset Management: Download and prepare datasets for training
- Quantization: Quantize models for deployment
- Profiling: Analyze model performance and memory usage
- Interactive Mode: REPL for quick experimentation
Installation
Install with Cargo. A minimal sketch, assuming the crate is published under the name torsh-cli:
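```bash
# Install from source
cargo install --path .

# Install with all features
cargo install torsh-cli --all-features
```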
Usage
Basic Commands
The examples in this README assume the installed binary is named torsh and exposes clap-style subcommands. The subcommand and flag names are illustrative sketches, not the definitive interface; run torsh --help for the actual options. A sketch of the core workflow:
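```bash
# Display help
torsh --help

# Train a model
torsh train --config training_config.yaml

# Run inference
torsh infer --model model.pt --input image.jpg

# Benchmark a model
torsh bench --model model.pt

# Convert model format
torsh convert --input model.pt --output model.onnx

# Download pre-trained model from hub
torsh hub download resnet18
```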
Training
From Configuration File
A sketch of config-driven training; the --config and --override flags are assumptions:
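```bash
# Train with YAML config
torsh train --config training_config.yaml

# Override config parameters
torsh train --config training_config.yaml \
  --override training.epochs=100 \
  --override training.learning_rate=0.001
```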
Example training_config.yaml:
```yaml
model:
  type: resnet
  num_classes: 10
  pretrained: false

data:
  train_path: ./data/train
  val_path: ./data/val
  batch_size: 64
  num_workers: 4

training:
  epochs: 50
  learning_rate: 0.01
  optimizer: sgd
  momentum: 0.9
  weight_decay: 0.0001

scheduler:
  type: step
  step_size: 30
  gamma: 0.1

checkpoints:
  save_dir: ./checkpoints
  save_interval: 5
```
Direct Command-Line Arguments
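Common hyperparameters can also be passed directly, without a config file. A sketch with illustrative flag names:

```bash
# Mirror training_config.yaml on the command line
torsh train \
  --model resnet --num-classes 10 \
  --data ./data/train --val-data ./data/val \
  --epochs 50 --batch-size 64 \
  --lr 0.01 --optimizer sgd
```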
Inference
Illustrative inference invocations, under the same assumptions as above:
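```bash
# Single file inference
torsh infer --model model.pt --input image.jpg

# Batch inference
torsh infer --model model.pt --input ./images/ --batch-size 32

# Streaming inference
torsh infer --model model.pt --stream
```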
Benchmarking
Benchmarking sketches; flag names are assumptions:
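```bash
# Benchmark model throughput
torsh bench --model model.pt --iterations 100

# Profile memory usage
torsh bench --model model.pt --memory

# Compare multiple models
torsh bench --compare model_a.pt model_b.pt
```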
Model Conversion
Conversion sketches; format names and flags are assumptions:
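```bash
# Convert PyTorch to ONNX
torsh convert --input model.pt --output model.onnx

# Convert to TorchScript
torsh convert --input model.pt --output model.ts --format torchscript

# Quantize during conversion
torsh convert --input model.pt --output model_int8.onnx --quantize int8
```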
Model Hub
Hub sketches; the hub subcommand layout is an assumption:
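```bash
# List available models
torsh hub list

# Search for models
torsh hub search resnet

# Download model
torsh hub download resnet18

# Upload model to hub
torsh hub upload ./checkpoints/model.pt
```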
Dataset Management
Dataset sketches; the data subcommand layout is an assumption:
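```bash
# List available datasets
torsh data list

# Download dataset
torsh data download cifar10

# Prepare custom dataset
torsh data prepare ./raw_data --output ./data

# Split dataset
torsh data split ./data --train 0.8 --val 0.1 --test 0.1
```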
Quantization
Quantization sketches; the mode and dtype flags are assumptions:
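```bash
# Quantize model to INT8
torsh quantize --model model.pt --dtype int8

# Dynamic quantization
torsh quantize --model model.pt --mode dynamic

# QAT (Quantization-Aware Training)
torsh quantize --model model.pt --mode qat --config training_config.yaml
```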
Profiling
Profiling sketches; the report and per-layer flags are assumptions:
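```bash
# Profile model execution
torsh profile --model model.pt

# Generate detailed report
torsh profile --model model.pt --report profile_report.html

# Layer-wise profiling
torsh profile --model model.pt --per-layer
```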
Interactive Mode
REPL sketches; the repl subcommand is an assumption:
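```bash
# Start interactive REPL
torsh repl

# Load model in REPL
torsh repl --model model.pt
```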
Within the REPL, a sketch of a session; the Python-like syntax, the tensor constructors, and the model binding are assumptions:

```text
>>> a = randn(3, 3)
>>> b = randn(3, 3)
>>> c = a @ b
>>> print(c)
>>> x = randn(1, 3, 224, 224)
>>> y = model(x)
```
Configuration
Global Configuration
The CLI uses a global configuration file located at ~/.torsh/config.toml. A sketch of its layout; the table and key names are assumptions:

```toml
[general]  # table and key names are assumptions
device = "cuda:0"
dtype = "float32"
num_threads = 4

[hub]
cache_dir = "~/.torsh/hub"
url = "https://hub.torsh.ai"

[logging]
level = "info"
format = "text"

[performance]
workers = 8
pin_memory = true
```
Environment Variables
Environment variables override the global configuration. The variable names below are assumptions:
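```bash
# Set default device
export TORSH_DEVICE=cuda:0

# Set cache directory
export TORSH_CACHE_DIR=~/.torsh/hub

# Set log level
export TORSH_LOG_LEVEL=debug

# Number of threads
export TORSH_NUM_THREADS=8
```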
Advanced Features
Custom Scripts
Script-running sketches; the run subcommand is an assumption:
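```bash
# Run Python-like script
torsh run script.py

# With arguments
torsh run script.py -- --input data.csv --epochs 10
```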
Model Inspection
Inspection sketches; flag names are assumptions:
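```bash
# Show model architecture
torsh inspect --model model.pt

# Show detailed layer information
torsh inspect --model model.pt --verbose

# Export to dot format for visualization
torsh inspect --model model.pt --format dot > model.dot
```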
Distributed Training
A launch sketch; the distributed flags are assumptions:
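```bash
# Launch distributed training across 4 processes
torsh train --config training_config.yaml --distributed --nproc 4
```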
Shell Completion
Generate shell completion scripts. A completions subcommand is an assumption, though clap-based CLIs commonly provide one:
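```bash
# Bash
torsh completions bash > ~/.local/share/bash-completion/completions/torsh

# Zsh
torsh completions zsh > ~/.zfunc/_torsh

# Fish
torsh completions fish > ~/.config/fish/completions/torsh.fish

# PowerShell
torsh completions powershell > _torsh.ps1
```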
Examples
Train ResNet on CIFAR-10
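A sketch tying together the data and train commands shown above; all flags are assumptions:

```bash
# Download the dataset, then train with the earlier config
torsh data download cifar10
torsh train --config training_config.yaml
```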
Fine-tune Pre-trained Model
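A sketch starting from hub weights with a reduced learning rate; flags are assumptions:

```bash
# Fetch pretrained weights and fine-tune briefly
torsh hub download resnet18
torsh train --model resnet18 --pretrained --lr 0.001 --epochs 10
```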
Export for Production
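A sketch of a convert, quantize, and benchmark pipeline; flags are assumptions:

```bash
# Convert, quantize, and benchmark the deployable artifact
torsh convert --input checkpoints/best.pt --output model.onnx
torsh quantize --model model.onnx --dtype int8 --output model_int8.onnx
torsh bench --model model_int8.onnx
```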
Integration with SciRS2
This crate leverages the SciRS2 ecosystem for:
- High-performance tensor operations through scirs2-core
- Neural network implementations via scirs2-neural
- Optimization algorithms from scirs2-optimize and optirs
- Metrics and evaluation through scirs2-metrics
All operations follow the SciRS2 POLICY for consistent, maintainable code.
Development
Building
Standard Cargo builds:
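```bash
# Build CLI
cargo build

# Build with all features
cargo build --all-features

# Release build
cargo build --release
```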
Testing
Run the test suite with Cargo:
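```bash
# Run tests
cargo test

# Integration tests
cargo test --tests
```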
License
Licensed under the Apache License, Version 2.0. See LICENSE for details.