# axonml-cli

## Overview
axonml-cli is the official command-line interface for the AxonML machine learning framework. It provides a comprehensive toolkit for managing ML projects, training models, evaluating performance, and deploying to production.
The CLI supports the full ML workflow from project initialization to model serving, with integrations for Kaggle, Weights & Biases, and a built-in model hub.
## Features

- **Project Management** - Create and initialize AxonML projects with customizable templates and configurations.
- **Training & Evaluation** - Train models from configuration files, resume from checkpoints, and evaluate with comprehensive metrics.
- **Model Operations** - Convert between formats (ONNX, SafeTensors, Ferrite), quantize models (Q4, Q8, F16), and inspect architectures.
- **Data Management** - Upload, analyze, validate, and preview datasets with automatic type detection and statistics.
- **Deployment** - Export models for production, start inference servers, and manage dashboard services.
- **Integrations** - Kaggle dataset downloads, W&B experiment tracking, and pretrained model hub access.
- **Terminal UI** - Interactive TUI for exploring models and datasets with real-time visualization.
- **GPU Support** - Detect, benchmark, and manage GPU devices for accelerated training.
## Modules

| Module | Description |
|---|---|
| `cli` | Command-line argument definitions using clap derive macros |
| `commands` | Implementation of all CLI subcommands |
| `config` | Project configuration file parsing (TOML/JSON) |
| `error` | CLI-specific error types and result definitions |
## Installation
Install from crates.io:
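Assuming the crate is published under the name `axonml-cli`:

```shell
cargo install axonml-cli
```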
Or build from source:
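A typical from-source build for a Cargo project (substitute the actual repository URL):

```shell
git clone <repository-url>
cd axonml-cli
cargo build --release
```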
## Command Reference

| Command | Description |
|---|---|
| `new` | Create a new AxonML project |
| `init` | Initialize AxonML in an existing directory |
| `train` | Train a model from configuration |
| `resume` | Resume training from a checkpoint |
| `eval` | Evaluate model performance |
| `predict` | Make predictions with a trained model |
| `convert` | Convert models between formats |
| `export` | Export models for deployment |
| `inspect` | Inspect model architecture |
| `report` | Generate evaluation reports |
| `serve` | Start inference server (feature: `serve`) |
| `wandb` | W&B integration (feature: `wandb`) |
| `upload` | Upload model files |
| `data` | Dataset management |
| `scaffold` | Generate Rust training projects |
| `zip` | Create/extract model bundles |
| `rename` | Rename models and datasets |
| `quant` | Quantize models (Q4, Q8, F16) |
| `load` | Load models/datasets into workspace |
| `analyze` | Comprehensive analysis and reports |
| `bench` | Benchmark models and hardware |
| `gpu` | GPU detection and management |
| `tui` | Launch terminal user interface |
| `kaggle` | Kaggle dataset integration |
| `hub` | Pretrained model hub |
| `dataset` | Dataset management (NexusConnectBridge) |
| `start` | Start dashboard and API server |
| `stop` | Stop running services |
| `status` | Check service status |
| `logs` | View service logs |
## Usage

### Project Commands

- Create a new AxonML project
- Initialize AxonML in an existing directory
- Generate a Rust training project scaffold
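Illustrative invocations (subcommand names are from the command reference; project names and paths are placeholders):

```shell
# Create a new AxonML project
axonml new my-project

# Initialize AxonML in an existing directory
axonml init

# Generate a Rust training project scaffold
axonml scaffold my-training-project
```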
### Training Commands

- Train a model from configuration
- Resume training from a checkpoint
- Evaluate model performance
- Make predictions
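Sketches of the training workflow; the file paths and flags below are illustrative, not verified against the CLI:

```shell
# Train a model from configuration
axonml train --config axonml.toml

# Resume training from a checkpoint
axonml resume checkpoints/epoch-10.ckpt

# Evaluate model performance
axonml eval model.safetensors --data ./data/val

# Make predictions with a trained model
axonml predict model.safetensors input.png
```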
### Model Commands

- Inspect model architecture
- Convert model formats
- Export for deployment
- Quantize model
- Generate evaluation report
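Example model operations (file names and flags are placeholders; the supported formats are ONNX, SafeTensors, and Ferrite per the feature list):

```shell
# Inspect model architecture
axonml inspect model.safetensors

# Convert between model formats
axonml convert model.safetensors model.onnx

# Export for deployment
axonml export model.safetensors --output ./deploy

# Quantize a model (Q4, Q8, F16)
axonml quant model.safetensors --format q8

# Generate an evaluation report
axonml report --output report.html
```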
### Data Commands

- Analyze a dataset
- Upload and configure dataset
- Validate dataset structure
- Preview dataset samples
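Assuming the `data` subcommand nests these operations (the nested verb names and paths are illustrative):

```shell
# Analyze a dataset
axonml data analyze ./data/train

# Upload and configure a dataset
axonml data upload ./data/train

# Validate dataset structure
axonml data validate ./data/train

# Preview dataset samples
axonml data preview ./data/train
```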
### Workspace Commands

- Load model into workspace
- Load dataset into workspace
- Analyze loaded model
- Generate comprehensive report
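A sketch of the workspace flow using the `load` and `analyze` subcommands; arguments and flags are assumptions:

```shell
# Load a model into the workspace
axonml load model.safetensors

# Load a dataset into the workspace
axonml load ./data/train

# Analyze the loaded model
axonml analyze model.safetensors

# Generate a comprehensive report
axonml analyze model.safetensors --report
```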
### Benchmarking

- Benchmark model performance
- Benchmark at different batch sizes
- Compare multiple models
- Benchmark hardware capabilities
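Illustrative `bench` invocations (flag names such as `--batch-sizes` and `--hardware` are placeholders):

```shell
# Benchmark model performance
axonml bench model.safetensors

# Benchmark at different batch sizes
axonml bench model.safetensors --batch-sizes 1,8,32

# Compare multiple models
axonml bench model-a.safetensors model-b.safetensors

# Benchmark hardware capabilities
axonml bench --hardware
```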
### GPU Management

- List available GPUs
- Show detailed GPU information
- Select GPU for training
- Benchmark GPU performance
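Assuming the `gpu` subcommand nests these operations (the nested verbs and device index are illustrative):

```shell
# List available GPUs
axonml gpu list

# Show detailed GPU information
axonml gpu info 0

# Select a GPU for training
axonml gpu select 0

# Benchmark GPU performance
axonml gpu bench
```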
### Hub & Kaggle Integration

- List pretrained models
- Download pretrained weights
- Configure Kaggle credentials
- Search Kaggle datasets
- Download Kaggle dataset
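Example `hub` and `kaggle` invocations; the nested verbs, model name, and dataset slug are placeholders:

```shell
# List pretrained models
axonml hub list

# Download pretrained weights
axonml hub download resnet18

# Configure Kaggle credentials
axonml kaggle configure

# Search Kaggle datasets
axonml kaggle search "image classification"

# Download a Kaggle dataset
axonml kaggle download user/dataset-name
```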
### Dashboard & Server

- Start dashboard and API server
- Start only the API server
- Check service status
- View logs
- Stop services
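The service lifecycle maps onto the `start`, `status`, `logs`, and `stop` subcommands; the `--api-only` flag is an assumption:

```shell
# Start dashboard and API server
axonml start

# Start only the API server
axonml start --api-only

# Check service status
axonml status

# View service logs
axonml logs

# Stop running services
axonml stop
```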
### Additional Commands

- Create model/dataset bundle
- Extract bundle
- Rename model
- Launch terminal UI
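Illustrative invocations (the `zip` sub-verbs and file names are placeholders):

```shell
# Create a model/dataset bundle
axonml zip create model.safetensors

# Extract a bundle
axonml zip extract bundle.zip

# Rename a model
axonml rename old-name new-name

# Launch the terminal UI
axonml tui
```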
## Configuration

The CLI uses `axonml.toml` for project configuration. A representative file (section and key names shown here are typical; consult a generated project for the exact schema):

```toml
[project]
name = "my-ml-project"
version = "0.1.0"
description = "My machine learning project"

[training]
epochs = 50
batch_size = 32
learning_rate = 0.001
device = "cuda:0"
save_every = 1
output_dir = "./output"
num_workers = 4

[optimizer]
type = "adam"
weight_decay = 0.0001
beta1 = 0.9
beta2 = 0.999

[scheduler]
type = "cosine"
t_max = 50
min_lr = 0.00001
warmup_epochs = 5

[model]
name = "resnet18"
num_classes = 10
dropout = 0.1

[data]
train_path = "./data/train"
val_path = "./data/val"
val_split = 0.1
shuffle = true
augment = true
normalize = true
```
## Global Options

- Enable verbose output
- Suppress all output except errors
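Assuming the conventional clap flag names `--verbose` and `--quiet`:

```shell
# Enable verbose output
axonml --verbose train --config axonml.toml

# Suppress all output except errors
axonml --quiet train --config axonml.toml
```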
## Tests
Run the test suite:
Run integration tests:
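For a standard Cargo project these map to the usual `cargo test` invocations (the integration test target name is an assumption):

```shell
# Run the full test suite
cargo test

# Run integration tests only
cargo test --test integration
```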
## License
Licensed under either of:
- MIT License
- Apache License, Version 2.0
at your option.