# RunNX

A minimal, verifiable ONNX runtime implementation in Rust.

## Overview

> Fast, fearless, and fully verifiable ONNX in Rust.

This project provides a minimal, educational ONNX runtime implementation focused on:

- **Simplicity**: Easy to understand and modify
- **Verifiability**: Clear, testable code with comprehensive documentation
- **Performance**: Efficient operations using ndarray
- **Safety**: Memory-safe Rust implementation
## Features

- ✅ Basic tensor operations (`Add`, `Mul`, `MatMul`, `Conv`, etc.)
- ✅ Model loading and validation
- ✅ Inference execution
- ✅ Error handling and logging
- ✅ Benchmarking support
- ✅ Async support (optional)
- ✅ Command-line runner
- ✅ Comprehensive examples
## Quick Start

### Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
runnx = "0.1.0"
```
### Basic Usage

A typical flow looks like the following (the type names, method signatures, and the `"output"` tensor name here are indicative; see the crate documentation for the exact API):

```rust
use runnx::{Model, Tensor};

// Load a model
let model = Model::from_file("model.onnx")?;

// Create input tensor
let input = Tensor::from_array(ndarray::array![[1.0, 2.0], [3.0, 4.0]]);

// Run inference
let outputs = model.run(&input)?;

// Get results
let result = outputs.get("output").unwrap();
println!("{:?}", result);
```
### Command Line Usage

```bash
# Run inference on a model

# Run with async support
```
## Architecture

The runtime is organized into several key components:

### Core Components

- **Model**: ONNX model representation and loading
- **Graph**: Computational graph with nodes and edges
- **Tensor**: N-dimensional array wrapper with type safety
- **Operators**: Implementations of ONNX operations
- **Runtime**: Execution engine with optimizations
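To make the `Tensor` idea concrete, here is a minimal std-only sketch of a shape-aware tensor wrapper. This is illustrative only, not RunNX's actual `Tensor` type: "type safety" in this sense means a buffer can only be paired with a shape whose element count matches.

```rust
// Minimal shape-aware tensor: a flat buffer plus its dimensions.
#[derive(Debug, Clone, PartialEq)]
struct Tensor {
    data: Vec<f32>,
    shape: Vec<usize>,
}

impl Tensor {
    fn new(data: Vec<f32>, shape: Vec<usize>) -> Result<Self, String> {
        // The buffer length must equal the product of the dimensions.
        let expected: usize = shape.iter().product();
        if data.len() != expected {
            return Err(format!(
                "data length {} does not match shape product {}",
                data.len(),
                expected
            ));
        }
        Ok(Tensor { data, shape })
    }
}

fn main() {
    let t = Tensor::new(vec![1.0, 2.0, 3.0, 4.0], vec![2, 2]).unwrap();
    println!("shape = {:?}", t.shape);
    // A mismatched shape is rejected at construction time.
    assert!(Tensor::new(vec![1.0], vec![2, 2]).is_err());
}
```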
## Supported Operators

| Operator | Status | Notes |
|----------|--------|-------|
| `Add` | ✅ | Element-wise addition |
| `Mul` | ✅ | Element-wise multiplication |
| `MatMul` | ✅ | Matrix multiplication |
| `Conv` | ✅ | 2D convolution |
| `Relu` | ✅ | Rectified Linear Unit |
| `Sigmoid` | ✅ | Sigmoid activation |
| `Reshape` | ✅ | Tensor reshaping |
| `Transpose` | ✅ | Tensor transposition |
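As a reference for the semantics (not the crate's actual implementation, which uses ndarray), `MatMul` on row-major buffers reduces to the classic triple loop:

```rust
// Naive row-major matrix multiply: (m x k) * (k x n) -> (m x n).
fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; m * n];
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0;
            for p in 0..k {
                acc += a[i * k + p] * b[p * n + j];
            }
            out[i * n + j] = acc;
        }
    }
    out
}

fn main() {
    // [[1, 2], [3, 4]] times the 2x2 identity is unchanged.
    let a = vec![1.0, 2.0, 3.0, 4.0];
    let id = vec![1.0, 0.0, 0.0, 1.0];
    assert_eq!(matmul(&a, &id, 2, 2, 2), a);
}
```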
## Examples

### Simple Linear Model

This example builds on `ndarray::Array2`.
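A simple linear model computes `y = W·x + b`. The following std-only sketch shows that computation with plain slices (it stands in for the full example, which uses the crate's types):

```rust
// y = W·x + b for a single dense layer (W is rows x cols, row-major).
fn linear(w: &[f32], b: &[f32], x: &[f32], rows: usize, cols: usize) -> Vec<f32> {
    (0..rows)
        .map(|i| {
            let dot: f32 = (0..cols).map(|j| w[i * cols + j] * x[j]).sum();
            dot + b[i]
        })
        .collect()
}

fn main() {
    let w = vec![1.0, 0.0, 0.0, 1.0]; // 2x2 identity weights
    let b = vec![0.5, -0.5];
    let y = linear(&w, &b, &[2.0, 3.0], 2, 2);
    assert_eq!(y, vec![2.5, 2.5]);
}
```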
### Model Loading and Inference

This example uses `std::collections::HashMap` for named inputs and outputs.
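ONNX inference is a map from named input tensors to named output tensors. The following std-only sketch shows that shape, with a stand-in "model" that applies `Relu` (the tensor and model names are hypothetical):

```rust
use std::collections::HashMap;

// Inference consumes and produces tensors keyed by name; a
// HashMap<String, Vec<f32>> stands in for the tensor map here.
fn run_model(inputs: &HashMap<String, Vec<f32>>) -> HashMap<String, Vec<f32>> {
    // Stand-in "model": apply Relu to the tensor named "input".
    let x = &inputs["input"];
    let y: Vec<f32> = x.iter().map(|v| v.max(0.0)).collect();
    HashMap::from([("output".to_string(), y)])
}

fn main() {
    let inputs = HashMap::from([("input".to_string(), vec![-1.0, 2.0])]);
    let outputs = run_model(&inputs);
    assert_eq!(outputs["output"], vec![0.0, 2.0]);
}
```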
## Performance

The runtime includes benchmarking capabilities:

```bash
# Run benchmarks (HTML reports are written under target/criterion)
cargo bench
```

Example benchmark results:

- Basic operations: ~10-50 µs
- Small model inference: ~100-500 µs
- Medium model inference: ~1-10 ms
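Numbers in this range can be sanity-checked informally with `std::time::Instant` (a Criterion benchmark via `cargo bench` is the rigorous way; this is just a sketch):

```rust
use std::time::Instant;

// Element-wise addition of two equal-length slices.
fn add(a: &[f32], b: &[f32]) -> Vec<f32> {
    a.iter().zip(b).map(|(x, y)| x + y).collect()
}

fn main() {
    let a = vec![1.0f32; 1024];
    let b = vec![2.0f32; 1024];
    let runs = 1_000u32;
    let start = Instant::now();
    let mut sink = 0.0f32;
    for _ in 0..runs {
        sink += add(&a, &b)[0]; // accumulate so the work isn't optimized away
    }
    println!(
        "~{:?} per 1024-element add (sink = {})",
        start.elapsed() / runs,
        sink
    );
}
```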
## Development

### Running Tests

```bash
# Run all tests
cargo test

# Run tests with logging
RUST_LOG=debug cargo test

# Run a specific test
cargo test <test_name>
```
### Building Documentation

```bash
# Build and open documentation
cargo doc --open

# Build with private items
cargo doc --document-private-items
```
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests and documentation
5. Run `cargo test` and `cargo bench`
6. Submit a pull request
## License

This project is licensed under either of:

- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
## Acknowledgments

- ONNX - Open Neural Network Exchange format
- ndarray - Rust's `ndarray` library
- Candle - Inspiration for some design patterns
## Roadmap
- Add more operators (Softmax, BatchNorm, etc.)
- GPU acceleration support
- Quantization support
- Model optimization passes
- WASM compilation target
- Python bindings