# Burn ONNX

Import ONNX models into the Burn deep learning framework.
## Overview
`burn-onnx` converts ONNX models to native Burn Rust code, allowing you to run models from PyTorch,
TensorFlow, and other frameworks on any Burn backend, from WebAssembly to CUDA.
Key features:

- Generates readable, modifiable Rust source code from ONNX models
- Produces `burnpack` weight files for efficient loading
- Works with any Burn backend (CPU, GPU, WebGPU, embedded)
- Supports both `std` and `no_std` environments
- Full opset compliance: all supported operators work across ONNX opset versions 1 through 24
- Graph simplification (enabled by default): attention coalescing, constant folding, constant shape propagation, idempotent-op elimination, identity-element elimination, CSE, dead code elimination, and permute-reshape detection
## Quick Start
Add to your `Cargo.toml`:

```toml
[build-dependencies]
burn-onnx = "0.21"
```
In your `build.rs`:

```rust
use burn_onnx::ModelGen;
```
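A complete `build.rs` typically constructs a `ModelGen`, points it at the ONNX file, and runs it from the build script. The builder methods and the `src/model/my_model.onnx` path below are illustrative rather than verbatim from this crate's API; see the ONNX Import Guide for the exact interface. A minimal sketch:

```rust
use burn_onnx::ModelGen;

fn main() {
    // Generate Rust source (and burnpack weights) from the ONNX model
    // at build time. The input path and output directory are
    // placeholder values for illustration.
    ModelGen::new()
        .input("src/model/my_model.onnx")
        .out_dir("model/")
        .run_from_script();
}
```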
Include the generated code in `src/model/mod.rs`:
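Concretely, assuming the ONNX file was named `my_model.onnx` (a hypothetical name), the `src/model/mod.rs` entry is usually a module that pulls the generated source in from `OUT_DIR`. A sketch:

```rust
// Re-export the code that build.rs generated into OUT_DIR.
// "my_model" matches the (hypothetical) ONNX file name.
pub mod my_model {
    include!(concat!(env!("OUT_DIR"), "/model/my_model.rs"));
}
```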
Then use the model:

```rust
use burn::backend::NdArray;
// The exact module path depends on your ONNX file name; adjust accordingly.
use crate::model::Model;

let model: Model<NdArray> = Model::default();
let output = model.forward(input);
```
For detailed usage instructions, see the ONNX Import Guide in the Burn Book.
## Examples
| Example | Description |
|---|---|
| onnx-inference | Basic ONNX model inference |
| image-classification-web | WebAssembly/WebGPU image classifier |
## Model Validation
We validate burn-onnx against 26 real-world models spanning image classification, object detection, depth estimation, NLP, speech, and generative AI. Each model check verifies the full pipeline: ONNX import, Rust codegen, weight loading, and numerical accuracy against ONNX Runtime reference outputs.
## Supported Operators
See the Supported ONNX Operators table for the complete list of supported operators.
## Contributing
We welcome contributions! Please read the Contributing Guidelines before opening a PR, and the Development Guide for architecture and implementation details.
For questions and discussions, join us on Discord.
## License
Licensed under either the Apache License, Version 2.0 or the MIT license, at your option.