# mnn-rs
Rust bindings for MNN (Mobile Neural Network), Alibaba's efficient and lightweight deep learning inference framework.
## Features
- Safe Rust API: All MNN operations are wrapped in safe Rust types with proper error handling
- Cross-platform: Supports Windows, Linux, macOS, Android, and iOS
- Prebuilt Binaries: Automatically download prebuilt binaries - no CMake or C++ compiler required!
- Multiple Backends: CPU, CUDA, OpenCL, Vulkan, and Metal
- Static/Dynamic Linking: Choose between static or dynamic linking
- Async Support: Optional async API with Tokio integration
- Build from Source: Option to build MNN locally when needed
## Quick Start
### Default Build (Recommended)
By default, the crate will automatically download prebuilt MNN binaries from GitHub Releases:
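A minimal dependency entry is enough; the version number below is a placeholder, so check crates.io for the latest release:

```toml
[dependencies]
mnn-rs = "0.1"
```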
No prerequisites required! The prebuilt binaries are available for:
- Windows (x86_64 MSVC, x86 MSVC)
- Linux (x86_64, aarch64)
- macOS (x86_64 Intel, aarch64 Apple Silicon)
- Android (arm64-v8a, armeabi-v7a)
- iOS (arm64 device, arm64 simulator)
### Custom Prebuilt URL
You can specify a custom download URL for prebuilt binaries:
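For example, via the `MNN_PREBUILT_URL` environment variable listed in the Environment Variables section (the URL here is a placeholder for your mirror or internal artifact server):

```shell
# Point the build script at a custom location for the prebuilt archive
MNN_PREBUILT_URL=https://example.com/mnn-prebuilt.tar.gz cargo build
```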
### Building from Source
If you need to build MNN locally (e.g., for custom build options):
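A sketch of the invocation, using the `build-from-source` feature from the Build Options table (`--no-default-features` disables the default `use-prebuilt` behavior):

```shell
cargo build --no-default-features --features build-from-source
```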
This requires:
- Git (for cloning MNN source)
- CMake (for building MNN)
- C++ compiler (MSVC on Windows, GCC/Clang on Linux/macOS)
### Using Pre-built MNN
If you have a pre-built MNN library, you can use it directly:
```shell
# Set MNN library paths (adjust to your installation)
export MNN_LIB_DIR=/path/to/mnn/lib
export MNN_INCLUDE_DIR=/path/to/mnn/include

# Build without auto-download
cargo build --no-default-features --features system-mnn
```
## Usage
### Basic Inference
```rust
// NOTE: type and method names below are illustrative;
// see https://docs.rs/mnn-rs for the actual API.
use mnn_rs::{Interpreter, ScheduleConfig};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a model and create a session on the default (CPU) backend
    let mut interpreter = Interpreter::from_file("model.mnn")?;
    let session = interpreter.create_session(ScheduleConfig::default())?;

    // Run inference
    interpreter.run_session(&session)?;
    Ok(())
}
```
### Async Inference (with Tokio)
```rust
// NOTE: names below are illustrative; the async API requires the `async`
// feature. See https://docs.rs/mnn-rs for the actual API.
use mnn_rs::{Interpreter, ScheduleConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut interpreter = Interpreter::from_file("model.mnn")?;
    let session = interpreter.create_session(ScheduleConfig::default())?;

    // Run inference without blocking the async runtime
    interpreter.run_session_async(&session).await?;
    Ok(())
}
```
## Cargo Features
### Linking Mode

| Feature | Description |
|---|---|
| `static` (default) | Static link MNN library |
| `dynamic` | Dynamic link MNN library |
### Backend Support

| Feature | Description | Platform |
|---|---|---|
| `cpu` (default) | CPU backend | All |
| `cuda` | NVIDIA GPU backend | Windows, Linux |
| `opencl` | OpenCL GPU backend | Windows, Linux, macOS, Android |
| `vulkan` | Vulkan GPU backend | Windows, Linux, Android |
| `metal` | Metal GPU backend | macOS, iOS |
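For example, to enable a GPU backend, list the feature in your manifest (the version number is a placeholder):

```toml
[dependencies]
mnn-rs = { version = "0.1", features = ["cuda"] }
```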
### Precision Support

| Feature | Description |
|---|---|
| `fp16` | FP16 precision support |
| `int8` | INT8 precision support |
| `quantization` | Quantization support |
### x86 SIMD Optimizations

| Feature | Description | Default (x86_64) | Default (x86) |
|---|---|---|---|
| `sse` | SSE instructions | ON | OFF |
| `avx2` | AVX2 instructions | OFF | OFF |
| `avx512` | AVX512 instructions | OFF | OFF |
### Build Options

| Feature | Default | Description |
|---|---|---|
| `use-prebuilt` | ✓ | Download prebuilt MNN binaries from GitHub Releases |
| `build-from-source` | | Build MNN from source (requires CMake, C++ compiler) |
| `system-mnn` | | Use system-installed MNN library |
| `generate-bindings` | | Generate FFI bindings using bindgen |
### Async Support

| Feature | Description |
|---|---|
| `async` | Enable async API with Tokio |
## Examples
See the `examples/` directory for more usage examples:

- `basic_inference.rs` - Basic inference workflow
- `async_inference.rs` - Async inference with Tokio
- `gpu_backend.rs` - Using GPU backends
```shell
# Run basic inference example
cargo run --example basic_inference
```
## Cross-Compilation
Prebuilt binaries are available for most cross-compilation targets. If prebuilt binaries are not available for your target, enable build-from-source:
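The general pattern combines `--target` with the `build-from-source` feature from the Build Options table (the target triple below is only an example):

```shell
cargo build --target aarch64-unknown-linux-gnu --no-default-features --features build-from-source
```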
### Android
```shell
# Install Android target
rustup target add aarch64-linux-android

# Build for Android (uses prebuilt binaries by default)
cargo build --target aarch64-linux-android

# Or build from source (requires NDK and Ninja)
cargo build --target aarch64-linux-android --no-default-features --features build-from-source
```
Supported Android targets:
- `aarch64-linux-android` (arm64-v8a)
- `armv7-linux-androideabi` (armeabi-v7a)
- `x86_64-linux-android`
- `i686-linux-android`
Building from source requirements:
- Android NDK (set `ANDROID_NDK_HOME` or `NDK_HOME` environment variable)
- Ninja build system (`choco install ninja` on Windows, `brew install ninja` on macOS)
### iOS
```shell
# Install iOS target
rustup target add aarch64-apple-ios

# Build for iOS device (uses prebuilt binaries by default)
cargo build --target aarch64-apple-ios

# Build for iOS simulator (Apple Silicon Macs)
cargo build --target aarch64-apple-ios-sim

# Build for iOS simulator (Intel Macs)
cargo build --target x86_64-apple-ios
```
Supported iOS targets:
- `aarch64-apple-ios` (iOS device)
- `aarch64-apple-ios-sim` (iOS simulator on Apple Silicon)
- `x86_64-apple-ios` (iOS simulator on Intel Macs)
### Linux
```shell
# Standard build
cargo build

# Cross-compile for ARM
cargo build --target aarch64-unknown-linux-gnu
```
### macOS
```shell
# Intel Macs
cargo build --target x86_64-apple-darwin

# Apple Silicon Macs
cargo build --target aarch64-apple-darwin
```
### Windows
```shell
# MSVC (recommended)
cargo build --target x86_64-pc-windows-msvc

# MinGW
cargo build --target x86_64-pc-windows-gnu
```
## Environment Variables

| Variable | Description |
|---|---|
| `MNN_PREBUILT_URL` | Custom URL for prebuilt MNN binaries |
| `MNN_SOURCE_PATH` | Path to MNN source directory (for build-from-source) |
| `MNN_LIB_DIR` | Path to pre-built MNN library |
| `MNN_INCLUDE_DIR` | Path to MNN headers |
| `MNN_DEBUG_BUILD` | Print debug information during build |
| `CUDA_PATH` | CUDA installation path |
| `ANDROID_NDK_HOME` | Android NDK installation path |
## API Documentation

See <https://docs.rs/mnn-rs> for full API documentation.
## MNN Version
This crate is compatible with MNN 2.9.5+.
## License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT License (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Contributions are welcome! Please feel free to submit a Pull Request.
## Acknowledgments
- [MNN](https://github.com/alibaba/MNN) - Alibaba's Mobile Neural Network inference engine
- All contributors who helped with this project