# mnn-rs

Rust bindings for MNN (Mobile Neural Network), Alibaba's efficient and lightweight deep learning inference framework.
## Features
- Safe Rust API: All MNN operations are wrapped in safe Rust types with proper error handling
- Cross-platform: Supports Windows, Linux, macOS, Android, and iOS
- Multiple Backends: CPU, CUDA, OpenCL, Vulkan, and Metal
- Static/Dynamic Linking: Choose between static or dynamic linking
- Async Support: Optional async API with Tokio integration
- Build from Source: Automatically clone and build MNN from GitHub
## Quick Start

### Default Build (Recommended)

By default, the crate will automatically clone and build MNN from GitHub.
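A minimal `Cargo.toml` entry for this, assuming the published crate name `mnn-rs` and that `build-from-source` sits in the default feature set (per the Build Options table below):

```toml
[dependencies]
mnn-rs = "0.1"
```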
This requires:
- Git (for cloning MNN source)
- CMake (for building MNN)
- C++ compiler (MSVC on Windows, GCC/Clang on Linux/macOS)
### Using Pre-built MNN

If you have a pre-built MNN library, you can point the build script at it and disable the auto-build.
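As a sketch, using the `MNN_LIB_DIR` and `MNN_INCLUDE_DIR` variables documented under Environment Variables below (the paths are placeholders):

```shell
# Point the build script at a pre-built MNN (placeholder paths)
export MNN_LIB_DIR=/path/to/mnn/lib
export MNN_INCLUDE_DIR=/path/to/mnn/include

# Build without the default build-from-source feature
cargo build --no-default-features --features cpu,static
```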
## Usage

### Basic Inference

See `examples/basic_inference.rs` for a complete, working program.
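The following is only an illustrative sketch of a typical inference flow: the names `Interpreter`, `ScheduleConfig`, `from_file`, `create_session`, and `run_session` are hypothetical stand-ins modeled on MNN's C++ session API, not the crate's confirmed surface, so treat `examples/basic_inference.rs` as authoritative.

```rust
// Hypothetical API sketch -- these names are illustrative, not the crate's actual API.
use mnn_rs::{Interpreter, ScheduleConfig};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a serialized MNN model from disk.
    let mut interpreter = Interpreter::from_file("model.mnn")?;

    // Pick a backend; `cpu` is the default feature.
    let config = ScheduleConfig::default();
    let session = interpreter.create_session(&config)?;

    // Fill the input tensor, run the graph, and read the output.
    interpreter.run_session(&session)?;
    Ok(())
}
```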
### Async Inference (with Tokio)

With the `async` feature enabled, the same workflow can be driven from Tokio; see `examples/async_inference.rs` for a complete program.
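Again only a sketch, with hypothetical `mnn_rs` names; the one concrete point is that blocking inference work belongs on Tokio's blocking pool, and `tokio::task::spawn_blocking` is a real Tokio API.

```rust
// Hypothetical API sketch -- `Interpreter`/`ScheduleConfig` are illustrative names.
use mnn_rs::{Interpreter, ScheduleConfig};

#[tokio::main]
async fn main() {
    // Inference is CPU-heavy and blocking, so run it on Tokio's blocking pool.
    tokio::task::spawn_blocking(|| {
        let mut interpreter = Interpreter::from_file("model.mnn").expect("load model");
        let session = interpreter
            .create_session(&ScheduleConfig::default())
            .expect("create session");
        interpreter.run_session(&session).expect("run inference");
    })
    .await
    .expect("blocking task panicked");
}
```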
## Feature Flags

### Linking Mode

| Feature | Description |
|---|---|
| `static` (default) | Statically link the MNN library |
| `dynamic` | Dynamically link the MNN library |
### Backend Support

| Feature | Description | Platform |
|---|---|---|
| `cpu` (default) | CPU backend | All |
| `cuda` | NVIDIA GPU backend | Windows, Linux |
| `opencl` | OpenCL GPU backend | Windows, Linux, macOS, Android |
| `vulkan` | Vulkan GPU backend | Windows, Linux, Android |
| `metal` | Metal GPU backend | macOS, iOS |
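Assuming these are ordinary Cargo features, a GPU backend is opted into at build time, for example:

```shell
# Enable the Vulkan backend alongside the defaults
cargo build --features vulkan
```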
### Precision Support

| Feature | Description |
|---|---|
| `fp16` | FP16 precision support |
| `int8` | INT8 precision support |
| `quantization` | Quantization support |
### x86 SIMD Optimizations

| Feature | Description | Default (x86_64) | Default (x86) |
|---|---|---|---|
| `sse` | SSE instructions | ON | OFF |
| `avx2` | AVX2 instructions | OFF | OFF |
| `avx512` | AVX512 instructions | OFF | OFF |
### Build Options

| Feature | Description |
|---|---|
| `build-from-source` | Automatically clone and build MNN from GitHub |
| `system-mnn` | Use a system-installed MNN |
| `generate-bindings` | Generate FFI bindings using bindgen |
### Async Support

| Feature | Description |
|---|---|
| `async` | Enable the async API with Tokio |
## Examples

See the `examples/` directory for more usage examples:

- `basic_inference.rs` - Basic inference workflow
- `async_inference.rs` - Async inference with Tokio
- `gpu_backend.rs` - Using GPU backends
Run any of them with `cargo run --example <name>`.
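Assuming the standard Cargo examples layout, the commands would look like:

```shell
# Run the basic inference example
cargo run --example basic_inference

# Run an example with the build-from-source feature enabled
cargo run --example basic_inference --features build-from-source
```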
## Cross-Compilation

### Windows (x86_64-pc-windows-gnu)

Cross-compiling for this target requires the MinGW-w64 toolchain.
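A sketch for a Debian-style Linux host (package names vary by distribution):

```shell
# Install the MinGW-w64 cross toolchain (Debian/Ubuntu)
sudo apt-get install mingw-w64

# Add the Rust target and cross-compile
rustup target add x86_64-pc-windows-gnu
cargo build --target x86_64-pc-windows-gnu
```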
### Windows (i686-pc-windows-gnu)
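The 32-bit target follows the same pattern (note that, per the SIMD table above, `sse` defaults to OFF on x86):

```shell
rustup target add i686-pc-windows-gnu
cargo build --target i686-pc-windows-gnu
```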
### Android
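A sketch, assuming an installed NDK referenced via `ANDROID_NDK_HOME` (see Environment Variables below); `aarch64-linux-android` is one of several possible Android target triples:

```shell
# Point the build at the Android NDK (placeholder path)
export ANDROID_NDK_HOME=/path/to/android-ndk

# Add an Android target and cross-compile
rustup target add aarch64-linux-android
cargo build --target aarch64-linux-android
```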
## Environment Variables

| Variable | Description |
|---|---|
| `MNN_SOURCE_PATH` | Path to the MNN source directory |
| `MNN_LIB_DIR` | Path to a pre-built MNN library |
| `MNN_INCLUDE_DIR` | Path to the MNN headers |
| `MNN_DEBUG_BUILD` | Print debug information during the build |
| `CUDA_PATH` | CUDA installation path |
| `ANDROID_NDK_HOME` | Android NDK installation path |
## API Documentation

See https://docs.rs/mnn-rs for full API documentation.
## MNN Version
This crate is compatible with MNN 2.9.5+.
## License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT License (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Contributions are welcome! Please feel free to submit a Pull Request.
## Acknowledgments
- MNN - Alibaba's Mobile Neural Network inference engine
- All contributors who helped with this project