# Hodu

A user-friendly ML framework built in Rust. Hodu (호두) is the Korean word for "walnut".
## About Hodu
Hodu is a machine learning library built with user convenience at its core, designed for both rapid prototyping and seamless production deployment, including in embedded environments.
## Core Differentiators
Built on Rust's foundation of memory safety and zero-cost abstractions, Hodu offers unique advantages:
- Hybrid Execution Model: Seamlessly switch between dynamic execution for rapid prototyping and static computation graphs for optimized production deployment
- Memory Safety by Design: Leverage Rust's ownership system to eliminate common ML deployment issues like memory leaks and data races
- Embedded-First Architecture: Full `no_std` support enables ML inference on microcontrollers and resource-constrained devices
- Zero-Cost Abstractions: High-level APIs that compile down to efficient machine code without runtime overhead
## Dual Backend Architecture
- HODU Backend: Pure Rust implementation with `no_std` support for embedded environments
  - CPU operations with SIMD optimization
  - CUDA GPU acceleration (with `cuda` feature)
  - Metal GPU support for macOS (with `metal` feature)
- XLA Backend: JIT compilation via OpenXLA/PJRT (requires `std`)
  - Advanced graph-level optimizations
  - CPU and CUDA device support
  - Production-grade performance for static computation graphs
> [!WARNING]
> This is a personal learning and development project. As such:
>
> - The framework is under active development
> - Features may be experimental or incomplete
> - Functionality is not guaranteed for production use
>
> It is recommended to use the latest version.
> [!CAUTION]
> Current Development Status:
>
> - CUDA GPU support is not yet fully implemented and is under active development
> - Metal GPU support is not yet fully implemented and is under active development
> - SIMD optimizations are not yet implemented and are under active development
## Get started
Here are some examples that demonstrate matrix multiplication using both dynamic execution and static computation graphs.
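As a point of reference for what both examples compute, here is plain 2x2 matrix multiplication written in ordinary Rust, with no Hodu APIs involved:

```rust
// Reference implementation of 2x2 matrix multiplication:
// C[i][j] = sum over k of A[i][k] * B[k][j].
fn matmul(a: &[[f64; 2]; 2], b: &[[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

fn main() {
    let a = [[1.0, 2.0], [3.0, 4.0]];
    let b = [[5.0, 6.0], [7.0, 8.0]];
    println!("{:?}", matmul(&a, &b)); // [[19.0, 22.0], [43.0, 50.0]]
}
```

The framework performs the same arithmetic, but on its own tensor types and, depending on the backend, on CPU or GPU.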
### Dynamic Execution
This example shows direct tensor operations that are executed immediately:
```rust
use hodu::prelude::*;
```
With the `cuda` feature enabled, you can use CUDA in dynamic execution with the following change:

```diff
- set_runtime_device(Device::CPU);
+ set_runtime_device(Device::CUDA(0));
```
### Static Computation Graphs
For more complex workflows or when you need reusable computation graphs, you can use the Builder pattern:
```rust
use hodu::prelude::*;
```
With the `cuda` feature enabled, you can use CUDA in static computation graphs with the following setting:

```diff
  let mut script = builder.build()?;
+ script.set_device(Device::CUDA(0));
```
With the `xla` feature enabled, you can use XLA in static computation graphs with the following setting:

```diff
  let mut script = builder.build()?;
+ script.set_backend(Backend::XLA);
```
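The core idea behind the Builder pattern used here is that operations are recorded first and executed later, so the built graph can be reused many times. A toy, framework-free sketch of that flow (this is not Hodu's actual API, just an illustration of the pattern):

```rust
// Toy illustration of the record-then-execute idea behind static graphs.
// Operations are collected by the builder and only run when the built
// "script" is invoked.
struct GraphBuilder {
    ops: Vec<Box<dyn Fn(f64) -> f64>>,
}

impl GraphBuilder {
    fn new() -> Self {
        Self { ops: Vec::new() }
    }

    // Each call records an operation; nothing is executed yet.
    fn add_op(mut self, op: impl Fn(f64) -> f64 + 'static) -> Self {
        self.ops.push(Box::new(op));
        self
    }

    // Finalize the recorded operations into a reusable callable.
    fn build(self) -> impl Fn(f64) -> f64 {
        move |x| self.ops.iter().fold(x, |acc, op| op(acc))
    }
}

fn main() {
    let script = GraphBuilder::new()
        .add_op(|x| x * 2.0) // recorded, not yet executed
        .add_op(|x| x + 1.0)
        .build();
    // The built script can be run repeatedly with different inputs.
    println!("{}", script(3.0)); // 7
}
```

In a real framework, this separation is what allows graph-level optimization (as the XLA backend performs) before any execution happens.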
## Features

### Default Features

| Feature | Description | Dependencies |
|---|---|---|
| `std` | Standard library support | - |
| `serde` | Serialization/deserialization support | - |
### Optional Features

| Feature | Description | Dependencies | Required Features |
|---|---|---|---|
| `cuda` | NVIDIA CUDA GPU support | CUDA toolkit | - |
| `metal` | Apple Metal GPU support | Metal framework (macOS) | - |
| `xla` | Google XLA compiler backend | XLA libraries | `std` |
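Optional features are enabled through Cargo in the usual way. A minimal `Cargo.toml` sketch, assuming the crate is published under the name `hodu` (the version shown is illustrative, not the current release):

```toml
[dependencies]
# Enable CUDA support (requires the CUDA toolkit on the host).
hodu = { version = "0.1", features = ["cuda"] }

# For embedded (no_std) targets, disable the default `std` feature instead:
# hodu = { version = "0.1", default-features = false }
```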
## Supported platforms
## Docs
## Inspired by
Hodu draws inspiration from the following amazing projects:
- maidenx - The predecessor project to Hodu
- candle - Minimalist ML framework for Rust
- GoMlx - An accelerated machine learning framework for Go
## Credits
Hodu Character Design: Created by Eira