📦 RUNN
A Compact Rust Neural Network Library
runn is a feature-rich, easy-to-use library for building, training, and evaluating feed-forward neural networks in Rust. It supports a wide range of activation functions, optimizers, regularization techniques, fine-grained parallelization, hyperparameter search, and more, all through a user-friendly API.
📖 Table of Contents
- Installation
- Quickstart
- Features
- Examples
- Documentation
- Contributing
- License
- Authors & Acknowledgments
💾 Installation
Add runn to your project:
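```sh
cargo add runn
```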
Or add it manually to your Cargo.toml:
```toml
[dependencies]
runn = "0.1"
```
⚡ Quickstart
runn adopts a fluent interface design pattern for ease of use. Most components are initialized with sensible defaults, which you can override as needed.
Here's how to build and train a simple neural network. The sketch below is illustrative rather than verbatim API: the helper names are placeholders patterned on the Examples section further down.
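```rust
// Hedged sketch, not the crate's verbatim API: the helper constructors
// (dense, relu, softmax, cross_entropy, adam) and the train() call are
// placeholder names; the fluent chain mirrors the Examples section below.
use runn::*;

let mut network = NetworkBuilder::new()
    .layer(dense(4, relu()))         // hidden layer (placeholder helper)
    .layer(dense(3, softmax()))      // output layer (placeholder helper)
    .loss_function(cross_entropy())  // placeholder constructor
    .optimizer(adam())               // placeholder constructor
    .epochs(100)
    .batch_size(4)
    .build()
    .unwrap();

// Train on input/target matrices and print the returned metrics.
let results = network.train(&inputs, &targets).unwrap();
println!("{results}");
```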
Sample Output:
```
Training completed successfully.
Results: Loss:0.0000, Classification Metrics: Accuracy:100.0000, Micro Precision:1.0000, Micro Recall:1.0000, Macro F1 Score:1.0000, Micro F1 Score:1.0000
Metrics by Class:
Class 0: Precision:1.0000 Recall:1.0000 F1 Score:1.0000
```
💾 Save & Load
You can save and load your trained networks (file names and arguments below are illustrative; see the API docs for exact signatures):

```rust
network.save("network.json").unwrap();
let mut loaded_network = load("network.json").unwrap();
let results = loaded_network.predict(&inputs).unwrap();
```
🔍 Hyperparameter Search
Easily run a hyperparameter search over learning rates, batch sizes, and hidden layer sizes. The candidate values below are illustrative:

```rust
// The builder's type name and the candidate values are placeholders.
let network_search = NetworkSearchBuilder::new()
    .network(network)                   // base network to tune
    .parallelize(4)                     // worker threads for the search
    .learning_rates(vec![0.001, 0.01])  // candidate learning rates
    .batch_sizes(vec![16, 32])          // candidate batch sizes
    .hidden_layer(vec![8, 16])          // candidate sizes, first hidden layer
    .hidden_layer(vec![4, 8])           // candidate sizes, second hidden layer
    .export("search_results.csv")       // write results to CSV
    .build()
    .unwrap(); // handle the error properly in production use
let ns = network_search.search(&inputs, &targets);
```
Results are exported to a CSV file, including loss and training metrics.
✨ Features
| Feature | Built-in Support |
|---|---|
| Activations | ELU, GeLU, ReLU, LeakyReLU, Linear, Sigmoid, Softmax, Softplus, Swish, Tanh |
| Optimizers | SGD, Momentum, RMSProp, Adam, AdamW, AMSGrad |
| Loss Functions | Cross-Entropy, Mean Squared Error |
| Regularization | L1, L2, Dropout |
| Schedulers | Exponential, Step |
| Early Stopping | Loss, Accuracy, R2 |
| Save & Load | JSON & MessagePack |
| Logging Summary | TensorBoard |
| Hyperparameter Search | Layer size, batch size, learning rate |
| Normalization | MinMax, Zscore |
| Parallelization | Forward & Backward runs for batch groups |
📝 Examples
With runn, you can build complex networks tailored to your needs:
```rust
// The argument values below are illustrative placeholders; the chain of
// builder calls and the trailing comments follow the original example.
let network = NetworkBuilder::new() // builder name is assumed
    .layer(/* input layer */)
    .layer(/* hidden layer */)
    .layer(/* output layer */)
    .loss_function(/* … */)      // loss function with epsilon
    .optimizer(/* … */)
    .seed(42)                    // seed for reproducibility
    .early_stopper(/* … */)
    .regularization(/* … */)     // L2 regularization
    .regularization(/* … */)     // Dropout regularization
    .epochs(500)
    .batch_size(32)
    .batch_group_size(8)         // number of batches to process as a group
    .parallelize(4)              // number of threads used to process batch groups in parallel
    .summary(/* … */)            // TensorBoard summary
    .normalize_input(/* … */)    // normalize the input data
    .build()
    .unwrap(); // handle the error properly in production use
```
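Note how `batch_size`, `batch_group_size`, and `parallelize` interact: batches are collected into groups, and the forward and backward passes for a group's batches run on the configured number of threads (see the Parallelization row in the feature table above).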
🛠️ Utility Methods
runn provides handy utility functions; a short usage sketch follows the list:

- `helper::one_hot_encode` converts categorical labels into one-hot encoded vectors: `let training_targets = one_hot_encode(…);`
- `helper::stratified_split` performs a stratified split for matrix inputs and single-column targets: `let (…) = stratified_split(…);`
- `helper::random_split` performs a random split for matrix inputs and multi-column targets: `let (…) = random_split(…);`
- `helper::pretty_compare_matrices` pretty-prints three matrices side by side as an ASCII-art comparison: `pretty_compare_matrices(…);`
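For example, encoding labels and splitting a dataset might look like the following. The signatures here are assumptions for illustration (argument order and return tuple may differ); check the API docs:

```rust
use runn::helper::{one_hot_encode, random_split};

// Assumed signatures, for illustration only:
// - one_hot_encode: turns class labels into a one-hot target matrix
// - random_split: splits inputs/targets, holding out a test fraction
let training_targets = one_hot_encode(&labels);
let (train_x, test_x, train_y, test_y) =
    random_split(&inputs, &training_targets, 0.2); // 20% test split (assumed)
```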
📦 Example Projects
| Example | Description | Train | Hyperparameter Search |
|---|---|---|---|
| triplets | Multi-class classification | `cargo run --example triplets` | `cargo run --example triplets -- -search` |
| iris | Multi-class classification | `cargo run --example iris` | `cargo run --example iris -- -search` |
| wine | Multi-class classification | `cargo run --example wine` | `cargo run --example wine -- -search` |
| energy efficiency | Regression | `cargo run --example energy_efficiency` | `cargo run --example energy_efficiency -- -search` |
📚 Documentation
- Full API docs on docs.rs/runn
- Generate local docs:
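  ```sh
  cargo doc --open
  ```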
🤝 Contributing
Contributions are welcome! Please see CONTRIBUTING.md for:
- Issue templates and PR guidelines
- Code style (rustfmt, Clippy) and testing (`cargo test`)
📄 License
Dual-licensed under MIT OR Apache-2.0. See LICENSE-MIT and LICENSE-APACHE.