neurons is a neural network library written from scratch in Rust. It provides a flexible and efficient way to build, train, and evaluate neural networks. The library is designed to be modular, allowing for easy customization of network architectures, activation functions, objective functions, and optimization techniques.
Features
Modular design
- Ready-to-use dense, convolutional and maxpool layers.
- Inferred input shapes when adding layers.
- Easily specify activation functions, biases, and dropout.
- Customizable objective functions and optimization techniques.
Fast
- Leveraging Rust's performance and parallelization capabilities.
Everything built from scratch
- Only dependencies are `rayon` and `plotters`, where `plotters` is only used in some of the examples (and is thus optional).
Various examples showcasing the capabilities
- Located in the `examples/` directory.
The package
The package is divided into separate modules, each containing a different part of the library, with everything connected through the `network.rs` module.
Core
tensor.rs
Describes the custom tensor struct and its operations. A tensor comes in one of four types:
- `Single`: One-dimensional data (`Vec<_>`).
- `Double`: Two-dimensional data (`Vec<Vec<_>>`).
- `Triple`: Three-dimensional data (`Vec<Vec<Vec<_>>>`).
- `Quadruple`: Four-dimensional data (`Vec<Vec<Vec<Vec<_>>>>`).

Each type follows the same pattern of operations, just with increasing dimensionality. Every tensor thus carries information about both its shape and its data. Wrapping the data this way makes it easy to support dynamic shapes and types throughout the network.
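A minimal sketch of what such a wrapper could look like, assuming hypothetical variant and field names (the actual definitions in `tensor.rs` may differ); it only illustrates the idea of pairing the nested data with its shape:

```rust
/// Illustrative sketch only; variant and field names are assumptions,
/// not the crate's actual definitions. Element type fixed to f32 here.
#[derive(Clone, Debug)]
pub enum Data {
    Single(Vec<f32>),                   // One-dimensional data.
    Double(Vec<Vec<f32>>),              // Two-dimensional data.
    Triple(Vec<Vec<Vec<f32>>>),         // Three-dimensional data.
    Quadruple(Vec<Vec<Vec<Vec<f32>>>>), // Four-dimensional data.
}

#[derive(Clone, Debug)]
pub struct Tensor {
    pub shape: Vec<usize>, // E.g. [28, 28] for a Double tensor.
    pub data: Data,
}
```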
random.rs
Functionality for random number generation. Used when initializing the weights of the network.
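Since the library avoids external crates like `rand`, a small pseudo-random generator is enough for weight initialization. A minimal sketch under that assumption, using xorshift (the actual algorithm in `random.rs` may differ):

```rust
/// Illustrative from-scratch generator; not the crate's actual implementation.
struct Generator {
    state: u64, // Must be non-zero for xorshift.
}

impl Generator {
    fn new(seed: u64) -> Self {
        Generator { state: seed.max(1) }
    }

    /// One xorshift64 step, returning a pseudo-random u64.
    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }

    /// Uniform float in [low, high), e.g. for initializing weights.
    fn uniform(&mut self, low: f32, high: f32) -> f32 {
        let unit = (self.next_u64() >> 40) as f32 / (1u64 << 24) as f32;
        low + unit * (high - low)
    }
}

fn main() {
    let mut rng = Generator::new(42);
    // A 4x3 weight matrix with small random values.
    let weights: Vec<Vec<f32>> = (0..4)
        .map(|_| (0..3).map(|_| rng.uniform(-0.1, 0.1)).collect())
        .collect();
    println!("{:?}", weights);
}
```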
network.rs
Describes the network struct and its operations. The network contains a vector of layers, an optimizer, and an objective function. The network is built layer by layer, and then trained using the `learn` function. See the quickstart below or the `examples/` directory for more information.
Layers
dense.rs
Describes the dense layer and its operations.
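Conceptually, each output neuron of a dense layer is a weighted sum of the inputs plus a bias, passed through an activation. A minimal sketch of that forward pass (not the crate's actual code):

```rust
/// Illustrative dense forward pass: output[j] = sum_i weights[j][i] * input[i] + bias[j].
/// Not the crate's actual implementation.
fn dense_forward(input: &[f32], weights: &[Vec<f32>], bias: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .zip(bias)
        .map(|(row, b)| row.iter().zip(input).map(|(w, x)| w * x).sum::<f32>() + b)
        .collect()
}

fn main() {
    let input = vec![1.0, 2.0];
    let weights = vec![vec![0.5, -0.25], vec![1.0, 1.0]];
    let bias = vec![0.1, 0.0];
    assert_eq!(dense_forward(&input, &weights, &bias), vec![0.1, 3.0]);
}
```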
convolution.rs
Describes the convolutional layer and its operations. If the input is a tensor of shape `Single`, the layer will automatically reshape it into a `Triple` tensor.
maxpool.rs
Describes the maxpool layer and its operations. If the input is a tensor of shape `Single`, the layer will automatically reshape it into a `Triple` tensor.
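The idea behind that automatic reshape, sketched as a standalone function (the crate does this internally on its `tensor::Tensor` type, so names and details here are illustrative): a flat vector is reinterpreted as a one-channel image of a given height and width.

```rust
/// Illustrative sketch: turn flat `Single`-style data into `Triple`-style
/// (channels, height, width) data with a single channel. Not the crate's code.
fn single_to_triple(flat: &[f32], height: usize, width: usize) -> Vec<Vec<Vec<f32>>> {
    assert_eq!(flat.len(), height * width, "shape mismatch");
    let image: Vec<Vec<f32>> = flat.chunks(width).map(|row| row.to_vec()).collect();
    vec![image] // One channel.
}

fn main() {
    let flat: Vec<f32> = (0..6).map(|i| i as f32).collect();
    let triple = single_to_triple(&flat, 2, 3);
    assert_eq!(triple, vec![vec![vec![0.0, 1.0, 2.0], vec![3.0, 4.0, 5.0]]]);
}
```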
Functions
activation.rs
Contains the available activation functions.
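For reference, the activations tracked under Progress (Linear, Sigmoid, Tanh, ReLU, LeakyReLU, Softmax) follow their standard definitions; a few of them sketched as free functions, not as the crate's actual API:

```rust
/// Standard definitions, shown for reference only; not the crate's API.
fn relu(x: f32) -> f32 {
    x.max(0.0)
}

fn leaky_relu(x: f32, alpha: f32) -> f32 {
    if x >= 0.0 { x } else { alpha * x }
}

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// Softmax over a slice: exponentiate (shifted by the max for stability), then normalize.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    assert_eq!(relu(-1.0), 0.0);
    assert_eq!(leaky_relu(-2.0, 0.1), -0.2);
    assert!((sigmoid(0.0) - 0.5).abs() < 1e-6);
    assert!((softmax(&[1.0, 2.0, 3.0]).iter().sum::<f32>() - 1.0).abs() < 1e-6);
}
```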
objective.rs
Contains the available objective functions.
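As a reference point, mean squared error (one of the listed objectives) and its gradient with respect to the predictions, written as free functions using the standard formulas; this is not the crate's API:

```rust
/// MSE: (1/n) * sum_i (prediction_i - target_i)^2. Standard formula, for reference only.
fn mse(predictions: &[f32], targets: &[f32]) -> f32 {
    let n = predictions.len() as f32;
    predictions
        .iter()
        .zip(targets)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f32>()
        / n
}

/// Gradient of MSE w.r.t. each prediction: (2/n) * (prediction_i - target_i).
fn mse_gradient(predictions: &[f32], targets: &[f32]) -> Vec<f32> {
    let n = predictions.len() as f32;
    predictions
        .iter()
        .zip(targets)
        .map(|(p, t)| 2.0 * (p - t) / n)
        .collect()
}

fn main() {
    let (predictions, targets) = (vec![1.0, 2.0], vec![0.0, 2.0]);
    assert_eq!(mse(&predictions, &targets), 0.5);
    assert_eq!(mse_gradient(&predictions, &targets), vec![1.0, 0.0]);
}
```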
optimizer.rs
Contains the available optimization techniques.
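For intuition, the update rule behind SGD with momentum (SGDM in the Progress list), applied to flat weight and velocity buffers; this is the standard formulation, not the crate's API:

```rust
/// SGDM update: velocity = momentum * velocity + gradient,
/// then weight -= learning_rate * velocity. Standard rule, for reference only.
fn sgdm_step(
    weights: &mut [f32],
    velocity: &mut [f32],
    gradients: &[f32],
    learning_rate: f32,
    momentum: f32,
) {
    for ((w, v), g) in weights.iter_mut().zip(velocity.iter_mut()).zip(gradients) {
        *v = momentum * *v + g;
        *w -= learning_rate * *v;
    }
}

fn main() {
    let mut weights = vec![1.0_f32, -1.0];
    let mut velocity = vec![0.0_f32; 2];
    let gradients = vec![0.5, -0.5];
    sgdm_step(&mut weights, &mut velocity, &gradients, 0.1, 0.9);
    // First step: velocity == gradient, so weights move by -0.1 * gradient.
    assert!((weights[0] - 0.95).abs() < 1e-6);
    assert!((weights[1] + 0.95).abs() < 1e-6);
}
```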
Quickstart
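A minimal end-to-end sketch of how a network might be assembled and trained. Only `network::Network`, `set_optimizer`, and `learn` are named elsewhere in this README; the layer-building methods, enum variants, and exact signatures below are illustrative assumptions, so use the `examples/` directory as the authoritative reference.

```rust
// Hypothetical API sketch: method names and signatures are assumptions,
// not the crate's documented interface. See `examples/` for working code.
use neurons::{activation, network, objective, optimizer, tensor};

fn main() {
    // Input shape is given once; later layer shapes are inferred.
    let mut network = network::Network::new(tensor::Shape::Single(4));

    // A hidden ReLU layer and a softmax output layer (units, activation, bias, dropout).
    network.dense(16, activation::Activation::ReLU, true, None);
    network.dense(3, activation::Activation::Softmax, false, None);

    // Optimizer and objective function (names illustrative).
    network.set_optimizer(optimizer::SGD::create(0.01));
    network.set_objective(objective::Objective::CrossEntropy, None);

    // Train on `tensor::Tensor` inputs and targets, e.g. for 25 epochs:
    // let _loss = network.learn(&inputs, &targets, 25);
}
```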
Releases
Add skeleton for feedback block structure.
Missing correct handling of backward pass.
How should the optimizer be handled (wrt. buffer, etc.)?
Before:
network.set_optimizer;
Now:
network.set_optimizer;
Layers now automatically reshape input tensors to the correct shape.
I.e., your network could be conv->dense->conv etc.
Earlier versions only allowed conv/maxpool->dense connections.
Note: While this is now possible, some testing proved this to be suboptimal in terms of performance.
Combines operations into a single loop instead of repeatedly iterating over the `tensor::Tensor`s.
Benchmarking `examples/example_benchmark.rs` (mnist version):
v2.0.1: 16.504570304s (1.05x speedup)
v2.0.0: 17.268632412s
Weight updates are now batched correctly.
See `network::Network::learn` for details.
Benchmarking `examples/example_benchmark.rs` (mnist version):
batched (128): 17.268632412s (4.82x speedup)
unbatched (1): 83.347593292s
Optimizer step more intuitive and easy to read.
Using `tensor::Tensor` instead of manually handling vectors.
Network of convolutional and dense layers works.
Batched training (`network::Network::learn`).
Parallelization of batches (`rayon`).
Benchmarking `examples/example_benchmark.rs` (iris version):
v0.3.0: 0.318811179s (6.95x speedup)
v0.2.2: 2.218362758s
Convolutional layer.
Improved documentation.
Initial feedback connection implementation.
Improved documentation.
Custom tensor struct.
Unit tests.
Dense feedforward network.
Activation functions.
Objective functions.
Optimization techniques.
Progress
- Dense
- Convolutional
- Forward pass
- Padding
- Stride
- Dilation
- Backward pass
- Padding
- Stride
- Dilation
- Max pooling
- Feedback
- Forward pass
- Linear
- Sigmoid
- Tanh
- ReLU
- LeakyReLU
- Softmax
- AE
- MAE
- MSE
- RMSE
- CrossEntropy
- BinaryCrossEntropy
- KLDivergence
- SGD
- SGDM
- Adam
- AdamW
- RMSprop
- Minibatch
- Feedforward (dubbed `Network`)
- Recurrent
- Skip connections
- Feedback blocks
- Selectable gradient accumulation
- Handle backward pass of feedback block in `network::Network`
- Dropout
- Early stopping
- Batch normalization
- Parallelization of batches
- Other parallelization?
- NOTE: Slowdown when parallelizing everything (commit: 1f94cea56630a46d40755af5da20714bc0357146).
- Unit tests
- Thorough testing of activation functions
- Thorough testing of objective functions
- Thorough testing of optimization techniques
- Thorough testing of feedback blocks
- Integration tests
- Network forward pass
- Network backward pass
- Network training (i.e., weight updates)
- XOR
- Iris
- MLP
- MLP + Feedback
- Linear regression
- MLP
- MLP + Feedback
- Classification (TBA)
- MLP
- MLP + Feedback
- MNIST
- MLP
- MLP + Feedback
- CNN
- CNN + Feedback
- CIFAR-10
- CNN
- CNN + Feedback
- Documentation
- Custom random weight initialization
- Custom tensor type
- Plotting
- Data from file
- General data loading functionality
- Custom icon/image for documentation
- Custom stylesheet for documentation
- Add number of parameters when displaying `Network`
- Network type specification (e.g. f32, f64)
- Serialisation (saving and loading)
- Single layer weights
- Entire network weights
- Custom (binary) file format, with header explaining contents
- Logging