neurons 0.2.0

Modular neural networks in Rust.

Create modular neural networks in Rust with ease! Intended for educational purposes; operations are not thoroughly optimized.


Quickstart

use neurons::tensor::Shape;
use neurons::network::Network;
use neurons::activation::Activation;
use neurons::optimizer;
use neurons::objective;

fn main() {

    // New feedforward network with four inputs
    let mut network = Network::new(Shape::Dense(4));

    // Dense(output, activation, bias, Some(dropout))
    network.dense(100, Activation::ReLU, false, None);

    // Convolution(filters, kernel, stride, padding, activation, bias, Some(dropout))
    network.convolution(5, (5, 5), (1, 1), (1, 1), Activation::ReLU, false, Some(0.1));
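    // Assuming the standard convolution arithmetic, each spatial dimension
    // becomes (input + 2 * padding - kernel) / stride + 1, so this (5, 5)
    // kernel with stride (1, 1) and padding (1, 1) shrinks each side by 2.
    // Some(0.1) enables dropout on this layer with probability 0.1.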

    // Dense(output, activation, bias, Some(dropout))
    network.dense(10, Activation::Softmax, false, None);

    network.set_optimizer(
        optimizer::Optimizer::AdamW(
            optimizer::AdamW {
                learning_rate: 0.001,
                beta1: 0.9,
                beta2: 0.999,
                epsilon: 1e-8,
                decay: 0.01,

                momentum: vec![],           // To be filled by the network
                velocity: vec![],           // To be filled by the network
            }
        )
    );
    network.set_objective(
        objective::Objective::MSE,          // Objective function
        Some((-1f32, 1f32))                 // Gradient clipping
    );
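    // The (-1, 1) tuple presumably clamps each gradient component into
    // that range before the update, guarding against exploding gradients.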

    println!("{}", network);

    let (x, y) = todo!();                   // Load data here
    let epochs = 1000;
    let loss = network.learn(x, y, epochs); // Train the network
}

Examples can be found in the examples directory.
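
The AdamW settings in the quickstart follow the usual decoupled weight-decay formulation. As a rough guide to what a single update step does, here is a minimal plain-Rust sketch of the textbook AdamW rule, reusing the hyperparameter names from the snippet above; the step counter for bias correction is an assumption here, and the crate's actual implementation may differ:

// Textbook AdamW update for one parameter vector; illustrative only.
fn adamw_step(
    weights: &mut [f32], gradients: &[f32],
    momentum: &mut [f32], velocity: &mut [f32],
    learning_rate: f32, beta1: f32, beta2: f32,
    epsilon: f32, decay: f32, step: i32,
) {
    for i in 0..weights.len() {
        // Exponential moving averages of the gradient and its square.
        momentum[i] = beta1 * momentum[i] + (1.0 - beta1) * gradients[i];
        velocity[i] = beta2 * velocity[i] + (1.0 - beta2) * gradients[i] * gradients[i];

        // Bias-corrected estimates.
        let m_hat = momentum[i] / (1.0 - beta1.powi(step));
        let v_hat = velocity[i] / (1.0 - beta2.powi(step));

        // Decoupled weight decay: applied directly to the weight rather
        // than folded into the gradient (the difference from plain Adam).
        weights[i] -= learning_rate * (m_hat / (v_hat.sqrt() + epsilon) + decay * weights[i]);
    }
}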


Progress

  • Layer types

    • Dense
    • Convolutional
      • Forward pass
        • Padding
        • Stride
        • Dilation
      • Backward pass
        • Padding
        • Stride
        • Dilation
  • Activation functions (see the sketch after this list)

    • Linear
    • Sigmoid
    • Tanh
    • ReLU
    • LeakyReLU
    • Softmax
  • Objective functions (see the sketch after this list)

    • AE
    • MAE
    • MSE
    • RMSE
    • CrossEntropy
    • BinaryCrossEntropy
    • KLDivergence
  • Optimization techniques

    • SGD
    • SGDM
    • Adam
    • AdamW
    • RMSprop
    • Minibatch
  • Architecture

    • Feedforward (dubbed Network)
    • Convolutional
    • Recurrent
    • Feedback connections
      • Dense to Dense
      • Dense to Convolutional
      • Convolutional to Dense
      • Convolutional to Convolutional
  • Regularization

    • Dropout
    • Batch normalization
    • Early stopping
  • Parallelization

    • Multi-threading
  • Testing

    • Unit tests
      • Thorough testing of algebraic operations
      • Thorough testing of activation functions
      • Thorough testing of objective functions
      • Thorough testing of optimization techniques
      • Thorough testing of feedback scaling (w.r.t. gradients)
    • Integration tests
  • Examples

    • XOR
    • Iris
      • MLP
      • MLP + Feedback
    • Linear regression
      • MLP
      • MLP + Feedback
    • Classification (TBA)
      • MLP
      • MLP + Feedback
    • MNIST
      • MLP
      • MLP + Feedback
      • CNN
      • CNN + Feedback
    • CIFAR-10
      • CNN
      • CNN + Feedback
  • Other

    • Documentation
    • Custom random weight initialization
    • Custom tensor type
    • Plotting
    • Data from file
      • General data loading functionality
    • Custom icon/image for documentation
    • Custom stylesheet for documentation
    • Type conversion (e.g. f32, f64)
    • Network type specification (e.g. f32, f64)
    • Saving and loading
      • Single layer weights
      • Entire network weights
      • Custom (binary) file format, with header explaining contents
    • Logging
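
For reference, here are two of the functions listed above (the Softmax activation and the MSE objective) in their standard textbook form. This is a minimal, self-contained sketch of the usual definitions, not necessarily how this crate implements them:

// Standard softmax: exponentiate (shifted by the max for numerical
// stability) and normalize so the outputs sum to one.
fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

// Mean squared error between predictions and targets.
fn mse(predictions: &[f32], targets: &[f32]) -> f32 {
    let n = predictions.len() as f32;
    predictions.iter().zip(targets)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f32>() / n
}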
