neurons 0.1.2

Modular neural networks in Rust.

Create modular neural networks in Rust with ease! Intended for educational purposes; vector operations are not thoroughly optimized.


Quickstart

use neurons::activation::Activation;
use neurons::feedforward::Feedforward;
use neurons::tensor::Shape;
use neurons::{objective, optimizer};

fn main() {
  
    // New feedforward network with four inputs
    let mut network = Feedforward::new(Shape::Dense(4));

    // Dense(output, activation, bias, Some(dropout))
    network.dense(100, Activation::ReLU, false, None);
  
    // Convolution(filters, kernel, stride, padding, activation, bias, Some(dropout))
    network.convolution(5, (5, 5), (1, 1), (1, 1), Activation::ReLU, false, Some(0.1));
  
    // Dense(output, activation, bias, Some(dropout))
    network.dense(10, Activation::Softmax, false, None);
    
    network.set_optimizer(
        optimizer::Optimizer::AdamW(
            optimizer::AdamW {
                learning_rate: 0.001,
                beta1: 0.9,
                beta2: 0.999,
                epsilon: 1e-8,
                decay: 0.01,

                momentum: vec![],           // To be filled by the network
                velocity: vec![],           // To be filled by the network
            }
        )
    );
    network.set_objective(
        objective::Objective::MSE,          // Objective function
        Some((-1f32, 1f32))                 // Gradient clipping
    );
  
    println!("{}", network);
  
    let (x, y) = todo!();                   // Load data here
    let epochs = 1000;
    let loss = network.learn(x, y, epochs); // Train the network
}
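
To make the optimizer and objective settings concrete, here is a standalone sketch of a single AdamW update step using the same hyperparameters and the (-1, 1) gradient clipping range as above. It only illustrates the update rule; the function adamw_step and its signature are hypothetical and do not mirror the crate's internals.

fn adamw_step(
    weights: &mut [f32],
    gradients: &[f32],
    momentum: &mut [f32],                   // First-moment estimate (m)
    velocity: &mut [f32],                   // Second-moment estimate (v)
    step: i32,                              // 1-based iteration count (t)
) {
    // Hyperparameters matching the quickstart above.
    let (lr, beta1, beta2, epsilon, decay) =
        (0.001_f32, 0.9_f32, 0.999_f32, 1e-8_f32, 0.01_f32);

    for i in 0..weights.len() {
        // Clip the gradient to (-1, 1), as configured via set_objective.
        let g = gradients[i].clamp(-1.0, 1.0);

        // Exponential moving averages of the gradient and its square.
        momentum[i] = beta1 * momentum[i] + (1.0 - beta1) * g;
        velocity[i] = beta2 * velocity[i] + (1.0 - beta2) * g * g;

        // Bias correction for the zero-initialized moments.
        let m_hat = momentum[i] / (1.0 - beta1.powi(step));
        let v_hat = velocity[i] / (1.0 - beta2.powi(step));

        // Decoupled weight decay: applied to the weight itself, not the gradient.
        weights[i] -= lr * (m_hat / (v_hat.sqrt() + epsilon) + decay * weights[i]);
    }
}

The decay term acting directly on the weights, rather than being folded into the gradient, is what distinguishes AdamW from Adam with L2 regularization.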

Examples can be found in the examples directory.


Progress

  • Layer types

    • Dense
    • Convolutional
      • Forward pass
        • Padding
        • Stride
        • Dilation
      • Backward pass
        • Padding
        • Stride
        • Dilation
    • Feedback
  • Activation functions

    • Linear
    • Sigmoid
    • Tanh
    • ReLU
    • LeakyReLU
    • Softmax
  • Objective functions

    • AE
    • MAE
    • MSE
    • RMSE
    • CrossEntropy
    • BinaryCrossEntropy
    • KLDivergence
  • Optimization techniques

    • SGD
    • SGDM
    • Adam
    • AdamW
    • RMSprop
    • Minibatch
  • Architecture

    • Feedforward
    • Convolutional
    • Recurrent
    • Feedback
  • Regularization

    • Dropout (see the sketch after this list)
    • Batch normalization
    • Early stopping
  • Parallelization

    • Multi-threading
  • Testing

    • Unit tests
      • Thorough testing of activation functions
      • Thorough testing of objective functions
      • Thorough testing of optimization techniques
    • Integration tests
  • Other

    • Documentation
    • Custom random weight initialization
    • Custom tensor type
    • Plotting
    • Data from file
      • General data loading functionality
    • Custom icon/image for documentation
    • Type conversion (e.g. f32, f64)
    • Network type specification (e.g. f32, f64)
    • Saving and loading
    • Logging
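
The dropout entry above refers to the standard inverted-dropout scheme. The sketch below is a hypothetical, self-contained illustration of that technique, not the crate's implementation: during training each activation is zeroed with probability rate and the survivors are rescaled so the expected activation is unchanged, while at inference the layer is a no-op.

fn dropout(activations: &mut [f32], rate: f32, training: bool, random: &mut impl FnMut() -> f32) {
    // At inference time, or with a zero rate, dropout is the identity.
    if !training || rate <= 0.0 {
        return;
    }
    // Inverted scaling keeps the expected activation constant.
    let scale = 1.0 / (1.0 - rate);
    for a in activations.iter_mut() {
        if random() < rate {
            *a = 0.0;                       // Unit dropped for this forward pass
        } else {
            *a *= scale;
        }
    }
}

Here random is a caller-supplied closure returning uniform samples in [0, 1); it is injected so the sketch stays dependency-free.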
