axonml-nn - Neural Network Module Library
Provides neural network layers, activation functions, loss functions, and utilities for building deep learning models in Axonml.
§Key Components
- Module trait: Core interface for all neural network modules (see the sketch after this list)
- Parameter: Wrapper for learnable parameters
- Sequential: Container for chaining modules
- Layers: Linear, Conv, RNN, LSTM, Attention, etc.
- Activations: ReLU, Sigmoid, Tanh, GELU, etc.
- Loss Functions: MSE, CrossEntropy, BCE, etc.
- Initialization: Xavier, Kaiming, orthogonal, etc.
- Functional API: Stateless operations
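For example, a custom layer joins the library by implementing the Module trait. The trait's exact signature is not reproduced on this page, so the following is a minimal sketch assuming a tensor-in, tensor-out forward method and a parameters accessor; Tensor, the method names, and the operator overload used here are assumptions for illustration.

use axonml_nn::prelude::*;

// Hypothetical custom module: scales its input by one learnable weight.
// The real `Module` trait may differ; `Tensor` and these method names
// are assumptions.
struct Scale {
    // Wrapped in `Parameter` so the training loop can discover it.
    weight: Parameter,
}

impl Module for Scale {
    fn forward(&self, input: &Tensor) -> Tensor {
        // Element-wise multiply by the learnable weight (assumes the
        // crate provides this operator between tensors and parameters).
        input * &self.weight
    }

    fn parameters(&self) -> Vec<Parameter> {
        vec![self.weight.clone()]
    }
}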
§Example
use axonml_nn::prelude::*;

// Build a simple MLP: 784 -> 256 -> 10 with a ReLU in between
let model = Sequential::new()
    .add(Linear::new(784, 256))
    .add(ReLU)
    .add(Linear::new(256, 10));

// Forward pass (`input` and `target` are tensors assumed to be
// defined elsewhere)
let output = model.forward(&input);

// Compute loss
let loss = CrossEntropyLoss::new().compute(&output, &target);

// Backward pass
loss.backward();

@version 0.1.0
@author AutomataNexus Development Team
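The initializers re-exported below can also be used to create parameter tensors directly. A minimal sketch, assuming each initializer takes a shape slice and returns a tensor; the exact signatures are not documented on this page:

use axonml_nn::prelude::*;

// Kaiming/He initialization is the usual choice ahead of ReLU layers;
// Xavier/Glorot suits tanh or sigmoid. Shapes match the 784 -> 256
// Linear above; the `&[out, in]` layout is an assumption.
let weight = kaiming_uniform(&[256, 784]);
let bias = zeros(&[256]);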
Re-exports§
pub use module::{Module, ModuleList};
pub use parameter::Parameter;
pub use sequential::Sequential;
pub use layers::{AdaptiveAvgPool2d, AvgPool1d, AvgPool2d, BatchNorm1d, BatchNorm2d, Conv1d, Conv2d, Dropout, Embedding, GRUCell, GroupNorm, InstanceNorm2d, LSTMCell, LayerNorm, Linear, MaxPool1d, MaxPool2d, MultiHeadAttention, RNNCell, GRU, LSTM, RNN};
pub use activation::{Identity, LeakyReLU, LogSoftmax, ReLU, SiLU, Sigmoid, Softmax, Tanh, ELU, GELU};
pub use loss::{BCELoss, BCEWithLogitsLoss, CrossEntropyLoss, L1Loss, MSELoss, NLLLoss, Reduction, SmoothL1Loss};
pub use init::{constant, diag, eye, glorot_normal, glorot_uniform, he_normal, he_uniform, kaiming_normal, kaiming_uniform, normal, ones, orthogonal, randn, sparse, uniform, uniform_range, xavier_normal, xavier_uniform, zeros, InitMode};
Modules§
- activation: Activation Modules - Non-linear Activation Functions
- functional: Functional API - Stateless Neural Network Operations (see the sketch after this list)
- init: Weight Initialization - Parameter Initialization Strategies
- layers: Neural Network Layers
- loss: Loss Functions - Training Objectives
- module: Module Trait - Neural Network Module Interface
- parameter: Parameter - Learnable Parameter Wrapper
- prelude: Common imports for neural network development
- sequential: Sequential - Sequential Container for Modules
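The functional module noted above offers stateless counterparts to the activation modules. A rough sketch of the contrast, assuming functional::relu exists with a tensor-in, tensor-out signature:

use axonml_nn::functional;
use axonml_nn::prelude::*;

// `x` is assumed to be a tensor defined elsewhere.

// Module form: a unit struct that composes into Sequential containers.
let y1 = ReLU.forward(&x);

// Functional form: a plain call, convenient inside a custom Module's
// forward(). `functional::relu` and its signature are assumptions.
let y2 = functional::relu(&x);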