Crate axonml_nn

axonml-nn - Neural Network Module Library

§File

crates/axonml-nn/src/lib.rs

§Author

Andrew Jewell Sr - AutomataNexus

§Updated

March 8, 2026

§Disclaimer

Use at your own risk. This software is provided “as is”, without warranty of any kind, express or implied. The author and AutomataNexus shall not be held liable for any damages arising from the use of this software.
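
The crate surface mirrors torch.nn: containers (Sequential, ModuleList), layers, activations, losses, and init helpers. Below is a minimal sketch of how these pieces might compose, assuming PyTorch-style constructors (Linear::new(in_features, out_features), ReLU::new()), a chaining Sequential builder, and a Module::forward method; none of these signatures are confirmed by this page.

```rust
use axonml_nn::{Linear, Module, ReLU, Sequential};

fn main() {
    // Assumed constructors, mirroring torch.nn conventions:
    // Linear::new(in_features, out_features), ReLU::new().
    let model = Sequential::new()
        .add(Linear::new(784, 128))
        .add(ReLU::new())
        .add(Linear::new(128, 10));

    // Module::forward is assumed to take and return the crate's tensor type:
    // let logits = model.forward(&input);
    let _ = model;
}
```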

Re-exports§

pub use module::Module;
pub use module::ModuleList;
pub use parameter::Parameter;
pub use sequential::Sequential;
pub use layers::AdaptiveAvgPool2d;
pub use layers::AvgPool1d;
pub use layers::AvgPool2d;
pub use layers::BatchNorm1d;
pub use layers::BatchNorm2d;
pub use layers::Conv1d;
pub use layers::Conv2d;
pub use layers::ConvTranspose2d;
pub use layers::CrossAttention;
pub use layers::DifferentialAttention;
pub use layers::Dropout;
pub use layers::Embedding;
pub use layers::Expert;
pub use layers::FFT1d;
pub use layers::GATConv;
pub use layers::GCNConv;
pub use layers::GRU;
pub use layers::GRUCell;
pub use layers::GroupNorm;
pub use layers::GroupSparsity;
pub use layers::InstanceNorm2d;
pub use layers::LSTM;
pub use layers::LSTMCell;
pub use layers::LayerNorm;
pub use layers::Linear;
pub use layers::LotteryTicket;
pub use layers::MaxPool1d;
pub use layers::MaxPool2d;
pub use layers::MoELayer;
pub use layers::MoERouter;
pub use layers::MultiHeadAttention;
pub use layers::PackedTernaryWeights;
pub use layers::RNN;
pub use layers::RNNCell;
pub use layers::ResidualBlock;
pub use layers::STFT;
pub use layers::Seq2SeqTransformer;
pub use layers::SparseLinear;
pub use layers::TernaryLinear;
pub use layers::TransformerDecoder;
pub use layers::TransformerDecoderLayer;
pub use layers::TransformerEncoder;
pub use layers::TransformerEncoderLayer;
pub use activation::ELU;
pub use activation::Flatten;
pub use activation::GELU;
pub use activation::Identity;
pub use activation::LeakyReLU;
pub use activation::LogSoftmax;
pub use activation::ReLU;
pub use activation::SiLU;
pub use activation::Sigmoid;
pub use activation::Softmax;
pub use activation::Tanh;
pub use loss::BCELoss;
pub use loss::BCEWithLogitsLoss;
pub use loss::CrossEntropyLoss;
pub use loss::L1Loss;
pub use loss::MSELoss;
pub use loss::NLLLoss;
pub use loss::Reduction;
pub use loss::SmoothL1Loss;
pub use init::InitMode;
pub use init::constant;
pub use init::diag;
pub use init::eye;
pub use init::glorot_normal;
pub use init::glorot_uniform;
pub use init::he_normal;
pub use init::he_uniform;
pub use init::kaiming_normal;
pub use init::kaiming_uniform;
pub use init::normal;
pub use init::ones;
pub use init::orthogonal;
pub use init::randn;
pub use init::sparse;
pub use init::uniform;
pub use init::uniform_range;
pub use init::xavier_normal;
pub use init::xavier_uniform;
pub use init::zeros;
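
The init re-exports cover the standard schemes (Xavier/Glorot, Kaiming/He, orthogonal, sparse, constant, plus basic fills). A hedged sketch of pairing an initializer with a Parameter follows, assuming the init functions take a shape slice and return the crate's tensor type, which Parameter::new wraps; both assumptions are unconfirmed.

```rust
use axonml_nn::{he_normal, zeros, Parameter};

// Hypothetical helper: build weight and bias Parameters for a dense layer.
// Assumed signatures: init functions accept a &[usize] shape and return
// the crate's tensor type, which Parameter::new wraps.
fn dense_params(in_features: usize, out_features: usize) -> (Parameter, Parameter) {
    let weight = Parameter::new(he_normal(&[out_features, in_features]));
    let bias = Parameter::new(zeros(&[out_features]));
    (weight, bias)
}
```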

Modules§

activation
Activation Modules - Non-linear Activation Functions
functional
Functional API - Stateless Neural Network Operations
init
Weight Initialization - Parameter Initialization Strategies
layers
Neural Network Layers
loss
Loss Functions - Training Objectives
module
Module Trait - Neural Network Module Interface
parameter
Parameter - Learnable Parameter Wrapper
prelude
Common imports for neural network development.
sequential
Sequential - Sequential Container for Modules
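
The prelude module suggests the intended import style for downstream code. A hedged sketch of a loss computation via the prelude, assuming it re-exports the items listed above; Tensor here is a hypothetical stand-in for the crate's tensor type, which this page does not name, and the MSELoss/Reduction calls follow assumed PyTorch-like conventions.

```rust
use axonml_nn::prelude::*;

// Hypothetical loss computation; `Tensor` is a stand-in for whatever
// tensor type the crate uses and is not confirmed by this page.
fn mse_step(model: &Sequential, input: &Tensor, target: &Tensor) -> Tensor {
    let prediction = model.forward(input);       // assumed Module::forward
    let loss_fn = MSELoss::new(Reduction::Mean); // assumed constructor
    loss_fn.forward(&prediction, target)         // assumed loss API
}
```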