Module nn


candle-nn

Other Crates

Candle consists of a number of crates. This crate holds the structs and functions that let you build and train neural nets. You may wish to look at the docs for the other crates as well.
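As a quick orientation, here is a minimal sketch (not taken from the crate docs) that builds a small two-layer network from fresh variables and runs a forward pass. The layer names "fc1"/"fc2" and the dimensions are illustrative:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{linear, Activation, Module, VarBuilder, VarMap};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // A VarMap holds the trainable variables; a VarBuilder hands them out by name.
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &device);

    // Two linear layers, registered under the (illustrative) prefixes "fc1"/"fc2".
    let fc1 = linear(784, 128, vb.pp("fc1"))?;
    let fc2 = linear(128, 10, vb.pp("fc2"))?;

    // Forward pass on a random batch of 4 inputs.
    let xs = Tensor::randn(0f32, 1f32, (4, 784), &device)?;
    let ys = fc2.forward(&Activation::Relu.forward(&fc1.forward(&xs)?)?)?;
    assert_eq!(ys.dims(), &[4, 10]);
    Ok(())
}
```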

Re-exports

pub use activation::prelu;
pub use activation::Activation;
pub use activation::PReLU;
pub use attention::scaled_dot_product_attention;
pub use batch_norm::batch_norm;
pub use batch_norm::BatchNorm;
pub use batch_norm::BatchNormConfig;
pub use conv::conv1d;
pub use conv::conv1d_no_bias;
pub use conv::conv2d;
pub use conv::conv2d_no_bias;
pub use conv::conv_transpose1d;
pub use conv::conv_transpose1d_no_bias;
pub use conv::conv_transpose2d;
pub use conv::conv_transpose2d_no_bias;
pub use conv::Conv1d;
pub use conv::Conv1dConfig;
pub use conv::Conv2d;
pub use conv::Conv2dConfig;
pub use conv::ConvTranspose1d;
pub use conv::ConvTranspose1dConfig;
pub use conv::ConvTranspose2d;
pub use conv::ConvTranspose2dConfig;
pub use embedding::embedding;
pub use embedding::Embedding;
pub use func::func;
pub use func::func_t;
pub use func::Func;
pub use func::FuncT;
pub use group_norm::group_norm;
pub use group_norm::GroupNorm;
pub use init::Init;
pub use layer_norm::layer_norm;
pub use layer_norm::rms_norm_non_quant;
pub use layer_norm::rms_norm_quant;
pub use layer_norm::LayerNorm;
pub use layer_norm::LayerNormConfig;
pub use layer_norm::RmsNorm;
pub use linear::linear;
pub use linear::linear_b;
pub use linear::linear_no_bias;
pub use linear::Linear;
pub use ops::kvconcat;
pub use ops::Dropout;
pub use optim::AdamW;
pub use optim::Optimizer;
pub use optim::ParamsAdamW;
pub use optim::SGD;
pub use rnn::gru;
pub use rnn::lstm;
pub use rnn::GRUConfig;
pub use rnn::LSTMConfig;
pub use rnn::GRU;
pub use rnn::LSTM;
pub use rnn::RNN;
pub use rope::RotaryEmbedding;
pub use sequential::seq;
pub use sequential::Sequential;
pub use var_builder::VarBuilder;
pub use var_map::VarMap;
pub use crate::core::Module;
pub use crate::core::ModuleT;
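Several of the re-exports above compose into a basic training loop: a VarMap owns the trainable variables, an Optimizer such as SGD updates them, and the loss module supplies the objective. The following is a hedged sketch using random placeholder data, not an excerpt from the crate:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{linear, loss, Module, Optimizer, VarBuilder, VarMap, SGD};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &device);
    let model = linear(2, 1, vb.pp("out"))?;

    // SGD optimizes every variable registered in the VarMap.
    let mut opt = SGD::new(varmap.all_vars(), 0.05)?;

    // Random placeholder inputs and targets.
    let xs = Tensor::randn(0f32, 1f32, (16, 2), &device)?;
    let ys = Tensor::randn(0f32, 1f32, (16, 1), &device)?;
    for _step in 0..100 {
        let preds = model.forward(&xs)?;
        let l = loss::mse(&preds, &ys)?;
        opt.backward_step(&l)?; // backprop + parameter update
    }
    Ok(())
}
```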

Modules

activation
Activation Functions.
attention
Attention operations, including scaled dot-product attention.
batch_norm
Batch Normalization.
conv
Convolution Layers.
embedding
Embedding Layer.
encoding
Encoding Utilities (e.g., one-hot/cold encoding).
func
Layers defined by closures.
group_norm
Group Normalization.
init
Variable initialization.
kv_cache
Cache Implementations.
layer_norm
Layer Normalization.
linear
Linear layer.
loss
Loss Calculations.
ops
Tensor ops.
optim
Various optimization algorithms.
rnn
Recurrent Neural Networks.
rope
Rotary position embeddings.
rotary_emb
Rotary Embeddings.
sequential
Sequential Layer.
var_builder
A VarBuilder is used to retrieve variables used by a model. These variables can either come from a pre-trained checkpoint, e.g. using VarBuilder::from_mmaped_safetensors, or be initialized for training, e.g. using VarBuilder::from_varmap (see the sketch after this list).
var_map
A VarMap is a store that holds named variables.
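To illustrate the var_builder entry above, here is a short sketch of the two ways of obtaining a VarBuilder that its description mentions. The file name "model.safetensors" is a hypothetical placeholder, and from_mmaped_safetensors is unsafe because it memory-maps the file:

```rust
use candle_core::{DType, Device, Result};
use candle_nn::{VarBuilder, VarMap};

fn main() -> Result<()> {
    let device = Device::Cpu;

    // For inference: memory-map weights from a pre-trained checkpoint.
    // "model.safetensors" is a placeholder path.
    // Safety: the file must not be modified while it is mapped.
    let _vb_pretrained = unsafe {
        VarBuilder::from_mmaped_safetensors(&["model.safetensors"], DType::F32, &device)?
    };

    // For training: create fresh variables, tracked in a VarMap so an
    // optimizer can update them later.
    let varmap = VarMap::new();
    let _vb_train = VarBuilder::from_varmap(&varmap, DType::F32, &device);
    Ok(())
}
```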