pub trait Layer {
// Required methods
fn forward(&mut self, input: &Tensor) -> Tensor;
fn backward(&mut self, grad_output: &Tensor) -> Result<Tensor, ModelError>;
fn update_parameters_sgd(&mut self, _lr: f32);
fn update_parameters_adam(
&mut self,
_lr: f32,
_beta1: f32,
_beta2: f32,
_epsilon: f32,
_t: u64,
);
fn update_parameters_rmsprop(&mut self, _lr: f32, _rho: f32, _epsilon: f32);
fn get_weights(&self) -> LayerWeight<'_>;
// Provided methods
fn layer_type(&self) -> &str { ... }
fn output_shape(&self) -> String { ... }
fn param_count(&self) -> usize { ... }
}
Defines the interface for neural network layers.
This trait provides the core functionality that all neural network layers must implement, including forward and backward propagation, as well as parameter updates for different optimization algorithms.
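To make the forward/backward contract concrete, here is a self-contained toy sketch. `Tensor` is simplified to `Vec<f32>`, only part of the trait's surface is mirrored as inherent methods, and every name besides the method names is illustrative, not the crate's actual API:

```rust
// Toy stand-in: the crate's real Tensor is richer than Vec<f32>.
type Tensor = Vec<f32>;

// A layer that scales its input by one learnable factor.
struct Scale {
    factor: f32,
    grad: f32,          // gradient of the loss w.r.t. `factor`
    last_input: Tensor, // cached by forward, consumed by backward
}

impl Scale {
    // Mirrors Layer::forward: compute the output and cache the input.
    fn forward(&mut self, input: &Tensor) -> Tensor {
        self.last_input = input.clone();
        input.iter().map(|x| x * self.factor).collect()
    }

    // Mirrors Layer::backward: accumulate the parameter gradient and
    // return the gradient w.r.t. the input for the previous layer.
    fn backward(&mut self, grad_output: &Tensor) -> Tensor {
        self.grad = self
            .last_input
            .iter()
            .zip(grad_output)
            .map(|(x, g)| x * g)
            .sum();
        grad_output.iter().map(|g| g * self.factor).collect()
    }

    // Mirrors Layer::update_parameters_sgd.
    fn update_parameters_sgd(&mut self, lr: f32) {
        self.factor -= lr * self.grad;
    }
}

fn main() {
    let mut layer = Scale { factor: 2.0, grad: 0.0, last_input: vec![] };
    let out = layer.forward(&vec![1.0, 2.0]);
    assert_eq!(out, vec![2.0, 4.0]);
    let grad_in = layer.backward(&vec![1.0, 1.0]);
    assert_eq!(grad_in, vec![2.0, 2.0]);
    layer.update_parameters_sgd(0.1); // grad = 1*1 + 2*1 = 3
    assert!((layer.factor - 1.7).abs() < 1e-6);
    println!("factor after update: {}", layer.factor);
}
```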
Required Methods
fn update_parameters_sgd(&mut self, _lr: f32)
Updates the layer parameters using Stochastic Gradient Descent.
Parameters
- _lr - Learning rate for parameter updates
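As a hedged sketch of the step each implementor is expected to perform (the `weights` and `grads` slices are stand-ins for the layer's stored parameters and the gradients accumulated during `backward`, not the crate's actual fields):

```rust
// Vanilla SGD: move each parameter against its gradient.
fn sgd_step(weights: &mut [f32], grads: &[f32], lr: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}

fn main() {
    let mut w = vec![1.0_f32, -0.5];
    let g = vec![0.2_f32, -0.4];
    sgd_step(&mut w, &g, 0.1);
    // 1.0 - 0.1*0.2 = 0.98 and -0.5 - 0.1*(-0.4) = -0.46
    assert!((w[0] - 0.98).abs() < 1e-6);
    assert!((w[1] + 0.46).abs() < 1e-6);
    println!("updated weights: {:?}", w);
}
```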
fn update_parameters_adam(&mut self, _lr: f32, _beta1: f32, _beta2: f32, _epsilon: f32, _t: u64)
Updates the layer parameters using Adam optimizer.
Parameters
- _lr - Learning rate for parameter updates
- _beta1 - Exponential decay rate for the first moment estimates
- _beta2 - Exponential decay rate for the second moment estimates
- _epsilon - Small constant for numerical stability
- _t - Current training iteration
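The parameter names above map onto the standard Adam update. The standalone sketch below shows one step for a single parameter; the per-parameter moment state `m` and `v` is an assumption about what a layer would store, not the crate's actual layout:

```rust
// One Adam step for a single parameter.
fn adam_step(
    w: &mut f32,
    grad: f32,
    m: &mut f32,
    v: &mut f32,
    lr: f32,
    beta1: f32,
    beta2: f32,
    epsilon: f32,
    t: u64,
) {
    *m = beta1 * *m + (1.0 - beta1) * grad;        // decayed first moment
    *v = beta2 * *v + (1.0 - beta2) * grad * grad; // decayed second moment
    let m_hat = *m / (1.0 - beta1.powi(t as i32)); // bias-corrected estimates
    let v_hat = *v / (1.0 - beta2.powi(t as i32));
    *w -= lr * m_hat / (v_hat.sqrt() + epsilon);
}

fn main() {
    let (mut w, mut m, mut v) = (1.0_f32, 0.0_f32, 0.0_f32);
    adam_step(&mut w, 0.5, &mut m, &mut v, 0.01, 0.9, 0.999, 1e-8, 1);
    // At t = 1 the bias-corrected step is close to lr * sign(grad).
    assert!((w - 0.99).abs() < 1e-4);
    println!("w after one Adam step: {w}");
}
```

Note that `_t` starts at 1 in this formulation; passing `t = 0` would divide by zero in the bias-correction terms.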
fn update_parameters_rmsprop(&mut self, _lr: f32, _rho: f32, _epsilon: f32)
Updates the layer parameters using RMSprop optimizer.
Parameters
- _lr - Learning rate for parameter updates
- _rho - Decay rate for the moving average of squared gradients
- _epsilon - Small constant for numerical stability
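A minimal sketch of one RMSprop step for a single parameter; `sq_avg` stands in for the layer's running average of squared gradients (an assumed field, not the crate's actual one):

```rust
// One RMSprop step: scale the learning rate by the root of the
// moving average of squared gradients.
fn rmsprop_step(w: &mut f32, grad: f32, sq_avg: &mut f32, lr: f32, rho: f32, epsilon: f32) {
    *sq_avg = rho * *sq_avg + (1.0 - rho) * grad * grad;
    *w -= lr * grad / (sq_avg.sqrt() + epsilon);
}

fn main() {
    let (mut w, mut sq_avg) = (1.0_f32, 0.0_f32);
    rmsprop_step(&mut w, 0.5, &mut sq_avg, 0.01, 0.9, 1e-8);
    // First step: sq_avg = 0.1 * 0.25 = 0.025, so the step is
    // 0.01 * 0.5 / sqrt(0.025) ≈ 0.0316.
    assert!((w - 0.9684).abs() < 1e-3);
    println!("w after one RMSprop step: {w}");
}
```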
fn get_weights(&self) -> LayerWeight<'_>
Returns a map of all weights in the layer.
This method provides access to all weight matrices and bias vectors used by the layer. For recurrent layers such as LSTM, the weights are organized by gate (input, forget, cell, output) and by role (kernel, recurrent_kernel, bias) within each gate.
Returns
A LayerWeight enum containing:
- LayerWeight::Dense for Dense layers, with weight and bias
- LayerWeight::SimpleRNN for SimpleRNN layers, with kernel, recurrent_kernel, and bias
- LayerWeight::LSTM for LSTM layers, with weights for the input, forget, cell, and output gates
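A caller would typically branch on the returned variant. The enum below is a trimmed-down, hypothetical stand-in for the crate's LayerWeight (only a Dense-like shape is sketched, and the field names are assumptions):

```rust
// Hypothetical, simplified stand-in for LayerWeight, used only to
// illustrate matching on the returned variant.
enum LayerWeight<'a> {
    Dense { weight: &'a [f32], bias: &'a [f32] },
}

// Summarize a layer's weights without copying them.
fn describe(w: &LayerWeight) -> String {
    match w {
        LayerWeight::Dense { weight, bias } => {
            format!("Dense layer: {} weights, {} biases", weight.len(), bias.len())
        }
    }
}

fn main() {
    let (weight, bias) = (vec![0.1_f32; 6], vec![0.0_f32; 2]);
    let lw = LayerWeight::Dense { weight: &weight, bias: &bias };
    assert_eq!(describe(&lw), "Dense layer: 6 weights, 2 biases");
    println!("{}", describe(&lw));
}
```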
Provided Methods
fn layer_type(&self) -> &str
Returns the type name of the layer (e.g., “Dense”).
Returns
A string slice representing the layer type
fn output_shape(&self) -> String
Returns a description of the output shape of the layer.
Returns
A string describing the output dimensions
fn param_count(&self) -> usize
Returns the total number of trainable parameters in the layer.
Returns
The count of parameters as a usize