Trait Layer

pub trait Layer {
    // Required methods
    fn forward(&mut self, input: &Tensor) -> Tensor;
    fn backward(&mut self, grad_output: &Tensor) -> Result<Tensor, ModelError>;
    fn update_parameters_sgd(&mut self, _lr: f32);
    fn update_parameters_adam(
        &mut self,
        _lr: f32,
        _beta1: f32,
        _beta2: f32,
        _epsilon: f32,
        _t: u64,
    );
    fn update_parameters_rmsprop(&mut self, _lr: f32, _rho: f32, _epsilon: f32);
    fn get_weights(&self) -> LayerWeight<'_>;

    // Provided methods
    fn layer_type(&self) -> &str { ... }
    fn output_shape(&self) -> String { ... }
    fn param_count(&self) -> usize { ... }
}

Defines the interface for neural network layers.

This trait provides the core functionality that all neural network layers must implement, including forward and backward propagation, as well as parameter updates for different optimization algorithms.
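The shape of the trait can be sketched with a minimal implementor. The following is a simplified, self-contained mirror of the interface (`Vec<f32>` stands in for the crate's `Tensor`, and only a subset of methods is shown); the `Identity` layer simply passes activations and gradients through unchanged:

```rust
// Simplified stand-ins for the crate's types (illustrative only).
type Tensor = Vec<f32>;

#[derive(Debug)]
enum ModelError {
    ProcessingError(String),
}

// A cut-down mirror of the Layer trait.
trait Layer {
    fn forward(&mut self, input: &Tensor) -> Tensor;
    fn backward(&mut self, grad_output: &Tensor) -> Result<Tensor, ModelError>;
    fn update_parameters_sgd(&mut self, _lr: f32);
    fn layer_type(&self) -> &str {
        "Layer"
    }
}

// An identity layer: no parameters, so the optimizer hook is a no-op.
struct Identity;

impl Layer for Identity {
    fn forward(&mut self, input: &Tensor) -> Tensor {
        input.clone()
    }
    fn backward(&mut self, grad_output: &Tensor) -> Result<Tensor, ModelError> {
        // The gradient of the identity map is the incoming gradient itself.
        Ok(grad_output.clone())
    }
    fn update_parameters_sgd(&mut self, _lr: f32) {}
    fn layer_type(&self) -> &str {
        "Identity"
    }
}

fn main() {
    let mut layer = Identity;
    let out = layer.forward(&vec![1.0, 2.0]);
    println!("{:?} via {}", out, layer.layer_type());
}
```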

Required Methods§

fn forward(&mut self, input: &Tensor) -> Tensor

Performs forward propagation through the layer.

§Parameters
  • input - The input tensor to the layer
§Returns

The output tensor after forward computation

fn backward(&mut self, grad_output: &Tensor) -> Result<Tensor, ModelError>

Performs backward propagation through the layer.

§Parameters
  • grad_output - The gradient tensor from the next layer
§Returns
  • Ok(Tensor) - The gradient tensor to be passed to the previous layer
  • Err(ModelError::ProcessingError(String)) - If the layer encountered an error during processing
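A backward pass applies the chain rule: it returns the gradient with respect to the layer's input and caches gradients with respect to the layer's own parameters for the next optimizer step. A sketch for a hypothetical layer that scales its input by a single learnable weight `w` (so `y = w·x`, hence `∂L/∂x = w·∂L/∂y` and `∂L/∂w = Σ x·∂L/∂y`):

```rust
// Hypothetical scaling layer; a String error stands in for ModelError.
struct Scale {
    w: f32,
    last_input: Vec<f32>, // cached by forward for use in backward
    grad_w: f32,          // parameter gradient, consumed by the optimizer
}

impl Scale {
    fn forward(&mut self, input: &[f32]) -> Vec<f32> {
        self.last_input = input.to_vec();
        input.iter().map(|x| self.w * x).collect()
    }

    fn backward(&mut self, grad_output: &[f32]) -> Result<Vec<f32>, String> {
        if grad_output.len() != self.last_input.len() {
            return Err("gradient shape mismatch".to_string());
        }
        // dL/dw = sum over elements of x * dL/dy
        self.grad_w = self
            .last_input
            .iter()
            .zip(grad_output)
            .map(|(x, g)| x * g)
            .sum();
        // dL/dx = w * dL/dy, passed to the previous layer
        Ok(grad_output.iter().map(|g| self.w * g).collect())
    }
}

fn main() {
    let mut layer = Scale { w: 2.0, last_input: vec![], grad_w: 0.0 };
    layer.forward(&[1.0, 3.0]);
    let grad_in = layer.backward(&[1.0, 1.0]).unwrap();
    println!("{:?} {}", grad_in, layer.grad_w); // [2.0, 2.0] 4.0
}
```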

fn update_parameters_sgd(&mut self, _lr: f32)

Updates the layer parameters using Stochastic Gradient Descent.

§Parameters
  • _lr - Learning rate for parameter updates
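The SGD rule itself is `w ← w − lr·∇w`, applied element-wise. A minimal sketch, with `&mut [f32]` standing in for a parameter tensor:

```rust
// One SGD step over a flat parameter slice and its gradients.
fn sgd_step(weights: &mut [f32], grads: &[f32], lr: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}

fn main() {
    let mut w = vec![1.0_f32, -0.5];
    sgd_step(&mut w, &[0.2, -0.4], 0.1);
    println!("{:?}", w); // approximately [0.98, -0.46]
}
```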

fn update_parameters_adam(
    &mut self,
    _lr: f32,
    _beta1: f32,
    _beta2: f32,
    _epsilon: f32,
    _t: u64,
)

Updates the layer parameters using Adam optimizer.

§Parameters
  • _lr - Learning rate for parameter updates
  • _beta1 - Exponential decay rate for the first moment estimates
  • _beta2 - Exponential decay rate for the second moment estimates
  • _epsilon - Small constant for numerical stability
  • _t - Current training iteration
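How the parameters fit together can be seen in a single-scalar Adam step (a sketch, not the crate's implementation): `beta1` and `beta2` decay the first and second moment estimates `m` and `v`, `t` drives the bias correction, and `epsilon` guards the division:

```rust
// Per-parameter Adam optimizer state.
struct AdamState {
    m: f32, // first moment (mean of gradients)
    v: f32, // second moment (mean of squared gradients)
}

fn adam_step(
    w: &mut f32,
    g: f32,
    s: &mut AdamState,
    lr: f32,
    beta1: f32,
    beta2: f32,
    epsilon: f32,
    t: u64,
) {
    s.m = beta1 * s.m + (1.0 - beta1) * g;
    s.v = beta2 * s.v + (1.0 - beta2) * g * g;
    // Bias correction compensates for the zero-initialized moments.
    let m_hat = s.m / (1.0 - beta1.powi(t as i32));
    let v_hat = s.v / (1.0 - beta2.powi(t as i32));
    *w -= lr * m_hat / (v_hat.sqrt() + epsilon);
}

fn main() {
    let mut w = 1.0_f32;
    let mut s = AdamState { m: 0.0, v: 0.0 };
    // On the very first step the update magnitude is close to lr.
    adam_step(&mut w, 0.5, &mut s, 0.001, 0.9, 0.999, 1e-8, 1);
    println!("{w}");
}
```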

fn update_parameters_rmsprop(&mut self, _lr: f32, _rho: f32, _epsilon: f32)

Updates the layer parameters using RMSprop optimizer.

§Parameters
  • _lr - Learning rate for parameter updates
  • _rho - Decay rate for moving average of squared gradients
  • _epsilon - Small constant for numerical stability
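In RMSprop, `rho` controls the moving average of squared gradients that normalizes each update. A single-scalar sketch of one step (illustrative, not the crate's code):

```rust
// One RMSprop step: avg_sq accumulates rho-weighted squared gradients,
// and the raw gradient is divided by its root.
fn rmsprop_step(w: &mut f32, g: f32, avg_sq: &mut f32, lr: f32, rho: f32, epsilon: f32) {
    *avg_sq = rho * *avg_sq + (1.0 - rho) * g * g;
    *w -= lr * g / (avg_sq.sqrt() + epsilon);
}

fn main() {
    let mut w = 1.0_f32;
    let mut avg_sq = 0.0_f32;
    rmsprop_step(&mut w, 0.5, &mut avg_sq, 0.01, 0.9, 1e-8);
    println!("{w}");
}
```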

fn get_weights(&self) -> LayerWeight<'_>

Returns the weights of the layer.

This method provides access to all weight matrices and bias vectors used by the layer. For an LSTM layer, the weights are organized by gate (input, forget, cell, output) and by their role (kernel, recurrent_kernel, bias) within each gate.

§Returns
  • A LayerWeight enum containing:
    • LayerWeight::Dense for Dense layers with weight and bias
    • LayerWeight::SimpleRNN for SimpleRNN layers with kernel, recurrent_kernel, and bias
    • LayerWeight::LSTM for LSTM layers with weights for input, forget, cell, and output gates

Provided Methods§

fn layer_type(&self) -> &str

Returns the type name of the layer (e.g., “Dense”).

§Returns

A string slice representing the layer type

fn output_shape(&self) -> String

Returns a description of the output shape of the layer.

§Returns

A string describing the output dimensions

fn param_count(&self) -> usize

Returns the total number of trainable parameters in the layer.

§Returns

The count of parameters as a usize
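For reference, a fully-connected (Dense) layer with `in_features` inputs and `out_features` units has `in_features * out_features` weights plus `out_features` biases (a general formula, not taken from the crate):

```rust
// Trainable parameter count for a Dense layer: weights + biases.
fn dense_param_count(in_features: usize, out_features: usize) -> usize {
    in_features * out_features + out_features
}

fn main() {
    // e.g. 784 inputs feeding 128 units: 784*128 + 128 = 100480
    println!("{}", dense_param_count(784, 128));
}
```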

Implementors§