pub struct Layer<L: LinAlg = CpuLinAlg> {
pub weights: L::Matrix,
pub bias: L::Vector,
pub activation: Activation,
}
A single dense layer with weights, bias, and activation function.
Generic over a LinAlg backend L. Defaults to CpuLinAlg for
backward compatibility.
Weights have shape [output_size × input_size]. Bias has length output_size.
§Examples
use pc_rl_core::activation::Activation;
use pc_rl_core::layer::Layer;
use rand::SeedableRng;
use rand::rngs::StdRng;
let mut rng = StdRng::seed_from_u64(42);
let layer: Layer = Layer::new(4, 3, Activation::Tanh, &mut rng);
let output: Vec<f64> = layer.forward(&vec![1.0, 0.0, -1.0, 0.5]);
assert_eq!(output.len(), 3);
Fields§
§weights: L::Matrix
Weight matrix of shape [output_size × input_size].
§bias: L::Vector
Bias vector of length output_size.
§activation: Activation
Activation function applied element-wise after the linear transform.
Implementations§
impl<L: LinAlg> Layer<L>
pub fn new(
    input_size: usize,
    output_size: usize,
    activation: Activation,
    rng: &mut impl Rng,
) -> Self
Creates a new layer with Xavier-initialized weights and zero bias.
§Arguments
input_size - Number of inputs to this layer.
output_size - Number of neurons (outputs) in this layer.
activation - Activation function to apply after the linear transform.
rng - Random number generator for weight initialization.
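To make the initialization concrete, here is a minimal sketch of one common Xavier (Glorot) uniform scheme: weights drawn from U(-b, b) with b = sqrt(6 / (input_size + output_size)) and a zero bias. The tiny LCG stands in for the caller-supplied `impl Rng`, and the exact distribution the crate uses may differ; this only illustrates the shapes and scale.

```rust
// Hypothetical sketch of Xavier-uniform initialization (not the crate's code).
// Returns (weights, bias) with weights[output_size][input_size], bias all zero.
fn xavier_sketch(
    input_size: usize,
    output_size: usize,
    seed: &mut u64,
) -> (Vec<Vec<f64>>, Vec<f64>) {
    let bound = (6.0 / (input_size + output_size) as f64).sqrt();
    // Simple LCG as a stand-in RNG; the real API takes `rng: &mut impl Rng`.
    let mut next = || {
        *seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let u = (*seed >> 11) as f64 / (1u64 << 53) as f64; // uniform in [0, 1)
        (2.0 * u - 1.0) * bound // uniform in [-bound, bound)
    };
    let weights = (0..output_size)
        .map(|_| (0..input_size).map(|_| next()).collect())
        .collect();
    (weights, vec![0.0; output_size])
}
```

With input_size = 4 and output_size = 3 (matching the example above), every weight falls within ±sqrt(6/7) ≈ ±0.926.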
pub fn forward(&self, input: &L::Vector) -> L::Vector
Computes activation(W * input + bias).
§Panics
Panics if input.len() != input_size (number of columns in weights).
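The shape semantics can be sketched with plain vectors (the actual `L::Matrix` and `L::Vector` types are backend-defined; a row-major `Vec<Vec<f64>>` and `Activation::Tanh` are assumed here for illustration):

```rust
// Sketch of forward: activation(W * input + bias).
// weights is [output_size][input_size]; result has length output_size.
fn forward_sketch(weights: &[Vec<f64>], bias: &[f64], input: &[f64]) -> Vec<f64> {
    assert_eq!(weights[0].len(), input.len(), "input length must match column count");
    weights
        .iter()
        .zip(bias)
        .map(|(row, b)| {
            // Dot product of one weight row with the input, plus that row's bias.
            let pre = row.iter().zip(input).map(|(w, x)| w * x).sum::<f64>() + b;
            pre.tanh() // stand-in for the layer's Activation
        })
        .collect()
}
```

For a 3×4 weight matrix and a length-4 input, the output has length 3, matching the assertion in the example above.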
pub fn transpose_forward(
    &self,
    input: &L::Vector,
    activation: Activation,
) -> L::Vector
Computes activation(W^T * input) (no bias).
Used for predictive-coding (PC) top-down predictions. The activation
parameter is separate from self.activation because a different activation
may apply at the output→last-hidden boundary.
§Panics
Panics if input.len() != output_size (number of rows in weights).
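The transposed product reverses the shapes of forward: the input now has length output_size and the result length input_size. A minimal sketch with plain vectors (again assuming a row-major matrix and tanh, neither of which is mandated by the trait):

```rust
// Sketch of transpose_forward: activation(W^T * input), no bias term.
// weights is [output_size][input_size]; input has length output_size (rows),
// and the result has length input_size (columns).
fn transpose_forward_sketch(weights: &[Vec<f64>], input: &[f64]) -> Vec<f64> {
    assert_eq!(weights.len(), input.len(), "input length must match row count");
    let input_size = weights[0].len();
    (0..input_size)
        .map(|j| {
            // Column j of W dotted with the input.
            let pre: f64 = weights.iter().zip(input).map(|(row, x)| row[j] * x).sum();
            pre.tanh()
        })
        .collect()
}
```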
pub fn backward(
    &mut self,
    input: &L::Vector,
    output: &L::Vector,
    delta: &L::Vector,
    lr: f64,
    surprise_scale: f64,
) -> L::Vector
Backpropagation with gradient and weight clipping.
Returns the propagated delta for the layer below (length = input_size).
§Arguments
input - Input that was fed to this layer during the forward pass.
output - Output of this layer from the forward pass (post-activation).
delta - Error signal from the layer above.
lr - Base learning rate.
surprise_scale - Multiplier on lr based on the surprise score.
§Panics
Panics on dimension mismatches.
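The overall data flow can be sketched for a tanh layer. Note the clip range, the use of the post-activation output for the derivative, and the plain SGD update are all assumptions for illustration; the crate's actual clipping thresholds and update rule are not documented here.

```rust
// Hypothetical sketch of backward for a tanh layer (not the crate's code).
// Returns the propagated delta for the layer below (length = input_size).
fn backward_sketch(
    weights: &mut [Vec<f64>],
    bias: &mut [f64],
    input: &[f64],
    output: &[f64],
    delta: &[f64],
    lr: f64,
    surprise_scale: f64,
) -> Vec<f64> {
    let clip = |g: f64| g.clamp(-5.0, 5.0); // assumed gradient clip range
    // Local error: delta ⊙ act'(pre); for tanh, act' = 1 - output².
    let local: Vec<f64> = delta
        .iter()
        .zip(output)
        .map(|(d, y)| d * (1.0 - y * y))
        .collect();
    // Propagated delta for the layer below: W^T * local.
    let prop: Vec<f64> = (0..input.len())
        .map(|j| weights.iter().zip(&local).map(|(row, e)| row[j] * e).sum())
        .collect();
    // Surprise-scaled SGD step with clipped gradients.
    let step = lr * surprise_scale;
    for (row, e) in weights.iter_mut().zip(&local) {
        for (w, x) in row.iter_mut().zip(input) {
            *w -= step * clip(e * x);
        }
    }
    for (b, e) in bias.iter_mut().zip(&local) {
        *b -= step * clip(*e);
    }
    prop
}
```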