Trait leaf::layer::ComputeInputGradient
pub trait ComputeInputGradient<T, B: IBackend> {
    fn compute_input_gradient(
        &self,
        backend: &B,
        weights_data: &[&SharedTensor<T>],
        output_data: &[&SharedTensor<T>],
        output_gradients: &[&SharedTensor<T>],
        input_data: &[&SharedTensor<T>],
        input_gradients: &mut [&mut SharedTensor<T>]
    );
}
A Layer that can compute the gradient with respect to its input.
Required Methods
fn compute_input_gradient(
    &self,
    backend: &B,
    weights_data: &[&SharedTensor<T>],
    output_data: &[&SharedTensor<T>],
    output_gradients: &[&SharedTensor<T>],
    input_data: &[&SharedTensor<T>],
    input_gradients: &mut [&mut SharedTensor<T>]
)
Compute gradients with respect to the inputs and write them into input_gradients.
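To illustrate the shape of the trait, here is a minimal, self-contained sketch of an implementor. The `SharedTensor`, `IBackend`, and `Native` types below are simplified stand-ins for the real leaf/collenchyma types (the real `SharedTensor` manages device memory rather than exposing a plain `Vec`), and `ToyReLU` is a hypothetical layer invented for this example; it mirrors what a ReLU backward pass computes: the output gradient passes through wherever the input was positive, and is zero elsewhere.

```rust
/// Simplified stand-in for collenchyma's SharedTensor.
#[derive(Debug)]
pub struct SharedTensor<T> {
    pub data: Vec<T>,
}

/// Simplified stand-in for collenchyma's IBackend.
pub trait IBackend {}
pub struct Native;
impl IBackend for Native {}

pub trait ComputeInputGradient<T, B: IBackend> {
    fn compute_input_gradient(
        &self,
        backend: &B,
        weights_data: &[&SharedTensor<T>],
        output_data: &[&SharedTensor<T>],
        output_gradients: &[&SharedTensor<T>],
        input_data: &[&SharedTensor<T>],
        input_gradients: &mut [&mut SharedTensor<T>],
    );
}

/// Hypothetical layer: ReLU-style backward pass,
/// d(input) = d(output) where input > 0, else 0.
pub struct ToyReLU;

impl<B: IBackend> ComputeInputGradient<f32, B> for ToyReLU {
    fn compute_input_gradient(
        &self,
        _backend: &B,
        _weights_data: &[&SharedTensor<f32>],
        _output_data: &[&SharedTensor<f32>],
        output_gradients: &[&SharedTensor<f32>],
        input_data: &[&SharedTensor<f32>],
        input_gradients: &mut [&mut SharedTensor<f32>],
    ) {
        // Gate the incoming output gradient by the sign of the input
        // and write the result into the first input-gradient tensor.
        let out_grad = &output_gradients[0].data;
        let input = &input_data[0].data;
        input_gradients[0].data = input
            .iter()
            .zip(out_grad)
            .map(|(&x, &g)| if x > 0.0 { g } else { 0.0 })
            .collect();
    }
}

fn main() {
    let input = SharedTensor { data: vec![-1.0, 2.0, 3.0] };
    let out_grad = SharedTensor { data: vec![0.5, 0.5, 0.5] };
    let mut in_grad = SharedTensor { data: vec![0.0; 3] };
    ToyReLU.compute_input_gradient(
        &Native,
        &[],            // no weights for this layer
        &[],            // output data unused here
        &[&out_grad],
        &[&input],
        &mut [&mut in_grad],
    );
    println!("{:?}", in_grad.data); // prints [0.0, 0.5, 0.5]
}
```

Note the parameter layout: a layer receives read-only slices for weights, outputs, output gradients, and inputs, and a mutable slice for the input gradients it must fill in, which is how multi-input layers write one gradient per input blob.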
Implementors
impl<B: IBackend + Relu<f32>> ComputeInputGradient<f32, B> for ReLU
impl<B: IBackend + Sigmoid<f32>> ComputeInputGradient<f32, B> for Sigmoid
impl<B: IBackend + Tanh<f32>> ComputeInputGradient<f32, B> for TanH
impl<B: IBackend + LayerOps<f32>> ComputeInputGradient<f32, B> for Linear
impl<B: IBackend + LogSoftmax<f32>> ComputeInputGradient<f32, B> for LogSoftmax
impl<B: IBackend + Softmax<f32>> ComputeInputGradient<f32, B> for Softmax
impl<B: IBackend> ComputeInputGradient<f32, B> for NegativeLogLikelihood
impl<B: IBackend> ComputeInputGradient<f32, B> for Reshape
impl<B: IBackend + LayerOps<f32> + 'static> ComputeInputGradient<f32, B> for Sequential<B>