Trait DeviceBatchNorm 

pub trait DeviceBatchNorm<U, C, I, const N: usize>
where
    U: UnitValue<U>,
    I: BatchDataType + Debug + 'static,
    <I as BatchDataType>::Type: Debug + 'static,
{
    // Required methods
    fn forward_batch_norm<'a>(&self, input: &'a I, scale: &C, bias: &C,
        estimated_mean: &C, estimated_variance: &C) -> Result<I, EvaluateError>;
    fn forward_batch_norm_train<'a>(&self, input: &'a I, scale: &C, bias: &C,
        estimated_mean: &C, estimated_variance: &C) -> Result<(I, C, C), EvaluateError>;
    fn batch_forward_batch_norm<'a>(&self, input: &'a <I as BatchDataType>::Type,
        scale: &C, bias: &C, estimated_mean: &C, estimated_variance: &C)
        -> Result<<I as BatchDataType>::Type, EvaluateError>;
    fn batch_forward_batch_norm_train<'a>(&self, input: &'a <I as BatchDataType>::Type,
        scale: &C, bias: &C, running_mean: &C, running_variance: &C, momentum: U)
        -> Result<(<I as BatchDataType>::Type, C, C, C, C), TrainingError>;
    fn backward_batch_norm<'a>(&self, loss: &'a I, input: &'a I, scale: &C,
        saved_mean: &C, saved_inv_variance: &C) -> Result<(I, C, C), TrainingError>;
    fn batch_backward_batch_norm<'a>(&self, loss: &'a <I as BatchDataType>::Type,
        input: &'a <I as BatchDataType>::Type, scale: &C, saved_mean: &C,
        saved_inv_variance: &C) -> Result<(<I as BatchDataType>::Type, C, C), TrainingError>;
}

Trait defining the implementation of the various computational processes in the batch normalization layer

Required Methods§

fn forward_batch_norm<'a>(&self, input: &'a I, scale: &C, bias: &C,
    estimated_mean: &C, estimated_variance: &C) -> Result<I, EvaluateError>

Forward propagation calculation

§Arguments
  • input - input
  • scale - γ
  • bias - β
  • estimated_mean - μΒ
  • estimated_variance - σ²Β

output = γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β
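
As a concrete illustration of this formula, the sketch below applies it element-wise using plain f32 slices in place of the crate's I and C container types (the function name and parameter layout are hypothetical, not part of the trait):

// Illustrative sketch of the inference-mode formula above.
fn forward_batch_norm_sketch(
    input: &[f32],
    scale: &[f32],
    bias: &[f32],
    estimated_mean: &[f32],
    estimated_variance: &[f32],
) -> Vec<f32> {
    const EPS: f32 = 1e-6;
    input
        .iter()
        .zip(scale)
        .zip(bias)
        .zip(estimated_mean)
        .zip(estimated_variance)
        .map(|((((&x, &g), &b), &mu), &var)| {
            // output = γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β
            g * ((x - mu) / (var + EPS).sqrt()) + b
        })
        .collect()
}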

§Errors

This function may return an EvaluateError if the evaluation fails.

fn forward_batch_norm_train<'a>(&self, input: &'a I, scale: &C, bias: &C,
    estimated_mean: &C, estimated_variance: &C) -> Result<(I, C, C), EvaluateError>

Forward propagation calculation (training mode)

§Arguments
  • input - input
  • scale - γ
  • bias - β
  • estimated_mean - μΒ
  • estimated_variance - σ²Β

output = (γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β, μΒ, 1 / (σΒ + 1e-6))
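
A minimal stand-alone sketch of the returned tuple, mirroring the formula above with Vec<f32> in place of the I and C types; σΒ is interpreted here as the square root of estimated_variance, and all names are hypothetical:

// Illustrative sketch of the training-mode, single-sample forward pass.
fn forward_batch_norm_train_sketch(
    input: &[f32],
    scale: &[f32],
    bias: &[f32],
    estimated_mean: &[f32],
    estimated_variance: &[f32],
) -> (Vec<f32>, Vec<f32>, Vec<f32>) {
    const EPS: f32 = 1e-6;
    // output = γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β
    let output: Vec<f32> = input
        .iter()
        .enumerate()
        .map(|(i, &x)| scale[i] * ((x - estimated_mean[i]) / (estimated_variance[i] + EPS).sqrt()) + bias[i])
        .collect();
    // saved mean: μΒ, passed through for the backward pass
    let saved_mean = estimated_mean.to_vec();
    // saved inverse variance: 1 / (σΒ + 1e-6), with σΒ taken as sqrt(estimated_variance)
    let saved_inv_variance: Vec<f32> = estimated_variance
        .iter()
        .map(|&v| 1.0 / (v.sqrt() + EPS))
        .collect();
    (output, saved_mean, saved_inv_variance)
}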

§Errors

This function may return an EvaluateError if the evaluation fails.

fn batch_forward_batch_norm<'a>(&self, input: &'a <I as BatchDataType>::Type,
    scale: &C, bias: &C, estimated_mean: &C, estimated_variance: &C)
    -> Result<<I as BatchDataType>::Type, EvaluateError>

Forward propagation calculation in batch

§Arguments
  • input - input
  • scale - γ
  • bias - β
  • estimated_mean - μΒ
  • estimated_variance - σ²Β

output = γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β
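
The batch variant applies the same per-feature normalization to every sample. The sketch below assumes the mini-batch is a slice of per-sample vectors; the name and layout are hypothetical, not the crate's actual types:

// Illustrative sketch of inference-mode normalization over a mini-batch.
fn batch_forward_batch_norm_sketch(
    input: &[Vec<f32>],          // one Vec per sample in the mini-batch
    scale: &[f32],
    bias: &[f32],
    estimated_mean: &[f32],
    estimated_variance: &[f32],
) -> Vec<Vec<f32>> {
    const EPS: f32 = 1e-6;
    input
        .iter()
        .map(|sample| {
            sample
                .iter()
                .enumerate()
                .map(|(i, &x)| scale[i] * ((x - estimated_mean[i]) / (estimated_variance[i] + EPS).sqrt()) + bias[i])
                .collect()
        })
        .collect()
}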

§Errors

This function may return an EvaluateError if the evaluation fails.

fn batch_forward_batch_norm_train<'a>(&self, input: &'a <I as BatchDataType>::Type,
    scale: &C, bias: &C, running_mean: &C, running_variance: &C, momentum: U)
    -> Result<(<I as BatchDataType>::Type, C, C, C, C), TrainingError>

Forward propagation calculation in batch (training mode)

§Arguments
  • input - input
  • scale - γ
  • bias - β
  • running_mean - μΒ
  • running_variance - σ²Β

running_mean = running_mean * momentum + (1 - momentum) * μΒ
running_variance = running_variance * momentum + (1 - momentum) * σ²Β

output = (γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β, μΒ, 1 / (σΒ + 1e-6), running_mean, running_variance)
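
Putting the pieces together, the sketch below computes the batch statistics, the normalized output, the saved inverse variance, and the updated running statistics, following the formulas above with plain Vec<f32> in place of the crate's types (all names are hypothetical):

// Illustrative sketch of the training-mode batch forward pass.
fn batch_forward_batch_norm_train_sketch(
    input: &[Vec<f32>],
    scale: &[f32],
    bias: &[f32],
    running_mean: &[f32],
    running_variance: &[f32],
    momentum: f32,
) -> (Vec<Vec<f32>>, Vec<f32>, Vec<f32>, Vec<f32>, Vec<f32>) {
    const EPS: f32 = 1e-6;
    let n = input.len() as f32;
    let features = scale.len();

    // per-feature batch mean μΒ
    let mean: Vec<f32> = (0..features)
        .map(|i| input.iter().map(|s| s[i]).sum::<f32>() / n)
        .collect();
    // per-feature batch variance σ²Β
    let variance: Vec<f32> = (0..features)
        .map(|i| input.iter().map(|s| (s[i] - mean[i]).powi(2)).sum::<f32>() / n)
        .collect();
    // output = γ * ((input - μΒ) / sqrt(σ²Β + 1e-6)) + β
    let output: Vec<Vec<f32>> = input
        .iter()
        .map(|s| (0..features)
            .map(|i| scale[i] * ((s[i] - mean[i]) / (variance[i] + EPS).sqrt()) + bias[i])
            .collect())
        .collect();
    // saved inverse variance for the backward pass: 1 / (σΒ + 1e-6)
    let inv_variance: Vec<f32> = variance.iter().map(|&v| 1.0 / (v.sqrt() + EPS)).collect();
    // exponential moving averages of the batch statistics
    let new_running_mean: Vec<f32> = running_mean.iter().zip(&mean)
        .map(|(&r, &m)| r * momentum + (1.0 - momentum) * m)
        .collect();
    let new_running_variance: Vec<f32> = running_variance.iter().zip(&variance)
        .map(|(&r, &v)| r * momentum + (1.0 - momentum) * v)
        .collect();
    (output, mean, inv_variance, new_running_mean, new_running_variance)
}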

§Errors

This function may return a TrainingError if the computation fails.

fn backward_batch_norm<'a>(&self, loss: &'a I, input: &'a I, scale: &C,
    saved_mean: &C, saved_inv_variance: &C) -> Result<(I, C, C), TrainingError>

Error back propagation calculation

§Arguments
  • loss - loss input
  • input - input
  • scale - γ
  • saved_mean - μΒ calculated during forward propagation
  • saved_inv_variance - Inverse of σΒ calculated during forward propagation (1 / (σΒ + 1e-6))

§Errors

This function may return a TrainingError if the computation fails.
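
As an illustration only: if the single-sample forward pass normalizes with the saved statistics treated as constants, the gradients reduce to the element-wise expressions sketched below. This is an assumption about the computation, not necessarily what the implementors do, and the tuple order (dinput, dscale, dbias) is likewise assumed from the (I, C, C) return type:

// Illustrative sketch of the single-sample backward pass, assuming the saved
// statistics are constants with respect to the input.
fn backward_batch_norm_sketch(
    loss: &[f32],
    input: &[f32],
    scale: &[f32],
    saved_mean: &[f32],
    saved_inv_variance: &[f32],
) -> (Vec<f32>, Vec<f32>, Vec<f32>) {
    let mut dinput = Vec::with_capacity(loss.len());
    let mut dscale = Vec::with_capacity(loss.len());
    let mut dbias = Vec::with_capacity(loss.len());
    for i in 0..loss.len() {
        // normalized input reconstructed from the saved statistics
        let x_hat = (input[i] - saved_mean[i]) * saved_inv_variance[i];
        dinput.push(loss[i] * scale[i] * saved_inv_variance[i]); // ∂L/∂input
        dscale.push(loss[i] * x_hat);                            // ∂L/∂γ
        dbias.push(loss[i]);                                     // ∂L/∂β
    }
    (dinput, dscale, dbias)
}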

fn batch_backward_batch_norm<'a>(&self, loss: &'a <I as BatchDataType>::Type,
    input: &'a <I as BatchDataType>::Type, scale: &C, saved_mean: &C,
    saved_inv_variance: &C) -> Result<(<I as BatchDataType>::Type, C, C), TrainingError>

Error back propagation calculation in batch

§Arguments
  • loss - loss input
  • input - input
  • scale - γ
  • saved_mean - μΒ calculated during forward propagation
  • saved_inv_variance - Inverse of σΒ calculated during forward propagation (1 / (σΒ + 1e-6))

§Errors

This function may return a TrainingError if the computation fails.
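
For the batch case, the standard batch-normalization backward formulas are sketched below with plain Vec<f32>; the tuple order (dinput, dscale, dbias) and the exact gradient expressions are assumptions based on the usual derivation, not taken from the crate's source:

// Illustrative sketch of the batch backward pass using the usual batch-norm gradients.
fn batch_backward_batch_norm_sketch(
    loss: &[Vec<f32>],
    input: &[Vec<f32>],
    scale: &[f32],
    saved_mean: &[f32],
    saved_inv_variance: &[f32],
) -> (Vec<Vec<f32>>, Vec<f32>, Vec<f32>) {
    let m = loss.len() as f32;
    let features = scale.len();

    // normalized inputs recovered from the saved statistics: (input - μΒ) * (1 / σΒ)
    let x_hat: Vec<Vec<f32>> = input
        .iter()
        .map(|s| (0..features).map(|i| (s[i] - saved_mean[i]) * saved_inv_variance[i]).collect())
        .collect();
    // ∂L/∂β = Σ loss and ∂L/∂γ = Σ loss * x_hat, summed over the batch dimension
    let dbias: Vec<f32> = (0..features)
        .map(|i| loss.iter().map(|l| l[i]).sum())
        .collect();
    let dscale: Vec<f32> = (0..features)
        .map(|i| loss.iter().zip(&x_hat).map(|(l, xh)| l[i] * xh[i]).sum())
        .collect();
    // ∂L/∂input via the standard batch-norm backward expression
    let dinput: Vec<Vec<f32>> = loss
        .iter()
        .zip(&x_hat)
        .map(|(l, xh)| (0..features)
            .map(|i| scale[i] * saved_inv_variance[i] / m
                * (m * l[i] - dbias[i] - xh[i] * dscale[i]))
            .collect())
        .collect();
    (dinput, dscale, dbias)
}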

Implementors§

impl<U, I, const N: usize> DeviceBatchNorm<U, Arr<U, N>, I, N> for DeviceCpu<U>
where
    U: UnitValue<U>,
    I: BatchDataType + Debug + From<Arr<U, N>> + 'static,
    <I as BatchDataType>::Type: Debug + 'static
        + TryFrom<<SerializedVec<U, Arr<U, N>> as IntoConverter>::Converter, Error = TypeConvertError>,
    SerializedVec<U, Arr<U, N>>: IntoConverter,
    for<'a> ArrView<'a, U, N>: From<&'a I>,
    for<'a> SerializedVecView<'a, U, Arr<U, N>>: TryFrom<&'a <I as BatchDataType>::Type, Error = TypeConvertError>,

impl<U, I, const N: usize> DeviceBatchNorm<U, CudaTensor1dPtr<U, N>, I, N> for DeviceGpu<U>
where
    U: UnitValue<U> + DataTypeInfo + AsVoidPtr,
    I: BatchDataType + Debug + From<CudaTensor1dPtr<U, N>> + 'static,
    <I as BatchDataType>::Type: Debug + 'static
        + TryFrom<<CudaVec<U, CudaTensor1dPtr<U, N>> as IntoConverter>::Converter, Error = TypeConvertError>,
    CudaVec<U, CudaTensor1dPtr<U, N>>: IntoConverter,
    for<'a> CudaTensor1dPtrView<'a, U, N>: From<&'a I>,
    for<'a> CudaVecView<'a, U, CudaTensor1dPtr<U, N>>: TryFrom<&'a <I as BatchDataType>::Type, Error = TypeConvertError>,
    f64: From<U>,