Enum onednn_sys::dnnl_normalization_flags_t

#[repr(u32)]
#[non_exhaustive]
pub enum dnnl_normalization_flags_t {
    dnnl_normalization_flags_none = 0x0,
    dnnl_use_global_stats = 0x1,
    dnnl_use_scaleshift = 0x2,
    dnnl_fuse_norm_relu = 0x4,
}

Flags for normalization primitives.

Variants (Non-exhaustive)

Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
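A minimal sketch of matching this enum with the required wildcard arm. The enum is re-declared locally so the example compiles stand-alone; the discriminant values (0x0, 0x1, 0x2, 0x4) are assumed to mirror oneDNN's dnnl_types.h and the `describe` helper is hypothetical:

```rust
// Local re-declaration of the bindgen-style enum; discriminants are
// assumed from oneDNN's dnnl_types.h, not taken from this page.
#[allow(non_camel_case_types)]
#[repr(u32)]
#[non_exhaustive]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum dnnl_normalization_flags_t {
    dnnl_normalization_flags_none = 0x0,
    dnnl_use_global_stats = 0x1,
    dnnl_use_scaleshift = 0x2,
    dnnl_fuse_norm_relu = 0x4,
}

// Hypothetical helper: map a flag to a human-readable label.
pub fn describe(flag: dnnl_normalization_flags_t) -> &'static str {
    use self::dnnl_normalization_flags_t::*;
    match flag {
        dnnl_normalization_flags_none => "no flags",
        dnnl_fuse_norm_relu => "fuse with ReLU",
        // Wildcard arm: covers the remaining variants and any added
        // in a future release of the bindings.
        _ => "other flag",
    }
}

fn main() {
    println!("{}", describe(dnnl_normalization_flags_t::dnnl_fuse_norm_relu));
}
```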
dnnl_normalization_flags_none

Use no normalization flags.

If specified:

  • on forward training propagation, mean and variance are computed and stored as outputs
  • on backward propagation, the full derivative w.r.t. the data is computed
  • on backward propagation, prop_kind == #dnnl_backward_data has the same behavior as prop_kind == #dnnl_backward
dnnl_use_global_stats

Use global statistics.

If specified:

  • on forward propagation, the mean and variance provided by the user are used (as inputs)
  • on backward propagation, the amount of computation is reduced, since mean and variance are treated as constants

If not specified:

  • on forward propagation, mean and variance are computed and stored as outputs
  • on backward propagation, the full derivative w.r.t. the data is computed
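Since these are bit flags with a `u32` representation, combinations are expressed on the underlying integer (the enum itself has no `BitOr` impl). A sketch, re-declaring the enum locally and assuming the dnnl_types.h discriminants; `combine` and `has` are hypothetical helpers:

```rust
// Local re-declaration; discriminants assumed from oneDNN's dnnl_types.h.
#[allow(non_camel_case_types)]
#[repr(u32)]
#[derive(Clone, Copy)]
pub enum dnnl_normalization_flags_t {
    dnnl_normalization_flags_none = 0x0,
    dnnl_use_global_stats = 0x1,
    dnnl_use_scaleshift = 0x2,
    dnnl_fuse_norm_relu = 0x4,
}

// Hypothetical helper: OR two flags on the underlying u32.
pub fn combine(a: dnnl_normalization_flags_t, b: dnnl_normalization_flags_t) -> u32 {
    a as u32 | b as u32
}

// Hypothetical helper: test whether a flag bit is set in a mask.
pub fn has(bits: u32, flag: dnnl_normalization_flags_t) -> bool {
    bits & flag as u32 != 0
}

fn main() {
    use self::dnnl_normalization_flags_t::*;
    let bits = combine(dnnl_use_global_stats, dnnl_use_scaleshift);
    assert!(has(bits, dnnl_use_global_stats));
    assert!(!has(bits, dnnl_fuse_norm_relu));
    println!("combined flag bits = {:#x}", bits);
}
```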
dnnl_use_scaleshift

Use scale and shift parameters.

If specified:

  • on forward propagation, scale and shift (a.k.a. scale and bias) are applied to the batch normalization results
  • on backward propagation (for prop_kind == #dnnl_backward), the derivative w.r.t. scale and shift is also computed (hence one extra output is used)

If not specified:

  • on backward propagation, prop_kind == #dnnl_backward_data has the same behavior as prop_kind == #dnnl_backward
dnnl_fuse_norm_relu

Fuse with ReLU.

This flag implies a ReLU negative slope of 0. For training, this is the only supported configuration. For inference, to use a non-zero negative slope, consider using @ref dev_guide_attributes_post_ops.

If specified:

  • on inference, this option behaves the same as if the primitive were fused with ReLU via the post-ops API with a zero negative slope
  • on training, the primitive requires a workspace (needed to perform the backward pass)
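When a combined flag mask is read back (for example, after being round-tripped through the C API as a `u32`), it can be decomposed into the individual flags. A sketch under the same assumption about the dnnl_types.h discriminants; the `decode` helper is hypothetical:

```rust
// Local re-declaration; discriminants assumed from oneDNN's dnnl_types.h.
#[allow(non_camel_case_types)]
#[repr(u32)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum dnnl_normalization_flags_t {
    dnnl_normalization_flags_none = 0x0,
    dnnl_use_global_stats = 0x1,
    dnnl_use_scaleshift = 0x2,
    dnnl_fuse_norm_relu = 0x4,
}

// Hypothetical helper: list which individual flags are set in a mask.
pub fn decode(bits: u32) -> Vec<dnnl_normalization_flags_t> {
    use self::dnnl_normalization_flags_t::*;
    [dnnl_use_global_stats, dnnl_use_scaleshift, dnnl_fuse_norm_relu]
        .into_iter()
        .filter(|f| bits & *f as u32 != 0)
        .collect()
}

fn main() {
    // 0x5 = global stats (0x1) | fused ReLU (0x4)
    let flags = decode(0x5);
    println!("{:?}", flags);
}
```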

Trait Implementations

impl Clone for dnnl_normalization_flags_t

impl Copy for dnnl_normalization_flags_t

impl Debug for dnnl_normalization_flags_t

impl Eq for dnnl_normalization_flags_t

impl Hash for dnnl_normalization_flags_t

impl PartialEq<dnnl_normalization_flags_t> for dnnl_normalization_flags_t

impl StructuralEq for dnnl_normalization_flags_t

impl StructuralPartialEq for dnnl_normalization_flags_t

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>

impl<T> ToOwned for T where
    T: Clone

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.