During training, randomly zeroes some of the elements of `self` with probability `p`, using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.

This has proven to be an effective technique for regularization and for preventing the co-adaptation of neurons, as described in the paper *Improving neural networks by preventing co-adaptation of feature detectors*.

Furthermore, the outputs are scaled by a factor of 1/(1 - p) during training. This means that during evaluation the resulting variable simply computes an identity function.
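Why this scaling makes evaluation an identity: each element survives with probability 1 - p and the survivors are multiplied by 1/(1 - p), so the expected value of every element is unchanged by training-time dropout. A minimal sketch of that arithmetic (plain Rust, not part of this crate's API):

```rust
fn main() {
    let (x, p) = (3.0_f64, 0.25_f64);
    // During training an element is kept with probability 1 - p and scaled
    // by 1 / (1 - p), or zeroed with probability p.
    let expected = (1.0 - p) * (x / (1.0 - p)) + p * 0.0;
    // The expectation equals the raw input, so no rescaling is needed at
    // evaluation time: the layer can simply pass values through unchanged.
    assert!((expected - x).abs() < 1e-12);
    println!("E[output] = {expected}");
}
```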
Creates a dropout layer.

`p` - probability of an element to be zeroed.
Applies dropout to the input variable.

`input` - the input variable to the layer.
Sets `self` in evaluation mode.

Sets `self` in training mode.
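Taken together, the pieces above (a constructor taking `p`, a `forward` pass that draws a fresh Bernoulli mask on every call, and a training/evaluation toggle) can be sketched in plain Rust. The struct below is an illustrative stand-in whose method names mirror this documentation, not the crate's implementation; a tiny linear congruential generator replaces a proper random source so the example needs no external crates:

```rust
// Illustrative sketch only: method names follow the documentation above,
// but the body is a stand-in, not the crate's implementation.
struct Dropout {
    p: f64,
    training: bool,
    state: u64, // state of a tiny PRNG used for the Bernoulli samples
}

impl Dropout {
    /// Creates a dropout layer; `p` is the probability of zeroing an element.
    fn new(p: f64) -> Self {
        Self { p, training: true, state: 0x9E37_79B9_7F4A_7C15 }
    }

    /// Sets the layer in training mode.
    fn train(&mut self) { self.training = true; }

    /// Sets the layer in evaluation mode.
    fn eval(&mut self) { self.training = false; }

    /// Next pseudo-random number in [0, 1) from a linear congruential generator.
    fn next_uniform(&mut self) -> f64 {
        self.state = self
            .state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.state >> 11) as f64 / (1u64 << 53) as f64
    }

    /// Applies dropout to `input`, drawing a fresh mask on every call.
    fn forward(&mut self, input: &[f64]) -> Vec<f64> {
        if !self.training {
            return input.to_vec(); // evaluation mode: identity function
        }
        let scale = 1.0 / (1.0 - self.p);
        input
            .iter()
            .map(|&x| if self.next_uniform() < self.p { 0.0 } else { x * scale })
            .collect()
    }
}

fn main() {
    let mut layer = Dropout::new(0.5);

    layer.eval();
    assert_eq!(layer.forward(&[1.0, 2.0, 3.0]), vec![1.0, 2.0, 3.0]);

    layer.train();
    let out = layer.forward(&[1.0, 2.0, 3.0]);
    // Every element is either zeroed or scaled by 1 / (1 - p) = 2.
    for (o, x) in out.iter().zip([1.0, 2.0, 3.0]) {
        assert!(*o == 0.0 || (*o - x * 2.0).abs() < 1e-12);
    }
    println!("{out:?}");
}
```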
Registers `self`’s status to the model’s status `status`.

Registers `self`’s parameters to the model’s parameters `params`.