Struct nncombinator::activation::SoftMax
SoftMax activation function implementation.
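The forward pass computes the standard softmax, softmax(u)_i = exp(u_i) / Σ_j exp(u_j). The sketch below is a standalone illustration of that computation, using the common max-subtraction trick for numerical stability; it is an assumption about what a softmax forward pass does in general, not the crate's own code.

```rust
// Standalone illustration of the softmax forward pass (not nncombinator's code):
// softmax(u)_i = exp(u_i - max(u)) / sum_j exp(u_j - max(u)).
// Subtracting the maximum leaves the result unchanged but avoids overflow in exp().
fn softmax(input: &[f64]) -> Vec<f64> {
    let max = input.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = input.iter().map(|&u| (u - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    // The outputs form a probability distribution: roughly [0.0900, 0.2447, 0.6652].
    println!("{:?}", softmax(&[1.0, 2.0, 3.0]));
}
```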
Trait Implementations
impl<U, const N: usize> Activation<U, Arr<U, N>, Arr<U, N>, DeviceCpu<U>> for SoftMax<U, DeviceCpu<U>>
where
    U: UnitValue<U>,
fn apply(
    &self,
    device: &DeviceCpu<U>,
    input: &Arr<U, N>
) -> Result<Arr<U, N>, EvaluateError>

Apply the activation function.
fn derive(
    &self,
    device: &DeviceCpu<U>,
    o: &Arr<U, N>,
    loss: &Arr<U, N>,
    u: &Arr<U, N>
) -> Result<Arr<U, N>, TrainingError>

Apply derivatives of the activation function.
fn is_canonical_link<L: LossFunction<U>>(&self, l: &L) -> bool

Returns whether or not the canonical link function can be used.
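The derive method folds the upstream gradient loss through the softmax Jacobian at the output o. The sketch below shows the standard softmax Jacobian-vector product, dL/du_i = o_i * (loss_i - Σ_j loss_j * o_j); it illustrates the math such a backward pass computes and is not taken from this impl. Note that this form does not need the pre-activation u, which is consistent with the iterator-based impl further down ignoring its last argument.

```rust
// Standalone illustration of a softmax backward pass (not nncombinator's code).
// For softmax output `o` and upstream gradient `loss`, the Jacobian-vector product is
//   dL/du_i = o_i * (loss_i - sum_j loss_j * o_j).
fn softmax_derive(o: &[f64], loss: &[f64]) -> Vec<f64> {
    // dot = sum_j loss_j * o_j
    let dot: f64 = o.iter().zip(loss.iter()).map(|(&oi, &li)| oi * li).sum();
    o.iter()
        .zip(loss.iter())
        .map(|(&oi, &li)| oi * (li - dot))
        .collect()
}
```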
impl<U, const N: usize> Activation<U, Arr<U, N>, Arr<U, N>, DeviceGpu<U>> for SoftMax<U, DeviceGpu<U>>
where
    U: UnitValue<U> + DataTypeInfo,
    DeviceGpu<U>: Device<U>,
    CudaPtr<U>: TryFrom<U, Error = CudaError>,
    SoftMaxForward<U>: Kernel<Args = ActivationForwardArgs<U>>,
    SoftMaxBackward<U>: Kernel<Args = ActivationBackwardArgs<U>>,
fn apply(
    &self,
    _: &DeviceGpu<U>,
    input: &Arr<U, N>
) -> Result<Arr<U, N>, EvaluateError>

Apply the activation function.
fn derive(
    &self,
    _: &DeviceGpu<U>,
    o: &Arr<U, N>,
    loss: &Arr<U, N>,
    u: &Arr<U, N>
) -> Result<Arr<U, N>, TrainingError>

Apply derivatives of the activation function.
fn is_canonical_link<L: LossFunction<U>>(&self, l: &L) -> bool

Returns whether or not the canonical link function can be used.
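is_canonical_link reports whether the supplied loss function is the canonical pairing for softmax, which is cross-entropy: when softmax and cross-entropy are combined, the gradient with respect to the pre-activation collapses to output minus target, so the full Jacobian product shown above can be skipped. The sketch below states that identity; it is illustrative only and not the crate's shortcut code.

```rust
// For softmax output `o` and a target distribution `t` (e.g. one-hot),
// cross-entropy L = -sum_i t_i * ln(o_i) has the combined gradient
//   dL/du_i = o_i - t_i.
// Plugging loss_i = dL/do_i = -t_i / o_i into the general Jacobian-vector
// product above reproduces the same result whenever t sums to 1.
fn softmax_cross_entropy_grad(o: &[f64], t: &[f64]) -> Vec<f64> {
    o.iter().zip(t.iter()).map(|(&oi, &ti)| oi - ti).collect()
}
```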
impl<U, I, const N: usize> Activation<U, I, Arr<U, N>, DeviceCpu<U>> for SoftMax<U, DeviceCpu<U>>
where
    U: UnitValue<U>,
    I: IndexedParallelIterator<Item = U> + Clone,
    <I as IntoParallelIterator>::Iter: IndexedParallelIterator<Item = U>,
fn apply(&self, _: &DeviceCpu<U>, input: &I) -> Result<Arr<U, N>, EvaluateError>

Apply the activation function.
fn derive(
    &self,
    _: &DeviceCpu<U>,
    o: &I,
    loss: &I,
    _: &I
) -> Result<Arr<U, N>, TrainingError>

Apply derivatives of the activation function.
fn is_canonical_link<L: LossFunction<U>>(&self, l: &L) -> bool

Returns whether or not the canonical link function can be used.
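This impl accepts any rayon IndexedParallelIterator yielding U as the input, rather than a materialized Arr. The sketch below (which requires the rayon crate) shows a parallel softmax written against the same kind of iterator pipeline; the reduction strategy is illustrative and is not taken from this impl.

```rust
use rayon::prelude::*;

// Parallel softmax over a slice using rayon, illustrating the kind of
// IndexedParallelIterator pipeline this impl is written against.
fn parallel_softmax(input: &[f64]) -> Vec<f64> {
    // Parallel max reduction for the usual numerical-stability shift.
    let max = input
        .par_iter()
        .cloned()
        .reduce(|| f64::NEG_INFINITY, f64::max);
    // Parallel exponentiation and normalization.
    let exps: Vec<f64> = input.par_iter().map(|&u| (u - max).exp()).collect();
    let sum: f64 = exps.par_iter().sum();
    exps.par_iter().map(|&e| e / sum).collect()
}
```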