Trait onnxruntime::tensor::ndarray_tensor::NdArrayTensor

pub trait NdArrayTensor<S, T, D> {
    fn softmax(&self, axis: Axis) -> Array<T, D>
    where
        D: RemoveAxis,
        S: RawData + Data + RawData<Elem = T>,
        <S as RawData>::Elem: Clone,
        T: NdFloat + SubAssign + DivAssign;
}

Trait extending ndarray::ArrayBase with useful tensor operations.

Generic

The trait is generic over:

  • S: The data container of the underlying ndarray::ArrayBase (an owned array or an array view)
  • T: The element type contained in the tensor (for example f32)
  • D: The dimensionality of the tensor

Required methods

fn softmax(&self, axis: Axis) -> Array<T, D> where
    D: RemoveAxis,
    S: RawData + Data + RawData<Elem = T>,
    <S as RawData>::Elem: Clone,
    T: NdFloat + SubAssign + DivAssign

Calculates the softmax of the tensor along a given axis (see the usage sketch after the trait bounds below).

Trait Bounds

The method is generic and thus has the following trait bounds:

  • D: ndarray::RemoveAxis: The summation over an axis reduces the dimension of the tensor. A 0-D tensor thus cannot have a softmax calculated.
  • S: ndarray::RawData + ndarray::Data + ndarray::RawData<Elem = T>: The storage of the tensor can be an owned array (ndarray::Array) or an array view (ndarray::ArrayView).
  • <S as ndarray::RawData>::Elem: std::clone::Clone: The elements of the tensor must be Clone.
  • T: ndarray::NdFloat + std::ops::SubAssign + std::ops::DivAssign: The elements of the tensor must be workable as floats and must support -= and /= operations.
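
For example, a minimal usage sketch, assuming ndarray is also added as a direct dependency so that array! and Axis can be imported (the exact ndarray version may need to match the one onnxruntime uses):

use ndarray::{array, Axis};
use onnxruntime::tensor::ndarray_tensor::NdArrayTensor;

fn main() {
    // A small 2 x 3 tensor of logits.
    let logits = array![[1.0_f32, 2.0, 3.0], [0.0, 0.0, 0.0]];

    // Softmax along Axis(1): each row of `probs` sums to 1.0.
    let probs = logits.softmax(Axis(1));

    // Sum over the same axis to verify normalization.
    for s in probs.sum_axis(Axis(1)).iter() {
        assert!((s - 1.0_f32).abs() < 1e-6);
    }
}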

Implementations on Foreign Types

impl<S, T, D> NdArrayTensor<S, T, D> for ArrayBase<S, D> where
    D: RemoveAxis,
    S: RawData + Data + RawData<Elem = T>,
    <S as RawData>::Elem: Clone,
    T: NdFloat + SubAssign + DivAssign
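
To illustrate why these bounds appear, here is a rough sketch of how a softmax with this signature could be written on top of ndarray. It is illustrative only, not the crate's actual implementation; the name softmax_sketch and the numerical-stability step are assumptions:

use ndarray::{Array, ArrayBase, Axis, Data, NdFloat, RawData, RemoveAxis};
use std::ops::{DivAssign, SubAssign};

// Hypothetical free function; illustrative only, not the crate's code.
fn softmax_sketch<S, T, D>(input: &ArrayBase<S, D>, axis: Axis) -> Array<T, D>
where
    D: RemoveAxis,
    S: RawData + Data + RawData<Elem = T>,
    <S as RawData>::Elem: Clone,
    T: NdFloat + SubAssign + DivAssign,
{
    // Work on an owned copy so the input (owned array or view) is untouched.
    let mut out = input.to_owned();

    // Subtract the per-axis maximum for numerical stability; an in-place
    // step like this is what the `SubAssign` bound permits.
    let max = out
        .fold_axis(axis, T::neg_infinity(), |&running, &v| running.max(v))
        .insert_axis(axis);
    out -= &max;

    // Exponentiate, then divide by the per-axis sums (broadcast back over
    // `axis`); the in-place division is where `DivAssign` comes in.
    out.mapv_inplace(|v| v.exp());
    let sums = out.sum_axis(axis).insert_axis(axis);
    out /= &sums;
    out
}

Note how the reductions (fold_axis, sum_axis) drop one dimension, which is exactly what the RemoveAxis bound expresses and why a 0-D tensor has no softmax.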

