Trait dfdx::tensor_ops::MaxTo
pub trait MaxTo: HasErr + HasShape {
    // Required method
    fn try_max<Dst: Shape, Ax: Axes>(
        self,
    ) -> Result<Self::WithShape<Dst>, Self::Err>
    where
        Self::Shape: ReduceShapeTo<Dst, Ax>;

    // Provided method
    fn max<Dst: Shape, Ax: Axes>(self) -> Self::WithShape<Dst>
    where
        Self::Shape: ReduceShapeTo<Dst, Ax> { ... }
}
Reduction along multiple axes using max.
Required Methods
fn try_max<Dst: Shape, Ax: Axes>(
    self,
) -> Result<Self::WithShape<Dst>, Self::Err>
where
    Self::Shape: ReduceShapeTo<Dst, Ax>,
Fallible version of MaxTo::max.
Provided Methods
fn max<Dst: Shape, Ax: Axes>(self) -> Self::WithShape<Dst>
where
    Self::Shape: ReduceShapeTo<Dst, Ax>,
Max reduction. PyTorch equivalent: t.amax(Ax).
NOTE This evenly distributes gradients among all equal maximum values, instead of propagating the gradient to exactly one of them.
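To illustrate that gradient rule, here is a minimal sketch in plain Rust (no dfdx). The helper `max_backward` is hypothetical, not part of dfdx's API; it shows how the backward pass of a max reduction can split the incoming gradient evenly among tied maximum values:

```rust
// Hypothetical helper: backward pass of a 1-D max reduction that splits
// the upstream gradient evenly among all entries tied for the maximum.
fn max_backward(input: &[f32], upstream_grad: f32) -> Vec<f32> {
    // Find the maximum value of the input.
    let max = input.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    // Count how many entries are tied for that maximum.
    let ties = input.iter().filter(|&&x| x == max).count() as f32;
    // Each tied entry receives an equal share; all others get zero.
    input
        .iter()
        .map(|&x| if x == max { upstream_grad / ties } else { 0.0 })
        .collect()
}

fn main() {
    // Two tied maxima (3.0 at indices 1 and 3): each receives half the gradient.
    let grads = max_backward(&[1.0, 3.0, 2.0, 3.0], 1.0);
    assert_eq!(grads, vec![0.0, 0.5, 0.0, 0.5]);
}
```

With a unique maximum, the entire gradient flows to that single entry, matching the usual subgradient choice.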
Example reducing a single axis:
let dev: Cpu = Default::default();
let t: Tensor<Rank2<2, 3>, f32, _> = dev.tensor([[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]]);
let r = t.max::<Rank1<2>, _>(); // or `max::<_, Axis<1>>()`
assert_eq!(r.array(), [3.0, -1.0]);
Reducing multiple axes (max consumes the tensor, so t is created again):
let t: Tensor<Rank2<2, 3>, f32, _> = dev.tensor([[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]]);
let r = t.max::<Rank0, _>();
assert_eq!(r.array(), 3.0);
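For readers unfamiliar with axis reductions, here is the single-axis example written out in plain Rust (no dfdx) for a fixed 2x3 array. The function name `amax_axis1` is illustrative only; reducing Axis<1> keeps the row dimension and takes the max within each row:

```rust
// Illustrative stand-in for reducing a 2x3 tensor over Axis<1>:
// the output has one entry per row, the max of that row.
fn amax_axis1(t: [[f32; 3]; 2]) -> [f32; 2] {
    let mut out = [f32::NEG_INFINITY; 2];
    for (row, o) in t.iter().zip(out.iter_mut()) {
        for &x in row {
            // Keep the running maximum of the current row.
            *o = x.max(*o);
        }
    }
    out
}

fn main() {
    let t = [[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]];
    assert_eq!(amax_axis1(t), [3.0, -1.0]);
}
```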