Trait dfdx::tensor_ops::MinTo
pub trait MinTo: HasErr + HasShape {
    // Required method
    fn try_min<Dst: Shape, Ax: Axes>(
        self,
    ) -> Result<Self::WithShape<Dst>, Self::Err>
    where
        Self::Shape: ReduceShapeTo<Dst, Ax>;

    // Provided method
    fn min<Dst: Shape, Ax: Axes>(self) -> Self::WithShape<Dst>
    where
        Self::Shape: ReduceShapeTo<Dst, Ax> { ... }
}
Reduction along multiple axes using min.
Required Methods
fn try_min<Dst: Shape, Ax: Axes>(
    self,
) -> Result<Self::WithShape<Dst>, Self::Err>
where
    Self::Shape: ReduceShapeTo<Dst, Ax>,
Fallible version of MinTo::min.
Provided Methods
fn min<Dst: Shape, Ax: Axes>(self) -> Self::WithShape<Dst>
where
    Self::Shape: ReduceShapeTo<Dst, Ax>,
Min reduction. PyTorch equivalent: t.amin(Ax)
NOTE: This evenly distributes gradients between all equal minimum values, instead of assigning the entire gradient to exactly one value.
Example reducing a single axis:
let dev: Cpu = Default::default();
let t: Tensor<Rank2<2, 3>, f32, _> = dev.tensor([[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]]);
let r = t.min::<Rank1<2>, _>(); // or `min::<_, Axis<1>>()`
assert_eq!(r.array(), [1.0, -3.0]);
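Outside dfdx, the same single-axis reduction can be sketched in plain Rust (no dfdx types; `min_axis1` is a hypothetical helper, with data matching the example above):

```rust
// Min over axis 1 (the columns) of a 2x3 array, mirroring
// `t.min::<Rank1<2>, _>()`: one output value per row.
fn min_axis1(t: &[[f32; 3]; 2]) -> [f32; 2] {
    let mut out = [f32::INFINITY; 2];
    for (row, o) in t.iter().zip(out.iter_mut()) {
        for &v in row {
            if v < *o {
                *o = v;
            }
        }
    }
    out
}

fn main() {
    let t = [[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]];
    assert_eq!(min_axis1(&t), [1.0, -3.0]);
}
```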
Reducing multiple axes:
let r = t.min::<Rank0, _>();
assert_eq!(r.array(), -3.0);
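The gradient-sharing rule from the NOTE above can be illustrated with a minimal sketch in plain Rust (not dfdx internals; `min_backward` is a hypothetical helper): every element tied for the minimum receives an equal share of the upstream gradient.

```rust
// Backward pass of a full min reduction over a slice: the upstream
// gradient is split evenly across every element equal to the minimum,
// and all other elements receive zero gradient.
fn min_backward(x: &[f32], upstream: f32) -> Vec<f32> {
    let m = x.iter().cloned().fold(f32::INFINITY, f32::min);
    let ties = x.iter().filter(|&&v| v == m).count() as f32;
    x.iter()
        .map(|&v| if v == m { upstream / ties } else { 0.0 })
        .collect()
}

fn main() {
    // Two tied minima (-3.0) each get half of the upstream gradient 1.0.
    assert_eq!(min_backward(&[-3.0, 0.0, -3.0], 1.0), vec![0.5, 0.0, 0.5]);
}
```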