| Base implementation; everything can
| be overridden
|
| Put forward and backward in the same
| template?
|
| Max computation is done element-wise,
| so that each element of the output slice
| corresponds to the max value of the respective
| elements in the input slices. Operation
| doesn’t change the shape of individual
| blocks. This implementation imitates the
| torch nn.Max operator.
|
| If the maximum value occurs more than
| once, the operator will return the first
| occurrence of the value. When computing
| the gradient using backward propagation,
| the gradient input corresponding to
| the first occurrence of the maximum
| value will be used.
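|
| A minimal sketch of this behaviour, assuming the input slices are
| equally sized blocks stored as plain `Vec<f32>`; the helper names
| `max_forward` and `max_backward` are illustrative only, not the
| actual implementation:
|
| ```rust
| /// Element-wise max over equally sized slices. Also records, per output
| /// element, the index of the slice holding the FIRST occurrence of the
| /// maximum, so the gradient can be routed there later.
| fn max_forward(slices: &[Vec<f32>]) -> (Vec<f32>, Vec<usize>) {
|     let len = slices[0].len();
|     let mut out = slices[0].clone();
|     let mut argmax = vec![0usize; len];
|     for (s, slice) in slices.iter().enumerate().skip(1) {
|         for i in 0..len {
|             // Strict '>' keeps the first occurrence on ties.
|             if slice[i] > out[i] {
|                 out[i] = slice[i];
|                 argmax[i] = s;
|             }
|         }
|     }
|     (out, argmax)
| }
|
| /// Backward pass: each output gradient flows only to the slice that held
| /// the first occurrence of the maximum; all other slices receive zero.
| fn max_backward(d_out: &[f32], argmax: &[usize], num_slices: usize) -> Vec<Vec<f32>> {
|     let mut grads = vec![vec![0.0f32; d_out.len()]; num_slices];
|     for i in 0..d_out.len() {
|         grads[argmax[i]][i] = d_out[i];
|     }
|     grads
| }
|
| fn main() {
|     let slices = vec![vec![1.0, 5.0, 3.0], vec![4.0, 5.0, 2.0]];
|     let (out, argmax) = max_forward(&slices);
|     assert_eq!(out, vec![4.0, 5.0, 3.0]);
|     assert_eq!(argmax, vec![1, 0, 0]); // tie at element 1 -> first slice wins
|     let grads = max_backward(&[1.0, 1.0, 1.0], &argmax, slices.len());
|     assert_eq!(grads[0], vec![0.0, 1.0, 1.0]);
|     assert_eq!(grads[1], vec![1.0, 0.0, 0.0]);
| }
| ```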
|
| Max computes the element-wise max of
| the input slices.
|
| Operation doesn’t change the shape
| of the individual blocks.
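|
| A compact sketch of the forward computation alone, folding any number
| of equally sized input slices element by element (the helper name
| `elementwise_max` is illustrative, not the actual API):
|
| ```rust
| /// Element-wise max of N equally sized input slices; the output has the
| /// same shape as each individual block.
| fn elementwise_max(inputs: &[Vec<f32>]) -> Vec<f32> {
|     inputs[1..].iter().fold(inputs[0].clone(), |acc, x| {
|         acc.iter().zip(x).map(|(a, b)| a.max(*b)).collect()
|     })
| }
|
| fn main() {
|     let y = elementwise_max(&[vec![1.0, 7.0], vec![3.0, 2.0], vec![2.0, 9.0]]);
|     assert_eq!(y, vec![3.0, 9.0]);
| }
| ```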
|
| Mean computes the element-wise mean
| of the input slices.
|
| Operation doesn’t change the shape
| of the individual blocks.
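|
| A similar sketch for the mean, together with its uniform gradient
| (again an illustrative helper, not the actual API):
|
| ```rust
| /// Element-wise mean of N equally sized input slices; the output keeps
| /// the shape of an individual block.
| fn elementwise_mean(inputs: &[Vec<f32>]) -> Vec<f32> {
|     let n = inputs.len() as f32;
|     (0..inputs[0].len())
|         .map(|i| inputs.iter().map(|x| x[i]).sum::<f32>() / n)
|         .collect()
| }
|
| fn main() {
|     let y = elementwise_mean(&[vec![1.0, 2.0], vec![3.0, 6.0]]);
|     assert_eq!(y, vec![2.0, 4.0]);
|     // Backward: every input slice simply receives d_out / N (0.5 * d_out here).
| }
| ```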
|
| Sums the integer elements of the input tensor.
|
| Sums the elements of the input tensor.
| Tensor type must be float32.
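|
| A short sketch of what "sums the elements" means here: the whole tensor
| is reduced to a single scalar (helper names are illustrative):
|
| ```rust
| /// Sums every element of a float32 tensor into a single scalar.
| fn sum_elements(x: &[f32]) -> f32 {
|     x.iter().sum()
| }
|
| /// Integer variant, mirroring the int version described above.
| fn sum_elements_int(x: &[i32]) -> i32 {
|     x.iter().sum()
| }
|
| fn main() {
|     assert_eq!(sum_elements(&[1.0, 2.0, 3.0, 4.0]), 10.0);
|     assert_eq!(sum_elements_int(&[1, 2, 3]), 6);
| }
| ```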
|
| GitHub Links:
|
| - https://github.com/pytorch/pytorch/blob/master/caffe2/operators/reduction_ops.cc
|
| Put forward and backward in the same
| template?
|
| Summation is done element-wise across
| slices of the input tensor and doesn’t
| change the shape of the individual blocks.
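|
| A minimal sketch of this reducer, assuming equally sized slices; the
| gradient of a plain sum is just the output gradient broadcast back to
| every slice (helper names are illustrative):
|
| ```rust
| /// Forward: element-wise sum across the input slices; the block shape
| /// is preserved.
| fn sum_slices(slices: &[Vec<f32>]) -> Vec<f32> {
|     let mut out = vec![0.0f32; slices[0].len()];
|     for s in slices {
|         for (o, v) in out.iter_mut().zip(s) {
|             *o += *v;
|         }
|     }
|     out
| }
|
| /// Backward: broadcast the output gradient to every input slice.
| fn sum_slices_backward(d_out: &[f32], num_slices: usize) -> Vec<Vec<f32>> {
|     vec![d_out.to_vec(); num_slices]
| }
|
| fn main() {
|     let y = sum_slices(&[vec![1.0, 2.0], vec![3.0, 4.0]]);
|     assert_eq!(y, vec![4.0, 6.0]);
|     assert_eq!(sum_slices_backward(&y, 2), vec![y.clone(), y]);
| }
| ```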
|
| Put forward and backward in the same
| template?
|
| Input slices are first scaled by SCALARS
| and then summed element-wise.
|
| It doesn’t change the shape of the individual
| blocks.
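|
| A minimal sketch of the weighted variant, assuming one scalar weight
| per input slice (the helper name `weighted_sum` is illustrative, not
| the actual API):
|
| ```rust
| /// Scales each input slice by its scalar weight, then sums element-wise;
| /// the block shape is unchanged.
| fn weighted_sum(slices: &[Vec<f32>], scalars: &[f32]) -> Vec<f32> {
|     let mut out = vec![0.0f32; slices[0].len()];
|     for (slice, &w) in slices.iter().zip(scalars) {
|         for (o, &v) in out.iter_mut().zip(slice) {
|             *o += w * v;
|         }
|     }
|     out
| }
|
| fn main() {
|     // 2.0 * [1, 2] + 0.5 * [4, 8] = [4, 8]
|     let y = weighted_sum(&[vec![1.0, 2.0], vec![4.0, 8.0]], &[2.0, 0.5]);
|     assert_eq!(y, vec![4.0, 8.0]);
| }
| ```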
|