| This op computes the elementwise linear
| combination of a batch of input vectors
| with a weight vector and bias vector.
| As input, the op takes an input tensor
| $X$ of shape $N \times D$, a weight vector
| $w$ of length $D$, and a bias vector $b$
| of length $D$.
|
| Here, $N$ represents the batch size
| and $D$ represents the length of the
| feature vectors. The output, $Y$, is
| a tensor of shape $N \times D$ and is
| calculated as
|
| $$Y_{ij} = X_{ij} w_j + b_j \quad \text{for } i \in \{1, \dots, N\},\ j \in \{1, \dots, D\}$$
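|
| A minimal sketch of running this op through the Caffe2 Python workspace API; the tiny $2 \times 3$ inputs below are made up for illustration:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| # Toy example with N = 2 and D = 3.
| X = np.array([[1., 2., 3.],
|               [4., 5., 6.]], dtype=np.float32)
| w = np.array([2., 1., 0.5], dtype=np.float32)
| b = np.array([1., -1., 0.], dtype=np.float32)
|
| workspace.FeedBlob("X", X)
| workspace.FeedBlob("w", w)
| workspace.FeedBlob("b", b)
|
| op = core.CreateOperator("ElementwiseLinear", ["X", "w", "b"], ["Y"])
| workspace.RunOperatorOnce(op)
|
| # Each entry is Y[i][j] = X[i][j] * w[j] + b[j].
| print(workspace.FetchBlob("Y"))
| ```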
|
| Github Links:
|
| - https://github.com/pytorch/pytorch/blob/master/caffe2/operators/elementwise_linear_op.h
|
| - https://github.com/pytorch/pytorch/blob/master/caffe2/operators/elementwise_linear_op.cc
|
| Element-wise sum of each of the input tensors. The
| first input tensor can be used in-place as the
| output tensor, in which case the sum is done in
| place and the results are accumulated in the first
| input tensor. All inputs and outputs must have the
| same shape and data type.
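|
| A short sketch via the Caffe2 Python workspace; the blob names and toy values are made up:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| A = np.array([[1., 2.], [3., 4.]], dtype=np.float32)
| B = np.array([[10., 20.], [30., 40.]], dtype=np.float32)
|
| workspace.FeedBlob("A", A)
| workspace.FeedBlob("B", B)
|
| # Reusing "A" as the output runs the sum in place,
| # accumulating the result into the first input tensor.
| op = core.CreateOperator("Sum", ["A", "B"], ["A"])
| workspace.RunOperatorOnce(op)
|
| print(workspace.FetchBlob("A"))  # [[11. 22.] [33. 44.]]
| ```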
|
| Github Links:
|
| - https://github.com/pytorch/pytorch/blob/master/caffe2/operators/elementwise_sum_op.cc
|
| The IsMemberOf op takes an input tensor
| X and a list of values as an argument,
| and produces one output tensor Y.
|
| The output tensor has the same shape as
| X and contains booleans. The output is
| calculated by applying the function
| f(x) = x in value to X elementwise.
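|
| A short sketch, assuming the membership list is passed through the op's value argument (following the schema in elementwise_logical_ops.cc); the inputs are made up:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| X = np.array([[1, 2, 3],
|               [4, 5, 6]], dtype=np.int32)
| workspace.FeedBlob("X", X)
|
| # "value" holds the set of values to test membership against (assumed arg name).
| op = core.CreateOperator("IsMemberOf", ["X"], ["Y"], value=[2, 5, 6])
| workspace.RunOperatorOnce(op)
|
| print(workspace.FetchBlob("Y"))
| # [[False  True False]
| #  [False  True  True]]
| ```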
|
| Github Links:
|
| - https://github.com/caffe2/caffe2/blob/master/caffe2/operators/elementwise_logical_ops.cc
|
| - https://github.com/caffe2/caffe2/blob/master/caffe2/operators/elementwise_logical_ops.h
|
| Performs element-wise negation on
| input tensor X.
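|
| A minimal sketch with made-up inputs:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| workspace.FeedBlob("X", np.array([1., -2., 3.], dtype=np.float32))
|
| op = core.CreateOperator("Negative", ["X"], ["Y"])
| workspace.RunOperatorOnce(op)
|
| print(workspace.FetchBlob("Y"))  # [-1.  2. -3.]
| ```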
|
| Github Links:
|
| - https://github.com/pytorch/pytorch/blob/master/caffe2/operators/elementwise_ops_schema.cc
|
| Computes the sign of each element of the
| input: -1, 0, or 1.
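|
| A minimal sketch with made-up inputs:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| workspace.FeedBlob("X", np.array([-3.5, 0., 2.], dtype=np.float32))
|
| op = core.CreateOperator("Sign", ["X"], ["Y"])
| workspace.RunOperatorOnce(op)
|
| print(workspace.FetchBlob("Y"))  # [-1.  0.  1.]
| ```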
|
| Github Link:
|
| - https://github.com/pytorch/pytorch/blob/master/caffe2/operators/elementwise_ops_schema.cc
|
| The SumReduceLike operator takes two tensors as
| input. It performs a reduce-sum over the first
| input so that the output has the same shape as
| the second.
|
| It assumes that the first input has more
| dimensions than the second, and that the
| dimensions of the second input are a contiguous
| subset of the dimensions of the first.
|
| For example, the following tensor shapes are
| supported:
|
| shape(A) = (2, 3, 4, 5), shape(B) = (4, 5)
| shape(A) = (2, 3, 4, 5), shape(B) = (,), i.e. B is a scalar
| shape(A) = (2, 3, 4, 5), shape(B) = (3, 4), with axis=1
| shape(A) = (2, 3, 2, 5), shape(B) = (2), with axis=0
|
| Sum reduction operator that is used for computing
| the gradient in cases where the forward op is in
| broadcast mode.
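|
| A sketch of the first supported shape pair above, assuming the default behavior aligns B with the trailing dimensions of A; the random inputs are made up:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| A = np.random.rand(2, 3, 4, 5).astype(np.float32)
| B = np.zeros((4, 5), dtype=np.float32)  # only B's shape matters here
|
| workspace.FeedBlob("A", A)
| workspace.FeedBlob("B", B)
|
| op = core.CreateOperator("SumReduceLike", ["A", "B"], ["C"])
| workspace.RunOperatorOnce(op)
|
| C = workspace.FetchBlob("C")
| print(C.shape)                             # (4, 5)
| print(np.allclose(C, A.sum(axis=(0, 1))))  # True if trailing alignment is the default
| ```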
|
| UnaryFunctorWithDefaultCtor is a functor that
| can be used as the functor of a
| UnaryElementwiseWithArgsOp.
|
| It simply forwards the operator() call into
| another functor that doesn’t accept arguments
| in its constructor.
|
| The Where operator takes three inputs
| (Tensor, Tensor, Tensor) and produces one output
| (Tensor), where z = c ? x : y is applied
| elementwise.
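|
| A minimal sketch with made-up inputs; C is a boolean tensor that selects between X and Y elementwise:
|
| ```python
| from caffe2.python import core, workspace
| import numpy as np
|
| C = np.array([True, False, True])
| X = np.array([1., 2., 3.], dtype=np.float32)
| Y = np.array([10., 20., 30.], dtype=np.float32)
|
| workspace.FeedBlob("C", C)
| workspace.FeedBlob("X", X)
| workspace.FeedBlob("Y", Y)
|
| op = core.CreateOperator("Where", ["C", "X", "Y"], ["Z"])
| workspace.RunOperatorOnce(op)
|
| print(workspace.FetchBlob("Z"))  # [ 1. 20.  3.]
| ```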
|