pub enum Op<B: Backend> {
    None,
    Binary {
        lhs: Tensor<B>,
        rhs: Tensor<B>,
        op: BinaryOp,
    },
    Unary {
        input: Tensor<B>,
        op: UnaryOp,
    },
    Reduce {
        input: Tensor<B>,
        op: ReduceOp,
        dims: Vec<usize>,
        keep_dim: bool,
    },
    Matmul {
        lhs: Tensor<B>,
        rhs: Tensor<B>,
    },
    Reshape {
        input: Tensor<B>,
        src_shape: Shape,
    },
    Transpose {
        input: Tensor<B>,
        dim0: usize,
        dim1: usize,
    },
    Narrow {
        input: Tensor<B>,
        dim: usize,
        start: usize,
        len: usize,
    },
    Affine {
        input: Tensor<B>,
        mul: f64,
        add: f64,
    },
    Contiguous {
        input: Tensor<B>,
    },
    Conv2d {
        input: Tensor<B>,
        weight: Tensor<B>,
        bias: Option<Tensor<B>>,
        stride: [usize; 2],
        padding: [usize; 2],
    },
    MaxPool2d {
        input: Tensor<B>,
        kernel_size: [usize; 2],
        stride: [usize; 2],
        padding: [usize; 2],
        indices: Vec<usize>,
    },
    Cat {
        inputs: Vec<Tensor<B>>,
        dim: usize,
        sizes: Vec<usize>,
    },
    Powf {
        input: Tensor<B>,
        exponent: f64,
    },
    Clamp {
        input: Tensor<B>,
        min: f64,
        max: f64,
    },
    WhereCond {
        mask: Tensor<B>,
        on_true: Tensor<B>,
        on_false: Tensor<B>,
    },
    Gather {
        input: Tensor<B>,
        index: Tensor<B>,
        dim: usize,
    },
    Pad {
        input: Tensor<B>,
        padding: Vec<[usize; 2]>,
    },
    AvgPool2d {
        input: Tensor<B>,
        kernel_size: [usize; 2],
        stride: [usize; 2],
        padding: [usize; 2],
    },
    Conv1d {
        input: Tensor<B>,
        weight: Tensor<B>,
        bias: Option<Tensor<B>>,
        stride: usize,
        padding: usize,
    },
    IndexSelect {
        input: Tensor<B>,
        indices: Tensor<B>,
        dim: usize,
    },
    ToDtype {
        input: Tensor<B>,
        src_dtype: DType,
    },
}
Records the operation that produced a tensor, storing references to inputs.
Each variant holds the actual input Tensor(s) (Arc-wrapped, cheap to clone) plus the operation parameters. backward() uses these to compute gradients via the chain rule.
Op is generic over the Backend because it stores Tensor<B> values.
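The chain-rule dispatch can be sketched on plain scalars; ScalarOp and backward below are hypothetical stand-ins for Op and the real backward(), not this crate's API:

```rust
// Hypothetical scalar analogue of Op: each variant records its inputs
// so that backward() can apply the chain rule.
#[allow(dead_code)]
enum ScalarOp {
    None,
    Add(f64, f64),             // lhs, rhs
    Mul(f64, f64),             // lhs, rhs
    Affine { input: f64, mul: f64 },
}

// Returns d(output)/d(each input), scaled by the upstream gradient.
fn backward(op: &ScalarOp, grad_out: f64) -> Vec<f64> {
    match op {
        ScalarOp::None => vec![],                        // leaf: no inputs
        ScalarOp::Add(_, _) => vec![grad_out, grad_out], // d(l+r)/dl = d(l+r)/dr = 1
        ScalarOp::Mul(l, r) => vec![grad_out * r, grad_out * l],
        ScalarOp::Affine { mul, .. } => vec![grad_out * mul],
    }
}

fn main() {
    // y = a * b with a = 3, b = 4; dy/da = 4, dy/db = 3
    let grads = backward(&ScalarOp::Mul(3.0, 4.0), 1.0);
    println!("{:?}", grads); // [4.0, 3.0]
}
```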
Variants
None
No operation — this is a leaf tensor (input data or trainable parameter).
Binary
Element-wise binary: result = op(lhs, rhs)
Unary
Element-wise unary: result = op(input)
Reduce
Reduction: result = reduce(input, dims)
Matmul
Matrix multiplication: result = lhs @ rhs
Reshape
Reshape (includes squeeze/unsqueeze): same data, different shape. src_shape records the original shape so backward can reshape gradients back.
Transpose
Transpose: swap two dimensions
Narrow
Narrow/slice along a dimension
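The backward rule can be sketched on flat 1-D data (narrow_backward is a hypothetical helper, not part of this crate): the forward pass took len elements from start, so the gradient is zero-padded back to the input's length.

```rust
// Sketch of Narrow backward on a flat buffer: scatter the sliced
// gradient back into a zero-filled buffer of the input's length.
fn narrow_backward(grad_output: &[f64], start: usize, input_len: usize) -> Vec<f64> {
    let mut grad_input = vec![0.0; input_len];
    grad_input[start..start + grad_output.len()].copy_from_slice(grad_output);
    grad_input
}

fn main() {
    // Forward was narrow(input, start = 1, len = 2) on a 4-element input.
    let g = narrow_backward(&[5.0, 6.0], 1, 4);
    println!("{:?}", g); // [0.0, 5.0, 6.0, 0.0]
}
```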
Affine
Affine transform: result = input * mul + add
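Since dy/dx = mul for y = x * mul + add (the add term is constant and drops out), the backward rule can be sketched on plain slices; affine_backward is a hypothetical helper, not this crate's API:

```rust
// Sketch of Affine backward: scale the upstream gradient by `mul`.
fn affine_backward(grad_output: &[f64], mul: f64) -> Vec<f64> {
    grad_output.iter().map(|g| g * mul).collect()
}

fn main() {
    let g = affine_backward(&[1.0, 2.0, 3.0], 0.5);
    println!("{:?}", g); // [0.5, 1.0, 1.5]
}
```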
Contiguous
Contiguous copy: same logical values, but data is now contiguous in memory. Gradient passes through unchanged.
Conv2d
2D convolution: result = conv2d(input, weight) + bias.
input: [N, C_in, H, W], weight: [C_out, C_in, kH, kW].
MaxPool2d
2D max-pooling. input: [N, C, H, W].
indices stores the argmax positions for backward.
Cat
Concatenation along a dimension.
inputs are the original tensors that were concatenated.
dim is the concatenation dimension.
sizes stores the size of each input along dim (needed by backward
to slice the gradient back into per-input pieces via narrow).
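The slicing that sizes enables can be sketched on flat 1-D data (cat_backward is a hypothetical helper, not this crate's API): each input gets back the contiguous piece of the gradient matching its recorded size.

```rust
// Sketch of Cat backward on a flat buffer: narrow the incoming gradient
// back into per-input pieces using the recorded `sizes`.
fn cat_backward(grad_output: &[f64], sizes: &[usize]) -> Vec<Vec<f64>> {
    let mut pieces = Vec::with_capacity(sizes.len());
    let mut start = 0;
    for &len in sizes {
        pieces.push(grad_output[start..start + len].to_vec());
        start += len;
    }
    pieces
}

fn main() {
    // Forward concatenated inputs of sizes 2 and 3 along the cat dim.
    let pieces = cat_backward(&[1.0, 2.0, 3.0, 4.0, 5.0], &[2, 3]);
    println!("{:?}", pieces); // [[1.0, 2.0], [3.0, 4.0, 5.0]]
}
```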
Powf
Element-wise power: result = input ^ exponent.
Clamp
Element-wise clamp: result = clamp(input, min, max).
WhereCond
Conditional select: result[i] = if mask[i] { on_true[i] } else { on_false[i] }.
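The mask routes each gradient element to whichever branch produced the output, and the other branch receives zero; a minimal sketch on plain slices (where_backward is a hypothetical helper, not this crate's API):

```rust
// Sketch of WhereCond backward: split the upstream gradient between
// the on_true and on_false branches according to the mask.
fn where_backward(mask: &[bool], grad_output: &[f64]) -> (Vec<f64>, Vec<f64>) {
    let grad_true: Vec<f64> = mask
        .iter()
        .zip(grad_output)
        .map(|(&m, &g)| if m { g } else { 0.0 })
        .collect();
    let grad_false: Vec<f64> = mask
        .iter()
        .zip(grad_output)
        .map(|(&m, &g)| if m { 0.0 } else { g })
        .collect();
    (grad_true, grad_false)
}

fn main() {
    let (t, f) = where_backward(&[true, false, true], &[1.0, 2.0, 3.0]);
    println!("{:?} {:?}", t, f); // [1.0, 0.0, 3.0] [0.0, 2.0, 0.0]
}
```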
Gather
Gather elements along a dimension using index tensor.
Pad
Constant padding.
AvgPool2d
2D average-pooling. input: [N, C, H, W].
Conv1d
1D convolution: result = conv1d(input, weight) + bias.
input: [N, C_in, L], weight: [C_out, C_in, K].
IndexSelect
Index select along a dimension: result = input.index_select(dim, indices).
Backward is a scatter-add of grad_output into grad_input at the index positions.
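The scatter-add can be sketched on flat 1-D data (index_select_backward is a hypothetical helper, not this crate's API): because indices may repeat, gradients at a repeated position accumulate rather than overwrite.

```rust
// Sketch of IndexSelect backward: scatter-add each output gradient
// into the input position it was selected from.
fn index_select_backward(grad_output: &[f64], indices: &[usize], input_len: usize) -> Vec<f64> {
    let mut grad_input = vec![0.0; input_len];
    for (g, &i) in grad_output.iter().zip(indices) {
        grad_input[i] += g; // += so repeated indices accumulate
    }
    grad_input
}

fn main() {
    // indices [0, 2, 0]: position 0 was read twice, so its gradients add.
    let g = index_select_backward(&[1.0, 2.0, 3.0], &[0, 2, 0], 3);
    println!("{:?}", g); // [4.0, 0.0, 2.0]
}
```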
ToDtype
Dtype conversion: result = input.to_dtype(target_dtype).
Backward casts the gradient back to the original dtype recorded in src_dtype.