Module kernel

Raw-pointer kernel functions that operate directly on f64 slices.

These bypass the Value::Tensor wrapper, operating on contiguous &[f64] data. The interpreter resolves tensor pointers once at call entry, then dispatches to these zero-overhead kernels.

All functions here are safe Rust: they accept slices, not raw pointers. The "raw pointer" name refers to the calling convention — the interpreter performs a single to_vec() or buffer.borrow() at call entry, then passes the resulting contiguous slice through.
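To illustrate the calling convention described above, here is a minimal sketch of what one of these kernels looks like: the interpreter has already resolved the tensor buffer, so the kernel sees only plain `&[f64]` slices. The signature is illustrative and may not match the module's actual `relu_raw`.

```rust
/// Illustrative kernel in the style described above: operates on
/// contiguous f64 slices, no Value::Tensor wrapper, no raw pointers.
fn relu_raw(input: &[f64], output: &mut [f64]) {
    // Element-wise max(0, x) over the contiguous buffer.
    for (out, &x) in output.iter_mut().zip(input.iter()) {
        *out = x.max(0.0);
    }
}

fn main() {
    // The interpreter would obtain this slice via one to_vec() or
    // buffer.borrow() at call entry, then hand it to the kernel.
    let x = vec![-1.0, 0.5, 2.0];
    let mut y = vec![0.0; x.len()];
    relu_raw(&x, &mut y);
    assert_eq!(y, vec![0.0, 0.5, 2.0]);
}
```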

Functions§

conv1d_circular
1D convolution on a sliding window of a circular buffer.
conv1d_dispatched
1D convolution using dispatched summation.
conv1d_raw
1D convolution with stride=1, no padding (“valid” mode).
conv2d_dispatched
2D convolution using dispatched summation strategy.
conv2d_raw
2D convolution — NCHW layout, valid mode (no padding), configurable stride.
gelu_raw
Approximate GELU: x * 0.5 * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
layer_norm_dispatched
Layer normalization using dispatched summation for mean/variance.
layer_norm_raw
Layer normalization over the last dimension.
linear_dispatched
Linear projection using dispatched summation.
linear_raw
Linear projection: Y[outer, out_f] = X[outer, in_f] @ W^T[out_f, in_f] + bias[out_f]
matmul_dispatched
Matrix multiply using dispatched summation strategy.
matmul_raw
Matrix multiply: C[m,n] = A[m,k] × B[k,n] with Kahan-summed dots.
maxpool1d_raw
Max-pooling over 1D signal, stride = pool_size.
maxpool2d_raw
2D max-pooling — NCHW layout, stride = pool_size (non-overlapping).
relu_raw
ReLU: max(0, x) element-wise.
softmax_raw
Softmax over the last dimension of a contiguous buffer.
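As a sketch of the Kahan-summed dot products mentioned under matmul_raw, the loop below accumulates each C[m,n] entry with a running compensation term to reduce floating-point rounding error. The function name and signature are assumptions for illustration, not the module's actual API.

```rust
/// Matrix multiply C[m,n] = A[m,k] * B[k,n] over row-major slices,
/// with each dot product accumulated via Kahan (compensated) summation.
fn matmul_kahan(a: &[f64], b: &[f64], c: &mut [f64], m: usize, k: usize, n: usize) {
    for i in 0..m {
        for j in 0..n {
            let mut sum = 0.0;
            let mut comp = 0.0; // compensation for lost low-order bits
            for p in 0..k {
                let term = a[i * k + p] * b[p * n + j] - comp;
                let t = sum + term;
                comp = (t - sum) - term; // recover the rounding error
                sum = t;
            }
            c[i * n + j] = sum;
        }
    }
}

fn main() {
    // Multiplying by the 2x2 identity leaves the matrix unchanged.
    let a = [1.0, 0.0, 0.0, 1.0];
    let b = [3.0, 4.0, 5.0, 6.0];
    let mut c = [0.0; 4];
    matmul_kahan(&a, &b, &mut c, 2, 2, 2);
    assert_eq!(c, b);
}
```

Compensated summation matters here because long dot products accumulate rounding error; Kahan's update keeps the error bounded independent of k.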