
Module tensor

N-dimensional tensor with element-wise, reduction, linalg, and NN operations.

Tensor is the primary numerical type in CJC. It is backed by a Buffer<f64> with COW (copy-on-write) semantics, so cloning a tensor is O(1) and mutation triggers a deep copy only when the buffer is shared.

§Determinism Guarantees

  • Reductions (sum, mean, sum_axis) use BinnedAccumulatorF64 for order-invariant, bit-identical results.
  • Matmul uses Kahan-compensated accumulation (sequential path) or tiled + parallel strategies (large matrices) with deterministic per-row thread assignment.
  • SIMD kernels (via tensor_simd) avoid hardware FMA to preserve cross-platform bit-identity.
  • No HashMap/HashSet is used anywhere; all iteration and ordering is deterministic.

§Layout

Tensors use row-major (C-order) layout with explicit strides. Non-contiguous views (from slice, transpose, broadcast_to) share the underlying buffer with adjusted strides and offset. Call Tensor::to_contiguous to materialize a contiguous copy when needed.

§Operation Categories

| Category | Methods |
|----------|---------|
| Construction | `zeros`, `ones`, `randn`, `from_vec`, `from_bytes` |
| Shape | `shape`, `ndim`, `len`, `reshape`, `transpose`, `slice`, `unsqueeze`, `squeeze`, `flatten` |
| Element-wise | `add`, `sub`, `mul_elem`, `div_elem`, `elem_pow`, `map`, `map_simd` |
| Reductions | `sum`, `mean`, `sum_axis`, `mean_axis`, `var_axis`, `std_axis` |
| Linalg | `matmul`, `bmm`, `linear`, `einsum` |
| NN | `softmax`, `layer_norm`, `relu`, `sigmoid`, `gelu`, `conv1d`, `conv2d`, `maxpool2d` |
| Indexing | `get`, `set`, `gather`, `scatter`, `index_select`, `argsort` |
| Attention | `scaled_dot_product_attention`, `split_heads`, `merge_heads` |

§Structs

Tensor
An N-dimensional tensor backed by a Buffer<f64>.