N-dimensional tensor with element-wise, reduction, linalg, and NN operations.
`Tensor` is the primary numerical type in CJC. It is backed by a
`Buffer<f64>` with copy-on-write (COW) semantics, so cloning a tensor
is O(1) and mutation triggers a deep copy only when the buffer is shared.
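The COW behaviour described above can be sketched in plain Rust using `Rc::make_mut`; `CowBuffer` here is an illustrative standalone type, not the actual `Buffer<f64>` implementation.

```rust
use std::rc::Rc;

// Minimal sketch of copy-on-write sharing (hypothetical type, not the CJC Buffer).
#[derive(Clone)]
struct CowBuffer {
    data: Rc<Vec<f64>>, // cloning the handle only bumps the refcount
}

impl CowBuffer {
    fn new(data: Vec<f64>) -> Self {
        Self { data: Rc::new(data) }
    }

    // Mutation entry point: Rc::make_mut deep-copies the Vec only if
    // another handle still shares it (the "write" in copy-on-write).
    fn set(&mut self, i: usize, v: f64) {
        Rc::make_mut(&mut self.data)[i] = v;
    }

    fn get(&self, i: usize) -> f64 {
        self.data[i]
    }
}

fn main() {
    let a = CowBuffer::new(vec![1.0, 2.0, 3.0]);
    let mut b = a.clone(); // O(1): no element data is copied yet
    b.set(0, 9.0);         // deep copy happens here; `a` is untouched
    assert_eq!(a.get(0), 1.0);
    assert_eq!(b.get(0), 9.0);
}
```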
§Determinism Guarantees
- Reductions (`sum`, `mean`, `sum_axis`) use `BinnedAccumulatorF64` for order-invariant, bit-identical results.
- Matmul uses Kahan-compensated accumulation (sequential path) or tiled + parallel strategies (large matrices) with deterministic per-row thread assignment.
- SIMD kernels (via `tensor_simd`) avoid hardware FMA to preserve cross-platform bit-identity.
- No `HashMap`/`HashSet` anywhere – all ordering is deterministic.
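The Kahan-compensated accumulation mentioned for the sequential matmul path can be sketched as follows; `kahan_sum` is an illustrative standalone helper, not the actual kernel.

```rust
// Sketch of Kahan-compensated summation: a running compensation term
// recovers low-order bits that plain `+=` would round away when adding
// values of very different magnitude.
fn kahan_sum(xs: &[f64]) -> f64 {
    let mut sum = 0.0;
    let mut c = 0.0; // running compensation for lost low-order bits
    for &x in xs {
        let y = x - c;         // apply the correction to the incoming term
        let t = sum + y;       // big + small: low bits of y may be lost here
        c = (t - sum) - y;     // algebraically 0; captures the rounding error
        sum = t;
    }
    sum
}

fn main() {
    // 1e16 followed by sixteen 1.0s: naive summation rounds every
    // `1e16 + 1.0` back to 1e16, losing all the small terms.
    let mut xs = vec![1.0e16];
    xs.extend(std::iter::repeat(1.0).take(16));
    let naive: f64 = xs.iter().sum();
    assert_eq!(naive, 1.0e16);                  // all sixteen 1.0s lost
    assert_eq!(kahan_sum(&xs), 1.0e16 + 16.0);  // compensation recovers them
}
```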
§Layout
Tensors use row-major (C-order) layout with explicit strides. Non-contiguous
views (from `slice`, `transpose`, `broadcast_to`) share the underlying
buffer with adjusted strides and offset. Call `Tensor::to_contiguous` to
materialize a contiguous copy when needed.
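The stride arithmetic behind this layout can be sketched in a few lines; `row_major_strides` and `linear_index` are hypothetical helpers for illustration, not part of the CJC API.

```rust
// Row-major (C-order) strides: the last axis is contiguous, and each
// earlier axis strides over the product of the trailing dimensions.
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

// Buffer position of a multi-index under a (strides, offset) view.
fn linear_index(index: &[usize], strides: &[usize], offset: usize) -> usize {
    offset + index.iter().zip(strides).map(|(i, s)| i * s).sum::<usize>()
}

fn main() {
    let strides = row_major_strides(&[2, 3, 4]);
    assert_eq!(strides, vec![12, 4, 1]);
    // element [1, 2, 3] lives at 1*12 + 2*4 + 3*1 = 23
    assert_eq!(linear_index(&[1, 2, 3], &strides, 0), 23);
    // a transpose of the trailing 3x4 matrix just swaps the last two
    // strides; no data moves, so [1, 3, 2] addresses the same element.
    assert_eq!(linear_index(&[1, 3, 2], &[12, 1, 4], 0), 23);
}
```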
§Operation Categories
| Category | Methods |
|---|---|
| Construction | `zeros`, `ones`, `randn`, `from_vec`, `from_bytes` |
| Shape | `shape`, `ndim`, `len`, `reshape`, `transpose`, `slice`, `unsqueeze`, `squeeze`, `flatten` |
| Element-wise | `add`, `sub`, `mul_elem`, `div_elem`, `elem_pow`, `map`, `map_simd` |
| Reductions | `sum`, `mean`, `sum_axis`, `mean_axis`, `var_axis`, `std_axis` |
| Linalg | `matmul`, `bmm`, `linear`, `einsum` |
| NN | `softmax`, `layer_norm`, `relu`, `sigmoid`, `gelu`, `conv1d`, `conv2d`, `maxpool2d` |
| Indexing | `get`, `set`, `gather`, `scatter`, `index_select`, `argsort` |
| Attention | `scaled_dot_product_attention`, `split_heads`, `merge_heads` |
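As a flavour of what an NN-category kernel computes, here is a numerically stable softmax over a plain slice. This is a generic sketch independent of the `Tensor` API; whether CJC's `softmax` uses the same max-subtraction trick is an assumption.

```rust
// Softmax with the standard max-subtraction trick: softmax(x) is invariant
// under subtracting a constant, so shifting by the max keeps every exp()
// argument <= 0 and avoids overflow for large inputs.
fn softmax(xs: &[f64]) -> Vec<f64> {
    let max = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // A naive exp(1000.0) overflows to infinity; the shifted form is exact here.
    let p = softmax(&[1000.0, 1000.0]);
    assert_eq!(p, vec![0.5, 0.5]);
    // probabilities always sum to 1
    let q = softmax(&[1.0, 2.0, 3.0]);
    assert!((q.iter().sum::<f64>() - 1.0).abs() < 1e-12);
}
```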
Structs§
- `Tensor` — An N-dimensional tensor backed by a `Buffer<f64>`.