runmat-runtime 0.4.1

Core runtime for RunMat with builtins, BLAS/LAPACK integration, and execution APIs
{
  "title": "sum",
  "category": "math/reduction",
  "keywords": [
    "sum",
    "reduction",
    "omitnan",
    "gpu"
  ],
  "summary": "Sum elements of scalars, vectors, matrices, or N-D tensors with MATLAB-compatible options.",
  "references": [],
  "gpu_support": {
    "elementwise": false,
    "reduction": true,
    "precisions": [
      "f32",
      "f64"
    ],
    "broadcasting": "matlab",
    "notes": "Prefers provider reduce_sum_dim / reduce_sum hooks; falls back to host for omitnan, multi-axis, or class-prototype requests."
  },
  "fusion": {
    "elementwise": false,
    "reduction": true,
    "max_inputs": 1,
    "constants": "inline"
  },
  "requires_feature": null,
  "tested": {
    "unit": "builtins::math::reduction::sum::tests",
    "integration": "builtins::math::reduction::sum::tests::sum_gpu_provider_roundtrip"
  },
  "description": "`sum(X)` adds together elements of scalars, vectors, matrices, and higher-dimensional tensors. When no dimension is supplied, the reduction runs along the first non-singleton dimension.",
  "behaviors": [
    "`sum(X)` on an `m × n` matrix returns a row vector (`1 × n`) with column sums.",
    "`sum(X, 2)` returns a column vector (`m × 1`) containing row sums.",
    "`sum(X, dims)` accepts a vector of dimensions (e.g., `[1 3]`) and collapses each listed axis while leaving the others untouched.",
    "`sum(X, 'all')` flattens every dimension into a single scalar sum.",
    "Logical inputs are promoted to double precision (`true → 1.0`, `false → 0.0`).",
    "`sum(___, 'omitnan')` ignores `NaN` values; if every element in the slice is `NaN`, the result becomes `0`.",
    "`sum(___, 'includenan')` (default) propagates `NaN` when any element in the slice is `NaN`.",
    "`sum(___, outtype)` accepts `'double'`, `'default'`, or `'native'` to control the output class.",
    "`sum(___, 'like', prototype)` matches the numeric class and residency of `prototype` when supported by the active provider.",
    "Empty inputs or reductions along dimensions with size `0` return zeros that follow MATLAB shape semantics."
  ],
  "examples": [
    {
      "description": "Summing the elements of a matrix",
      "input": "A = [1 2 3; 4 5 6];\ncolSums = sum(A);\nrowSums = sum(A, 2)",
      "output": "colSums = [5 7 9];\nrowSums = [6; 15]"
    },
    {
      "description": "Summing across multiple dimensions",
      "input": "B = reshape(1:24, [3 4 2]);\ncollapse = sum(B, [1 3])",
      "output": "collapse = [48 66 84 102]"
    },
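    {
      "description": "Summing an empty input (per the empty-input behavior above, zeros are returned with MATLAB shape semantics)",
      "input": "e = zeros(0, 3);\ntotal = sum(e)",
      "output": "total = [0 0 0]"
    },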
    {
      "description": "Summing with NaN values ignored",
      "input": "values = [1 NaN 3];\ntotal = sum(values, 'omitnan')",
      "output": "total = 4"
    },
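    {
      "description": "Propagating NaN with the default 'includenan' behavior",
      "input": "values = [1 NaN 3];\ntotal = sum(values)",
      "output": "total = NaN"
    },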
    {
      "description": "Summing on the GPU and matching an existing prototype",
      "input": "proto = gpuArray.zeros(1, 1, 'single');\nresult = sum(gpuArray([1 2 3]), 'all', 'like', proto)",
      "output": "result =\n  1x1 gpuArray  single\n     6"
    },
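    {
      "description": "Forcing a double result with the 'double' outtype",
      "input": "ints = int8([100 100 100]);\ntotal = sum(ints, 'double')",
      "output": "total = 300"
    },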
    {
      "description": "Summing all elements of an array into a scalar",
      "input": "C = [1 2 3; 4 5 6];\ngrandTotal = sum(C, 'all')",
      "output": "grandTotal = 21"
    },
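    {
      "description": "Summing a logical array (promoted to double, per the logical-promotion behavior above)",
      "input": "mask = logical([1 0 1 1]);\ncount = sum(mask)",
      "output": "count = 3"
    },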
    {
      "description": "Summing with native output type",
      "input": "ints = int32([100 200 300]);\ntotal = sum(ints, 'native')",
      "output": "total = int32(600)"
    }
  ],
  "faqs": [
    {
      "question": "When should I use the `sum` function?",
      "answer": "Use `sum` whenever you need to add together slices of a tensor, whether across a single dimension, multiple dimensions, or the entire dataset."
    },
    {
      "question": "Does `sum` produce double arrays by default?",
      "answer": "Yes. Unless you request `'native'` or provide a `'like'` prototype, the result is a dense double-precision array on the host."
    },
    {
      "question": "What does `sum(A)` return?",
      "answer": "For arrays, `sum(A)` reduces along the first non-singleton dimension, returning a new array whose reduced axis has size `1`. Scalars are returned unchanged."
    },
    {
      "question": "How do I compute the sum of a specific dimension?",
      "answer": "Provide the dimension index: `sum(A, 2)` sums rows, `sum(A, 3)` sums along the third dimension, and so on. You can also pass a vector to collapse multiple dimensions."
    },
    {
      "question": "What happens if all elements are `NaN` and I request `'omitnan'`?",
      "answer": "`sum(..., 'omitnan')` treats `NaN` values as missing data. If every element in the slice is `NaN`, the result becomes `0`, matching MATLAB semantics."
    },
    {
      "question": "Does `sum` preserve integer classes?",
      "answer": "Only when you explicitly request `'native'` or `'like'`. Otherwise integers are promoted to double precision so you do not have to manage overflow manually."
    }
  ],
  "links": [
    {
      "label": "prod",
      "url": "./prod"
    },
    {
      "label": "mean",
      "url": "./mean"
    },
    {
      "label": "cumsum",
      "url": "./cumsum"
    },
    {
      "label": "gpuArray",
      "url": "./gpuarray"
    },
    {
      "label": "gather",
      "url": "./gather"
    },
    {
      "label": "all",
      "url": "./all"
    },
    {
      "label": "any",
      "url": "./any"
    },
    {
      "label": "cummax",
      "url": "./cummax"
    },
    {
      "label": "cummin",
      "url": "./cummin"
    },
    {
      "label": "cumprod",
      "url": "./cumprod"
    },
    {
      "label": "diff",
      "url": "./diff"
    },
    {
      "label": "max",
      "url": "./max"
    },
    {
      "label": "median",
      "url": "./median"
    },
    {
      "label": "min",
      "url": "./min"
    },
    {
      "label": "nnz",
      "url": "./nnz"
    },
    {
      "label": "std",
      "url": "./std"
    },
    {
      "label": "var",
      "url": "./var"
    }
  ],
  "source": {
    "label": "`crates/runmat-runtime/src/builtins/math/reduction/sum.rs`",
    "url": "https://github.com/runmat-org/runmat/blob/main/crates/runmat-runtime/src/builtins/math/reduction/sum.rs"
  },
  "gpu_residency": "You usually do **not** need to call `gpuArray` yourself. The fusion planner keeps tensors on the GPU whenever the provider exposes the required kernels. To mirror MATLAB, RunMat still accepts and respects explicit `gpuArray` / `gather` usage and the `'like'` option to control residency explicitly.",
  "gpu_behavior": [
    "RunMat Accelerate keeps tensors on the GPU whenever a provider is active:\n\n1. If a tensor already resides on the device, the runtime calls the provider’s `reduce_sum_dim` hook (or `reduce_sum` for whole-array reductions). Successful hooks return a new GPU handle so downstream consumers stay on device.\n2. Cases that require extra bookkeeping, such as `'omitnan'`, multi-axis reductions, or `'like'`/`'native'` class coercions, fall back to the host implementation. The runtime gathers the data, computes the correct MATLAB result, and re-uploads it only when a `'like'` prototype demands GPU residency.\n3. The fusion planner keeps surrounding elementwise producers and consumers on the GPU, so manual `gpuArray` / `gather` calls are optional unless you want to force residency for interoperability with legacy MATLAB workflows."
  ]
}