{
"title": "var",
"category": "math/reduction",
"keywords": [
"var",
"variance",
"statistics",
"gpu",
"omitnan",
"all"
],
"summary": "Variance of scalars, vectors, matrices, or N-D tensors with MATLAB-compatible options.",
"references": [],
"gpu_support": {
"elementwise": false,
"reduction": true,
"precisions": [
"f32",
"f64"
],
"broadcasting": "matlab",
"notes": "Falls back to host when 'omitnan' is requested or the provider lacks std/elementwise kernels for the squaring step."
},
"fusion": {
"elementwise": false,
"reduction": false,
"max_inputs": 1,
"constants": "inline"
},
"requires_feature": null,
"tested": {
"unit": "builtins::math::reduction::var::tests",
"integration": "builtins::math::reduction::var::tests::var_gpu_provider_roundtrip",
"gpu": "builtins::math::reduction::var::tests::var_wgpu_dim1_matches_cpu"
},
"description": "`var(x)` measures the spread of the elements in `x` by returning their variance. By default, RunMat matches MATLAB’s sample definition (dividing by `n-1`) and works along the first non-singleton dimension.",
"behaviors": [
"`var(X)` on an `m × n` matrix returns a `1 × n` row vector with the sample variance of each column.",
"`var(X, 1)` switches to population normalisation (`n` in the denominator). Use `var(X, 0)` or `var(X, [])` to keep the default sample behaviour.",
"`var(X, flag, dim)` lets you pick both the normalisation (`flag = 0` sample, `1` population, or `[]`) and the dimension to reduce. `var(X, flag, 'all')` collapses every dimension, while `var(X, flag, vecdim)` accepts a dimension vector such as `[1 3]` and reduces all listed axes in a single call.",
"Strings like `'omitnan'` and `'includenan'` decide whether `NaN` values are skipped or propagated.",
"Logical inputs are promoted to double precision before reduction so that results follow MATLAB’s numeric rules.",
"Empty slices return `NaN` with MATLAB-compatible shapes. Scalars return `0`, regardless of the normalisation mode.",
"Dimensions greater than `ndims(X)` leave the input unchanged.",
    "Weighted variances (`flag` as a vector) are not implemented yet; RunMat reports a descriptive error when they are requested.",
    "Complex tensors are not currently supported; convert them to real magnitudes manually before calling `var`."
],
"examples": [
{
"description": "Sample variance of a vector",
"input": "x = [1 2 3 4 5];\nv = var(x); % uses flag = 0 (sample) by default",
"output": "v = 2.5"
},
{
"description": "Population variance of each column",
"input": "A = [1 3 5; 2 4 6];\nvpop = var(A, 1); % divide by n instead of n-1",
"output": "vpop = [0.25 0.25 0.25]"
},
{
"description": "Collapsing every dimension at once",
"input": "B = reshape(1:12, [3 4]);\noverall = var(B, 0, 'all')",
"output": "overall = 13"
},
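    {
      "description": "Edge cases for scalar and empty inputs (a small illustration of the behaviours above)",
      "input": "s = var(5);  % scalar input\ne = var([]); % empty input",
      "output": "s = 0\ne = NaN"
    },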
{
"description": "Reducing across multiple dimensions",
"input": "C = cat(3, [1 2; 3 4], [5 6; 7 8]);\nsliceVariance = var(C, [], [1 3]); % keep columns, reduce rows & pages",
"output": "sliceVariance = [6.6667 6.6667]"
},
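    {
      "description": "Logical inputs are promoted to double before reduction",
      "input": "L = logical([0 1 1 0]);\nv = var(L); % treated as the double vector [0 1 1 0]",
      "output": "v = 0.3333"
    },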
{
"description": "Ignoring NaN values",
"input": "D = [1 NaN 3; 2 4 NaN];\nrowVariance = var(D, 0, 2, 'omitnan')",
"output": "rowVariance = [2; 2]"
},
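    {
      "description": "Propagating NaN with the default 'includenan' behaviour",
      "input": "E = [1 NaN 3];\nv = var(E); % 'includenan' is the default",
      "output": "v = NaN"
    },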
{
"description": "Variance on the GPU without manual `gpuArray`",
"input": "G = rand(1024, 512);\nspread = var(G, 1, 'all'); % stays on the GPU when the provider supports std reductions"
},
{
"description": "Preserving default behaviour with an empty normalisation flag",
"input": "C = [1 2; 3 4];\nrowVariance = var(C, [], 2)",
"output": "rowVariance = [0.5; 0.5]"
}
],
"faqs": [
{
"question": "What values can I pass as the normalisation flag?",
"answer": "Use `0` (or `[]`) for the sample definition, `1` for population. RunMat rejects non-scalar weight vectors and reports that weighted variances are not implemented yet."
},
{
"question": "How can I collapse multiple dimensions?",
"answer": "Pass a vector of dimensions such as `var(A, [], [1 3])`. You can also use `'all'` to collapse every dimension into a single scalar."
},
{
"question": "How do `'omitnan'` and `'includenan'` work?",
"answer": "`'omitnan'` skips NaN values; if every element in a slice is NaN the result is NaN. `'includenan'` (the default) propagates a single NaN to the output slice."
},
{
"question": "What happens if I request a dimension greater than `ndims(X)`?",
"answer": "RunMat returns the input unchanged so that MATLAB-compatible code relying on that behaviour continues to work. Scalars still return `0` to follow MATLAB’s edge-case semantics."
},
{
"question": "Are complex inputs supported?",
"answer": "Not yet. RunMat currently requires real inputs for `var`. Convert complex data to magnitude or separate real/imaginary parts before calling the builtin."
}
],
"links": [
{
"label": "std",
"url": "./std"
},
{
"label": "mean",
"url": "./mean"
},
{
"label": "sum",
"url": "./sum"
},
{
"label": "gpuArray",
"url": "./gpuarray"
},
{
"label": "gather",
"url": "./gather"
},
{
"label": "all",
"url": "./all"
},
{
"label": "any",
"url": "./any"
},
{
"label": "cummax",
"url": "./cummax"
},
{
"label": "cummin",
"url": "./cummin"
},
{
"label": "cumprod",
"url": "./cumprod"
},
{
"label": "cumsum",
"url": "./cumsum"
},
{
"label": "diff",
"url": "./diff"
},
{
"label": "max",
"url": "./max"
},
{
"label": "median",
"url": "./median"
},
{
"label": "min",
"url": "./min"
},
{
"label": "nnz",
"url": "./nnz"
},
{
"label": "prod",
"url": "./prod"
}
],
"source": {
"label": "`crates/runmat-runtime/src/builtins/math/reduction/var.rs`",
"url": "https://github.com/runmat-org/runmat/blob/main/crates/runmat-runtime/src/builtins/math/reduction/var.rs"
},
"gpu_residency": "Usually you do not need to call `gpuArray` manually. The fusion planner keeps tensors on the GPU across fused expressions and gathers them only when necessary. For explicit control or MATLAB compatibility, you can still call `gpuArray`/`gather` yourself.",
"gpu_behavior": [
"When RunMat Accelerate is active, device-resident tensors remain on the GPU whenever the provider implements the relevant hooks. Providers that expose `reduce_std_dim`/`reduce_std` execute the reduction in-place on the device; the runtime squares the resulting standard deviation tensor with an elementwise multiply in device memory. Whenever `'omitnan'`, multi-axis reductions, or unsupported shapes are requested, RunMat transparently gathers the data to the host, computes the result there, and materialises the variance tensor before returning."
]
}