{
"title": "prod",
"category": "math/reduction",
"keywords": [
"prod",
"product",
"reduction",
"omitnan",
"gpu"
],
"summary": "Multiply elements of scalars, vectors, matrices, or N-D tensors with MATLAB-compatible options.",
"references": [],
"gpu_support": {
"elementwise": false,
"reduction": true,
"precisions": [
"f32",
"f64"
],
"broadcasting": "matlab",
"notes": "Prefers provider reduce_prod_dim / reduce_prod hooks; falls back to the host for omitnan, multi-axis, or class-prototype requests."
},
"fusion": {
"elementwise": false,
"reduction": true,
"max_inputs": 1,
"constants": "inline"
},
"requires_feature": null,
"tested": {
"unit": "builtins::math::reduction::prod::tests",
"integration": "builtins::math::reduction::prod::tests::prod_gpu_provider_roundtrip"
},
"description": "`prod(X)` multiplies the elements of scalars, vectors, matrices, and higher-dimensional tensors. When no dimension is supplied, the reduction runs along the first non-singleton dimension.",
"behaviors": [
"`prod(X)` on an `m × n` matrix returns a row vector (`1 × n`) with column-wise products.",
"`prod(X, 2)` returns a column vector (`m × 1`) containing row-wise products.",
"`prod(X, dims)` accepts a vector of dimensions (for example `[1 3]`) and collapses each listed axis while leaving the others untouched.",
"`prod(X, 'all')` flattens every dimension into a single scalar product.",
"Logical inputs are promoted to double precision (`true → 1.0`, `false → 0.0`) unless you request `'native'` output or supply a `'like'` prototype.",
"`prod(___, 'omitnan')` ignores `NaN` values; if every element in the slice is `NaN`, the result becomes `1`, the multiplicative identity.",
"`prod(___, 'includenan')` (default) propagates `NaN` whenever a `NaN` appears in that slice.",
"`prod(___, outtype)` accepts `'double'`, `'default'`, or `'native'` to control the output class.",
"`prod(___, 'like', prototype)` matches the numeric class and residency of `prototype` when supported by the active provider.",
"Empty inputs or reductions along a dimension of size `0` return arrays filled with `1`, the multiplicative identity, shaped according to MATLAB semantics."
],
"examples": [
{
"description": "Multiplying the elements of a matrix",
"input": "A = [1 2 3; 4 5 6];\ncolProd = prod(A);\nrowProd = prod(A, 2)",
"output": "colProd = [4 10 18];\nrowProd = [6; 120]"
},
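{
"description": "Multiplying logical values, which are promoted to double by default (a minimal illustration of the promotion rule)",
"input": "mask = logical([1 0 1]);\nmaskProd = prod(mask);\ncls = class(maskProd)",
"output": "maskProd = 0;\ncls = 'double'"
},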
{
"description": "Multiplying across multiple dimensions",
"input": "B = reshape(1:24, [3 4 2]);\nprod13 = prod(B, [1 3])",
"output": "prod13 =\n 16380 587520 4021920 16030080"
},
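{
"description": "Computing a geometric mean from a product (one common use of `prod`; `nthroot` and `numel` are standard MATLAB helpers)",
"input": "v = [1 3 9 27];\ngm = nthroot(prod(v), numel(v))",
"output": "gm = 5.1962"
},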
{
"description": "Multiplying with NaN values ignored",
"input": "values = [2 NaN 4];\ncleanProd = prod(values, 'omitnan')",
"output": "cleanProd = 8"
},
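{
"description": "Propagating NaN with the default 'includenan' behavior",
"input": "values = [2 NaN 4];\ndirtyProd = prod(values)",
"output": "dirtyProd = NaN"
},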
{
"description": "Multiplying on the GPU and matching an existing prototype",
"input": "G = gpuArray(ones(1024, 1024) + 0.01);\nproto = gpuArray(zeros(1, 1));\ngpuResult = prod(G, 'like', proto);\nresult = gather(gpuResult)"
},
{
"description": "Multiplying all elements of an array into a scalar",
"input": "P = prod(1:10, 'all')",
"output": "P = 3628800"
},
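{
"description": "Multiplying over empty inputs, which yields the multiplicative identity",
"input": "e = prod([]);\ncolOnes = prod(zeros(0, 3))",
"output": "e = 1;\ncolOnes = [1 1 1]"
},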
{
"description": "Multiplying with native output type",
"input": "ints = int16([2 3 4]);\nnativeProd = prod(ints, 'native')",
"output": "nativeProd = int16(24)"
}
],
"faqs": [
{
"question": "When should I use the `prod` function?",
"answer": "Use `prod` whenever you need multiplicative reductions: geometric means, determinant-like products, or scaling chains of factors."
},
{
"question": "Does `prod` produce double arrays by default?",
"answer": "Yes. Unless you request `'native'` or provide a `'like'` prototype, the result is a dense double-precision array on the host."
},
{
"question": "What does `prod(A)` return?",
"answer": "If you call `prod(A)` where `A` is an array, the result is an array in which the first non-singleton dimension is reduced to length `1`; for an `m × n` matrix that is a `1 × n` row vector of column-wise products."
},
{
"question": "How do I compute the product of a specific dimension?",
"answer": "Pass the dimension as the second argument (`prod(A, 2)` for row-wise products) or provide a dimension vector (`prod(A, [1 3])`) to collapse multiple axes at once."
},
{
"question": "What happens if all elements are `NaN` and I request `'omitnan'`?",
"answer": "The result becomes `1`, matching MATLAB's multiplicative identity semantics for empty slices."
},
{
"question": "Does `prod` preserve integer classes?",
"answer": "Only when you explicitly request `'native'` or `'like'`. Otherwise, integers are promoted to double precision so you do not have to manage overflow manually."
}
],
"links": [
{
"label": "sum",
"url": "./sum"
},
{
"label": "mean",
"url": "./mean"
},
{
"label": "cumprod",
"url": "./cumprod"
},
{
"label": "gpuArray",
"url": "./gpuarray"
},
{
"label": "gather",
"url": "./gather"
},
{
"label": "all",
"url": "./all"
},
{
"label": "any",
"url": "./any"
},
{
"label": "cummax",
"url": "./cummax"
},
{
"label": "cummin",
"url": "./cummin"
},
{
"label": "cumsum",
"url": "./cumsum"
},
{
"label": "diff",
"url": "./diff"
},
{
"label": "max",
"url": "./max"
},
{
"label": "median",
"url": "./median"
},
{
"label": "min",
"url": "./min"
},
{
"label": "nnz",
"url": "./nnz"
},
{
"label": "std",
"url": "./std"
},
{
"label": "var",
"url": "./var"
}
],
"source": {
"label": "`crates/runmat-runtime/src/builtins/math/reduction/prod.rs`",
"url": "https://github.com/runmat-org/runmat/blob/main/crates/runmat-runtime/src/builtins/math/reduction/prod.rs"
},
"gpu_residency": "You usually do **not** need to call `gpuArray` yourself in RunMat. The fusion planner keeps residency on the GPU for fused expressions, and reduction kernels execute on the device whenever the provider exposes the necessary hooks. To match MathWorks MATLAB behaviour—or to bootstrap GPU residency explicitly—you can still create GPU arrays manually.",
"gpu_behavior": [
"When RunMat Accelerate is active, tensors that already reside on the GPU remain on the device. The runtime calls `reduce_prod_dim` (or `reduce_prod` for whole-array products) on the active provider when available. Requests that require `'omitnan'`, multi-axis reductions, or class coercions fall back to the host implementation, compute the correct MATLAB result, and re-upload only when a `'like'` prototype demands GPU residency."
]
}