{
"title": "power",
"category": "math/elementwise",
"keywords": [
"power",
"element-wise power",
"dot caret",
"gpu",
"broadcasting"
],
"summary": "Element-wise exponentiation A .^ B with MATLAB-compatible broadcasting, complex support, and GPU fallbacks.",
"references": [
"https://www.mathworks.com/help/matlab/ref/power.html"
],
"gpu_support": {
"elementwise": true,
"reduction": false,
"precisions": [
"f32",
"f64"
],
"broadcasting": "matlab",
"notes": "Uses provider elem_pow when both operands are gpuArrays of the same shape; gathers to the host for implicit expansion, complex inputs, or when elem_pow is unavailable."
},
"fusion": {
"elementwise": true,
"reduction": false,
"max_inputs": 2,
"constants": "inline"
},
"requires_feature": null,
"tested": {
"unit": "builtins::math::elementwise::power::tests::power_scalar_numbers",
"integration": "builtins::math::elementwise::power::tests::power_matrix_broadcast",
"gpu": "builtins::math::elementwise::power::tests::power_gpu_pair_roundtrip",
"wgpu": "builtins::math::elementwise::power::tests::power_wgpu_matches_cpu_elementwise",
"like_gpu": "builtins::math::elementwise::power::tests::power_like_gpu_residency",
"like_complex": "builtins::math::elementwise::power::tests::power_like_complex_promotes_output"
},
"description": "`Y = power(A, B)` (or `A .^ B`) raises each element of `A` to the corresponding element of `B` using MATLAB's implicit-expansion rules. Scalars broadcast automatically, and complex inputs produce complex outputs that match MATLAB behaviour.",
"behaviors": [
"Supports scalars, vectors, matrices, and N-D tensors with the same shape or compatible singleton dimensions. Size mismatches raise the standard MATLAB error.",
"Logical and character inputs are promoted to double precision before powering (`'A'.^2` uses the Unicode code points of the characters).",
"Complex bases and/or exponents follow the analytic identity `z.^w = exp(w * log(z))`, so negative bases with fractional exponents return complex results instead of `NaN`.",
"Empty tensors propagate emptiness; the result uses the broadcasted size.",
"The optional `'like', prototype` arguments mirror the numeric flavour and residency (host vs gpuArray) of `prototype`, just like MATLAB."
],
"examples": [
{
"description": "Raise a scalar to a power",
"input": "y = power(2, 5)",
"output": "y = 32"
},
{
"description": "Compute element-wise powers of a matrix",
"input": "A = [1 2 3; 4 5 6];\nB = power(A, 2)",
"output": "B =\n 1 4 9\n 16 25 36"
},
{
"description": "Broadcast exponents across rows",
"input": "base = (1:3)';\nexponent = [1 2 3];\nresult = power(base, exponent)",
"output": "result =\n 1 1 1\n 2 4 8\n 3 9 27"
},
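{
"description": "Illustrative sketch: mismatched non-singleton sizes raise the standard dimension error (exact wording assumed to follow MATLAB, per the implicit-expansion FAQ)",
"input": "power([1 2 3], [1 2])",
"output": "Arrays have incompatible sizes for this operation."
},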
{
"description": "Generate complex powers from negative bases",
"input": "values = power([-2 -1 0 1 2], 0.5)",
"output": "values = [0.0000 + 1.4142i, 0.0000 + 1.0000i, 0, 1, 1.4142]"
},
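{
"description": "Illustrative sketch of zero-base edge cases, assuming MATLAB's conventions (`0.^0` is 1, `0.^-1` is Inf)",
"input": "edge = power(0, [0 1 -1])",
"output": "edge = [1 0 Inf]"
},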
{
"description": "Keep GPU results with a `'like'` prototype",
"input": "proto = gpuArray.zeros(1, 1, 'single');\nx = [1 2 3];\ny = [2 3 4];\ndevicePowers = power(x, y, 'like', proto);\nresult = gather(devicePowers)",
"output": "devicePowers =\n 1x3 gpuArray single\n 1 8 81\nresult =\n 1 8 81"
},
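{
"description": "Broadcast a scalar base across a vector of exponents",
"input": "y = power(2, 0:4)",
"output": "y = [1 2 4 8 16]"
},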
{
"description": "Convert character codes before powering",
"input": "codes = power('ABC', 2)",
"output": "codes = [4225 4356 4489]"
}
],
"faqs": [
{
"question": "Does `power` support MATLAB implicit expansion?",
"answer": "Yes. Singleton dimensions expand automatically, and size mismatches raise a dimension error with the usual MATLAB wording."
},
{
"question": "What numeric type does `power` return?",
"answer": "Real inputs produce doubles. Results promote to complex doubles whenever the exponentiation would leave the real line (for example `(-2).^0.5` or complex exponents)."
},
{
"question": "Can I mix scalars and arrays?",
"answer": "Absolutely. Scalars broadcast to match the other operand. This includes scalar gpuArrays."
},
{
"question": "What happens if only one operand is on the GPU?",
"answer": "If the other operand is a scalar, RunMat keeps everything on the GPU. Otherwise, it gathers the device operand, performs the computation on the host, and returns a host tensor (unless `'like'` instructs the runtime to re-upload the result)."
},
{
"question": "Does `power` modify the inputs in-place?",
"answer": "No. The builtin always allocates a fresh tensor (or complex tensor). Fusion can eliminate temporary allocations when the expression continues with other element-wise operations."
},
{
"question": "Can I force the result to stay on the GPU?",
"answer": "Yes—pass `'like', gpuArrayPrototype`. The runtime mirrors the residency of the prototype and uploads the result when necessary."
}
],
"links": [
{
"label": "power",
"url": "./power"
},
{
"label": "times",
"url": "./times"
},
{
"label": "rdivide",
"url": "./rdivide"
},
{
"label": "ldivide",
"url": "./ldivide"
},
{
"label": "pow2",
"url": "./pow2"
},
{
"label": "gpuArray",
"url": "./gpuarray"
},
{
"label": "gather",
"url": "./gather"
},
{
"label": "abs",
"url": "./abs"
},
{
"label": "angle",
"url": "./angle"
},
{
"label": "conj",
"url": "./conj"
},
{
"label": "double",
"url": "./double"
},
{
"label": "exp",
"url": "./exp"
},
{
"label": "expm1",
"url": "./expm1"
},
{
"label": "factorial",
"url": "./factorial"
},
{
"label": "gamma",
"url": "./gamma"
},
{
"label": "hypot",
"url": "./hypot"
},
{
"label": "imag",
"url": "./imag"
},
{
"label": "log",
"url": "./log"
},
{
"label": "log10",
"url": "./log10"
},
{
"label": "log1p",
"url": "./log1p"
},
{
"label": "log2",
"url": "./log2"
},
{
"label": "minus",
"url": "./minus"
},
{
"label": "plus",
"url": "./plus"
},
{
"label": "real",
"url": "./real"
},
{
"label": "sign",
"url": "./sign"
},
{
"label": "single",
"url": "./single"
},
{
"label": "sqrt",
"url": "./sqrt"
}
],
"source": {
"label": "`crates/runmat-runtime/src/builtins/math/elementwise/power.rs`",
"url": "https://github.com/runmat-org/runmat/blob/main/crates/runmat-runtime/src/builtins/math/elementwise/power.rs"
},
"gpu_residency": "Most workflows do **not** require manual `gpuArray` calls. RunMat's auto-offload and fusion planner keep chains of element-wise operations on the GPU whenever the provider can satisfy them. When an operation needs a fallback (implicit expansion, complex inputs, or unsupported kernels), RunMat transparently gathers to the host, computes the MATLAB-accurate result, and honours any `'like'` residency hints you supplied.",
"gpu_behavior": [
"When both operands are gpuArrays with identical shapes, RunMat calls the provider's `elem_pow` hook. The WGPU backend uses a fused WGSL kernel, and the in-process provider executes on host data without leaving the GPU abstraction.",
"If only one operand lives on the GPU and the other is a scalar, RunMat materialises a device buffer for the scalar and still uses `elem_pow`.",
"Implicit expansion, complex operands, and providers that lack `elem_pow` automatically fall back to the host implementation. Results respect `'like'` residency hints, re-uploading to the GPU when requested."
]
}