{
"title": "rdivide",
"category": "math/elementwise",
"keywords": [
"rdivide",
"element-wise division",
"./",
"gpu",
"implicit expansion"
],
"summary": "Element-wise division A ./ B with MATLAB-compatible implicit expansion, complex support, and GPU fallbacks.",
"references": [
"https://www.mathworks.com/help/matlab/ref/rdivide.html"
],
"gpu_support": {
"elementwise": true,
"reduction": false,
"precisions": [
"f32",
"f64"
],
"broadcasting": "matlab",
"notes": "Prefers provider elem_div/scalar_div/scalar_rdiv hooks; gathers to host for implicit expansion or unsupported operand types."
},
"fusion": {
"elementwise": true,
"reduction": false,
"max_inputs": 2,
"constants": "inline"
},
"requires_feature": null,
"tested": {
"unit": "builtins::math::elementwise::rdivide::tests::rdivide_scalar_numbers",
"integration": "builtins::math::elementwise::rdivide::tests::rdivide_row_column_broadcast",
"gpu": "builtins::math::elementwise::rdivide::tests::rdivide_gpu_pair_roundtrip",
"wgpu": "builtins::math::elementwise::rdivide::tests::rdivide_wgpu_matches_cpu_elementwise",
"like_gpu": "builtins::math::elementwise::rdivide::tests::rdivide_like_gpu_prototype_keeps_residency",
"like_host": "builtins::math::elementwise::rdivide::tests::rdivide_like_host_gathers_gpu_value",
"like_complex": "builtins::math::elementwise::rdivide::tests::rdivide_like_complex_prototype_yields_complex"
},
"description": "`rdivide(A, B)` (or the operator form `A ./ B`) divides corresponding elements of `A` and `B`, honouring MATLAB's implicit expansion rules so that scalars and singleton dimensions broadcast automatically.",
"behaviors": [
"Supports real, complex, logical, and character inputs; logical and character data are promoted to double precision before division.",
"Implicit expansion works across any dimension, provided the non-singleton extents match. Size mismatches raise the standard MATLAB-compatible error.",
"Complex operands follow the analytic rule `(a + ib) ./ (c + id) = ((ac + bd) + i(bc - ad)) / (c^2 + d^2)`, matching MATLAB's behaviour for infinities and NaNs.",
"Empty dimensions propagate naturally—if the broadcasted shape contains a zero extent, the result is empty with that shape.",
"Integer inputs promote to double precision, mirroring the behaviour of other RunMat arithmetic builtins.",
"The optional `'like'` prototype makes the output adopt the residency (host or GPU) and complexity characteristics of the prototype, letting you keep implicit-expansion expressions on the device without manual `gpuArray` calls. When the prototype is complex, the result remains a complex host array (complex gpuArray prototypes are not yet supported)."
],
"examples": [
{
"description": "Divide two matrices element-wise",
"input": "A = [8 12 18; 2 10 18];\nB = [2 3 6; 2 5 9];\nQ = rdivide(A, B)",
"output": "Q =\n 4 4 3\n 1 2 2"
},
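{
"description": "Division by zero follows IEEE semantics",
"input": "q = rdivide([1 0 -1], 0)",
"output": "q =\n Inf NaN -Inf"
},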
{
"description": "Divide a matrix by a scalar",
"input": "A = magic(3);\nscaled = rdivide(A, 2)",
"output": "scaled =\n 4.5 0.5 3.5\n 1.5 5.0 9.0\n 8.0 6.5 2.0"
},
{
"description": "Use implicit expansion between a column and row vector",
"input": "col = (1:3)';\nrow = [10 20 30];\nratio = rdivide(col, row)",
"output": "ratio =\n 0.1 0.05 0.0333\n 0.2 0.10 0.0667\n 0.3 0.15 0.1000"
},
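{
"description": "Empty extents propagate through implicit expansion",
"input": "sz = size(rdivide(zeros(0, 3), ones(1, 3)))",
"output": "sz =\n 0 3"
},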
{
"description": "Divide complex inputs element-wise",
"input": "z1 = [1+2i, 3-4i];\nz2 = [2-1i, -1+1i];\nquot = rdivide(z1, z2)",
"output": "quot =\n 0.0 + 1.0i -3.5 + 0.5i"
},
{
"description": "Divide character codes by a numeric scalar",
"input": "letters = 'ABC';\ncodes = rdivide(letters, 2)",
"output": "codes = [32.5 33 33.5]"
},
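{
"description": "Logical inputs promote to double before division",
"input": "mask = logical([1 0 1]);\nv = rdivide(mask, 2)",
"output": "v =\n 0.5 0 0.5"
},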
{
"description": "Execute `rdivide` directly on gpuArray inputs",
"input": "G1 = gpuArray([10 20 30]);\nG2 = gpuArray([2 5 10]);\ndeviceQuot = rdivide(G1, G2);\nresult = gather(deviceQuot)",
"output": "deviceQuot =\n 1x3 gpuArray\n 5 4 3\nresult =\n 5 4 3"
},
{
"description": "Keep the result on the GPU with a `'like'` prototype",
"input": "proto = gpuArray.zeros(1, 1);\nA = [1 2 3];\nB = [2 4 6];\nC = rdivide(A, B, 'like', proto); % stays on the GPU for downstream work",
"output": "C =\n 1x3 gpuArray\n 0.5 0.5 0.5"
}
],
"faqs": [
{
"question": "Does `rdivide` support MATLAB implicit expansion?",
"answer": "Yes. Any singleton dimensions expand automatically. If a dimension has incompatible non-singleton extents, `rdivide` raises the standard size-mismatch error."
},
{
"question": "What numeric type does `rdivide` return?",
"answer": "Results are double precision for real inputs and complex double when either operand is complex. Logical and character inputs are promoted to double before division."
},
{
"question": "How does `rdivide` handle division by zero?",
"answer": "RunMat follows IEEE rules: `finite ./ 0` produces signed infinity, while `0 ./ 0` yields `NaN`. Complex results follow MATLAB's analytic continuation rules."
},
{
"question": "Can I divide gpuArrays by host scalars?",
"answer": "Yes. RunMat keeps the computation on the GPU when the scalar is numeric. For other host operand types, the runtime gathers the gpuArray and computes on the CPU."
},
{
"question": "Does `rdivide` preserve gpuArray residency after a fallback?",
"answer": "When a fallback occurs (for example, implicit expansion that the provider does not implement), the current result remains on the host. Subsequent operations may move it back to the GPU when auto-offload decides it is profitable."
},
{
"question": "How can I force the result to stay on the GPU?",
"answer": "Provide a `'like'` prototype: `rdivide(A, B, 'like', gpuArray.zeros(1, 1))` keeps the result on the device even if one of the inputs originated on the host. Complex prototypes are honoured on the host today; supply a real gpuArray prototype when you need the result to remain device-resident."
},
{
"question": "How are empty arrays handled?",
"answer": "Empty dimensions propagate. If either operand has an extent of zero in the broadcasted shape, the result is empty with the broadcasted dimensions."
},
{
"question": "Are integer inputs supported?",
"answer": "Yes. Integers promote to double precision during the division, matching other RunMat arithmetic builtins."
},
{
"question": "Can I mix complex and real operands?",
"answer": "Absolutely. The result is complex, with broadcasting rules identical to MATLAB."
},
{
"question": "What about string arrays?",
"answer": "String arrays are not numeric and therefore raise an error when passed to `rdivide`."
}
],
"links": [
{
"label": "times",
"url": "./times"
},
{
"label": "ldivide",
"url": "./ldivide"
},
{
"label": "mrdivide",
"url": "./mrdivide"
},
{
"label": "gpuArray",
"url": "./gpuarray"
},
{
"label": "gather",
"url": "./gather"
},
{
"label": "abs",
"url": "./abs"
},
{
"label": "angle",
"url": "./angle"
},
{
"label": "conj",
"url": "./conj"
},
{
"label": "double",
"url": "./double"
},
{
"label": "exp",
"url": "./exp"
},
{
"label": "expm1",
"url": "./expm1"
},
{
"label": "factorial",
"url": "./factorial"
},
{
"label": "gamma",
"url": "./gamma"
},
{
"label": "hypot",
"url": "./hypot"
},
{
"label": "imag",
"url": "./imag"
},
{
"label": "log",
"url": "./log"
},
{
"label": "log10",
"url": "./log10"
},
{
"label": "log1p",
"url": "./log1p"
},
{
"label": "log2",
"url": "./log2"
},
{
"label": "minus",
"url": "./minus"
},
{
"label": "plus",
"url": "./plus"
},
{
"label": "pow2",
"url": "./pow2"
},
{
"label": "power",
"url": "./power"
},
{
"label": "real",
"url": "./real"
},
{
"label": "sign",
"url": "./sign"
},
{
"label": "single",
"url": "./single"
},
{
"label": "sqrt",
"url": "./sqrt"
}
],
"source": {
"label": "`crates/runmat-runtime/src/builtins/math/elementwise/rdivide.rs`",
"url": "https://github.com/runmat-org/runmat/blob/main/crates/runmat-runtime/src/builtins/math/elementwise/rdivide.rs"
},
"gpu_residency": "RunMat's auto-offload planner keeps tensors on the GPU whenever fused expressions benefit from device execution. Explicit `gpuArray` / `gather` calls are still supported for MATLAB code that manages residency manually. When the active provider lacks the kernels needed for a particular call (for example, implicit expansion between gpuArrays of different shapes), RunMat gathers back to the host, computes the MATLAB-accurate result, and resumes execution seamlessly.",
"gpu_behavior": [
"When a gpuArray provider is active:\n\n1. If both operands are gpuArrays with identical shapes, RunMat dispatches to the provider's `elem_div` hook so the entire computation stays on device memory. 2. If one operand is a scalar (host or device) and the other is a gpuArray, the runtime calls `scalar_div` (`tensor ./ scalar`) or `scalar_rdiv` (`scalar ./ tensor`) accordingly. 3. The fusion planner treats `rdivide` as a fusible elementwise node, so adjacent elementwise producers or consumers can execute inside a single WGSL kernel or provider-optimised pipeline, reducing host↔device transfers. 4. Implicit-expansion workloads (for example, combining a row vector with a column vector) or unsupported operand kinds gather transparently to host memory, compute the result with full MATLAB semantics, and return a host tensor. When you request `'like'` with a complex prototype, RunMat gathers to the host, performs the conversion, and returns a complex host value so downstream code still sees MATLAB-compatible types."
]
}