runmat-runtime 0.4.1

Core runtime for RunMat with builtins, BLAS/LAPACK integration, and execution APIs
Documentation
{
  "title": "inv",
  "category": "math/linalg/solve",
  "keywords": [
    "inv",
    "matrix inverse",
    "linear algebra",
    "solve",
    "gpu"
  ],
  "summary": "Compute the inverse of a square matrix with MATLAB-compatible pivoting and GPU fallbacks.",
  "references": [
    "https://www.mathworks.com/help/matlab/ref/inv.html"
  ],
  "gpu_support": {
    "elementwise": false,
    "reduction": false,
    "precisions": [
      "f32",
      "f64"
    ],
    "broadcasting": "none",
    "notes": "Uses the acceleration provider's `inv` hook when available; the default WGPU backend gathers to the host, computes the inverse, and re-uploads the result to preserve residency."
  },
  "fusion": {
    "elementwise": false,
    "reduction": false,
    "max_inputs": 1,
    "constants": "uniform"
  },
  "requires_feature": null,
  "tested": {
    "unit": "builtins::math::linalg::solve::inv::tests",
    "gpu": "builtins::math::linalg::solve::inv::tests::inv_gpu_round_trip_matches_cpu",
    "wgpu": "builtins::math::linalg::solve::inv::tests::inv_wgpu_matches_cpu"
  },
  "description": "`X = inv(A)` returns the matrix inverse of a square, full-rank matrix `A`. The result satisfies `A * X = eye(size(A))` within round-off. Scalars behave like `1 ./ A`, matching MATLAB semantics.",
  "behaviors": [
    "Inputs must be 2-D matrices (trailing singleton dimensions are accepted). Non-square matrices raise the MATLAB error `\"inv: input must be a square matrix.\"`",
    "Singular or rank-deficient matrices raise `\"inv: matrix is singular to working precision.\"`",
    "Logical and integer inputs are promoted to double precision before inversion.",
    "Complex inputs are handled in full complex arithmetic.",
    "Empty matrices return an empty matrix with the same dimensions (e.g., `inv([])` yields `[]`)."
  ],
  "examples": [
    {
      "description": "Inverting a 2x2 matrix for solving linear systems",
      "input": "A = [4 -2; 1 3];\nX = inv(A)",
      "output": "X =\n    0.2143    0.1429\n   -0.0714    0.2857"
    },
    {
      "description": "Checking that `inv(A)` produces the identity matrix",
      "input": "A = [2 1 0; 0 1 -1; 0 0 3];\nX = inv(A);\nproduct = A * X",
      "output": "product =\n    1.0000         0         0\n         0    1.0000         0\n         0         0    1.0000"
    },
    {
      "description": "Inverting a diagonal matrix (each diagonal entry is simply reciprocated)",
      "input": "D = diag([2, 5, 10]);\nX = inv(D)",
      "output": "X =\n    0.5000         0         0\n         0    0.2000         0\n         0         0    0.1000"
    },
    {
      "description": "Computing the inverse of a complex matrix",
      "input": "A = [1+2i  0; 3i  4-1i];\nX = inv(A)",
      "output": "X =\n   0.2000 - 0.4000i   0.0000 + 0.0000i\n  -0.2471 - 0.2118i   0.2353 + 0.0588i"
    },
    {
      "description": "Using `inv` on a GPU-resident matrix",
      "input": "G = gpuArray([3 1; 0 2]);\ninvG = inv(G);       % stays on the GPU when the provider implements inv\nresult = gather(invG)",
      "output": "result =\n    0.3333   -0.1667\n         0    0.5000"
    },
    {
      "description": "Handling singular matrices gracefully",
      "input": "A = [1 2; 2 4];\nX = inv(A)",
      "output": "Error using inv\ninv: matrix is singular to working precision."
    }
  ],
  "faqs": [
    {
      "question": "Do I need to use `inv` to solve linear systems?",
      "answer": "Prefer `mldivide` (`A \\ b`) or `linsolve` for numerical stability and performance. Use `inv` only when you explicitly need the inverse matrix."
    },
    {
      "question": "What error do I get for singular matrices?",
      "answer": "RunMat mirrors MATLAB and raises `\"inv: matrix is singular to working precision.\"` when LU factorisation detects a zero pivot."
    },
    {
      "question": "Can I invert non-square matrices?",
      "answer": "No. `inv` requires square matrices. Use `pinv` for pseudoinverses of rectangular matrices."
    },
    {
      "question": "Does `inv` support complex numbers?",
      "answer": "Yes. Complex matrices are inverted using full complex arithmetic."
    },
    {
      "question": "What happens with empty matrices?",
      "answer": "`inv([])` returns `[]` (an empty matrix) without error."
    },
    {
      "question": "Does `inv` preserve GPU residency?",
      "answer": "If the acceleration provider exposes an `inv` hook, the operation stays on the GPU. Otherwise, RunMat gathers, computes on the host, and re-uploads so the caller still receives a GPU tensor."
    }
  ],
  "links": [
    {
      "label": "pinv",
      "url": "./pinv"
    },
    {
      "label": "linsolve",
      "url": "./linsolve"
    },
    {
      "label": "mldivide",
      "url": "./mldivide"
    },
    {
      "label": "det",
      "url": "./det"
    },
    {
      "label": "gpuArray",
      "url": "./gpuarray"
    },
    {
      "label": "gather",
      "url": "./gather"
    },
    {
      "label": "cond",
      "url": "./cond"
    },
    {
      "label": "norm",
      "url": "./norm"
    },
    {
      "label": "rank",
      "url": "./rank"
    },
    {
      "label": "rcond",
      "url": "./rcond"
    }
  ],
  "source": {
    "label": "`crates/runmat-runtime/src/builtins/math/linalg/solve/inv.rs`",
    "url": "https://github.com/runmat-org/runmat/blob/main/crates/runmat-runtime/src/builtins/math/linalg/solve/inv.rs"
  },
  "gpu_residency": "You typically do not need to move data manually. If `A` already resides on the GPU and the provider implements `inv`, the computation stays on the device. Providers without a native kernel (including the current WGPU backend) download `A`, compute the inverse on the host, and re-upload the result, so subsequent GPU code continues to operate on device-resident data. `gpuArray` remains available for compatibility and for explicitly seeding GPU residency.",
  "gpu_behavior": [
    "When a GPU acceleration provider is active, RunMat forwards the operation to its `inv` hook. If the provider does not implement a native kernel, RunMat gathers the data to the host, uses the shared CPU implementation, and attempts to re-upload the result so downstream GPU work keeps its residency. The shipping WGPU backend currently follows this gather/compute/upload pattern."
  ]
}
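
The FAQ above recommends `mldivide` over `inv` for solving linear systems. A short NumPy sketch (illustrative only, not RunMat code; the Hilbert matrix is just an assumed example of an ill-conditioned system) shows the difference: a direct LU-based solve produces a much smaller residual than multiplying by an explicit inverse.

```python
import numpy as np

# Build an ill-conditioned Hilbert matrix H[i, j] = 1 / (i + j + 1).
n = 10
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
x_true = np.ones(n)
b = H @ x_true

# Direct solve via LU factorization (analogous to MATLAB's `A \ b`).
x_solve = np.linalg.solve(H, b)

# Explicit inverse, then a matrix-vector product (analogous to `inv(A) * b`).
x_inv = np.linalg.inv(H) @ b

# The residual ||H x - b|| is typically orders of magnitude smaller
# for the direct solve, even though both answers come from the same LU.
print("solve residual:", np.linalg.norm(H @ x_solve - b))
print("inv residual:  ", np.linalg.norm(H @ x_inv - b))
```

The direct solve is backward stable, so its residual stays near machine precision regardless of conditioning; forming the inverse first amplifies rounding error by roughly the condition number of `H`.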