runmat_runtime/builtins/math/poly/
polyfit.rs

//! MATLAB-compatible `polyfit` builtin with GPU-aware semantics for RunMat.

use log::{trace, warn};
use num_complex::Complex64;
use runmat_accelerate_api::ProviderPolyfitResult;
use runmat_builtins::{ComplexTensor, StructValue, Tensor, Value};
use runmat_macros::runtime_builtin;

use crate::builtins::common::tensor;
#[cfg(feature = "doc_export")]
use crate::register_builtin_doc_text;
use crate::{register_builtin_fusion_spec, register_builtin_gpu_spec};

use crate::dispatcher;

use crate::builtins::common::spec::{
    BroadcastSemantics, BuiltinFusionSpec, BuiltinGpuSpec, ConstantStrategy, GpuOpKind,
    ProviderHook, ReductionNaN, ResidencyPolicy, ScalarType, ShapeRequirements,
};

const EPS: f64 = 1.0e-12;
const EPS_NAN: f64 = 1.0e-12;

#[cfg(feature = "doc_export")]
pub const DOC_MD: &str = r#"---
title: "polyfit"
category: "math/poly"
keywords: ["polyfit", "polynomial fitting", "least squares", "prediction intervals", "gpu"]
summary: "Fit an n-th degree polynomial to data points using least squares with MATLAB-compatible outputs."
references:
  - title: "MATLAB polyfit documentation"
    url: "https://www.mathworks.com/help/matlab/ref/polyfit.html"
gpu_support:
  elementwise: false
  reduction: false
  precisions: ["f32", "f64"]
  broadcasting: "matlab"
  notes: "Providers gather inputs to the host today; the shared Householder QR solver runs on CPU while preserving GPU residency semantics."
fusion:
  elementwise: false
  reduction: false
  max_inputs: 3
  constants: "uniform"
requires_feature: null
tested:
  unit: "builtins::math::poly::polyfit::tests"
  gpu: "builtins::math::poly::polyfit::tests::polyfit_wgpu_matches_cpu"
  doc: "builtins::math::poly::polyfit::tests::doc_examples_present"
---

# What does the `polyfit` function do in MATLAB / RunMat?
`polyfit(x, y, n)` returns the coefficients of an `n`-th degree polynomial that fits the samples
`(x, y)` in a least-squares sense. The coefficients are ordered from the highest power of `x` (or
`t` when centering is applied) down to the constant term, matching MATLAB's convention.

## How does the `polyfit` function behave in MATLAB / RunMat?
- `x` and `y` must contain the same number of finite numeric elements. They can be row vectors,
  column vectors, or N-D arrays; RunMat flattens them column-major (MATLAB order).
- The optional fourth argument provides weights: `polyfit(x, y, n, w)` minimizes
  `||sqrt(w) .* (y - polyval(p, x))||_2`. All weights must be real, non-negative, and match the
  shape of `x`.
- `polyfit` applies centering and scaling to the independent variable for numerical stability. It
  returns the vector `mu = [mean(x), std(x)]`, and the polynomial fits `(x - mu(1)) / mu(2)`. Use
  `polyval(p, x, [], mu)` to evaluate with the same scaling.
- `[p, S, mu] = polyfit(...)` also returns `S`, a structure with fields `R`, `df`, and `normr`.
  These match MATLAB and are accepted directly by `polyval` for computing prediction intervals.
  `S.R` is the upper-triangular factor from the QR decomposition of the scaled Vandermonde matrix,
  `S.df` is the degrees of freedom (`max(numel(x) - (n + 1), 0)`), and `S.normr` is the 2-norm of
  the weighted residuals.
- RunMat mirrors MATLAB error messages for inconsistent dimensions, non-integer degrees, singular
  scaling, and invalid weights.

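The centering and scaling described above can be sketched with NumPy (illustrative only; `polyfit_centered` is a hypothetical helper, not part of RunMat, and the real implementation is the Householder QR solver in this file):

```python
import numpy as np

def polyfit_centered(x, y, n):
    """Sketch of polyfit's centering/scaling pipeline (not RunMat's actual code)."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    sigma = x.std(ddof=1) if x.size > 1 else 1.0   # MATLAB's std uses N-1
    mu = np.array([x.mean(), sigma if sigma > 0 else 1.0])
    t = (x - mu[0]) / mu[1]                        # centered/scaled variable
    V = np.vander(t, n + 1)                        # Vandermonde, highest power first
    p, *_ = np.linalg.lstsq(V, y, rcond=None)      # least-squares solve
    normr = np.linalg.norm(y - V @ p)              # 2-norm of residuals
    df = max(x.size - (n + 1), 0)                  # degrees of freedom
    return p, mu, normr, df
```

Because `p` is expressed in `t`, evaluate it with the same scaling, mirroring `polyval(p, x, [], mu)`.
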
## `polyfit` Function GPU Execution Behavior
RunMat's acceleration layer exposes a dedicated `polyfit` provider hook. The WGPU provider routes
requests through this hook, gathers inputs to the host, executes the shared Householder QR solver,
and returns MATLAB-compatible outputs while preserving residency metadata. Providers that have not
implemented the hook yet fall back to the same host code path automatically, so results stay correct
even when the inputs originated on the GPU.

## Examples of using the `polyfit` function in MATLAB / RunMat

### Fitting a straight line through noisy samples

```matlab
x = 0:5;
y = 2.5 * x + 1 + 0.05 * randn(size(x));
p = polyfit(x, y, 1);
```

Expected output:

```matlab
p ≈ [2.5 1.0];   % slope and intercept recovered from noisy data
```

### Computing a quadratic fit and reusing `mu`

```matlab
x = -3:3;
y = x.^2 - 2 * x + 4;
[p, S, mu] = polyfit(x, y, 2);
smoothed = polyval(p, x, [], mu);
```

Expected output:

```matlab
% smoothed matches y exactly because the quadratic model fits perfectly.
```

### Retrieving the structure `S` for prediction intervals

```matlab
t = linspace(0, 2*pi, 25);
y = sin(t) + 0.1 * randn(size(t));
[p, S, mu] = polyfit(t, y, 3);
[fitted, delta] = polyval(p, t, S, mu);
```

Expected output:

```matlab
% delta contains the one-standard-deviation prediction interval for each fitted sample.
```

### Weighted polynomial fit

```matlab
x = linspace(-1, 1, 7);
y = [1.1 0.4 0.2 0.0 0.1 0.5 1.4];
w = [1 2 2 4 2 2 1];
p = polyfit(x, y, 2, w);
```

Expected output:

```matlab
% Central samples influence the fit more heavily, matching MATLAB's weighting semantics.
```

### Using `polyfit` with `gpuArray` inputs

```matlab
x = gpuArray.linspace(-2, 2, 50);
y = gpuArray((x - 0.5).^3);
p = polyfit(x, y, 3);
```

Expected output:

```matlab
% p is returned on the host today; convert it back to gpuArray if desired.
```

### Complex-valued data with `polyfit`

```matlab
x = 0:4;
y = exp(1i * x);
p = polyfit(x, y, 2);
```

Expected output:

```matlab
% p is complex-valued and matches MATLAB's complex least-squares solution.
```

## FAQ

### What degree polynomial should I choose?
You must specify the degree `n` explicitly. A degree of `numel(x) - 1` interpolates the data
exactly, but higher degrees amplify noise dramatically. Use validation, cross-validation, or domain
knowledge to select a sensible degree.

### What do the outputs `S` and `mu` represent?
`S` packages the QR factor (`R`), degrees of freedom (`df`), and residual norm (`normr`) so you can
call `polyval(p, x, S, mu)` to obtain prediction intervals. `mu = [mean(x), std(x)]` records the
centering and scaling applied during the fit.

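The `delta` that `polyval` derives from `S` follows the standard least-squares prediction-error formula. A NumPy sketch, assuming the documented MATLAB formula `delta = normr/sqrt(df) * sqrt(1 + sum((V/R).^2, 2))` (`prediction_delta` is an illustrative name):

```python
import numpy as np

def prediction_delta(R, df, normr, t, n):
    """Sketch: per-sample prediction error from the fields stored in S."""
    V = np.vander(np.asarray(t, dtype=float).ravel(), n + 1)
    E = V @ np.linalg.inv(R)                 # MATLAB's V / R (right division)
    e = np.sqrt(1.0 + np.sum(E * E, axis=1))
    return normr / np.sqrt(df) * e
```
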
### How are weights interpreted?
Weights act multiplicatively on the residual norm. RunMat minimizes `||sqrt(w) .* (y - polyval(p, x))||_2`.
Zero weights ignore the corresponding samples; negative weights are not allowed and trigger
MATLAB-style errors.

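Concretely, weighting reduces to row-scaling before an ordinary least-squares solve (NumPy sketch omitting the `mu` centering for brevity; `weighted_fit` is an illustrative name):

```python
import numpy as np

def weighted_fit(x, y, n, w):
    """Sketch: scale the Vandermonde rows and the RHS by sqrt(w), then solve."""
    x, y, w = (np.asarray(a, dtype=float).ravel() for a in (x, y, w))
    sw = np.sqrt(w)
    V = np.vander(x, n + 1)
    p, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
    return p
```

A zero weight zeroes out that row of the system entirely, which is why zero-weighted samples are ignored.
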
### Can I keep the outputs on the GPU?
Today the solver runs on the CPU even when inputs live on the GPU. The returned coefficients, `S`,
and `mu` are CPU values. You can convert them back to `gpuArray` manually if needed. Future provider
updates may keep everything on the device automatically.

### Why does `mu(2)` need to be non-zero?
`mu(2)` is the scaling factor (standard deviation) applied to `x`. If all `x` values are identical,
the polynomial is ill-conditioned. RunMat mirrors MATLAB by treating this as a singular scaling and
raising an error unless the degree is zero.
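
The substitution `t = (x - mu(1)) / mu(2)` can be expanded back into powers of `x` by Horner-style accumulation, which is what the solver's coefficient transform does (NumPy sketch; `uncenter` is an illustrative name):

```python
import numpy as np

def uncenter(p, mu):
    """Rewrite coefficients in t = (x - m) / s as coefficients in x."""
    m, s = mu
    out = np.zeros(0)                            # zero polynomial, highest power first
    for c in p:
        # q(x) <- q(x) * (x - m) / s + c
        new = np.concatenate([out, [0.0]]) / s   # q(x) * x / s
        new[1:] -= (m / s) * out                 # minus (m / s) * q(x)
        new[-1] += c
        out = new
    return out
```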

## See Also
[polyval](./polyval), [poly](./poly), [roots](./roots), [mldivide](../linalg/ops/mldivide), [gpuArray](../../acceleration/gpu/gpuArray), [gather](../../acceleration/gpu/gather)

## Source & Feedback
- Source: `crates/runmat-runtime/src/builtins/math/poly/polyfit.rs`
- Found an issue? [Open a RunMat issue](https://github.com/runmat-org/runmat/issues/new/choose) with a minimal repro.
"#;

pub const GPU_SPEC: BuiltinGpuSpec = BuiltinGpuSpec {
    name: "polyfit",
    op_kind: GpuOpKind::Custom("polyfit"),
    supported_precisions: &[ScalarType::F32, ScalarType::F64],
    broadcast: BroadcastSemantics::Matlab,
    provider_hooks: &[ProviderHook::Custom("polyfit")],
    constant_strategy: ConstantStrategy::UniformBuffer,
    residency: ResidencyPolicy::GatherImmediately,
    nan_mode: ReductionNaN::Include,
    two_pass_threshold: None,
    workgroup_size: None,
    accepts_nan_mode: false,
    notes:
        "Providers may gather to the host and invoke the shared Householder QR solver; WGPU implements this path today.",
};

register_builtin_gpu_spec!(GPU_SPEC);

pub const FUSION_SPEC: BuiltinFusionSpec = BuiltinFusionSpec {
    name: "polyfit",
    shape: ShapeRequirements::Any,
    constant_strategy: ConstantStrategy::UniformBuffer,
    elementwise: None,
    reduction: None,
    emits_nan: false,
    notes: "Acts as a sink node—polynomial fitting materialises results eagerly and terminates fusion graphs.",
};

register_builtin_fusion_spec!(FUSION_SPEC);

#[cfg(feature = "doc_export")]
register_builtin_doc_text!("polyfit", DOC_MD);

#[runtime_builtin(
    name = "polyfit",
    category = "math/poly",
    summary = "Fit an n-th degree polynomial to data points with MATLAB-compatible outputs.",
    keywords = "polyfit,polynomial,least-squares,gpu",
    accel = "sink",
    sink = true
)]
fn polyfit_builtin(x: Value, y: Value, degree: Value, rest: Vec<Value>) -> Result<Value, String> {
    let eval = evaluate(x, y, degree, &rest)?;
    Ok(eval.coefficients())
}

/// Evaluate `polyfit`, returning the multi-output envelope used by the VM.
pub fn evaluate(x: Value, y: Value, degree: Value, rest: &[Value]) -> Result<PolyfitEval, String> {
    let deg = parse_degree(&degree)?;

    if let Some(eval) = try_gpu_polyfit(&x, &y, deg, rest)? {
        return Ok(eval);
    }

    let x_host = dispatcher::gather_if_needed(&x).map_err(|e| format!("polyfit: {e}"))?;
    let y_host = dispatcher::gather_if_needed(&y).map_err(|e| format!("polyfit: {e}"))?;

    let x_data = real_vector("polyfit", "X", x_host)?;
    let (y_data, is_complex_input) = complex_vector("polyfit", "Y", y_host)?;

    if x_data.len() != y_data.len() {
        return Err("polyfit: X and Y vectors must be the same length".to_string());
    }
    if x_data.is_empty() {
        return Err("polyfit: X and Y must contain at least one sample".to_string());
    }
    if deg + 1 > x_data.len() && x_data.len() > 1 {
        warn!(
            "polyfit: polynomial degree {} is ill-conditioned for {} data points; results may be inaccurate",
            deg,
            x_data.len()
        );
    }

    let weights = parse_weights(rest, x_data.len())?;
    let mut solution = solve_polyfit(&x_data, &y_data, deg, weights.as_deref())?;
    if is_complex_input {
        solution.is_complex = true;
    }

    PolyfitEval::from_solution(solution)
}

fn try_gpu_polyfit(
    x: &Value,
    y: &Value,
    degree: usize,
    rest: &[Value],
) -> Result<Option<PolyfitEval>, String> {
    let provider = match runmat_accelerate_api::provider() {
        Some(p) => p,
        None => return Ok(None),
    };

    let x_handle = match x {
        Value::GpuTensor(handle) => handle,
        _ => return Ok(None),
    };
    let y_handle = match y {
        Value::GpuTensor(handle) => handle,
        _ => return Ok(None),
    };

    if rest.len() > 1 {
        return Ok(None);
    }

    let weight_handle = match rest.first() {
        Some(Value::GpuTensor(handle)) => Some(handle),
        Some(_) => return Ok(None),
        None => None,
    };

    let result = match provider.polyfit(x_handle, y_handle, degree, weight_handle) {
        Ok(res) => res,
        Err(err) => {
            trace!("polyfit: provider path unavailable ({err}); falling back to host");
            return Ok(None);
        }
    };

    let solution = PolyfitSolution::from_provider(result)?;
    PolyfitEval::from_solution(solution).map(Some)
}

#[derive(Clone, Debug)]
struct PolyfitSolution {
    coeffs: Vec<Complex64>,
    r_matrix: Vec<f64>,
    mu_mean: f64,
    mu_scale: f64,
    normr: f64,
    df: f64,
    cols: usize,
    is_complex: bool,
}

impl PolyfitSolution {
    fn from_provider(result: ProviderPolyfitResult) -> Result<Self, String> {
        let cols = result.coefficients.len();
        if cols == 0 {
            return Err("polyfit: provider returned empty coefficient vector".to_string());
        }
        if result.r_matrix.len() != cols * cols {
            return Err("polyfit: provider returned malformed R matrix".to_string());
        }
        let [mu_mean, mu_scale] = result.mu;
        Ok(Self {
            coeffs: result
                .coefficients
                .into_iter()
                .map(|re| Complex64::new(re, 0.0))
                .collect(),
            r_matrix: result.r_matrix,
            mu_mean,
            mu_scale,
            normr: result.normr,
            df: result.df,
            cols,
            is_complex: false,
        })
    }
}

/// Multi-output envelope for `polyfit`, mirroring MATLAB semantics.
#[derive(Debug)]
pub struct PolyfitEval {
    coefficients: Value,
    stats: Value,
    mu: Value,
    is_complex: bool,
}

impl PolyfitEval {
    fn from_solution(solution: PolyfitSolution) -> Result<Self, String> {
        let coefficients = coefficients_to_value(&solution.coeffs)?;
        let stats = build_stats(
            &solution.r_matrix,
            solution.cols,
            solution.normr,
            solution.df,
        )?;
        let mu = build_mu(solution.mu_mean, solution.mu_scale)?;
        Ok(Self {
            coefficients,
            stats,
            mu,
            is_complex: solution.is_complex,
        })
    }

    /// Polynomial coefficients ordered from highest power to constant term.
    pub fn coefficients(&self) -> Value {
        self.coefficients.clone()
    }

    /// Structure `S` containing fields `R`, `df`, and `normr`.
    pub fn stats(&self) -> Value {
        self.stats.clone()
    }

    /// Centering and scaling vector `[mean(x), std(x)]`.
    pub fn mu(&self) -> Value {
        self.mu.clone()
    }

    /// Returns `true` if the fitted polynomial contains a complex coefficient.
    pub fn is_complex(&self) -> bool {
        self.is_complex
    }
}

fn parse_degree(value: &Value) -> Result<usize, String> {
    match value {
        Value::Int(i) => {
            let raw = i.to_i64();
            if raw < 0 {
                return Err("polyfit: degree must be a non-negative integer".to_string());
            }
            Ok(raw as usize)
        }
        Value::Num(n) => {
            if !n.is_finite() {
                return Err("polyfit: degree must be finite".to_string());
            }
            let rounded = n.round();
            if (rounded - n).abs() > EPS {
                return Err("polyfit: degree must be an integer".to_string());
            }
            if rounded < 0.0 {
                return Err("polyfit: degree must be a non-negative integer".to_string());
            }
            Ok(rounded as usize)
        }
        Value::Tensor(t) if tensor::is_scalar_tensor(t) => parse_degree(&Value::Num(t.data[0])),
        Value::LogicalArray(l) if l.len() == 1 => {
            parse_degree(&Value::Num(if l.data[0] != 0 { 1.0 } else { 0.0 }))
        }
        other => Err(format!(
            "polyfit: degree must be a scalar numeric value, got {other:?}"
        )),
    }
}

fn real_vector(context: &str, label: &str, value: Value) -> Result<Vec<f64>, String> {
    match value {
        Value::Tensor(mut tensor) => {
            ensure_vector_shape(context, label, &tensor.shape)?;
            Ok(tensor.data.drain(..).collect())
        }
        Value::LogicalArray(logical) => {
            let tensor = tensor::logical_to_tensor(&logical)?;
            ensure_vector_shape(context, label, &tensor.shape)?;
            Ok(tensor.data)
        }
        Value::Num(n) => Ok(vec![n]),
        Value::Int(i) => Ok(vec![i.to_f64()]),
        Value::Bool(b) => Ok(vec![if b { 1.0 } else { 0.0 }]),
        Value::GpuTensor(handle) => {
            let gathered = crate::builtins::common::gpu_helpers::gather_tensor(&handle)?;
            real_vector(context, label, Value::Tensor(gathered))
        }
        Value::Complex(_, _) | Value::ComplexTensor(_) => Err(format!(
            "{context}: {label} must be real-valued; complex inputs are not supported"
        )),
        other => Err(format!(
            "{context}: expected {label} to be a numeric vector, got {other:?}"
        )),
    }
}

fn complex_vector(
    context: &str,
    label: &str,
    value: Value,
) -> Result<(Vec<Complex64>, bool), String> {
    match value {
        Value::Tensor(mut tensor) => {
            ensure_vector_shape(context, label, &tensor.shape)?;
            let data = tensor
                .data
                .drain(..)
                .map(|x| Complex64::new(x, 0.0))
                .collect();
            // Real tensors never mark the fit as complex.
            Ok((data, false))
        }
        Value::ComplexTensor(tensor) => {
            ensure_vector_shape(context, label, &tensor.shape)?;
            let is_complex = tensor.data.iter().any(|&(_, im)| im.abs() > EPS);
            let data = tensor
                .data
                .into_iter()
                .map(|(re, im)| Complex64::new(re, im))
                .collect::<Vec<_>>();
            Ok((data, is_complex))
        }
        Value::LogicalArray(logical) => {
            let tensor = tensor::logical_to_tensor(&logical)?;
            ensure_vector_shape(context, label, &tensor.shape)?;
            Ok((
                tensor
                    .data
                    .iter()
                    .map(|&x| Complex64::new(x, 0.0))
                    .collect(),
                false,
            ))
        }
        Value::Num(n) => Ok((vec![Complex64::new(n, 0.0)], false)),
        Value::Int(i) => Ok((vec![Complex64::new(i.to_f64(), 0.0)], false)),
        Value::Bool(b) => Ok((vec![Complex64::new(if b { 1.0 } else { 0.0 }, 0.0)], false)),
        Value::Complex(re, im) => Ok((vec![Complex64::new(re, im)], im.abs() > EPS)),
        Value::GpuTensor(handle) => complex_vector(
            context,
            label,
            Value::Tensor(crate::builtins::common::gpu_helpers::gather_tensor(
                &handle,
            )?),
        ),
        other => Err(format!(
            "{context}: expected {label} to be a numeric vector, got {other:?}"
        )),
    }
}

fn parse_weights(rest: &[Value], len: usize) -> Result<Option<Vec<f64>>, String> {
    match rest.len() {
        0 => Ok(None),
        1 => {
            let gathered =
                dispatcher::gather_if_needed(&rest[0]).map_err(|e| format!("polyfit: {e}"))?;
            let data = real_vector("polyfit", "weights", gathered)?;
            if data.len() != len {
                return Err("polyfit: weight vector must match the size of X".to_string());
            }
            validate_weights(&data)?;
            Ok(Some(data))
        }
        _ => Err("polyfit: too many input arguments".to_string()),
    }
}

fn validate_weights(weights: &[f64]) -> Result<(), String> {
    for (idx, w) in weights.iter().enumerate() {
        if !w.is_finite() {
            return Err(format!(
                "polyfit: weight at position {} must be finite",
                idx + 1
            ));
        }
        if *w < 0.0 {
            return Err("polyfit: weights must be non-negative".to_string());
        }
    }
    Ok(())
}

562fn solve_polyfit(
563    x_data: &[f64],
564    y_data: &[Complex64],
565    degree: usize,
566    weights: Option<&[f64]>,
567) -> Result<PolyfitSolution, String> {
568    if x_data.len() != y_data.len() {
569        return Err("polyfit: X and Y vectors must be the same length".to_string());
570    }
571    if x_data.is_empty() {
572        return Err("polyfit: X and Y must contain at least one sample".to_string());
573    }
574    if let Some(w) = weights {
575        if w.len() != x_data.len() {
576            return Err("polyfit: weight vector must match the size of X".to_string());
577        }
578        validate_weights(w)?;
579    }
580
581    let mean = x_data.iter().sum::<f64>() / x_data.len() as f64;
582    if !mean.is_finite() {
583        return Err("polyfit: mean of X must be finite".to_string());
584    }
585    let scale = compute_scale(x_data, mean)?;
586    let scaled: Vec<f64> = x_data.iter().map(|&v| (v - mean) / scale).collect();
587
588    let mut rhs = y_data.to_vec();
589    for (idx, value) in rhs.iter().enumerate() {
590        if !value.re.is_finite() || !value.im.is_finite() {
591            return Err(format!(
592                "polyfit: Y must contain finite values (encountered NaN/Inf at position {})",
593                idx + 1
594            ));
595        }
596    }
597    if let Some(w) = weights {
598        apply_weights_rhs(&mut rhs, w)?;
599    }
600
601    let rows = scaled.len();
602    let cols = degree + 1;
603    let mut vandermonde = build_vandermonde(&scaled, cols);
604    if let Some(w) = weights {
605        apply_weights_matrix(&mut vandermonde, rows, cols, w)?;
606    }
607
608    let mut transformed_rhs = rhs.clone();
609    householder_qr(&mut vandermonde, rows, cols, &mut transformed_rhs)?;
610    let coeff_scaled = solve_upper(&vandermonde, rows, cols, &transformed_rhs)?;
611    let coeff_original = transform_coefficients(&coeff_scaled, mean, scale);
612
613    let normr = residual_norm(&transformed_rhs, rows, cols);
614    let df = if rows > cols {
615        (rows - cols) as f64
616    } else {
617        0.0
618    };
619    let r_matrix = extract_upper(&vandermonde, rows, cols);
620    let is_complex = coeff_original.iter().any(|c| c.im.abs() > EPS_NAN);
621
622    Ok(PolyfitSolution {
623        coeffs: coeff_original,
624        r_matrix,
625        mu_mean: mean,
626        mu_scale: scale,
627        normr,
628        df,
629        cols,
630        is_complex,
631    })
632}
633
634fn compute_scale(data: &[f64], mean: f64) -> Result<f64, String> {
635    if data.len() <= 1 {
636        return Ok(1.0);
637    }
638    let mut acc = 0.0;
639    for &value in data {
640        if !value.is_finite() {
641            return Err("polyfit: X must contain finite values".to_string());
642        }
643        let diff = value - mean;
644        acc += diff * diff;
645    }
646    let denom = (data.len() as f64 - 1.0).max(1.0);
647    let std = (acc / denom).sqrt();
648    let scale = if std.abs() <= EPS { 1.0 } else { std };
649    if !scale.is_finite() {
650        return Err("polyfit: failed to compute a stable scaling factor".to_string());
651    }
652    Ok(scale)
653}
654
655fn build_vandermonde(u: &[f64], cols: usize) -> Vec<f64> {
656    let rows = u.len();
657    let mut matrix = vec![0.0; rows * cols];
658    if cols == 0 {
659        return matrix;
660    }
661    for (row_idx, &value) in u.iter().enumerate() {
662        let mut powers = vec![0.0; cols];
663        powers[cols - 1] = 1.0;
664        for idx in (0..cols - 1).rev() {
665            powers[idx] = powers[idx + 1] * value;
666        }
667        for col_idx in 0..cols {
668            matrix[row_idx + col_idx * rows] = powers[col_idx];
669        }
670    }
671    matrix
672}
673
674fn apply_weights_matrix(
675    matrix: &mut [f64],
676    rows: usize,
677    cols: usize,
678    weights: &[f64],
679) -> Result<(), String> {
680    for (row, weight) in weights.iter().enumerate().take(rows) {
681        let sqrt_w = weight.sqrt();
682        if !sqrt_w.is_finite() {
683            return Err(format!(
684                "polyfit: weight at position {} must be finite",
685                row + 1
686            ));
687        }
688        for col in 0..cols {
689            let idx = row + col * rows;
690            matrix[idx] *= sqrt_w;
691        }
692    }
693    Ok(())
694}
695
696fn apply_weights_rhs(rhs: &mut [Complex64], weights: &[f64]) -> Result<(), String> {
697    for (idx, (value, weight)) in rhs.iter_mut().zip(weights.iter()).enumerate() {
698        let sqrt_w = weight.sqrt();
699        if !sqrt_w.is_finite() {
700            return Err(format!(
701                "polyfit: weight at position {} must be finite",
702                idx + 1
703            ));
704        }
705        *value *= sqrt_w;
706    }
707    Ok(())
708}
709
710fn ensure_vector_shape(context: &str, label: &str, shape: &[usize]) -> Result<(), String> {
711    if !is_vector_shape(shape) {
712        return Err(format!("{context}: {label} must be a vector"));
713    }
714    Ok(())
715}
716
717fn is_vector_shape(shape: &[usize]) -> bool {
718    shape.iter().copied().filter(|&dim| dim > 1).count() <= 1
719}
720
721fn householder_qr(
722    matrix: &mut [f64],
723    rows: usize,
724    cols: usize,
725    rhs: &mut [Complex64],
726) -> Result<(), String> {
727    let min_dim = rows.min(cols);
728    for k in 0..min_dim {
729        let mut norm_sq = 0.0;
730        for row in k..rows {
731            let val = matrix[row + k * rows];
732            norm_sq += val * val;
733        }
734        if norm_sq <= EPS {
735            continue;
736        }
737        let norm = norm_sq.sqrt();
738        let x0 = matrix[k + k * rows];
739        let alpha = if x0 >= 0.0 { -norm } else { norm };
740        let mut v = vec![0.0; rows - k];
741        v[0] = x0 - alpha;
742        for row in (k + 1)..rows {
743            v[row - k] = matrix[row + k * rows];
744        }
745        let v_norm_sq: f64 = v.iter().map(|&x| x * x).sum();
746        if v_norm_sq <= EPS {
747            continue;
748        }
749        let beta = 2.0 / v_norm_sq;
750        matrix[k + k * rows] = alpha;
751        for row in (k + 1)..rows {
752            matrix[row + k * rows] = 0.0;
753        }
754        for col in (k + 1)..cols {
755            let mut dot = 0.0;
756            for (idx, &vi) in v.iter().enumerate() {
757                let row_idx = k + idx;
758                dot += vi * matrix[row_idx + col * rows];
759            }
760            let factor = beta * dot;
761            for (idx, &vi) in v.iter().enumerate() {
762                let row_idx = k + idx;
763                matrix[row_idx + col * rows] -= factor * vi;
764            }
765        }
766        let mut dot = Complex64::new(0.0, 0.0);
767        for (idx, &vi) in v.iter().enumerate() {
768            let row_idx = k + idx;
769            dot += rhs[row_idx] * vi;
770        }
771        let factor = Complex64::new(beta, 0.0) * dot;
772        for (idx, &vi) in v.iter().enumerate() {
773            let row_idx = k + idx;
774            rhs[row_idx] -= factor * vi;
775        }
776    }
777    Ok(())
778}
779
780fn solve_upper(
781    matrix: &[f64],
782    rows: usize,
783    cols: usize,
784    rhs: &[Complex64],
785) -> Result<Vec<Complex64>, String> {
786    if rhs.len() < rows {
787        return Err("polyfit internal error: RHS dimension mismatch".to_string());
788    }
789    let mut coeffs = vec![Complex64::new(0.0, 0.0); cols];
790    for col in (0..cols).rev() {
791        let diag = if col < rows {
792            matrix[col + col * rows]
793        } else {
794            0.0
795        };
796        if diag.abs() <= EPS {
797            coeffs[col] = Complex64::new(0.0, 0.0);
798            continue;
799        }
800        let mut acc = if col < rows {
801            rhs[col]
802        } else {
803            Complex64::new(0.0, 0.0)
804        };
805        for next in (col + 1)..cols {
806            let idx = if col < rows {
807                matrix[col + next * rows]
808            } else {
809                0.0
810            };
811            acc -= Complex64::new(idx, 0.0) * coeffs[next];
812        }
813        coeffs[col] = acc / Complex64::new(diag, 0.0);
814    }
815    Ok(coeffs)
816}
817
818fn residual_norm(rhs: &[Complex64], rows: usize, cols: usize) -> f64 {
819    let tail_start = rows.min(cols);
820    let mut acc = 0.0;
821    for value in rhs.iter().skip(tail_start) {
822        acc += value.norm_sqr();
823    }
824    acc.sqrt()
825}
826
827fn extract_upper(matrix: &[f64], rows: usize, cols: usize) -> Vec<f64> {
828    let mut output = vec![0.0; cols * cols];
829    for col in 0..cols {
830        for row in 0..=col {
831            if row < rows {
832                output[row + col * cols] = matrix[row + col * rows];
833            }
834        }
835    }
836    output
837}
838
839fn transform_coefficients(coeffs: &[Complex64], mean: f64, scale: f64) -> Vec<Complex64> {
840    let mut poly: Vec<Complex64> = Vec::new();
841    for &coeff in coeffs {
842        let mut next = vec![Complex64::new(0.0, 0.0); poly.len() + 1];
843        for (idx, &value) in poly.iter().enumerate() {
844            next[idx + 1] += value / scale;
845            next[idx] -= value * (mean / scale);
846        }
847        next[0] += coeff;
848        poly = next;
849    }
850    poly.reverse();
851    poly
852}
853
/// Convert solved coefficients into a MATLAB-style `1 x (n+1)` row vector,
/// promoting to a complex tensor when any imaginary part exceeds `EPS_NAN`
/// or a real part is non-finite.
fn coefficients_to_value(coeffs: &[Complex64]) -> Result<Value, String> {
    let all_real = coeffs
        .iter()
        .all(|c| c.im.abs() <= EPS_NAN && c.re.is_finite());
    if all_real {
        let data: Vec<f64> = coeffs.iter().map(|c| c.re).collect();
        let tensor =
            Tensor::new(data, vec![1, coeffs.len()]).map_err(|e| format!("polyfit: {e}"))?;
        Ok(Value::Tensor(tensor))
    } else {
        let data: Vec<(f64, f64)> = coeffs.iter().map(|c| (c.re, c.im)).collect();
        let tensor =
            ComplexTensor::new(data, vec![1, coeffs.len()]).map_err(|e| format!("polyfit: {e}"))?;
        Ok(Value::ComplexTensor(tensor))
    }
}

/// Assemble the MATLAB-compatible stats struct with fields `R`, `df`, and
/// `normr`, where `r` holds the `n x n` triangular factor in column order.
fn build_stats(r: &[f64], n: usize, normr: f64, df: f64) -> Result<Value, String> {
    let tensor = Tensor::new(r.to_vec(), vec![n, n]).map_err(|e| format!("polyfit: {e}"))?;
    let mut st = StructValue::new();
    st.fields.insert("R".to_string(), Value::Tensor(tensor));
    st.fields.insert("df".to_string(), Value::Num(df));
    st.fields.insert("normr".to_string(), Value::Num(normr));
    Ok(Value::Struct(st))
}

/// Assemble the centering/scaling vector `mu = [mean, scale]`, rejecting
/// degenerate (zero or non-finite) scales.
fn build_mu(mean: f64, scale: f64) -> Result<Value, String> {
    if !scale.is_finite() || scale.abs() <= EPS {
        return Err("polyfit: mu(2) must be non-zero and finite".to_string());
    }
    let tensor = Tensor::new(vec![mean, scale], vec![1, 2]).map_err(|e| format!("polyfit: {e}"))?;
    Ok(Value::Tensor(tensor))
}

/// Real-valued polyfit result returned to acceleration providers that gather
/// inputs to the host.
#[derive(Debug, Clone)]
pub struct PolyfitHostRealResult {
    pub coefficients: Vec<f64>,
    pub r_matrix: Vec<f64>,
    pub mu: [f64; 2],
    pub normr: f64,
    pub df: f64,
}

/// Host-side entry point for providers: validates the inputs, runs the shared
/// solver on real data, and rejects any solution that comes back complex.
pub fn polyfit_host_real_for_provider(
    x: &[f64],
    y: &[f64],
    degree: usize,
    weights: Option<&[f64]>,
) -> Result<PolyfitHostRealResult, String> {
    if x.len() != y.len() {
        return Err("polyfit: X and Y vectors must be the same length".to_string());
    }
    if let Some(w) = weights {
        if w.len() != x.len() {
            return Err("polyfit: weight vector must match the size of X".to_string());
        }
        validate_weights(w)?;
    }
    let complex_y: Vec<Complex64> = y.iter().copied().map(|v| Complex64::new(v, 0.0)).collect();
    let solution = solve_polyfit(x, &complex_y, degree, weights)?;
    let PolyfitSolution {
        coeffs,
        r_matrix,
        mu_mean,
        mu_scale,
        normr,
        df,
        cols: _,
        is_complex,
    } = solution;
    if is_complex {
        return Err(
            "polyfit: provider fallback produced complex coefficients for real data".to_string(),
        );
    }
    let coeffs: Vec<f64> = coeffs.into_iter().map(|c| c.re).collect();
    let mu = [mu_mean, mu_scale];
    Ok(PolyfitHostRealResult {
        coefficients: coeffs,
        r_matrix,
        mu,
        normr,
        df,
    })
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::builtins::common::test_support;

    #[test]
    fn fits_linear_data() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0, 3.0], vec![4, 1]).unwrap();
        let y_vals: Vec<f64> = (0..4).map(|i| 1.5 * i as f64 + 2.0).collect();
        let y = Tensor::new(y_vals, vec![4, 1]).unwrap();
        let eval = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(1)),
            &[],
        )
        .expect("polyfit");
        match eval.coefficients() {
            Value::Tensor(t) => {
                assert_eq!(t.shape, vec![1, 2]);
                assert!((t.data[0] - 1.5).abs() < 1e-10);
                assert!((t.data[1] - 2.0).abs() < 1e-10);
            }
            other => panic!("expected tensor coefficients, got {other:?}"),
        }
    }

    #[test]
    fn returns_struct_and_mu() {
        let x = Tensor::new(vec![-1.0, 0.0, 1.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 0.0, 1.0], vec![3, 1]).unwrap();
        let eval = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[],
        )
        .expect("polyfit");
        match eval.stats() {
            Value::Struct(s) => {
                assert!(s.fields.contains_key("R"));
                assert!(s.fields.contains_key("df"));
                assert!(s.fields.contains_key("normr"));
            }
            other => panic!("expected struct, got {other:?}"),
        }
        match eval.mu() {
            Value::Tensor(t) => {
                assert_eq!(t.shape, vec![1, 2]);
                assert!((t.data[0]).abs() < 1e-10);
                assert!(t.data[1].abs() > 0.0);
            }
            other => panic!("expected tensor mu, got {other:?}"),
        }
    }

    #[test]
    fn weighted_fit_matches_unweighted_when_weights_equal() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
        let weights = Tensor::new(vec![1.0, 1.0, 1.0], vec![3, 1]).unwrap();
        let eval_unweighted = evaluate(
            Value::Tensor(x.clone()),
            Value::Tensor(y.clone()),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[],
        )
        .expect("polyfit");
        let eval_weighted = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[Value::Tensor(weights)],
        )
        .expect("polyfit");
        assert_eq!(eval_unweighted.coefficients(), eval_weighted.coefficients());
    }

    #[test]
    fn accepts_logical_degree_scalar() {
        let x = Tensor::new(vec![0.0, 1.0], vec![2, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0], vec![2, 1]).unwrap();
        let logical = runmat_builtins::LogicalArray::new(vec![1], vec![1, 1]).unwrap();
        let eval = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::LogicalArray(logical),
            &[],
        )
        .expect("polyfit");
        assert!(matches!(eval.coefficients(), Value::Tensor(_)));
    }

    #[test]
    fn rejects_non_integer_degree() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
        let err = evaluate(Value::Tensor(x), Value::Tensor(y), Value::Num(1.5), &[])
            .expect_err("polyfit should reject non-integer degree");
        assert!(err.contains("integer"));
    }

    #[test]
    fn rejects_infinite_weights() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
        let weights = Tensor::new(vec![1.0, f64::INFINITY, 1.0], vec![3, 1]).unwrap();
        let err = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[Value::Tensor(weights)],
        )
        .expect_err("polyfit should reject infinite weights");
        assert!(err.contains("weight at position 2"));
    }

    #[test]
    fn gpu_inputs_are_gathered() {
        test_support::with_test_provider(|provider| {
            let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
            let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
            let view = runmat_accelerate_api::HostTensorView {
                data: &x.data,
                shape: &x.shape,
            };
            let x_handle = provider.upload(&view).expect("upload");
            let view_y = runmat_accelerate_api::HostTensorView {
                data: &y.data,
                shape: &y.shape,
            };
            let y_handle = provider.upload(&view_y).expect("upload");
            let eval = evaluate(
                Value::GpuTensor(x_handle),
                Value::GpuTensor(y_handle),
                Value::Int(runmat_builtins::IntValue::I32(2)),
                &[],
            )
            .expect("polyfit");
            assert!(matches!(eval.coefficients(), Value::Tensor(_)));
        });
    }

    #[test]
    fn gpu_weights_are_gathered() {
        test_support::with_test_provider(|provider| {
            let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
            let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
            let weights = Tensor::new(vec![1.0, 2.0, 3.0], vec![3, 1]).unwrap();

            let x_view = runmat_accelerate_api::HostTensorView {
                data: &x.data,
                shape: &x.shape,
            };
            let y_view = runmat_accelerate_api::HostTensorView {
                data: &y.data,
                shape: &y.shape,
            };
            let w_view = runmat_accelerate_api::HostTensorView {
                data: &weights.data,
                shape: &weights.shape,
            };

            let x_handle = provider.upload(&x_view).expect("upload x");
            let y_handle = provider.upload(&y_view).expect("upload y");
            let w_handle = provider.upload(&w_view).expect("upload weights");

            let cpu_eval = evaluate(
                Value::Tensor(x.clone()),
                Value::Tensor(y.clone()),
                Value::Int(runmat_builtins::IntValue::I32(2)),
                &[Value::Tensor(weights.clone())],
            )
            .expect("cpu polyfit");

            let gpu_eval = evaluate(
                Value::GpuTensor(x_handle.clone()),
                Value::GpuTensor(y_handle.clone()),
                Value::Int(runmat_builtins::IntValue::I32(2)),
                &[Value::GpuTensor(w_handle.clone())],
            )
            .expect("gpu polyfit with weights");

            assert_eq!(cpu_eval.coefficients(), gpu_eval.coefficients());
            assert_eq!(cpu_eval.mu(), gpu_eval.mu());

            let _ = provider.free(&x_handle);
            let _ = provider.free(&y_handle);
            let _ = provider.free(&w_handle);
        });
    }

    #[cfg(feature = "wgpu")]
    #[test]
    fn polyfit_wgpu_matches_cpu() {
        let options = runmat_accelerate::backend::wgpu::provider::WgpuProviderOptions::default();
        let _provider =
            match runmat_accelerate::backend::wgpu::provider::register_wgpu_provider(options) {
                Ok(p) => p,
                Err(err) => {
                    eprintln!("polyfit_wgpu_matches_cpu: skipping test ({err})");
                    return;
                }
            };
        let x = Tensor::new(vec![0.0, 1.0, 2.0, 3.0], vec![4, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0, 7.0, 13.0], vec![4, 1]).unwrap();

        let cpu_eval = evaluate(
            Value::Tensor(x.clone()),
            Value::Tensor(y.clone()),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[],
        )
        .expect("cpu polyfit");

        let trait_provider = runmat_accelerate_api::provider().expect("wgpu provider registered");
        let x_view = runmat_accelerate_api::HostTensorView {
            data: &x.data,
            shape: &x.shape,
        };
        let y_view = runmat_accelerate_api::HostTensorView {
            data: &y.data,
            shape: &y.shape,
        };
        let x_handle = trait_provider.upload(&x_view).expect("upload x");
        let y_handle = trait_provider.upload(&y_view).expect("upload y");

        let gpu_eval = evaluate(
            Value::GpuTensor(x_handle.clone()),
            Value::GpuTensor(y_handle.clone()),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[],
        )
        .expect("gpu polyfit");

        let _ = trait_provider.free(&x_handle);
        let _ = trait_provider.free(&y_handle);

        let cpu_coeff = match cpu_eval.coefficients() {
            Value::Tensor(t) => t,
            other => panic!("expected tensor coefficients, got {other:?}"),
        };
        let gpu_coeff = match gpu_eval.coefficients() {
            Value::Tensor(t) => t,
            other => panic!("expected tensor coefficients, got {other:?}"),
        };
        assert_eq!(cpu_coeff.shape, gpu_coeff.shape);
        for (a, b) in cpu_coeff.data.iter().zip(gpu_coeff.data.iter()) {
            assert!((a - b).abs() < 1e-9, "coeff mismatch {a} vs {b}");
        }

        let cpu_mu = match cpu_eval.mu() {
            Value::Tensor(t) => t,
            other => panic!("expected tensor mu, got {other:?}"),
        };
        let gpu_mu = match gpu_eval.mu() {
            Value::Tensor(t) => t,
            other => panic!("expected tensor mu, got {other:?}"),
        };
        assert_eq!(cpu_mu.shape, gpu_mu.shape);
        for (a, b) in cpu_mu.data.iter().zip(gpu_mu.data.iter()) {
            assert!((a - b).abs() < 1e-9, "mu mismatch {a} vs {b}");
        }

        let cpu_stats = match cpu_eval.stats() {
            Value::Struct(s) => s,
            other => panic!("expected struct stats, got {other:?}"),
        };
        let gpu_stats = match gpu_eval.stats() {
            Value::Struct(s) => s,
            other => panic!("expected struct stats, got {other:?}"),
        };
        let cpu_r = match cpu_stats.fields.get("R").expect("R present") {
            Value::Tensor(t) => t.clone(),
            other => panic!("expected tensor R, got {other:?}"),
        };
        let gpu_r = match gpu_stats.fields.get("R").expect("R present") {
            Value::Tensor(t) => t.clone(),
            other => panic!("expected tensor R, got {other:?}"),
        };
        assert_eq!(cpu_r.shape, gpu_r.shape);
        for (a, b) in cpu_r.data.iter().zip(gpu_r.data.iter()) {
            assert!((a - b).abs() < 1e-9, "R mismatch {a} vs {b}");
        }
        let cpu_df = match cpu_stats.fields.get("df").expect("df present") {
            Value::Num(n) => *n,
            other => panic!("expected numeric df, got {other:?}"),
        };
        let gpu_df = match gpu_stats.fields.get("df").expect("df present") {
            Value::Num(n) => *n,
            other => panic!("expected numeric df, got {other:?}"),
        };
        assert!((cpu_df - gpu_df).abs() < 1e-9);
        let cpu_normr = match cpu_stats.fields.get("normr").expect("normr present") {
            Value::Num(n) => *n,
            other => panic!("expected numeric normr, got {other:?}"),
        };
        let gpu_normr = match gpu_stats.fields.get("normr").expect("normr present") {
            Value::Num(n) => *n,
            other => panic!("expected numeric normr, got {other:?}"),
        };
        assert!((cpu_normr - gpu_normr).abs() < 1e-9);
    }

    #[test]
    fn rejects_mismatched_lengths() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 2.0], vec![2, 1]).unwrap();
        let err = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(1)),
            &[],
        )
        .expect_err("polyfit should reject mismatched vector lengths");
        assert!(err.contains("same length"));
    }

    #[test]
    fn rejects_non_vector_inputs() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0, 3.0], vec![2, 2]).unwrap();
        let y = Tensor::new(vec![1.0, 2.0, 3.0, 4.0], vec![4, 1]).unwrap();
        let err = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(1)),
            &[],
        )
        .expect_err("polyfit should reject non-vector X");
        assert!(err.contains("vector"));
    }

    #[test]
    fn rejects_weight_length_mismatch() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
        let weights = Tensor::new(vec![1.0, 1.0], vec![2, 1]).unwrap();
        let err = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[Value::Tensor(weights)],
        )
        .expect_err("polyfit should reject mismatched weights");
        assert!(err.contains("weight vector must match"));
    }

    #[test]
    fn rejects_negative_weights() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let y = Tensor::new(vec![1.0, 3.0, 7.0], vec![3, 1]).unwrap();
        let weights = Tensor::new(vec![1.0, -1.0, 1.0], vec![3, 1]).unwrap();
        let err = evaluate(
            Value::Tensor(x),
            Value::Tensor(y),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[Value::Tensor(weights)],
        )
        .expect_err("polyfit should reject negative weights");
        assert!(err.contains("non-negative"));
    }

    #[test]
    fn fits_complex_data() {
        let x = Tensor::new(vec![0.0, 1.0, 2.0], vec![3, 1]).unwrap();
        let complex_values =
            ComplexTensor::new(vec![(0.0, 1.0), (1.0, 0.5), (4.0, -0.25)], vec![3, 1]).unwrap();
        let eval = evaluate(
            Value::Tensor(x),
            Value::ComplexTensor(complex_values),
            Value::Int(runmat_builtins::IntValue::I32(2)),
            &[],
        )
        .expect("polyfit complex");
        match eval.coefficients() {
            Value::ComplexTensor(t) => {
                assert_eq!(t.shape, vec![1, 3]);
            }
            other => panic!("expected complex tensor coefficients, got {other:?}"),
        }
    }

    #[test]
    fn doc_examples_present() {
        #[cfg(feature = "doc_export")]
        {
            let blocks = test_support::doc_examples(DOC_MD);
            assert!(!blocks.is_empty());
        }
    }
}
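
    // Direct unit coverage for the helpers defined above (a small addition:
    // these exercise `coefficients_to_value` and `build_mu` in isolation,
    // mirroring the behavior visible in their implementations).
    #[test]
    fn coefficients_to_value_promotes_complex() {
        // Purely real coefficients stay a real 1 x n row vector.
        let real = vec![Complex64::new(1.0, 0.0), Complex64::new(2.0, 0.0)];
        assert!(matches!(
            coefficients_to_value(&real).expect("real coefficients"),
            Value::Tensor(_)
        ));
        // An imaginary part above EPS_NAN promotes the result to complex.
        let complex = vec![Complex64::new(1.0, 0.0), Complex64::new(2.0, 0.5)];
        match coefficients_to_value(&complex).expect("complex coefficients") {
            Value::ComplexTensor(t) => assert_eq!(t.shape, vec![1, 2]),
            other => panic!("expected complex tensor coefficients, got {other:?}"),
        }
    }

    #[test]
    fn build_mu_rejects_degenerate_scale() {
        // A zero scale is below EPS and must be rejected.
        let err = build_mu(0.0, 0.0).expect_err("zero scale should be rejected");
        assert!(err.contains("mu(2)"));
    }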