Trait varpro::prelude::LeastSquaresProblem

pub trait LeastSquaresProblem<F, M, N> where
    F: ComplexField,
    N: Dim,
    M: Dim,
{
    type ResidualStorage: ContiguousStorageMut<F, M, U1>;
    type JacobianStorage: ContiguousStorageMut<F, M, N>;
    type ParameterStorage: ContiguousStorageMut<F, N, U1> + Clone;

    fn set_params(&mut self, x: &Matrix<F, N, U1, Self::ParameterStorage>);
    fn params(&self) -> Matrix<F, N, U1, Self::ParameterStorage>;
    fn residuals(&self) -> Option<Matrix<F, M, U1, Self::ResidualStorage>>;
    fn jacobian(&self) -> Option<Matrix<F, M, N, Self::JacobianStorage>>;
}

A least squares minimization problem.

This is what LevenbergMarquardt needs to compute the residuals and the Jacobian. See the module documentation for a usage example.
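As a minimal illustration (not taken from the crate documentation), a user-defined problem could implement this trait as sketched below. The ExponentialDecay type, its fields, and the model y = exp(-alpha * x) are made up for this sketch; it assumes dynamically sized vectors backed by nalgebra's Owned storage.

    use nalgebra::storage::Owned;
    use nalgebra::{DMatrix, DVector, Dynamic};
    use varpro::prelude::LeastSquaresProblem;

    // Hypothetical toy problem: fit y = exp(-alpha * x) to observed data.
    struct ExponentialDecay {
        x: DVector<f64>, // locations
        y: DVector<f64>, // observations
        alpha: f64,      // current nonlinear parameter
    }

    impl LeastSquaresProblem<f64, Dynamic, Dynamic> for ExponentialDecay {
        type ResidualStorage = Owned<f64, Dynamic>;
        type JacobianStorage = Owned<f64, Dynamic, Dynamic>;
        type ParameterStorage = Owned<f64, Dynamic>;

        fn set_params(&mut self, p: &DVector<f64>) {
            // Cache the single parameter; a more expensive problem would also
            // recompute any state that depends on it here.
            self.alpha = p[0];
        }

        fn params(&self) -> DVector<f64> {
            DVector::from_element(1, self.alpha)
        }

        fn residuals(&self) -> Option<DVector<f64>> {
            // r_i = y_i - exp(-alpha * x_i)
            Some(&self.y - self.x.map(|x| (-self.alpha * x).exp()))
        }

        fn jacobian(&self) -> Option<DMatrix<f64>> {
            // d r_i / d alpha = x_i * exp(-alpha * x_i)
            let col = self.x.map(|x| x * (-self.alpha * x).exp());
            Some(DMatrix::from_columns(&[col]))
        }
    }

The solver only interacts with a problem through these four methods: it sets new parameters via set_params and then queries residuals and jacobian at that point.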

Associated Types

type ResidualStorage: ContiguousStorageMut<F, M, U1>

Storage type used for the residuals. Use nalgebra::storage::Owned<F, M> if you want to use VectorN or MatrixMN.

type JacobianStorage: ContiguousStorageMut<F, M, N>

type ParameterStorage: ContiguousStorageMut<F, N, U1> + Clone
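For illustration only (not from the crate docs), a statically sized problem can pick Owned storage with fixed dimensions, as the ResidualStorage description suggests. The TinyProblem type and the choice of 3 residuals and 2 parameters are hypothetical; the method bodies are left as todo!() since only the associated types matter here:

    use nalgebra::storage::Owned;
    use nalgebra::{Matrix, Vector, U2, U3};
    use varpro::prelude::LeastSquaresProblem;

    // Hypothetical problem with 3 residuals (M = U3) and 2 parameters (N = U2).
    struct TinyProblem;

    impl LeastSquaresProblem<f64, U3, U2> for TinyProblem {
        type ResidualStorage = Owned<f64, U3>;     // backs a Vector3<f64>
        type JacobianStorage = Owned<f64, U3, U2>; // backs a Matrix3x2<f64>
        type ParameterStorage = Owned<f64, U2>;    // backs a Vector2<f64>

        fn set_params(&mut self, _x: &Vector<f64, U2, Self::ParameterStorage>) { todo!() }
        fn params(&self) -> Vector<f64, U2, Self::ParameterStorage> { todo!() }
        fn residuals(&self) -> Option<Vector<f64, U3, Self::ResidualStorage>> { todo!() }
        fn jacobian(&self) -> Option<Matrix<f64, U3, U2, Self::JacobianStorage>> { todo!() }
    }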


Required methods

fn set_params(&mut self, x: &Matrix<F, N, U1, Self::ParameterStorage>)

Set the stored parameters $\vec{x}$.

fn params(&self) -> Matrix<F, N, U1, Self::ParameterStorage>

Get the current parameter vector $\vec{x}$.

fn residuals(&self) -> Option<Matrix<F, M, U1, Self::ResidualStorage>>

Compute the residual vector.

fn jacobian(&self) -> Option<Matrix<F, M, N, Self::JacobianStorage>>

Compute the Jacobian of the residual vector.


Implementors

impl<'a, ScalarType> LeastSquaresProblem<ScalarType, Dynamic, Dynamic> for LevMarProblem<'a, ScalarType> where
    ScalarType: Scalar + ComplexField,
    ScalarType::RealField: Mul<ScalarType, Output = ScalarType> + Float

type ResidualStorage = Owned<ScalarType, Dynamic>

type JacobianStorage = Owned<ScalarType, Dynamic, Dynamic>

type ParameterStorage = Owned<ScalarType, Dynamic>

fn set_params(&mut self, params: &Vector<ScalarType, Dynamic, Self::ParameterStorage>)

Set the (nonlinear) model parameters $\vec{\alpha}$ and update the internal state of the problem accordingly. The parameters are expected in the same order in which the parameter names were provided at model creation. So if we gave &["tau","beta"] as parameters at model creation, the function expects the layout of the parameter vector to be $\vec{\alpha}=(\tau,\beta)^T$.
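For example (the values and the `problem` variable are hypothetical stand-ins), if the model was created with the parameter names &["tau","beta"], the parameter vector is assembled in that same order:

    use nalgebra::DVector;

    fn main() {
        // The model was created with parameter names &["tau", "beta"], so the
        // vector handed to set_params must be laid out as (tau, beta)^T.
        let (tau, beta) = (2.5, 0.3);
        let alpha = DVector::from_vec(vec![tau, beta]);
        // problem.set_params(&alpha); // `problem`: some LevMarProblem value (hypothetical)
        assert_eq!((alpha[0], alpha[1]), (tau, beta));
    }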

fn params(&self) -> Vector<ScalarType, Dynamic, Self::ParameterStorage>

Retrieve the (nonlinear) model parameters as a vector $\vec{\alpha}$. The order of the parameters in the vector is the same as the order of the parameter names given on model creation. E.g. if the parameters at model creation were given as &["tau","beta"], then the returned vector is $\vec{\alpha} = (\tau,\beta)^T$, i.e. the value of parameter $\tau$ is at index 0 and the value of $\beta$ at index 1.

fn residuals(&self) -> Option<Vector<ScalarType, Dynamic, Self::ResidualStorage>>

Calculate the residual vector $\vec{r}_w$ of weighted residuals at every location $\vec{x}$. The residual is calculated from the data $\vec{y}$ as $\vec{r}_w(\vec{\alpha}) = W\cdot(\vec{y}-\vec{f}(\vec{x},\vec{\alpha},\vec{c}(\vec{\alpha})))$, where $\vec{f}(\vec{x},\vec{\alpha},\vec{c})$ is the model function evaluated at the currently set nonlinear parameters $\vec{\alpha}$ and the linear coefficients $\vec{c}(\vec{\alpha})$. The VarPro algorithm calculates $\vec{c}(\vec{\alpha})$ as the coefficients that provide the best linear least squares fit, given the current $\vec{\alpha}$. For more info on the math of VarPro, see e.g. here.
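The following is a hypothetical sketch, not the crate's actual implementation, of how such a weighted residual could be computed with nalgebra: given the weighted function matrix $\Phi_w = W\cdot\Phi(\vec{x},\vec{\alpha})$ and the weighted data $\vec{y}_w = W\cdot\vec{y}$, the linear coefficients $\vec{c}(\vec{\alpha})$ come from a linear least squares solve (here via SVD). The function name and arguments are assumptions for this sketch.

    use nalgebra::{DMatrix, DVector};

    // Hypothetical helper: solve Phi_w * c = y_w in the least squares sense
    // to obtain c(alpha), then return r_w = y_w - Phi_w * c(alpha).
    fn weighted_residual(phi_w: &DMatrix<f64>, y_w: &DVector<f64>) -> Option<DVector<f64>> {
        let svd = phi_w.clone().svd(true, true);
        let c = svd.solve(y_w, f64::EPSILON).ok()?; // linear coefficients c(alpha)
        Some(y_w - phi_w * c)
    }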

fn jacobian(&self) -> Option<Matrix<ScalarType, Dynamic, Dynamic, Self::JacobianStorage>>

Calculate the Jacobian matrix of the weighted residuals $\vec{r}_w(\vec{\alpha})$. For more info on how the Jacobian is calculated in the VarPro algorithm, see e.g. here.
