pub fn minimize<F: Func<U>, G: Func<U>, U: Clone>(
func: F,
xinit: &[f64],
bounds: &[(f64, f64)],
cons: &[G],
args: U,
maxeval: usize,
rhobeg: RhoBeg,
stop_tol: Option<StopTols>,
) -> Result<(SuccessStatus, Vec<f64>, f64), (FailStatus, Vec<f64>, f64)>
Minimizes a function using the Constrained Optimization By Linear Approximation (COBYLA) method.
§Arguments
* `func` - the function to minimize
* `xinit` - n-vector of the initial guess
* `bounds` - x domain specified as an n-vector of tuples `(lower bound, upper bound)`
* `cons` - slice of constraint functions intended to be nonnegative at the solution
* `args` - user data passed to the objective and constraint functions
* `maxeval` - maximum number of objective function evaluations
* `rhobeg` - initial changes to the x components
* `stop_tol` - optional stopping tolerances on the objective value and the variables
§Returns
The status of the optimization process, the best `x` found (argmin), and the corresponding objective function value
§Panics
When vector arguments such as `bounds` or `xtol_abs` do not have the same length as `xinit`
§Implementation note:
This implementation is a translation of NLopt 2.7.1. See also the NLopt COBYLA documentation.
§Example
use cobyla::{minimize, Func, RhoBeg};
fn paraboloid(x: &[f64], _data: &mut ()) -> f64 {
10. * (x[0] + 1.).powf(2.) + x[1].powf(2.)
}
let x = vec![1., 1.];
// Constraint function intended to be nonnegative eventually: here `x_0 > 0`
let cstr1 = |x: &[f64], _user_data: &mut ()| x[0];
let cons: Vec<&dyn Func<()>> = vec![&cstr1];
match minimize(
    paraboloid,
    &x,
&[(-10., 10.), (-10., 10.)],
&cons,
(),
200,
RhoBeg::All(0.5),
None
) {
Ok((status, x_opt, y_opt)) => {
println!("status = {:?}", status);
println!("x_opt = {:?}", x_opt);
println!("y_opt = {}", y_opt);
}
Err((e, _, _)) => println!("Optim error: {:?}", e),
}
§Algorithm description:
COBYLA minimizes an objective function F(X) subject to M inequality constraints on X, where X is a vector of variables that has N components.
The algorithm employs linear approximations to the objective and constraint functions, the approximations being formed by linear interpolation at N+1 points in the space of the variables. We regard these interpolation points as vertices of a simplex.
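To make the interpolation step concrete, here is a minimal, hypothetical Rust sketch (independent of this crate's API) of fitting a linear model through N+1 simplex vertices. It assumes the special case where the vertices are a base point plus coordinate steps, which reduces the interpolation system to simple difference quotients; the function names are illustrative only.

```rust
// Sketch: linear interpolation of f at N+1 simplex vertices.
// Assumes the vertices are v0 and v0 + h*e_i (coordinate steps from a
// base vertex), so the interpolation system solves by differencing.
fn linear_model(f: impl Fn(&[f64]) -> f64, v0: &[f64], h: f64) -> (f64, Vec<f64>) {
    let f0 = f(v0);
    let n = v0.len();
    let mut grad = vec![0.0; n];
    for i in 0..n {
        let mut vi = v0.to_vec();
        vi[i] += h;
        // Slope along coordinate i, recovered from two interpolation points.
        grad[i] = (f(&vi) - f0) / h;
    }
    // Model: m(x) = f0 + grad . (x - v0); it matches f at all N+1 vertices.
    (f0, grad)
}

fn main() {
    // f is affine, so the linear model reproduces it exactly:
    // f(x) = 1 + 2*x0 + 3*x1
    let f = |x: &[f64]| 1.0 + 2.0 * x[0] + 3.0 * x[1];
    let (c, g) = linear_model(f, &[0.0, 0.0], 0.5);
    println!("c = {}, g = {:?}", c, g); // c = 1, g = [2.0, 3.0]
}
```

For a general (non-affine) objective the model only approximates f near the simplex, which is why COBYLA keeps the simplex small via the RHO parameter described below.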
The parameter RHO controls the size of the simplex and it is reduced automatically from RHOBEG to RHOEND. For each RHO the subroutine tries to achieve a good vector of variables for the current size, and then RHO is reduced until the value RHOEND is reached.
Therefore RHOBEG and RHOEND should be set to a reasonable initial change to the variables and to the required accuracy in the variables, respectively; this accuracy should be viewed as a subject for experimentation, because it is not guaranteed.
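The RHO reduction can be pictured as a shrinking schedule from RHOBEG down to RHOEND. The sketch below is schematic: the halving factor is an assumption (the actual code adapts the reduction, and only shrinks when progress at the current size stalls).

```rust
// Schematic RHO schedule: shrink the simplex size from rhobeg down to
// rhoend, clamping the last step. The 0.5 factor is illustrative; the
// real algorithm reduces RHO only after convergence at the current size.
fn rho_schedule(rhobeg: f64, rhoend: f64) -> Vec<f64> {
    let mut rho = rhobeg;
    let mut schedule = vec![rho];
    while rho > rhoend {
        rho = (rho * 0.5).max(rhoend);
        schedule.push(rho);
    }
    schedule
}

fn main() {
    // 0.5 -> 0.25 -> 0.125 -> ... -> 1e-4 (clamped at rhoend)
    let s = rho_schedule(0.5, 1e-4);
    println!("{} levels, last = {}", s.len(), s.last().unwrap());
}
```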
The subroutine has an advantage over many of its competitors, however, which is that it treats each constraint individually when calculating a change to the variables, instead of lumping the constraints together into a single penalty function.
The name of the algorithm is derived from the phrase Constrained Optimization BY Linear Approximations.
The user can set the values of RHOBEG and RHOEND, and must provide an initial vector of variables in X. Further, the value of IPRINT should be set to 0, 1, 2 or 3, which controls the amount of printing during the calculation. Specifically, there is no output if IPRINT=0, and there is output only at the end of the calculation if IPRINT=1. Otherwise each new value of RHO and SIGMA is printed.
Further, the vector of variables and some function information are given either when RHO is reduced or when each new value of F(X) is computed in the cases IPRINT=2 or IPRINT=3 respectively. Here SIGMA is a penalty parameter, it being assumed that a change to X is an improvement if it reduces the merit function:
F(X)+SIGMA*MAX(0.0,-C1(X),-C2(X),…,-CM(X)),
where C1, C2, …, CM denote the constraint functions that should become nonnegative eventually, at least to the precision of RHOEND. In the printed output the displayed term that is multiplied by SIGMA is called MAXCV, which stands for ‘MAXimum Constraint Violation’.
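The merit function and MAXCV can be written down directly. The following self-contained sketch (not this crate's internals) uses Powell's sign convention, where a point is feasible when every C_i(X) >= 0:

```rust
// MAXCV = MAX(0, -C1(X), ..., -CM(X)): the worst violation of the
// constraints C_i(X) >= 0; zero when X is feasible.
fn maxcv(cons: &[&dyn Fn(&[f64]) -> f64], x: &[f64]) -> f64 {
    cons.iter().fold(0.0_f64, |acc, c| acc.max(-c(x)))
}

// Merit function F(X) + SIGMA * MAXCV: a change to X counts as an
// improvement if it reduces this value.
fn merit(
    f: impl Fn(&[f64]) -> f64,
    cons: &[&dyn Fn(&[f64]) -> f64],
    sigma: f64,
    x: &[f64],
) -> f64 {
    f(x) + sigma * maxcv(cons, x)
}

fn main() {
    let f = |x: &[f64]| 10.0 * (x[0] + 1.0).powi(2) + x[1].powi(2);
    let c1 = |x: &[f64]| x[0]; // feasible when x_0 >= 0
    let cons: Vec<&dyn Fn(&[f64]) -> f64> = vec![&c1];
    // At x = [-1, 0] the objective is 0 but c1 is violated by 1,
    // so the merit function adds sigma * 1.
    println!("{}", merit(f, &cons, 2.0, &[-1.0, 0.0])); // prints "2"
}
```

Note how the penalty treats each constraint through its own value C_i(X) inside the max, rather than summing them into a single penalty term; this is the per-constraint treatment the description above credits as the method's advantage.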