pub struct Nlopt<F: ObjFn<T>, T> { /* private fields */ }
This is the central struct of this library. It represents an optimization of a given function, called the objective function. The argument `x` to this function is an `n`-dimensional double-precision vector. The dimensions are set at creation of the struct and cannot be changed afterwards. NLopt offers different optimization algorithms; one must be chosen at struct creation and cannot be changed afterwards. Always use `Nlopt::<T>::new()` to create an `Nlopt` struct.
Implementations
impl<F: ObjFn<T>, T> Nlopt<F, T>
pub fn new(
    algorithm: Algorithm,
    n_dims: usize,
    objective_fn: F,
    target: Target,
    user_data: T,
) -> Nlopt<F, T>
Creates a new `Nlopt` struct.

- `algorithm` - Which optimization algorithm to use. This cannot be changed after creation of the struct.
- `n_dims` - Dimension of the argument to the objective function.
- `objective_fn` - This function has the signature `(&[f64], Option<&mut [f64]>, T) -> f64`. The first argument is the vector `x` passed to the function. The second argument is `Some(&mut [f64])` if the calling optimization algorithm needs the gradient of the function; if the gradient is not needed, it is `None`. The last argument is the user data provided beforehand using the `user_data` argument to the constructor.
- `target` - Whether to minimize or maximize the objective function.
- `user_data` - Optional data that is passed to the objective function.
pub fn get_algorithm(&self) -> Algorithm
Retrieve the chosen algorithm.
pub fn recover_user_data(self) -> T
Consume the struct and recover the user data that was passed into the constructor. Useful if you wish to collect information during the optimization process - for example, pass in some kind of `Statistics` object, mutate it inside the objective function, then recover it.
pub fn set_lower_bounds(&mut self, bound: &[f64]) -> OptResult
Most of the algorithms in NLopt are designed for minimization of functions with simple bound constraints on the inputs. That is, the input vectors `x` are constrained to lie in a hyperrectangle `lower_bound[i] ≤ x[i] ≤ upper_bound[i]` for `0 ≤ i < n`. NLopt guarantees that your objective function and any nonlinear constraints will never be evaluated outside of these bounds (unlike nonlinear constraints, which may be violated at intermediate steps).

These bounds are specified by passing an array `bound` of length `n` (the dimension of the problem) to one or both of the functions:

set_lower_bounds(&[f64])
set_upper_bounds(&[f64])

If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of infinity); it is possible to have lower bounds but not upper bounds or vice versa. Alternatively, the user can call one of the above functions and explicitly pass a lower bound of `-INFINITY` and/or an upper bound of `+INFINITY` for some optimization parameters to make them have no lower/upper bound, respectively.

It is permitted to set `lower_bound[i] == upper_bound[i]` in one or more dimensions; this is equivalent to fixing the corresponding `x[i]` parameter, eliminating it from the optimization.

Note, however, that some of the algorithms in NLopt, in particular most of the global-optimization algorithms, do not support unconstrained optimization and will return an error in `optimize` if you do not supply finite lower and upper bounds.
pub fn set_upper_bounds(&mut self, bound: &[f64]) -> OptResult
See the documentation for `set_lower_bounds`.
pub fn set_lower_bound(&mut self, bound: f64) -> OptResult
For convenience, `set_lower_bound` is supplied in order to set the lower bounds for all optimization parameters to a single constant.
pub fn set_upper_bound(&mut self, bound: f64) -> OptResult
For convenience, `set_upper_bound` is supplied in order to set the upper bounds for all optimization parameters to a single constant.
pub fn get_upper_bounds(&self) -> Option<Vec<f64>>
Retrieve the current upper bounds on `x`.
pub fn get_lower_bounds(&self) -> Option<Vec<f64>>
Retrieve the current lower bounds on `x`.
pub fn add_equality_constraint<G: ObjFn<U>, U>(
    &mut self,
    constraint: G,
    user_data: U,
    tolerance: f64,
) -> OptResult
Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also support arbitrary nonlinear inequality constraints, and some additionally allow nonlinear equality constraints (ISRES and AUGLAG). For these algorithms, you can specify as many nonlinear constraints as you wish.
In particular, a nonlinear constraint of the form `fc(x) = 0`, where the function `fc` has the same form as an objective function, can be specified by calling this function.

- `tolerance` - This parameter is a tolerance that is used for the purpose of stopping criteria only: a point `x` is considered feasible for judging whether to stop the optimization if `|fc(x)| ≤ tolerance`. A tolerance of zero means that NLopt will try not to consider any `x` to be converged unless the constraint is strictly satisfied; generally, at least a small positive tolerance is advisable to reduce sensitivity to rounding errors.
pub fn add_inequality_constraint<G: ObjFn<U>, U>(
    &mut self,
    constraint: G,
    user_data: U,
    tolerance: f64,
) -> OptResult
Set a nonlinear constraint of the form `fc(x) ≤ 0`. For more information see the documentation for `add_equality_constraint`.
pub fn add_equality_mconstraint<G: MObjFn<U>, U>(
    &mut self,
    m: usize,
    constraint: G,
    user_data: U,
    tolerance: &[f64],
) -> OptResult
In some applications with multiple constraints, it is more convenient to define a single
function that returns the values (and gradients) of all constraints at once. For example,
different constraint functions might share computations in some way. Or, if you have a large
number of constraints, you may wish to compute them in parallel. This possibility is
supported by this function, which defines multiple equality constraints at once, or equivalently a vector-valued constraint function `fc: R^n → R^m`:

- `constraint` - A constraint function bundled with user-defined parameters.
- `tolerance` - An array slice of length `m` of the tolerances in each constraint dimension.
pub fn add_inequality_mconstraint<G: MObjFn<U>, U>(
    &mut self,
    m: usize,
    constraint: G,
    user_data: U,
    tolerance: &[f64],
) -> OptResult
Set a nonlinear multi-valued inequality constraint. For more information see the documentation for `add_equality_mconstraint`.
pub fn remove_constraints(&mut self) -> OptResult
Remove all of the inequality and equality constraints from a given problem.
pub fn set_stopval(&mut self, stopval: f64) -> OptResult
Multiple stopping criteria for the optimization are supported, as specified by the functions to modify a given optimization problem. The optimization halts whenever any one of these criteria is satisfied. In some cases, the precise interpretation of the stopping criterion depends on the optimization algorithm above (although we have tried to make them as consistent as reasonably possible), and some algorithms do not support all of the stopping criteria.
Note: you do not need to use all of the stopping criteria! In most cases, you only need one or two, and can omit the remainder (all criteria are disabled by default).
This function specifies a stop when an objective value of at least `stopval` is found: stop minimizing when an objective value ≤ `stopval` is found, or stop maximizing when a value ≥ `stopval` is found.
pub fn get_stopval(&self) -> f64
pub fn set_ftol_rel(&mut self, tolerance: f64) -> OptResult
Set relative tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than `tolerance` multiplied by the absolute value of the function value. (If there is any chance that your optimum function value is close to zero, you might want to set an absolute tolerance with `set_ftol_abs` as well.) Criterion is disabled if `tolerance` is non-positive.
pub fn get_ftol_rel(&self) -> Option<f64>
pub fn set_ftol_abs(&mut self, tolerance: f64) -> OptResult
Set absolute tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the function value by less than `tolerance`. Criterion is disabled if `tolerance` is non-positive.
pub fn get_ftol_abs(&self) -> Option<f64>
pub fn set_xtol_rel(&mut self, tolerance: f64) -> OptResult
Set relative tolerance on optimization parameters: stop when an optimization step (or an estimate of the optimum) changes every parameter by less than `tolerance` multiplied by the absolute value of the parameter. (If there is any chance that an optimal parameter is close to zero, you might want to set an absolute tolerance with `set_xtol_abs` as well.) Criterion is disabled if `tolerance` is non-positive.
pub fn get_xtol_rel(&self) -> Option<f64>
pub fn set_xtol_abs(&mut self, tolerance: &[f64]) -> OptResult
Set absolute tolerances on optimization parameters. `tolerance` is an array slice of length `n` giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter `x[i]` by less than `tolerance[i]`.
pub fn set_xtol_abs1(&mut self, tolerance: f64) -> OptResult
For convenience, this function may be used to set the absolute tolerances in all `n` optimization parameters to the same value.
pub fn get_xtol_abs(&mut self) -> Option<Vec<f64>>
pub fn set_maxeval(&mut self, maxeval: u32) -> OptResult
Stop when the number of function evaluations exceeds `maxeval`. (This is not a strict maximum: the number of function evaluations may exceed `maxeval` slightly, depending upon the algorithm.) Criterion is disabled if `maxeval` is zero.
pub fn get_maxeval(&mut self) -> Option<u32>
pub fn set_maxtime(&mut self, timeout: f64) -> OptResult
Stop when the optimization time (in seconds) exceeds `timeout`. (This is not a strict maximum: the time may exceed `timeout` slightly, depending upon the algorithm and on how slow your function evaluation is.) Criterion is disabled if `timeout` is non-positive.
pub fn get_maxtime(&self) -> Option<f64>
pub fn force_stop(&mut self, stopval: Option<i32>) -> OptResult
In certain cases, the caller may wish to force the optimization to halt for some reason unknown to NLopt - for example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. In this case, it is possible to tell NLopt to halt the optimization gracefully, returning the best point found so far, by calling this function from within your objective or constraint functions. This causes `optimize` to halt, returning a forced-stop error code. It has no effect if not called during `optimize`.
Params
stopval: If you want to provide a bit more information, set a forced-stop integer value `val`, which can later be retrieved by calling `get_force_stop()`; this returns the last force-stop value that was set since the last `optimize` call. The force-stop value is `None` at the beginning of `optimize`. Passing `stopval = 0` to `force_stop()` tells NLopt not to force a halt.
pub fn get_force_stop(&mut self) -> Option<i32>
pub fn set_local_optimizer(
    &mut self,
    local_opt: Nlopt<impl ObjFn<()>, ()>,
) -> OptResult
Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances using this function.
Here, `local_opt` is another `Nlopt` whose parameters are used to determine the local search algorithm, its stopping criteria, and other algorithm parameters. (However, the objective function, bounds, and nonlinear-constraint parameters of `local_opt` are ignored.) The dimension `n` of `local_opt` must match that of the main optimization.

A stubbed version of `local_opt` can be obtained with `get_local_optimizer`.
pub fn get_local_optimizer( &mut self, algorithm: Algorithm, ) -> Nlopt<impl ObjFn<()>, ()>
pub fn set_initial_step(&mut self, dx: &[f64]) -> OptResult
For derivative-free local-optimization algorithms, the optimizer must somehow decide on some initial step size to perturb x by when it begins the optimization. This step size should be big enough that the value of the objective changes significantly, but not too big if you want to find the local optimum nearest to x. By default, NLopt chooses this initial step size heuristically from the bounds, tolerances, and other information, but this may not always be the best choice. You can use this function to modify the initial step size.
Here, `dx` is an array of length `n` containing the (nonzero) initial step size for each component of the optimization parameters `x`. For convenience, if you want to set the step sizes in every direction to be the same value, you can instead call `set_initial_step1`.
pub fn set_initial_step1(&mut self, dx: f64) -> OptResult
pub fn get_initial_step(&mut self, x: &[f64]) -> Option<Vec<f64>>
Here, `x` is the same as the initial guess that you plan to pass to `optimize` - if you have not set the initial step and NLopt is using its heuristics, its heuristic step size may depend on the initial `x`, which is why you must pass it here. Both `x` and the return value are arrays of length `n`.
pub fn set_population(&mut self, population: usize) -> OptResult
Several of the stochastic search algorithms (e.g., CRS, MLSL, and ISRES) start by generating some initial "population" of random points `x`. By default, this initial population size is chosen heuristically in some algorithm-specific way, but it can be changed by calling this function. A `population` of zero implies that the heuristic default will be used.
pub fn get_population(&mut self) -> usize
pub fn srand_seed(seed: Option<u64>)
For stochastic optimization algorithms, we use pseudorandom numbers generated by the
Mersenne Twister algorithm, based on code from Makoto Matsumoto. By default, the seed for
the random numbers is generated from the system time, so that you will get a different
sequence of pseudorandom numbers each time you run your program. If you want to use a
“deterministic” sequence of pseudorandom numbers, i.e. the same sequence from run to run,
you can set the seed with this function. To reset the seed based on the system time, you can
call this function with `seed = None`.
pub fn set_vector_storage(&mut self, m: Option<usize>) -> OptResult
Some of the NLopt algorithms are limited-memory “quasi-Newton” algorithms, which “remember” the gradients from a finite number M of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger M is, the more storage the algorithms require, but on the other hand they may converge faster for larger M. By default, NLopt chooses a heuristic value of M, but this can be changed by calling this function. Passing M=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets M to 10 or at most 10 MiB worth of vectors, whichever is larger.
pub fn get_vector_storage(&mut self) -> usize
pub fn version() -> (i32, i32, i32)
To determine the version number of NLopt at runtime, you can call this function. For example, NLopt version 3.1.4 would return `(3, 1, 4)`.
pub fn optimize(
    &self,
    x_init: &mut [f64],
) -> Result<(SuccessState, f64), (FailState, f64)>
Once all of the desired optimization parameters have been specified in a given `Nlopt`, you can perform the optimization by calling this function. On input, `x_init` is an array of length `n` giving an initial guess for the optimization parameters. On successful return, `x_init` contains the optimized values of the parameters, and the function returns the corresponding value of the objective function.