Struct nlopt::Nlopt
pub struct Nlopt<T: Clone> { /* fields omitted */ }
This is the central struct of this library. It represents an optimization of a given function, called the objective function. The argument x to this function is an n-dimensional double-precision vector. The dimensions are set at creation of the struct and cannot be changed afterwards. NLopt offers different optimization algorithms. One must be chosen at struct creation and cannot be changed afterwards. Always use Nlopt::<T>::new() to create an Nlopt struct.
Methods
impl<T> Nlopt<T> where
T: Clone,
pub fn new(
algorithm: Algorithm,
n_dims: usize,
obj: ObjectiveFn<T>,
target: Target,
user_data: T
) -> Nlopt<T>
Creates a new Nlopt struct.

algorithm - Which optimization algorithm to use. This cannot be changed after creation of the struct.
n_dims - Dimension of the argument to the objective function.
obj - The objective function. This function has the signature (&[f64], Option<&mut [f64]>, T) -> f64. The first argument is the vector x passed to the function. The second argument is Some(&mut [f64]) if the calling optimization algorithm needs the gradient of the function; if the gradient is not needed, it is None. The last argument is the user data provided beforehand using the user_data argument to the constructor.
target - Whether to minimize or maximize the objective function.
user_data - Optional data that is passed to the objective function.
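As a sketch, a two-dimensional quadratic minimization could be set up like this. The variant names Algorithm::Cobyla and Target::Minimize are assumed here; check the Algorithm and Target enums of your crate version.

```rust
// Assumes `use nlopt::{Nlopt, Algorithm, Target};` from this crate.

// Objective with the documented signature (&[f64], Option<&mut [f64]>, T) -> f64.
// COBYLA is derivative-free, so the gradient slice is ignored here.
fn objective(x: &[f64], _gradient: Option<&mut [f64]>, _user_data: ()) -> f64 {
    (x[0] - 1.0).powi(2) + (x[1] - 2.0).powi(2)
}

fn main() {
    // Two-dimensional minimization with no user data (the unit type `()`).
    let mut opt = Nlopt::<()>::new(Algorithm::Cobyla, 2, objective, Target::Minimize, ());
    // ... configure bounds and stopping criteria here, then call opt.optimize(...).
}
```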
pub fn set_lower_bounds(&mut self, bound: &[f64]) -> OptResult
[src]
Most of the algorithms in NLopt are designed for minimization of functions with simple bound constraints on the inputs. That is, the input vectors x are constrained to lie in a hyperrectangle lower_bound[i] ≤ x[i] ≤ upper_bound[i] for 0 ≤ i < n. NLopt guarantees that your objective function and any nonlinear constraints will never be evaluated outside of these bounds (unlike nonlinear constraints, which may be violated at intermediate steps).
These bounds are specified by passing an array bound of length n (the dimension of the problem) to one or both of the functions:
set_lower_bounds(&[f64])
set_upper_bounds(&[f64])
If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of infinity); it is possible to have lower bounds but not upper bounds or vice versa. Alternatively, the user can call one of the above functions and explicitly pass a lower bound of -INFINITY and/or an upper bound of +INFINITY for some optimization parameters to make them have no lower/upper bound, respectively.
It is permitted to set lower_bound[i] == upper_bound[i] in one or more dimensions; this is equivalent to fixing the corresponding x[i] parameter, eliminating it from the optimization.
Note, however, that some of the algorithms in NLopt, in particular most of the global-optimization algorithms, do not support unconstrained optimization and will return an error in optimize if you do not supply finite lower and upper bounds.
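For example (a fragment, assuming an existing two-dimensional Nlopt value named opt and using ? on the OptResult return values):

```rust
// Constrain both parameters to the box 0 <= x[i] <= 10.
opt.set_lower_bounds(&[0.0, 0.0])?;
opt.set_upper_bounds(&[10.0, 10.0])?;

// Equivalently, a single constant bound for every dimension:
opt.set_lower_bound(0.0)?;
opt.set_upper_bound(10.0)?;

// Fix x[1] at 5.0 by making its lower and upper bounds equal,
// eliminating it from the optimization.
opt.set_lower_bounds(&[0.0, 5.0])?;
opt.set_upper_bounds(&[10.0, 5.0])?;
```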
pub fn set_upper_bounds(&mut self, bound: &[f64]) -> OptResult
See documentation for set_lower_bounds
pub fn set_lower_bound(&mut self, bound: f64) -> OptResult
For convenience, set_lower_bound is supplied in order to set the lower bounds for all optimization parameters to a single constant.
pub fn set_upper_bound(&mut self, bound: f64) -> OptResult
For convenience, set_upper_bound is supplied in order to set the upper bounds for all optimization parameters to a single constant.
pub fn get_upper_bounds(&self) -> Option<&[f64]>
Retrieve the current upper bounds on x.
pub fn get_lower_bounds(&self) -> Option<&[f64]>
Retrieve the current lower bounds on x.
pub fn add_constraint(
&mut self,
constraint: Box<Function<T>>,
t: ConstraintType,
tolerance: f64
) -> OptResult
Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also support arbitrary nonlinear inequality constraints, and some additionally allow nonlinear equality constraints (ISRES and AUGLAG). For these algorithms, you can specify as many nonlinear constraints as you wish.
In particular, a nonlinear constraint of the form fc(x) ≤ 0 or fc(x) = 0, where the function fc is an ObjectiveFn<T>, can be specified by calling this function.

t - Specify whether the constraint is an equality (fc(x) = 0) or inequality (fc(x) ≤ 0) constraint.
tolerance - A tolerance used for the purpose of stopping criteria only: a point x is considered feasible for judging whether to stop the optimization if fc(x) ≤ tol. A tolerance of zero means that NLopt will try not to consider any x to be converged unless the constraint is strictly satisfied; generally, at least a small positive tolerance is advisable to reduce sensitivity to rounding errors.
pub fn add_mconstraint(
&mut self,
constraint: Box<MFunction<T>>,
t: ConstraintType,
tolerance: &[f64]
) -> OptResult
In some applications with multiple constraints, it is more convenient to define a single
function that returns the values (and gradients) of all constraints at once. For example,
different constraint functions might share computations in some way. Or, if you have a large
number of constraints, you may wish to compute them in parallel. This possibility is
supported by this function, which defines multiple constraints at once, or
equivalently a vector-valued constraint function fc(x) : R^n → R^m:

constraint - A Box containing the constraint function bundled with user-defined parameters.
t - Specify whether the constraint is an equality or inequality constraint.
tolerance - An array slice of length m of the tolerances in each constraint dimension.
pub fn remove_constraints(&mut self) -> OptResult
Remove all of the inequality and equality constraints from a given problem.
pub fn set_stopval(&mut self, stopval: f64) -> OptResult
Multiple stopping criteria for the optimization are supported, as specified by the functions to modify a given optimization problem. The optimization halts whenever any one of these criteria is satisfied. In some cases, the precise interpretation of the stopping criterion depends on the optimization algorithm above (although we have tried to make them as consistent as reasonably possible), and some algorithms do not support all of the stopping criteria.
Note: you do not need to use all of the stopping criteria! In most cases, you only need one or two, and can omit the remainder (all criteria are disabled by default).
This function specifies a stop when an objective value of at least stopval is found: stop minimizing when an objective value ≤ stopval is found, or stop maximizing when a value ≥ stopval is found.
pub fn get_stopval(&self) -> f64
pub fn set_ftol_rel(&mut self, tolerance: f64) -> OptResult
Set relative tolerance on function value: stop when an optimization step (or an estimate of
the optimum) changes the objective function value by less than tolerance
multiplied by the
absolute value of the function value. (If there is any chance that your optimum function
value is close to zero, you might want to set an absolute tolerance with set_ftol_abs
as well.) Criterion is disabled if tolerance
is non-positive.
pub fn get_ftol_rel(&self) -> Option<f64>
pub fn set_ftol_abs(&mut self, tolerance: f64) -> OptResult
Set absolute tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the function value by less than tolerance. Criterion is disabled if tolerance is non-positive.
pub fn get_ftol_abs(&self) -> Option<f64>
pub fn set_xtol_rel(&mut self, tolerance: f64) -> OptResult
Set relative tolerance on optimization parameters: stop when an optimization step (or an
estimate of the optimum) changes every parameter by less than tolerance
multiplied by the absolute
value of the parameter. (If there is any chance that an optimal parameter is close to zero,
you might want to set an absolute tolerance with set_xtol_abs
as well.) Criterion is
disabled if tolerance
is non-positive.
pub fn get_xtol_rel(&self) -> Option<f64>
pub fn set_xtol_abs(&mut self, tolerance: &[f64]) -> OptResult
Set absolute tolerances on optimization parameters. tolerance is an array slice of length n giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter x[i] by less than tolerance[i].
pub fn set_xtol_abs1(&mut self, tolerance: f64) -> OptResult
For convenience, this function may be used to set the absolute tolerances in all n
optimization parameters to the same value.
pub fn get_xtol_abs(&mut self) -> Option<&[f64]>
pub fn set_maxeval(&mut self, maxeval: u32) -> OptResult
Stop when the number of function evaluations exceeds maxeval. (This is not a strict maximum: the number of function evaluations may exceed maxeval slightly, depending upon the algorithm.) Criterion is disabled if maxeval is zero.
pub fn get_maxeval(&mut self) -> Option<u32>
pub fn set_maxtime(&mut self, timeout: f64) -> OptResult
Stop when the optimization time (in seconds) exceeds maxtime
. (This is not a strict maximum:
the time may exceed maxtime
slightly, depending upon the algorithm and on how slow your
function evaluation is.) Criterion is disabled if maxtime
is non-positive.
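Several criteria are often combined; the optimization halts as soon as any one of them is satisfied. A fragment, assuming an existing opt and using ? on the OptResult return values:

```rust
opt.set_ftol_rel(1e-8)?;  // relative change in the objective value
opt.set_xtol_rel(1e-6)?;  // relative change in every parameter
opt.set_maxeval(1000)?;   // at most roughly 1000 objective evaluations
opt.set_maxtime(5.0)?;    // at most roughly 5 seconds of wall time
```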
pub fn get_maxtime(&self) -> Option<f64>
pub fn force_stop(&mut self, stopval: Option<i32>) -> OptResult
In certain cases, the caller may wish to force the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. In this case, it is possible to tell NLopt to halt the optimization gracefully, returning the best point found so far, by calling this function from within your objective or constraint functions. This causes nlopt_optimize to halt, returning the NLOPT_FORCED_STOP error code. It has no effect if not called during nlopt_optimize.
Params
stopval: If you want to provide a bit more information, set a forced-stop integer value val, which can be later retrieved by calling get_force_stop(), which returns the last force-stop value that was set since the last nlopt_optimize. The force-stop value is None at the beginning of nlopt_optimize. Passing stopval=0 to force_stop() tells NLopt not to force a halt.
pub fn get_force_stop(&mut self) -> Option<i32>
pub fn set_local_optimizer(&mut self, local_opt: Nlopt<T>) -> OptResult
Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances using this function.
Here, local_opt is another Nlopt<T>
whose parameters are used to determine the local
search algorithm, its stopping criteria, and other algorithm parameters. (However, the
objective function, bounds, and nonlinear-constraint parameters of local_opt
are ignored.)
The dimension n
of local_opt
must match that of the main optimization.
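A sketch of the pattern (the variant names Algorithm::Auglag and Algorithm::Cobyla are assumed, as is an existing objective function with the documented signature; only the local optimizer's algorithm and tolerances are used):

```rust
// Subsidiary local optimizer: its objective, bounds, and constraints are ignored,
// but its dimension must match the main optimization.
let mut local = Nlopt::<()>::new(Algorithm::Cobyla, 2, objective, Target::Minimize, ());
local.set_xtol_rel(1e-6)?;

// Main AUGLAG optimization delegates local searches to `local`.
let mut opt = Nlopt::<()>::new(Algorithm::Auglag, 2, objective, Target::Minimize, ());
opt.set_local_optimizer(local)?;
```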
pub fn set_initial_step(&mut self, dx: &[f64]) -> OptResult
For derivative-free local-optimization algorithms, the optimizer must somehow decide on some initial step size to perturb x by when it begins the optimization. This step size should be big enough that the value of the objective changes significantly, but not too big if you want to find the local optimum nearest to x. By default, NLopt chooses this initial step size heuristically from the bounds, tolerances, and other information, but this may not always be the best choice. You can use this function to modify the initial step size.
Here, dx
is an array of length n
containing
the (nonzero) initial step size for each component of the optimization parameters x
. For
convenience, if you want to set the step sizes in every direction to be the same value, you
can instead call set_initial_step1
.
pub fn set_initial_step1(&mut self, dx: f64) -> OptResult
pub fn get_initial_step(&mut self, x: &[f64]) -> Option<&[f64]>
Here, x
is the same as the initial guess that you plan to pass to optimize
– if you
have not set the initial step and NLopt is using its heuristics, its heuristic step size may
depend on the initial x
, which is why you must pass it here. Both x
and the return value are arrays of
length n
.
pub fn set_population(&mut self, population: usize) -> OptResult
Several of the stochastic search algorithms (e.g., CRS, MLSL, and ISRES) start by generating some initial "population" of random points x. By default, this initial population size is chosen heuristically in some algorithm-specific way, but the initial population can be changed by calling this function. A population of zero implies that the heuristic default will be used.
pub fn get_population(&mut self) -> usize
pub fn srand_seed(seed: Option<u64>)
For stochastic optimization algorithms, we use pseudorandom numbers generated by the
Mersenne Twister algorithm, based on code from Makoto Matsumoto. By default, the seed for
the random numbers is generated from the system time, so that you will get a different
sequence of pseudorandom numbers each time you run your program. If you want to use a
"deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run,
you can set the seed with this function. To reset the seed based on the system time, you can
call this function with seed = None
.
pub fn set_vector_storage(&mut self, m: Option<usize>) -> OptResult
Some of the NLopt algorithms are limited-memory "quasi-Newton" algorithms, which "remember" the gradients from a finite number M of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger M is, the more storage the algorithms require, but on the other hand they may converge faster for larger M. By default, NLopt chooses a heuristic value of M, but this can be changed by calling this function. Passing M=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets M to 10 or at most 10 MiB worth of vectors, whichever is larger.
pub fn get_vector_storage(&mut self) -> usize
pub fn version() -> (i32, i32, i32)
To determine the version number of NLopt at runtime, you can call this function. For
example, NLopt version 3.1.4 would return (3, 1, 4)
.
pub fn optimize(
&self,
x_init: &mut [f64]
) -> Result<(SuccessState, f64), (FailState, f64)>
Once all of the desired optimization parameters have been specified in a given Nlopt struct, you can perform the optimization by calling this function. On input, x_init is an array of length n giving an initial guess for the optimization parameters. On successful return, x_init contains the optimized values of the parameters, and the function returns the corresponding value of the objective function.
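A fragment showing the call, assuming a fully configured two-dimensional opt and that the SuccessState and FailState enums can be printed with {:?}:

```rust
// Initial guess at the origin; overwritten in place with the optimum on success.
let mut x = vec![0.0, 0.0];
match opt.optimize(&mut x) {
    Ok((success_state, obj_value)) => {
        println!("stopped with {:?}: f({:?}) = {}", success_state, x, obj_value);
    }
    Err((fail_state, obj_value)) => {
        eprintln!("failed with {:?} (best objective seen: {})", fail_state, obj_value);
    }
}
```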