Struct nlopt::Nlopt

pub struct Nlopt<F: ObjFn<T>, T> {
    pub n_dims: usize,
    // some fields omitted
}

This is the central struct of this library. It represents an optimization of a given function, called the objective function. The argument x to this function is an n-dimensional double-precision vector. The dimensions are set at creation of the struct and cannot be changed afterwards. NLopt offers different optimization algorithms; one must be chosen at struct creation and cannot be changed afterwards. Always use Nlopt::new() to create an Nlopt struct.

Fields

n_dims: usize

Implementations

impl<F: ObjFn<T>, T> Nlopt<F, T>[src]

pub fn new(
    algorithm: Algorithm,
    n_dims: usize,
    objective_fn: F,
    target: Target,
    user_data: T
) -> Nlopt<F, T>
[src]

Creates a new Nlopt struct.

  • algorithm - Which optimization algorithm to use. This cannot be changed after creation of the struct
  • n_dims - Dimension of the argument to the objective function
  • objective_fn - This function has the signature (&[f64], Option<&mut [f64]>, T) -> f64. The first argument is the vector x passed to the function. The second argument is Some(&mut [f64]) if the calling optimization algorithm needs the gradient of the function. If the gradient is not needed, it is None. The last argument is the user data provided beforehand using the user_data argument to the constructor.
  • target - Whether to minimize or maximize the objective function
  • user_data - Optional data that is passed to the objective function
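As a sketch, an objective function with the documented shape might look like the sphere function f(x) = Σ xᵢ², whose gradient is 2x. The unit type `()` stands in for unused user data here; consult the ObjFn trait for the exact form of the user-data parameter.

```rust
// Sphere function f(x) = sum of x_i^2, with analytic gradient 2x.
// Illustrates the documented shape (&[f64], Option<&mut [f64]>, user data) -> f64.
fn sphere(x: &[f64], gradient: Option<&mut [f64]>, _user_data: ()) -> f64 {
    // Fill in the gradient only when the calling algorithm requests it.
    if let Some(grad) = gradient {
        for (g, &xi) in grad.iter_mut().zip(x) {
            *g = 2.0 * xi;
        }
    }
    x.iter().map(|&xi| xi * xi).sum()
}

fn main() {
    let mut grad = [0.0f64; 2];
    let value = sphere(&[1.0, 2.0], Some(&mut grad), ());
    assert_eq!(value, 5.0);
    assert_eq!(grad, [2.0, 4.0]);
}
```

Gradient-free algorithms pass None for the second argument, so the gradient branch is simply skipped.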

pub fn set_lower_bounds(&mut self, bound: &[f64]) -> OptResult[src]

Most of the algorithms in NLopt are designed for minimization of functions with simple bound constraints on the inputs. That is, the input vectors x are constrained to lie in a hyperrectangle lower_bound[i] ≤ x[i] ≤ upper_bound[i] for 0 ≤ i < n. NLopt guarantees that your objective function and any nonlinear constraints will never be evaluated outside of these bounds (unlike nonlinear constraints, which may be violated at intermediate steps).

These bounds are specified by passing an array bound of length n (the dimension of the problem) to one or both of the functions:

set_lower_bounds(&[f64])

set_upper_bounds(&[f64])

If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of infinity); it is possible to have lower bounds but not upper bounds or vice versa. Alternatively, the user can call one of the above functions and explicitly pass a lower bound of -INFINITY and/or an upper bound of +INFINITY for some optimization parameters to make them have no lower/upper bound, respectively.

It is permitted to set lower_bound[i] == upper_bound[i] in one or more dimensions; this is equivalent to fixing the corresponding x[i] parameter, eliminating it from the optimization.

Note, however, that some of the algorithms in NLopt, in particular most of the global-optimization algorithms, do not support unconstrained optimization and will return an error in optimize if you do not supply finite lower and upper bounds.
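The feasibility rule above can be sketched as a plain membership test (a hypothetical helper, not part of the crate): an unset bound is modeled as ±infinity, and setting lower[i] == upper[i] pins x[i] to a single value.

```rust
// Hypothetical helper: does x lie in the hyperrectangle
// lower[i] <= x[i] <= upper[i]? An "unset" bound is represented by
// +/-infinity, so it never excludes any point.
fn within_bounds(x: &[f64], lower: &[f64], upper: &[f64]) -> bool {
    x.iter()
        .zip(lower.iter().zip(upper))
        .all(|(&xi, (&lo, &hi))| lo <= xi && xi <= hi)
}

fn main() {
    let lower = [0.0, f64::NEG_INFINITY, 2.0];
    let upper = [1.0, f64::INFINITY, 2.0]; // lower[2] == upper[2] fixes x[2] at 2.0
    assert!(within_bounds(&[0.5, -100.0, 2.0], &lower, &upper));
    assert!(!within_bounds(&[0.5, 0.0, 2.5], &lower, &upper)); // x[2] deviates from its fixed value
}
```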

pub fn set_upper_bounds(&mut self, bound: &[f64]) -> OptResult[src]

See documentation for set_lower_bounds

pub fn set_lower_bound(&mut self, bound: f64) -> OptResult[src]

For convenience, set_lower_bound is supplied in order to set the lower bounds for all optimization parameters to a single constant

pub fn set_upper_bound(&mut self, bound: f64) -> OptResult[src]

For convenience, set_upper_bound is supplied in order to set the upper bounds for all optimization parameters to a single constant

pub fn get_upper_bounds(&self) -> Option<&[f64]>[src]

Retrieve the current upper bounds on x

pub fn get_lower_bounds(&self) -> Option<&[f64]>[src]

Retrieve the current lower bounds on x

pub fn add_equality_constraint<G: ObjFn<U>, U>(
    &mut self,
    constraint: G,
    user_data: U,
    tolerance: f64
) -> OptResult
[src]

Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also support arbitrary nonlinear inequality constraints, and some additionally allow nonlinear equality constraints (ISRES and AUGLAG). For these algorithms, you can specify as many nonlinear constraints as you wish.

In particular, a nonlinear constraint of the form fc(x) = 0, where the function fc has the same form as an objective function, can be specified by calling this function.

  • tolerance - This parameter is a tolerance that is used for the purpose of stopping criteria only: a point x is considered feasible for judging whether to stop the optimization if |fc(x)| ≤ tolerance. A tolerance of zero means that NLopt will try not to consider any x to be converged unless the constraint is exactly satisfied; generally, at least a small positive tolerance is advisable to reduce sensitivity to rounding errors.
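As an illustration of the feasibility test this tolerance implies, consider the equality constraint fc(x) = x[0] + x[1] − 1 (the helper names here are hypothetical, not part of the crate):

```rust
// Example equality constraint fc(x) = x[0] + x[1] - 1, to be driven to zero.
fn fc(x: &[f64]) -> f64 {
    x[0] + x[1] - 1.0
}

// For stopping purposes, x counts as feasible when |fc(x)| <= tol.
fn feasible_for_stopping(x: &[f64], tol: f64) -> bool {
    fc(x).abs() <= tol
}

fn main() {
    assert!(feasible_for_stopping(&[0.5, 0.5 + 1e-10], 1e-8)); // tiny violation tolerated
    assert!(!feasible_for_stopping(&[0.7, 0.7], 1e-8));        // fc = 0.4, clearly infeasible
}
```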

pub fn add_inequality_constraint<G: ObjFn<U>, U>(
    &mut self,
    constraint: G,
    user_data: U,
    tolerance: f64
) -> OptResult
[src]

Set a nonlinear constraint of the form fc(x) ≤ 0. For more information see the documentation for add_equality_constraint.

pub fn add_equality_mconstraint<G: MObjFn<U>, U>(
    &mut self,
    m: usize,
    constraint: G,
    user_data: U,
    tolerance: &[f64]
) -> OptResult
[src]

In some applications with multiple constraints, it is more convenient to define a single function that returns the values (and gradients) of all constraints at once. For example, different constraint functions might share computations in some way. Or, if you have a large number of constraints, you may wish to compute them in parallel. This possibility is supported by this function, which defines multiple equality constraints at once, or equivalently a vector-valued constraint function fc : R^n → R^m:

  • constraint - A constraint function bundled with user defined parameters.
  • tolerance - An array slice of length m of the tolerances in each constraint dimension
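For instance, two equality constraints c1(x) = x[0] + x[1] − 1 and c2(x) = x[0] − x[1] can be evaluated into a single result slice of length m = 2. This sketch shows only the vector-valued evaluation; see the crate's MObjFn trait for the exact parameter order of a real multi-constraint function.

```rust
// Evaluate m = 2 constraint values at once into `result`.
// c1(x) = x[0] + x[1] - 1 and c2(x) = x[0] - x[1]; both shall equal zero.
fn constraints(result: &mut [f64], x: &[f64]) {
    result[0] = x[0] + x[1] - 1.0;
    result[1] = x[0] - x[1];
}

fn main() {
    let mut result = [0.0f64; 2];
    constraints(&mut result, &[0.5, 0.5]); // the unique point satisfying both
    assert_eq!(result, [0.0, 0.0]);
}
```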

pub fn add_inequality_mconstraint<G: MObjFn<U>, U>(
    &mut self,
    m: usize,
    constraint: G,
    user_data: U,
    tolerance: &[f64]
) -> OptResult
[src]

Set a nonlinear multivalue inequality constraint. For more information see the documentation for add_equality_mconstraint.

pub fn remove_constraints(&mut self) -> OptResult[src]

Remove all of the inequality and equality constraints from a given problem.

pub fn set_stopval(&mut self, stopval: f64) -> OptResult[src]

Multiple stopping criteria for the optimization are supported, as specified by the functions to modify a given optimization problem. The optimization halts whenever any one of these criteria is satisfied. In some cases, the precise interpretation of the stopping criterion depends on the optimization algorithm (although we have tried to make them as consistent as reasonably possible), and some algorithms do not support all of the stopping criteria.

Note: you do not need to use all of the stopping criteria! In most cases, you only need one or two, and can omit the remainder (all criteria are disabled by default).

This function specifies a stop when an objective value at least as good as stopval is found: stop minimizing when an objective value ≤ stopval is found, or stop maximizing when an objective value ≥ stopval is found.
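A sketch of this criterion (the `Target` variants below are illustrative stand-ins for the crate's own enum):

```rust
// Illustrative stand-in for the crate's optimization direction.
enum Target {
    Minimize,
    Maximize,
}

// stopval criterion: halt once the objective is at least this good.
fn stopval_met(target: &Target, f: f64, stopval: f64) -> bool {
    match target {
        Target::Minimize => f <= stopval,
        Target::Maximize => f >= stopval,
    }
}

fn main() {
    assert!(stopval_met(&Target::Minimize, 0.9, 1.0));
    assert!(!stopval_met(&Target::Maximize, 0.9, 1.0));
}
```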

pub fn get_stopval(&self) -> f64[src]

pub fn set_ftol_rel(&mut self, tolerance: f64) -> OptResult[src]

Set relative tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tolerance multiplied by the absolute value of the function value. (If there is any chance that your optimum function value is close to zero, you might want to set an absolute tolerance with set_ftol_abs as well.) Criterion is disabled if tolerance is non-positive.
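The criterion can be sketched as follows (a hypothetical helper; the crate evaluates this internally):

```rust
// ftol_rel criterion: the step changed the objective by less than
// `tol` times the magnitude of the objective. Disabled for tol <= 0.
fn ftol_rel_met(f_old: f64, f_new: f64, tol: f64) -> bool {
    tol > 0.0 && (f_new - f_old).abs() < tol * f_new.abs()
}

fn main() {
    assert!(ftol_rel_met(100.0, 100.0005, 1e-4)); // relative change ~5e-6
    assert!(!ftol_rel_met(100.0, 101.0, 1e-4));   // relative change ~1e-2
    assert!(!ftol_rel_met(100.0, 100.0005, 0.0)); // criterion disabled
}
```

Note that for an objective near zero the right-hand side collapses toward zero too, which is exactly why an absolute tolerance via set_ftol_abs is advisable in that case.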

pub fn get_ftol_rel(&self) -> Option<f64>[src]

pub fn set_ftol_abs(&mut self, tolerance: f64) -> OptResult[src]

Set absolute tolerance on function value: stop when an optimization step (or an estimate of the optimum) changes the function value by less than tolerance. Criterion is disabled if tolerance is non-positive.

pub fn get_ftol_abs(&self) -> Option<f64>[src]

pub fn set_xtol_rel(&mut self, tolerance: f64) -> OptResult[src]

Set relative tolerance on optimization parameters: stop when an optimization step (or an estimate of the optimum) changes every parameter by less than tolerance multiplied by the absolute value of the parameter. (If there is any chance that an optimal parameter is close to zero, you might want to set an absolute tolerance with set_xtol_abs as well.) Criterion is disabled if tolerance is non-positive.
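Analogously to ftol_rel, but applied per parameter, with every parameter required to satisfy the bound (again a hypothetical helper for illustration):

```rust
// xtol_rel criterion: every parameter moved by less than `tol` times
// its own magnitude. Disabled for tol <= 0.
fn xtol_rel_met(x_old: &[f64], x_new: &[f64], tol: f64) -> bool {
    tol > 0.0
        && x_old
            .iter()
            .zip(x_new)
            .all(|(&a, &b)| (b - a).abs() < tol * b.abs())
}

fn main() {
    assert!(xtol_rel_met(&[1.0, 10.0], &[1.00001, 10.0001], 1e-3));
    assert!(!xtol_rel_met(&[1.0, 10.0], &[1.1, 10.0001], 1e-3)); // x[0] moved 10%
}
```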

pub fn get_xtol_rel(&self) -> Option<f64>[src]

pub fn set_xtol_abs(&mut self, tolerance: &[f64]) -> OptResult[src]

Set absolute tolerances on optimization parameters. tolerance is an array slice of length n giving the tolerances: stop when an optimization step (or an estimate of the optimum) changes every parameter x[i] by less than tolerance[i].

pub fn set_xtol_abs1(&mut self, tolerance: f64) -> OptResult[src]

For convenience, this function may be used to set the absolute tolerances in all n optimization parameters to the same value.

pub fn get_xtol_abs(&mut self) -> Option<&[f64]>[src]

pub fn set_maxeval(&mut self, maxeval: u32) -> OptResult[src]

Stop when the number of function evaluations exceeds maxeval. (This is not a strict maximum: the number of function evaluations may exceed maxeval slightly, depending upon the algorithm.) Criterion is disabled if maxeval is zero.

pub fn get_maxeval(&mut self) -> Option<u32>[src]

pub fn set_maxtime(&mut self, timeout: f64) -> OptResult[src]

Stop when the optimization time (in seconds) exceeds timeout. (This is not a strict maximum: the time may exceed timeout slightly, depending upon the algorithm and on how slow your function evaluation is.) Criterion is disabled if timeout is non-positive.

pub fn get_maxtime(&self) -> Option<f64>[src]

pub fn force_stop(&mut self, stopval: Option<i32>) -> OptResult[src]

In certain cases, the caller may wish to force the optimization to halt, for some reason unknown to NLopt. For example, if the user presses Ctrl-C, or there is an error of some sort in the objective function. In this case, it is possible to tell NLopt to halt the optimization gracefully, returning the best point found so far, by calling this function from within your objective or constraint functions. This causes optimize to halt, returning a forced-stop error code. It has no effect if not called during optimize.

Params

stopval: If you want to provide a bit more information, set a forced-stop integer value, which can later be retrieved by calling get_force_stop(); it returns the last force-stop value set since the last call to optimize. The force-stop value is None at the beginning of optimize. A stopval of 0 tells NLopt not to force a halt.

pub fn get_force_stop(&mut self) -> Option<i32>[src]

pub fn set_local_optimizer(
    &mut self,
    local_opt: Nlopt<impl ObjFn<()>, ()>
) -> OptResult
[src]

Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances using this function.

Here, local_opt is another Nlopt struct whose parameters are used to determine the local search algorithm, its stopping criteria, and other algorithm parameters. (However, the objective function, bounds, and nonlinear-constraint parameters of local_opt are ignored.) The dimension n of local_opt must match that of the main optimization.

A stubbed version of local_opt can be obtained with get_local_optimizer.

pub fn get_local_optimizer(
    &mut self,
    algorithm: Algorithm
) -> Nlopt<impl ObjFn<()>, ()>
[src]

pub fn set_initial_step(&mut self, dx: &[f64]) -> OptResult[src]

For derivative-free local-optimization algorithms, the optimizer must somehow decide on some initial step size to perturb x by when it begins the optimization. This step size should be big enough that the value of the objective changes significantly, but not too big if you want to find the local optimum nearest to x. By default, NLopt chooses this initial step size heuristically from the bounds, tolerances, and other information, but this may not always be the best choice. You can use this function to modify the initial step size.

Here, dx is an array of length n containing the (nonzero) initial step size for each component of the optimization parameters x. For convenience, if you want to set the step sizes in every direction to be the same value, you can instead call set_initial_step1.

pub fn set_initial_step1(&mut self, dx: f64) -> OptResult[src]

pub fn get_initial_step(&mut self, x: &[f64]) -> Option<&[f64]>[src]

Here, x is the same as the initial guess that you plan to pass to optimize – if you have not set the initial step and NLopt is using its heuristics, its heuristic step size may depend on the initial x, which is why you must pass it here. Both x and the return value are arrays of length n.

pub fn set_population(&mut self, population: usize) -> OptResult[src]

Several of the stochastic search algorithms (e.g., CRS, MLSL, and ISRES) start by generating some initial "population" of random points x. By default, this initial population size is chosen heuristically in some algorithm-specific way, but the initial population can be changed by calling this function. A population of zero implies that the heuristic default will be used.

pub fn get_population(&mut self) -> usize[src]

pub fn srand_seed(seed: Option<u64>)[src]

For stochastic optimization algorithms, we use pseudorandom numbers generated by the Mersenne Twister algorithm, based on code from Makoto Matsumoto. By default, the seed for the random numbers is generated from the system time, so that you will get a different sequence of pseudorandom numbers each time you run your program. If you want to use a "deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run, you can set the seed with this function. To reset the seed based on the system time, you can call this function with seed = None.

pub fn set_vector_storage(&mut self, m: Option<usize>) -> OptResult[src]

Some of the NLopt algorithms are limited-memory "quasi-Newton" algorithms, which "remember" the gradients from a finite number M of the previous optimization steps in order to construct an approximate second-derivative matrix. The bigger M is, the more storage the algorithms require, but on the other hand they may converge faster for larger M. By default, NLopt chooses a heuristic value of M, but this can be changed by calling this function. Passing None tells NLopt to use the heuristic value; NLopt currently sets M to 10 or at most 10 MiB worth of vectors, whichever is larger.
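The "remember the last M gradients" idea can be sketched with a bounded deque (pure illustration of the storage trade-off; not how the crate stores them internally):

```rust
use std::collections::VecDeque;

// Keep only the M most recent gradient vectors, evicting the oldest.
struct GradientHistory {
    m: usize,
    grads: VecDeque<Vec<f64>>,
}

impl GradientHistory {
    fn new(m: usize) -> Self {
        GradientHistory { m, grads: VecDeque::new() }
    }

    fn push(&mut self, grad: Vec<f64>) {
        if self.grads.len() == self.m {
            self.grads.pop_front(); // forget the oldest step
        }
        self.grads.push_back(grad);
    }
}

fn main() {
    let mut history = GradientHistory::new(2);
    history.push(vec![1.0]);
    history.push(vec![2.0]);
    history.push(vec![3.0]); // evicts [1.0]
    assert_eq!(history.grads.len(), 2);
    assert_eq!(history.grads.front().unwrap(), &vec![2.0]);
}
```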

pub fn get_vector_storage(&mut self) -> usize[src]

pub fn version() -> (i32, i32, i32)[src]

To determine the version number of NLopt at runtime, you can call this function. For example, NLopt version 3.1.4 would return (3, 1, 4).

pub fn optimize(
    &self,
    x_init: &mut [f64]
) -> Result<(SuccessState, f64), (FailState, f64)>
[src]

Once all of the desired optimization parameters have been specified for a given Nlopt struct, you can perform the optimization by calling this function. On input, x_init is an array of length n giving an initial guess for the optimization parameters. On successful return, x_init contains the optimized values of the parameters, and the function returns the termination state together with the corresponding value of the objective function.

Trait Implementations

impl<F: ObjFn<T>, T> Drop for Nlopt<F, T>[src]

Auto Trait Implementations

impl<F, T> RefUnwindSafe for Nlopt<F, T> where
    F: RefUnwindSafe,
    T: RefUnwindSafe
[src]

impl<F, T> !Send for Nlopt<F, T>[src]

impl<F, T> !Sync for Nlopt<F, T>[src]

impl<F, T> Unpin for Nlopt<F, T> where
    F: Unpin,
    T: Unpin
[src]

impl<F, T> UnwindSafe for Nlopt<F, T> where
    F: UnwindSafe,
    T: UnwindSafe
[src]

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.