egobox_ego/lib.rs
//! This library implements the Efficient Global Optimization (EGO) method.
//! It started as a port of the [EGO algorithm](https://smt.readthedocs.io/en/stable/_src_docs/applications/ego.html)
//! implemented as an application example in [SMT](https://smt.readthedocs.io/en/stable).
//!
//! The optimizer is able to deal with inequality constraints.
//! Objective and constraints are expected to be computed together,
//! hence the given function should return a vector whose first component
//! is the objective value and whose remaining components are the constraint values,
//! intended to be negative in the end.
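//!
//! For instance, here is a minimal sketch of such a grouped function (the objective
//! and constraint formulas are purely illustrative, not part of the library):
//!
//! ```no_run
//! use ndarray::{concatenate, Array2, ArrayView2, Axis};
//!
//! // Column 0: objective value; column 1: one constraint value
//! // (a constraint is satisfied when its value is negative)
//! fn fun(x: &ArrayView2<f64>) -> Array2<f64> {
//!     let obj = x.map_axis(Axis(1), |row| row.sum()).insert_axis(Axis(1));
//!     let cstr = x.map_axis(Axis(1), |row| row[0] - 1.0).insert_axis(Axis(1));
//!     concatenate![Axis(1), obj, cstr]
//! }
//! ```
//!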
//! The optimizer comes with a set of options to:
//! * specify the initial doe,
//! * parameterize the internal optimization,
//! * parameterize the mixture of experts,
//! * save intermediate results and allow warm/hot restart,
//! * handle mixed-integer variables,
//! * activate the TREGO algorithm variation.
//!
//! # Examples
//!
//! ## Continuous optimization
//!
//! ```
//! use ndarray::{array, Array2, ArrayView2};
//! use egobox_ego::EgorBuilder;
//!
//! // A one-dimensional test function, x in [0., 25.] and min xsinx(x) ~ -15.1 at x ~ 18.9
//! fn xsinx(x: &ArrayView2<f64>) -> Array2<f64> {
//!     (x - 3.5) * ((x - 3.5) / std::f64::consts::PI).mapv(|v| v.sin())
//! }
//!
//! // We ask for 10 evaluations of the objective function to get the result
//! let res = EgorBuilder::optimize(xsinx)
//!     .configure(|config| config.max_iters(10))
//!     .min_within(&array![[0.0, 25.0]])
//!     .run()
//!     .expect("xsinx minimized");
//! println!("Minimum found f(x) = {:?} at x = {:?}", res.y_opt, res.x_opt);
//! ```
//!
//! The implementation relies on [Mixture of Experts](egobox_moe).
//!
//! ## Mixed-integer optimization
//!
//! While the [Egor] optimizer works with continuous data (i.e. floats), it also
//! allows basic mixed-integer optimization. The optimizer is configured as a
//! mixed-integer optimizer through the `EgorBuilder`.
//!
//! As a second example, we define an objective function `mixsinx`, derived from the
//! function `xsinx` defined above, which only accepts integer input values.
//!
//! ```
//! use ndarray::{array, Array2, ArrayView2};
//! use linfa::ParamGuard;
//! #[cfg(feature = "blas")]
//! use ndarray_linalg::Norm;
//! #[cfg(not(feature = "blas"))]
//! use linfa_linalg::norm::*;
//! use egobox_ego::{EgorBuilder, InfillStrategy, XType};
//!
//! fn mixsinx(x: &ArrayView2<f64>) -> Array2<f64> {
//!     if (x.mapv(|v| v.round()).norm_l2() - x.norm_l2()).abs() < 1e-6 {
//!         (x - 3.5) * ((x - 3.5) / std::f64::consts::PI).mapv(|v| v.sin())
//!     } else {
//!         panic!("Error: mixsinx works only on integers, got {:?}", x)
//!     }
//! }
//!
//! let max_iters = 10;
//! let doe = array![[0.], [7.], [25.]]; // the initial doe
//!
//! // We define the input as being integer
//! let xtypes = vec![XType::Int(0, 25)];
//!
//! let res = EgorBuilder::optimize(mixsinx)
//!     .configure(|config|
//!         config.doe(&doe) // we pass the initial doe
//!             .max_iters(max_iters)
//!             .infill_strategy(InfillStrategy::EI)
//!             .seed(42))
//!     .min_within_mixint_space(&xtypes) // We build a mixed-integer optimizer
//!     .run()
//!     .expect("Egor minimization");
//! println!("min f(x)={} at x={}", res.y_opt, res.x_opt);
//! ```
//!
//! # Usage
//!
//! The [`EgorBuilder`] class is used to build an initial optimizer, setting
//! the objective function, an optional random seed (to get reproducible runs) and
//! a design space specifying the domain and dimensions of the inputs `x`.
//!
//! The `min_within()` and `min_within_mixint_space()` methods return an [`Egor`] object, the optimizer,
//! which can be further configured.
//! The first one is used for a continuous input space (e.g. floats only), the second one for a mixed-integer input
//! space (some variables, components of `x`, may be integer, ordered or categorical).
//!
//! Some of the most useful options are:
//!
//! * Specification of the size of the initial DoE. The default is `nx + 1` where `nx` is the dimension of `x`.
//! If your objective function is not expensive, you can take `3*nx` to help the optimizer
//! approximate your objective function.
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//! egor_config.n_doe(100);
//! ```
//!
//! You can also provide your initial doe through the `egor.doe(your_doe)` method.
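//!
//! For instance (a sketch; here the doe contains input points only, mirroring the
//! mixed-integer example above):
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use ndarray::array;
//! # let egor_config = EgorConfig::default();
//! // Three starting points in a one-dimensional input space
//! let my_doe = array![[0.0], [10.0], [25.0]];
//! egor_config.doe(&my_doe);
//! ```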
//!
//! * Specification of constraints (expected to be negative at the end of the optimization).
//! In the example below we specify that 2 constraints will be computed along with the objective value, meaning
//! the objective function is expected to return an array '\[nsamples, 1 obj value + 2 cstr values\]'.
//!
//! ```no_run
//! # let egor_config = egobox_ego::EgorConfig::default();
//! egor_config.n_cstr(2);
//! ```
//!
//! * If the default infill strategy (WB2, Watson and Barnes 2nd criterion) does not suit your needs,
//! you can switch to either EI (Expected Improvement) or WB2S (scaled version of WB2).
//! See \[[Priem2019](#Priem2019)\].
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, InfillStrategy};
//! # let egor_config = EgorConfig::default();
//! egor_config.infill_strategy(InfillStrategy::EI);
//! ```
//!
//! * Constraints modeled with a surrogate can be integrated into the infill criterion
//! through their probability of feasibility. See \[[Sasena2002](#Sasena2002)\].
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//! egor_config.cstr_infill(true);
//! ```
//!
//! * Constraints modeled with a surrogate can be used with either their mean value or their upper trust bound.
//! See \[[Priem2019](#Priem2019)\].
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, ConstraintStrategy};
//! # let egor_config = EgorConfig::default();
//! egor_config.cstr_strategy(ConstraintStrategy::UpperTrustBound);
//! ```
148//!
149//! * The default gaussian process surrogate is parameterized with a constant trend and a squared exponential correlation kernel, also
150//! known as Kriging. The optimizer use such surrogates to approximate objective and constraint functions. The kind of surrogate
151//! can be changed using `regression_spec` and `correlation_spec()` methods to specify trend and kernels tested to get the best
152//! approximation (quality tested through cross validation).
153//!
154//! ```no_run
155//! # use egobox_ego::EgorConfig;
156//! # use egobox_ego::{GpConfig, RegressionSpec, CorrelationSpec};
157//! # let egor_config = EgorConfig::default();
158//! egor_config.configure_gp(|gp_conf| {
159//! gp_conf.regression_spec(RegressionSpec::CONSTANT | RegressionSpec::LINEAR)
160//! .correlation_spec(CorrelationSpec::MATERN32 | CorrelationSpec::MATERN52)
161//! });
162//! ```
//!
//! * As the dimension increases, building the Gaussian process surrogate may take longer or even fail;
//! in this case you can specify a PLS dimension reduction \[[Bartoli2019](#Bartoli2019)\].
//! The Gaussian process will then be built using the `ndim` (usually 3 or 4) main components in the PLS projected space.
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use egobox_ego::GpConfig;
//! # let egor_config = EgorConfig::default();
//! egor_config.configure_gp(|gp_conf| {
//!     gp_conf.kpls(3)
//! });
//! ```
//!
//! In the regression/correlation example above, all GPs built from combinations of the specified regressions and correlations are tested, and the best combination for
//! each modeled function is retained. You can also simply specify `RegressionSpec::ALL` and `CorrelationSpec::ALL` to
//! test all available combinations, but remember that the more you test, the slower it runs.
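//!
//! For instance, to test every available combination:
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use egobox_ego::{GpConfig, RegressionSpec, CorrelationSpec};
//! # let egor_config = EgorConfig::default();
//! egor_config.configure_gp(|gp_conf| {
//!     gp_conf.regression_spec(RegressionSpec::ALL)
//!         .correlation_spec(CorrelationSpec::ALL)
//! });
//! ```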
//!
//! * The TREGO algorithm described in \[[Diouane2023](#Diouane2023)\] can be activated.
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//! egor_config.trego(true);
//! ```
//!
//! * Intermediate results can be logged at each iteration when the `outdir` directory is specified.
//! The following files are written:
//!   * egor_config.json: Egor configuration,
//!   * egor_initial_doe.npy: initial DOE (x, y) as a numpy array,
//!   * egor_doe.npy: DOE (x, y) as a numpy array,
//!   * egor_history.npy: best (x, y) with respect to the iteration number, as an (n_iters, nx + ny) numpy array.
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//! egor_config.outdir("./.output");
//! ```
//! If `warm_start` is set to `true`, the algorithm starts from the saved `egor_doe.npy`.
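//!
//! A sketch combining both options (assuming the `warm_start(bool)` setter implied by
//! the sentence above; the exact name may differ across versions):
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//! // Restart from the doe saved in ./.output by a previous run (assumed setter)
//! egor_config.outdir("./.output").warm_start(true);
//! ```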
//!
//! * Hot start checkpointing can be enabled with the `hot_start` option, specifying a number of
//! extra iterations beyond max iters. This mechanism allows restarting after an interruption
//! from the last saved checkpoint. While warm start restarts from the saved doe for another `max_iters`
//! iterations, hot start continues from the last saved optimizer state until `max_iters`
//! is reached, with optional extra iterations.
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, HotStartMode};
//! # let egor_config = EgorConfig::default();
//! egor_config.hot_start(HotStartMode::Enabled);
//! ```
//!
//! # Implementation notes
//!
//! * Mixture of experts and PLS dimension reduction are explained in \[[Bartoli2019](#Bartoli2019)\]
//! * Parallel evaluation is available through the selection of a qEI strategy, as sketched after this list. See \[[Ginsbourger2010](#Ginsbourger2010)\]
//! * The mixed-integer approach is implemented using continuous relaxation. See \[[Garrido2018](#Garrido2018)\]
//! * The TREGO algorithm is implemented. See \[[Diouane2023](#Diouane2023)\]
//! * The CoEGO approach is implemented with the CCBO setting, where expensive evaluations are run after the context vector update.
//! See \[[Zhan2024](#Zhan2024)\] and \[[Pretsch2024](#Pretsch2024)\]
//! * Theta bounds are implemented as in \[[Appriou2023](#Appriou2023)\]
//! * Logarithm of Expected Improvement is implemented as in \[[Ament2025](#Ament2025)\]
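//!
//! As an illustration of the parallel (qEI) evaluation mentioned above, here is a sketch
//! assuming the `q_points`/`qei_strategy` configuration methods and the `QEiStrategy` enum;
//! treat these names as assumptions and check the API of your version:
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, QEiStrategy};
//! # let egor_config = EgorConfig::default();
//! // Ask for 4 points per iteration using a Kriging believer strategy (assumed names)
//! egor_config.q_points(4).qei_strategy(QEiStrategy::KrigingBeliever);
//! ```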
//!
//! # References
//!
//! \[<a id="Bartoli2019">Bartoli2019</a>\]: Bartoli, Nathalie, et al. [Adaptive modeling strategy for constrained global
//! optimization with application to aerodynamic wing design](https://www.sciencedirect.com/science/article/pii/S1270963818306011).
//! Aerospace Science and Technology 90 (2019): 85-102.
//!
//! Bouhlel, M. A., Bartoli, N., Otsmane, A., & Morlier, J. (2016). [Improving kriging surrogates
//! of high-dimensional design models by partial least squares dimension reduction](https://doi.org/10.1007/s00158-015-1395-9).
//! Structural and Multidisciplinary Optimization, 53(5), 935–952.
//!
//! Bouhlel, M. A., Hwang, J. T., Bartoli, N., Lafage, R., Morlier, J., & Martins, J. R. R. A.
//! (2019). [A python surrogate modeling framework with derivatives](https://doi.org/10.1016/j.advengsoft.2019.03.005).
//! Advances in Engineering Software, 102662.
//!
//! Dubreuil, S., Bartoli, N., Gogu, C., & Lefebvre, T. (2020). [Towards an efficient global multi-
//! disciplinary design optimization algorithm](https://doi.org/10.1007/s00158-020-02514-6).
//! Structural and Multidisciplinary Optimization, 62(4), 1739–1765.
//!
//! Jones, D. R., Schonlau, M., & Welch, W. J. (1998). [Efficient global optimization of expensive
//! black-box functions](https://www.researchgate.net/publication/235709802_Efficient_Global_Optimization_of_Expensive_Black-Box_Functions).
//! Journal of Global Optimization, 13(4), 455–492.
//!
//! \[<a id="Diouane2023">Diouane2023</a>\]: Diouane, Youssef, et al.
//! [TREGO: a trust-region framework for efficient global optimization](https://arxiv.org/pdf/2101.06808).
//! Journal of Global Optimization 86.1 (2023): 1-23.
//!
//! \[<a id="Priem2019">Priem2019</a>\]: Priem, Rémy, Nathalie Bartoli, and Youssef Diouane.
//! [On the use of upper trust bounds in constrained Bayesian optimization infill criteria](https://hal.science/hal-02182492v1/file/Priem_24049.pdf).
//! AIAA Aviation 2019 Forum. 2019.
//!
//! \[<a id="Sasena2002">Sasena2002</a>\]: Sasena M., Papalambros P., Goovaerts P., 2002.
//! [Global optimization of problems with disconnected feasible regions via surrogate modeling](https://deepblue.lib.umich.edu/handle/2027.42/77089). AIAA Paper.
//!
//! \[<a id="Ginsbourger2010">Ginsbourger2010</a>\]: Ginsbourger, D., Le Riche, R., & Carraro, L. (2010).
//! [Kriging is well-suited to parallelize optimization](https://www.researchgate.net/publication/226716412_Kriging_Is_Well-Suited_to_Parallelize_Optimization).
//!
//! \[<a id="Garrido2018">Garrido2018</a>\]: E.C. Garrido-Merchan and D. Hernandez-Lobato.
//! [Dealing with categorical and integer-valued variables in Bayesian Optimization with Gaussian processes](https://arxiv.org/pdf/1805.03463).
//!
//! \[<a id="Zhan2024">Zhan2024</a>\]: Zhan, Dawei, et al.
//! [A cooperative approach to efficient global optimization](https://link.springer.com/article/10.1007/s10898-023-01316-6).
//! Journal of Global Optimization 88.2 (2024): 327-357.
//!
//! \[<a id="Pretsch2024">Pretsch2024</a>\]: Lisa Pretsch et al.
//! [Bayesian optimization of cooperative components for multi-stage aero-structural compressor blade design](https://www.researchgate.net/publication/391492598_Bayesian_optimization_of_cooperative_components_for_multi-stage_aero-structural_compressor_blade_design).
//! Structural and Multidisciplinary Optimization 68, 84 (2025).
//!
//! \[<a id="Appriou2023">Appriou2023</a>\]: Appriou, T., Rullière, D. & Gaudrie, D.,
//! [Combination of optimization-free kriging models for high-dimensional problems](https://doi.org/10.1007/s00180-023-01424-7),
//! Computational Statistics 39, 3049–3071 (2024).
//!
//! \[<a id="Ament2025">Ament2025</a>\]: S. Ament, S. Daulton, D. Eriksson, M. Balandat, E. Bakshy,
//! [Unexpected improvements to expected improvement for Bayesian optimization](https://arxiv.org/pdf/2310.20708),
//! Advances in Neural Information Processing Systems, 2023.
//!
//! smtorg. (2018). Surrogate modeling toolbox. In [GitHub repository](https://github.com/SMTOrg/smt)
//!
#![warn(missing_docs)]
#![warn(rustdoc::broken_intra_doc_links)]

pub mod criteria;
pub mod gpmix;

mod egor;
mod errors;
mod solver;
mod types;

pub use crate::egor::*;
pub use crate::errors::*;
pub use crate::gpmix::spec::{CorrelationSpec, RegressionSpec};
pub use crate::solver::*;
pub use crate::types::*;
pub use crate::utils::{
    CHECKPOINT_FILE, Checkpoint, CheckpointingFrequency, EGOBOX_LOG, EGOR_GP_FILENAME,
    EGOR_INITIAL_GP_FILENAME, EGOR_USE_GP_RECORDER, EGOR_USE_GP_VAR_PORTFOLIO,
    EGOR_USE_MAX_PROBA_OF_FEASIBILITY, HotStartCheckpoint, HotStartMode, find_best_result_index,
};

mod optimizers;
mod utils;