egobox_ego/
lib.rs

//! This library implements the Efficient Global Optimization (EGO) method.
//! It started as a port of the [EGO algorithm](https://smt.readthedocs.io/en/stable/_src_docs/applications/ego.html)
//! implemented as an application example in [SMT](https://smt.readthedocs.io/en/stable).
//!
//! The optimizer is able to deal with inequality constraints.
//! The objective and the constraints are expected to be computed together in a single call,
//! hence the given function should return an array where the first component
//! is the objective value and the remaining ones are the constraint values,
//! intended to be negative in the end.
//! The optimizer comes with a set of options to:
//! * specify the initial DoE,
//! * parameterize the internal optimization,
//! * parameterize the mixture of experts,
//! * save intermediate results and allow warm/hot restart,
//! * handle mixed-integer variables,
//! * activate the TREGO algorithm variation.
//!
//! # Examples
//!
//! ## Continuous optimization
//!
//! ```
//! use ndarray::{array, Array2, ArrayView2};
//! use egobox_ego::EgorBuilder;
//!
//! // A one-dimensional test function, x in [0., 25.] and min xsinx(x) ~ -15.1 at x ~ 18.9
//! fn xsinx(x: &ArrayView2<f64>) -> Array2<f64> {
//!     (x - 3.5) * ((x - 3.5) / std::f64::consts::PI).mapv(|v| v.sin())
//! }
//!
//! // We ask for 10 evaluations of the objective function to get the result
//! let res = EgorBuilder::optimize(xsinx)
//!             .configure(|config| config.max_iters(10))
//!             .min_within(&array![[0.0, 25.0]])
//!             .expect("optimizer configured")
//!             .run()
//!             .expect("xsinx minimized");
//! println!("Minimum found f(x) = {:?} at x = {:?}", res.y_opt, res.x_opt);
//! ```
//!
//! The implementation relies on [Mixture of Experts](egobox_moe).
//!
//! ## Mixed-integer optimization
//!
//! While the [Egor] optimizer works with continuous data (i.e. floats), it also
//! allows basic mixed-integer optimization. The configuration of the optimizer
//! as a mixed-integer optimizer is done through the `EgorBuilder`.
//!
//! As a second example, we define an objective function `mixsinx` derived from the
//! previous `xsinx` function, but accepting only integer input values.
//!
//! ```
//! use ndarray::{array, Array2, ArrayView2};
//! use linfa::ParamGuard;
//! #[cfg(feature = "blas")]
//! use ndarray_linalg::Norm;
//! #[cfg(not(feature = "blas"))]
//! use linfa_linalg::norm::*;
//! use egobox_ego::{EgorBuilder, InfillStrategy, XType};
//!
//! fn mixsinx(x: &ArrayView2<f64>) -> Array2<f64> {
//!     if (x.mapv(|v| v.round()).norm_l2() - x.norm_l2()).abs() < 1e-6 {
//!         (x - 3.5) * ((x - 3.5) / std::f64::consts::PI).mapv(|v| v.sin())
//!     } else {
//!         panic!("Error: mixsinx works only on integers, got {:?}", x)
//!     }
//! }
//!
//! let max_iters = 10;
//! let doe = array![[0.], [7.], [25.]];   // the initial doe
//!
//! // We define the input as being integer
//! let xtypes = vec![XType::Int(0, 25)];
//!
//! let res = EgorBuilder::optimize(mixsinx)
//!     .configure(|config|
//!         config.doe(&doe)  // we pass the initial doe
//!               .max_iters(max_iters)
//!               .infill_strategy(InfillStrategy::EI)
//!               .seed(42))
//!     .min_within_mixint_space(&xtypes)  // We build a mixed-integer optimizer
//!     .expect("optimizer configured")
//!     .run()
//!     .expect("Egor minimization");
//! println!("min f(x)={} at x={}", res.y_opt, res.x_opt);
//! ```
//!
//! # Usage
//!
//! The [`EgorBuilder`] struct is used to build an initial optimizer, setting
//! the objective function, an optional random seed (to get reproducible runs) and
//! a design space specifying the domain and dimensions of the inputs `x`.
//!
//! The `min_within()` and `min_within_mixint_space()` methods return an [`Egor`] object, the optimizer,
//! which can be further configured.
//! The first one is used for a continuous input space (e.g. floats only), the second one for a mixed-integer input
//! space (some variables, components of `x`, may be integer, ordered or categorical).
//!
//! Some of the most useful options are:
//!
//! * Specification of the size of the initial DoE. The default is nx+1 where nx is the dimension of x.
//!   If your objective function is not expensive you can take `3*nx` to help the optimizer
//!   approximate your objective function.
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig};
//! # let egor_config = EgorConfig::default();
//!     egor_config.n_doe(100);
//! ```
//!
//! You can also provide your own initial DoE through the `doe(your_doe)` configuration
//! method, as sketched below.
//!
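//! A minimal sketch passing a user-provided initial DoE of `x` samples (reusing the 1D
//! domain of the examples above):
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use ndarray::array;
//! # let egor_config = EgorConfig::default();
//!     let your_doe = array![[0.], [7.], [25.]];  // x samples of the initial DoE
//!     egor_config.doe(&your_doe);
//! ```
//!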
//! * Specification of the constraints (expected to be negative at the end of the optimization).
//!   In the example below we specify that 2 constraints will be computed along with the objective values, meaning
//!   the objective function is expected to return an array of shape '\[nsamples, 1 obj value + 2 cstr values\]'.
//!   A sketch of such a grouped objective/constraints function follows the configuration snippet.
//!
//! ```no_run
//! # let egor_config = egobox_ego::EgorConfig::default();
//!     egor_config.n_cstr(2);
//! ```
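//!
//! Below is a minimal sketch of such a grouped function (the objective and the two
//! constraints are made up for illustration): column 0 is the objective, columns 1 and 2
//! are constraint values expected to end up negative.
//!
//! ```no_run
//! use ndarray::{Array2, ArrayView2, Axis};
//!
//! fn f_g1_g2(x: &ArrayView2<f64>) -> Array2<f64> {
//!     let mut y = Array2::zeros((x.nrows(), 3));
//!     for (i, xi) in x.axis_iter(Axis(0)).enumerate() {
//!         y[[i, 0]] = xi[0].powi(2);   // objective value
//!         y[[i, 1]] = 1. - xi[0];      // constraint g1(x) <= 0
//!         y[[i, 2]] = xi[0] - 24.;     // constraint g2(x) <= 0
//!     }
//!     y
//! }
//! ```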
//!
//! * Instead of the default infill strategy (WB2, Watson and Barnes 2nd criterion),
//!   you can switch to either EI (Expected Improvement) or WB2S (a scaled version of WB2).
//!   See \[[Priem2019](#Priem2019)\]
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, InfillStrategy};
//! # let egor_config = EgorConfig::default();
//!     egor_config.infill_strategy(InfillStrategy::EI);
//! ```
//!
//! * Constraints modeled with a surrogate can be integrated in the infill criterion
//!   through their probability of feasibility. See \[[Sasena2002](#Sasena2002)\]
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig};
//! # let egor_config = EgorConfig::default();
//!     egor_config.cstr_infill(true);
//! ```
//!
//! * Constraints modeled with a surrogate can be used with their mean value or their upper trust bound.
//!   See \[[Priem2019](#Priem2019)\]
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, ConstraintStrategy};
//! # let egor_config = EgorConfig::default();
//!     egor_config.cstr_strategy(ConstraintStrategy::UpperTrustBound);
//! ```
//!
//! * The default Gaussian process surrogate is parameterized with a constant trend and a squared exponential correlation kernel, also
//!   known as Kriging. The optimizer uses such surrogates to approximate objective and constraint functions. The kind of surrogate
//!   can be changed using the `regression_spec()` and `correlation_spec()` methods to specify the trends and kernels to be tested to get the best
//!   approximation (quality assessed through cross-validation).
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use egobox_ego::{GpConfig, RegressionSpec, CorrelationSpec};
//! # let egor_config = EgorConfig::default();
//!     egor_config.configure_gp(|gp_conf| {
//!         gp_conf.regression_spec(RegressionSpec::CONSTANT | RegressionSpec::LINEAR)
//!                .correlation_spec(CorrelationSpec::MATERN32 | CorrelationSpec::MATERN52)
//!     });
//! ```
//!
//! * As the dimension increases, building the Gaussian process surrogate may take longer or even fail;
//!   in this case you can specify a PLS dimension reduction \[[Bartoli2019](#Bartoli2019)\].
//!   The Gaussian process will then be built using the `ndim` (usually 3 or 4) main components in the PLS projected space.
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use egobox_ego::GpConfig;
//! # let egor_config = EgorConfig::default();
//!     egor_config.configure_gp(|gp_conf| {
//!         gp_conf.kpls(3)
//!     });
//! ```
//!
//! In the regression/correlation example above, all GPs built from the specified combinations of regression and correlation are tested and the best combination for
//! each modeled function is retained. You can also simply specify `RegressionSpec::ALL` and `CorrelationSpec::ALL`
//! to test all available combinations (as sketched below), but remember that the more you test the slower it runs.
//!
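//! A minimal sketch using the same `configure_gp` API as above to test all available
//! trend and kernel combinations:
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # use egobox_ego::{GpConfig, RegressionSpec, CorrelationSpec};
//! # let egor_config = EgorConfig::default();
//!     egor_config.configure_gp(|gp_conf| {
//!         gp_conf.regression_spec(RegressionSpec::ALL)
//!                .correlation_spec(CorrelationSpec::ALL)
//!     });
//! ```
//!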
//! * The TREGO algorithm described in \[[Diouane2023](#Diouane2023)\] can be activated with
//!   the default gl1-4 configuration from the reference paper
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//!     egor_config.trego(true);
//! ```
//!
//! or with a custom configuration, here gl4-1 and beta=0.8
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//!     egor_config.configure_trego(|trego_cfg| trego_cfg.n_gl_steps((4, 1)).beta(0.8));
//! ```
//!
//! * Intermediate results can be logged at each iteration when an `outdir` directory is specified.
//!   The following files are written:
//!   * egor_config.json: the Egor configuration,
//!   * egor_initial_doe.npy: the initial DOE (x, y) as a numpy array,
//!   * egor_doe.npy: the DOE (x, y) as a numpy array,
//!   * egor_history.npy: the best (x, y) with respect to the iteration number, as an (n_iters, nx + ny) numpy array
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//!     egor_config.outdir("./.output");
//! ```
//!
//! If `warm_start` is set to `true`, the algorithm starts from the saved `egor_doe.npy`.
//!
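//! A minimal sketch, assuming a boolean `warm_start` setter matching the sentence above
//! (the option name is taken from the text, not checked against the current API):
//!
//! ```no_run
//! # use egobox_ego::EgorConfig;
//! # let egor_config = EgorConfig::default();
//!     // restart from the DOE previously saved in the output directory
//!     egor_config.outdir("./.output").warm_start(true);
//! ```
//!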
//! * Hot start checkpointing can be enabled with the `hot_start` option, specifying a number of
//!   extra iterations beyond max iters. This mechanism allows restarting after an interruption
//!   from the last saved checkpoint. While warm start restarts from the saved DoE for another max_iters
//!   iterations, hot start continues from the last saved optimizer state until max_iters
//!   is reached, with optional extra iterations.
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, HotStartMode};
//! # let egor_config = EgorConfig::default();
//!     egor_config.hot_start(HotStartMode::Enabled);
//! ```
//!
//! # Implementation notes
//!
//! * Mixture of experts and PLS dimension reduction are explained in \[[Bartoli2019](#Bartoli2019)\]
//! * Parallel evaluation is available through the selection of a qEI strategy (a sketch follows this list). See \[[Ginsbourger2010](#Ginsbourger2010)\]
//! * The mixed-integer approach is implemented using continuous relaxation. See \[[Garrido2018](#Garrido2018)\]
//! * The TREGO algorithm is implemented. See \[[Diouane2023](#Diouane2023)\]
//! * The CoEGO approach is implemented with the CCBO setting where expensive evaluations are run after the context vector update.
//!   See \[[Zhan2024](#Zhan2024)\] and \[[Pretsch2024](#Pretsch2024)\]
//! * Theta bounds are implemented as in \[[Appriou2023](#Appriou2023)\]
//! * Logarithm of Expected Improvement is implemented as in \[[Ament2025](#Ament2025)\]
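//!
//! A minimal sketch of a parallel (batch) configuration, assuming `q_points` and `qei_strategy`
//! configuration methods and a `QEiStrategy` enum (these names are assumptions, not checked
//! against the current API):
//!
//! ```no_run
//! # use egobox_ego::{EgorConfig, QEiStrategy};
//! # let egor_config = EgorConfig::default();
//!     // evaluate 4 points per iteration using a kriging believer virtual enrichment
//!     egor_config.q_points(4).qei_strategy(QEiStrategy::KrigingBeliever);
//! ```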
//!
//! # References
//!
//! \[<a id="Bartoli2019">Bartoli2019</a>\]: Bartoli, Nathalie, et al. [Adaptive modeling strategy for constrained global
//! optimization with application to aerodynamic wing design](https://www.sciencedirect.com/science/article/pii/S1270963818306011).
//! Aerospace Science and Technology 90 (2019): 85-102.
//!
//! Bouhlel, M. A., Bartoli, N., Otsmane, A., & Morlier, J. (2016). [Improving kriging surrogates
//! of high-dimensional design models by partial least squares dimension reduction](https://doi.org/10.1007/s00158-015-1395-9).
//! Structural and Multidisciplinary Optimization, 53(5), 935–952.
//!
//! Bouhlel, M. A., Hwang, J. T., Bartoli, N., Lafage, R., Morlier, J., & Martins, J. R. R. A.
//! (2019). [A python surrogate modeling framework with derivatives](https://doi.org/10.1016/j.advengsoft.2019.03.005).
//! Advances in Engineering Software, 102662.
//!
//! Dubreuil, S., Bartoli, N., Gogu, C., & Lefebvre, T. (2020). [Towards an efficient global
//! multidisciplinary design optimization algorithm](https://doi.org/10.1007/s00158-020-02514-6).
//! Structural and Multidisciplinary Optimization, 62(4), 1739–1765.
//!
//! Jones, D. R., Schonlau, M., & Welch, W. J. (1998). [Efficient global optimization of expensive
//! black-box functions](https://www.researchgate.net/publication/235709802_Efficient_Global_Optimization_of_Expensive_Black-Box_Functions).
//! Journal of Global Optimization, 13(4), 455–492.
//!
//! \[<a id="Diouane2023">Diouane2023</a>\]: Diouane, Youssef, et al.
//! [TREGO: a trust-region framework for efficient global optimization](https://arxiv.org/pdf/2101.06808).
//! Journal of Global Optimization 86.1 (2023): 1-23.
//!
//! \[<a id="Priem2019">Priem2019</a>\]: Priem, Rémy, Nathalie Bartoli, and Youssef Diouane.
//! [On the use of upper trust bounds in constrained Bayesian optimization infill criteria](https://hal.science/hal-02182492v1/file/Priem_24049.pdf).
//! AIAA Aviation 2019 Forum. 2019.
//!
//! \[<a id="Sasena2002">Sasena2002</a>\]: Sasena M., Papalambros P., Goovaerts P., 2002.
//! [Global optimization of problems with disconnected feasible regions via surrogate modeling](https://deepblue.lib.umich.edu/handle/2027.42/77089). AIAA Paper.
//!
//! \[<a id="Ginsbourger2010">Ginsbourger2010</a>\]: Ginsbourger, D., Le Riche, R., & Carraro, L. (2010).
//! [Kriging is well-suited to parallelize optimization](https://www.researchgate.net/publication/226716412_Kriging_Is_Well-Suited_to_Parallelize_Optimization).
//!
//! \[<a id="Garrido2018">Garrido2018</a>\]: E.C. Garrido-Merchan and D. Hernandez-Lobato.
//! [Dealing with categorical and integer-valued variables in Bayesian Optimization with Gaussian processes](https://arxiv.org/pdf/1805.03463).
//!
//! \[<a id="Zhan2024">Zhan2024</a>\]: Zhan, Dawei, et al.
//! [A cooperative approach to efficient global optimization](https://link.springer.com/article/10.1007/s10898-023-01316-6).
//! Journal of Global Optimization 88.2 (2024): 327-357.
//!
//! \[<a id="Pretsch2024">Pretsch2024</a>\]: Lisa Pretsch et al.
//! [Bayesian optimization of cooperative components for multi-stage aero-structural compressor blade design](https://www.researchgate.net/publication/391492598_Bayesian_optimization_of_cooperative_components_for_multi-stage_aero-structural_compressor_blade_design).
//! Struct Multidisc Optim 68, 84 (2025).
//!
//! \[<a id="Appriou2023">Appriou2023</a>\]: Appriou, T., Rullière, D., & Gaudrie, D.
//! [Combination of optimization-free kriging models for high-dimensional problems](https://doi.org/10.1007/s00180-023-01424-7),
//! Comput Stat 39, 3049–3071 (2024).
//!
//! \[<a id="Ament2025">Ament2025</a>\]: S. Ament, S. Daulton, D. Eriksson, M. Balandat, E. Bakshy,
//! [Unexpected improvements to expected improvement for Bayesian optimization](https://arxiv.org/pdf/2310.20708),
//! Advances in Neural Information Processing Systems, 2023.
//!
//! smtorg. (2018). Surrogate modeling toolbox. In [GitHub repository](https://github.com/SMTOrg/smt)
//!
#![warn(missing_docs)]
#![warn(rustdoc::broken_intra_doc_links)]

pub mod criteria;
pub mod gpmix;

mod egor;
mod errors;
mod solver;
mod types;

pub use crate::egor::*;
pub use crate::errors::*;
pub use crate::gpmix::spec::{CorrelationSpec, RegressionSpec};
pub use crate::solver::*;
pub use crate::types::*;
pub use crate::utils::{
    CHECKPOINT_FILE, Checkpoint, CheckpointingFrequency, EGOBOX_LOG, EGOR_GP_FILENAME,
    EGOR_INITIAL_GP_FILENAME, EGOR_USE_GP_RECORDER, EGOR_USE_GP_VAR_PORTFOLIO,
    EGOR_USE_MAX_PROBA_OF_FEASIBILITY, HotStartCheckpoint, HotStartMode, find_best_result_index,
};

mod optimizers;
mod utils;