A Tree-Parzen Estimator (TPE) library for black-box optimization.
This library provides an Optuna-like API for hyperparameter optimization using the Tree-Parzen Estimator algorithm. It supports:
- Float, integer, and categorical parameter types
- Log-scale and stepped parameter sampling
- Synchronous and async optimization
- Parallel trial evaluation with bounded concurrency
- Serialization for saving/loading study state
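For background, the good/bad split at the heart of TPE (controlled by the gamma quantile described under §Configuring TPE below) can be sketched in plain Rust. The split_by_gamma helper here is illustrative only and is not part of this crate's API:

```rust
// Illustrative sketch only: `split_by_gamma` is not part of this crate's API.
// TPE's first step splits completed trials into a "good" group (the best
// gamma fraction of objective values, assuming minimization) and a "bad"
// group; it then favors parameter values likely under the good group.
fn split_by_gamma(values: &[f64], gamma: f64) -> (Vec<f64>, Vec<f64>) {
    let mut sorted = values.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Always keep at least one trial in the "good" group.
    let n_good = ((gamma * sorted.len() as f64).ceil() as usize).max(1);
    (sorted[..n_good].to_vec(), sorted[n_good..].to_vec())
}

fn main() {
    let values = [9.0, 1.0, 4.0, 16.0, 25.0, 0.25, 2.25, 6.25];
    let (good, bad) = split_by_gamma(&values, 0.25);
    println!("good: {:?}", good); // the best 25% of trials
    println!("bad:  {:?}", bad);
}
```

In the standard TPE algorithm, one density estimate is then fit per group and candidates are ranked by their good-to-bad likelihood ratio.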
§Quick Start
use optimizer::{Direction, Study, TpeSampler};
// Create a study with TPE sampler
let sampler = TpeSampler::builder().seed(42).build();
let study: Study<f64> = Study::with_sampler(Direction::Minimize, sampler);
// Optimize x^2 for 20 trials
study
.optimize_with_sampler(20, |trial| {
let x = trial.suggest_float("x", -10.0, 10.0)?;
Ok::<_, optimizer::TpeError>(x * x)
})
.unwrap();
// Get the best result
let best = study.best_trial().unwrap();
println!("Best value: {} at x={:?}", best.value, best.params);

§Creating a Study
A Study manages optimization trials. Create one with an optimization direction:
use optimizer::{Direction, RandomSampler, Study, TpeSampler};
// Minimize with default random sampler
let study: Study<f64> = Study::new(Direction::Minimize);
// Maximize with TPE sampler
let study: Study<f64> = Study::with_sampler(Direction::Maximize, TpeSampler::new());
// With seeded sampler for reproducibility
let study: Study<f64> = Study::with_sampler(Direction::Minimize, RandomSampler::with_seed(42));

§Suggesting Parameters
Within the objective function, use Trial to suggest parameter values:
use optimizer::{Direction, Study};
let study: Study<f64> = Study::new(Direction::Minimize);
study
.optimize(10, |trial| {
// Float parameters
let x = trial.suggest_float("x", 0.0, 1.0)?;
let lr = trial.suggest_float_log("learning_rate", 1e-5, 1e-1)?;
let step = trial.suggest_float_step("step", 0.0, 1.0, 0.1)?;
// Integer parameters
let n = trial.suggest_int("n_layers", 1, 10)?;
let batch = trial.suggest_int_log("batch_size", 16, 256)?;
let units = trial.suggest_int_step("units", 32, 512, 32)?;
// Categorical parameters
let optimizer = trial.suggest_categorical("optimizer", &["sgd", "adam", "rmsprop"])?;
// Return objective value
Ok::<_, optimizer::TpeError>(x * n as f64)
})
.unwrap();

§Configuring TPE
The TpeSampler can be configured using the builder pattern:
use optimizer::TpeSampler;
let sampler = TpeSampler::builder()
.gamma(0.15) // Quantile for good/bad split
.n_startup_trials(20) // Random trials before TPE
.n_ei_candidates(32) // Candidates to evaluate
.seed(42) // Reproducibility
.build();

§Async and Parallel Optimization
With the async feature enabled, you can run trials asynchronously:
use optimizer::{Study, Direction};
let study: Study<f64> = Study::new(Direction::Minimize);
// Sequential async
study.optimize_async(10, |mut trial| async move {
let x = trial.suggest_float("x", 0.0, 1.0)?;
Ok((trial, x * x))
}).await?;
// Parallel with bounded concurrency
study.optimize_parallel(10, 4, |mut trial| async move {
let x = trial.suggest_float("x", 0.0, 1.0)?;
Ok((trial, x * x))
}).await?;

§Serialization
With the serde feature enabled, studies can be serialized:
use optimizer::{Study, Direction, TpeSampler};
// Save study state
let study: Study<f64> = Study::new(Direction::Minimize);
let json = serde_json::to_string(&study)?;
// Load and continue
let mut study: Study<f64> = serde_json::from_str(&json)?;
study.set_sampler(TpeSampler::new()); // Restore sampler
study.optimize_with_sampler(10, |trial| { /* ... */ }).unwrap();

§Feature Flags
- serde: Enable serialization/deserialization of studies and trials
- async: Enable async optimization methods (requires tokio)
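For reference, a Cargo.toml entry enabling both features might look like the following (the dependency versions are assumptions, and tokio is only needed for the async feature):

```toml
[dependencies]
optimizer = { version = "0.1", features = ["serde", "async"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
```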
Structs§
- CompletedTrial - A completed trial with its parameters, distributions, and objective value.
- RandomSampler - A simple random sampler that samples uniformly from distributions.
- Study - A study manages the optimization process, tracking trials and their results.
- TpeSampler - A Tree-Parzen Estimator (TPE) sampler for Bayesian optimization.
- TpeSamplerBuilder - Builder for configuring a TpeSampler.
- Trial - A trial represents a single evaluation of the objective function.
Enums§
- Direction - The direction of optimization.
- TpeError - The error type for TPE operations.
- TrialState - The state of a trial in its lifecycle.
Traits§
- Sampler - Trait for pluggable parameter sampling strategies.
Type Aliases§
- Result - A specialized Result type for TPE operations.