pub trait Optimizer: Send + Sync {
    // Required methods
    fn step(
        &self,
        params: &mut [f64],
        gradients: &[f64],
        iteration: usize,
    ) -> Result<(), OptimizerError>;
    fn name(&self) -> &str;

    // Provided methods
    fn reset(&mut self) { ... }
    fn current_learning_rate(&self, _iteration: usize) -> f64 { ... }
}
Trait for optimization algorithms.
Implement this trait to create custom optimizers compatible with LogosQ’s variational algorithms.
§Thread Safety
Optimizers must be Send + Sync for use in parallel optimization. Because step takes &self, any mutable internal state should be wrapped in an appropriate synchronization primitive.
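For example, a hypothetical momentum optimizer (illustrative, not part of this crate) can keep its velocity buffer behind a std::sync::Mutex so that step can mutate it through &self. A minimal sketch:

use std::sync::Mutex;
use logosq_optimizer::{Optimizer, OptimizerError};

struct MomentumOptimizer {
    learning_rate: f64,
    momentum: f64,
    // The Mutex keeps the mutable velocity buffer usable behind &self
    // while the type remains Send + Sync.
    velocity: Mutex<Vec<f64>>,
}

impl Optimizer for MomentumOptimizer {
    fn step(
        &self,
        params: &mut [f64],
        gradients: &[f64],
        _iteration: usize,
    ) -> Result<(), OptimizerError> {
        let mut velocity = self.velocity.lock().expect("velocity lock poisoned");
        // Lazily size the buffer on first use.
        if velocity.len() != params.len() {
            velocity.resize(params.len(), 0.0);
        }
        for ((p, g), v) in params.iter_mut().zip(gradients).zip(velocity.iter_mut()) {
            *v = self.momentum * *v - self.learning_rate * g;
            *p += *v;
        }
        Ok(())
    }

    fn name(&self) -> &str {
        "MomentumOptimizer"
    }
}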
§Example
use logosq_optimizer::{Optimizer, OptimizerError};

struct MyOptimizer {
    learning_rate: f64,
}

impl Optimizer for MyOptimizer {
    fn step(
        &self,
        params: &mut [f64],
        gradients: &[f64],
        _iteration: usize,
    ) -> Result<(), OptimizerError> {
        // Plain gradient descent: p <- p - lr * g
        for (p, g) in params.iter_mut().zip(gradients.iter()) {
            *p -= self.learning_rate * g;
        }
        Ok(())
    }

    fn name(&self) -> &str {
        "MyOptimizer"
    }
}
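A hypothetical call site for the optimizer above (the values are illustrative):

let opt = MyOptimizer { learning_rate: 0.1 };
let mut params = vec![0.5, -0.3];
let gradients = vec![0.2, -0.1];
opt.step(&mut params, &gradients, 0).expect("step failed");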
§Required Methods

fn step(
    &self,
    params: &mut [f64],
    gradients: &[f64],
    iteration: usize,
) -> Result<(), OptimizerError>
Perform a single optimization step.
§Arguments
params - Mutable slice of parameters to update
gradients - Gradient of the objective w.r.t. the parameters
iteration - Current iteration number (0-indexed)
§Errors
OptimizerError::DimensionMismatch if params and gradients differ in length
OptimizerError::Diverged if NaN or Inf is encountered
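To illustrate the calling contract, a hypothetical driver loop (minimize and grad_fn are assumptions, not part of this crate) can propagate both errors with the ? operator:

use logosq_optimizer::{Optimizer, OptimizerError};

// Repeatedly evaluate gradients and step the optimizer, forwarding
// any OptimizerError (e.g. DimensionMismatch or Diverged) to the caller.
fn minimize<O: Optimizer>(
    opt: &O,
    params: &mut [f64],
    grad_fn: impl Fn(&[f64]) -> Vec<f64>,
    iterations: usize,
) -> Result<(), OptimizerError> {
    for i in 0..iterations {
        let gradients = grad_fn(&*params);
        opt.step(params, &gradients, i)?;
    }
    Ok(())
}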
§Provided Methods
fn current_learning_rate(&self, _iteration: usize) -> f64
Get the current learning rate, which may vary with the iteration for adaptive or scheduled optimizers.
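Schedule-based optimizers can override this method; a sketch of a hypothetical SGD variant with a 1 / (1 + decay * t) schedule (the names and formula are illustrative, not part of this crate):

use logosq_optimizer::{Optimizer, OptimizerError};

struct DecayingSgd {
    base_learning_rate: f64,
    decay: f64,
}

impl Optimizer for DecayingSgd {
    fn step(
        &self,
        params: &mut [f64],
        gradients: &[f64],
        iteration: usize,
    ) -> Result<(), OptimizerError> {
        // Use the scheduled rate so step and current_learning_rate agree.
        let lr = self.current_learning_rate(iteration);
        for (p, g) in params.iter_mut().zip(gradients.iter()) {
            *p -= lr * g;
        }
        Ok(())
    }

    fn name(&self) -> &str {
        "DecayingSgd"
    }

    fn current_learning_rate(&self, iteration: usize) -> f64 {
        // Rate shrinks as 1 / (1 + decay * t).
        self.base_learning_rate / (1.0 + self.decay * iteration as f64)
    }
}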