Trait Optimizer 

pub trait Optimizer: Send + Sync {
    // Required methods
    fn step(
        &self,
        params: &mut [f64],
        gradients: &[f64],
        iteration: usize,
    ) -> Result<(), OptimizerError>;
    fn name(&self) -> &str;

    // Provided methods
    fn reset(&mut self) { ... }
    fn current_learning_rate(&self, _iteration: usize) -> f64 { ... }
}

Trait for optimization algorithms.

Implement this trait to create custom optimizers compatible with LogosQ’s variational algorithms.

§Thread Safety

Optimizers must be Send + Sync for use in parallel optimization. Because step takes &self, any mutable internal state (such as momentum buffers) requires interior mutability guarded by appropriate synchronization primitives.

§Example

use logosq_optimizer::{Optimizer, OptimizerError};

struct MyOptimizer {
    learning_rate: f64,
}

impl Optimizer for MyOptimizer {
    fn step(
        &self,
        params: &mut [f64],
        gradients: &[f64],
        _iteration: usize,
    ) -> Result<(), OptimizerError> {
        for (p, g) in params.iter_mut().zip(gradients.iter()) {
            *p -= self.learning_rate * g;
        }
        Ok(())
    }
     
    fn name(&self) -> &str { "MyOptimizer" }
}

§Required Methods


fn step(&self, params: &mut [f64], gradients: &[f64], iteration: usize) -> Result<(), OptimizerError>

Perform a single optimization step.

§Arguments
  • params - Mutable slice of parameters to update
  • gradients - Gradient of the objective w.r.t. parameters
  • iteration - Current iteration number (0-indexed)
§Errors
Returns an OptimizerError if the update step cannot be applied.

fn name(&self) -> &str

Get the optimizer name.

§Provided Methods


fn reset(&mut self)

Reset internal state (for optimizers with momentum).


fn current_learning_rate(&self, _iteration: usize) -> f64

Get the current learning rate; adaptive or scheduled optimizers may vary it with the iteration.

§Implementors