Module gradient_descent

Gradient Descent Optimization

Gradient descent is a first-order iterative optimization algorithm for finding local minima of differentiable functions. It’s the foundation of training neural networks and machine learning models.
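Each iteration moves the current point a small step against the gradient: x_{k+1} = x_k − η·∇f(x_k), where η is the learning rate. A minimal, self-contained sketch of this update loop in plain Rust (an illustration of the algorithm itself, not this module's API):

// Core update rule, independent of this crate's types.
fn gradient_descent_step(x: &mut [f64], grad: &[f64], learning_rate: f64) {
    for (xi, gi) in x.iter_mut().zip(grad.iter()) {
        *xi -= learning_rate * *gi;
    }
}

fn main() {
    let grad_f = |x: &[f64]| vec![2.0 * x[0] + 2.0]; // gradient of (x + 1)^2
    let mut x = [10.0];
    for _ in 0..1000 {
        let g = grad_f(&x);
        gradient_descent_step(&mut x, &g, 0.1);
    }
    assert!((x[0] + 1.0).abs() < 1e-6); // converges to the minimum at x = -1
}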

Examples

use advanced_algorithms::optimization::gradient_descent::{GradientDescent, LearningRate};

// Minimize f(x) = x^2 + 2x + 1
let f = |x: &[f64]| x[0] * x[0] + 2.0 * x[0] + 1.0;
let grad_f = |x: &[f64]| vec![2.0 * x[0] + 2.0];

let gd = GradientDescent::new()
    .with_learning_rate(LearningRate::Constant(0.1))
    .with_max_iterations(1000);

let result = gd.minimize(f, grad_f, &[10.0]).unwrap();
// Should converge to x ≈ -1

Structs

GradientDescent
Gradient descent optimizer configuration
OptimizationResult
Result of optimization
StochasticGD
Stochastic gradient descent for mini-batch optimization

Enums

LearningRate
Learning rate strategy
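
The example above uses LearningRate::Constant, the only variant shown on this page. Conceptually, a learning-rate strategy maps the iteration index to a step size. A hypothetical sketch of such an enum (the ExponentialDecay variant and the at method are illustrative assumptions, not this crate's definition):

// Illustrative sketch only; everything beyond a Constant variant is an
// assumption about how a schedule enum might look, not this crate's API.
enum LearningRateSketch {
    Constant(f64),
    ExponentialDecay { initial: f64, decay: f64 },
}

impl LearningRateSketch {
    fn at(&self, iteration: u32) -> f64 {
        match self {
            Self::Constant(eta) => *eta,
            // eta_t = initial * decay^t
            Self::ExponentialDecay { initial, decay } => *initial * decay.powi(iteration as i32),
        }
    }
}

fn main() {
    let schedule = LearningRateSketch::ExponentialDecay { initial: 0.1, decay: 0.99 };
    assert!(schedule.at(10) < schedule.at(0)); // step size shrinks over time
}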

Functions

numerical_gradient
Numerical gradient computation (for testing)
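
numerical_gradient is documented as a testing aid; the usual pattern is to validate an analytic gradient against finite differences. Since its exact signature is not shown here, the following is a self-contained central-difference sketch of what such a helper typically computes, not this crate's implementation:

// Central-difference approximation of the gradient (illustrative only).
fn numerical_gradient_sketch<F>(f: F, x: &[f64], eps: f64) -> Vec<f64>
where
    F: Fn(&[f64]) -> f64,
{
    let mut grad = Vec::with_capacity(x.len());
    let mut probe = x.to_vec();
    for i in 0..x.len() {
        probe[i] = x[i] + eps;
        let plus = f(&probe);
        probe[i] = x[i] - eps;
        let minus = f(&probe);
        probe[i] = x[i]; // restore before moving to the next coordinate
        grad.push((plus - minus) / (2.0 * eps));
    }
    grad
}

fn main() {
    let f = |x: &[f64]| x[0] * x[0] + 2.0 * x[0] + 1.0;
    let analytic = |x: &[f64]| vec![2.0 * x[0] + 2.0];
    let x = [3.0];
    let num = numerical_gradient_sketch(f, &x, 1e-6);
    assert!((num[0] - analytic(&x)[0]).abs() < 1e-4); // numerical matches analytic
}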