Gradient Descent Optimization
Gradient descent is a first-order iterative optimization algorithm for finding local minima of differentiable functions. It’s the foundation of training neural networks and machine learning models.
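At each step the optimizer moves against the gradient: x_{k+1} = x_k − α · ∇f(x_k), where α is the learning rate. A minimal hand-rolled sketch of that loop, independent of this crate's API (step size and iteration count are illustrative):
// Gradient of f(x) = x^2 + 2x + 1 is 2x + 2, so the minimum is at x = -1.
let grad = |x: f64| 2.0 * x + 2.0;
let alpha = 0.1;          // learning rate
let mut x = 10.0;         // starting point
for _ in 0..1000 {
    x -= alpha * grad(x); // x_{k+1} = x_k - alpha * grad(x_k)
}
assert!((x + 1.0).abs() < 1e-6);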
§Examples
use advanced_algorithms::optimization::gradient_descent::{GradientDescent, LearningRate};
// Minimize f(x) = x^2 + 2x + 1
let f = |x: &[f64]| x[0] * x[0] + 2.0 * x[0] + 1.0;
let grad_f = |x: &[f64]| vec![2.0 * x[0] + 2.0];
let gd = GradientDescent::new()
.with_learning_rate(LearningRate::Constant(0.1))
.with_max_iterations(1000);
let result = gd.minimize(f, grad_f, &[10.0]).unwrap();
// Should converge to x ≈ -1
Structs§
- GradientDescent - Gradient descent optimizer configuration
- OptimizationResult - Result of optimization
- StochasticGD - Stochastic Gradient Descent for batch optimization
Enums§
- LearningRate - Learning rate strategy
Functions§
- numerical_gradient - Numerical gradient computation (for testing); see the sketch below
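The exact signature of numerical_gradient is not shown on this page; a central-difference approximation of the kind typically used for gradient checks might look like the following sketch (the function name, argument order, and epsilon are assumptions for illustration, not necessarily this crate's API):
// Hypothetical central-difference gradient for verifying analytic gradients in tests.
fn central_difference(f: impl Fn(&[f64]) -> f64, x: &[f64], eps: f64) -> Vec<f64> {
    (0..x.len())
        .map(|i| {
            let mut plus = x.to_vec();
            let mut minus = x.to_vec();
            plus[i] += eps;
            minus[i] -= eps;
            // df/dx_i ≈ (f(x + eps·e_i) - f(x - eps·e_i)) / (2·eps)
            (f(plus.as_slice()) - f(minus.as_slice())) / (2.0 * eps)
        })
        .collect()
}
// Check against the analytic gradient of f(x) = x^2 + 2x + 1 at x = 3 (f'(3) = 8):
let f = |x: &[f64]| x[0] * x[0] + 2.0 * x[0] + 1.0;
let approx = central_difference(f, &[3.0], 1e-6);
assert!((approx[0] - 8.0).abs() < 1e-4);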