pub struct AdaBound<T: Float> { /* private fields */ }
AdaBound optimizer configuration
AdaBound combines the benefits of adaptive learning rate methods (like Adam) with the strong generalization of SGD by dynamically bounding the learning rates.
§Key Features
- Smooth transition from Adam to SGD during training
- Dynamic bounds prevent learning rates from becoming too large or too small
- Better generalization than pure Adam
- Maintains fast convergence of adaptive methods
§Type Parameters
T: Floating-point type (f32 or f64)
Implementations§
impl<T: Float + ScalarOperand> AdaBound<T>
pub fn new(
    learning_rate: T,
    final_lr: T,
    beta1: T,
    beta2: T,
    epsilon: T,
    gamma: T,
    weight_decay: T,
    amsbound: bool,
) -> Result<Self>
Create a new AdaBound optimizer
§Arguments
- learning_rate: Initial learning rate (typically 0.001)
- final_lr: Final learning rate for SGD convergence (typically 0.1)
- beta1: First moment decay rate (typically 0.9)
- beta2: Second moment decay rate (typically 0.999)
- epsilon: Small constant for numerical stability (typically 1e-8)
- gamma: Convergence speed parameter (typically 1e-3)
- weight_decay: L2 regularization coefficient (typically 0.0)
- amsbound: Use the AMSBound variant if true
§Example
use optirs_core::optimizers::AdaBound;
let optimizer = AdaBound::<f32>::new(
0.001, // learning_rate
0.1, // final_lr
0.9, // beta1
0.999, // beta2
1e-8, // epsilon
1e-3, // gamma
0.0, // weight_decay
false // amsbound
).unwrap();
pub fn step(
    &mut self,
    params: ArrayView1<'_, T>,
    grads: ArrayView1<'_, T>,
) -> Result<Array1<T>>
Perform a single optimization step
§Arguments
- params: Current parameter values
- grads: Gradient values
§Returns
Result containing updated parameters or error
§Algorithm
- Initialize moments on first step
- Apply weight decay if configured
- Update biased first moment: m_t = β₁ * m_{t-1} + (1 - β₁) * g_t
- Update biased second moment: v_t = β₂ * v_{t-1} + (1 - β₂) * g_t²
- Compute bias-corrected moments
- Compute dynamic bounds: [α_l(t), α_u(t)]
- Compute clipped learning rate per parameter
- Apply parameter update: θ_{t+1} = θ_t - η_t * m̂_t
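The steps above can be sketched in plain Rust. This is an illustration on bare slices, not the crate's implementation: it omits the weight-decay and AMSBound branches, and the bound schedule follows the form from the AdaBound paper.

```rust
// Minimal sketch of one AdaBound step. Parameter names mirror the
// constructor arguments above; weight decay and AMSBound are omitted.
fn adabound_step(
    params: &mut [f64],
    grads: &[f64],
    m: &mut [f64], // first-moment estimate, zero-initialized
    v: &mut [f64], // second-moment estimate, zero-initialized
    t: u32,        // 1-based step count
    lr: f64,
    final_lr: f64,
    beta1: f64,
    beta2: f64,
    epsilon: f64,
    gamma: f64,
) {
    let tf = t as f64;
    // Dynamic bounds: both converge to final_lr as t grows.
    let lower = final_lr * (1.0 - 1.0 / (gamma * tf + 1.0));
    let upper = final_lr * (1.0 + 1.0 / (gamma * tf));
    for i in 0..params.len() {
        // Biased moment updates.
        m[i] = beta1 * m[i] + (1.0 - beta1) * grads[i];
        v[i] = beta2 * v[i] + (1.0 - beta2) * grads[i] * grads[i];
        // Bias correction.
        let m_hat = m[i] / (1.0 - beta1.powi(t as i32));
        let v_hat = v[i] / (1.0 - beta2.powi(t as i32));
        // Per-parameter learning rate, clipped into [lower, upper].
        let eta = (lr / (v_hat.sqrt() + epsilon)).clamp(lower, upper);
        params[i] -= eta * m_hat;
    }
}

fn main() {
    let mut params = vec![1.0, 2.0, 3.0];
    let grads = vec![0.1, 0.2, 0.3];
    let (mut m, mut v) = (vec![0.0; 3], vec![0.0; 3]);
    adabound_step(&mut params, &grads, &mut m, &mut v, 1,
                  0.001, 0.1, 0.9, 0.999, 1e-8, 1e-3);
    println!("{:?}", params);
}
```

Note that on the very first step the bias-corrected update reduces to roughly `lr * sign(g)`, as in Adam; the bounds only begin to bite as `gamma * t` grows.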
§Example
use optirs_core::optimizers::AdaBound;
use scirs2_core::ndarray_ext::array;
let mut optimizer = AdaBound::<f32>::default();
let params = array![1.0, 2.0, 3.0];
let grads = array![0.1, 0.2, 0.3];
let updated_params = optimizer.step(params.view(), grads.view()).unwrap();
pub fn step_count(&self) -> usize
Get the number of optimization steps performed
pub fn current_bounds(&self) -> (T, T)
Get the current dynamic bounds as a (lower, upper) tuple
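To see how the bounds evolve, the schedule from the AdaBound paper can be evaluated directly. The crate's exact schedule may differ in detail; this sketch shows the characteristic behavior, with both bounds tightening toward `final_lr`:

```rust
// Bound schedule in the paper's form:
//   eta_l(t) = final_lr * (1 - 1 / (gamma * t + 1))
//   eta_u(t) = final_lr * (1 + 1 / (gamma * t))
fn bounds(final_lr: f64, gamma: f64, t: u64) -> (f64, f64) {
    let tf = t as f64;
    let lower = final_lr * (1.0 - 1.0 / (gamma * tf + 1.0));
    let upper = final_lr * (1.0 + 1.0 / (gamma * tf));
    (lower, upper)
}

fn main() {
    // Early in training the interval is nearly unbounded (Adam-like);
    // late in training it collapses onto final_lr (SGD-like).
    for &t in &[1u64, 1_000, 1_000_000] {
        let (lo, hi) = bounds(0.1, 1e-3, t);
        println!("t = {:>9}: bounds = [{:.6}, {:.6}]", t, lo, hi);
    }
}
```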
Trait Implementations§
impl<'de, T> Deserialize<'de> for AdaBound<T>
where
    T: Deserialize<'de> + Float,
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
Auto Trait Implementations§
impl<T> Freeze for AdaBound<T> where T: Freeze
impl<T> RefUnwindSafe for AdaBound<T> where T: RefUnwindSafe
impl<T> Send for AdaBound<T> where T: Send
impl<T> Sync for AdaBound<T> where T: Sync
impl<T> Unpin for AdaBound<T> where T: Unpin
impl<T> UnwindSafe for AdaBound<T> where T: UnwindSafe + RefUnwindSafe
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> CloneToUninit for T where T: Clone
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.
impl<T> Pointable for T
impl<T> Serialize for T
fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<(), Error>
fn do_erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<(), ErrorImpl>
impl<SS, SP> SupersetOf<SS> for SP where SS: SubsetOf<SP>
fn to_subset(&self) -> Option<SS>
The inverse inclusion map: attempts to construct self from the equivalent element of its superset.
fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).
fn to_subset_unchecked(&self) -> SS
Use with care! Same as self.to_subset but without any property checks. Always succeeds.
fn from_subset(element: &SS) -> SP
The inclusion map: converts self to the equivalent element of its superset.