## Overview

The stochy crate is a collection of stochastic approximation algorithms:

- RSPSA (Resilient Simultaneous Perturbation Stochastic Approximation)
- SPSA (Simultaneous Perturbation Stochastic Approximation)
You can use stochy to:
- Minimize functions with multiple parameters, without requiring a gradient.
- Optimize parameters in game-playing programs using relative difference functions.
stochy is compatible with both the stepwise algorithm API and
the argmin solver API (enabled via the `argmin` feature flag). Relative difference functions are supported only through the stepwise API.
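Concretely, a relative difference function reports only how much better one set of parameters performs than another, which is exactly the signal available when two versions of a game-playing program play each other. The closure below is a minimal sketch of the idea; the stand-in `score` function is purely illustrative and not part of stochy's API.

```rust
// Sketch of a relative difference function: given two candidate parameter
// vectors `a` and `b`, return an estimate of f(a) - f(b) without ever
// exposing f itself. In a game-playing engine this estimate would come from
// match results; here a stand-in quadratic keeps the example runnable.
let diff_fn = |a: &[f64], b: &[f64]| -> f64 {
    let score = |x: &[f64]| (x[0] - 1.0).powi(2) + x[1].powi(2); // illustrative only
    score(a) - score(b)
};

// A negative value means `a` scores lower (better, when minimizing) than `b`.
assert!(diff_fn(&[1.0, 0.0], &[3.0, 3.0]) < 0.0);
```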
## Usage

Example Cargo.toml:

```toml
[dependencies]
stochy = "0.0.3"

# if using argmin, replace the above with:
# stochy = { version = "0.0.3", features = ["argmin"] }
```
### Example 1

```rust
use stochy::{SpsaAlgo, SpsaParams};
use stepwise::{assert_approx_eq, fixed_iters, Driver as _};

// Quadratic objective (minimum at [1.0, 2.0]); any `Fn(&[f64]) -> f64` works here.
let f = |x: &[f64]| (x[0] - 1.0).powi(2) + (x[1] - 2.0).powi(2);

// (`SpsaParams`/`SpsaAlgo` are assumed names; see the crate docs for exact identifiers.)
let hyperparams = SpsaParams::default();
let initial_guess = vec![5.0, 5.0];
let spsa = SpsaAlgo::from_fn(hyperparams, initial_guess, f).expect("bad hyperparameters");

let (solved, _step) = fixed_iters(spsa, 1000)
    .on_step(|algo, _step| println!("{:?}", algo.x()))
    .solve()
    .expect("solving failed");

assert_approx_eq!(solved.x(), [1.0, 2.0].as_slice(), 1e-2);
println!("solution: {:?}", solved.x());
```
### Example 2 (argmin)

This example is equivalent to Example 1, but uses the argmin crate to manage the SPSA algorithm.

```rust
use assert_approx_eq::assert_approx_eq;
use argmin::core::{CostFunction, Error, Executor};
use stochy::{SpsaAlgo, SpsaParams};

// Quadratic cost (minimum at [1.0, 2.0]) expressed via argmin's `CostFunction` trait.
struct Quadratic;

impl CostFunction for Quadratic {
    type Param = Vec<f64>;
    type Output = f64;
    fn cost(&self, x: &Self::Param) -> Result<Self::Output, Error> {
        Ok((x[0] - 1.0).powi(2) + (x[1] - 2.0).powi(2))
    }
}

// (`SpsaAlgo::new` and the other stochy identifiers are assumed names; see the crate docs.)
let hyperparams = SpsaParams::default();
let algo = SpsaAlgo::new(hyperparams);
let exec = Executor::new(Quadratic, algo);
let initial_param = vec![5.0, 5.0];
let result = exec
    .configure(|state| state.param(initial_param).max_iters(1000))
    .run()
    .unwrap();
let best_param = result.state.best_param.unwrap();
assert_approx_eq!(best_param[0], 1.0, 1e-2);
println!("solution: {best_param:?}");
```
## Comparison
Table 1: Feature comparison of the algorithms contrasted with the more familiar Gradient Descent algorithm.
| Gradient Descent (reference) | RSPSA | SPSA |
|---|---|---|
| Requires gradient function | No gradient function required | No gradient function required |
| Cannot work from a relative difference function | Accepts a relative difference function | Accepts a relative difference function |
| One gradient evaluation per iteration | Two function evaluations per iteration | Two function evaluations per iteration |
| Single learning-rate hyperparameter | Less sensitive to hyperparameters than SPSA | Very sensitive to hyperparameters |
| Continuous convergence progression | Convergence saturation | Continuous convergence progression |
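The two-evaluations-per-iteration cost in Table 1 comes from the simultaneous-perturbation gradient estimate that SPSA-style methods use: every component of the gradient is estimated from the same pair of function evaluations, however many parameters there are. The function below is a conceptual sketch of that estimate, not stochy's internal code.

```rust
// Simultaneous-perturbation gradient estimate used by SPSA-style methods.
// `delta` holds random +/-1 (Rademacher) directions and `c` is the
// perturbation-size hyperparameter; only two evaluations of `f` are needed
// no matter how many parameters `x` has.
fn spsa_gradient(f: impl Fn(&[f64]) -> f64, x: &[f64], c: f64, delta: &[f64]) -> Vec<f64> {
    let x_plus: Vec<f64> = x.iter().zip(delta).map(|(xi, di)| xi + c * di).collect();
    let x_minus: Vec<f64> = x.iter().zip(delta).map(|(xi, di)| xi - c * di).collect();
    let diff = f(&x_plus) - f(&x_minus); // the only two function evaluations
    delta.iter().map(|di| diff / (2.0 * c * di)).collect()
}
```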