# OptiRS Core
Core optimization algorithms and utilities for the OptiRS machine learning optimization library.
## Overview
OptiRS-Core provides the foundational optimization algorithms and mathematical utilities that power the entire OptiRS ecosystem. This crate integrates deeply with the SciRS2 scientific computing foundation and implements state-of-the-art optimization algorithms with high performance and numerical stability.
## Features
- Core Optimizers: SGD, Adam, AdamW, RMSprop with adaptive learning rates
- SciRS2 Integration: Built on top of SciRS2's scientific computing primitives
- Automatic Differentiation: Full integration with SciRS2's autograd system
- Linear Algebra: High-performance matrix operations via SciRS2-linalg
- Performance Monitoring: Built-in metrics and benchmarking via SciRS2-metrics
- Serialization: Complete Serde support for checkpointing and model persistence
- Optional Features: Parallelization with Rayon, SIMD acceleration
## Optimization Algorithms

### Supported Optimizers
- SGD (Stochastic Gradient Descent): Classic optimizer with momentum and weight decay
- Adam: Adaptive moment estimation with bias correction (see the update-rule sketch after this list)
- AdamW: Adam with decoupled weight decay for better generalization
- RMSprop: Root Mean Square Propagation for adaptive learning rates
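To make the bias-correction step concrete, here is a minimal, self-contained sketch of the textbook Adam update written with plain `ndarray`. It illustrates the math only and is not the OptiRS implementation; all names and hyperparameter values are illustrative:

```rust
use ndarray::Array1;

/// One step of the textbook Adam update rule (illustrative; not OptiRS internals).
/// `m` and `v` are the running first/second moment estimates; `t` is the 1-based step.
fn adam_step(
    params: &Array1<f64>,
    grads: &Array1<f64>,
    m: &mut Array1<f64>,
    v: &mut Array1<f64>,
    t: i32,
    lr: f64,
    beta1: f64,
    beta2: f64,
    epsilon: f64,
) -> Array1<f64> {
    // Update biased first and second moment estimates in place.
    m.zip_mut_with(grads, |mi, &gi| *mi = beta1 * *mi + (1.0 - beta1) * gi);
    v.zip_mut_with(grads, |vi, &gi| *vi = beta2 * *vi + (1.0 - beta2) * gi * gi);

    // Bias-corrected moment estimates.
    let m_hat = m.mapv(|mi| mi / (1.0 - beta1.powi(t)));
    let v_hat = v.mapv(|vi| vi / (1.0 - beta2.powi(t)));

    // params - lr * m_hat / (sqrt(v_hat) + epsilon), element-wise.
    params - &(m_hat * lr / (v_hat.mapv(f64::sqrt) + epsilon))
}
```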
### Advanced Features
- Learning rate scheduling and decay
- Gradient clipping and normalization (illustrated in the sketch after this list)
- Warm-up and cooldown strategies
- Numerical stability guarantees
- Memory-efficient implementations
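As an illustration of the gradient-clipping item above, clipping by global L2 norm can be written directly with `ndarray`. This is a conceptual sketch, independent of the actual OptiRS API:

```rust
use ndarray::Array1;

/// Rescale `grads` in place so that its global L2 norm does not exceed `max_norm`.
fn clip_grad_norm(grads: &mut Array1<f64>, max_norm: f64) {
    let norm = grads.mapv(|g| g * g).sum().sqrt();
    if norm > max_norm {
        let scale = max_norm / norm;
        grads.mapv_inplace(|g| g * scale);
    }
}
```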
## Dependencies

### Core Dependencies

- `ndarray`: N-dimensional arrays for tensor operations
- `serde`: Serialization and deserialization
- `thiserror`: Error handling
- `rand`: Random number generation
### SciRS2 Integration

- `scirs2-core`: Foundation scientific primitives
- `scirs2-optimize`: Base optimization interfaces
- `scirs2-linalg`: Matrix operations and linear algebra
- `scirs2-autograd`: Automatic differentiation
- `scirs2-neural`: Neural network optimization support
- `scirs2-metrics`: Performance monitoring and benchmarks
## Usage
Add this to your `Cargo.toml`:

```toml
[dependencies]
optirs-core = "0.1.0-rc.2"
scirs2-core = "0.1.0-rc.4"  # Required foundation
```
### Basic Example
```rust
use optirs_core::optimizers::Adam;
use scirs2_core::ndarray::Array1; // ✅ CORRECT - Use scirs2_core

// Create an Adam optimizer (hyperparameter values here are illustrative)
let mut optimizer = Adam::new(0.001)
    .beta1(0.9)
    .beta2(0.999)
    .epsilon(1e-8)
    .build();

// Your parameters and gradients
let mut params = Array1::from(vec![1.0, 2.0, 3.0]);
let grads = Array1::from(vec![0.1, 0.2, 0.3]);

// Update parameters (argument form shown is illustrative)
optimizer.step(&mut params, &grads);
```
### With SciRS2 Integration
```rust
use optirs_core::optimizers::Adam;
use scirs2_autograd::Variable;

// Create optimizer with SciRS2 autograd integration
let mut optimizer = Adam::new(0.001).with_autograd().build();

// Use with SciRS2 variables for automatic differentiation
let mut params = Variable::new(/* initial parameter values */);
optimizer.step_autograd(&mut params);
```
## Cargo Features

### Default Features

- `std`: Standard library support (enabled by default)

### Optional Features

- `parallel`: Enable Rayon-based parallelization
- `simd`: Enable SIMD acceleration with wide vectors
Enable features in your `Cargo.toml`:

```toml
[dependencies]
optirs-core = { version = "0.1.0-rc.2", features = ["parallel", "simd"] }
```
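Inside a crate, such flags typically gate alternative code paths at compile time. The sketch below is a hypothetical illustration of that pattern (the function is invented for the example and is not part of the OptiRS API):

```rust
/// Sum of squared gradients, with a Rayon-parallel path behind the
/// `parallel` feature flag. (Illustrative only; not the OptiRS code.)
#[cfg(feature = "parallel")]
pub fn sum_of_squares(values: &[f64]) -> f64 {
    use rayon::prelude::*;
    values.par_iter().map(|v| v * v).sum()
}

#[cfg(not(feature = "parallel"))]
pub fn sum_of_squares(values: &[f64]) -> f64 {
    values.iter().map(|v| v * v).sum()
}
```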
## Architecture
OptiRS-Core is designed with modularity and performance in mind:
```text
optirs-core/
├── src/
│   ├── lib.rs            # Public API and re-exports
│   ├── optimizers/       # Optimizer implementations
│   │   ├── mod.rs
│   │   ├── sgd.rs
│   │   ├── adam.rs
│   │   ├── adamw.rs
│   │   └── rmsprop.rs
│   ├── schedulers/       # Learning rate scheduling
│   ├── utils/            # Mathematical utilities
│   └── integration/      # SciRS2 integration layer
```
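Conceptually, `lib.rs` declares the modules from the tree above and re-exports the most commonly used types at the crate root. The following is a hypothetical sketch (inline modules are used so it stands alone; the real crate splits them into the files shown above):

```rust
// Conceptual shape of lib.rs: module declarations plus root-level re-exports.
pub mod optimizers {
    pub struct Sgd;
    pub struct Adam;
}
pub mod schedulers {}
pub mod utils {}
pub mod integration {}

// Downstream users can then write `use optirs_core::Adam;`.
pub use optimizers::{Adam, Sgd};
```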
## Performance
OptiRS-Core is optimized for high-performance machine learning workloads:
- Memory-efficient gradient updates (see the sketch after this list)
- Vectorized operations with ndarray
- Optional SIMD acceleration
- Zero-copy operations where possible
- Numerical stability guarantees
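For example, a plain SGD update can be expressed as a single in-place, vectorized `ndarray` operation with no temporary allocation. This is a sketch of the general pattern, not the crate's internal code:

```rust
use ndarray::Array1;

/// In-place SGD step: params -= lr * grads, without allocating an intermediate array.
fn sgd_step_inplace(params: &mut Array1<f64>, grads: &Array1<f64>, lr: f64) {
    params.scaled_add(-lr, grads);
}

fn main() {
    let mut params = Array1::from(vec![1.0, 2.0, 3.0]);
    let grads = Array1::from(vec![0.1, 0.2, 0.3]);
    sgd_step_inplace(&mut params, &grads, 0.01);
    println!("{params}");
}
```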
## Development Guidelines

### Coding Standards
To ensure consistency across the OptiRS-Core codebase, all contributors must follow these guidelines:
#### Variable Naming

- Always use `snake_case` for variable names (e.g., `gradient_norm`, `parameter_count`, `learning_rate`)
- Avoid camelCase or other naming conventions (e.g., `gradientNorm` ❌, `parameterCount` ❌)
- Use descriptive names that clearly indicate the variable's purpose
```rust
// ✅ Correct: snake_case
let gradient_norm = gradients.norm();
let parameter_count = model.parameter_count();
let learning_rate = optimizer.learning_rate();

// ❌ Incorrect: camelCase or other formats
let gradientNorm = gradients.norm();
let parameterCount = model.parameter_count();
let learningrate = optimizer.learning_rate();
```
#### Function and Method Names

- Use `snake_case` for function and method names
- Use descriptive verbs that indicate the function's action
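For example (the names are purely illustrative):

```rust
// ✅ Correct: snake_case with a descriptive verb
fn compute_gradient_norm(gradients: &[f64]) -> f64 {
    gradients.iter().map(|g| g * g).sum::<f64>().sqrt()
}

// ❌ Incorrect: camelCase
// fn computeGradientNorm(gradients: &[f64]) -> f64 { ... }
```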
#### Type Names

- Use `PascalCase` for struct, enum, and trait names
- Use `SCREAMING_SNAKE_CASE` for constants
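For example (again with purely illustrative names):

```rust
// ✅ Correct: SCREAMING_SNAKE_CASE for constants, PascalCase for types
const DEFAULT_LEARNING_RATE: f64 = 0.001;

struct GradientAccumulator {
    accumulated: Vec<f64>,
}

enum SchedulerKind {
    Constant,
    CosineAnnealing,
}

trait LearningRateSchedule {
    fn learning_rate(&self, step: usize) -> f64;
}
```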
#### General Guidelines

- Follow Rust's official naming conventions as specified in RFC 430
- Use `rustfmt` and `clippy` to maintain code formatting and catch common issues
- Write clear, self-documenting code with appropriate comments
#### Before Submitting Code

- Run `cargo fmt` to format your code
- Run `cargo clippy` to check for lint issues
- Ensure all tests pass with `cargo test`
- Verify compilation with `cargo check`
## Contributing
OptiRS follows the Cool Japan organization's development standards. See the main OptiRS repository for contribution guidelines.
## License
This project is licensed under either of:
- Apache License, Version 2.0
- MIT License
at your option.