optirs_learned/lib.rs
//! # OptiRS Learned - Learned Optimizers and Meta-Learning
//!
//! **Version:** 0.1.0
//! **Status:** Research Phase (Early Implementation)
//!
//! ⚠️ **Warning:** This crate is in an early research phase. APIs may change significantly
//! in future releases. Not recommended for production use.
//!
//! `optirs-learned` provides learned optimizers, meta-learning algorithms, and adaptive
//! optimization systems built on [SciRS2](https://github.com/cool-japan/scirs).
//!
//! ## Dependencies
//!
//! - `scirs2-core` 0.1.1 - Required foundation
//! - `optirs-core` 0.1.0 - Core optimizers
//!
//! ## Implementation Status (v0.1.0)
//!
//! - 🚧 Transformer-based optimizers (in development)
//! - 🚧 LSTM optimizers (planned)
//! - 🚧 Meta-learning framework (in development)
//! - 📝 Research prototypes only
//! - 📝 No production-ready implementations yet
//!
//! This crate implements cutting-edge research in learned optimization.
//!
//! ## Features
//!
//! ### Transformer-Based Optimizers
//! - **Self-Attention** - Learn optimization patterns across parameters
//! - **Cross-Attention** - Share optimization knowledge between layers
//! - **Positional Encoding** - Parameter-aware optimization
//! - **Multi-Head** - Diverse optimization strategies
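//!
//! The sketch below is illustrative only (none of these helpers are part of the
//! crate's API yet): it shows the scaled dot-product self-attention step that an
//! attention-based optimizer applies to per-parameter gradient features.
//!
//! ```rust,ignore
//! use scirs2_core::ndarray::{Array2, Axis};
//!
//! /// Attend over `features` (one row per parameter group, `d` columns each).
//! /// In a trained optimizer the queries/keys/values come from learned
//! /// projections; here the raw features are reused for brevity.
//! fn self_attention(features: &Array2<f32>) -> Array2<f32> {
//!     let d = features.ncols() as f32;
//!     // Pairwise similarity between parameter groups, scaled by sqrt(d).
//!     let mut scores = features.dot(&features.t()) / d.sqrt();
//!     // Row-wise softmax turns similarities into attention weights.
//!     for mut row in scores.axis_iter_mut(Axis(0)) {
//!         let max = row.fold(f32::NEG_INFINITY, |a, &b| a.max(b));
//!         row.mapv_inplace(|x| (x - max).exp());
//!         let sum = row.sum();
//!         row.mapv_inplace(|x| x / sum);
//!     }
//!     // Weighted mixing shares optimization signal across parameters.
//!     scores.dot(features)
//! }
//! ```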
//!
//! ### LSTM Optimizers
//! - **Recurrent State** - Maintain long-term optimization memory
//! - **Gating Mechanisms** - Adaptive learning rate control
//! - **Sequence Modeling** - Learn optimization trajectories
//! - **Stateful Updates** - Context-aware parameter updates
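//!
//! A minimal sketch of driving a stateful learned optimizer (the constructor
//! arguments and `step` signature below are assumptions, not the final API;
//! `compute_gradients` is a problem-specific placeholder):
//!
//! ```rust,ignore
//! use optirs_learned::LSTMOptimizer;
//! use scirs2_core::ndarray::Array1;
//!
//! // Hypothetical constructor: the hidden size controls the recurrent state
//! // that carries optimization memory across steps.
//! let mut optimizer = LSTMOptimizer::new(/* hidden_dim */ 64)?;
//!
//! let mut params = Array1::from_elem(1000, 1.0_f32);
//! for _ in 0..100 {
//!     let grads = compute_gradients(&params); // placeholder
//!     // The recurrent state updates internally, so each call sees the
//!     // optimization trajectory so far, not just the current gradient.
//!     params = optimizer.step(&params, &grads)?;
//! }
//! ```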
//!
//! ### Meta-Learning
//! - **MAML** - Model-Agnostic Meta-Learning
//! - **Reptile** - First-order meta-learning
//! - **Meta-SGD** - Learn learning rates and update rules
//! - **Task Adaptation** - Rapid fine-tuning on new tasks
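//!
//! The Reptile outer update is simple enough to show directly. The sketch below
//! uses plain array arithmetic and an assumed `inner_loop_sgd` helper rather than
//! any API from this crate:
//!
//! ```rust,ignore
//! use scirs2_core::ndarray::Array1;
//!
//! // Reptile: after a few inner SGD steps on a sampled task, nudge the
//! // meta-parameters towards the task-adapted parameters.
//! let mut meta_params = Array1::from_elem(1000, 0.0_f32);
//! let meta_step = 0.1_f32;
//!
//! for task in tasks {
//!     // Assumed helper: runs k SGD steps on `task`, starting from meta_params.
//!     let adapted = inner_loop_sgd(&task, meta_params.clone(), /* steps */ 5);
//!     // theta <- theta + eps * (theta_task - theta)
//!     meta_params = &meta_params + &((adapted - &meta_params) * meta_step);
//! }
//! ```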
//!
//! ### Few-Shot Optimization
//! - **Fast Adaptation** - Few-step convergence on new problems
//! - **Transfer Learning** - Knowledge transfer across domains
//! - **Online Learning** - Continuous adaptation during training
//! - **Hypernetworks** - Generate optimizer parameters on-the-fly
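//!
//! Few-shot optimization amounts to a short adaptation loop on the new task,
//! starting from meta-learned state. Everything named below (`meta_learned_init`,
//! `new_task.loss_and_grad`, the meta-trained `optimizer`) is a stand-in, not the
//! current API:
//!
//! ```rust,ignore
//! use scirs2_core::ndarray::Array1;
//!
//! // Start from meta-learned initial parameters and adapt for only a few steps.
//! let mut params: Array1<f32> = meta_learned_init.clone();
//! for step in 0..10 {
//!     let (loss, grads) = new_task.loss_and_grad(&params); // stand-in task API
//!     params = optimizer.step(&params, &grads)?;           // meta-trained optimizer
//!     println!("step {step}: loss = {loss}");
//! }
//! ```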
//!
//! ## Example Usage (Future)
//!
//! ```rust,ignore
//! use optirs_learned::{TransformerOptimizer, MetaLearningConfig};
//! use scirs2_core::ndarray::Array1;
//!
//! // Create transformer-based optimizer
//! let config = MetaLearningConfig {
//!     num_heads: 8,
//!     hidden_dim: 256,
//!     num_layers: 4,
//! };
//!
//! let mut optimizer = TransformerOptimizer::new(config)?;
//!
//! // Meta-train on multiple tasks
//! for task in tasks {
//!     optimizer.meta_train(&task)?;
//! }
//!
//! // Rapid adaptation to new task
//! let params = Array1::from_elem(1000, 1.0);
//! let grads = Array1::from_elem(1000, 0.01);
//! let updated = optimizer.step(&params, &grads)?; // Fast convergence
//! ```
//!
//! ## Research Highlights
//!
//! Results reported in the learned-optimizer literature (see References below):
//!
//! - **Outperforms Hand-Designed Optimizers** - Better than Adam on many tasks
//! - **Generalizes Across Domains** - Vision, NLP, and RL workloads all benefit
//! - **Few-Shot Learning** - Converges in 10-100 steps versus thousands
//! - **Adaptive Schedules** - Learns optimal learning rate schedules
//!
//! ## Architecture
//!
//! Built exclusively on SciRS2:
//! - **ML Pipeline**: `scirs2_core::ml_pipeline::MLPipeline`
//! - **Neural**: `scirs2_core::neural_architecture_search`
//! - **Memory**: `scirs2_core::memory_efficient::LazyArray`
//! - **Metrics**: `scirs2_core::ml_pipeline::PipelineMetrics`
//!
//! ## References
//!
//! - Learning to Learn by Gradient Descent by Gradient Descent (Andrychowicz et al., 2016)
//! - Learned Optimizers that Scale and Generalize (Wichrowska et al., 2017)
//! - VeLO: Training Versatile Learned Optimizers by Scaling Up (Metz et al., 2022)
//!
//! ## Contributing
//!
//! Research contributions welcome! Follow SciRS2 integration guidelines.

pub mod adaptive;
pub mod common;
pub mod continual_learning;
pub mod cross_domain_transfer;
pub mod domain_optimizers;
pub mod episodic_memory_impl;
pub mod error;
pub mod few_shot;
pub mod few_shot_impl;
pub mod lstm;
pub mod meta_learning;
pub mod online_maml;
pub mod transformer;
pub mod transformer_based_optimizer;

pub use common::{
    LearnedOptimizerConfig, MetaOptimizationStrategy, NeuralOptimizerMetrics, NeuralOptimizerType,
    OptimizerState, StateMetadata, TaskContext, TaskPerformance,
};
pub use continual_learning::{ElasticWeightConsolidation, NetworkColumn, ProgressiveNetworks};
pub use error::{OptimError, Result};
pub use lstm::LSTMOptimizer;
pub use transformer::TransformerOptimizer;
pub use transformer_based_optimizer::TransformerOptimizer as TransformerBasedOptimizer;