Adaptive Transformer Enhancement for Optimization
This module implements transformer-based neural architectures that adaptively enhance optimization algorithms. The transformers learn to attend to different aspects of the optimization landscape and adapt their strategies accordingly.
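The structs listed below compose a learned optimizer: a problem encoder feeds gradient and history features into a transformer, whose output heads (step size predictor, learning rate adapter, gradient scaler) produce the next update. A minimal sketch of that loop is shown here; the constructor and method names (`AdaptiveTransformerOptimizer::new`, `suggest_step`, the `direction` field) are assumptions for illustration only, not the crate's confirmed signatures.

```rust
// Sketch only: the method names and signatures used here are assumed for
// illustration; consult the struct documentation below for the actual API.

// A toy quadratic objective and its gradient.
fn objective(x: &[f64]) -> f64 {
    x.iter().map(|xi| xi * xi).sum()
}
fn gradient(x: &[f64]) -> Vec<f64> {
    x.iter().map(|xi| 2.0 * xi).collect()
}

fn main() {
    // Assumed constructor: problem dimension and number of transformer layers.
    let mut optimizer = AdaptiveTransformerOptimizer::new(2, /* layers */ 4);
    let mut x = vec![1.5_f64, -0.5];

    for _ in 0..100 {
        let g = gradient(&x);
        // Assumed method: the transformer attends to the gradient history
        // and returns an OptimizationStep carrying a scaled update direction.
        let step = optimizer.suggest_step(&x, &g, objective(&x));
        for (xi, di) in x.iter_mut().zip(step.direction.iter()) {
            *xi += di;
        }
    }
    println!("final point: {:?}", x);
}
```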
Structs
- AdaptiveComponents - Adaptive components for transformer optimization
- AdaptiveTransformerOptimizer - Adaptive Transformer-Enhanced Optimizer
- AttentionAdaptation - Attention adaptation mechanism
- ConvergenceDetector - Convergence detector
- FeedForwardNetwork - Feed-forward network
- GradientScaler - Gradient scaling mechanism
- GradientStatistics - Gradient statistics
- LayerNormalization - Layer normalization
- LearningRateAdapter - Learning rate adapter
- MultiHeadAttention - Multi-head attention mechanism
- OptimizationHistory - Optimization history for transformer context
- OptimizationStep - Optimization step output from transformer
- OptimizationTrajectory - Optimization trajectory for training
- OptimizationTransformer - Transformer architecture for optimization
- StepSizePredictor - Step size predictor
- TrajectoryStep - Single step in optimization trajectory
- TransformerBlock - Single transformer block
- TransformerMetrics - Performance metrics for transformer
- TransformerProblemEncoder - Problem encoder for transformers
Functions
- placeholder
- transformer_optimize - Convenience function for transformer-based optimization
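
As a quick alternative to driving the optimizer manually, the convenience function might be called roughly as follows. The argument list shown (objective closure, initial point, iteration budget) is an assumption for illustration rather than the documented signature.

```rust
// Sketch only: the argument list of `transformer_optimize` is assumed here;
// check the function's own documentation for the real signature.
fn main() {
    let objective = |x: &[f64]| x.iter().map(|xi| (xi - 1.0).powi(2)).sum::<f64>();
    let x0 = vec![0.0, 0.0];

    // Assumed: returns the best point found within the given iteration budget.
    let result = transformer_optimize(objective, &x0, 200);
    println!("solution: {:?}", result);
}
```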