Subspace Embedding Optimization
Reduces the dimensionality of a large-scale optimization problem by projecting the search space into a random low-dimensional subspace and optimizing there.
§Algorithm sketch
For each restart:
- Sample a random projection matrix P of shape (subspace_dim × full_dim).
- Optimize the surrogate g(y) = f(P^T y) in the subspace via coordinate descent.
- Recover the full-dimensional point x = P^T y.
- Clip to bounds if provided.
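The restart loop above can be sketched as follows. This is a minimal illustration under assumptions, not the crate's actual implementation: a tiny LCG stands in for a real RNG, and the coordinate descent uses a fixed step grid rather than any tuned line search.

```rust
// Hypothetical sketch of the restart loop: sample P, descend on
// g(y) = f(P^T y), lift back to full dimension, keep the best point.

fn lcg(state: &mut u64) -> f64 {
    // Toy linear congruential generator, roughly uniform in [-0.5, 0.5).
    *state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    (*state >> 11) as f64 / (1u64 << 53) as f64 - 0.5
}

/// Lift a subspace point y back to full dimension: x = P^T y.
fn lift(p: &[Vec<f64>], y: &[f64], full_dim: usize) -> Vec<f64> {
    let mut x = vec![0.0; full_dim];
    for (row, &yi) in p.iter().zip(y) {
        for (xj, &pij) in x.iter_mut().zip(row) {
            *xj += pij * yi;
        }
    }
    x
}

fn optimize<F: Fn(&[f64]) -> f64>(
    f: F, full_dim: usize, sub_dim: usize, restarts: usize,
) -> (Vec<f64>, f64) {
    let mut seed = 42u64;
    let mut best = (vec![0.0; full_dim], f64::INFINITY);
    for _ in 0..restarts {
        // Sample a random projection P of shape (sub_dim x full_dim).
        let p: Vec<Vec<f64>> = (0..sub_dim)
            .map(|_| (0..full_dim).map(|_| lcg(&mut seed)).collect())
            .collect();
        // Coordinate descent on the surrogate g(y) = f(P^T y),
        // trying a small grid of steps per coordinate.
        let mut y = vec![0.0; sub_dim];
        for _pass in 0..20 {
            for i in 0..sub_dim {
                let mut best_v = y[i];
                let mut best_g = f(&lift(&p, &y, full_dim));
                for step in [-0.5, -0.1, 0.1, 0.5] {
                    y[i] += step;
                    let g = f(&lift(&p, &y, full_dim));
                    if g < best_g {
                        best_g = g;
                        best_v = y[i];
                    }
                    y[i] = best_v; // keep the best value seen so far
                }
            }
        }
        // Recover the full-dimensional point and track the best restart.
        let x = lift(&p, &y, full_dim);
        let fx = f(&x);
        if fx < best.1 {
            best = (x, fx);
        }
    }
    best
}
```

Bound clipping is omitted here; in the real optimizer it would be applied to x after lifting.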
The method is particularly effective when the objective has low intrinsic dimensionality (e.g., the effective subspace spanned by its gradients is low-dimensional).
§References
- Wang, Z. et al. (2016). “Bayesian Optimization in a Billion Dimensions via Random Embeddings”. JAIR 55.
- Choromanski, K. et al. (2022). “Geometry-Aware Structured Transformations”.
Structs§
- SubspaceConfig - Configuration for the subspace embedding optimizer.
- SubspaceEmbeddingOptimizer - Optimizer that works in a random low-dimensional subspace.
Enums§
- SketchType - Type of random projection matrix to use for subspace embedding.
Functions§
- coord_descent_subspace - Coordinate descent in the subspace.
- count_sketch - Generate a count-sketch matrix of shape (sub_dim × full_dim).
- gaussian_sketch - Generate a Gaussian sketch matrix of shape (sub_dim × full_dim).
- rademacher_sketch - Generate a Rademacher sketch matrix of shape (sub_dim × full_dim).
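The three sketch generators listed above admit a compact illustration. The signatures and scaling below are assumptions, not this crate's API: a toy LCG replaces a proper RNG, and entries are scaled by 1/sqrt(sub_dim) so the sketches are approximately norm-preserving.

```rust
// Hedged sketches of the three projection types; the crate's actual
// signatures and scaling conventions may differ.

fn lcg_u(state: &mut u64) -> f64 {
    // Toy LCG, roughly uniform in [0, 1).
    *state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    (*state >> 11) as f64 / (1u64 << 53) as f64
}

/// Gaussian sketch: i.i.d. N(0, 1/sub_dim) entries via the Box-Muller transform.
fn gaussian_sketch(sub_dim: usize, full_dim: usize, seed: &mut u64) -> Vec<Vec<f64>> {
    let scale = 1.0 / (sub_dim as f64).sqrt();
    (0..sub_dim)
        .map(|_| {
            (0..full_dim)
                .map(|_| {
                    let u1 = lcg_u(seed).max(1e-12); // avoid ln(0)
                    let u2 = lcg_u(seed);
                    scale * (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
                })
                .collect()
        })
        .collect()
}

/// Rademacher sketch: entries ±1/sqrt(sub_dim) with equal probability.
fn rademacher_sketch(sub_dim: usize, full_dim: usize, seed: &mut u64) -> Vec<Vec<f64>> {
    let scale = 1.0 / (sub_dim as f64).sqrt();
    (0..sub_dim)
        .map(|_| {
            (0..full_dim)
                .map(|_| if lcg_u(seed) < 0.5 { scale } else { -scale })
                .collect()
        })
        .collect()
}

/// Count-sketch: exactly one nonzero (±1) per column, in a random row.
fn count_sketch(sub_dim: usize, full_dim: usize, seed: &mut u64) -> Vec<Vec<f64>> {
    let mut p = vec![vec![0.0; full_dim]; sub_dim];
    for j in 0..full_dim {
        let row = (lcg_u(seed) * sub_dim as f64) as usize % sub_dim;
        p[row][j] = if lcg_u(seed) < 0.5 { 1.0 } else { -1.0 };
    }
    p
}
```

The count-sketch variant is the cheapest to apply (one nonzero per column), while the Gaussian sketch gives the classical random-embedding guarantees used in the Wang et al. reference above.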