Differentiable Combinatorial Optimization
This module implements differentiable relaxations of combinatorial optimization problems, enabling gradients to flow through discrete decision layers in end-to-end learning pipelines.
§Algorithms
- SparseMAP (Niculae et al., 2018): Sparse structured prediction via a QP over the marginal polytope. Yields sparse probability distributions over combinatorial structures with exact gradients via the active-set theorem (the unstructured special case, sparsemax, is sketched after this list).
- Perturbed Optimizers (Berthet et al., 2020): Sample-based differentiable argmax using additive Gaussian noise, giving unbiased Monte Carlo estimates of the smoothed objective's gradient through any black-box combinatorial solver (a sketch follows the references below).
- Differentiable Sorting (Blondel et al., 2020): Regularised isotonic regression for soft sorting and ranking (see the PAV sketch at the end of this page).
- Differentiable Top-K: Entropy-regularised LP relaxation of the hard top-k selector (one variant is sketched at the end of this page).
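To make the SparseMAP relaxation concrete: when the polytope is simply the probability simplex, the QP reduces to sparsemax, the Euclidean projection of the score vector onto the simplex. The sketch below implements the standard sorting-based projection; it illustrates the relaxation only and is not this module's sparsemap solver (the name sparsemax and the demo in main are assumptions for this example).

```rust
/// Euclidean projection of `z` onto the probability simplex (sparsemax):
/// argmin_p ||p - z||^2 subject to p >= 0 and sum(p) = 1. O(n log n).
fn sparsemax(z: &[f64]) -> Vec<f64> {
    // Sort scores in descending order.
    let mut sorted = z.to_vec();
    sorted.sort_by(|a, b| b.partial_cmp(a).unwrap());
    // Find the threshold tau with sum(max(z_i - tau, 0)) = 1: keep the
    // largest tau = (cumsum_j - 1) / j still below the j-th sorted score.
    let (mut cumsum, mut tau) = (0.0, 0.0);
    for (j, &zj) in sorted.iter().enumerate() {
        cumsum += zj;
        let t = (cumsum - 1.0) / (j + 1) as f64;
        if zj > t {
            tau = t;
        }
    }
    // Thresholding yields exact zeros, so the output is sparse.
    z.iter().map(|&zi| (zi - tau).max(0.0)).collect()
}

fn main() {
    // Low-scoring entries receive exactly zero probability mass.
    println!("{:?}", sparsemax(&[1.2, 0.8, -1.0])); // ~[0.7, 0.3, 0.0]
}
```

In the structured case, the simplex is replaced by the marginal polytope of the structure; the same mechanism then zeroes out all but a small active set of structures, which is what makes the active-set gradient cheap to compute.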
§References
- Niculae & Blondel (2017). “A regularized framework for sparse and structured neural attention.” NeurIPS.
- Berthet et al. (2020). “Learning with Differentiable Perturbed Optimizers.” NeurIPS.
- Blondel et al. (2020). “Fast Differentiable Sorting and Ranking.” ICML.
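To illustrate the perturbed-optimizer estimator cited above: averaging the outputs of a hard solver over Gaussian perturbations of its scores gives a smooth map into the convex hull of the solutions. The sketch below is self-contained and is not this module's PerturbedOptimizer API; hard_argmax, perturbed_argmax, and the rand / rand_distr dependencies are assumptions for the example.

```rust
use rand::prelude::*;
use rand_distr::StandardNormal;

/// One-hot argmax over raw scores: the "hard" combinatorial solver,
/// here the simplest possible one (any black-box solver would do).
fn hard_argmax(theta: &[f64]) -> Vec<f64> {
    let best = theta
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap();
    let mut y = vec![0.0; theta.len()];
    y[best] = 1.0;
    y
}

/// Monte Carlo estimate of E[argmax_y <y, theta + sigma * Z>], Z ~ N(0, I):
/// the average of hard solutions under noise is a smooth function of the
/// scores, unlike the piecewise-constant hard argmax itself.
fn perturbed_argmax(theta: &[f64], sigma: f64, n_samples: usize) -> Vec<f64> {
    let mut rng = rand::thread_rng();
    let mut mean = vec![0.0; theta.len()];
    for _ in 0..n_samples {
        let perturbed: Vec<f64> = theta
            .iter()
            .map(|&t| t + sigma * rng.sample::<f64, _>(StandardNormal))
            .collect();
        for (m, y) in mean.iter_mut().zip(hard_argmax(&perturbed)) {
            *m += y / n_samples as f64;
        }
    }
    mean
}

fn main() {
    // Close scores yield a soft distribution; well-separated scores
    // concentrate mass on the winner.
    println!("{:?}", perturbed_argmax(&[1.0, 0.9, -2.0], 0.5, 10_000));
}
```

Berthet et al. show that the same sampling scheme also yields unbiased estimates of the smoothed objective's gradient, which is what lets any black-box solver act as a differentiable layer.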
Structs§
- PerturbedOptimizer - Differentiable argmax via additive Gaussian perturbations.
- PerturbedOptimizerConfig - Configuration for the PerturbedOptimizer.
- SparsemapConfig - Configuration for the SparseMAP solver.
- SparsemapResult - Result of SparseMAP.
Enums§
- StructureType - Type of combinatorial structure defining the polytope for SparseMAP.
Functions§
- diff_topk - Differentiable top-k selector via an entropy-regularised LP.
- soft_rank - Compute soft ranks of elements in x.
- soft_sort - Compute the soft sort of a vector via regularised isotonic regression.
- sparsemap - Solve SparseMAP via projected gradient descent on the regularised QP.
- sparsemap_gradient - Compute the gradient of a loss through SparseMAP via the active-set theorem.
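To ground diff_topk: one common form of the entropy-regularised top-k relaxation uses a binary-entropy regulariser, whose optimality conditions give gates y_i = sigmoid((s_i - nu) / eps), with the threshold nu chosen so the gates sum to k. Because the gate sum is monotone in nu, bisection finds it. The sketch below implements that variant; soft_topk and its signature are assumptions for this example, not this module's diff_topk.

```rust
/// Logistic sigmoid.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// Soft top-k gates for scores `s`: y_i = sigmoid((s_i - nu) / eps), with
/// nu chosen by bisection so that sum(y) == k. Each gate is a smooth
/// function of the scores; as eps -> 0 the gates approach the hard
/// top-k indicator vector.
fn soft_topk(s: &[f64], k: usize, eps: f64) -> Vec<f64> {
    assert!(k > 0 && k < s.len() && eps > 0.0);
    // Bracket nu: the gate sum is ~n at `lo`, ~0 at `hi`, and strictly
    // decreasing in between, so bisection converges.
    let (mut lo, mut hi) = s.iter().fold(
        (f64::INFINITY, f64::NEG_INFINITY),
        |(lo, hi), &v| (lo.min(v), hi.max(v)),
    );
    lo -= 50.0 * eps;
    hi += 50.0 * eps;
    for _ in 0..100 {
        let nu = 0.5 * (lo + hi);
        let total: f64 = s.iter().map(|&si| sigmoid((si - nu) / eps)).sum();
        if total > k as f64 { lo = nu } else { hi = nu }
    }
    let nu = 0.5 * (lo + hi);
    s.iter().map(|&si| sigmoid((si - nu) / eps)).collect()
}

fn main() {
    // With small eps, the two largest scores receive gates near 1.
    println!("{:?}", soft_topk(&[3.0, -1.0, 2.0, 0.5], 2, 0.1));
}
```

Likewise, soft_sort and soft_rank rest on isotonic regression: Blondel et al. (2020) reduce both operators to an isotonic projection, solvable by the Pool Adjacent Violators (PAV) algorithm in linear time after an initial sort. A minimal PAV sketch for the non-decreasing L2 fit follows (isotonic_l2 is an assumed name, not this module's API):

```rust
/// Best non-decreasing L2 fit to `y` via Pool Adjacent Violators: scan
/// left to right, pooling adjacent blocks whenever their means violate
/// the ordering, then expand the block means back to per-element values.
/// Example: isotonic_l2(&[3.0, 1.0, 2.0, 4.0]) == [2.0, 2.0, 2.0, 4.0].
fn isotonic_l2(y: &[f64]) -> Vec<f64> {
    // Each block stores (mean, number of pooled points).
    let mut blocks: Vec<(f64, usize)> = Vec::with_capacity(y.len());
    for &v in y {
        blocks.push((v, 1));
        while blocks.len() > 1 {
            let (m2, w2) = blocks[blocks.len() - 1];
            let (m1, w1) = blocks[blocks.len() - 2];
            if m1 <= m2 {
                break; // ordering satisfied; stop merging
            }
            blocks.truncate(blocks.len() - 2);
            let w = w1 + w2;
            blocks.push(((m1 * w1 as f64 + m2 * w2 as f64) / w as f64, w));
        }
    }
    blocks
        .iter()
        .flat_map(|&(m, w)| std::iter::repeat(m).take(w))
        .collect()
}
```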