
Module combinatorial

Differentiable Combinatorial Optimization

This module implements differentiable relaxations of combinatorial optimization problems, enabling gradients to flow through discrete decision layers in end-to-end learning pipelines.

§Algorithms

  • SparseMAP (Niculae et al., 2018): Sparse structured prediction via a QP over the marginal polytope. Yields sparse probability distributions over combinatorial structures, with exact gradients via the active-set theorem.

  • Perturbed Optimizers (Berthet et al., 2020): Sample-based differentiable argmax using additive Gaussian noise, enabling unbiased gradient estimates through any black-box combinatorial solver.

  • Differentiable Sorting (Blondel et al., 2020): Soft sorting and ranking via regularised projections onto the permutahedron, computed with isotonic regression.

  • Differentiable Top-K: Entropy-regularised LP relaxation of the hard top-k selector.
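The perturbed-optimizer idea can be illustrated without any of this module's types: averaging hard argmax decisions under additive Gaussian noise yields a smooth probability vector that varies differentiably with the scores. The `Lcg` generator and `perturbed_argmax` function below are illustrative names for a minimal, dependency-free sketch, not this module's `PerturbedOptimizer` API:

```rust
/// Tiny deterministic pseudo-random generator (linear congruential),
/// used here only to keep the sketch free of external crates.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Map the top 53 bits to (0, 1].
        ((self.0 >> 11) as f64 + 1.0) / (1u64 << 53) as f64
    }

    /// Standard normal sample via the Box-Muller transform.
    fn next_gaussian(&mut self) -> f64 {
        let (u1, u2) = (self.next_f64(), self.next_f64());
        (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
    }
}

/// Monte Carlo estimate of E[argmax(theta + sigma * Z)], Z ~ N(0, I),
/// returned as a probability vector over the indices of `theta`.
fn perturbed_argmax(theta: &[f64], sigma: f64, n_samples: usize, seed: u64) -> Vec<f64> {
    let mut rng = Lcg(seed);
    let mut probs = vec![0.0; theta.len()];
    for _ in 0..n_samples {
        // Hard argmax of the perturbed scores.
        let best = theta
            .iter()
            .map(|&t| t + sigma * rng.next_gaussian())
            .enumerate()
            .fold((0usize, f64::NEG_INFINITY), |acc, (i, v)| {
                if v > acc.1 { (i, v) } else { acc }
            })
            .0;
        probs[best] += 1.0 / n_samples as f64;
    }
    probs
}

fn main() {
    // Most mass lands on the largest score, but every index gets some.
    println!("{:?}", perturbed_argmax(&[1.0, 2.0, 0.5], 0.5, 10_000, 42));
}
```

Shrinking `sigma` sharpens the distribution toward the hard argmax; growing it flattens the distribution, which is the smoothing that makes gradients informative.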

§References

  • Niculae & Blondel (2017). “A Regularized Framework for Sparse and Structured Neural Attention.” NeurIPS.
  • Niculae et al. (2018). “SparseMAP: Differentiable Sparse Structured Inference.” ICML.
  • Berthet et al. (2020). “Learning with Differentiable Perturbed Optimizers.” NeurIPS.
  • Blondel et al. (2020). “Fast Differentiable Sorting and Ranking.” ICML.

Structs§

PerturbedOptimizer
Differentiable argmax via additive Gaussian perturbations.
PerturbedOptimizerConfig
Configuration for the Perturbed Optimizer.
SparsemapConfig
Configuration for the SparseMAP solver.
SparsemapResult
Result of SparseMAP.

Enums§

StructureType
Type of combinatorial structure defining the polytope for SparseMAP.
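For the simplest structure, the probability simplex, the SparseMAP QP reduces to the sparsemax Euclidean projection (Martins & Astudillo, 2016), which is why the resulting distributions are typically sparse. The `sparsemax` function below is a standalone illustration of that special case, not this module's `sparsemap` solver:

```rust
/// Euclidean projection of `z` onto the probability simplex (sparsemax):
/// sort descending, find the support via the threshold condition, then
/// shift by the threshold and clip at zero.
fn sparsemax(z: &[f64]) -> Vec<f64> {
    let mut sorted = z.to_vec();
    sorted.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let mut cumsum = 0.0;
    let mut tau = f64::NEG_INFINITY;
    for (i, &v) in sorted.iter().enumerate() {
        cumsum += v;
        let t = (cumsum - 1.0) / (i + 1) as f64;
        if v > t {
            // Largest support size for which this entry stays positive.
            tau = t;
        }
    }
    z.iter().map(|&v| (v - tau).max(0.0)).collect()
}

fn main() {
    // A large score gap yields an exactly sparse (here: one-hot) output.
    println!("{:?}", sparsemax(&[0.5, 1.5, -1.0])); // prints [0.0, 1.0, 0.0]
}
```

Unlike softmax, whose output is always dense, entries below the threshold are exactly zero, mirroring the sparse distributions over structures that SparseMAP produces on richer polytopes.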

Functions§

diff_topk
Differentiable top-k selector via entropy-regularised LP.
soft_rank
Compute soft ranks of elements in x.
soft_sort
Compute the soft sort of a vector via regularised isotonic regression.
sparsemap
Solve SparseMAP via projected gradient descent on the regularised QP.
sparsemap_gradient
Compute gradient of a loss through SparseMAP via the active-set theorem.
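As Blondel et al. (2020) show, soft sorting and ranking reduce to isotonic regression, which the pool-adjacent-violators (PAV) algorithm solves exactly in linear time after an initial sort. The function below sketches only that core subroutine for a non-increasing fit; it is not the signature of this module's soft_sort or soft_rank:

```rust
/// Pool Adjacent Violators: least-squares fit of a non-increasing
/// sequence to `x`, i.e. argmin ||y - x||^2 s.t. y[0] >= y[1] >= ...
fn isotonic_regression_decreasing(x: &[f64]) -> Vec<f64> {
    // Each block stores (sum, count) of the entries pooled into it.
    let mut blocks: Vec<(f64, usize)> = Vec::with_capacity(x.len());
    for &v in x {
        blocks.push((v, 1));
        // Merge adjacent blocks while their means violate monotonicity.
        while blocks.len() >= 2 {
            let (s2, c2) = blocks[blocks.len() - 1];
            let (s1, c1) = blocks[blocks.len() - 2];
            if s1 / c1 as f64 >= s2 / c2 as f64 {
                break; // non-increasing constraint satisfied
            }
            blocks.pop();
            *blocks.last_mut().unwrap() = (s1 + s2, c1 + c2);
        }
    }
    // Expand each block's mean back to the full length.
    blocks
        .iter()
        .flat_map(|&(s, c)| std::iter::repeat(s / c as f64).take(c))
        .collect()
}

fn main() {
    // The violating pair (1.0, 2.0) is pooled to its mean 1.5.
    println!("{:?}", isotonic_regression_decreasing(&[3.0, 1.0, 2.0])); // prints [3.0, 1.5, 1.5]
}
```

Because the output is piecewise constant over pooled blocks, its Jacobian is a simple block-averaging matrix, which is what makes gradients through soft sorting and ranking cheap to compute.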