Differentiable optimization layers (OptNet-style LP/QP).
This module implements differentiable quadratic and linear programming layers that can be embedded in gradient-based training pipelines. The backward pass uses implicit differentiation of the KKT conditions to compute gradients of the optimal solution w.r.t. all problem parameters.
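The KKT approach can be illustrated on the simplest case, an equality-constrained QP: min ½xᵀQx + qᵀx s.t. Ax = b. Its KKT conditions form the linear system [Q Aᵀ; A 0][x; λ] = [−q; b], and differentiating that system implicitly yields gradients of x* without unrolling a solver. The sketch below (illustrative only; all names are hypothetical, not this module's API) solves the bordered system forward and runs one adjoint solve backward, exploiting the symmetry of the KKT matrix:

```rust
// Implicit differentiation of the KKT conditions for an equality-constrained
// QP:  min ½ xᵀQx + qᵀx  s.t.  Ax = b  (2 variables, 1 constraint).
// Forward:  solve [Q Aᵀ; A 0][x; λ] = [-q; b].
// Backward: given g = ∂L/∂x*, solve K w = [g; 0] and read off ∂L/∂q = -w[0..n]
// (valid because K is symmetric). Names here are illustrative, not the API.

/// Solve a small dense linear system by Gaussian elimination with partial pivoting.
fn solve(mut a: Vec<Vec<f64>>, mut b: Vec<f64>) -> Vec<f64> {
    let n = b.len();
    for col in 0..n {
        // Pick the largest pivot in this column for numerical stability.
        let piv = (col..n)
            .max_by(|&i, &j| a[i][col].abs().partial_cmp(&a[j][col].abs()).unwrap())
            .unwrap();
        a.swap(col, piv);
        b.swap(col, piv);
        for row in col + 1..n {
            let f = a[row][col] / a[col][col];
            for k in col..n {
                a[row][k] -= f * a[col][k];
            }
            b[row] -= f * b[col];
        }
    }
    let mut x = vec![0.0; n];
    for row in (0..n).rev() {
        let s: f64 = (row + 1..n).map(|k| a[row][k] * x[k]).sum();
        x[row] = (b[row] - s) / a[row][row];
    }
    x
}

/// Assemble the bordered KKT matrix [Q Aᵀ; A 0].
fn kkt(q: &[[f64; 2]; 2], a: &[f64; 2]) -> Vec<Vec<f64>> {
    vec![
        vec![q[0][0], q[0][1], a[0]],
        vec![q[1][0], q[1][1], a[1]],
        vec![a[0], a[1], 0.0],
    ]
}

/// Forward pass: primal optimum x* of the QP.
fn qp_forward(q: &[[f64; 2]; 2], lin: &[f64; 2], a: &[f64; 2], b: f64) -> [f64; 2] {
    let z = solve(kkt(q, a), vec![-lin[0], -lin[1], b]);
    [z[0], z[1]]
}

/// Backward pass: gradient of L w.r.t. the linear term, given g = ∂L/∂x*.
fn qp_backward_lin(q: &[[f64; 2]; 2], a: &[f64; 2], g: &[f64; 2]) -> [f64; 2] {
    let w = solve(kkt(q, a), vec![g[0], g[1], 0.0]);
    [-w[0], -w[1]]
}

fn main() {
    let (q, lin, a, b) = ([[2.0, 0.5], [0.5, 1.0]], [1.0, -1.0], [1.0, 1.0], 1.0);
    let x = qp_forward(&q, &lin, &a, b);
    // Loss L(x) = x[0], so the upstream gradient is g = [1, 0].
    let grad = qp_backward_lin(&q, &a, &[1.0, 0.0]);
    // Finite-difference check of the implicit gradient.
    let eps = 1e-6;
    let xp = qp_forward(&q, &[lin[0] + eps, lin[1]], &a, b);
    assert!((grad[0] - (xp[0] - x[0]) / eps).abs() < 1e-4);
    println!("x* = {:?}, dL/dq = {:?}", x, grad);
}
```

The same adjoint-solve pattern generalizes to inequality constraints (via the active set) and to gradients w.r.t. Q, A, and b, which is what the submodules below implement at scale.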
Submodules§
- kkt_sensitivity: KKT bordered matrix assembly and adjoint-method sensitivity.
- qp_layer: ADMM-based QP layer with warm-start and active-set backward.
- lp_layer: Entropic LP layer and basis sensitivity analysis.
- perturbed_optimizer: Black-box differentiable combinatorial optimization.
- implicit_diff: Core implicit differentiation engine.
- combinatorial: SparseMAP, soft sort/rank (legacy entry points).
- diff_qp: Interior-point differentiable QP.
- diff_lp: Differentiable LP (active-set based).
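The perturbed-optimizer approach listed above follows Berthet et al. (2020): a discrete argmax is smoothed by averaging one-hot solutions under random perturbations of the scores, y_ε(θ) = E_z[onehot(argmax(θ + εz))]. A minimal self-contained sketch of this principle (illustrative names only, with a hand-rolled PRNG since std has none):

```rust
// Perturbed argmax: a smooth, differentiable relaxation of onehot(argmax(θ)),
// estimated by Monte Carlo averaging over Gaussian perturbations.
// This illustrates the idea only; it is not this module's actual API.

/// Minimal xorshift64 PRNG plus Box-Muller Gaussian sampling.
struct Rng(u64);
impl Rng {
    fn next_u64(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
    fn uniform(&mut self) -> f64 {
        (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
    }
    fn gaussian(&mut self) -> f64 {
        let (u1, u2) = (self.uniform().max(1e-12), self.uniform());
        (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
    }
}

/// Monte-Carlo estimate of y_ε(θ) = E_z[onehot(argmax(θ + εz))].
fn perturbed_argmax(theta: &[f64], eps: f64, samples: usize, rng: &mut Rng) -> Vec<f64> {
    let mut avg = vec![0.0; theta.len()];
    for _ in 0..samples {
        let noisy: Vec<f64> = theta.iter().map(|t| t + eps * rng.gaussian()).collect();
        let best = noisy
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap();
        avg[best] += 1.0 / samples as f64;
    }
    avg
}

fn main() {
    let mut rng = Rng(42);
    let y = perturbed_argmax(&[1.0, 2.0, 0.5], 0.5, 10_000, &mut rng);
    // Entries sum to 1; the largest score receives most of the mass.
    println!("{:?}", y);
}
```

Because the averaged output varies smoothly with θ, gradients can be estimated from the same samples, which is what makes black-box combinatorial solvers trainable end to end.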
References§
- Amos & Kolter (2017). “OptNet: Differentiable Optimization as a Layer in Neural Networks.” ICML.
- Berthet et al. (2020). “Learning with Differentiable Perturbed Optimizers.” NeurIPS.
- Niculae & Blondel (2017). “A regularized framework for sparse and structured neural attention.” NeurIPS.
Re-exports§
pub use combinatorial::diff_topk;
pub use combinatorial::soft_rank;
pub use combinatorial::soft_sort;
pub use combinatorial::sparsemap;
pub use combinatorial::sparsemap_gradient;
pub use combinatorial::PerturbedOptimizer as PerturbedOptimizerLegacy;
pub use combinatorial::PerturbedOptimizerConfig as PerturbedOptimizerLegacyConfig;
pub use combinatorial::SparsemapConfig;
pub use combinatorial::SparsemapResult;
pub use combinatorial::StructureType;
pub use diff_lp::DifferentiableLP;
pub use diff_qp::DifferentiableQP;
pub use kkt_sensitivity::kkt_matrix;
pub use kkt_sensitivity::kkt_sensitivity;
pub use kkt_sensitivity::mat_vec;
pub use kkt_sensitivity::outer_product;
pub use kkt_sensitivity::parametric_nlp_adjoint;
pub use kkt_sensitivity::regularize_q;
pub use kkt_sensitivity::sym_outer_product;
pub use kkt_sensitivity::KktGrad;
pub use kkt_sensitivity::KktSystem;
pub use kkt_sensitivity::NlpGrad;
pub use layer::OptNetLayer;
pub use layer::StandardOptNetLayer;
pub use lp_layer::lp_gradient;
pub use lp_layer::lp_perturbed;
pub use lp_layer::LpLayer;
pub use lp_layer::LpLayerConfig;
pub use lp_layer::LpSensitivity;
pub use perturbed_optimizer::PerturbedOptimizer;
pub use perturbed_optimizer::PerturbedOptimizerConfig;
pub use perturbed_optimizer::SparseMap;
pub use perturbed_optimizer::SparseMapConfig;
pub use qp_layer::QpLayer;
pub use qp_layer::QpLayerConfig;
pub use types::BackwardMode;
pub use types::DiffLPConfig;
pub use types::DiffLPResult;
pub use types::DiffOptGrad;
pub use types::DiffOptParams;
pub use types::DiffOptResult;
pub use types::DiffOptStatus;
pub use types::DiffQPConfig;
pub use types::DiffQPResult;
pub use types::ImplicitGradient;
pub use types::KKTSystem;
Modules§
- combinatorial: Differentiable combinatorial optimization.
- diff_lp: Differentiable linear programming.
- diff_qp: Differentiable quadratic programming (OptNet-style).
- implicit_diff: Core implicit differentiation engine for optimization layers.
- kkt_sensitivity: KKT sensitivity analysis for differentiable optimization layers.
- layer: OptNet layer abstraction for embedding differentiable optimization in neural network pipelines.
- lp_layer: Differentiable LP layer via entropic regularization and basis sensitivity.
- perturbed_optimizer: Differentiable combinatorial optimization via perturbed optimizers.
- qp_layer: ADMM-based differentiable QP layer with warm-start and active-set backward.
- types: Types for differentiable optimization (OptNet-style LP/QP layers).
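The entropic regularization mentioned for lp_layer can be illustrated in its simplest setting: over the probability simplex, min_x ⟨c, x⟩ − τH(x) has the closed form x* = softmax(−c/τ), which is smooth in c and therefore admits exact gradients. A minimal sketch of this principle (illustrative names; not this module's actual API):

```rust
// Entropic regularization of an LP over the simplex: the regularized optimum
// is softmax(-c/τ), a differentiable surrogate for the LP vertex solution.
// Illustrative sketch only; names do not correspond to this module's API.

/// Numerically stable softmax of -c / tau (entropic-LP solution on the simplex).
fn entropic_lp_simplex(c: &[f64], tau: f64) -> Vec<f64> {
    let scaled: Vec<f64> = c.iter().map(|ci| -ci / tau).collect();
    let m = scaled.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scaled.iter().map(|s| (s - m).exp()).collect();
    let z: f64 = exps.iter().sum();
    exps.iter().map(|e| e / z).collect()
}

/// Jacobian entry ∂x*_i/∂c_j = (x_i x_j - δ_ij x_i) / τ (scaled softmax Jacobian).
fn entropic_lp_grad(x: &[f64], i: usize, j: usize, tau: f64) -> f64 {
    let d = if i == j { 1.0 } else { 0.0 };
    (x[i] * x[j] - d * x[i]) / tau
}

fn main() {
    let x = entropic_lp_simplex(&[1.0, 0.0, 2.0], 0.5);
    // As τ → 0 the solution sharpens toward the LP vertex (the min-cost index).
    println!("x* = {:?}, d x1/d c1 = {}", x, entropic_lp_grad(&x, 1, 1, 0.5));
}
```

For general polytopes no closed form exists, so the basis-sensitivity machinery in lp_layer takes over; but the simplex case captures why a small entropy term turns a piecewise-constant LP solution map into a differentiable one.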