soco/algorithms/mod.rs
//! Offline and Online Algorithms.
//!
//! # Algorithms
//!
//! ## Offline
//!
//! ### Backward-Recurrent Capacity Provisioning \[1\]
//!
//! SSCO - Fractional - Uni-Dimensional
//!
//! Stays within bounds on the optimal solution moving backwards in time.
//!
//! ### Convex Optimization
//!
//! SCO - Fractional
//!
//! Finds the minimizer of a convex objective.
//!
//! ### Graph-Based Optimal Algorithm (in one dimension) \[5\]
//!
//! SSCO - Integral - Uni-Dimensional
//!
//! Finds a shortest path in a graph using dynamic programming (in polynomial time).
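//!
//! A minimal uni-dimensional sketch of the underlying dynamic program; the
//! `hit` cost function, the constant `switch` cost that scales only upward
//! moves, and the start in configuration `0` are illustrative assumptions,
//! not this crate's interface:
//!
//! ```
//! /// Cheapest integral schedule over `1..=time_horizon` with configurations
//! /// in `0..=bound`, hitting costs `hit(t, x)`, and movement cost
//! /// `switch * max(x_t - x_{t-1}, 0)`, starting in configuration `0`.
//! fn graph_optimal(time_horizon: usize, bound: usize, hit: &dyn Fn(usize, usize) -> f64, switch: f64) -> f64 {
//!     // cost[x] = cheapest path ending in configuration `x` after the current layer
//!     let mut cost = vec![f64::INFINITY; bound + 1];
//!     cost[0] = 0.;
//!     for t in 1..=time_horizon {
//!         let prev = cost.clone();
//!         for x in 0..=bound {
//!             cost[x] = (0..=bound)
//!                 .map(|y| prev[y] + switch * (x as f64 - y as f64).max(0.) + hit(t, x))
//!                 .fold(f64::INFINITY, f64::min);
//!         }
//!     }
//!     cost.into_iter().fold(f64::INFINITY, f64::min)
//! }
//! ```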
//!
//! ### Graph-Based Optimal Algorithm \[9\]
//!
//! SSCO - Integral
//!
//! Finds a shortest path in a graph using dynamic programming (**not** in polynomial time).
//!
//! ### Graph-Based Polynomial-Time Approximation Scheme \[9\]
//!
//! SSCO - Integral
//!
//! Extends the graph-based optimal algorithm by only considering a subset of the decision space to achieve better performance.
//!
//! ### Static Fractional Optimum
//!
//! SCO - Fractional
//!
//! Solves a smaller problem than _Convex Optimization_ to obtain an optimal static solution.
//!
//! ### Static Integral Optimum
//!
//! SCO - Integral
//!
//! Cycles through _all_ possible configurations to find the optimal static integral solution. Convexity is used to stop the search of the decision space quickly in practice; the worst-case runtime, however, is exponential.
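//!
//! A uni-dimensional sketch of this search with convexity-based early
//! stopping (the `total_cost` closure is a hypothetical stand-in for the
//! cost of playing a configuration statically over the entire instance):
//!
//! ```
//! /// Best static configuration in `0..=bound` and its cost, assuming
//! /// `total_cost` is convex so the scan may stop at the first increase.
//! fn static_integral_optimum(bound: usize, total_cost: &dyn Fn(usize) -> f64) -> (usize, f64) {
//!     let mut best = (0, total_cost(0));
//!     for x in 1..=bound {
//!         let cost = total_cost(x);
//!         if cost > best.1 {
//!             break; // by convexity the cost only grows from here on
//!         }
//!         best = (x, cost);
//!     }
//!     best
//! }
//! assert_eq!(static_integral_optimum(10, &|x| (x as f64 - 3.).abs()), (3, 0.));
//! ```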
//!
//! ## Online
//!
//! ### Lazy Capacity Provisioning \[1\]
//!
//! SSCO - Fractional - Uni-Dimensional - $3$-competitive
//!
//! Lazily stays within fractional bounds on the decision space.
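//!
//! The lazy step itself reduces to a projection; how the bounds `lower` and
//! `upper` on the offline optimum are computed at each time step is omitted
//! here:
//!
//! ```
//! /// One LCP step: move only as far as needed to re-enter `[lower, upper]`.
//! fn lcp_step(previous: f64, lower: f64, upper: f64) -> f64 {
//!     previous.clamp(lower, upper)
//! }
//! assert_eq!(lcp_step(2., 3., 5.), 3.); // move up to the lower bound
//! assert_eq!(lcp_step(4., 3., 5.), 4.); // already within bounds: stay lazy
//! ```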
//!
//! ### Lazy Capacity Provisioning \[5\]
//!
//! SSCO - Integral - Uni-Dimensional - $3$-competitive (optimal for a deterministic algorithm)
//!
//! Lazily stays within integral bounds on the decision space.
//!
//! ### Memoryless algorithm \[3\]
//!
//! SSCO - Fractional - Uni-Dimensional - $3$-competitive (optimal for a memoryless algorithm)
//!
//! Moves towards the minimizer of the hitting cost, balancing the movement cost it pays along the way. Special case of Primal Online Balanced Descent.
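//!
//! A uni-dimensional sketch of one such step; stopping once the traveled
//! distance equals half the hitting cost at the stopping point is an
//! assumption of this sketch, not necessarily the exact balancing rule of
//! \[3\]:
//!
//! ```
//! /// Move from `x` towards the minimizer `v` of the hitting cost `f`,
//! /// stopping where the traveled distance balances `f` (found by bisection).
//! fn memoryless_step(x: f64, v: f64, f: &dyn Fn(f64) -> f64) -> f64 {
//!     let balance = |y: f64| (y - x).abs() - f(y) / 2.;
//!     if balance(v) <= 0. {
//!         return v; // reaching the minimizer is cheap enough
//!     }
//!     let (mut lo, mut hi) = (x, v); // bisect on the segment from `x` to `v`
//!     for _ in 0..100 {
//!         let mid = (lo + hi) / 2.;
//!         if balance(mid) <= 0. { lo = mid } else { hi = mid }
//!     }
//!     lo
//! }
//! ```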
//!
//! ### Probabilistic Algorithm \[3\]
//!
//! SSCO - Fractional - Uni-Dimensional - $2$-competitive (optimal)
//!
//! Constructs a probability distribution of well-performing configurations over time.
//!
//! ### Randomly Biased Greedy \[4\]
//!
//! SCO - Fractional - Uni-Dimensional - $2$-competitive (optimal)
//!
//! Uses a work function to balance hitting and movement costs.
//!
//! ### Randomized Integral Relaxation \[5\]
//!
//! SSCO - Integral - Uni-Dimensional - $2$-competitive (optimal)
//!
//! Randomly rounds solutions of any $2$-competitive fractional algorithm.
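//!
//! One simple unbiased rounding of a fractional schedule uses a single
//! threshold `r` drawn uniformly from `[0, 1)` and shared across all time
//! steps; whether this matches the exact scheme of \[5\] is not claimed
//! here:
//!
//! ```
//! /// Round each fractional configuration `x` to `ceil(x - r)`; for `r`
//! /// uniform on `[0, 1)` the expectation of each entry equals `x`.
//! fn round_schedule(fractional: &[f64], r: f64) -> Vec<u32> {
//!     fractional.iter().map(|&x| (x - r).ceil().max(0.) as u32).collect()
//! }
//! assert_eq!(round_schedule(&[0.5, 1.5], 0.25), vec![1, 2]);
//! ```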
//!
//! ### Lazy Budgeting for SLO \[8\]
//!
//! SLO - Integral - $2d$-competitive (optimal for a deterministic algorithm)
//!
//! Keeps servers active for some minimum time delta even when they are not in use, balancing hitting and movement costs.
//!
//! ### Lazy Budgeting for SLO (randomized) \[8\]
//!
//! SLO - Integral - $(e / (e - 1))d$-competitive
//!
//! Keeps servers active for some minimum time delta even when they are not in use, balancing hitting and movement costs.
//!
//! ### Lazy Budgeting for SBLO \[9\]
//!
//! SBLO - Integral - $2d + 1 + \epsilon$-competitive
//!
//! Keeps servers active for some minimum time delta even when they are not in use, balancing hitting and movement costs.
//!
//! ### Online Gradient Descent \[4\]
//!
//! SCO - Fractional
//!
//! Achieves sublinear regret by _learning_ the static offline optimum.
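//!
//! A projected gradient step in one dimension; the diminishing step size
//! (e.g. $\eta_t \in \mathcal{O}(1 / \sqrt{t})$) that yields sublinear
//! regret is chosen by the caller:
//!
//! ```
//! /// Descend along the gradient of the current hitting cost and project
//! /// back onto the feasible interval `[lower, upper]`.
//! fn ogd_step(x: f64, gradient: f64, step_size: f64, lower: f64, upper: f64) -> f64 {
//!     (x - step_size * gradient).clamp(lower, upper)
//! }
//! assert_eq!(ogd_step(1., 2., 0.25, 0., 10.), 0.5);
//! ```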
//!
//! ### Primal Online Balanced Descent \[6\]
//!
//! SCO - Fractional - $3+\mathcal{O}(1/\alpha)$-competitive for the $\ell_2$ norm and locally $\alpha$-polyhedral hitting costs, $\mathcal{O}(\sqrt{d})$-competitive for the $\ell_1$ norm; mirror map must be $m$-strongly convex and $M$-Lipschitz smooth in the switching cost norm
//!
//! Takes a gradient step orthogonal to some landing level set balancing costs in the primal space.
//!
//! ### Dual Online Balanced Descent \[6\]
//!
//! SCO - Fractional - mirror map must be $m$-strongly convex and $M$-Lipschitz smooth in the switching cost norm
//!
//! Takes a gradient step orthogonal to some landing level set balancing costs in the dual space. Achieves sublinear regret.
//!
//! ### Greedy Online Balanced Descent \[7\]
//!
//! SCO - Fractional - $\mathcal{O}(1 / \sqrt{m})$-competitive for $m$-quasiconvex hitting costs and $\ell_2$-squared switching costs
//!
//! Takes a normal OBD-step and then an additional step directly towards the minimizer of the hitting cost depending on the convexity parameter $m$.
//!
//! ### Regularized Online Balanced Descent \[7\]
//!
//! SCO - Fractional - $\mathcal{O}(1 / \sqrt{m})$-competitive (optimal) for $m$-strongly convex and differentiable hitting costs and switching costs modeled as the Bregman divergence where the potential function is $\alpha$-strongly convex, $\beta$-strongly smooth, differentiable, and its Fenchel Conjugate is differentiable; $\Omega(1/m)$-competitive for $m$-quasiconvex hitting costs and $\ell_2$-squared switching costs
//!
//! Uses a computationally simpler local view.
//!
//! ### Receding Horizon Control \[2\]
//!
//! SSCO - Fractional - $(1 + \Omega(\beta/e_0))$-competitive where $e_0$ is the idle cost and $\beta$ is the scaling of the Manhattan distance; when uni-dimensional the competitive ratio is $1 + \mathcal{O}(1/w)$
//!
//! Repeatedly minimizes the cost over a window of $w$ predicted time steps and carries out only the first step of the resulting trajectory.
//!
//! ### Averaging Fixed Horizon Control \[2\]
//!
//! SSCO - Fractional - $1 + \mathcal{O}(1/w)$-competitive
//!
//! Averages the actions of $w + 1$ fixed horizon control algorithms whose optimization windows are shifted against each other.
//!
//! # References
//!
//! 1. Minghong Lin and Adam Wierman and Lachlan L. H. Andrew and Eno Thereska. _Dynamic right-sizing for power-proportional data centers_. 2011.
//! 2. Minghong Lin and Zhenhua Liu and Adam Wierman and Lachlan L. H. Andrew. _Online Algorithms for Geographical Load Balancing_. 2012.
//! 3. Nikhil Bansal and Anupam Gupta and Ravishankar Krishnaswamy and Kirk Pruhs and Kevin Schewior and Cliff Stein. _A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs_. 2015.
//! 4. Lachlan L. H. Andrew and Siddharth Barman and Katrina Ligett and Minghong Lin and Adam Meyerson and Alan Roytman and Adam Wierman. _A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret_. 2015.
//! 5. Susanne Albers and Jens Quedenfeld. _Optimal Algorithms for Right-Sizing Data Centers_. 2018.
//! 6. Niangjun Chen and Gautam Goel and Adam Wierman. _Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent_. 2018.
//! 7. Gautam Goel and Yiheng Lin and Haoyuan Sun and Adam Wierman. _Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization_. 2019.
//! 8. Susanne Albers and Jens Quedenfeld. _Algorithms for Energy Conservation in Heterogeneous Data Centers_. 2021.
//! 9. Susanne Albers and Jens Quedenfeld. _Algorithms for Right-Sizing Heterogeneous Data Centers_. 2021.

use crate::{
    model::{ModelOutputFailure, ModelOutputSuccess},
    problem::{DefaultGivenProblem, Problem},
};

mod capacity_provisioning;

pub mod offline;
pub mod online;

/// Options of an algorithm.
pub trait Options<T, P, C, D>:
    Clone + DefaultGivenProblem<T, P, C, D> + Send
where
    P: Problem<T, C, D>,
    C: ModelOutputSuccess,
    D: ModelOutputFailure,
{
}
impl<T, P, C, D, O> Options<T, P, C, D> for O
where
    O: Clone + DefaultGivenProblem<T, P, C, D> + Send,
    P: Problem<T, C, D>,
    C: ModelOutputSuccess,
    D: ModelOutputFailure,
{
}