//! Priority queue traits and d-ary heap implementations, with the binary heap as a special case.
//!
//! ## Traits
//!
//! This crate defines two priority queue traits for (node, key) pairs with the following features:
//!
//! * [`PriorityQueue<N, K>`]: provides basic priority queue functionality.
//! * [`PriorityQueueDecKey<N, K>`]: extends the basic queue with decrease-key and related operations, made possible by being able to locate the positions of nodes that already exist on the heap.
//!
//! The more advanced `PriorityQueueDecKey` is separated from the basic queue because its additional functionality typically comes at the cost of additional memory.
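//!
//! For instance, the following minimal sketch (with illustrative `usize` nodes and `u64` keys; any concrete heap implementing the corresponding trait can be plugged in) shows code written against either trait:
//!
//! ```rust ignore
//! use orx_priority_queue::*;
//!
//! // only basic operations are needed: any `PriorityQueue` will do
//! fn pop_two<P: PriorityQueue<usize, u64>>(pq: &mut P) -> [Option<(usize, u64)>; 2] {
//!     [pq.pop(), pq.pop()]
//! }
//!
//! // keys of already enqueued nodes must be updated: a `PriorityQueueDecKey` is required
//! fn relax<P: PriorityQueueDecKey<usize, u64>>(pq: &mut P, node: usize, new_key: u64) {
//!     pq.try_decrease_key_or_push(&node, &new_key);
//! }
//! ```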
//!
//! ### Benefits of `PriorityQueueDecKey`
//!
//! Decrease-key and related operations are critical for algorithms where the key of a particular node may be evaluated multiple times. Without the ability to update the keys of nodes already on the heap, the space complexity of the corresponding algorithms
//! grows significantly, since every re-evaluation may push another copy of the node to the queue.
//!
//! Consider Dijkstra's shortest-path algorithm for instance. The space complexity of the algorithm would be *O(n^2)* with a `PriorityQueue`, where *n* is the number of nodes of the graph. This is because the label of each node might be evaluated up to *n-1* times, and consequently each node can be pushed to the queue up to *n-1* times. As also noted in the `std::collections::BinaryHeap` documentation, [*this implementation isn't memory-efficient as it may leave duplicate nodes in the queue.*](https://doc.rust-lang.org/stable/std/collections/binary_heap/index.html)
//!
//! With a `PriorityQueueDecKey`, on the other hand, the space complexity of the algorithm is kept at *O(n)*: each node enters the queue at most once; subsequent evaluations of its label are handled by the decrease-key operation.
//!
//! Furthermore, the additional functionality simplifies the algorithm implementation by pushing some of the complexity to the data structure. This becomes clear when the following `shortest_path` implementation is compared to the corresponding `std::collections::BinaryHeap` example. Note that it is almost a direct substitution while providing better space complexity, generic d-ary heap options, and a cleaner algorithm implementation.
//!
//!
//! #### `PriorityQueue` version of Dijkstra's shortest path algorithm
//!
//! Below is the main iteration of Dijkstra's shortest path algorithm with a basic priority queue without a decrease-key operation; taken and slightly adjusted from the `std::collections::BinaryHeap` documentation.
//!
//! ```rust ignore
//! // Examine the frontier with lower cost nodes first (min-heap)
//! while let Some((position, cost)) = heap.pop() {
//!     // Alternatively we could have continued to find all shortest paths
//!     if position == goal {
//!         return Some(cost);
//!     }
//!
//!     // Important as we may have already found a better way
//!     if cost > dist[position] { continue; }
//!
//!     // For each node we can reach, see if we can find a way with
//!     // a lower cost going through this node
//!     for edge in &adj_list[position] {
//!         let next = State { cost: cost + edge.cost, position: edge.node };
//!
//!         // If so, add it to the frontier and continue
//!         if next.cost < dist[next.position] {
//!             heap.push(next.position, next.cost);
//!             // Relaxation, we have now found a better way
//!             dist[next.position] = next.cost;
//!         }
//!     }
//! }
//! ```
//!
//! In addition to the heap, we need to keep the `dist` array. It serves two main purposes:
//! * the check `cost > dist[position]` avoids processing the same node multiple times;
//! * the check `next.cost < dist[next.position]` avoids pushing non-improving labels to the queue, reducing the number of heap pushes; however, the heap may still hold duplicate entries, and the worst-case space complexity remains *O(n^2)*.
//!
//! Notice that neither would be necessary if each node entered the heap at most once and its key were updated throughout the search whenever a shorter path is found.
//!
//! #### `PriorityQueueDecKey` version of Dijkstra's shortest path algorithm
//!
//! See below the `PriorityQueueDecKey` version which reduces space complexity to *O(n)*.
//!
//! The heap is now internally paired with a positions array (`DaryHeapOfIndices`) or a hash map (`DaryHeapWithMap`), each with *O(n)* space complexity. This is compensated by no longer requiring the `dist` vector.
//!
//! Furthermore, it simplifies the algorithm implementation by pushing the main complexity to the data structure; the algorithm now simply expresses the traversal and the key updates.
//!
//! ```rust ignore
//! // Examine the frontier with lower cost nodes first (min-heap)
//! while let Some((position, cost)) = heap.pop() {
//!     // Alternatively we could have continued to find all shortest paths
//!     if position == goal {
//!         return Some(cost);
//!     }
//!
//!     // For each node we can reach, see if we can find a way with
//!     // a lower cost going through this node
//!     for edge in &adj_list[position] {
//!         heap.try_decrease_key_or_push(&edge.node, &(cost + edge.cost));
//!     }
//! }
//! ```
//!
//! Note that `try_decrease_key_or_push` performs the following (see the sketch after this list):
//!
//! * if the node already exists in the queue:
//!     * when the new `key` is strictly less than the node's current key, it decreases the key of the node to the given `key` and returns true (a shorter path has been found, an indicator to update the predecessor if the shortest path itself is a required output in addition to the shortest distance);
//!     * otherwise, it leaves the queue unchanged and returns false;
//! * otherwise, it pushes the `node` with the given `key` to the queue and returns false.
//!
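//! The following is a minimal sketch of these semantics in terms of the other trait methods; the actual method is provided by the trait, and the `usize` node and key types below are purely illustrative.
//!
//! ```rust ignore
//! use orx_priority_queue::*;
//!
//! fn try_decrease_key_or_push_sketch<P>(pq: &mut P, node: &usize, key: &usize) -> bool
//! where
//!     P: PriorityQueueDecKey<usize, usize>,
//! {
//!     if pq.contains(node) {
//!         // decreases the key and returns true only if `key` is strictly
//!         // less than the node's current key; otherwise returns false
//!         pq.try_decrease_key(node, key)
//!     } else {
//!         pq.push(*node, *key);
//!         false
//!     }
//! }
//! ```
//!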
//! This is exactly what we need for Dijkstra's shortest path algorithm, and for many other node-labeling algorithms. See [`PriorityQueueDecKey`] for the other methods of the trait.
//!
//! See below, or [tests/dijkstra.rs](https://github.com/orxfun/orx-priority-queue/blob/main/tests/dijkstra.rs), for the complete implementation of Dijkstra's shortest path algorithm with a `PriorityQueueDecKey`, adjusted from the standard `BinaryHeap` example.
//!
//!
//! ```rust
//! use orx_priority_queue::*;
//!
//! // Each node is represented as a `usize`, for a shorter implementation.
//! struct Edge {
//!     node: usize,
//!     cost: usize,
//! }
//!
//! // Dijkstra's shortest path algorithm.
//!
//! // Start at `start` and search for the shortest path to `goal`.
//! // Unlike the `std::collections::BinaryHeap` version, this implementation
//! // does not leave duplicate nodes in the queue, and it does not need a
//! // `dist` vector or a `usize::MAX` sentinel value.
//! fn shortest_path(adj_list: &Vec<Vec<Edge>>, start: usize, goal: usize) -> Option<usize> {
//!     let mut heap = BinaryHeapWithMap::default();
//!
//!     // We're at `start`, with a zero cost
//!     heap.push(start, 0);
//!
//!     // Examine the frontier with lower cost nodes first (min-heap)
//!     while let Some((position, cost)) = heap.pop() {
//!         // Alternatively we could have continued to find all shortest paths
//!         if position == goal {
//!             return Some(cost);
//!         }
//!
//!         // For each node we can reach, see if we can find a way with
//!         // a lower cost going through this node
//!         for edge in &adj_list[position] {
//!             heap.try_decrease_key_or_push(&edge.node, &(cost + edge.cost));
//!         }
//!     }
//!
//!     // Goal not reachable
//!     None
//! }
//!
//! // This is the directed graph we're going to use.
//! // The node numbers correspond to the different states,
//! // and the edge weights symbolize the cost of moving
//! // from one node to another.
//! // Note that the edges are one-way.
//! //
//! //                  7
//! //          +-----------------+
//! //          |                 |
//! //          v   1        2    |  2
//! //          0 -----> 1 -----> 3 ---> 4
//! //          |        ^        ^      ^
//! //          |        | 1      |      |
//! //          |        |        | 3    | 1
//! //          +------> 2 -------+      |
//! //           10      |               |
//! //                   +---------------+
//! //
//! // The graph is represented as an adjacency list where each index,
//! // corresponding to a node value, has a list of outgoing edges.
//! // Chosen for its efficiency.
//! let graph = vec![
//!     // Node 0
//!     vec![Edge { node: 2, cost: 10 }, Edge { node: 1, cost: 1 }],
//!     // Node 1
//!     vec![Edge { node: 3, cost: 2 }],
//!     // Node 2
//!     vec![
//!         Edge { node: 1, cost: 1 },
//!         Edge { node: 3, cost: 3 },
//!         Edge { node: 4, cost: 1 },
//!     ],
//!     // Node 3
//!     vec![Edge { node: 0, cost: 7 }, Edge { node: 4, cost: 2 }],
//!     // Node 4
//!     vec![],
//! ];
//!
//! assert_eq!(shortest_path(&graph, 0, 1), Some(1));
//! assert_eq!(shortest_path(&graph, 0, 3), Some(3));
//! assert_eq!(shortest_path(&graph, 3, 0), Some(7));
//! assert_eq!(shortest_path(&graph, 0, 4), Some(5));
//! assert_eq!(shortest_path(&graph, 4, 0), None);
//! ```
//!
//! ## Implementations
//!
//! ### d-ary heap
//!
//! The core [d-ary heap](https://en.wikipedia.org/wiki/D-ary_heap) is implemented using const generics.
//! Three structs are built on top of this core:
//!
//! * [`DaryHeap<N, K, const D: usize>`] implements `PriorityQueue<N, K>`; it is to be preferred when the additional
//! features are not required.
//! * [`DaryHeapWithMap<N, K, const D: usize>`] where `N: Hash + Eq` implements `PriorityQueueDecKey<N, K>`.
//! It is a combination of the d-ary heap and a hash map to track positions of nodes.
//! This might be considered the default way to extend the heap with the additional functionality without requiring a linear search.
//! * [`DaryHeapOfIndices<N, K, const D: usize>`] where `N: HasIndex` implements `PriorityQueueDecKey<N, K>`.
//! This variant is an alternative to the hash-map implementation and is particularly useful in algorithms where the nodes to be enqueued come from a closed set of known elements and the size of the queue is likely to get close to the total number of candidates. Construction of the three variants is sketched below.
//!
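//! As a minimal construction sketch (the `usize` node and `f64` key types are purely illustrative):
//!
//! ```rust
//! use orx_priority_queue::*;
//!
//! // basic d-ary (here, 4-ary) heap without position tracking
//! let _pq = DaryHeap::<usize, f64, 4>::default();
//!
//! // positions tracked by a hash map; requires N: Hash + Eq
//! let _pq = DaryHeapWithMap::<usize, f64, 4>::default();
//!
//! // positions tracked by an array; requires N: HasIndex and an
//! // upper bound on the indices of candidate nodes
//! let _pq = DaryHeapOfIndices::<usize, f64, 4>::with_upper_limit(100);
//! ```
//!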
//! ### Special traversal for d=2: binary-heap
//!
//! Const generics further allow using specialized arithmetic for the special case d=2, i.e.,
//! when the d-ary heap is a binary heap.
//! In particular, one addition/subtraction is avoided during traversal through the tree.
//!
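//! As an illustration of the index arithmetic (a sketch of the general technique with zero-based indexing, not the crate's internal code):
//!
//! ```rust ignore
//! // general d-ary case: parent and first child of a node
//! fn parent<const D: usize>(child: usize) -> usize {
//!     (child - 1) / D
//! }
//! fn first_child<const D: usize>(parent: usize) -> usize {
//!     D * parent + 1
//! }
//!
//! // binary case (D = 2): the same positions can be computed with shifts
//! fn parent_binary(child: usize) -> usize {
//!     (child - 1) >> 1
//! }
//! fn first_child_binary(parent: usize) -> usize {
//!     (parent << 1) | 1
//! }
//! ```
//!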
//! However, the overall performance of the queues depends on the use case,
//! the ratio of push and decrease-key operations, etc.
//! Benchmarks will follow.
//!
//!
//! ## Example
//!
//! ```rust
//! use orx_priority_queue::*;
//!
//! fn test_priority_queue<P>(mut pq: P)
//! where
//!     P: PriorityQueue<usize, f64>,
//! {
//!     println!("\ntest_priority_queue");
//!     pq.clear();
//!
//!     pq.push(0, 42.0);
//!     assert_eq!(Some(&(0, 42.0)), pq.peek());
//!
//!     let popped = pq.pop();
//!     assert_eq!(Some((0, 42.0)), popped);
//!     assert!(pq.is_empty());
//!
//!     pq.push(0, 42.0);
//!     pq.push(1, 7.0);
//!     pq.push(2, 24.0);
//!     pq.push(10, 3.0);
//!
//!     while let Some(popped) = pq.pop() {
//!         println!("pop {:?}", popped);
//!     }
//! }
//! fn test_priority_queue_deckey<P>(mut pq: P)
//! where
//!     P: PriorityQueueDecKey<usize, f64>,
//! {
//!     println!("\ntest_priority_queue_deckey");
//!     pq.clear();
//!
//!     pq.push(0, 42.0);
//!     assert_eq!(Some(&(0, 42.0)), pq.peek());
//!
//!     let popped = pq.pop();
//!     assert_eq!(Some((0, 42.0)), popped);
//!     assert!(pq.is_empty());
//!
//!     pq.push(0, 42.0);
//!     assert!(pq.contains(&0));
//!
//!     pq.decrease_key(&0, &7.0);
//!     assert_eq!(Some(&(0, 7.0)), pq.peek());
//!
//!     let is_key_decreased = pq.try_decrease_key(&0, &10.0);
//!     assert!(!is_key_decreased);
//!     assert_eq!(Some(&(0, 7.0)), pq.peek());
//!
//!     while let Some(popped) = pq.pop() {
//!         println!("pop {:?}", popped);
//!     }
//! }
//!
//! // d-ary heap generic over const d
//! const D: usize = 4;
//!
//! test_priority_queue(DaryHeap::<usize, f64, D>::default());
//! test_priority_queue(DaryHeapWithMap::<usize, f64, D>::default());
//! test_priority_queue(DaryHeapOfIndices::<usize, f64, D>::with_upper_limit(100));
//!
//! test_priority_queue_deckey(DaryHeapWithMap::<usize, f64, D>::default());
//! test_priority_queue_deckey(DaryHeapOfIndices::<usize, f64, D>::with_upper_limit(100));
//!
//! // or type aliases for common heaps to simplify signature
//! // Binary, Ternary or Quarternary to fix D of Dary
//! test_priority_queue(BinaryHeap::default());
//! test_priority_queue(BinaryHeapWithMap::default());
//! test_priority_queue(BinaryHeapOfIndices::with_upper_limit(100));
//! test_priority_queue_deckey(BinaryHeapWithMap::default());
//! test_priority_queue_deckey(BinaryHeapOfIndices::with_upper_limit(100));
//!
//! test_priority_queue(TernaryHeap::default());
//! test_priority_queue(TernaryHeapWithMap::default());
//! test_priority_queue(TernaryHeapOfIndices::with_upper_limit(100));
//! test_priority_queue_deckey(TernaryHeapWithMap::default());
//! test_priority_queue_deckey(TernaryHeapOfIndices::with_upper_limit(100));
//!
//! test_priority_queue(QuarternaryHeap::default());
//! test_priority_queue(QuarternaryHeapWithMap::default());
//! test_priority_queue(QuarternaryHeapOfIndices::with_upper_limit(100));
//! test_priority_queue_deckey(QuarternaryHeapWithMap::default());
//! test_priority_queue_deckey(QuarternaryHeapOfIndices::with_upper_limit(100));
//! ```
//!
//! ## License
//!
//! This library is licensed under the MIT license. See LICENSE for details.

#![warn(
    missing_docs,
    clippy::unwrap_in_result,
    clippy::unwrap_used,
    clippy::panic,
    clippy::panic_in_result_fn,
    clippy::float_cmp,
    clippy::float_cmp_const,
    clippy::missing_panics_doc,
    clippy::todo
)]

mod dary;
mod has_index;
mod positions;
mod priority_queue;
mod priority_queue_deckey;

pub use dary::daryheap::{BinaryHeap, DaryHeap, QuarternaryHeap, TernaryHeap};
pub use dary::daryheap_index::{
    BinaryHeapOfIndices, DaryHeapOfIndices, QuarternaryHeapOfIndices, TernaryHeapOfIndices,
};
pub use dary::daryheap_map::{
    BinaryHeapWithMap, DaryHeapWithMap, QuarternaryHeapWithMap, TernaryHeapWithMap,
};
pub use has_index::HasIndex;
pub use priority_queue::PriorityQueue;
pub use priority_queue_deckey::PriorityQueueDecKey;