# SQueue
[SQueue](https://gitlab.com/liberecofr/squeue) is a sized queue implemented in [Rust](https://www.rust-lang.org/).
This is just a simple wrapper over a [standard Rust VecDeque](https://doc.rust-lang.org/stable/std/collections/struct.VecDeque.html).
It makes a few opinionated choices, settling once and for all whether one should push/pop on the front or the back, and trims the API
down to the minimum so that it can only serve as a simple FIFO queue.
The main difference from a standard queue is that this one is capped
in size, and will drop the oldest items when pushing beyond its allowed capacity.
On top of this, it provides a sync, thread-safe implementation which can be shared between threads.
But nothing crazy, just a simple wrapper, really.
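
Conceptually, the capped push is nothing more than what you could write by hand on top of a `VecDeque`. Here is a minimal sketch of the idea; `capped_push` is a hypothetical helper written for illustration, not the crate's actual code:

```rust
use std::collections::VecDeque;

// Hypothetical helper, for illustration only: once the deque is full,
// the oldest item (front) is removed and returned, then the new item
// is appended to the back.
fn capped_push<T>(deque: &mut VecDeque<T>, capacity: usize, item: T) -> Option<T> {
    let dropped = if deque.len() >= capacity {
        deque.pop_front()
    } else {
        None
    };
    deque.push_back(item);
    dropped
}

fn main() {
    let mut deque = VecDeque::new();
    assert_eq!(None, capped_push(&mut deque, 3, 1));
    assert_eq!(None, capped_push(&mut deque, 3, 2));
    assert_eq!(None, capped_push(&mut deque, 3, 3));
    // The deque is full: the oldest item (1) is dropped and returned.
    assert_eq!(Some(1), capped_push(&mut deque, 3, 4));
    assert_eq!(3, deque.len());
}
```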

Project spawned from [HashLRU](https://gitlab.com/liberecofr/hashlru).
# Status
SQueue is somewhere between a toy project and something you could use.
It comes with a rather complete test harness and tries to have reasonable documentation, so that's a plus.
OTOH it is quite young, and to my knowledge not used in production anywhere.
When in doubt, use vanilla [VecDeque](https://doc.rust-lang.org/stable/std/collections/struct.VecDeque.html).
[pipelines](https://gitlab.com/liberecofr/squeue/pipelines) | [crates.io](https://crates.io/crates/squeue) | [source](https://gitlab.com/liberecofr/squeue/tree/main) | [license](https://gitlab.com/liberecofr/squeue/blob/main/LICENSE)
# Usage
```rust
use squeue::Queue;

// A queue capped at 3 items.
let mut queue: Queue<usize> = Queue::new(3);
// While there is room left, push returns None.
assert_eq!(None, queue.push(1));
assert_eq!(None, queue.push(2));
assert_eq!(None, queue.push(3));
// Once full, push drops and returns the oldest item.
assert_eq!(Some(1), queue.push(4));
assert_eq!(3, queue.len());
```
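
The crate also ships a sync, thread-safe variant (the benchmarks below exercise it). Without assuming anything about that variant's exact API, here is a rough sketch of the kind of sharing it is meant for, wrapping the plain `Queue` in std's `Arc<Mutex<...>>`:

```rust
use squeue::Queue;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Wrap the queue so several threads can push into it concurrently.
    let queue = Arc::new(Mutex::new(Queue::<usize>::new(10)));

    let handles: Vec<_> = (0..4usize)
        .map(|t| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || {
                for i in 0..10 {
                    queue.lock().unwrap().push(t * 10 + i);
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // 40 items were pushed into a queue capped at 10, so only the 10 most
    // recently pushed items remain.
    assert_eq!(10, queue.lock().unwrap().len());
}
```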
# Benchmarks
Taken from a random CI job:
```
running 6 tests
test tests::bench_read_usize_builtin_vecdeque ... bench: 0 ns/iter (+/- 0)
test tests::bench_read_usize_squeue_queue ... bench: 0 ns/iter (+/- 0)
test tests::bench_read_usize_squeue_sync_queue ... bench: 16 ns/iter (+/- 0)
test tests::bench_write_usize_builtin_vecdeque ... bench: 10 ns/iter (+/- 28)
test tests::bench_write_usize_squeue_queue ... bench: 3 ns/iter (+/- 0)
test tests::bench_write_usize_squeue_sync_queue ... bench: 21 ns/iter (+/- 0)
test result: ok. 0 passed; 0 failed; 0 ignored; 6 measured; 0 filtered out; finished in 11.75s
```
These are not the results of thorough, intensive benchmarking, but the general idea is:
* it is cheap to use
* in some cases, it may even be faster than the builtin type; my current interpretation
is that since the size is capped, there are fewer memory allocations to perform when
pushing lots of items into it, as the oldest ones are simply dropped (see the small illustration after this list)
* the sync version is significantly slower, as expected
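
As a small illustration of the allocation point above, pushing far more items than the capacity never grows the queue past its cap (this only uses the API shown in the Usage section):

```rust
use squeue::Queue;

fn main() {
    let mut queue: Queue<usize> = Queue::new(3);
    // Push far more items than the capacity: the backing storage never has
    // to grow beyond the cap, since the oldest items are simply dropped.
    for i in 0..1_000 {
        queue.push(i);
    }
    assert_eq!(3, queue.len());
}
```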
To run the benchmarks:
```shell
cd bench
rustup default nightly
cargo bench
```
# Links
* [crate](https://crates.io/crates/squeue) on crates.io
* [doc](https://docs.rs/squeue/) on docs.rs
* [source](https://gitlab.com/liberecofr/squeue/tree/main) on gitlab.com
* [VecDeque](https://doc.rust-lang.org/stable/std/collections/struct.VecDeque.html), the standard Rust type used to implement queues
* [HashLRU](https://gitlab.com/liberecofr/hashlru), related project
# License
SQueue is licensed under the [MIT](https://gitlab.com/liberecofr/squeue/blob/main/LICENSE) license.