# TODO
## Benchmark Improvements
### Missing Benchmarks
- [x] **`list/pop_front` and `list/pop_back`**: Add a FIFO/LIFO drain benchmark. Push N elements, then pop them all from one end. This models queue (pop_front) and stack (pop_back) patterns. Compare PieList, Vec, VecDeque.
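  A minimal sketch of the two drain patterns, using std `Vec`/`VecDeque` as stand-ins; the PieList arm of the benchmark would follow the same push-then-drain shape:

  ```rust
  use std::collections::VecDeque;

  /// FIFO drain (queue pattern): push n elements, then pop them all from the front.
  fn drain_fifo(n: usize) -> usize {
      let mut q: VecDeque<usize> = (0..n).collect();
      let mut popped = 0;
      while q.pop_front().is_some() {
          popped += 1;
      }
      popped
  }

  /// LIFO drain (stack pattern): push n elements, then pop them all from the back.
  fn drain_lifo(n: usize) -> usize {
      let mut s: Vec<usize> = (0..n).collect();
      let mut popped = 0;
      while s.pop().is_some() {
          popped += 1;
      }
      popped
  }

  fn main() {
      assert_eq!(drain_fifo(1_000), 1_000);
      assert_eq!(drain_lifo(1_000), 1_000);
  }
  ```

  Note that `Vec` has no cheap `pop_front`, which is exactly the asymmetry this benchmark should surface.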
- [x] **`list/retain`**: Benchmark in-place filtering with `retain()`. Fill a list, then remove ~50% of elements by predicate. Models removing dead entities in a game loop or filtering stale cache entries. Compare PieList `retain()` vs `Vec::retain()`.
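  The workload, sketched against `Vec::retain` (the PieList arm would apply the same predicate through its own `retain()`):

  ```rust
  // Fill with n elements, then drop ~50% in place by predicate (keep evens).
  fn retain_workload(n: usize) -> usize {
      let mut v: Vec<u64> = (0..n as u64).collect();
      v.retain(|&x| x % 2 == 0);
      v.len()
  }

  fn main() {
      assert_eq!(retain_workload(1_000), 500);
  }
  ```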
- [x] **`heap/mixed_workload`**: Add a synthetic mixed push/decrease_key/pop benchmark that interleaves operations in a ratio typical of Dijkstra-like algorithms (e.g., for each pop, do ~3 pushes and ~2 decrease_keys). This isolates the heap behavior from graph traversal overhead.
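  A sketch of the op mix using std's `BinaryHeap`. std has no `decrease_key`, so it is faked here as a push of a smaller key; the real benchmark would call the heap's native `decrease_key` on a tracked handle instead:

  ```rust
  use std::cmp::Reverse;
  use std::collections::BinaryHeap;

  // Dijkstra-like ratio: per pop, ~3 pushes and ~2 decrease_keys.
  fn mixed_workload(rounds: usize) -> usize {
      let mut heap = BinaryHeap::new();
      let mut seed: u64 = 0x9E37_79B9_7F4A_7C15;
      let mut next = || {
          // xorshift-style PRNG to keep the sketch dependency-free
          seed ^= seed << 13;
          seed ^= seed >> 7;
          seed ^= seed << 17;
          seed
      };
      let mut pops = 0;
      for _ in 0..rounds {
          for _ in 0..3 {
              heap.push(Reverse(next()));
          }
          // decrease_key stand-ins: re-push smaller keys (std heap has no handles).
          heap.push(Reverse(next() / 2));
          heap.push(Reverse(next() / 2));
          if heap.pop().is_some() {
              pops += 1;
          }
      }
      pops
  }

  fn main() {
      assert_eq!(mixed_workload(100), 100);
  }
  ```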
- [x] **Larger Dijkstra graphs**: The dense graph benchmark uses only n=100. Add a medium-dense graph (n=1000, m=50000) to show how the FibHeap advantage scales. Consider adding a topology with bidirectional edges for more realistic relaxation patterns.
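  A hedged sketch of the medium-dense input (n = 1000 nodes, 25,000 undirected edge pairs = 50,000 directed arcs). The adjacency-list shape, weight range, and function name are illustrative assumptions, not the benchmark's actual API:

  ```rust
  // Generate a random graph with bidirectional edges and positive weights.
  fn gen_graph(n: usize, edge_pairs: usize, mut seed: u64) -> Vec<Vec<(usize, u64)>> {
      let mut adj: Vec<Vec<(usize, u64)>> = vec![Vec::new(); n];
      let mut next = move || {
          // xorshift-style PRNG so the input is deterministic across runs
          seed ^= seed << 13;
          seed ^= seed >> 7;
          seed ^= seed << 17;
          seed
      };
      for _ in 0..edge_pairs {
          let u = (next() % n as u64) as usize;
          let v = (next() % n as u64) as usize;
          let w = next() % 1_000 + 1; // positive weights for Dijkstra
          // Store both directions for more realistic relaxation patterns.
          adj[u].push((v, w));
          adj[v].push((u, w));
      }
      adj
  }

  fn main() {
      let g = gen_graph(1_000, 25_000, 42);
      let arcs: usize = g.iter().map(|a| a.len()).sum();
      assert_eq!(arcs, 50_000);
  }
  ```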
### Benchmark Quality Fixes
- [x] **ExtFibHeap coverage**: Add `extfibheap` to the `push_pop`, `peek`, and `drain` benchmarks for consistent cross-implementation comparison. Document whether `extfibheap` supports `decrease_key`; if not, note why it is excluded from that benchmark.
- [x] **`bench_list_split` cleanup**: The PieList split benchmark creates a `front` list that is never re-spliced or cleared; in debug builds this panics on drop. Add a `front.clear(&mut pool)` call or a re-splice step to prevent this.
- [x] **Vec prepend size limit**: The `prepend` benchmark skips Vec for N > 1000 to avoid timeout. This is correct, but add an explicit note to BENCHMARKS.md explaining why Vec is absent at larger sizes: each `insert(0, _)` shifts the whole buffer, so N prepends cost O(n²) total time.
## bench-table Visualization Improvements
- [x] **Inline ASCII bar charts**: Add a `--bars` flag that draws a horizontal bar after each relative time, proportional to slowdown. Example: `1.00x ████████████████` vs `6.5x ██`. This makes magnitude differences visible at a glance.
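  The rendering could be as simple as the following sketch, assuming bar width scales inversely with relative time so that the fastest implementation (1.00x) fills the column:

  ```rust
  // Render a bar whose width is max_width / relative, clamped to [1, max_width].
  fn bar(relative: f64, max_width: usize) -> String {
      let width = ((max_width as f64 / relative).round() as usize)
          .clamp(1, max_width);
      "█".repeat(width)
  }

  fn main() {
      // Matches the example above: 1.00x gets 16 blocks, 6.5x gets ~16/6.5 ≈ 2.
      assert_eq!(bar(1.0, 16).chars().count(), 16);
      assert_eq!(bar(6.5, 16).chars().count(), 2);
  }
  ```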
- [x] **Summary dashboard mode**: Add a `--summary` flag that prints a single-screen overview: one line per benchmark category, showing the headline "PieList wins by Nx / Vec wins by Nx" for each operation. Useful for quick regression checks.
- [x] **Regression markers**: When both `base/` and `new/` estimates exist, show ↑ (faster) or ↓ (slower) markers and percentage change. This makes the bench-table useful for CI-style "did this PR regress performance?" checks.
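  A sketch of the marker computation, assuming the `base/` and `new/` estimates are mean times in nanoseconds (a negative percentage means the new run is faster):

  ```rust
  // ↑ marks an improvement (time went down), ↓ a regression (time went up).
  fn regression_marker(base_ns: f64, new_ns: f64) -> String {
      let pct = (new_ns - base_ns) / base_ns * 100.0;
      let arrow = if pct < 0.0 { '↑' } else { '↓' };
      format!("{} {:+.1}%", arrow, pct)
  }

  fn main() {
      assert_eq!(regression_marker(100.0, 80.0), "↑ -20.0%");
      assert_eq!(regression_marker(100.0, 125.0), "↓ +25.0%");
  }
  ```

  A real CI check would likely also want a noise threshold (e.g., suppress the marker below ±2%) so run-to-run jitter doesn't read as a regression.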
## Documentation
- [x] **Fill in measured results for newer benchmarks**: The "Key Benchmark Results" section in BENCHMARKS.md is missing data for `drain`, `cursor_traverse`, `split`, `push_pop`, `peek`, `pool/shrink_to_fit`. Run benchmarks and fill in measured values.
- [x] **Memory usage estimates**: Add a section to BENCHMARKS.md documenting approximate memory overhead per element for each implementation. Even a computed estimate (e.g., `size_of::<Slot<T>>()` vs `size_of::<T>()`) would be valuable for users deciding between PieList and Vec in memory-constrained environments.
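  The computed-estimate approach could look like this; the `Slot` layout below is a hypothetical stand-in for illustration, not PieList's actual definition:

  ```rust
  use std::mem::size_of;

  // Hypothetical pool slot: the element plus prev/next indices into the pool.
  struct Slot<T> {
      value: T,
      prev: u32,
      next: u32,
  }

  // Per-element overhead versus storing T directly in a Vec<T>.
  fn overhead_per_element<T>() -> usize {
      size_of::<Slot<T>>() - size_of::<T>()
  }

  fn main() {
      // For u64 payloads this layout is 16 bytes per slot: 8 bytes of overhead.
      assert_eq!(size_of::<Slot<u64>>(), 16);
      assert_eq!(overhead_per_element::<u64>(), 8);
  }
  ```

  Note that alignment padding makes the overhead payload-dependent, which is worth calling out in the BENCHMARKS.md section.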