# pathfinding-indexed
`pathfinding-indexed` provides index-only graph algorithms with predictable performance. Graphs are
stored as dense `usize` indices with adjacency lists, and algorithms are exposed as methods on
`IndexedGraph` (directed) and `IndexedUndirectedGraph` (undirected).

The indexed graph API is the stable core of the crate. BMSSP support is included as an
experimental specialized shortest-path implementation; current repository benchmarks track its
progress, but do not yet show it outperforming Dijkstra on the workloads in `benches/`.
## Credits
This crate builds on the original [`pathfinding`](https://crates.io/crates/pathfinding) crate and
credits Samuel Tardieu and its contributors for the original library and the algorithm coverage
this work descends from.
## Why This Is A Separate Crate
The original [`pathfinding`](https://crates.io/crates/pathfinding) crate optimizes for a very
different interface: generic free functions over arbitrary node types, closure-based successor
generation, and a broader public surface that includes helpers such as grids and matrices.

`pathfinding-indexed` makes a different tradeoff. It specializes on dense `usize` node indices,
materialized adjacency lists, and graph-owned methods on `IndexedGraph` and
`IndexedUndirectedGraph`. That lets the implementation use contiguous storage and simpler hot-path
bookkeeping, which is where the performance wins in this crate come from.
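As an illustration of that tradeoff (plain standard-library code, not this crate's actual internals): with dense `usize` nodes, Dijkstra's hot loop needs only `Vec` indexing for distances and neighbors, with no hashing and no generic node dispatch.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Dijkstra over a dense-index adjacency list: `adj[u]` holds `(v, weight)`
/// pairs. Distances live in a flat `Vec`, so the hot loop is pure indexing.
fn dijkstra(adj: &[Vec<(usize, u64)>], start: usize, goal: usize) -> Option<u64> {
    let mut dist = vec![u64::MAX; adj.len()];
    let mut heap = BinaryHeap::new();
    dist[start] = 0;
    heap.push(Reverse((0u64, start)));
    while let Some(Reverse((d, u))) = heap.pop() {
        if u == goal {
            return Some(d);
        }
        if d > dist[u] {
            continue; // stale heap entry; a shorter path was already settled
        }
        for &(v, w) in &adj[u] {
            let nd = d + w;
            if nd < dist[v] {
                dist[v] = nd;
                heap.push(Reverse((nd, v)));
            }
        }
    }
    None
}

fn main() {
    // 0 → 1 → 2 → 3 (total cost 3) beats the direct 0 → 2 edge of weight 4.
    let adj = vec![
        vec![(1, 1), (2, 4)],
        vec![(2, 1)],
        vec![(3, 1)],
        vec![],
    ];
    assert_eq!(dijkstra(&adj, 0, 3), Some(3));
    println!("shortest 0→3 = {:?}", dijkstra(&adj, 0, 3)); // Some(3)
}
```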
On the current apples-to-apples benchmark against the original crate, this indexed variant is modestly faster on Dijkstra and roughly tied on A*. So the claim here is not "strictly better in every case." The real difference is that this crate is optimized for indexed graph workloads and predictable performance characteristics.
That is why this was published as a separate crate instead of proposed as a direct PR to the original one: the final interface is intentionally much less generic and much less seamless for the original crate's closure-based use cases, so it would be a breaking product decision, not a narrow upstream optimization patch.
## Using this crate
In your `Cargo.toml`, put:

```toml
[dependencies]
pathfinding-indexed = "4.15.0"
```
### Example
```rust
use pathfinding_indexed::IndexedGraph;

// Entry `i` of the adjacency list holds node `i`'s outgoing
// `(neighbor, weight)` pairs. Values here are illustrative; see the crate
// docs for the full signatures.
let graph = IndexedGraph::from_adjacency(vec![
    vec![(1, 1), (2, 4)],
    vec![(2, 1)],
    vec![(3, 1)],
    vec![],
]);
let result = graph.dijkstra(0, 3);
// The shortest 0 → 3 path is 0 → 1 → 2 → 3 with total cost 3.
assert!(result.is_some());
```
### More Examples
Map external node values to dense indices:
```rust
use pathfinding_indexed::IndexedGraphMap;
use std::collections::HashMap;

// External string-keyed successor lists. Values and exact constructor
// signatures below are illustrative; see the crate docs.
let raw: HashMap<&str, Vec<(&str, u64)>> = vec![
    ("a", vec![("b", 1), ("c", 4)]),
    ("b", vec![("c", 1)]),
    ("c", vec![]),
]
.into_iter()
.collect();

let mapped = IndexedGraphMap::from_nodes_and_successors(raw.keys().copied(), |n| raw[n].clone());
let start = mapped.index_of("a").unwrap();
let goal = mapped.index_of("c").unwrap();
let result = mapped.graph.dijkstra(start, goal);
assert!(result.is_some());
```
Work with undirected graphs directly:
```rust
use pathfinding_indexed::IndexedUndirectedGraph;

// Weighted undirected edges as `(u, v, weight)` triples (illustrative values).
let graph = IndexedUndirectedGraph::from_edges(4, vec![(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 3, 5)]);
let mst = graph.kruskal();
// A spanning tree of 4 connected nodes has 3 edges (assuming `kruskal`
// returns the selected edges).
assert_eq!(mst.len(), 3);
```
Build from matrix-shaped inputs without hand-writing adjacency conversion:
```rust
use pathfinding_indexed::{IndexedGraph, IndexedGraphMap};

// Weight matrix: `matrix[i][j]` is the `i → j` edge weight, with 0 meaning
// no edge. Values and exact signatures are illustrative; see the crate docs.
let matrix = vec![
    vec![0, 1, 0],
    vec![0, 0, 2],
    vec![1, 0, 0],
];
let graph = IndexedGraph::from_adjacency_matrix(&matrix).unwrap();
assert!(graph.dijkstra(0, 2).is_some());

// A walkable grid (`true` = passable), connected by 4-neighborhood moves.
let grid = vec![
    vec![true, true],
    vec![false, true],
];
let mapped = IndexedGraphMap::from_walkable_matrix_4(&grid).unwrap();
assert!(mapped.index_of((0, 0)).is_some());
```
## Working with Graphs
See the Graph Guide for examples of building indexed graphs from adjacency lists, edge lists, adjacency matrices, walkable grid inputs, and for mapping external node values to dense indices.
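The dense-index mapping those helpers perform can be sketched with plain standard-library types (an illustration of the technique, not the crate's implementation): assign each distinct external node value the next free `usize` index on first sight, and keep the reverse table for translating results back.

```rust
use std::collections::HashMap;

/// Assign each distinct node value a dense `usize` index on first sight,
/// keeping the reverse table for translating indexed results back to values.
fn index_nodes<'a>(nodes: &[&'a str]) -> (HashMap<&'a str, usize>, Vec<&'a str>) {
    let mut to_index = HashMap::new();
    let mut to_node = Vec::new();
    for &n in nodes {
        to_index.entry(n).or_insert_with(|| {
            to_node.push(n);
            to_node.len() - 1 // index of the value just appended
        });
    }
    (to_index, to_node)
}

fn main() {
    let (to_index, to_node) = index_nodes(&["a", "b", "a", "c"]);
    // Duplicates keep their first index; indices are dense: 0, 1, 2.
    assert_eq!(to_index["a"], 0);
    assert_eq!(to_index["b"], 1);
    assert_eq!(to_index["c"], 2);
    assert_eq!(to_node[2], "c");
    println!("{} distinct nodes", to_node.len()); // 3 distinct nodes
}
```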
## License
This code is released under a dual Apache 2.0 / MIT free software license.
## Benchmarking
This repository includes two types of benchmarks:
### Wall-time Benchmarks (Criterion/CodSpeed)
Traditional wall-time benchmarks using Criterion (with CodSpeed compatibility) are located in
`benches/` with names like `algos.rs`, `edmondskarp.rs`, etc. These can be run with:

```sh
cargo bench
```
The BMSSP-focused benchmarks currently cover:

- `bmssp_all_vs_dijkstra`: single-source, all-target shortest paths on a 4096-node constant-degree graph
- `bmssp_vs_dijkstra`: single-pair shortest path on a 4096-node denser directed graph
- `indexed_vs_pathfinding`: `pathfinding-indexed` vs the original `pathfinding` crate on the same grid workload
Recent local runs on those workloads showed:

- `constant_degree_dijkstra_all`: about 0.47-0.49 ms
- `constant_degree_bmssp_all`: about 4.2-4.5 ms
- `dense_graph_dijkstra`: about 0.43-0.44 ms
- `dense_graph_bmssp`: about 45-47 ms
These numbers are machine-specific, but the current conclusion is stable: BMSSP has improved substantially and is worth keeping as a specialized implementation, yet it is not a general replacement for Dijkstra in this crate today.
### Deterministic Benchmarks (iai-callgrind)
For more precise and deterministic performance measurements, we use iai-callgrind, which counts CPU
instructions, cache hits/misses, and estimated cycles using Valgrind. These benchmarks are prefixed
with `iai_` and require the `iai` feature flag:

```sh
# Install valgrind first (required by iai-callgrind); e.g. on Debian/Ubuntu:
sudo apt-get install valgrind

# Run the benchmarks with the feature flag
cargo bench --features iai
```
The iai-callgrind benchmarks provide consistent results across runs and are not affected by system load, making them ideal for detecting performance regressions. They run automatically in CI for all pull requests, comparing performance against the base branch.
## Contributing
You are welcome to contribute by opening issues or submitting pull requests. Please open an issue before implementing a new feature, so we can check whether it is already in progress and whether it is a good fit for this repository.
In order to pass the continuous integration tests, your code must be formatted using the latest
`rustfmt` with the nightly Rust toolchain, and pass `cargo clippy` and pre-commit checks.
Those will run automatically when you submit a pull request. You can install pre-commit into your
checked-out version of the repository by running:

```sh
pre-commit install
```
This repository uses the conventional commits commit message style, such as:

- `feat(indexed-graphs): add IndexedGraph::from_adjacency()`
- `fix(algorithms): avoid heap churn in dijkstra`
Each commit must be self-sufficient and clean. If during inspection or code review you need to make further changes to a commit, please squash it. You may use `git rebase -i`, or more convenient tools such as `jj` or `git-branchless`, to manipulate your git commits.
If a pull request should automatically close an open issue, please include "Fix #xxx" or "Close #xxx" in the pull request cover letter.