# KeyPaths in Rust
Key paths provide a safe, composable way to access and modify nested data in Rust. Inspired by Swift's KeyPath system and functional lenses, this feature-rich crate lets you work with struct fields and enum variants as first-class values.
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
rust-key-paths = "2.9.8"
key-paths-derive = "2.6.2"
```
## Basic usage

```rust
use std::sync::Arc;
use rust_key_paths::Kp;
```
## Composing keypaths

Chain through nested structures with `then()`:

```rust
let street_kp = Person::address().then(Address::street());
let street = street_kp.get(&person); // Option<&String>
```
## Partial and Any keypaths

Use `#[derive(Pkp, Akp)]` (requires `Kp`) to get type-erased keypath collections:

- `PKp`: `partial_kps()` returns `Vec<PKp<Self>>`; the value type is erased, the root type is known.
- `AKp`: `any_kps()` returns `Vec<AKp>`; both root and value types are erased, for heterogeneous collections.

Filter by `value_type_id()` / `root_type_id()` and read with `get_as()`. For writes, dispatch to the typed `Kp` (e.g. `Person::name()`) based on `TypeId`.

See the examples `pkp_akp_filter_typeid` and `pkp_akp_read_write_convert`.
## Features

| Feature | Description |
|---|---|
| `parking_lot` | Use `parking_lot::Mutex` / `RwLock` instead of `std::sync` |
| `tokio` | Async lock support (`tokio::sync::Mutex`, `RwLock`) |
| `arcswap` | `arc-swap` support (`Arc<ArcSwap<T>>`, `Arc<ArcSwapOption<T>>`) via `LockKp` |
| `pin_project` | Enable `#[pin]` field support for pin-project compatibility |
## More examples

## Supported containers

The `#[derive(Kp)]` macro (from `key-paths-derive`) generates keypath accessors for these wrapper types:
| Container | Access | Notes |
|---|---|---|
| `Option<T>` | `field()` | Unwraps to inner type |
| `Box<T>` | `field()` | Derefs to inner |
| `Pin<T>`, `Pin<Box<T>>` | `field()`, `field_inner()` | Container + inner (when `T: Unpin`) |
| `Rc<T>`, `Arc<T>` | `field()` | Derefs; mutable when the reference is unique |
| `Vec<T>` | `field()`, `field_at(i)` | Container + index access |
| `HashMap<K,V>`, `BTreeMap<K,V>` | `field_at(k)` | Key-based access |
| `HashSet<T>`, `BTreeSet<T>` | `field()` | Container identity |
| `VecDeque<T>`, `LinkedList<T>`, `BinaryHeap<T>` | `field()`, `field_at(i)` | Index where applicable |
| `Result<T,E>` | `field()` | Unwraps `Ok` |
| `Cow<'_, T>` | `field()` | `as_ref` / `to_mut` |
| `Option<Cow<'_, T>>` | `field()` | Optional `Cow` unwrap |
| `std::sync::Mutex<T>`, `std::sync::RwLock<T>` | `field()` | Container (use `LockKp` for lock-through) |
| `Arc<Mutex<T>>`, `Arc<RwLock<T>>` | `field()`, `field_kp()` / `field()` as `LockKp` | Lock-through via `LockKp` |
| `Arc<arcswap::ArcSwap<T>>`, `Arc<arcswap::ArcSwapOption<T>>` | `field_kp()` / `field()` as `LockKp` | `arcswap` feature; use the `arcswap` dependency key (see below) |
| `tokio::sync::Mutex`, `tokio::sync::RwLock` | `field_async()` | Async lock-through (`tokio` feature) |
| `parking_lot::Mutex`, `parking_lot::RwLock` | `field()`, `field_lock()` | `parking_lot` feature |
Nested combinations (e.g. `Option<Box<T>>`, `Option<Vec<T>>`, `Vec<Option<T>>`) are supported.
## arcswap (optional): atomically swappable Arc

Enable the `arcswap` feature on `rust-key-paths` and add the same dependency key in your crate so generated paths resolve:

```toml
[dependencies]
rust-key-paths = { version = "2.9.8", features = ["arcswap"] }
arcswap = { package = "arc-swap", version = "1.9" }
```
When to use `ArcSwap` instead of `RwLock<Arc<T>>`: you reload or publish whole snapshots (`store` / `swap` / `rcu`) and many threads read the current snapshot most of the time. Reads use `load()` (default strategy: low-latency, lock-free snapshots) instead of contending on a reader-writer lock. Prefer a `static` or `LazyLock` holding an `ArcSwap` when a single global pointer is enough; wrap in `Arc<ArcSwap<T>>` when the swap container is created at runtime and shared across threads (the outer `Arc` is only for sharing the container; hot-path reads touch the inner atomic pointer, not the `Arc` refcount).
When not to: you need a true in-place `&mut T` through a lock for arbitrary mutation of `T` inside the guard. `ArcSwap` stores an `Arc<T>`; updates replace the pointer. Use `store` / `rcu` at the call site for writes.
Chaining: compose the full lock path on the first `LockKp` with `.then(...)` / `.then_lock(...)` (see `examples/box_keypath_arcswap.rs`). Import `ChainExt` for `Kp::then_lock`. Nested `then_lock` from the crate root can infer a `'static` root in some compositions; if you hit that, construct the inner `LockKp` from a `&` to the inner struct (same example).
## pin_project `#[pin]` fields (optional feature)

When using pin-project, mark pinned fields with `#[pin]`. The derive generates:
| `#[pin]` field type | Access | Notes |
|---|---|---|
| Plain (e.g. `i32`) | `field()`, `field_pinned()` | Pinned projection via `this.project()` |
| `Future` | `field()`, `field_pinned()`, `field_await()` | Poll through `Pin<&mut Self>` |
| `Box<dyn Future<Output = T>>` | `field()`, `field_pinned()`, `field_await()` | Same for boxed futures |
Enable the `pin_project` feature and add `#[pin_project]` to your struct.

Examples: `pin_project_example`, `pin_project_fair_race` (`FairRaceFuture` use case).
## Performance: box_keypath benchmark

Benchmark file: `benches/box_keypath_bench.rs`. Run it with `cargo bench --bench box_keypath_bench`.
### Read path (scsf -> sosf -> omse -> B -> dsf)

| Variant | Time (approx) |
|---|---|
| keypath | 996.46-997.18 ps |
| unwrap | 944.10-946.59 ps |
| `as_ref().map` | 996.31-997.39 ps |
| `?` operator | 996.33-997.24 ps |
### Write path (scsf -> sosf -> omse -> B -> dsf)

| Variant | Time (approx) |
|---|---|
| keypath | 147.44-149.09 ns |
| unwrap | 143.13-145.02 ns |
| `as_ref().map` | 141.04-142.65 ns |
| `?` operator | 141.41-150.25 ns |
These numbers are from Criterion's reported confidence ranges on this machine. In this benchmark, keypaths are very close to direct traversal for reads and only slightly slower for writes.
## Keypath size

From `examples/box_keypath.rs`, the composed keypath prints:

```text
size of kp = 0
```
So this composed kp is zero-sized (no captured runtime state).
## Performance: arcswap_keypath benchmark

Benchmark file: `benches/arcswap_keypath_bench.rs` (requires `--features arcswap`).
### Read path (scsf -> ArcSwap load -> omse -> B -> dsf)
Compared to `load_full()` plus a manual `match` on the loaded `OneMoreStruct` (which clones the inner `Arc` on every read), the composed keypath uses `load()` under the hood and stays on the snapshot for the rest of the chain.
| Variant | Time (approx, `--quick` run on one machine) |
|---|---|
| keypath_then_lock | 37.8-38.7 ns |
| load_full_manual | 115-119 ns |
Your numbers will vary by CPU and optimization level; treat this as a sanity check that keypath traversal stays in the same ballpark as a small manual load, while `load_full` + clone is heavier by design when you need an owned `Arc`.
## License
- Mozilla Public License 2.0