pub struct ChunkTreeCache { /* private fields */ }
Cache of chunk tree mappings for resolving logical to physical addresses.
Keyed by logical start address. Uses a BTreeMap for efficient range lookups.
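A minimal usage sketch, assuming the cache can be created empty (e.g. via Default) and that ChunkMapping carries a logical start, a length, a stripe length, a profile, and per-stripe (devid, offset) pairs; the field names below are illustrative, not necessarily this crate's:

let mut cache = ChunkTreeCache::default();   // assumed empty constructor
cache.insert(ChunkMapping {
    logical: 0x100_0000,                     // chunk's logical start
    length: 0x800_0000,                      // chunk size in bytes
    stripe_len: 0x1_0000,                    // 64 KiB stripes
    profile: RaidProfile::Single,            // hypothetical profile enum
    stripes: vec![Stripe { devid: 1, offset: 0x50_0000 }],
});
// Logical 0x100_4000 is 0x4000 bytes into the chunk, so it lands at
// physical 0x50_4000 on device 1.
assert_eq!(cache.resolve(0x100_4000), Some((1, 0x50_4000)));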
Implementations
impl ChunkTreeCache
pub fn insert(&mut self, mapping: ChunkMapping)
Insert a chunk mapping into the cache.
pub fn lookup(&self, logical: u64) -> Option<&ChunkMapping>
Look up the chunk mapping that contains the given logical address.
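A sketch of the range-lookup idea this method relies on (not the crate's actual code): with mappings keyed by logical start in a BTreeMap, the only candidate is the greatest key at or below the address, and it matches only if the address falls inside that chunk's length. The logical and length field names are assumptions.

use std::collections::BTreeMap;

fn lookup_sketch(map: &BTreeMap<u64, ChunkMapping>, logical: u64) -> Option<&ChunkMapping> {
    // The greatest start address <= logical is the only possible container.
    let (_, m) = map.range(..=logical).next_back()?;
    // A mapping covers [m.logical, m.logical + m.length).
    (logical < m.logical + m.length).then_some(m)
}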
pub fn resolve(&self, logical: u64) -> Option<(u64, u64)>
Resolve a logical address to (devid, physical) for the first stripe.
For read-only access the first stripe is sufficient on SINGLE, DUP,
and any mirroring profile. RAID0/5/6/10 striping would need stripe
index calculation, but for tree blocks (always nodesize <= stripe_len)
the whole block lives in one stripe slot, so this works for the
common case.
Callers using a multi-device BlockReader look up the device handle
by devid; single-device callers ignore it.
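The first-stripe translation amounts to adding the byte offset within the chunk to the first stripe's device offset; a sketch under the same assumed field names:

fn resolve_sketch(m: &ChunkMapping, logical: u64) -> (u64, u64) {
    let delta = logical - m.logical;   // byte offset within the chunk
    let s = &m.stripes[0];             // first stripe only
    (s.devid, s.offset + delta)        // (devid, physical)
}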
pub fn resolve_all(&self, logical: u64) -> Option<Vec<(u64, u64)>>
Resolve a logical address to (devid, physical) for every stripe.
For DUP, RAID1, RAID1C3, and RAID1C4, a single logical address maps to multiple physical copies. Write operations must update all copies to maintain consistency.
Use plan_write for actual write routing.
resolve_all ignores the chunk’s RAID profile and assumes every
stripe should receive the same bytes; that is correct for DUP /
RAID1* but wrong for RAID0 (each row goes to one device only) and
RAID10 (each row goes to one mirror pair, not all pairs). Kept
for diagnostics and read-only callers that only need a list of
stripe locations.
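A hedged example of the read-only use it is kept for: trying each mirror copy in turn when one device fails to read. Device, read_at, and the devid-to-handle map are hypothetical, not part of this crate.

use std::collections::HashMap;
use std::io;

fn read_any_copy(
    cache: &ChunkTreeCache,
    devices: &HashMap<u64, Device>,      // hypothetical devid -> handle map
    logical: u64,
    buf: &mut [u8],
) -> io::Result<()> {
    let copies = cache
        .resolve_all(logical)
        .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "unmapped logical address"))?;
    let mut last_err = io::Error::new(io::ErrorKind::NotFound, "no readable copy");
    for (devid, physical) in copies {
        if let Some(dev) = devices.get(&devid) {
            match dev.read_at(buf, physical) {   // hypothetical read_at
                Ok(()) => return Ok(()),
                Err(e) => last_err = e,
            }
        }
    }
    Err(last_err)
}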
pub fn plan_write(&self, logical: u64, len: usize) -> Option<WritePlan>
Plan the per-device writes needed to land len bytes at the
logical address logical, accounting for the chunk’s RAID
profile and stripe length.
Returns a WritePlan: either the Plain variant (a flat vec of
StripePlacements) for non-parity profiles, or the Parity variant
(a ParityPlan) for RAID5/RAID6.
Per-profile fan-out for a single row of a non-parity profile:
- SINGLE: one placement (column 0).
- DUP / RAID1 / RAID1C3 / RAID1C4: num_stripes placements (every stripe gets the same bytes).
- RAID0: one placement (column = stripe_nr % num_stripes).
- RAID10: sub_stripes placements (the mirror pair for the row).
For RAID5/RAID6 the plan instead names every data column slot of every touched physical row plus the rotating parity column(s); the caller must run a parity executor that prereads the data slots, mixes in the caller's bytes, computes parity, then writes data and parity to the devices.
Buffers larger than stripe_len - stripe_offset span multiple
rows; each row’s placements are appended in order.
Returns None if logical is unmapped or if logical + len
exceeds the chunk.
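A sketch of how a caller might consume the returned plan. Device, write_at, and ParityExecutor are hypothetical; the tuple-variant payloads follow the description above, but the StripePlacement field names are assumptions.

use std::collections::HashMap;
use std::io;

fn submit_write(
    cache: &ChunkTreeCache,
    devices: &HashMap<u64, Device>,
    executor: &ParityExecutor,           // hypothetical RAID5/6 read-modify-write helper
    logical: u64,
    buf: &[u8],
) -> io::Result<()> {
    match cache.plan_write(logical, buf.len()) {
        Some(WritePlan::Plain(placements)) => {
            for p in placements {
                // Assumed StripePlacement fields: devid, physical, buf_range.
                let dev = &devices[&p.devid];
                dev.write_at(&buf[p.buf_range.clone()], p.physical)?;
            }
            Ok(())
        }
        Some(WritePlan::Parity(plan)) => {
            // RAID5/6: preread the touched data slots, merge the caller's
            // bytes, recompute parity, then write data and parity back out.
            executor.execute(&plan, buf)
        }
        None => Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "logical address unmapped or write exceeds the chunk",
        )),
    }
}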
pub fn plan_read(&self, logical: u64, len: usize) -> Option<Vec<StripePlacement>>
Plan the per-device reads needed to fetch len bytes from the
logical address logical. Returns exactly one placement per row
(the first stripe of each row, or the row’s data column for
RAID5/RAID6) — the caller assembles the bytes in order.
Reads on RAID5/RAID6 ignore parity columns: the data column owning each row’s bytes is read directly. Degraded reads (reconstructing a missing data column from parity) are out of scope.
Returns None if logical is unmapped or if logical + len
exceeds the chunk.
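A sketch of assembling the requested bytes in order from the returned placements. Device, read_at, and the StripePlacement fields used here (devid, physical, len) are assumptions, not this crate's confirmed API.

use std::collections::HashMap;
use std::io;

fn read_logical(
    cache: &ChunkTreeCache,
    devices: &HashMap<u64, Device>,
    logical: u64,
    len: usize,
) -> io::Result<Vec<u8>> {
    let placements = cache
        .plan_read(logical, len)
        .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "unmapped logical address"))?;
    let mut out = Vec::with_capacity(len);
    // One placement per row, already in logical order; concatenate the pieces.
    for p in placements {
        let mut piece = vec![0u8; p.len];          // assumed per-placement byte count
        devices[&p.devid].read_at(&mut piece, p.physical)?;
        out.extend_from_slice(&piece);
    }
    Ok(out)
}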
pub fn iter(&self) -> impl Iterator<Item = &ChunkMapping>
Iterate over all chunk mappings in logical address order.
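For example, a quick diagnostic dump of the cached mappings (field names assumed, as above):

for m in cache.iter() {
    // Prints each chunk's logical range in address order.
    println!("chunk {:#x}..{:#x}", m.logical, m.logical + m.length);
}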