pub struct DecodingState<'a, T: Topology, const STRIDE_Y: usize> {
pub graph: &'a StaticGraph,
pub width: usize,
pub height: usize,
pub stride_y: usize,
pub row_start_mask: u64,
pub row_end_mask: u64,
pub blocks_state: &'a mut [BlockStateHot],
pub parents: &'a mut [u32],
pub defect_mask: &'a mut [u64],
pub path_mark: &'a mut [u64],
pub block_dirty_mask: &'a mut [u64],
pub active_mask: &'a mut [u64],
pub queued_mask: &'a mut [u64],
pub active_block_mask: u64,
pub ingestion_list: &'a mut [u32],
pub ingestion_count: usize,
pub edge_bitmap: &'a mut [u64],
pub edge_dirty_list: &'a mut [u32],
pub edge_dirty_count: usize,
pub edge_dirty_mask: &'a mut [u64],
pub boundary_bitmap: &'a mut [u64],
pub boundary_dirty_list: &'a mut [u32],
pub boundary_dirty_count: usize,
pub boundary_dirty_mask: &'a mut [u64],
pub bfs_pred: &'a mut [u16],
pub bfs_queue: &'a mut [u16],
pub needs_scalar_fallback: bool,
pub scalar_fallback_mask: u64,
pub boundary_config: BoundaryConfig,
pub parent_offset: usize,
pub _marker: PhantomData<T>,
}
Main decoder state structure for Union Find QEC decoding.
This structure contains all the state needed for a single decoding instance. It is parameterized by:
- 'a - Lifetime of the backing arena memory
- T: Topology - The lattice topology (e.g., SquareGrid)
- STRIDE_Y - Compile-time Y stride for performance optimization
§Memory Layout
All slices are allocated from an Arena and are contiguous.
The decoder uses Morton (Z-order) encoding to organize nodes into 64-node
blocks for cache efficiency.
§Usage
use prav_core::{Arena, DecodingState, SquareGrid, EdgeCorrection};

let mut buffer = [0u8; 1024 * 1024];
let mut arena = Arena::new(&mut buffer);

// Create decoder with STRIDE_Y matching grid dimensions
let mut state: DecodingState<SquareGrid, 32> = DecodingState::new(&mut arena, 32, 32, 1);

// Load syndromes and decode (a 32x32 grid has 1024 nodes = 16 blocks)
let syndromes = [0u64; 16];
state.load_dense_syndromes(&syndromes);
state.grow_clusters();

let mut corrections = [EdgeCorrection::default(); 1024];
let count = state.peel_forest(&mut corrections);

§Thread Safety
DecodingState is not thread-safe. For parallel decoding, create one
instance per thread with separate arenas.
§Fields
graph: &'a StaticGraph - Reference to static graph metadata (dimensions, strides).
width: usize - Grid width in nodes.
height: usize - Grid height in nodes.
stride_y: usize - Y stride for coordinate calculations (power of 2 for fast division).
row_start_mask: u64 - Bitmask identifying nodes at the start of their row within a block.
row_end_mask: u64 - Bitmask identifying nodes at the end of their row within a block.
blocks_state: &'a mut [BlockStateHot] - Block state for each 64-node block (boundary, occupied, masks, cached root).
parents: &'a mut [u32] - Union Find parent pointers. parents[i] == i means node i is a root.
defect_mask: &'a mut [u64] - Bitmask of defect (syndrome) nodes per block.
path_mark: &'a mut [u64] - Path marking bitmask used during peeling.
block_dirty_mask: &'a mut [u64] - Bitmask tracking which blocks have been modified (for sparse reset).
active_mask: &'a mut [u64] - Bitmask of currently active blocks for this growth iteration.
queued_mask: &'a mut [u64] - Bitmask of blocks queued for the next growth iteration.
active_block_mask: u64 - Fast-path active mask for small grids (<=64 blocks).
ingestion_list: &'a mut [u32] - List of block indices with syndromes to process.
ingestion_count: usize - Number of valid entries in ingestion_list.
edge_bitmap: &'a mut [u64] - Bitmask of edges to include in corrections (XOR-accumulated).
edge_dirty_list: &'a mut [u32] - List of edge bitmap word indices that have been modified.
edge_dirty_count: usize - Number of valid entries in edge_dirty_list.
edge_dirty_mask: &'a mut [u64] - Bitmask tracking which edge_bitmap words are dirty.
boundary_bitmap: &'a mut [u64] - Bitmask of boundary corrections per block.
boundary_dirty_list: &'a mut [u32] - List of block indices with boundary corrections.
boundary_dirty_count: usize - Number of valid entries in boundary_dirty_list.
boundary_dirty_mask: &'a mut [u64] - Bitmask tracking which boundary_bitmap entries are dirty.
bfs_pred: &'a mut [u16] - BFS predecessor array for path tracing.
bfs_queue: &'a mut [u16] - BFS queue for path tracing.
needs_scalar_fallback: bool - Flag indicating scalar fallback is needed for some blocks.
scalar_fallback_mask: u64 - Bitmask of blocks requiring scalar processing.
boundary_config: BoundaryConfig - Configuration for boundary matching behavior.
parent_offset: usize - Offset for parent array (used in some optimizations).
_marker: PhantomData<T> - Phantom marker for the topology type parameter.
§Implementations
impl<'a, T: Topology, const STRIDE_Y: usize> DecodingState<'a, T, STRIDE_Y>
pub unsafe fn process_block_small_stride_32<const SILENT: bool>(
blk_idx: usize,
parents: &mut [u32],
blocks_state: &mut [BlockStateHot],
defect_mask: &mut [u64],
block_dirty_mask: &mut [u64],
queued_mask: &mut [u64],
is_small_grid: bool,
block_offset: usize,
) -> bool
§Safety
Caller must ensure:
- blk_idx is within bounds of blocks_state
- parents, blocks_state, defect_mask, block_dirty_mask, queued_mask are valid
- block_offset + blk_idx corresponds to valid parent array indices
impl<'a, T: Topology, const STRIDE_Y: usize> DecodingState<'a, T, STRIDE_Y>
pub unsafe fn process_all_blocks_stride_32_unrolled_16(&mut self) -> (bool, u64)
Unrolled processing for exactly 16 blocks (1024 nodes) with STRIDE_Y=32. Returns (any_expanded, next_active_mask).
§Safety
Caller must ensure the decoder state is properly initialized with exactly 16 blocks.
pub unsafe fn process_all_blocks_stride_64(&mut self) -> (bool, u64)
Optimized processing for stride-64 grids (64x64). Returns (any_expanded, next_active_mask).
§Safety
Caller must ensure the decoder state is properly initialized for stride-64 processing.
impl<'a, T: Topology, const STRIDE_Y: usize> DecodingState<'a, T, STRIDE_Y>
pub fn new(
arena: &mut Arena<'a>,
width: usize,
height: usize,
depth: usize,
) -> Self
Creates a new decoder state for the given grid dimensions.
Allocates all necessary data structures from the provided arena. The decoder is initialized and ready for use after construction.
§Arguments
- arena - Arena allocator to use for all internal allocations.
- width - Grid width in nodes.
- height - Grid height in nodes.
- depth - Grid depth (1 for 2D codes, >1 for 3D codes).
§Panics
Panics if STRIDE_Y doesn’t match the calculated stride for the given
dimensions. The stride is max(width, height, depth).next_power_of_two().
§Memory Requirements
The arena must have sufficient space for:
- num_nodes * 4 bytes for parent array
- num_blocks * 64 bytes for block states
- Additional space for masks, bitmaps, and queues
A safe estimate is num_nodes * 20 bytes. Use required_buffer_size
for exact calculation.
pub fn initialize_internal(&mut self)
Resets all internal state for a new decoding cycle.
This is a full reset that clears all dynamic state while preserving the grid topology (valid_mask). Call this before loading new syndromes when you want to completely reset the decoder.
§Performance Note
For repeated decoding with sparse syndromes, prefer sparse_reset
which only resets modified blocks (O(modified) vs O(n)).
pub fn load_erasures(&mut self, erasures: &[u64])
Loads erasure information indicating which qubits were lost.
Erasures represent qubits that could not be measured (e.g., photon loss). The decoder excludes erased qubits from cluster growth.
§Arguments
- erasures - Dense bit-array where bit i in erasures[blk] indicates node (blk * 64 + i) is erased.
§Effect
Updates effective_mask = valid_mask & !erasure_mask for each block,
which controls which nodes participate in cluster growth.
pub fn mark_block_dirty(&mut self, blk_idx: usize)
Marks a block as modified for sparse reset tracking.
Called internally when a block’s state changes. Marked blocks will be
reset during the next sparse_reset call.
pub fn is_small_grid(&self) -> bool
Checks if this grid qualifies for small-grid optimizations.
Small grids (<=64 blocks, i.e., <=4096 nodes) use a single u64 bitmask
for active block tracking, enabling faster iteration.
§Returns
true if the grid has at most 65 blocks (64 data + 1 boundary sentinel).
pub fn push_next(&mut self, blk_idx: usize)
Queues a block for processing in the next growth iteration.
Sets the corresponding bit in queued_mask. The block will be processed
when active_mask and queued_mask are swapped at the end of the current
iteration.
pub fn sparse_reset(&mut self)
Resets only the blocks that were modified, enabling efficient reuse.
At typical error rates (p=0.001-0.06), only a small fraction of blocks are modified during decoding. This method resets only those blocks, achieving O(modified) complexity instead of O(total).
§When to Use
Call this between decoding cycles when:
- Error rate is low (most blocks unmodified)
- You want to minimize reset overhead
For high error rates or when topology changes, use
initialize_internal instead.
§What Gets Reset
For each dirty block:
- boundary, occupied, root, root_rank in BlockStateHot
- defect_mask entry
- Parent pointers for all 64 nodes in the block
pub fn reset_for_next_cycle(&mut self)
Resets state for the next decoding cycle (sparse reset).
This efficiently resets only the blocks that were modified during
the previous decoding cycle. At typical error rates (p < 0.1),
this is significantly faster than full_reset.
Use this method between consecutive decoding cycles when the grid topology remains unchanged.
§Complexity
O(modified_blocks) where modified_blocks << total_blocks for typical error rates.
§Example
for cycle in 0..1000 {
decoder.reset_for_next_cycle();
decoder.load_dense_syndromes(&syndromes[cycle]);
let count = decoder.decode(&mut corrections);
}

pub fn full_reset(&mut self)
Fully resets all decoder state.
This performs a complete reset of all internal data structures, suitable for when:
- Starting fresh with a completely new problem
- The grid topology has changed
- You want guaranteed clean state
For repeated decoding with similar syndrome patterns, prefer
reset_for_next_cycle which is faster.
§Complexity
O(n) where n is the total number of nodes.
impl<'a, T: Topology, const STRIDE_Y: usize> DecodingState<'a, T, STRIDE_Y>
pub fn find(&mut self, i: u32) -> u32
Finds the cluster root for node i. See UnionFind::find.
pub unsafe fn union(&mut self, u: u32, v: u32) -> bool
Merges clusters containing nodes u and v.
See UnionFind::union for details.
§Safety
Caller must ensure u and v are valid node indices within the parents array.
pub fn load_dense_syndromes(&mut self, syndromes: &[u64])
Loads syndrome measurements. See ClusterGrowth::load_dense_syndromes.
pub fn grow_clusters(&mut self)
Expands clusters until convergence. See ClusterGrowth::grow_clusters.
pub unsafe fn process_block(&mut self, blk_idx: usize) -> bool
Processes a single block during growth.
See ClusterGrowth::process_block for details.
§Safety
Caller must ensure blk_idx is within bounds of the internal blocks state.
pub fn decode(&mut self, corrections: &mut [EdgeCorrection]) -> usize
Full decode: growth + peeling. See Peeling::decode.
pub fn peel_forest(&mut self, corrections: &mut [EdgeCorrection]) -> usize
Extracts corrections from grown clusters. See Peeling::peel_forest.
§Trait Implementations
impl<'a, T: Topology, const STRIDE_Y: usize> ClusterGrowth for DecodingState<'a, T, STRIDE_Y>
fn load_dense_syndromes(&mut self, syndromes: &[u64])
fn grow_clusters(&mut self)
fn grow_iteration(&mut self) -> bool
impl<'a, T: Topology, const STRIDE_Y: usize> Peeling for DecodingState<'a, T, STRIDE_Y>
fn decode(&mut self, corrections: &mut [EdgeCorrection]) -> usize
fn peel_forest(&mut self, corrections: &mut [EdgeCorrection]) -> usize
fn reconstruct_corrections(
&mut self,
corrections: &mut [EdgeCorrection],
) -> usize
fn trace_path(&mut self, u: u32, _boundary_node: u32)
Traces the path from node u toward the boundary through parent pointers.