pub struct Grid<const W: usize, const H: usize>(/* private fields */);
A 2D matrix of 64-bit signed integers used as the core data structure in WHY2 encryption.
The Grid represents either input data or a key, formatted into rows and columns of i64 cells.
All transformations—round mixing, key scheduling, and nonlinear diffusion—operate directly on this structure.
Grids are flexible and can be transformed in-place. This abstraction allows WHY2 to generalize encryption over variable-sized blocks of dimension $W \times H$.
§Grid Size Consistency
WHY2 requires that the same grid dimensions ($W \times H$) be used consistently throughout encryption and decryption. Mixing grid sizes within a single session or across rounds is unsupported and may lead to incorrect results or undefined behavior.
Implementations§
impl<const W: usize, const H: usize> Grid<W, H>
Implementation of core Grid operations for fixed-size grids.
This block defines methods for Grid<W, H>, where W and H are compile-time constants representing the grid's width and height. All transformations — such as ARX mixing, key application, and round-based encryption — operate on grids of this fixed shape.
§Type Parameters
- W: Number of columns (width); must be a compile-time constant.
- H: Number of rows (height); must be a compile-time constant.
§Notes
- Grid dimensions must remain consistent across encryption and decryption.
pub fn from_key(vec: Zeroizing<Vec<i64>>) -> Result<Self, GridError>
Initializes a key Grid from a vector of signed 64-bit integers.
Each cell is built from two key parts using nonlinear mixing: addition, XOR, and rotation. This improves diffusion and avoids simple linear patterns in the key.
§Algorithm
For each cell index $i$, the key parts $A$ and $B$ are derived from the input vector $V$:
$$ A = (V_i + V_{i + \text{Area}}) \lll (i \bmod 64) $$
$$ B = (V_i \oplus V_{i + \text{Area}}) \ggg (i \bmod 64) $$
where $\lll$ and $\ggg$ denote left and right rotation respectively.
The final grid value is computed as: $$ Grid_{x,y} = A \oplus B \oplus i $$
§Parameters
vec: A vector of signed 64-bit integers representing the raw key.
§Returns
- Ok(Grid) with mixed key values if dimensions are valid.
- Err(GridError) if the grid area is too small.
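The per-cell rule above can be sketched as a free function. This is a minimal illustration, not the crate's internals; `mix_cell` and its parameters are hypothetical names, and `Area` is taken as $W \times H$ per the formulas.

```rust
// Sketch of the per-cell key-mixing rule: A and B are derived from two
// key parts, then combined with the cell index.
fn mix_cell(v: &[i64], i: usize, area: usize) -> i64 {
    let r = (i % 64) as u32;
    // A = (V_i + V_{i+Area}) rotated left by (i mod 64)
    let a = v[i].wrapping_add(v[i + area]).rotate_left(r);
    // B = (V_i XOR V_{i+Area}) rotated right by (i mod 64)
    let b = (v[i] ^ v[i + area]).rotate_right(r);
    // Grid_{x,y} = A XOR B XOR i
    a ^ b ^ (i as i64)
}
```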
pub fn from_bytes(bytes: &[u8]) -> Result<Vec<Self>, GridError>
Initializes one or more Grids from a slice of unsigned 8-bit integers.
This function constructs Grids by chunking the input bytes into i64 cells. It expects the input length to be a multiple of $W \times H \times 8$ bytes and returns an error otherwise.
§Parameters
bytes: A byte slice (&[u8]) containing the raw data.
§Returns
- Ok(Vec<Grid>) if the byte length matches the expected grid size.
- Err(GridError) if the input length is not divisible by the matrix size.
§Notes
- No transformation is applied
- Use this for raw Grid construction, not for secure key loading
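The chunking step can be sketched as follows. Little-endian byte order is an assumption here (consistent with the grid's little-endian treatment in `increment`), not something this page states, and `cells_from_bytes` is an illustrative name.

```rust
// Sketch of the raw chunking step: pack each 8-byte group into one i64
// cell, rejecting inputs that are not a whole number of cells.
fn cells_from_bytes(bytes: &[u8]) -> Result<Vec<i64>, &'static str> {
    if bytes.is_empty() || bytes.len() % 8 != 0 {
        return Err("input length must be a non-zero multiple of 8");
    }
    Ok(bytes
        .chunks_exact(8)
        // Endianness is an assumption; the crate may differ.
        .map(|c| i64::from_le_bytes(c.try_into().unwrap()))
        .collect())
}
```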
pub fn iter_mut(&mut self) -> IterMut<'_, [i64; W]>
Returns a mutable iterator over rows in the Grid
pub fn subcell(&mut self, round: usize)
Applies nonlinear ARX-style mixing to each cell in the grid.
This transformation introduces symmetric diffusion by modifying each i64 cell
using a combination of addition, rotation, and XOR operations. The process is
round-dependent and designed to obscure bit patterns across the Grid.
§Parameters
round: A round index used to tweak the transformation logic.
§Behavior
Each 64-bit cell is split into two 32-bit halves $v_0, v_1$.
For SUBCELL_ROUNDS iterations, the Feistel-like network applies:
$$ v_0 \leftarrow v_0 + (((v_1 \ll 4) \oplus (v_1 \gg 5)) + v_1) \oplus \text{sum} $$
$$ v_1 \leftarrow v_1 + (((v_0 \ll 4) \oplus (v_0 \gg 5)) + v_0) \oplus \text{sum} $$
where $\text{sum}$ is incremented by a constant $\delta = $ SUBCELL_DELTA in each round:
$$ \text{sum} \leftarrow \text{sum} + \delta $$
§Notes
- This method mutates the grid in-place.
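The Feistel-like rounds above can be sketched on a single cell, with an inverse to show the construction is reversible. The round counter, delta value, and the omission of the `round` tweak are assumptions made for clarity; the crate's actual constants and tweak logic are not shown on this page.

```rust
// Illustrative constants, not the crate's (the delta is TEA's well-known
// magic number, used here only as a plausible placeholder).
const SUBCELL_ROUNDS: u32 = 8;
const SUBCELL_DELTA: u32 = 0x9E37_79B9;

// Mix one i64 cell: split into 32-bit halves, run the ARX rounds.
fn subcell_mix(cell: i64) -> i64 {
    let (mut v0, mut v1) = ((cell as u64 >> 32) as u32, cell as u32);
    let mut sum: u32 = 0;
    for _ in 0..SUBCELL_ROUNDS {
        sum = sum.wrapping_add(SUBCELL_DELTA);
        v0 = v0.wrapping_add((((v1 << 4) ^ (v1 >> 5)).wrapping_add(v1)) ^ sum);
        v1 = v1.wrapping_add((((v0 << 4) ^ (v0 >> 5)).wrapping_add(v0)) ^ sum);
    }
    (((v0 as u64) << 32) | v1 as u64) as i64
}

// Undo the rounds in reverse order, demonstrating reversibility.
fn subcell_unmix(cell: i64) -> i64 {
    let (mut v0, mut v1) = ((cell as u64 >> 32) as u32, cell as u32);
    let mut sum = SUBCELL_DELTA.wrapping_mul(SUBCELL_ROUNDS);
    for _ in 0..SUBCELL_ROUNDS {
        v1 = v1.wrapping_sub((((v0 << 4) ^ (v0 >> 5)).wrapping_add(v0)) ^ sum);
        v0 = v0.wrapping_sub((((v1 << 4) ^ (v1 >> 5)).wrapping_add(v1)) ^ sum);
        sum = sum.wrapping_sub(SUBCELL_DELTA);
    }
    (((v0 as u64) << 32) | v1 as u64) as i64
}
```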
pub fn shift_rows(&mut self, key_grid: &Grid<W, H>)
Applies row-wise shifting to the Grid based on a key Grid.
This transformation rotates each row of the Grid by a variable amount derived from
the corresponding row in key_grid. The shift amount $S_i$ for row $i$ is computed as:
$$ S_i = \left( \bigoplus_{j=0}^{W-1} K_{i,j} \right) \bmod W $$
§Behavior
- Each row is rotated left by $S_i$.
§Notes
- This method mutates the grid in-place.
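A sketch of the key-dependent rotation, operating on plain 2D arrays rather than the crate's Grid type. Casting the XOR-fold to u64 before the modulo (to keep the shift amount non-negative for negative i64 values) is an assumption.

```rust
// For each row i, shift amount S_i = (XOR of key row i) mod W,
// then rotate the data row left by S_i.
fn shift_rows<const W: usize, const H: usize>(
    grid: &mut [[i64; W]; H],
    key_grid: &[[i64; W]; H],
) {
    for (row, key_row) in grid.iter_mut().zip(key_grid.iter()) {
        let s = (key_row.iter().fold(0i64, |acc, k| acc ^ k) as u64 % W as u64) as usize;
        row.rotate_left(s);
    }
}
```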
pub fn mix_columns(&mut self)
Applies column-wise mixing to the grid using linear XOR diffusion.
This transformation modifies each column by XORing it with its adjacent column, introducing horizontal diffusion across the grid.
§Behavior
For each column $c \in \{0, \dots, W-1\}$, compute: $$ G_{r, c} \leftarrow G_{r, c} \oplus G_{r, (c + 1) \bmod W} $$
§Notes
- This method mutates the grid in-place.
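A sketch of the column rule. Processing columns left to right means the final column XORs against the already-updated column 0; that sequential in-place ordering is an assumption about the crate, made here because it keeps the step well-defined when inverted during decryption.

```rust
// XOR each column with its right-hand neighbor (wrapping around).
fn mix_columns<const W: usize, const H: usize>(grid: &mut [[i64; W]; H]) {
    for c in 0..W {
        for r in 0..H {
            let neighbor = grid[r][(c + 1) % W];
            grid[r][c] ^= neighbor;
        }
    }
}
```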
pub fn mix_matrix(&mut self, key_grid: &Grid<W, H>)
Applies a matrix-based affine transformation to mix rows.
This function treats the Grid as a matrix and multiplies it by a key-dependent transformation
matrix, while adding a deterministic noise term. This converts the transformation from
purely linear ($Ax$) to affine:
$$ G' = (L \cdot U) \cdot G + \text{noise} $$
To ensure the operation is reversible (invertible) in modular arithmetic, the transformation is constructed as a product of a Lower triangular matrix ($L$) and an Upper triangular matrix ($U$).
§Behavior
- Lower Pass ($L$): Each row adds a multiple of previous rows ($i > j$).
- Upper Pass ($U$): Each row adds a multiple of following rows ($i < j$).
§Notes
- This method mutates the Grid in-place.
- All additions and multiplications are wrapping (modulo $2^{64}$).
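The L·U idea can be sketched with two unit-triangular passes, which are invertible modulo $2^{64}$ regardless of the multipliers. The multiplier derivation (`key[i][j % W]`) and the omission of the noise term are simplifying assumptions; the crate's actual scheme is not shown on this page.

```rust
// Forward transform: lower pass (each row adds a key-derived multiple of
// earlier rows), then upper pass (of later rows). All arithmetic wraps.
fn mix_matrix<const W: usize, const H: usize>(g: &mut [[i64; W]; H], key: &[[i64; W]; H]) {
    for i in 1..H {
        for j in 0..i {
            let m = key[i][j % W];
            for c in 0..W {
                let t = g[j][c].wrapping_mul(m);
                g[i][c] = g[i][c].wrapping_add(t);
            }
        }
    }
    for i in (0..H).rev() {
        for j in (i + 1)..H {
            let m = key[i][j % W];
            for c in 0..W {
                let t = g[j][c].wrapping_mul(m);
                g[i][c] = g[i][c].wrapping_add(t);
            }
        }
    }
}

// Inverse: undo U (ascending rows), then undo L (descending rows),
// subtracting instead of adding.
fn unmix_matrix<const W: usize, const H: usize>(g: &mut [[i64; W]; H], key: &[[i64; W]; H]) {
    for i in 0..H {
        for j in (i + 1)..H {
            let m = key[i][j % W];
            for c in 0..W {
                let t = g[j][c].wrapping_mul(m);
                g[i][c] = g[i][c].wrapping_sub(t);
            }
        }
    }
    for i in (1..H).rev() {
        for j in 0..i {
            let m = key[i][j % W];
            for c in 0..W {
                let t = g[j][c].wrapping_mul(m);
                g[i][c] = g[i][c].wrapping_sub(t);
            }
        }
    }
}
```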
pub fn mix_diagonals(&mut self)
Applies diagonal-wise mixing to the grid using XOR diffusion.
This transformation modifies each diagonal line by XORing each element with the next element along that diagonal.
§Behavior
- Processes all diagonals parallel to the main diagonal.
- For each cell $(r, c)$, compute: $$ G_{r,c} \leftarrow G_{r,c} \oplus G_{r+1, c+1} $$
§Notes
- This method mutates the grid in-place.
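A sketch of the diagonal rule. Iterating top-to-bottom means each cell XORs a not-yet-modified down-right neighbor; cells on the last row or column have no such neighbor and are left unchanged, which is an assumption about the boundary behavior.

```rust
// XOR each cell with its down-right neighbor along the diagonals.
fn mix_diagonals<const W: usize, const H: usize>(grid: &mut [[i64; W]; H]) {
    for r in 0..H.saturating_sub(1) {
        for c in 0..W.saturating_sub(1) {
            let t = grid[r + 1][c + 1];
            grid[r][c] ^= t;
        }
    }
}
```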
pub fn increment(&mut self, amount: &mut u64)
Increments the Grid value by a specified amount, treating it as a large Little-Endian integer.
This method performs modular addition of a 64-bit value to the multi-precision integer represented by the grid:
$$ G \leftarrow (G + \text{amount}) \bmod 2^{64 \times W \times H} $$
§Parameters
amount: The unsigned 64-bit value to add to the grid.
- Pass 1 for a standard sequential counter increment.
- Pass a block index $i$ (offset) when initializing parallel CTR counters.
§Behavior
- The Grid is treated as a single large integer in Little-Endian format (the cell at [0][0] is the least significant limb).
- The amount is added to the first cell, and any resulting carry is propagated sequentially through the remaining cells.
- If the entire grid overflows, the value wraps around modulo $2^{64 \times W \times H}$.
§Security
- When the
constant-timefeature is enabled, this function always iterates through the entire grid to prevent timing leaks via carry propagation analysis.
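The carry propagation can be sketched as below. This is the variable-time version with an early exit; under the constant-time feature the crate walks every cell regardless. Taking `amount` by value is a simplification — the real signature takes `&mut u64`, whose exact contract is not documented here.

```rust
// Add `amount` to the grid viewed as a little-endian multi-limb integer,
// propagating the carry cell by cell.
fn increment<const W: usize, const H: usize>(grid: &mut [[i64; W]; H], amount: u64) {
    let mut carry = amount;
    for row in grid.iter_mut() {
        for cell in row.iter_mut() {
            // Add the pending carry to this limb; overflow becomes the next carry.
            let (sum, overflow) = (*cell as u64).overflowing_add(carry);
            *cell = sum as i64;
            carry = overflow as u64;
            if carry == 0 {
                return; // variable-time early exit (timing leak; see §Security)
            }
        }
    }
    // Falling off the end means the whole grid wrapped modulo 2^(64*W*H).
}
```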
Trait Implementations§
impl<const W: usize, const H: usize> BitXorAssign<&Grid<W, H>> for Grid<W, H>
fn bitxor_assign(&mut self, rhs: &Grid<W, H>)
Performs the ^= operation.
impl<const W: usize, const H: usize> ConstantTimeEq for Grid<W, H>
Available on crate feature constant-time only.
Auto Trait Implementations§
impl<const W: usize, const H: usize> Freeze for Grid<W, H>
impl<const W: usize, const H: usize> RefUnwindSafe for Grid<W, H>
impl<const W: usize, const H: usize> Send for Grid<W, H>
impl<const W: usize, const H: usize> Sync for Grid<W, H>
impl<const W: usize, const H: usize> Unpin for Grid<W, H>
impl<const W: usize, const H: usize> UnwindSafe for Grid<W, H>
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.