Skip to main content

compressed_intvec/fixed/
mod.rs

1//! A generic, compressed, and randomly accessible vector with fixed-width encoding.
2//!
3//! This module provides [`FixedVec`] and its thread-safe counterpart, [`AtomicFixedVec`],
4//! two data structures for storing sequences of integers where each element is encoded
5//! using the same number of bits. This strategy, known as fixed-width encoding,
6//! is suitable for data where values are bounded within a known range, as it
7//! allows for O(1) random access by directly calculating the memory location of any element.
8//!
9//! # Core Concepts
10//!
11//! The design is generic over four parameters: `FixedVec<T, W, E, B>`.
12//! - `T`: The element type as seen by the user (e.g., [`u32`], [`i16`]), constrained by the [`Storable`] trait.
13//! - `W`: The underlying unsigned integer type used for storage (e.g., [`u64`], [`usize`]), constrained by the [`Word`] trait.
14//! - `E`: The [`Endianness`] (e.g., [`LE`] or [`BE`]) for bit-level operations.
15//! - `B`: The backing buffer, which abstracts over ownership. This allows [`FixedVec`] to be an owned container (e.g., `Vec<W>`) or a zero-copy, borrowed view (e.g., `&[W]`).
16//!
17//! ### Immutability and Zero-Copy Views
18//! Immutable access is performed in O(1) time by calculating an element's bit-level offset.
19//! Structures like [`FixedVecSlice`](slice::FixedVecSlice) provide zero-copy views into a vector, representing
20//! a sub-region of the data without requiring data duplication or new allocations.
21//!
22//! ### Mutability via Proxy Objects
23//! Direct mutable references (`&mut T`) to individual bit-packed elements are not possible, as
24//! elements do not align with byte boundaries and may not even exist as discrete entities in memory.
25//! Instead, mutability is managed through the proxy object pattern. Methods like [`at_mut`](FixedVec::at_mut) return a temporary proxy ([`MutProxy`]) that holds a decoded copy of the value.
26//! Modifications are applied to this copy, and the value is automatically encoded and written
27//! back into the vector's bitstream when the proxy object goes out of scope (i.e., is dropped).
28//!
29//! # Core Data Structures
30//!
31//! - [`FixedVec`]: The primary implementation for single-threaded contexts.
32//! - [`AtomicFixedVec`]: A thread-safe variant for concurrent applications. It provides an
33//!   API analogous to Rust's standard atomic types ([`load`](AtomicFixedVec::load), [`store`](AtomicFixedVec::store), [`fetch_add`](AtomicFixedVec::fetch_add), etc.).
34//!   It uses lock-free atomic instructions for elements contained within a single machine word and
35//!   a fine-grained locking strategy (lock striping) for elements that span word boundaries. This
36//!   hybrid approach ensures thread safety for any `bit-width` configuration.
37//!
38//! # Main Components
39//!
40//! - [`FixedVec`] and [`AtomicFixedVec`]
41//! - [`BitWidth`]: An enum to control the bit-width selection strategy. Options include:
42//!     - [`BitWidth::Minimal`]: Selects the minimal bit-width required.
43//!     - [`BitWidth::PowerOfTwo`]: Rounds up to the nearest power of two.
44//!     - [`BitWidth::Explicit`]: Allows specifying a fixed bit-width.
45//! - **Builders**: [`FixedVecBuilder`](builder::FixedVecBuilder) and [`FixedVecFromIterBuilder`](builder::FixedVecFromIterBuilder)
46//! - **Slices**: [`FixedVecSlice`](slice::FixedVecSlice) for creating immutable or mutable views.
47//!
48//! # Examples
49//!
50//! Create a [`FixedVec`] from a slice of data. The builder will
51//! automatically determine the minimal number of bits required.
52//!
53//! ```
54//! # fn main() -> Result<(), Box<dyn std::error::Error>> {
55//! use compressed_intvec::fixed::{FixedVec, UFixedVec};
56//!
57//! // The numbers 0-7 can all be represented in 3 bits.
58//! let data: Vec<u32> = (0..8).collect();
59//!
60//! // The builder infers that `bit_width` should be 3.
61//! let vec: UFixedVec<u32> = FixedVec::builder()
62//!     .build(&data)?;
63//!
64//! assert_eq!(vec.len(), 8);
65//! assert_eq!(vec.bit_width(), 3);
66//! assert_eq!(vec.get(5), Some(5));
67//! # Ok(())
68//! # }
69//! ```
70//!
71//! ## Storing Signed Integers
72//!
73//! [`FixedVec`] can store signed integers. The underlying storage uses zig-zag encoding,
74//! which maps small negative and positive numbers to small unsigned integers.
75//!
76//! ```
77//! # fn main() -> Result<(), Box<dyn std::error::Error>> {
78//! use compressed_intvec::fixed::{FixedVec, SFixedVec};
79//!
80//! // The values range from -2 to 1. Zig-zag encoding maps these to
81//! // unsigned values, so the maximum value is 3, which
82//! // requires 2 bits.
83//! let data: &[i16] = &[-2, -1, 0, 1];
84//! let vec: SFixedVec<i16> = FixedVec::builder().build(data)?;
85//!
86//! assert_eq!(vec.bit_width(), 2);
87//! assert_eq!(vec.get(0), Some(-2));
88//! assert_eq!(vec.get(3), Some(1));
89//! # Ok(())
90//! # }
91//! ```
92//!
93//! # Implementation Notes
94//!
95//! To ensure safe and efficient memory access, [`FixedVec`] adds one padding
96//! word at the end of its storage buffer. This padding prevents out-of-bounds
97//! reads in methods like [`get_unchecked`](FixedVec::get_unchecked) and is a
98//! prerequisite for unaligned access with [`get_unaligned_unchecked`](FixedVec::get_unaligned_unchecked).
99//! The builders handle this padding automatically. When creating a [`FixedVec`] from raw parts,
100//! it is the caller's responsibility to ensure this padding is present.
101//!
102//! # Common Type Aliases
103//!
104//! To simplify usage, this crate provides several type aliases for common [`FixedVec`]
105//! configurations. In most cases, you should prefer using these aliases over the
106//! fully generic `FixedVec<T, W, E, B>` struct.
107//!
108//! ### General-Purpose Aliases
109//!
110//! These aliases use [`usize`] for the storage word, which is often the most
111//! efficient choice for the target architecture.
112//!
113//! - [`UFixedVec<T>`]: For unsigned integers (e.g., [`u8`], [`u16`], [`u32`]).
114//! - [`SFixedVec<T>`]: For signed integers (e.g., [`i8`], [`i16`], [`i32`]).
115//!
116//! ### Concrete Aliases for [`u64`]/[`i64`]
117//!
118//! These aliases are specialized for [`u64`]/[`i64`] elements stored in [`u64`] words:
119//!
120//! - [`LEFixedVec`]: [`u64`] elements, Little-Endian.
121//! - [`BEFixedVec`]: [`u64`] elements, Big-Endian.
122//! - [`LESFixedVec`]: [`i64`] elements, Little-Endian.
123//! - [`BESFixedVec`]: [`i64`] elements, Big-Endian.
124//!
125//! The [`atomic`] module provides a similar set of aliases like [`UAtomicFixedVec`]
126//! and [`SAtomicFixedVec`].
127
128// Declare and export submodules.
129#[macro_use]
130pub mod macros;
131pub mod builder;
132pub mod iter;
133pub mod iter_mut;
134#[cfg(feature = "parallel")]
135pub mod parallel;
136pub mod proxy;
137pub mod slice;
138pub mod traits;
139
140pub mod atomic;
141
142// Conditionally compile the serde module.
143#[cfg(feature = "serde")]
144mod serde;
145
146use dsi_bitstream::{
147    prelude::Endianness,
148    traits::{BE, LE},
149};
150use mem_dbg::{MemDbg, MemSize};
151use num_traits::ToPrimitive;
152use std::{error::Error as StdError, fmt, iter::FromIterator, marker::PhantomData};
153use traits::{Storable, Word};
154
155use crate::fixed::{proxy::MutProxy, traits::DefaultParams};
156
157// Re-export atomic aliases for convenience.
158pub use atomic::{AtomicFixedVec, SAtomicFixedVec, UAtomicFixedVec};
159
160// Type aliases for common `FixedVec` configurations.
161
162/// A [`FixedVec`] for unsigned integers with a `usize` word and Little-Endian layout.
163///
164/// This is a convenient alias for a common configuration. The element type `T`
165/// can be any unsigned integer that implements [`Storable`], such as `u8`, `u16`,
166/// `u32`, [`u64`], `u128`, or `usize`.
167pub type UFixedVec<T, B = Vec<usize>> = FixedVec<T, usize, LE, B>;
168
169/// A [`FixedVec`] for signed integers with a `usize` word and Little-Endian layout.
170///
171/// This alias is suitable for general-purpose use with signed types like `i8`,
172/// `i16`, `i32`, [`i64`], `i128`, or `isize`.
173pub type SFixedVec<T, B = Vec<usize>> = FixedVec<T, usize, LE, B>;
174
175// --- Concrete Aliases for [`u64`]/[`i64`] elements ---
176// These are provided for common use cases and backward compatibility.
177
178/// A [`FixedVec`] for [`u64`] elements with a [`u64`] backend and Little-Endian layout.
179pub type LEFixedVec<B = Vec<u64>> = FixedVec<u64, u64, LE, B>;
180/// A [`FixedVec`] for [`i64`] elements with a [`u64`] backend and Little-Endian layout.
181pub type LESFixedVec<B = Vec<u64>> = FixedVec<i64, u64, LE, B>;
182
183/// A [`FixedVec`] for [`u64`] elements with a [`u64`] backend and Big-Endian layout.
184pub type BEFixedVec<B = Vec<u64>> = FixedVec<u64, u64, BE, B>;
185/// A [`FixedVec`] for [`i64`] elements with a [`u64`] backend and Big-Endian layout.
186pub type BESFixedVec<B = Vec<u64>> = FixedVec<i64, u64, BE, B>;
187
188/// Specifies the strategy for determining the number of bits for each integer.
189///
190/// This enum controls how the bit width of a [`FixedVec`] is determined during
191/// its construction. The choice of strategy involves a trade-off between memory
192/// usage and random access performance.
193///
194/// # Performance Considerations
195///
196/// - **[`Minimal`](BitWidth::Minimal) vs. [`PowerOfTwo`](BitWidth::PowerOfTwo)**: While [`Minimal`](BitWidth::Minimal) provides the most compact
197///   storage, [`PowerOfTwo`](BitWidth::PowerOfTwo) can offer better performance for certain operations.
198///   When the `bit_width` is a power of two (e.g., 8, 16, 32) and aligns with
199///   word boundaries, some in-place operations like [`map_in_place`](FixedVec::map_in_place) can use a
200///   faster, word-at-a-time algorithm.
201///
202/// - **[`Explicit`](BitWidth::Explicit)**: This is the fastest strategy at construction time, as it
203///   avoids the need to iterate through the input data to find the maximum
204///   value. Use this when the required bit width is known in advance.
205#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
206pub enum BitWidth {
207    /// Use the minimum number of bits required to store the largest value.
208    ///
209    /// This strategy analyzes the input data to find the maximum value and sets
210    /// the bit width accordingly. It ensures the most compact memory representation.
211    #[default]
212    Minimal,
213
214    /// Round the bit width up to the next power of two (e.g., 8, 16, 32).
215    ///
216    /// This strategy can improve random access performance for some in-place
217    /// operations, as they can be implemented more efficiently with bit-shift operations on aligned data.
218    PowerOfTwo,
219
220    /// Use a specific number of bits.
221    ///
222    /// This strategy enforces a user-defined bit width. If any value in the
223    /// input data exceeds what can be stored in this many bits, the build
224    /// process will fail.
225    Explicit(usize),
226}
227
228/// Defines errors that can occur during [`FixedVec`] operations.
229#[derive(Debug)]
230pub enum Error {
231    /// A value in the input data is too large to be stored with the configured
232    /// bit width.
233    ValueTooLarge {
234        /// The value that could not be stored.
235        value: u128,
236        /// The index of the value in the input data.
237        index: usize,
238        /// The configured number of bits.
239        bit_width: usize,
240    },
241    /// A parameter is invalid for the requested operation.
242    ///
243    /// This typically occurs if `bit_width` is larger than the storage word size
244    /// or if a provided buffer is too small.
245    InvalidParameters(String),
246}
247
248impl fmt::Display for Error {
249    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
250        match self {
251            Error::ValueTooLarge {
252                value,
253                index,
254                bit_width,
255            } => write!(
256                f,
257                "value {value} at index {index} does not fit in {bit_width} bits"
258            ),
259            Error::InvalidParameters(s) => write!(f, "Invalid parameters: {s}"),
260        }
261    }
262}
263
264impl StdError for Error {}
265
266/// A compressed vector of integers with fixed-width encoding.
267///
268/// [`FixedVec`] stores a sequence of integers where each element is encoded using
269/// the same number of bits. This allows for O(1) random access by calculating
270/// the memory location of any element. It is suitable for data where values are
271/// bounded within a known range.
272///
273/// # Type Parameters
274///
275/// - `T`: The integer type for the elements (e.g., `u32`, `i16`). It must
276///   implement the [`Storable`] trait.
277/// - `W`: The underlying storage word (e.g., [`u64`], `usize`). It must implement
278///   the [`Word`] trait.
279/// - `E`: The [`Endianness`] (e.g., [`LE`] or [`BE`]) for bit-level operations.
280/// - `B`: The backend storage buffer, such as `Vec<W>` for an owned vector or
281///   `&[W]` for a borrowed view.
282///
283/// For common configurations, several [type aliases] are provided.
284///
285/// [type aliases]: crate::fixed#type-aliases
286#[derive(Debug, Clone, MemDbg, MemSize)]
287pub struct FixedVec<T: Storable<W>, W: Word, E: Endianness, B: AsRef<[W]> = Vec<W>> {
288    /// The underlying storage for the bit-packed data.
289    pub(crate) bits: B,
290    /// The number of bits used to encode each element.
291    pub(crate) bit_width: usize,
292    /// A bitmask with the lowest `bit_width` bits set to one.
293    pub(crate) mask: W,
294    /// The number of elements in the vector.
295    pub(crate) len: usize,
296    /// Zero-sized markers for the generic type parameters `T`, `W`, and `E`.
297    pub(crate) _phantom: PhantomData<(T, W, E)>,
298}
299
300// [`FixedVec`] builder implementation.
301impl<T, W, E> FixedVec<T, W, E, Vec<W>>
302where
303    T: Storable<W>,
304    W: Word,
305    E: Endianness,
306    dsi_bitstream::impls::BufBitWriter<E, dsi_bitstream::impls::MemWordWriterVec<W, Vec<W>>>:
307        dsi_bitstream::prelude::BitWrite<E, Error = std::convert::Infallible>,
308{
309    /// Creates a builder for constructing a [`FixedVec`] from a slice.
310    ///
311    /// The builder provides methods to customize the vector's properties, such
312    /// as the [`BitWidth`] strategy.
313    ///
314    /// # Examples
315    ///
316    /// ```
317    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
318    /// use compressed_intvec::fixed::{FixedVec, BitWidth, UFixedVec};
319    ///
320    /// let data: &[u32] = &[10, 20, 30, 40, 50];
321    /// let vec: UFixedVec<u32> = FixedVec::builder()
322    ///     .bit_width(BitWidth::Minimal)
323    ///     .build(data)?;
324    ///
325    /// assert_eq!(vec.get(1), Some(20));
326    /// # Ok(())
327    /// # }
328    /// ```
329    pub fn builder() -> builder::FixedVecBuilder<T, W, E> {
330        builder::FixedVecBuilder::new()
331    }
332
333    /// Creates a builder for constructing a [`FixedVec`] from an iterator.
334    ///
335    /// This builder is suitable for large datasets provided by an iterator,
336    /// as it processes the data in a streaming fashion.
337    ///
338    /// # Arguments
339    ///
340    /// * `iter`: An iterator that yields the integer values.
341    /// * `bit_width`: The number of bits to use for each element. This must be
342    ///   specified, as the builder cannot analyze the data in advance.
343    ///
344    /// # Examples
345    ///
346    /// ```
347    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
348    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
349    ///
350    /// let data = 0..100u32;
351    /// let vec: UFixedVec<u32> = FixedVec::from_iter_builder(data, 7)
352    ///     .build()?;
353    ///
354    /// assert_eq!(vec.len(), 100);
355    /// assert_eq!(vec.bit_width(), 7);
356    /// assert_eq!(vec.get(99), Some(99));
357    /// # Ok(())
358    /// # }
359    /// ```
360    pub fn from_iter_builder<I: IntoIterator<Item = T>>(
361        iter: I,
362        bit_width: usize,
363    ) -> builder::FixedVecFromIterBuilder<T, W, E, I> {
364        builder::FixedVecFromIterBuilder::new(iter, bit_width)
365    }
366
367    /// Creates an owned [`FixedVec`] from a slice of data.
368    ///
369    /// This method is a convenience wrapper around the builder API, using
370    /// the default bit width strategy ([`BitWidth::Minimal`]).
371    ///
372    /// # Examples
373    ///
374    /// ```rust
375    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
376    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
377    ///
378    /// let data: &[u32] = &[10, 20, 30];
379    ///
380    /// let vec: UFixedVec<u32> = FixedVec::from_slice(data)?;
381    ///
382    /// assert_eq!(vec.len(), 3);
383    /// assert_eq!(vec.bit_width(), 5); // 30 fits in 5 bits
384    /// assert_eq!(vec.get(0), Some(10));
385    /// # Ok(())
386    /// # }
387    /// ```
388    pub fn from_slice(slice: &[T]) -> Result<Self, Error> {
389        Self::builder().build(slice)
390    }
391}
392
393// Core immutable API.
394impl<T, W, E, B> FixedVec<T, W, E, B>
395where
396    T: Storable<W>,
397    W: Word,
398    E: Endianness,
399    B: AsRef<[W]>,
400{
401    /// Creates a [`FixedVec`] from its raw components.
402    ///
403    /// This constructor allows for the creation of a zero-copy view over an
404    /// existing buffer.
405    ///
406    /// # Errors
407    ///
408    /// Returns an [`Error::InvalidParameters`] if `bit_width` is larger than
409    /// the word size or if the `bits` buffer is too small to hold the specified
410    /// number of elements and the required padding.
411    ///
412    /// # Implementation Notes
413    ///
414    /// The provided `bits` buffer must contain at least one extra "padding"
415    /// word at the end. This padding is essential to prevent out-of-bounds
416    /// memory access in methods like `get_unchecked`.
417    ///
418    /// # Examples
419    ///
420    /// ```
421    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
422    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
423    ///
424    /// // 3 elements * 5 bits = 15 bits. This requires 1 data word.
425    /// // We need 1 extra word for padding.
426    /// let mut buffer = vec![0_usize; 2];
427    ///
428    /// // Manually encode data into the first word.
429    /// // 10 (01010), 20 (10100), 30 (11110)
430    /// buffer[0] = 0b11110_10100_01010;
431    ///
432    /// let vec = UFixedVec::<u32, _>::from_parts(&buffer, 3, 5)?;
433    /// assert_eq!(vec.get(0), Some(10));
434    /// assert_eq!(vec.get(1), Some(20));
435    /// assert_eq!(vec.get(2), Some(30));
436    /// # Ok(())
437    /// # }
438    /// ```
439    pub fn from_parts(bits: B, len: usize, bit_width: usize) -> Result<Self, Error> {
440        if bit_width > <W as traits::Word>::BITS {
441            return Err(Error::InvalidParameters(format!(
442                "bit_width ({}) cannot be greater than the word size ({})",
443                bit_width,
444                <W as traits::Word>::BITS
445            )));
446        }
447
448        let total_bits = len * bit_width;
449        let data_words = total_bits.div_ceil(<W as traits::Word>::BITS);
450
451        // Essential safety check: ensure the buffer is large enough for the data
452        // AND the 1 padding word required.
453        if bits.as_ref().len() < data_words + 1 {
454            return Err(Error::InvalidParameters(format!(
455                "The provided buffer is too small. It has {} words, but {} data words + 1 padding word are required.",
456                bits.as_ref().len(),
457                data_words
458            )));
459        }
460
461        Ok(unsafe { Self::new_unchecked(bits, len, bit_width) })
462    }
463
464    /// Returns the number of elements in the vector.
465    #[inline]
466    pub fn len(&self) -> usize {
467        self.len
468    }
469
470    /// Returns `true` if the vector is empty.
471    #[inline]
472    pub fn is_empty(&self) -> bool {
473        self.len == 0
474    }
475
476    /// Returns the number of bits used to encode each element.
477    #[inline]
478    pub fn bit_width(&self) -> usize {
479        self.bit_width
480    }
481
482    /// Returns a read-only slice of the underlying storage words.
483    #[inline]
484    pub fn as_limbs(&self) -> &[W] {
485        self.bits.as_ref()
486    }
487
488    /// Returns a raw pointer to the start of the underlying buffer and its
489    /// length in words.
490    ///
491    /// # Safety
492    ///
493    /// The caller must ensure that the buffer is not mutated in a way that
494    /// violates the invariants of the [`FixedVec`] while the pointer is active.
495    pub fn as_raw_parts(&self) -> (*const W, usize) {
496        let slice = self.bits.as_ref();
497        (slice.as_ptr(), slice.len())
498    }
499
500    /// Creates a [`FixedVec`] from its raw components without performing checks.
501    ///
502    /// # Safety
503    ///
504    /// The caller must ensure that the following invariants are met:
505    /// 1. The `bits` buffer must be large enough to hold `len * bit_width` bits.
506    /// 2. The `bits` buffer must have at least one extra padding word at the end
507    ///    to prevent out-of-bounds reads during access.
508    /// 3. `bit_width` must not be greater than the number of bits in `W`.
509    pub(crate) unsafe fn new_unchecked(bits: B, len: usize, bit_width: usize) -> Self {
510        let mask = if bit_width == <W as traits::Word>::BITS {
511            W::max_value()
512        } else {
513            (W::ONE << bit_width) - W::ONE
514        };
515
516        Self {
517            bits,
518            len,
519            bit_width,
520            mask,
521            _phantom: PhantomData,
522        }
523    }
524
525    /// Returns the element at the specified index, or `None` if the index is
526    /// out of bounds.
527    ///
528    /// This operation is O(1).
529    ///
530    /// # Examples
531    ///
532    /// ```
533    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
534    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
535    ///
536    /// let data: &[u32] = &[10, 20, 30];
537    /// let vec: UFixedVec<u32> = FixedVec::builder().build(data)?;
538    ///
539    /// assert_eq!(vec.get(1), Some(20));
540    /// assert_eq!(vec.get(3), None);
541    /// # Ok(())
542    /// # }
543    /// ```
544    #[inline]
545    pub fn get(&self, index: usize) -> Option<T> {
546        if index >= self.len {
547            return None;
548        }
549        Some(unsafe { self.get_unchecked(index) })
550    }
551
552    /// Returns the element at the specified index without bounds checking.
553    ///
554    /// # Safety
555    ///
556    /// Calling this method with an out-of-bounds `index` is undefined behavior.
557    /// The `index` must be less than the vector's `len`.
558    #[inline(always)]
559    pub unsafe fn get_unchecked(&self, index: usize) -> T {
560        debug_assert!(index < self.len);
561
562        let bits_per_word = <W as traits::Word>::BITS;
563        // Optimization: if bit_width matches word size, access is trivial.
564        if self.bit_width == bits_per_word {
565            let val = unsafe { *self.as_limbs().get_unchecked(index) };
566            // Apply endianness correction if needed.
567            let final_val = if E::IS_BIG { val.to_be() } else { val };
568            return <T as Storable<W>>::from_word(final_val);
569        }
570
571        // Calculate the bit position and word index for the element.
572        let bit_pos = index * self.bit_width;
573        let word_index = bit_pos / bits_per_word;
574        let bit_offset = bit_pos % bits_per_word;
575
576        let limbs = self.as_limbs();
577        let final_word: W;
578
579        // The logic is specialized for endianness to maximize performance.
580        if E::IS_LITTLE {
581            // Fast path: the element is fully contained within a single word.
582            if bit_offset + self.bit_width <= bits_per_word {
583                final_word =
584                    unsafe { (*limbs.get_unchecked(word_index) >> bit_offset) & self.mask };
585            } else {
586                // Slow path: the element spans two words.
587                // Read the low part from the first word and the high part from the next.
588                let low = unsafe { *limbs.get_unchecked(word_index) >> bit_offset };
589                let high =
590                    unsafe { *limbs.get_unchecked(word_index + 1) << (bits_per_word - bit_offset) };
591                final_word = (low | high) & self.mask;
592            }
593        } else {
594            // Big-Endian logic requires byte-swapping the words before extraction.
595            let word_hi = unsafe { (*limbs.get_unchecked(word_index)).to_be() };
596            if bit_offset + self.bit_width <= bits_per_word {
597                final_word = (word_hi << bit_offset) >> (bits_per_word - self.bit_width);
598            } else {
599                let word_lo = unsafe { (*limbs.get_unchecked(word_index + 1)).to_be() };
600                let bits_in_first = bits_per_word - bit_offset;
601                let high = word_hi << bit_offset >> (bits_per_word - bits_in_first);
602                let low = word_lo >> (bits_per_word - (self.bit_width - bits_in_first));
603                final_word = (high << (self.bit_width - bits_in_first)) | low;
604            }
605        }
606        <T as Storable<W>>::from_word(final_word)
607    }
608
609    /// Returns the element at the specified index using unaligned memory access,
610    /// or [`None`] if the index is out of bounds.
611    ///
612    /// This method attempts to use a optimized path with a single
613    /// unaligned memory read. For certain `bit_width` configurations where this
614    /// is not safe (e.g., a 63-bit value on a [`u64`] backend), it automatically
615    /// falls back to the safe, two-read implementation of [`get_unchecked`](Self::get_unchecked).
616    ///
617    /// # Note
618    ///
619    /// This method performs various checks to determine if the current configuration
620    /// is safe for a single unaligned read. This will of course add some overhead. If
621    /// your are sure that
622    ///
623    /// # Examples
624    ///
625    /// ```
626    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
627    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
628    ///
629    /// let data: &[u32] = &[10, 20, 30];
630    /// let vec: UFixedVec<u32> = FixedVec::builder().build(data)?;
631    ///
632    /// assert_eq!(vec.get_unaligned(1), Some(20));
633    /// assert_eq!(vec.get_unaligned(3), None);
634    /// # Ok(())
635    /// # }
636    /// ```
637    #[inline]
638    pub fn get_unaligned(&self, index: usize) -> Option<T> {
639        if index >= self.len {
640            return None;
641        }
642
643        // SAFETY: We have just performed the bounds check, so both
644        // `get_unchecked` and `get_unaligned_unchecked` are safe to call.
645        unsafe {
646            let bits_per_word = <W as Word>::BITS;
647            let bit_width = self.bit_width;
648
649            // Check if the current configuration is safe for a single unaligned read.
650            // This is a precise check based on the analysis of which bit_widths can
651            // cause a read to span more than W::BITS.
652            let is_safe = (bit_width <= bits_per_word.saturating_sub(6)) // e.g., <= 58 for u64
653                || (bit_width == bits_per_word.saturating_sub(4))        // e.g., == 60 for u64
654                || (bit_width == bits_per_word);
655
656            let value = if is_safe {
657                // Fast path for safe configurations.
658                self.get_unaligned_unchecked(index)
659            } else {
660                // Fallback for unsafe bit_widths (e.g., 59, 61, 62, 63 for u64).
661                self.get_unchecked(index)
662            };
663            Some(value)
664        }
665    }
666
667    /// Returns the element at `index` using unaligned memory access.
668    ///
669    /// This method can be significantly faster for random access. It performs a
670    /// single, potentially unaligned read of a [`Word`] and extracts the value.
671    ///
672    /// # Performance
673    ///
674    /// This method is generally the fastest way to perform random reads on
675    /// Little-Endian architectures, especially when the `bit_width` is not a
676    /// power of two.
677    ///
678    /// - [`get_unchecked`](Self::get_unchecked): May require reading one or two separate machine words
679    ///   and combining them with bit shifts. This is fast if the data is already
680    ///   in the CPU cache.
681    /// - [`get_unaligned_unchecked`](Self::get_unaligned_unchecked): Performs a single memory read that may
682    ///   cross word boundaries. It often results in fewer instructions and better throughput than the two
683    ///   separate reads of [`get_unchecked`](Self::get_unchecked), especially in memory-bound scenarios.
684    ///
685    ///
686    /// # Safety
687    ///
688    /// Calling this method is undefined behavior if:
689    /// - `index` is out of bounds (`index >= self.len()`).
690    /// - The `bit_width` is one for which a single unaligned read is unsafe.
691    ///   This is the case for `bit_width` values such as `59`, `61`, `62`, `63`
692    ///   on a [`u64`] backend, and analogous values for other word sizes.
693    ///
694    /// # Panics
695    ///
696    /// In debug builds, this method will panic if the safety conditions on `index`
697    /// or `bit_width` are not met.
698    ///
699    /// # Implementation Notes
700    ///
701    /// For Big-Endian systems, this method falls back to the standard
702    /// [`get_unchecked`](Self::get_unchecked) implementation, as unaligned
703    /// access logic is more complex and architecture-dependent.
704    #[inline(always)]
705    pub unsafe fn get_unaligned_unchecked(&self, index: usize) -> T {
706        debug_assert!(index < self.len);
707
708        if E::IS_LITTLE {
709            let bits_per_word = <W as Word>::BITS;
710            let bit_width = self.bit_width;
711
712            if bit_width == bits_per_word {
713                return unsafe { self.get_unchecked(index) };
714            }
715
716            // In debug builds, assert that this function is only called for `bit_width`
717            // values where a single unaligned read is guaranteed to be sufficient.
718            debug_assert!(
719                {
720                    let is_safe_contiguous = bit_width <= bits_per_word.saturating_sub(6); // e.g., <= 58 for u64
721                    let is_safe_case_60 = bit_width == bits_per_word.saturating_sub(4); // e.g., == 60 for u64
722                    is_safe_contiguous || is_safe_case_60
723                },
724                "get_unaligned_unchecked is not safe for this bit_width ({bit_width}). \
725                The value may span more than {bits_per_word} bits, making a single read insufficient. \
726                Use get_unaligned() for a safe version with an automatic fallback."
727            );
728
729            let bit_pos = index * bit_width;
730            let byte_pos = bit_pos / 8;
731            let bit_rem = bit_pos % 8;
732
733            let limbs_ptr = self.as_limbs().as_ptr() as *const u8;
734
735            // Perform an unaligned read from the calculated byte position.
736            let word: W = unsafe { (limbs_ptr.add(byte_pos) as *const W).read_unaligned() };
737            let extracted_word = word >> bit_rem;
738
739            <T as Storable<W>>::from_word(extracted_word & self.mask)
740        } else {
741            // For Big-Endian, the logic for unaligned reads is complex and
742            // architecture-dependent. Fall back to the standard `get_unchecked`.
743            unsafe { self.get_unchecked(index) }
744        }
745    }
746
747    /// Returns an iterator over the elements of the vector.
748    ///
749    /// # Examples
750    ///
751    /// ```
752    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
753    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
754    ///
755    /// let data: &[u32] = &[10, 20, 30];
756    /// let vec: UFixedVec<u32> = FixedVec::builder().build(data)?;
757    /// let iter = vec.iter();
758    ///
759    /// for (i, value) in iter.enumerate() {
760    ///     assert_eq!(Some(value), vec.get(i));
761    /// }
762    /// # Ok(())
763    /// # }
764    /// ```
765    pub fn iter(&self) -> iter::FixedVecIter<'_, T, W, E, B> {
766        iter::FixedVecIter::new(self)
767    }
768
769    /// Returns an unchecked iterator over the elements of the vector.
770    ///
771    /// # Safety
772    ///
773    /// The caller must ensure that the iterator is not advanced beyond the
774    /// vector's length.
775    pub unsafe fn iter_unchecked(&self) -> iter::FixedVecUncheckedIter<'_, T, W, E, B> {
776        iter::FixedVecUncheckedIter::new(self)
777    }
778
779    /// Creates an immutable view (slice) of a sub-region of the vector.
780    ///
781    /// Returns `None` if the specified range is out of bounds.
782    ///
783    /// # Arguments
784    /// * `start`: The starting index of the slice.
785    /// * `len`: The number of elements in the slice.
786    pub fn slice(&self, start: usize, len: usize) -> Option<slice::FixedVecSlice<&Self>> {
787        if start.saturating_add(len) > self.len {
788            return None;
789        }
790        Some(slice::FixedVecSlice::new(self, start..start + len))
791    }
792
793    /// Splits the vector into two immutable views at a given index.
794    ///
795    /// Returns `None` if `mid` is out of bounds.
796    ///
797    /// # Arguments
798    /// * `mid`: The index at which to split the vector.
799    pub fn split_at(
800        &self,
801        mid: usize,
802    ) -> Option<(slice::FixedVecSlice<&Self>, slice::FixedVecSlice<&Self>)> {
803        if mid > self.len {
804            return None;
805        }
806        let left = slice::FixedVecSlice::new(self, 0..mid);
807        let right = slice::FixedVecSlice::new(self, mid..self.len);
808        Some((left, right))
809    }
810
811    /// Returns an iterator over non-overlapping chunks of the vector.
812    ///
813    /// Each chunk is a [`FixedVecSlice`] of length `chunk_size`, except for the
814    /// last chunk, which may be shorter.
815    ///
816    /// # Panics
817    ///
818    /// Panics if `chunk_size` is 0.
819    ///
820    /// [`FixedVecSlice`]: slice::FixedVecSlice
821    pub fn chunks(&self, chunk_size: usize) -> iter::Chunks<'_, T, W, E, B> {
822        iter::Chunks::new(self, chunk_size)
823    }
824
825    /// Returns an iterator over all contiguous windows of length `size`.
826    ///
827    /// The windows overlap. If the vector is shorter than `size`, the iterator
828    /// returns no values.
829    ///
830    /// # Panics
831    ///
832    /// Panics if `size` is 0.
833    ///
834    /// # Examples
835    ///
836    /// ```
837    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
838    /// use compressed_intvec::fixed_vec;
839    ///
840    /// let vec = fixed_vec![1u32, 2, 3, 4, 5];
841    /// let mut windows = vec.windows(3);
842    ///
843    /// let slice1 = vec.slice(0, 3).ok_or("slice failed")?;
844    /// let slice2 = vec.slice(1, 3).ok_or("slice failed")?;
845    /// let slice3 = vec.slice(2, 3).ok_or("slice failed")?;
846    ///
847    /// assert_eq!(windows.next().ok_or("no window")?, slice1);
848    /// assert_eq!(windows.next().ok_or("no window")?, slice2);
849    /// assert_eq!(windows.next().ok_or("no window")?, slice3);
850    /// assert!(windows.next().is_none());
851    /// # Ok(())
852    /// # }
853    /// ```
854    pub fn windows(&self, size: usize) -> iter::Windows<'_, T, W, E, B> {
855        assert!(size != 0, "window size cannot be zero");
856        iter::Windows::new(self, size)
857    }
858
859    /// Returns a raw pointer to the storage word containing the start of an element.
860    ///
861    /// This method returns a pointer to the [`Word`] in the backing buffer where
862    /// the data for the element at `index` begins.
863    ///
864    /// Returns `None` if `index` is out of bounds.
865    ///
866    /// # Safety
867    ///
868    /// This method is safe as it only returns a raw pointer. Dereferencing the
869    /// pointer is `unsafe`. The caller must ensure that the pointer is not used
870    /// after the [`FixedVec`] is dropped or modified.
871    pub fn addr_of(&self, index: usize) -> Option<*const W> {
872        if index >= self.len {
873            return None;
874        }
875
876        let bit_pos = index * self.bit_width;
877        let word_idx = bit_pos / <W as Word>::BITS;
878
879        let limbs = self.as_limbs();
880        if word_idx < limbs.len() {
881            Some(limbs.as_ptr().wrapping_add(word_idx))
882        } else {
883            None
884        }
885    }
886
887    /// Hints to the CPU to prefetch the data for the element at `index` into cache.
888    ///
889    /// This method uses an intrinsic to reduce memory latency for predictable
890    /// access patterns. It only has an effect on architectures that support it
891    /// (e.g., x86, x86-64) and compiles to a no-op on other platforms.
892    pub fn prefetch(&self, index: usize) {
893        if index >= self.len {
894            return;
895        }
896
897        #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
898        {
899            use std::arch::x86_64::{_MM_HINT_T0, _mm_prefetch};
900
901            let bit_pos = index * self.bit_width;
902            let byte_pos = bit_pos / 8;
903
904            let limbs_ptr = self.as_limbs().as_ptr() as *const i8;
905
906            // SAFETY: Bounds check on `index` ensures `byte_pos` is within the
907            // allocated buffer (including padding). The pointer is valid.
908            unsafe {
909                // Prefetch into all cache levels (a good general-purpose default).
910                _mm_prefetch(limbs_ptr.add(byte_pos), _MM_HINT_T0);
911            }
912        }
913    }
914
915    /// Binary searches this vector for a given element.
916    ///
917    /// If the value is found, returns `Ok(usize)` with the index of the
918    /// matching element. If the value is not found, returns `Err(usize)` with
919    /// the index where the value could be inserted to maintain order.
920    ///
921    /// # Examples
922    ///
923    /// ```
924    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
925    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
926    ///
927    /// let data: &[u32] = &[10, 20, 30, 40, 50];
928    /// let vec: UFixedVec<u32> = FixedVec::builder().build(data)?;
929    ///
930    /// assert_eq!(vec.binary_search(&30), Ok(2));
931    /// assert_eq!(vec.binary_search(&35), Err(3));
932    /// # Ok(())
933    /// # }
934    /// ```
935    pub fn binary_search(&self, value: &T) -> Result<usize, usize>
936    where
937        T: Ord,
938    {
939        let mut low = 0;
940        let mut high = self.len();
941
942        while low < high {
943            let mid = low + (high - low) / 2;
944            let mid_val = unsafe { self.get_unchecked(mid) };
945
946            match mid_val.cmp(value) {
947                std::cmp::Ordering::Less => low = mid + 1,
948                std::cmp::Ordering::Equal => return Ok(mid),
949                std::cmp::Ordering::Greater => high = mid,
950            }
951        }
952        Err(low)
953    }
954
955    /// Binary searches this vector with a key extraction function.
956    ///
957    /// This method is useful when searching for a value of a different type
958    /// than the elements of the slice. The function `f` is used to extract a
959    /// key of type `K` from an element, which is then compared to `key`.
960    pub fn binary_search_by_key<K: Ord, F>(&self, key: &K, mut f: F) -> Result<usize, usize>
961    where
962        F: FnMut(T) -> K,
963    {
964        self.binary_search_by(|probe| f(*probe).cmp(key))
965    }
966
967    /// Binary searches this vector with a custom comparison function.
968    ///
969    /// The comparator function `f` should return an `Ordering` indicating
970    /// the relation of a probe element to the value being searched for.
971    ///
972    /// # Examples
973    ///
974    /// ```
975    /// use compressed_intvec::fixed_vec;
976    /// use std::cmp::Ordering;
977    ///
978    /// let vec = fixed_vec![0u32, 1, 1, 2, 3];
979    ///
980    /// // Search for a value by comparing it to the probe.
981    /// let result = vec.binary_search_by(|probe| probe.cmp(&1));
982    /// assert!(matches!(result, Ok(1) | Ok(2)));
983    ///
984    /// let result_not_found = vec.binary_search_by(|probe| probe.cmp(&4));
985    /// assert_eq!(result_not_found, Err(5));
986    /// ```
987    pub fn binary_search_by<F>(&self, mut f: F) -> Result<usize, usize>
988    where
989        F: FnMut(&T) -> std::cmp::Ordering,
990    {
991        let mut low = 0;
992        let mut high = self.len();
993
994        while low < high {
995            let mid = low + (high - low) / 2;
996            // SAFETY: The loop invariants ensure `mid` is always in bounds.
997            let mid_val = unsafe { self.get_unchecked(mid) };
998
999            match f(&mid_val) {
1000                std::cmp::Ordering::Less => low = mid + 1,
1001                std::cmp::Ordering::Equal => return Ok(mid),
1002                std::cmp::Ordering::Greater => high = mid,
1003            }
1004        }
1005        Err(low)
1006    }
1007
1008    /// Returns the index of the partition point of the vector.
1009    ///
1010    /// The vector is partitioned according to the predicate `pred`. This means
1011    /// all elements for which `pred` returns `true` are on the left of the
1012    /// partition point, and all elements for which `pred` returns `false` are
1013    /// on the right.
1014    ///
1015    /// # Examples
1016    ///
1017    /// ```
1018    /// use compressed_intvec::fixed_vec;
1019    ///
1020    /// let vec = fixed_vec![0u32, 1, 2, 3, 4, 5];
1021    ///
1022    /// // Find the partition point for elements `< 3`.
1023    /// let partition_idx = vec.partition_point(|&x| x < 3);
1024    /// assert_eq!(partition_idx, 3);
1025    /// ```
1026    pub fn partition_point<P>(&self, mut pred: P) -> usize
1027    where
1028        P: FnMut(&T) -> bool,
1029    {
1030        let mut len = self.len();
1031        let mut left = 0;
1032
1033        while len > 0 {
1034            let half = len / 2;
1035            let mid = left + half;
1036            // SAFETY: The loop invariants ensure `mid` is always in bounds.
1037            let value = unsafe { self.get_unchecked(mid) };
1038
1039            if pred(&value) {
1040                left = mid + 1;
1041                len -= half + 1;
1042            } else {
1043                len = half;
1044            }
1045        }
1046        left
1047    }
1048}
1049
1050/// Allows iterating over a borrowed [`FixedVec`] (e.g., `for val in &my_vec`).
1051impl<'a, T, W, E, B> IntoIterator for &'a FixedVec<T, W, E, B>
1052where
1053    T: Storable<W>,
1054    W: Word,
1055    E: Endianness,
1056    B: AsRef<[W]>,
1057{
1058    type Item = T;
1059    type IntoIter = iter::FixedVecIter<'a, T, W, E, B>;
1060
1061    fn into_iter(self) -> Self::IntoIter {
1062        self.iter()
1063    }
1064}
1065
1066/// Allows iterating over an owned [`FixedVec`], consuming it.
1067impl<T, W, E> IntoIterator for FixedVec<T, W, E, Vec<W>>
1068where
1069    T: Storable<W> + 'static,
1070    W: Word,
1071    E: Endianness,
1072{
1073    type Item = T;
1074    type IntoIter = iter::FixedVecIntoIter<T, W, E>;
1075
1076    /// Consumes the vector and returns an iterator over its elements.
1077    fn into_iter(self) -> Self::IntoIter {
1078        iter::FixedVecIntoIter::new(self)
1079    }
1080}
1081
1082impl<T, W, E> FromIterator<T> for FixedVec<T, W, E, Vec<W>>
1083where
1084    T: Storable<W>,
1085    W: Word,
1086    E: Endianness,
1087    dsi_bitstream::impls::BufBitWriter<E, dsi_bitstream::impls::MemWordWriterVec<W, Vec<W>>>:
1088        dsi_bitstream::prelude::BitWrite<E, Error = std::convert::Infallible>,
1089{
1090    /// Creates a [`FixedVec`] by collecting elements from an iterator.
1091    ///
1092    /// The bit width is determined automatically using the [`BitWidth::Minimal`]
1093    /// strategy. This requires collecting the iterator into a temporary `Vec<T>`
1094    /// to analyze its contents before compression.
1095    ///
1096    /// # Memory Usage
1097    ///
1098    /// Because this implementation must first collect all items into a temporary
1099    /// `Vec<T>` to determine the optimal `bit_width`, it may lead to a temporary
1100    /// peak in memory usage that is roughly double the size of the uncompressed
1101    /// data.
1102    ///
1103    /// For very large datasets where this memory overhead is a concern, it is
1104    /// recommended to use [`FixedVec::from_iter_builder`] instead. The builder
1105    /// allows for streaming construction but requires the `bit_width` to be
1106    /// specified manually.
1107    ///
1108    /// # Examples
1109    ///
1110    /// ```
1111    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
1112    ///
1113    /// let data = 0u32..100;
1114    /// let vec: UFixedVec<u32> = data.collect();
1115    ///
1116    /// assert_eq!(vec.len(), 100);
1117    /// assert_eq!(vec.get(50), Some(50));
1118    /// ```
1119    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
1120        let data: Vec<T> = iter.into_iter().collect();
1121        Self::builder().build(&data).unwrap()
1122    }
1123}
1124
1125impl<T, W, E> Default for FixedVec<T, W, E, Vec<W>>
1126where
1127    T: Storable<W>,
1128    W: Word,
1129    E: Endianness,
1130{
1131    /// Creates an empty [`FixedVec`] with a default `bit_width` of 1.
1132    fn default() -> Self {
1133        // SAFETY: An empty vector with a valid bit_width is always safe.
1134        unsafe { Self::new_unchecked(Vec::new(), 0, 1) }
1135    }
1136}
1137
1138// Methods for owned vectors (`B = Vec<W>`).
1139impl<T, W, E> FixedVec<T, W, E, Vec<W>>
1140where
1141    T: Storable<W> + ToPrimitive,
1142    W: Word,
1143    E: Endianness,
1144{
1145    /// Creates a new, empty [`FixedVec`] with a specified bit width.
1146    ///
1147    /// # Errors
1148    ///
1149    /// Returns an [`Error::InvalidParameters`] if `bit_width` is greater than
1150    /// the number of bits in the storage word `W`.
1151    ///
1152    /// # Examples
1153    ///
1154    /// ```
1155    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
1156    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
1157    ///
1158    /// let vec: UFixedVec<u32> = FixedVec::new(8)?;
1159    /// assert!(vec.is_empty());
1160    /// assert_eq!(vec.bit_width(), 8);
1161    /// # Ok(())
1162    /// # }
1163    /// ```
1164    pub fn new(bit_width: usize) -> Result<Self, Error> {
1165        if bit_width > <W as traits::Word>::BITS {
1166            return Err(Error::InvalidParameters(format!(
1167                "bit_width ({}) cannot be greater than the word size ({})",
1168                bit_width,
1169                <W as traits::Word>::BITS
1170            )));
1171        }
1172        Ok(unsafe { Self::new_unchecked(Vec::new(), 0, bit_width) })
1173    }
1174
1175    /// Appends an element to the end of the vector.
1176    ///
1177    /// This operation has amortized O(1) complexity.
1178    ///
1179    /// # Panics
1180    ///
1181    /// Panics if the `value` is too large to be represented by the configured
1182    /// `bit_width`.
1183    ///
1184    /// # Examples
1185    ///
1186    /// ```
1187    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
1188    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
1189    ///
1190    /// let mut vec: UFixedVec<u32> = FixedVec::new(4)?;
1191    /// vec.push(10);
1192    /// vec.push(15);
1193    ///
1194    /// assert_eq!(vec.len(), 2);
1195    /// assert_eq!(vec.get(1), Some(15));
1196    /// # Ok(())
1197    /// # }
1198    /// ```
1199    #[inline(always)]
1200    pub fn push(&mut self, value: T) {
1201        let value_w = <T as Storable<W>>::into_word(value);
1202
1203        // Check if the value fits within the configured bit width.
1204        if (value_w & !self.mask) != W::ZERO {
1205            panic!(
1206                "Value {:?} does not fit in the configured bit_width of {}",
1207                value_w, self.bit_width
1208            );
1209        }
1210
1211        let bits_per_word = <W as traits::Word>::BITS;
1212        // Grow the underlying buffer if the new element would exceed its bit capacity.
1213        if (self.len + 1) * self.bit_width > self.bits.len() * bits_per_word {
1214            self.bits.push(W::ZERO);
1215        }
1216
1217        // SAFETY: We have ensured the value fits and the buffer has capacity.
1218        unsafe {
1219            self.set_unchecked(self.len, value_w);
1220        }
1221
1222        self.len += 1;
1223    }
1224
1225    /// Removes the last element from the vector and returns it.
1226    ///
1227    /// Returns `None` if the vector is empty.
1228    ///
1229    /// # Examples
1230    ///
1231    /// ```
1232    /// use compressed_intvec::fixed_vec;
1233    ///
1234    /// let mut vec = fixed_vec![1u32, 2, 3];
1235    /// assert_eq!(vec.pop(), Some(3));
1236    /// assert_eq!(vec.len(), 2);
1237    /// ```
1238    pub fn pop(&mut self) -> Option<T> {
1239        if self.is_empty() {
1240            return None;
1241        }
1242        let value = self.get(self.len - 1).unwrap();
1243        self.len -= 1;
1244        Some(value)
1245    }
1246
1247    /// Removes all elements from the vector, leaving the capacity unchanged.
1248    pub fn clear(&mut self) {
1249        self.len = 0;
1250    }
1251
1252    /// Creates a new, empty [`FixedVec`] with a specified bit width and capacity.
1253    ///
1254    /// The vector will be able to hold at least `capacity` elements without
1255    /// reallocating.
1256    ///
1257    /// # Errors
1258    ///
1259    /// Returns an [`Error::InvalidParameters`] if `bit_width` is greater than
1260    /// the number of bits in the storage word `W`.
1261    ///
1262    /// # Examples
1263    ///
1264    /// ```
1265    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
1266    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
1267    ///
1268    /// let vec: UFixedVec<u32> = FixedVec::with_capacity(5, 1000)?;
1269    /// assert!(vec.capacity() >= 1000);
1270    /// # Ok(())
1271    /// # }
1272    /// ```
1273    pub fn with_capacity(bit_width: usize, capacity: usize) -> Result<Self, Error> {
1274        if bit_width > <W as traits::Word>::BITS {
1275            return Err(Error::InvalidParameters(format!(
1276                "bit_width ({}) cannot be greater than the word size ({})",
1277                bit_width,
1278                <W as traits::Word>::BITS
1279            )));
1280        }
1281        let bits_per_word = <W as traits::Word>::BITS;
1282        let total_bits = capacity.saturating_mul(bit_width);
1283        let num_words = total_bits.div_ceil(bits_per_word);
1284
1285        let buffer = if capacity == 0 {
1286            Vec::new()
1287        } else {
1288            Vec::with_capacity(num_words + 1) // +1 for padding
1289        };
1290
1291        Ok(unsafe { Self::new_unchecked(buffer, 0, bit_width) })
1292    }
1293
1294    /// Returns the number of elements the vector can hold without reallocating.
1295    pub fn capacity(&self) -> usize {
1296        if self.bit_width == 0 {
1297            return usize::MAX;
1298        }
1299        let word_capacity = self.bits.capacity();
1300        if word_capacity <= 1 {
1301            return 0; // Not enough for data + padding.
1302        }
1303        // Subtract padding words before calculating element capacity.
1304        ((word_capacity - 1) * <W as traits::Word>::BITS) / self.bit_width
1305    }
1306
1307    /// Returns the capacity of the underlying storage in words.
1308    pub fn word_capacity(&self) -> usize {
1309        self.bits.capacity()
1310    }
1311
1312    /// Reserves capacity for at least `additional` more elements to be inserted.
1313    ///
1314    /// # Panics
1315    ///
1316    /// Panics if the new capacity overflows [`usize`].
1317    ///
1318    /// # Examples
1319    ///
1320    /// ```
1321    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
1322    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
1323    ///
1324    /// let mut vec: UFixedVec<u32> = FixedVec::new(4)?;
1325    /// vec.reserve(100);
1326    /// assert!(vec.capacity() >= 100);
1327    /// # Ok(())
1328    /// # }
1329    /// ```
1330    pub fn reserve(&mut self, additional: usize) {
1331        let target_element_capacity = self.len.saturating_add(additional);
1332        if self.capacity() >= target_element_capacity {
1333            return;
1334        }
1335        let bits_per_word = <W as Word>::BITS;
1336        let required_total_bits = target_element_capacity.saturating_mul(self.bit_width);
1337        let required_data_words = required_total_bits.div_ceil(bits_per_word);
1338        let required_word_capacity = required_data_words + 1; // +1 for padding
1339
1340        let current_len = self.bits.len();
1341        if self.bits.capacity() < required_word_capacity {
1342            self.bits.reserve(required_word_capacity - current_len);
1343        }
1344    }
1345
1346    /// Resizes the vector so that its length is equal to `new_len`.
1347    ///
1348    /// If `new_len` is greater than the current length, the vector is extended
1349    /// by appending `value`. If `new_len` is less than the current length, the
1350    /// vector is truncated.
1351    ///
1352    /// # Panics
1353    ///
1354    /// Panics if the `value` used for filling does not fit in the configured
1355    /// `bit_width`.
1356    ///
1357    /// # Examples
1358    ///
1359    /// ```
1360    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
1361    /// use compressed_intvec::fixed::{FixedVec, UFixedVec};
1362    ///
1363    /// let mut vec = UFixedVec::<u32>::new(4)?;
1364    /// vec.push(1);
1365    /// vec.push(2);
1366    ///
1367    /// vec.resize(4, 10);
1368    /// assert_eq!(vec.len(), 4);
1369    /// assert_eq!(vec.get(3), Some(10));
1370    ///
1371    /// vec.resize(1, 0);
1372    /// assert_eq!(vec.len(), 1);
1373    /// assert_eq!(vec.get(0), Some(1));
1374    /// # Ok(())
1375    /// # }
1376    /// ```
1377    #[inline(always)]
1378    pub fn resize(&mut self, new_len: usize, value: T) {
1379        if new_len > self.len {
1380            let value_w = <T as Storable<W>>::into_word(value);
1381
1382            if (value_w & !self.mask) != W::ZERO {
1383                panic!(
1384                    "Value {:?} does not fit in the configured bit_width of {}",
1385                    value_w, self.bit_width
1386                );
1387            }
1388
1389            let bits_per_word = <W as traits::Word>::BITS;
1390            let required_total_bits = new_len * self.bit_width;
1391            let required_data_words = required_total_bits.div_ceil(bits_per_word);
1392            let required_vec_len = required_data_words.saturating_add(1); // Padding
1393
1394            if self.bits.len() < required_vec_len {
1395                self.bits.resize(required_vec_len, W::ZERO);
1396            }
1397
1398            for i in self.len..new_len {
1399                unsafe {
1400                    self.set_unchecked(i, value_w);
1401                }
1402            }
1403            self.len = new_len;
1404        } else {
1405            self.len = new_len;
1406        }
1407    }
1408
1409    /// Shrinks the capacity of the vector as much as possible.
1410    pub fn shrink_to_fit(&mut self) {
1411        let min_word_len = if self.len == 0 {
1412            0
1413        } else {
1414            let bits_per_word = <W as traits::Word>::BITS;
1415            let required_total_bits = self.len.saturating_mul(self.bit_width);
1416            let required_words = required_total_bits.div_ceil(bits_per_word);
1417            required_words + 1 // +1 for padding
1418        };
1419
1420        if self.bits.len() > min_word_len {
1421            self.bits.truncate(min_word_len);
1422        }
1423        self.bits.shrink_to_fit();
1424    }
1425
1426    /// Removes and returns the element at `index`, shifting all elements after
1427    /// it to the left.
1428    ///
1429    /// This operation is O(n), where n is the number of elements after `index`.
1430    ///
1431    /// # Panics
1432    ///
1433    /// Panics if `index` is out of bounds.
1434    ///
1435    /// # Complexity
1436    ///
1437    /// This operation has a complexity of O(L), where L is the number of bits
1438    /// that need to be shifted. In the worst case (removing the first element),
1439    /// this is proportional to the total number of bits in the vector
1440    /// (`self.len() * self.bit_width()`).
1441    ///
1442    /// # Panics
1443    ///
1444    /// Panics if `index` is out of bounds.
1445    pub fn remove(&mut self, index: usize) -> T {
1446        assert!(index < self.len, "remove: index out of bounds");
1447
1448        let value_to_return = self.get(index).unwrap();
1449
1450        let start_bit = index * self.bit_width;
1451        let end_bit = self.len * self.bit_width;
1452        let total_bits_to_shift = end_bit - (start_bit + self.bit_width);
1453
1454        if total_bits_to_shift > 0 {
1455            self.shift_bits_left(start_bit, self.bit_width, total_bits_to_shift);
1456        }
1457
1458        self.len -= 1;
1459
1460        value_to_return
1461    }
1462
1463    /// Inserts an element at `index`, shifting all elements after it to the right.
1464    ///
1465    /// This operation is O(n), where n is the number of elements after `index`.
1466    ///
1467    /// # Panics
1468    ///
1469    /// Panics if `index` is out of bounds or if the `element` is too large to
1470    /// be represented by the configured `bit_width`.
1471    ///
1472    /// # Complexity
1473    ///
1474    /// This operation has a complexity of O(L), where L is the number of bits
1475    /// that need to be shifted. In the worst case (inserting at the beginning),
1476    /// this is proportional to the total number of bits in the vector
1477    /// (`self.len() * self.bit_width()`).
1478    ///
1479    /// # Panics
1480    ///
1481    /// Panics if `index` is out of bounds or if the `element` is too large to
1482    /// be represented by the configured `bit_width`.
1483    pub fn insert(&mut self, index: usize, element: T) {
1484        assert!(index <= self.len, "insert: index out of bounds");
1485        let value_w = <T as Storable<W>>::into_word(element);
1486        let bits_per_word = <W as Word>::BITS;
1487        let limit = if self.bit_width < bits_per_word {
1488            W::ONE << self.bit_width
1489        } else {
1490            W::max_value()
1491        };
1492        if self.bit_width < bits_per_word && value_w >= limit {
1493            panic!(
1494                "Value {:?} does not fit in the configured bit_width of {}",
1495                value_w, self.bit_width
1496            );
1497        }
1498        self.reserve(1);
1499        let start_shift_bit = index * self.bit_width;
1500        let num_bits_to_move = (self.len - index) * self.bit_width;
1501        if num_bits_to_move > 0 {
1502            self.shift_bits_right(start_shift_bit, self.bit_width, num_bits_to_move);
1503        }
1504        self.len += 1;
1505        unsafe {
1506            self.set_unchecked(index, value_w);
1507        }
1508    }
1509
1510    /// Shifts a range of bits to the left in-place.
1511    fn shift_bits_left(&mut self, start_bit: usize, shift_amount: usize, num_bits_to_move: usize) {
1512        if num_bits_to_move == 0 {
1513            return;
1514        }
1515
1516        let bits_per_word = <W as Word>::BITS;
1517
1518        // Fast path for word-aligned shifts, using `copy_within`.
1519        if shift_amount.is_multiple_of(bits_per_word) {
1520            let start_write_word = start_bit / bits_per_word;
1521            let start_read_word = (start_bit + shift_amount) / bits_per_word;
1522            let num_words_to_move = num_bits_to_move.div_ceil(bits_per_word);
1523
1524            if start_read_word < self.bits.len() {
1525                let read_end = (start_read_word + num_words_to_move).min(self.bits.len());
1526                self.bits
1527                    .copy_within(start_read_word..read_end, start_write_word);
1528            }
1529            return;
1530        }
1531
1532        // Slow path for unaligned shifts (word-at-a-time).
1533        let shift_rem = shift_amount % bits_per_word;
1534        let inv_shift_rem = bits_per_word - shift_rem;
1535
1536        let start_write_bit = start_bit;
1537        let end_write_bit = start_bit + num_bits_to_move;
1538
1539        let start_write_word = start_write_bit / bits_per_word;
1540        let end_write_word = (end_write_bit - 1) / bits_per_word;
1541
1542        for write_word_idx in start_write_word..=end_write_word {
1543            // Fetch the source data, which may span two source words.
1544            let read_bit = write_word_idx * bits_per_word + shift_rem;
1545            let read_word_idx = read_bit / bits_per_word;
1546
1547            let low_part = self.bits.get(read_word_idx).copied().unwrap_or(W::ZERO) >> shift_rem;
1548            let high_part =
1549                self.bits.get(read_word_idx + 1).copied().unwrap_or(W::ZERO) << inv_shift_rem;
1550
1551            let value_to_write = low_part | high_part;
1552
1553            // Create a mask for the bits we are about to modify.
1554            let mut mask = W::max_value();
1555            if write_word_idx == start_write_word {
1556                mask &= W::max_value() << (start_write_bit % bits_per_word);
1557            }
1558            if write_word_idx == end_write_word {
1559                let end_offset = end_write_bit % bits_per_word;
1560                if end_offset != 0 {
1561                    mask &= (W::ONE << end_offset).wrapping_sub(W::ONE);
1562                }
1563            }
1564
1565            self.bits[write_word_idx] =
1566                (self.bits[write_word_idx] & !mask) | (value_to_write & mask);
1567        }
1568    }
1569
1570    /// Shifts a range of bits to the right in-place.
1571    fn shift_bits_right(&mut self, start_bit: usize, shift_amount: usize, num_bits_to_move: usize) {
1572        if num_bits_to_move == 0 {
1573            return;
1574        }
1575
1576        let bits_per_word = <W as Word>::BITS;
1577
1578        // Ensure the vector has enough capacity and is resized to accommodate the shift.
1579        let required_end_bit = start_bit + shift_amount + num_bits_to_move;
1580        let required_words = required_end_bit.div_ceil(bits_per_word);
1581        let required_vec_len = required_words.saturating_add(1); // +1 for padding
1582        if self.bits.len() < required_vec_len {
1583            self.bits.resize(required_vec_len, W::ZERO);
1584        }
1585
1586        // Fast path for word-aligned shifts.
1587        if shift_amount.is_multiple_of(bits_per_word) {
1588            let start_read_word = start_bit / bits_per_word;
1589            let start_write_word = (start_bit + shift_amount) / bits_per_word;
1590            let num_words_to_move = num_bits_to_move.div_ceil(bits_per_word);
1591
1592            if start_read_word + num_words_to_move <= self.bits.len() {
1593                self.bits.copy_within(
1594                    start_read_word..start_read_word + num_words_to_move,
1595                    start_write_word,
1596                );
1597            }
1598        } else {
1599            // Slow path for unaligned shifts (iterating from right to left).
1600            let word_shift = shift_amount / bits_per_word;
1601            let shift_rem = shift_amount % bits_per_word;
1602            let inv_shift_rem = bits_per_word - shift_rem;
1603
1604            let start_write_bit = start_bit + shift_amount;
1605            let end_write_bit = start_write_bit + num_bits_to_move;
1606
1607            let start_write_word = start_write_bit / bits_per_word;
1608            let end_write_word = (end_write_bit - 1) / bits_per_word;
1609
1610            for write_word_idx in (start_write_word..=end_write_word).rev() {
1611                let read_word_idx = write_word_idx - word_shift;
1612
1613                // Fetch source data, which may span two words.
1614                let high_part =
1615                    self.bits.get(read_word_idx).copied().unwrap_or(W::ZERO) << shift_rem;
1616                let low_part = if read_word_idx > 0 {
1617                    self.bits.get(read_word_idx - 1).copied().unwrap_or(W::ZERO) >> inv_shift_rem
1618                } else {
1619                    W::ZERO
1620                };
1621                let value_to_write = low_part | high_part;
1622
1623                // Create a mask for the bits we are about to modify.
1624                let mut mask = W::max_value();
1625                if write_word_idx == start_write_word {
1626                    mask &= W::max_value() << (start_write_bit % bits_per_word);
1627                }
1628                if write_word_idx == end_write_word {
1629                    let end_offset = end_write_bit % bits_per_word;
1630                    if end_offset != 0 {
1631                        mask &= (W::ONE << end_offset).wrapping_sub(W::ONE);
1632                    }
1633                }
1634
1635                self.bits[write_word_idx] =
1636                    (self.bits[write_word_idx] & !mask) | (value_to_write & mask);
1637            }
1638        }
1639
1640        // Zero out the vacated bits at the beginning of the shifted region.
1641        let mut clear_bit = start_bit;
1642        let end_clear_bit = start_bit + shift_amount;
1643
1644        while clear_bit < end_clear_bit {
1645            let word_idx = clear_bit / bits_per_word;
1646            let offset = clear_bit % bits_per_word;
1647            let bits_to_clear = (bits_per_word - offset).min(end_clear_bit - clear_bit);
1648
1649            let mask = if bits_to_clear == bits_per_word {
1650                W::max_value()
1651            } else {
1652                ((W::ONE << bits_to_clear).wrapping_sub(W::ONE)) << offset
1653            };
1654
1655            if word_idx < self.bits.len() {
1656                self.bits[word_idx] &= !mask;
1657            }
1658            clear_bit += bits_to_clear;
1659        }
1660    }
1661
1662    /// Removes an element at `index` and returns it, replacing it with the last
1663    /// element of the vector.
1664    ///
1665    /// This operation is O(1) but does not preserve the order of elements.
1666    ///
1667    /// # Panics
1668    ///
1669    /// Panics if `index` is out of bounds.
1670    pub fn swap_remove(&mut self, index: usize) -> T {
1671        assert!(index < self.len, "swap_remove: index out of bounds");
1672
1673        if index == self.len - 1 {
1674            self.pop().unwrap()
1675        } else {
1676            let old_val = unsafe { self.get_unchecked(index) };
1677            let last_val = self.pop().unwrap(); // `pop` already decrements len
1678            self.set(index, last_val);
1679            old_val
1680        }
1681    }
1682
1683    /// Appends an element to the vector, returning an error if the value doesn't fit.
1684    ///
1685    /// # Errors
1686    ///
1687    /// Returns [`Error::ValueTooLarge`] if the `value` cannot be represented
1688    /// by the configured `bit_width`.
1689    pub fn try_push(&mut self, value: T) -> Result<(), Error> {
1690        let value_w = <T as Storable<W>>::into_word(value);
1691        let bits_per_word = <W as traits::Word>::BITS;
1692
1693        let limit = if self.bit_width < bits_per_word {
1694            W::ONE << self.bit_width
1695        } else {
1696            W::max_value()
1697        };
1698
1699        if self.bit_width < bits_per_word && value_w >= limit {
1700            return Err(Error::ValueTooLarge {
1701                value: value_w.to_u128().unwrap(),
1702                index: self.len,
1703                bit_width: self.bit_width,
1704            });
1705        }
1706
1707        self.push(value);
1708        Ok(())
1709    }
1710
1711    /// Extends the vector with the elements from a slice.
1712    ///
1713    /// # Panics
1714    ///
1715    /// Panics if any value in `other` does not fit within the `bit_width`.
1716    pub fn extend_from_slice(&mut self, other: &[T]) {
1717        if other.is_empty() {
1718            return;
1719        }
1720
1721        self.reserve(other.len());
1722
1723        // Pre-validate all values to ensure atomicity.
1724        let bits_per_word = <W as traits::Word>::BITS;
1725        let limit = if self.bit_width < bits_per_word {
1726            W::ONE << self.bit_width
1727        } else {
1728            W::max_value()
1729        };
1730        if self.bit_width < bits_per_word {
1731            for (i, &value) in other.iter().enumerate() {
1732                let value_w = <T as Storable<W>>::into_word(value);
1733                if value_w >= limit {
1734                    panic!(
1735                        "Value at index {} of slice ({:?}) does not fit in the configured bit_width of {}",
1736                        i, value_w, self.bit_width
1737                    );
1738                }
1739            }
1740        }
1741
1742        let old_len = self.len;
1743        let new_len = old_len + other.len();
1744
1745        // Ensure the underlying Vec has enough initialized words to write into.
1746        let required_total_bits = new_len * self.bit_width;
1747        let required_data_words = required_total_bits.div_ceil(bits_per_word);
1748        let required_vec_len = required_data_words.saturating_add(1); // Padding
1749        if self.bits.len() < required_vec_len {
1750            self.bits.resize(required_vec_len, W::ZERO);
1751        }
1752
1753        // Write the new values.
1754        for (i, &value) in other.iter().enumerate() {
1755            // SAFETY: We have reserved, resized, and validated the data.
1756            unsafe {
1757                self.set_unchecked(old_len + i, <T as Storable<W>>::into_word(value));
1758            }
1759        }
1760
1761        self.len = new_len;
1762    }
1763}
1764
1765// Mutable in-place operations.
1766impl<T, W, E, B> FixedVec<T, W, E, B>
1767where
1768    T: Storable<W>,
1769    W: Word,
1770    E: Endianness,
1771    B: AsRef<[W]> + AsMut<[W]>,
1772{
1773    /// Returns a mutable proxy for an element at `index`.
1774    ///
1775    /// This allows for syntax like `*vec.at_mut(i).unwrap() = new_value;`.
1776    ///
1777    /// Returns `None` if the index is out of bounds.
1778    pub fn at_mut(&mut self, index: usize) -> Option<MutProxy<'_, T, W, E, B>> {
1779        if index >= self.len {
1780            return None;
1781        }
1782        Some(MutProxy::new(self, index))
1783    }
1784
1785    /// Returns a mutable slice of the underlying storage words.
1786    ///
1787    /// # Safety
1788    ///
1789    /// Modifying the returned slice is logically unsafe. Any change to the bits
1790    /// can violate the invariants of the [`FixedVec`], leading to panics or
1791    /// incorrect results on subsequent method calls.
1792    pub unsafe fn as_mut_limbs(&mut self) -> &mut [W] {
1793        self.bits.as_mut()
1794    }
1795
1796    /// Sets the value of the element at `index`.
1797    ///
1798    /// # Panics
1799    ///
1800    /// Panics if `index` is out of bounds or if `value` is too large to be
1801    /// represented by the configured `bit_width`.
1802    ///
1803    /// # Examples
1804    ///
1805    /// ```
1806    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
1807    /// use compressed_intvec::fixed::{FixedVec, UFixedVec, BitWidth};
1808    ///
1809    /// let data: &[u32] = &[10, 20, 30];
1810    /// let mut vec: UFixedVec<u32> = FixedVec::builder().bit_width(BitWidth::Explicit(7)).build(data)?;
1811    ///
1812    /// vec.set(1, 99);
1813    /// assert_eq!(vec.get(1), Some(99));
1814    /// # Ok(())
1815    /// # }
1816    /// ```
1817    pub fn set(&mut self, index: usize, value: T) {
1818        assert!(
1819            index < self.len,
1820            "Index out of bounds: expected index < {}, got {}",
1821            self.len,
1822            index
1823        );
1824
1825        let value_w = <T as Storable<W>>::into_word(value);
1826        let bits_per_word = <W as traits::Word>::BITS;
1827
1828        let limit = if self.bit_width < bits_per_word {
1829            W::ONE << self.bit_width
1830        } else {
1831            W::max_value()
1832        };
1833
1834        if self.bit_width < bits_per_word && value_w >= limit {
1835            panic!(
1836                "Value {:?} does not fit in the configured bit_width of {}",
1837                value_w, self.bit_width
1838            );
1839        }
1840
1841        unsafe { self.set_unchecked(index, value_w) };
1842    }
1843
1844    /// Sets the value of the element at `index` without bounds or value checking.
1845    ///
1846    /// # Safety
1847    ///
1848    /// The caller must ensure that `index` is within bounds and that `value_w`
1849    /// fits within the configured `bit_width`. Failure to do so will result in
1850    /// data corruption or a panic.
1851    #[inline(always)]
1852    pub unsafe fn set_unchecked(&mut self, index: usize, value_w: W) {
1853        let bits_per_word = <W as traits::Word>::BITS;
1854        if self.bit_width == bits_per_word {
1855            unsafe {
1856                *self.bits.as_mut().get_unchecked_mut(index) = if E::IS_LITTLE {
1857                    value_w
1858                } else {
1859                    value_w.to_be()
1860                };
1861            }
1862            return;
1863        }
1864
1865        let bit_pos = index * self.bit_width;
1866        let word_index = bit_pos / bits_per_word;
1867        let bit_offset = bit_pos % bits_per_word;
1868
1869        let limbs = self.bits.as_mut();
1870
1871        if E::IS_LITTLE {
1872            // Fast path: the value fits entirely within a single word.
1873            if bit_offset + self.bit_width <= bits_per_word {
1874                let mut word = unsafe { *limbs.get_unchecked(word_index) };
1875                // Clear the target bits and then OR the new value.
1876                word &= !(self.mask << bit_offset);
1877                word |= value_w << bit_offset;
1878                unsafe { *limbs.get_unchecked_mut(word_index) = word };
1879            } else {
1880                let remaining_bits_in_first_word = bits_per_word - bit_offset;
1881                let (left, right) = unsafe { limbs.split_at_mut_unchecked(word_index + 1) };
1882                let mut low_word_val = unsafe { *left.get_unchecked(word_index) };
1883                let low_mask = (<W as num_traits::NumCast>::from(1u8).unwrap() << bit_offset)
1884                    .wrapping_sub(<W as num_traits::NumCast>::from(1u8).unwrap());
1885                low_word_val &= low_mask;
1886                low_word_val |= value_w << bit_offset;
1887                unsafe { *left.get_unchecked_mut(word_index) = low_word_val };
1888
1889                let mut high_word_val = unsafe { *right.get_unchecked(0) };
1890                high_word_val &= !(self.mask >> remaining_bits_in_first_word);
1891                high_word_val |= value_w >> remaining_bits_in_first_word;
1892                unsafe { *right.get_unchecked_mut(0) = high_word_val };
1893            }
1894        } else {
1895            // Big-Endian set logic.
1896            if bit_offset + self.bit_width <= bits_per_word {
1897                let shift = bits_per_word - self.bit_width - bit_offset;
1898                let mask = self.mask << shift;
1899                let word = unsafe { limbs.get_unchecked_mut(word_index) };
1900                *word &= !mask.to_be();
1901                *word |= (value_w << shift).to_be();
1902            } else {
1903                let (left, right) = unsafe { limbs.split_at_mut_unchecked(word_index + 1) };
1904                let high_word = unsafe { left.get_unchecked_mut(word_index) };
1905                let low_word = unsafe { right.get_unchecked_mut(0) };
1906
1907                let bits_in_first = bits_per_word - bit_offset;
1908                let bits_in_second = self.bit_width - bits_in_first;
1909
1910                let high_mask =
1911                    (self.mask >> bits_in_second) << (bits_per_word - bits_in_first - bit_offset);
1912                let high_value = value_w >> bits_in_second;
1913                *high_word &= !high_mask.to_be();
1914                *high_word |= (high_value << (bits_per_word - bits_in_first - bit_offset)).to_be();
1915
1916                let low_mask = self.mask << (bits_per_word - bits_in_second);
1917                let low_value = value_w << (bits_per_word - bits_in_second);
1918                *low_word &= !low_mask.to_be();
1919                *low_word |= low_value.to_be();
1920            }
1921        }
1922    }
1923
1924    /// Sets the value of an element, returning an error if the value doesn't fit.
1925    ///
1926    /// # Errors
1927    ///
1928    /// Returns [`Error::ValueTooLarge`] if the `value` cannot be represented
1929    /// by the configured `bit_width`.
1930    ///
1931    /// # Panics
1932    ///
1933    /// Panics if `index` is out of bounds.
1934    pub fn try_set(&mut self, index: usize, value: T) -> Result<(), Error> {
1935        assert!(index < self.len, "try_set: index out of bounds");
1936
1937        let value_w = <T as Storable<W>>::into_word(value);
1938        let bits_per_word = <W as traits::Word>::BITS;
1939
1940        let limit = if self.bit_width < bits_per_word {
1941            W::ONE << self.bit_width
1942        } else {
1943            W::max_value()
1944        };
1945
1946        if self.bit_width < bits_per_word && value_w >= limit {
1947            return Err(Error::ValueTooLarge {
1948                value: value_w.to_u128().unwrap(),
1949                index,
1950                bit_width: self.bit_width,
1951            });
1952        }
1953
1954        unsafe { self.set_unchecked(index, value_w) };
1955        Ok(())
1956    }
1957
1958    /// Returns an iterator over non-overlapping, mutable chunks of the vector.
1959    ///
1960    /// # Panics
1961    ///
1962    /// Panics if `chunk_size` is 0.
1963    pub fn chunks_mut(&mut self, chunk_size: usize) -> iter_mut::ChunksMut<'_, T, W, E, B> {
1964        iter_mut::ChunksMut::new(self, chunk_size)
1965    }
1966
1967    /// Returns an unchecked mutable iterator over the elements of the vector.
1968    ///
1969    /// # Safety
1970    ///
1971    /// The caller must ensure that the iterator is not advanced beyond the
1972    /// vector's length.
1973    pub unsafe fn iter_mut_unchecked(&mut self) -> iter_mut::IterMutUnchecked<'_, T, W, E, B> {
1974        iter_mut::IterMutUnchecked::new(self)
1975    }
1976
1977    /// Applies a function to all elements in place, without checking if the
1978    /// returned values fit within the `bit_width`.
1979    ///
1980    /// # Safety
1981    ///
1982    /// The caller must ensure that the function `f` always returns a value
1983    /// that can be represented by `self.bit_width()` bits. Returning a value
1984    /// that is too large will result in data corruption.
1985    pub unsafe fn map_in_place_unchecked<F>(&mut self, mut f: F)
1986    where
1987        F: FnMut(T) -> T,
1988    {
1989        if E::IS_LITTLE {
1990            // This path operates on words directly for efficiency.
1991            let mut word_op = |word: W| -> W {
1992                let val_t = <T as Storable<W>>::from_word(word);
1993                let new_val_t = f(val_t);
1994                <T as Storable<W>>::into_word(new_val_t)
1995            };
1996            unsafe { self.map_in_place_generic_word_op(&mut word_op) };
1997        } else {
1998            // Fallback for BE, which is more complex to optimize at the word level.
1999            for i in 0..self.len {
2000                let old_val_t = unsafe { self.get_unchecked(i) };
2001                let new_val_t = f(old_val_t);
2002                unsafe { self.set_unchecked(i, <T as Storable<W>>::into_word(new_val_t)) };
2003            }
2004        }
2005    }
2006
2007    /// Internal worker for `map_in_place_unchecked`, operating on `W`.
2008    unsafe fn map_in_place_generic_word_op<FW>(&mut self, f_w: &mut FW)
2009    where
2010        FW: FnMut(W) -> W,
2011    {
2012        let bit_width = self.bit_width;
2013        if self.len == 0 || bit_width == 0 {
2014            return;
2015        }
2016
2017        let bits_per_word = <W as Word>::BITS;
2018
2019        // Path for power-of-two bit widths that align with word boundaries.
2020        if bit_width.is_power_of_two() && bits_per_word % bit_width == 0 {
2021            let elems_per_word = bits_per_word / bit_width;
2022            let mask = self.mask;
2023            let num_full_words = self.len / elems_per_word;
2024
2025            for word_idx in 0..num_full_words {
2026                let old_word = unsafe { *self.bits.as_ref().get_unchecked(word_idx) };
2027                let mut new_word = W::ZERO;
2028                for i in 0..elems_per_word {
2029                    let shift = i * bit_width;
2030                    let old_val_w = (old_word >> shift) & mask;
2031                    new_word |= f_w(old_val_w) << shift;
2032                }
2033                unsafe { *self.bits.as_mut().get_unchecked_mut(word_idx) = new_word };
2034            }
2035            // Process remaining elements individually.
2036            let start_idx = num_full_words * elems_per_word;
2037            for i in start_idx..self.len {
2038                let old_val_t = self.get(i).unwrap();
2039                let old_val_w = <T as Storable<W>>::into_word(old_val_t);
2040                unsafe { self.set_unchecked(i, f_w(old_val_w)) };
2041            }
2042            return;
2043        }
2044
2045        // Generic path for non-power-of-two bit widths.
2046        let limbs = self.bits.as_mut();
2047        let num_words = (self.len * bit_width).div_ceil(bits_per_word);
2048        let last_word_idx = num_words.saturating_sub(1);
2049
2050        let mut write_buffer = W::ZERO;
2051        let mut read_buffer = unsafe { *limbs.get_unchecked(0) };
2052        let mut global_bit_offset = 0;
2053
2054        for word_idx in 0..last_word_idx {
2055            let lower_word_boundary = word_idx * bits_per_word;
2056            let upper_word_boundary = lower_word_boundary + bits_per_word;
2057
2058            while global_bit_offset + bit_width <= upper_word_boundary {
2059                let offset_in_word = global_bit_offset - lower_word_boundary;
2060                let element = (read_buffer >> offset_in_word) & self.mask;
2061                write_buffer |= f_w(element) << offset_in_word;
2062                global_bit_offset += bit_width;
2063            }
2064
2065            let next_word = unsafe { *limbs.get_unchecked(word_idx + 1) };
2066            let mut new_write_buffer = W::ZERO;
2067
2068            if upper_word_boundary != global_bit_offset {
2069                let elem_idx = global_bit_offset / bit_width;
2070                if elem_idx >= self.len {
2071                    unsafe { *limbs.get_unchecked_mut(word_idx) = write_buffer };
2072                    return;
2073                }
2074
2075                let remainder_in_word = upper_word_boundary - global_bit_offset;
2076                let offset_in_word = global_bit_offset - lower_word_boundary;
2077
2078                let element = ((read_buffer >> offset_in_word) | (next_word << remainder_in_word))
2079                    & self.mask;
2080                let new_element = f_w(element);
2081
2082                write_buffer |= new_element << offset_in_word;
2083                new_write_buffer = new_element >> remainder_in_word;
2084
2085                global_bit_offset += bit_width;
2086            }
2087
2088            unsafe { *limbs.get_unchecked_mut(word_idx) = write_buffer };
2089
2090            read_buffer = next_word;
2091            write_buffer = new_write_buffer;
2092        }
2093
2094        let lower_word_boundary = last_word_idx * bits_per_word;
2095
2096        while global_bit_offset < self.len * bit_width {
2097            let offset_in_word = global_bit_offset - lower_word_boundary;
2098            let element = (read_buffer >> offset_in_word) & self.mask;
2099            write_buffer |= f_w(element) << offset_in_word;
2100            global_bit_offset += bit_width;
2101        }
2102
2103        unsafe { *limbs.get_unchecked_mut(last_word_idx) = write_buffer };
2104    }
2105
2106    /// Applies a function to all elements in the vector, modifying them in-place.
2107    ///
2108    /// # Panics
2109    ///
2110    /// Panics if the function `f` returns a value that does not fit within the
2111    /// configured `bit_width`.
2112    ///
2113    /// # Examples
2114    ///
2115    /// ```
2116    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
2117    /// use compressed_intvec::fixed::{FixedVec, BitWidth, UFixedVec};
2118    ///
2119    /// // Values up to 9*2=18, requires 5 bits. We must build with enough space.
2120    /// let initial_data: Vec<u32> = (0..10).collect();
2121    /// let mut vec: UFixedVec<u32> = FixedVec::builder()
2122    ///     .bit_width(BitWidth::Explicit(5))
2123    ///     .build(&initial_data)?;
2124    ///
2125    /// vec.map_in_place(|x| x * 2);
2126    ///
2127    /// for i in 0..vec.len() {
2128    ///     assert_eq!(vec.get(i), Some(i as u32 * 2));
2129    /// }
2130    /// # Ok(())
2131    /// # }
2132    /// ```
2133    pub fn map_in_place<F>(&mut self, mut f: F)
2134    where
2135        F: FnMut(T) -> T,
2136    {
2137        let bit_width = self.bit_width;
2138        let limit = if bit_width < <W as Word>::BITS {
2139            W::ONE << bit_width
2140        } else {
2141            W::max_value()
2142        };
2143
2144        let safe_f = |value: T| {
2145            let new_value = f(value);
2146            let new_value_w = <T as Storable<W>>::into_word(new_value);
2147            if bit_width < <W as Word>::BITS && new_value_w >= limit {
2148                panic!(
2149                    "map_in_place: returned value {new_value_w:?} does not fit in bit_width {bit_width}"
2150                );
2151            }
2152            new_value
2153        };
2154
2155        // SAFETY: The `safe_f` wrapper ensures that any value passed to the
2156        // underlying unsafe function is valid for the vector's bit_width.
2157        unsafe {
2158            self.map_in_place_unchecked(safe_f);
2159        }
2160    }
2161
2162    /// Replaces the element at `index` with a new `value`, returning the old value.
2163    ///
2164    /// # Panics
2165    ///
2166    /// Panics if `index` is out of bounds or if `value` does not fit within
2167    /// the configured `bit_width`.
2168    pub fn replace(&mut self, index: usize, value: T) -> T {
2169        assert!(index < self.len, "replace: index out of bounds");
2170
2171        let old_value = unsafe { self.get_unchecked(index) };
2172        self.set(index, value);
2173        old_value
2174    }
2175
2176    /// Swaps the elements at indices `a` and `b`.
2177    ///
2178    /// # Panics
2179    ///
2180    /// Panics if `a` or `b` are out of bounds.
2181    pub fn swap(&mut self, a: usize, b: usize) {
2182        assert!(a < self.len, "swap: index a out of bounds");
2183        assert!(b < self.len, "swap: index b out of bounds");
2184
2185        if a == b {
2186            return;
2187        }
2188
2189        unsafe {
2190            let val_a = self.get_unchecked(a);
2191            let val_b = self.get_unchecked(b);
2192            self.set_unchecked(a, <T as Storable<W>>::into_word(val_b));
2193            self.set_unchecked(b, <T as Storable<W>>::into_word(val_a));
2194        }
2195    }
2196
2197    /// Returns a mutable proxy to the first element, or `None` if empty.
2198    pub fn first_mut(&mut self) -> Option<MutProxy<'_, T, W, E, B>> {
2199        if self.is_empty() {
2200            None
2201        } else {
2202            self.at_mut(0)
2203        }
2204    }
2205
2206    /// Returns a mutable proxy to the last element, or `None` if empty.
2207    pub fn last_mut(&mut self) -> Option<MutProxy<'_, T, W, E, B>> {
2208        if self.is_empty() {
2209            None
2210        } else {
2211            let len = self.len();
2212            self.at_mut(len - 1)
2213        }
2214    }
2215
2216    /// Returns the first element of the vector, or `None` if empty.
2217    pub fn first(&self) -> Option<T> {
2218        if self.is_empty() {
2219            None
2220        } else {
2221            Some(unsafe { self.get_unchecked(0) })
2222        }
2223    }
2224
2225    /// Returns the last element of the vector, or `None` if empty.
2226    pub fn last(&self) -> Option<T> {
2227        if self.is_empty() {
2228            None
2229        } else {
2230            let len = self.len();
2231            Some(unsafe { self.get_unchecked(len - 1) })
2232        }
2233    }
2234
2235    /// Splits the vector into two mutable slices at `mid`.
2236    ///
2237    /// # Panics
2238    ///
2239    /// Panics if `mid > len`.
2240    pub fn split_at_mut(
2241        &mut self,
2242        mid: usize,
2243    ) -> (
2244        slice::FixedVecSlice<&mut Self>,
2245        slice::FixedVecSlice<&mut Self>,
2246    ) {
2247        assert!(mid <= self.len, "mid > len in split_at_mut");
2248        // SAFETY: The two slices are guaranteed not to overlap.
2249        unsafe {
2250            let ptr = self as *mut Self;
2251            let left = slice::FixedVecSlice::new(&mut *ptr, 0..mid);
2252            let right = slice::FixedVecSlice::new(&mut *ptr, mid..self.len());
2253            (left, right)
2254        }
2255    }
2256
2257    /// Returns a mutable iterator over the elements of the vector.
2258    pub fn iter_mut(&mut self) -> iter_mut::IterMut<'_, T, W, E, B> {
2259        iter_mut::IterMut::new(self)
2260    }
2261
2262    /// Rotates the elements of the vector in-place such that the element at `mid`
2263    /// becomes the first element.
2264    ///
2265    /// # Panics
2266    ///
2267    /// Panics if `mid > len`.
2268    pub fn rotate_left(&mut self, mid: usize) {
2269        assert!(mid <= self.len, "mid > len in rotate_left");
2270        if self.is_empty() || mid == 0 || mid == self.len {
2271            return;
2272        }
2273        // A simple, correct implementation.
2274        let mut temp = Vec::with_capacity(mid);
2275        for i in 0..mid {
2276            temp.push(unsafe { self.get_unchecked(i) });
2277        }
2278        for i in mid..self.len {
2279            let val = unsafe { self.get_unchecked(i) };
2280            unsafe { self.set_unchecked(i - mid, <T as Storable<W>>::into_word(val)) };
2281        }
2282        for (i, val) in temp.into_iter().enumerate() {
2283            unsafe { self.set_unchecked(self.len - mid + i, <T as Storable<W>>::into_word(val)) };
2284        }
2285    }
2286
2287    /// Rotates the elements of the vector in-place such that the element at
2288    /// `len - k` becomes the first element.
2289    ///
2290    /// # Panics
2291    ///
2292    /// Panics if `k > len`.
2293    pub fn rotate_right(&mut self, k: usize) {
2294        assert!(k <= self.len, "k > len in rotate_right");
2295        if self.is_empty() || k == 0 || k == self.len {
2296            return;
2297        }
2298        self.rotate_left(self.len - k);
2299    }
2300
2301    /// Fills the vector with the given value.
2302    ///
2303    /// # Panics
2304    ///
2305    /// Panics if `value` does not fit in the configured `bit_width`.
2306    pub fn fill(&mut self, value: T) {
2307        let value_w = <T as Storable<W>>::into_word(value);
2308        let bits_per_word = <W as traits::Word>::BITS;
2309        let limit = if self.bit_width < bits_per_word {
2310            W::ONE << self.bit_width
2311        } else {
2312            W::max_value()
2313        };
2314        if self.bit_width < bits_per_word && value_w >= limit {
2315            panic!(
2316                "Value {:?} does not fit in the configured bit_width of {}",
2317                value_w, self.bit_width
2318            );
2319        }
2320
2321        for i in 0..self.len() {
2322            unsafe { self.set_unchecked(i, value_w) };
2323        }
2324    }
2325
2326    /// Fills the vector with values returned by a closure.
2327    ///
2328    /// # Panics
2329    ///
2330    /// Panics if the closure returns a value that does not fit in the configured
2331    /// `bit_width`.
2332    pub fn fill_with<F>(&mut self, mut f: F)
2333    where
2334        F: FnMut() -> T,
2335    {
2336        let bits_per_word = <W as traits::Word>::BITS;
2337        let limit = if self.bit_width < bits_per_word {
2338            W::ONE << self.bit_width
2339        } else {
2340            W::max_value()
2341        };
2342
2343        for i in 0..self.len() {
2344            let value = f();
2345            let value_w = <T as Storable<W>>::into_word(value);
2346            if self.bit_width < bits_per_word && value_w >= limit {
2347                panic!(
2348                    "Value {:?} returned by closure does not fit in bit_width {}",
2349                    value_w, self.bit_width
2350                );
2351            }
2352            unsafe { self.set_unchecked(i, value_w) };
2353        }
2354    }
2355
2356    /// Copies a sequence of elements from a source [`FixedVec`] into this one.
2357    ///
2358    /// # Panics
2359    ///
2360    /// Panics if the source and destination vectors do not have the same `bit_width`,
2361    /// or if the source or destination ranges are out of bounds.
2362    pub fn copy_from_slice(
2363        &mut self,
2364        src: &Self,
2365        src_range: std::ops::Range<usize>,
2366        dest_index: usize,
2367    ) {
2368        assert_eq!(
2369            self.bit_width, src.bit_width,
2370            "bit_width mismatch in copy_from_slice"
2371        );
2372        assert!(src_range.start <= src_range.end, "source range start > end");
2373        assert!(src_range.end <= src.len(), "source range out of bounds");
2374        let len = src_range.len();
2375        assert!(
2376            dest_index + len <= self.len(),
2377            "destination range out of bounds"
2378        );
2379
2380        if len == 0 {
2381            return;
2382        }
2383
2384        // Fast path: if bit alignments are the same, we can do word-level copies.
2385        let bit_width = self.bit_width;
2386        let bits_per_word = <W as Word>::BITS;
2387        let src_bit_offset = (src_range.start * bit_width) % bits_per_word;
2388        let dest_bit_offset = (dest_index * bit_width) % bits_per_word;
2389
2390        if src_bit_offset == dest_bit_offset {
2391            let src_word_start = (src_range.start * bit_width) / bits_per_word;
2392            let dest_word_start = (dest_index * bit_width) / bits_per_word;
2393            let total_bits_to_copy = len * bit_width;
2394            let num_words_to_copy = total_bits_to_copy.div_ceil(bits_per_word);
2395
2396            let src_words = &src.bits.as_ref()[src_word_start..src_word_start + num_words_to_copy];
2397            let dest_words =
2398                &mut self.bits.as_mut()[dest_word_start..dest_word_start + num_words_to_copy];
2399
2400            // If the last word is not fully copied, we need to mask it.
2401            let residual_bits = total_bits_to_copy % bits_per_word;
2402            if residual_bits == 0 {
2403                dest_words.copy_from_slice(src_words);
2404            } else {
2405                if num_words_to_copy > 1 {
2406                    dest_words[..num_words_to_copy - 1]
2407                        .copy_from_slice(&src_words[..num_words_to_copy - 1]);
2408                }
2409                let last_word_mask = (W::ONE << residual_bits).wrapping_sub(W::ONE);
2410                let dest_last_word = &mut dest_words[num_words_to_copy - 1];
2411                let src_last_word = &src_words[num_words_to_copy - 1];
2412                *dest_last_word =
2413                    (*dest_last_word & !last_word_mask) | (*src_last_word & last_word_mask);
2414            }
2415        } else {
2416            // Slow path: copy element by element if alignments differ.
2417            if dest_index > src_range.start && dest_index < src_range.end {
2418                // Copy backwards if ranges overlap and dest is after src.
2419                for i in (0..len).rev() {
2420                    let val = unsafe { src.get_unchecked(src_range.start + i) };
2421                    unsafe {
2422                        self.set_unchecked(dest_index + i, <T as Storable<W>>::into_word(val))
2423                    };
2424                }
2425            } else {
2426                for i in 0..len {
2427                    let val = unsafe { src.get_unchecked(src_range.start + i) };
2428                    unsafe {
2429                        self.set_unchecked(dest_index + i, <T as Storable<W>>::into_word(val))
2430                    };
2431                }
2432            }
2433        }
2434    }
2435}
2436
2437impl<T, W, E, B, B2> PartialEq<FixedVec<T, W, E, B2>> for FixedVec<T, W, E, B>
2438where
2439    T: Storable<W> + PartialEq,
2440    W: Word,
2441    E: Endianness,
2442    B: AsRef<[W]>,
2443    B2: AsRef<[W]>,
2444{
2445    /// Checks for equality between two [`FixedVec`] instances.
2446    ///
2447    /// It first checks `len` and `bit_width`, then the underlying storage.
2448    fn eq(&self, other: &FixedVec<T, W, E, B2>) -> bool {
2449        if self.len() != other.len() || self.bit_width() != other.bit_width() {
2450            return false;
2451        }
2452        self.as_limbs() == other.as_limbs()
2453    }
2454}
2455
2456/// Implements `PartialEq` for comparing a [`FixedVec`] with a standard slice.
2457impl<T, W, E, B, T2> PartialEq<&[T2]> for FixedVec<T, W, E, B>
2458where
2459    T: Storable<W> + PartialEq<T2>,
2460    W: Word,
2461    E: Endianness,
2462    B: AsRef<[W]>,
2463    T2: Clone,
2464{
2465    fn eq(&self, other: &&[T2]) -> bool {
2466        if self.len() != other.len() {
2467            return false;
2468        }
2469        self.iter().zip(other.iter()).all(|(a, b)| a == *b)
2470    }
2471}
2472
2473impl<T, W, E> Extend<T> for FixedVec<T, W, E, Vec<W>>
2474where
2475    T: Storable<W> + ToPrimitive,
2476    W: Word,
2477    E: Endianness,
2478{
2479    /// Extends the vector with the contents of an iterator.
2480    ///
2481    /// # Panics
2482    ///
2483    /// Panics if any value from the iterator does not fit within the
2484    /// configured `bit_width`.
2485    fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
2486        let iter = iter.into_iter();
2487        let (lower_bound, _) = iter.size_hint();
2488        self.reserve(lower_bound);
2489        for item in iter {
2490            self.push(item);
2491        }
2492    }
2493}
2494
2495impl<T, W, E> From<FixedVec<T, W, E, Vec<W>>> for FixedVec<T, W, E, Box<[W]>>
2496where
2497    T: Storable<W>,
2498    W: Word,
2499    E: Endianness,
2500{
2501    /// Converts a `Vec`-backed [`FixedVec`] into a `Box<[]>`-backed [`FixedVec`].
2502    fn from(vec: FixedVec<T, W, E, Vec<W>>) -> Self {
2503        unsafe { Self::new_unchecked(vec.bits.into_boxed_slice(), vec.len, vec.bit_width) }
2504    }
2505}
2506
2507impl<T, W, E> From<FixedVec<T, W, E, Box<[W]>>> for FixedVec<T, W, E, Vec<W>>
2508where
2509    T: Storable<W>,
2510    W: Word,
2511    E: Endianness,
2512{
2513    /// Converts a `Box<[]>`-backed [`FixedVec`] into a `Vec`-backed [`FixedVec`].
2514    fn from(vec: FixedVec<T, W, E, Box<[W]>>) -> Self {
2515        unsafe { Self::new_unchecked(vec.bits.into_vec(), vec.len, vec.bit_width) }
2516    }
2517}
2518
2519impl<'a, T> TryFrom<&'a [T]> for FixedVec<T, <T as DefaultParams>::W, <T as DefaultParams>::E>
2520where
2521    T: Storable<<T as DefaultParams>::W> + DefaultParams,
2522    <T as DefaultParams>::W: Word,
2523    <T as DefaultParams>::E: Endianness,
2524    dsi_bitstream::impls::BufBitWriter<
2525        <T as DefaultParams>::E,
2526        dsi_bitstream::impls::MemWordWriterVec<
2527            <T as DefaultParams>::W,
2528            Vec<<T as DefaultParams>::W>,
2529        >,
2530    >: dsi_bitstream::prelude::BitWrite<<T as DefaultParams>::E, Error = std::convert::Infallible>,
2531{
2532    type Error = Error;
2533
2534    /// Creates a [`FixedVec`] from a slice using [`BitWidth::Minimal`] and default parameters.
2535    ///
2536    /// This is a convenience method equivalent to `FixedVec::builder().build(slice)`.
2537    /// It uses the default [`Word`] ([`usize`]) and [`Endianness`] ([`LE`]) associated with the element type `T`.
2538    ///
2539    /// # Examples
2540    ///
2541    /// ```
2542    /// # fn main() -> Result<(), Box<dyn std::error::Error>> {
2543    /// use compressed_intvec::fixed::{UFixedVec, SFixedVec};
2544    /// use std::convert::TryFrom;
2545    ///
2546    /// // For unsigned types
2547    /// let data_u: &[u32] = &[10, 20, 30];
2548    /// let vec_u = UFixedVec::<u32>::try_from(data_u)?;
2549    /// assert_eq!(vec_u.bit_width(), 5);
2550    ///
2551    /// // For signed types
2552    /// let data_s: &[i16] = &[-10, 0, 10];
2553    /// let vec_s = SFixedVec::<i16>::try_from(data_s)?;
2554    /// assert_eq!(vec_s.bit_width(), 5);
2555    /// # Ok(())
2556    /// # }
2557    /// ```
2558    fn try_from(slice: &'a [T]) -> Result<Self, Self::Error> {
2559        Self::builder().bit_width(BitWidth::Minimal).build(slice)
2560    }
2561}