#[repr(transparent)]
pub struct BitArray<A = [usize; 1], O = Lsb0>
where
    A: BitViewSized,
    O: BitOrder,
{
    pub _ord: PhantomData<O>,
    pub data: A,
}
Bit-Precision Array Immediate
This type is a wrapper over the array fundamental [T; N] that views its contents as a BitSlice region. As an array, it can be held directly by value and does not require an indirection such as the &BitSlice reference.
Usage
BitArray is a Rust analogue of the C++ std::bitset<N> container. However, restrictions in the Rust type system do not allow specifying exact bit lengths in the array type. Instead, it must specify a storage array that can contain all the bits you want.

Because BitArray is a plain-old-data object, its fields are public and it has no restrictions on its interior value. You can freely access the interior storage and move data in or out of the BitArray type with no cost.

As a convenience, the BitArr! type-constructor macro can produce correct type definitions from an exact bit count and your memory-layout type parameters. Values of that type can then be built from the bitarr! value-constructor macro:
use bitvec::prelude::*;
type Example = BitArr!(for 43, in u32, Msb0);
let example: Example = bitarr!(u32, Msb0; 1; 33);
struct HasBitfield {
    inner: Example,
}

let ex2 = HasBitfield {
    inner: BitArray::new([1, 2]),
};
Note that the actual type of the Example alias is BitArray<[u32; 2], Msb0>, since ceil(43 / 32) = 2 u32 elements are needed to hold 43 bits. The bitarr! macro can therefore accept any number of bits in 33 .. 65 and will produce a value of that same type.
Type Parameters
BitArray differs from the other data structures in the crate in that it does not take a T: BitStore parameter, but rather takes A: BitViewSized. That trait is implemented by all T: BitStore scalars and all [T; N] arrays of them, and provides the logic to translate the aggregate storage into the memory sequence that the crate expects.

As with all BitSlice regions, the O: BitOrder parameter specifies the ordering of bits within a single A::Store element.
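As a brief, hedged sketch (the element types and widths below are arbitrary choices, not requirements), both a lone scalar and an array of scalars can serve as the A parameter:

use bitvec::prelude::*;

// A scalar storage type: one u8 element, eight bits.
let scalar_backed: BitArray<u8, Lsb0> = BitArray::new(0u8);
assert_eq!(scalar_backed.len(), 8);

// An array storage type: two u16 elements, thirty-two bits.
let array_backed: BitArray<[u16; 2], Msb0> = BitArray::new([0u16; 2]);
assert_eq!(array_backed.len(), 32);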
Future API Changes
Exact bit lengths cannot be encoded into the BitArray type until the const-generics system in the compiler allows type-level computation on integers. When this stabilizes, bitvec will issue a major upgrade that replaces the BitArray<A, O> definition with BitArray<T, O, const N: usize>, matching the C++ std::bitset<N> definition.
Large Bit-Arrays
As with ordinary arrays, large bit-arrays can be expensive to move by value, and are generally better kept in a stable location such as an actual static binding, a long-lived low stack frame, or a heap allocation. While you certainly can Box<[BitArray<A, O>]> directly, you may instead prefer the BitBox or BitVec heap-allocated regions. These offer the same storage behavior and are better optimized than Box<BitArray> for working with the contained BitSlice region.
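A minimal, hedged sketch of that trade-off; the storage sizes chosen here are purely illustrative:

use bitvec::prelude::*;

// Boxing a large bit-array keeps it off the stack ...
let boxed: Box<BitArray<[usize; 64], Lsb0>> = Box::new(BitArray::ZERO);
assert!(!boxed.is_empty());

// ... but a BitVec provides the same heap storage with better-optimized
// access to the contained bit-slice.
let mut heap_bits: BitVec<usize, Lsb0> = BitVec::repeat(false, 4096);
heap_bits.set(100, true);
assert!(heap_bits[100]);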
Examples
use bitvec::prelude::*;
const WELL_KNOWN: BitArr!(for 16, in u8, Lsb0) = BitArray::<[u8; 2], Lsb0> {
    data: *b"bv",
    ..BitArray::ZERO
};

struct HasBitfields {
    inner: BitArr!(for 50, in u8, Lsb0),
}

impl HasBitfields {
    fn new() -> Self {
        Self {
            inner: bitarr!(u8, Lsb0; 0; 50),
        }
    }

    fn some_field(&self) -> &BitSlice<u8, Lsb0> {
        &self.inner[2 .. 52]
    }
}
Fields
_ord: PhantomData<O>
The ordering of bits within an A::Store element.
data: A
The wrapped data buffer.
Implementations
impl<A, O> BitArray<A, O>
where
    A: BitViewSized,
    O: BitOrder,
pub fn as_slice(&self) -> &BitSlice<A::Store, O>

👎 Deprecated: use .as_bitslice() or .as_raw_slice() instead

Returns a bit-slice containing the entire bit-array. Equivalent to &a[..].

Because BitArray can be viewed as a slice of bits or as a slice of elements with equal ease, you should switch to using .as_bitslice() or .as_raw_slice() to make your choice explicit.
pub fn as_mut_slice(&mut self) -> &mut BitSlice<A::Store, O>

👎 Deprecated: use .as_mut_bitslice() or .as_raw_mut_slice() instead

Returns a mutable bit-slice containing the entire bit-array. Equivalent to &mut a[..].

Because BitArray can be viewed as a slice of bits or as a slice of elements with equal ease, you should switch to using .as_mut_bitslice() or .as_raw_mut_slice() to make your choice explicit.
impl<A, O> BitArray<A, O>
where
    A: BitViewSized,
    O: BitOrder,
pub fn new(data: A) -> Self
Wraps an existing buffer as a bit-array.
Examples
use bitvec::prelude::*;
let data = [0u16, 1, 2, 3];
let bits = BitArray::<_, Msb0>::new(data);
assert_eq!(bits.len(), 64);
pub fn into_inner(self) -> A
Removes the bit-array wrapper, returning the contained buffer.
Examples
use bitvec::prelude::*;
let bits = bitarr![0; 30];
let native: [usize; 1] = bits.into_inner();
pub fn as_bitslice(&self) -> &BitSlice<A::Store, O>
Explicitly views the bit-array as a bit-slice.
pub fn as_mut_bitslice(&mut self) -> &mut BitSlice<A::Store, O>
Explicitly views the bit-array as a mutable bit-slice.
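A short usage sketch of the two explicit bit-views (the bit values are arbitrary):

use bitvec::prelude::*;

let mut arr = bitarr!(u8, Lsb0; 0; 8);

// Explicit mutable bit-view: flip the third bit.
arr.as_mut_bitslice().set(2, true);

// Explicit immutable bit-view: inspect it again.
assert!(arr.as_bitslice()[2]);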
pub fn as_raw_slice(&self) -> &[A::Store]
Views the bit-array as a slice of its underlying memory elements.
pub fn as_raw_mut_slice(&mut self) -> &mut [A::Store]
Views the bit-array as a mutable slice of its underlying memory elements.
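A short, hedged sketch of the raw element views; the literal values are arbitrary:

use bitvec::prelude::*;

let mut arr = BitArray::<[u8; 2], Lsb0>::new([0x0F, 0xF0]);

// Read the underlying elements directly.
assert_eq!(arr.as_raw_slice(), &[0x0F, 0xF0]);

// Or write to them, bypassing the bit-level API.
arr.as_raw_mut_slice()[0] = 0xFF;
assert!(arr.as_bitslice()[.. 8].all());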
Methods from Deref<Target = BitSlice<A::Store, O>>
pub fn len(&self) -> usize
pub fn is_empty(&self) -> bool
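These behave like their standard-library slice counterparts, reporting the number of bits in the view; a brief illustration:

use bitvec::prelude::*;

let bits = bits![0, 1, 0];
assert_eq!(bits.len(), 3);
assert!(!bits.is_empty());
assert!(bits![].is_empty());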
pub fn first(&self) -> Option<BitRef<'_, Const, T, O>>

Gets a reference to the first bit of the bit-slice, or None if it is empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
assert_eq!(bits.first().as_deref(), Some(&true));
assert!(bits![].first().is_none());
pub fn first_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>

Gets a mutable reference to the first bit of the bit-slice, or None if it is empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut first) = bits.first_mut() {
    *first = true;
}
assert_eq!(bits, bits![1, 0, 0]);
assert!(bits![mut].first_mut().is_none());
pub fn split_first(&self) -> Option<(BitRef<'_, Const, T, O>, &Self)>

Splits the bit-slice into a reference to its first bit, and the rest of the bit-slice. Returns None when empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
let (first, rest) = bits.split_first().unwrap();
assert_eq!(first, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_first_mut(
    &mut self
) -> Option<(BitRef<'_, Mut, T::Alias, O>, &mut BitSlice<T::Alias, O>)>

Splits the bit-slice into mutable references to its first bit, and the rest of the bit-slice. Returns None when empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut first, rest)) = bits.split_first_mut() {
    *first = true;
    assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![1, 0, 0]);
pub fn split_last(&self) -> Option<(BitRef<'_, Const, T, O>, &Self)>

Splits the bit-slice into a reference to its last bit, and the rest of the bit-slice. Returns None when empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
let (last, rest) = bits.split_last().unwrap();
assert_eq!(last, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_last_mut(
    &mut self
) -> Option<(BitRef<'_, Mut, T::Alias, O>, &mut BitSlice<T::Alias, O>)>

Splits the bit-slice into mutable references to its last bit, and the rest of the bit-slice. Returns None when empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut last, rest)) = bits.split_last_mut() {
    *last = true;
    assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![0, 0, 1]);
pub fn last(&self) -> Option<BitRef<'_, Const, T, O>>

Gets a reference to the last bit of the bit-slice, or None if it is empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
assert_eq!(bits.last().as_deref(), Some(&true));
assert!(bits![].last().is_none());
pub fn last_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>

Gets a mutable reference to the last bit of the bit-slice, or None if it is empty.

API Differences
bitvec uses a custom structure for both read-only and mutable references to bool. This must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut last) = bits.last_mut() {
    *last = true;
}
assert_eq!(bits, bits![0, 0, 1]);
assert!(bits![mut].last_mut().is_none());
pub fn get<'a, I>(&'a self, index: I) -> Option<I::Immut>
where
    I: BitSliceIndex<'a, T, O>,

Gets a reference to a single bit or a subsection of the bit-slice, depending on the type of index.

- If given a usize, this produces a reference structure to the bool at the position.
- If given any form of range, this produces a smaller bit-slice.

This returns None if the index departs the bounds of self.

API Differences
BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
assert_eq!(bits.get(1).as_deref(), Some(&true));
assert_eq!(bits.get(0 .. 2), Some(bits![0, 1]));
assert!(bits.get(3).is_none());
assert!(bits.get(0 .. 4).is_none());
pub fn get_mut<'a, I>(&'a mut self, index: I) -> Option<I::Mut>
where
    I: BitSliceIndex<'a, T, O>,

Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.

- If given a usize, this produces a reference structure to the bool at the position.
- If given any form of range, this produces a smaller bit-slice.

This returns None if the index departs the bounds of self.

API Differences
BitSliceIndex uses discrete types for immutable and mutable references, rather than a single referent type.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
*bits.get_mut(0).unwrap() = true;
bits.get_mut(1 ..).unwrap().fill(true);
assert_eq!(bits, bits![1; 3]);
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> I::Immut
where
    I: BitSliceIndex<'a, T, O>,

Gets a reference to a single bit or to a subsection of the bit-slice, without bounds checking.

This has the same arguments and behavior as .get(), except that it does not check that index is in bounds.

Safety
You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory unsafety and/or undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
Examples
use bitvec::prelude::*;
let data = 0b0001_0010u8;
let bits = &data.view_bits::<Lsb0>()[.. 3];
unsafe {
    assert!(bits.get_unchecked(1));
    assert!(bits.get_unchecked(4));
}
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> I::Mut
where
    I: BitSliceIndex<'a, T, O>,

Gets a mutable reference to a single bit or a subsection of the bit-slice, depending on the type of index.

This has the same arguments and behavior as .get_mut(), except that it does not check that index is in bounds.

Safety
You must ensure that index is within bounds (within the range 0 .. self.len()), or this method will introduce memory unsafety and/or undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 3];
unsafe {
    bits.get_unchecked_mut(1).commit(true);
    bits.get_unchecked_mut(4 .. 6).fill(true);
}
assert_eq!(data, 0b0011_0010);
pub fn as_ptr(&self) -> BitPtr<Const, T, O>

👎 Deprecated: use .as_bitptr() instead

pub fn as_mut_ptr(&mut self) -> BitPtr<Mut, T, O>

👎 Deprecated: use .as_mut_bitptr() instead
pub fn as_ptr_range(&self) -> Range<BitPtr<Const, T, O>>

Produces a range of bit-pointers to each bit in the bit-slice.

This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_bitptr_range() instead, as it produces a custom structure that provides the expected ranging functionality.
pub fn as_mut_ptr_range(&mut self) -> Range<BitPtr<Mut, T, O>>

Produces a range of mutable bit-pointers to each bit in the bit-slice.

This is a standard-library range, which has no real functionality for pointer types. You should prefer .as_mut_bitptr_range() instead, as it produces a custom structure that provides the expected ranging functionality.
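A hedged sketch of the preferred .as_bitptr_range() alternative mentioned above; the bit pattern is arbitrary:

use bitvec::prelude::*;

let bits = bits![0, 1, 1, 0];
let mut ones = 0;
for ptr in bits.as_bitptr_range() {
    // Safety: every pointer in the range points into the live bit-slice above.
    if unsafe { ptr.read() } {
        ones += 1;
    }
}
assert_eq!(ones, 2);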
pub fn swap(&mut self, a: usize, b: usize)
pub fn reverse(&mut self)
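Both operate on bit positions, mirroring the standard-library slice methods of the same names; a brief hedged sketch:

use bitvec::prelude::*;

let bits = bits![mut 1, 0, 0, 1, 1];

// Exchange the bits at two indices.
bits.swap(0, 1);
assert_eq!(bits, bits![0, 1, 0, 1, 1]);

// Reverse the whole bit-slice in place.
bits.reverse();
assert_eq!(bits, bits![1, 1, 0, 1, 0]);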
pub fn iter(&self) -> Iter<'_, T, O>

Produces an iterator over each bit in the bit-slice.

API Differences
This iterator yields proxy-reference structures, not &bool. It can be adapted to yield &bool with the .by_refs() method, or bool with .by_vals().

This iterator, and its adapters, are fast. Do not try to be more clever than them by abusing .as_bitptr_range().
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let mut iter = bits.iter();
assert!(!iter.next().unwrap());
assert!( iter.next().unwrap());
assert!( iter.next_back().unwrap());
assert!(!iter.next_back().unwrap());
assert!( iter.next().is_none());
pub fn iter_mut(&mut self) -> IterMut<'_, T, O>

Produces a mutable iterator over each bit in the bit-slice.

API Differences
This iterator yields proxy-reference structures, not &mut bool. In addition, it marks each proxy as alias-tainted.

If you are using this in an ordinary loop and not keeping multiple yielded proxy-references alive at the same scope, you may use the .remove_alias() adapter to undo the alias marking.

This iterator is fast. Do not try to be more clever than it by abusing .as_mut_bitptr_range().
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
let mut iter = bits.iter_mut();
iter.nth(1).unwrap().commit(true); // index 1
iter.next_back().unwrap().commit(true); // index 3
assert!(iter.next().is_some()); // index 2
assert!(iter.next().is_none()); // complete
assert_eq!(bits, bits![0, 1, 0, 1]);
pub fn windows(&self, size: usize) -> Windows<'_, T, O>

Iterates over consecutive windowing subslices in a bit-slice.

Windows are overlapping views of the bit-slice. Each window advances one bit from the previous, so in a bit-slice [A, B, C, D, E], calling .windows(3) will yield [A, B, C], [B, C, D], and [C, D, E].

Panics
This panics if size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.windows(3);
assert_eq!(iter.next(), Some(bits![0, 1, 0]));
assert_eq!(iter.next(), Some(bits![1, 0, 0]));
assert_eq!(iter.next(), Some(bits![0, 0, 1]));
assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice.

Unlike .windows(), the subslices this yields do not overlap with each other. If self.len() is not an even multiple of chunk_size, then the last chunk yielded will be shorter.

Sibling Methods
- .chunks_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() does not yield the final chunk if it is shorter than chunk_size.
- .rchunks() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert_eq!(iter.next(), Some(bits![1]));
assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

Sibling Methods
- .chunks() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
- .rchunks_mut() iterates from the back of the bit-slice to the front, with the final, possibly-shorter, segment at the front edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
    bits.chunks_mut(2).remove_alias()
}.enumerate() {
    chunk.store(idx + 1);
}
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// ^^^^ ^^^^ ^
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice.

If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.

Sibling Methods
- .chunks() yields any leftover bits at the end as a shorter chunk during iteration.
- .chunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .rchunks_exact() iterates from the back of the bit-slice to the front, with the unyielded remainder segment at the front edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![1]);
pub fn chunks_exact_mut(&mut self, chunk_size: usize) -> ChunksExactMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice.

If self.len() is not an even multiple of chunk_size, then the last few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

Sibling Methods
- .chunks_mut() yields any leftover bits at the end as a shorter chunk during iteration.
- .chunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .rchunks_exact_mut() iterates from the back of the bit-slice forwards, with the unyielded remainder segment at the front edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.chunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
    chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// remainder ^
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice, from the back edge.

Unlike .chunks(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].

Sibling Methods
- .rchunks_mut() has the same division logic, but each yielded bit-slice is mutable.
- .rchunks_exact() does not yield the final chunk if it is shorter than chunk_size.
- .chunks() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert_eq!(iter.next(), Some(bits![0]));
assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.

Unlike .chunks_mut(), this aligns its chunks to the back edge of self. If self.len() is not an even multiple of chunk_size, then the leftover partial chunk is self[0 .. len % chunk_size].

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded values for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

Sibling Methods
- .rchunks() has the same division logic, but each yielded bit-slice is immutable.
- .rchunks_exact_mut() does not yield the final chunk if it is shorter than chunk_size.
- .chunks_mut() iterates from the front of the bit-slice to the back, with the final, possibly-shorter, segment at the back edge.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
    bits.rchunks_mut(2).remove_alias()
}.enumerate() {
    chunk.store(idx + 1);
}
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
// remainder ^ ^^^^ ^^^^
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, T, O>

Iterates over non-overlapping subslices of a bit-slice, from the back edge.

If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .remainder() method if the iterator is bound to a name.

Sibling Methods
- .rchunks() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() iterates from the front of the bit-slice to the back, with the unyielded remainder segment at the back edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![0]);
pub fn rchunks_exact_mut(
    &mut self,
    chunk_size: usize
) -> RChunksExactMut<'_, T, O>

Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.

If self.len() is not an even multiple of chunk_size, then the first few bits are not yielded by the iterator at all. They can be accessed with the .into_remainder() method if the iterator is bound to a name.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

Sibling Methods
- .rchunks_mut() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() iterates from the front of the bit-slice backwards, with the unyielded remainder segment at the back edge.

Panics
This panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.rchunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
    chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
// remainder ^
pub fn split_at(&self, mid: usize) -> (&Self, &Self)

Splits a bit-slice in two parts at an index.

The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.

If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.

This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].

Panics
This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 0, 1, 1, 1];
let base = bits.as_bitptr();
let (a, b) = bits.split_at(0);
assert_eq!(unsafe { a.as_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at(6);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at(3);
assert_eq!(a, bits![0; 3]);
assert_eq!(b, bits![1; 3]);
pub fn split_at_mut(
    &mut self,
    mid: usize
) -> (&mut BitSlice<T::Alias, O>, &mut BitSlice<T::Alias, O>)

Splits a mutable bit-slice in two parts at an index.

The returned bit-slices are self[.. mid] and self[mid ..]. mid is included in the right bit-slice, not the left.

If mid is 0 then the left bit-slice is empty; if it is self.len() then the right bit-slice is empty.

This method guarantees that even when either partition is empty, the encoded bit-pointer values of the bit-slice references are &self[0] and &self[mid].

API Differences
The end bits of the left half and the start bits of the right half might be stored in the same memory element. In order to avoid breaking bitvec's memory-safety guarantees, both bit-slices are marked as T::Alias. This marking allows them to be used without interfering with each other when they interact with memory.

Panics
This panics if mid is greater than self.len(). It is allowed to be equal to the length, in which case the right bit-slice is simply empty.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 6];
let base = bits.as_mut_bitptr();
let (a, b) = bits.split_at_mut(0);
assert_eq!(unsafe { a.as_mut_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at_mut(6);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at_mut(3);
a.store(3);
b.store(5);
assert_eq!(bits, bits![0, 1, 1, 1, 0, 1]);
pub fn split<F>(&self, pred: F) -> Split<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .split_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split_inclusive() includes the matched bit in the yielded bit-slice.
- .rsplit() iterates from the back of the bit-slice instead of the front.
- .splitn() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.split(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert_eq!(iter.next().unwrap(), bits![0]);
assert!(iter.next().is_none());
If the first bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the last bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.split(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .split() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_inclusive_mut() includes the matched bit in the yielded bit-slice.
- .rsplit_mut() iterates from the back of the bit-slice instead of the front.
- .splitn_mut() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.split_mut(|_pos, bit| *bit) {
    group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate. Unlike .split(), this does include the matching bit as the last bit in the yielded bit-slice.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .split_inclusive_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split() does not include the matched bit in the yielded bit-slice.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1];
// ^ ^
let mut iter = bits.split_inclusive(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
pub fn split_inclusive_mut<F>(
    &mut self,
    pred: F
) -> SplitInclusiveMut<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate. Unlike .split_mut(), this does include the matching bit as the last bit in the yielded bit-slice.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .split_inclusive() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_mut() does not include the matched bit in the yielded bit-slice.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 0, 0, 0];
// ^
for group in bits.split_inclusive_mut(|pos, _bit| pos % 3 == 2) {
    group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 0, 1, 0]);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate, from the back edge. The matched bit is not contained in the yielded bit-slices.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .rsplit_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split() iterates from the front of the bit-slice instead of the back.
- .rsplitn() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.rsplit(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
If the last bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the first bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.rsplit(|_pos, bit| *bit);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.rsplit(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate, from the back. The matched bit is not contained in the yielded bit-slices.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .rsplit() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_mut() iterates from the front of the bit-slice instead of the back.
- .rsplitn_mut() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.rsplit_mut(|_pos, bit| *bit) {
    group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split(), the yielded bit-slices do not contain the matched bit.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .splitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .rsplitn() iterates from the back of the bit-slice instead of the front.
- .split() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 0];
let mut iter = bits.splitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0]);
assert_eq!(iter.next().unwrap(), bits![0, 1, 0]);
assert!(iter.next().is_none());
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split_mut(), the yielded bit-slices do not contain the matched bit.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .splitn() has the same splitting logic, but each yielded bit-slice is immutable.
- .rsplitn_mut() iterates from the back of the bit-slice instead of the front.
- .split_mut() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
for group in bits.splitn_mut(2, |_pos, bit| *bit) {
    group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 0]);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over subslices separated by bits that match a predicate, from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split(), the yielded bit-slices do not contain the matched bit.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .rsplitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .splitn() iterates from the front of the bit-slice instead of the back.
- .rsplit() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 1, 0];
// ^
let mut iter = bits.rsplitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert!(iter.next().is_none());
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F>
where
    F: FnMut(usize, &bool) -> bool,

Iterates over mutable subslices separated by bits that match a predicate, from the back edge, giving up after yielding n times. The nth yield contains the rest of the bit-slice. As with .split_mut(), the yielded bit-slices do not contain the matched bit.

Iterators do not require that each yielded item is destroyed before the next is produced. This means that each bit-slice yielded must be marked as aliased. If you are using this in a loop that does not collect multiple yielded subslices for the same scope, then you can remove the alias marking by calling the (unsafe) method .remove_alias() on the iterator.

API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.

Sibling Methods
- .rsplitn() has the same splitting logic, but each yielded bit-slice is immutable.
- .splitn_mut() iterates from the front of the bit-slice instead of the back.
- .rsplit_mut() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 0, 1, 0, 0, 0];
for group in bits.rsplitn_mut(2, |_idx, bit| *bit) {
    group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 0, 0, 1, 1, 0, 0]);
// ^ group 2 ^ group 1
pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> bool
where
    T2: BitStore,
    O2: BitOrder,

Tests if the bit-slice contains the given sequence anywhere within it.

This scans over self.windows(other.len()) until one of the windows matches. The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 1, 0, 0];
assert!( bits.contains(bits![0, 1, 1, 0]));
assert!(!bits.contains(bits![1, 0, 0, 1]));
pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool
where
    T2: BitStore,
    O2: BitOrder,
Tests if the bit-slice begins with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.starts_with(bits![0, 1]));
assert!(!bits.starts_with(bits![1, 0]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.starts_with(empty));
assert!(empty.starts_with(empty));
sourcepub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool where
T2: BitStore,
O2: BitOrder,
pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool where
T2: BitStore,
O2: BitOrder,
Tests if the bit-slice ends with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.ends_with(bits![1, 0]));
assert!(!bits.ends_with(bits![0, 1]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.ends_with(empty));
assert!(empty.ends_with(empty));
sourcepub fn strip_prefix<T2, O2>(&self, prefix: &BitSlice<T2, O2>) -> Option<&Self> where
T2: BitStore,
O2: BitOrder,
pub fn strip_prefix<T2, O2>(&self, prefix: &BitSlice<T2, O2>) -> Option<&Self> where
T2: BitStore,
O2: BitOrder,
Removes a prefix bit-slice, if present.
Like .starts_with()
, the search key does not need to share type
parameters with the bit-slice being stripped. If
self.starts_with(prefix)
, then this returns Some(&self[prefix.len() ..])
, otherwise it returns None
.
Original
API Differences
BitSlice
does not support pattern searches; instead, it permits self
and prefix
to differ in type parameters.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_prefix(bits![0, 1]).unwrap(), bits[2 ..]);
assert_eq!(bits.strip_prefix(bits![0, 1, 0, 0,]).unwrap(), bits[4 ..]);
assert!(bits.strip_prefix(bits![1, 0]).is_none());
sourcepub fn strip_suffix<T2, O2>(&self, suffix: &BitSlice<T2, O2>) -> Option<&Self> where
T2: BitStore,
O2: BitOrder,
pub fn strip_suffix<T2, O2>(&self, suffix: &BitSlice<T2, O2>) -> Option<&Self> where
T2: BitStore,
O2: BitOrder,
Removes a suffix bit-slice, if present.
Like .ends_with()
, the search key does not need to share type
parameters with the bit-slice being stripped. If
self.ends_with(suffix)
, then this returns Some(&self[.. self.len() - suffix.len()])
, otherwise it returns None
.
Original
API Differences
BitSlice
does not support pattern searches; instead, it permits self
and suffix
to differ in type parameters.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_suffix(bits![1, 0]).unwrap(), bits[.. 7]);
assert_eq!(bits.strip_suffix(bits![0, 1, 1, 0]).unwrap(), bits[.. 5]);
assert!(bits.strip_suffix(bits![0, 1]).is_none());
sourcepub fn rotate_left(&mut self, by: usize)
pub fn rotate_left(&mut self, by: usize)
Rotates the contents of a bit-slice to the left (towards the zero index).
This essentially splits the bit-slice at by
, then exchanges the two
pieces. self[by ..] becomes the first section, and is then followed by self[.. by].
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
Original
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// split occurs here ^
bits.rotate_left(2);
assert_eq!(bits, bits![1, 0, 1, 0, 0, 0]);
sourcepub fn rotate_right(&mut self, by: usize)
pub fn rotate_right(&mut self, by: usize)
Rotates the contents of a bit-slice to the right (away from the zero index).
This essentially splits the bit-slice at self.len() - by
, then
exchanges the two pieces. self[len - by ..]
becomes the first section,
and is then followed by self[.. len - by]
.
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
Original
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 1, 1, 0];
// split occurs here ^
bits.rotate_right(2);
assert_eq!(bits, bits![1, 0, 0, 0, 1, 1]);
sourcepub fn fill(&mut self, value: bool)
pub fn fill(&mut self, value: bool)
Fills the bit-slice with a given bit.
This is a recent stabilization in the standard library. bitvec
previously offered this behavior as the novel API .set_all()
. That
method name is now removed in favor of this standard-library analogue.
Original
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill(true);
assert_eq!(bits, bits![1; 5]);
sourcepub fn fill_with<F>(&mut self, func: F) where
F: FnMut(usize) -> bool,
pub fn fill_with<F>(&mut self, func: F) where
F: FnMut(usize) -> bool,
Fills the bit-slice with bits produced by a generator function.
Original
API Differences
The generator function receives the index of the bit being initialized as an argument.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill_with(|idx| idx % 2 == 0);
assert_eq!(bits, bits![1, 0, 1, 0, 1]);
pub fn clone_from_slice<T2, O2>(&mut self, src: &BitSlice<T2, O2>) where
T2: BitStore,
O2: BitOrder,
use .clone_from_bitslice()
instead
pub fn copy_from_slice(&mut self, src: &Self)
use .copy_from_bitslice()
instead
sourcepub fn copy_within<R>(&mut self, src: R, dest: usize) where
R: RangeExt<usize>,
pub fn copy_within<R>(&mut self, src: R, dest: usize) where
R: RangeExt<usize>,
Copies a span of bits to another location in the bit-slice.
src is the range of bit-indices in the bit-slice to copy, and dest is the starting index of the destination range. src and dest .. dest + src.len() are permitted to overlap; the copy will automatically detect and manage this. However, both src and dest .. dest + src.len() must fall within the bounds of self.
Original
Panics
This panics if either the source or destination range exceed
self.len()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0];
bits.copy_within(1 .. 5, 8);
// v v v v
assert_eq!(bits, bits![1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]);
// ^ ^ ^ ^
pub fn swap_with_slice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>) where
T2: BitStore,
O2: BitOrder,
use .swap_with_bitslice()
instead
sourcepub unsafe fn align_to<U>(&self) -> (&Self, &BitSlice<U, O>, &Self) where
U: BitStore,
pub unsafe fn align_to<U>(&self) -> (&Self, &BitSlice<U, O>, &Self) where
U: BitStore,
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
Original
Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
Examples
use bitvec::prelude::*;
let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
sourcepub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut Self, &mut BitSlice<U, O>, &mut Self) where
U: BitStore,
pub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut Self, &mut BitSlice<U, O>, &mut Self) where
U: BitStore,
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
Original
Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
Examples
use bitvec::prelude::*;
let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits_mut::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to_mut::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
pub fn to_vec(&self) -> BitVec<T::Unalias, O>
use .to_bitvec() instead
sourcepub fn repeat(&self, n: usize) -> BitVec<T::Unalias, O>
Creates a bit-vector by repeating a bit-slice n
times.
Original
Panics
This method panics if self.len() * n
exceeds the BitVec
capacity.
Examples
use bitvec::prelude::*;
assert_eq!(bits![0, 1].repeat(3), bitvec![0, 1, 0, 1, 0, 1]);
This panics by exceeding bit-vector maximum capacity:
use bitvec::prelude::*;
bits![0, 1].repeat(BitSlice::<usize, Lsb0>::MAX_BITS);
sourcepub fn as_bitptr(&self) -> BitPtr<Const, T, O>
pub fn as_bitptr(&self) -> BitPtr<Const, T, O>
Gets a raw pointer to the zeroth bit of the bit-slice.
Original
API Differences
This is renamed in order to indicate that it is returning a bitvec
structure, not a raw pointer.
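For illustration, a minimal sketch (reading through the returned pointer mirrors raw-pointer reads and requires unsafe):
use bitvec::prelude::*;
let bits = bits![0, 1];
let head = bits.as_bitptr();
// Reading through the bit-pointer requires `unsafe`, like a raw `*const T`.
assert!(!unsafe { head.read() });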
sourcepub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
Gets a raw, write-capable pointer to the zeroth bit of the bit-slice.
Original
API Differences
This is renamed in order to indicate that it is returning a bitvec
structure, not a raw pointer.
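A minimal sketch of writing through the returned pointer (assuming BitPtr::write mirrors raw-pointer writes):
use bitvec::prelude::*;
let bits = bits![mut 0, 0];
let head = bits.as_mut_bitptr();
// Writing through the bit-pointer requires `unsafe`, like a raw `*mut T`.
unsafe { head.write(true); }
assert_eq!(bits, bits![1, 0]);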
sourcepub fn as_bitptr_range(&self) -> BitPtrRange<Const, T, O>
Views the bit-slice as a half-open range of bit-pointers, to its first bit in the bit-slice and first bit beyond it.
Original
API Differences
This is renamed to indicate that it returns a bitvec
structure, rather
than an ordinary Range
.
Notes
BitSlice
does define a .as_ptr_range()
, which returns a
Range<BitPtr>
. BitPtrRange
has additional capabilities that
Range<*const T>
and Range<BitPtr>
do not.
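For illustration, a minimal sketch (assuming each yielded BitPtr supports an unsafe read, as raw pointers do):
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
// The range iterates one bit-pointer per bit, in index order.
for (ptr, expected) in bits.as_bitptr_range().zip([false, true, false, true]) {
    assert_eq!(unsafe { ptr.read() }, expected);
}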
sourcepub fn as_mut_bitptr_range(&mut self) -> BitPtrRange<Mut, T, O>
Views the bit-slice as a half-open range of write-capable bit-pointers, to its first bit in the bit-slice and the first bit beyond it.
Original
API Differences
This is renamed to indicate that it returns a bitvec
structure, rather
than an ordinary Range
.
Notes
BitSlice
does define a .as_mut_ptr_range(), which returns a
Range<BitPtr>
. BitPtrRange
has additional capabilities that
Range<*mut T>
and Range<BitPtr>
do not.
sourcepub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>) where
T2: BitStore,
O2: BitOrder,
pub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>) where
T2: BitStore,
O2: BitOrder,
Copies the bits from src
into self
.
self
and src
must have the same length.
Performance
If src
has the same type arguments as self
, it will use the same
implementation as .copy_from_bitslice()
; if you know that this will
always be the case, you should prefer to use that method directly.
Only .copy_from_bitslice()
is able to perform acceleration; this
method is always required to perform a bit-by-bit crawl over both
bit-slices.
Original
API Differences
This is renamed to reflect that it copies from another bit-slice, not from an element slice.
In order to support general usage, it allows src
to have different
type parameters than self
, at the cost of performance optimizations.
Panics
This panics if the two bit-slices have different lengths.
Examples
use bitvec::prelude::*;
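// A minimal sketch (not the crate's own example): clone from a source with
// different type parameters into a destination of the same length.
let dst = bits![mut 0; 8];
let src = bits![u8, Msb0; 1; 8];
dst.clone_from_bitslice(src);
assert_eq!(dst.count_ones(), 8);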
sourcepub fn copy_from_bitslice(&mut self, src: &Self)
pub fn copy_from_bitslice(&mut self, src: &Self)
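Copies the bits from src into self, where src shares self's exact type parameters; this is the accelerated counterpart of .clone_from_bitslice(). A minimal sketch (assuming, as with the standard library's copy_from_slice, that both bit-slices must have the same length):
use bitvec::prelude::*;
let dst = bits![mut 0; 4];
let src = bits![1, 0, 1, 1];
dst.copy_from_bitslice(src);
assert_eq!(dst, src);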
sourcepub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>) where
T2: BitStore,
O2: BitOrder,
pub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>) where
T2: BitStore,
O2: BitOrder,
Swaps the contents of two bit-slices.
self
and other
must have the same length.
Original
API Differences
This method is renamed, as it takes a bit-slice rather than an element slice.
Panics
This panics if the two bit-slices have different lengths.
Examples
use bitvec::prelude::*;
let mut one = [0xA5u8, 0x69];
let mut two = 0x1234u16;
let one_bits = one.view_bits_mut::<Msb0>();
let two_bits = two.view_bits_mut::<Lsb0>();
one_bits.swap_with_bitslice(two_bits);
assert_eq!(one, [0x2C, 0x48]);
assert_eq!(two, 0x96A5);
sourcepub fn set(&mut self, index: usize, value: bool)
pub fn set(&mut self, index: usize, value: bool)
Writes a new value into a single bit.
This is the replacement for *slice[index] = value;
, as bitvec
is not
able to express that under the current IndexMut
API signature.
Parameters
&mut self
index
: The bit-index to set. It must be in0 .. self.len()
.value
: The new bit-value to write into the bit atindex
.
Panics
This panics if index
is out of bounds.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 1];
bits.set(0, true);
bits.set(1, false);
assert_eq!(bits, bits![1, 0]);
sourcepub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
Writes a new value into a single bit, without bounds checking.
Parameters
&mut self
index
: The bit-index to set. It must be in0 .. self.len()
.value
: The new bit-value to write into the bit atindex
.
Safety
You must ensure that index
is in the range 0 .. self.len()
.
This performs bit-pointer offset arithmetic without doing any bounds
checks. If index
is out of bounds, then this will issue an
out-of-bounds access and will trigger memory unsafety.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 2];
assert_eq!(bits.len(), 2);
unsafe {
bits.set_unchecked(3, true);
}
assert_eq!(data, 8);
sourcepub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool
pub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool
sourcepub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
Swaps two bits in a bit-slice, without bounds checking.
See .swap()
for documentation.
Safety
You must ensure that a
and b
are both in the range 0 .. self.len()
.
This method performs bit-pointer offset arithmetic without doing any
bounds checks. If a
or b
are out of bounds, then this will issue an
out-of-bounds access and will trigger memory unsafety.
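A minimal sketch, upholding the bounds obligation manually:
use bitvec::prelude::*;
let bits = bits![mut 0, 1];
// Both indices are known to be within `0 .. bits.len()`.
unsafe { bits.swap_unchecked(0, 1); }
assert_eq!(bits, bits![1, 0]);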
sourcepub unsafe fn split_at_unchecked(&self, mid: usize) -> (&Self, &Self)
pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&Self, &Self)
Splits a bit-slice at an index, without bounds checking.
See .split_at()
for documentation.
Safety
You must ensure that mid
is in the range 0 ..= self.len()
.
This method produces new bit-slice references. If mid
is out of
bounds, its behavior is library-level undefined. You must
conservatively assume that an out-of-bounds split point produces
compiler-level UB.
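A minimal sketch, with mid known to be in bounds:
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 1];
// `mid == 2` is within `0 ..= bits.len()`, so the call is sound.
let (left, right) = unsafe { bits.split_at_unchecked(2) };
assert_eq!(left, bits![0, 0]);
assert_eq!(right, bits![1, 1]);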
sourcepub unsafe fn split_at_unchecked_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<T::Alias, O>, &mut BitSlice<T::Alias, O>)
pub unsafe fn split_at_unchecked_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<T::Alias, O>, &mut BitSlice<T::Alias, O>)
Splits a mutable bit-slice at an index, without bounds checking.
See .split_at_mut()
for documentation.
Safety
You must ensure that mid
is in the range 0 ..= self.len()
.
This method produces new bit-slice references. If mid
is out of
bounds, its behavior is library-level undefined. You must
conservatively assume that an out-of-bounds split point produces
compiler-level UB.
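A minimal sketch; both yielded halves carry the T::Alias marking, as with .split_at_mut():
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
// `mid == 2` is within `0 ..= bits.len()`, so the call is sound.
let (left, right) = unsafe { bits.split_at_unchecked_mut(2) };
left.fill(true);
right.set(0, true);
assert_eq!(bits, bits![1, 1, 1, 0]);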
sourcepub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize) where
R: RangeExt<usize>,
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize) where
R: RangeExt<usize>,
Copies bits from one region of the bit-slice to another region of itself, without doing bounds checks.
The regions are allowed to overlap.
Parameters
&mut self
src
: The range withinself
from which to copy.dst
: The starting index withinself
at which to paste.
Effects
self[src]
is copied to self[dest .. dest + src.len()]
. The bits of
self[src]
are in an unspecified, but initialized, state.
Safety
src.end()
and dest + src.len()
must be entirely within bounds.
Examples
use bitvec::prelude::*;
let mut data = 0b1011_0000u8;
let bits = data.view_bits_mut::<Msb0>();
unsafe {
bits.copy_within_unchecked(.. 4, 2);
}
assert_eq!(data, 0b1010_1100);
sourcepub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>
pub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>
Partitions a bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &BitSlice
that is as large as possible without
requiring alias protection, as well as any bits that were not able to be
included in the unaliased bit-slice.
sourcepub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>
pub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>
Partitions a mutable bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &mut BitSlice
that is as large as possible
without requiring alias protection, as well as any bits that were not
able to be included in the unaliased bit-slice.
sourcepub fn domain(&self) -> Domain<'_, Const, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &[T]
slice with alias protections removed, covering
all elements that self
completely fills. Partially-used elements on
either the front or back edge of the slice are returned separately.
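As a small illustration, a sketch that assumes a view spanning whole storage elements, so the domain has no partial edge elements and its iterator yields each element's value:
use bitvec::prelude::*;
let data = [1u8, 2, 3];
let bits = data.view_bits::<Lsb0>();
// The view covers whole elements only, so every element value is yielded.
let elems: Vec<u8> = bits.domain().collect();
assert_eq!(elems, [1, 2, 3]);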
sourcepub fn domain_mut(&mut self) -> Domain<'_, Mut, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &mut [T]
slice with alias protections removed,
covering all elements that self
completely fills. Partially-used
elements on the front or back edge of the slice are returned separately.
sourcepub fn count_ones(&self) -> usize
pub fn count_ones(&self) -> usize
Counts the number of bits set to 1
in the bit-slice contents.
Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_ones(), 2);
assert_eq!(bits[2 ..].count_ones(), 0);
assert_eq!(bits![].count_ones(), 0);
sourcepub fn count_zeros(&self) -> usize
pub fn count_zeros(&self) -> usize
Counts the number of bits cleared to 0
in the bit-slice contents.
Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_zeros(), 0);
assert_eq!(bits[2 ..].count_zeros(), 2);
assert_eq!(bits![].count_zeros(), 0);
sourcepub fn iter_ones(&self) -> IterOnes<'_, T, O>
Enumerates the index of each bit in a bit-slice set to 1
.
This is a shorthand for a .enumerate().filter_map()
iterator that
selects the index of each true
bit; however, its implementation is
eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0
and Msb0
orderings allow processors
with instructions that seek particular bits within an element to operate
on whole elements, rather than on each bit individually.
Examples
This example uses .iter_ones()
, a .filter_map()
that finds the index
of each set bit, and the known indices, in order to show that they have
equivalent behavior.
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 0, 0, 1];
let iter_ones = bits.iter_ones();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if bit { Some(idx) } else { None });
let all = iter_ones.zip(known_indices).zip(filter);
for ((iter_one, known), filtered) in all {
assert_eq!(iter_one, known);
assert_eq!(known, filtered);
}
sourcepub fn iter_zeros(&self) -> IterZeros<'_, T, O>
Enumerates the index of each bit in a bit-slice cleared to 0
.
This is a shorthand for a .enumerate().filter_map()
iterator that
selects the index of each false
bit; however, its implementation is
eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0
and Msb0
orderings allow processors
with instructions that seek particular bits within an element to operate
on whole elements, rather than on each bit individually.
Examples
This example uses .iter_zeros()
, a .filter_map()
that finds the
index of each cleared bit, and the known indices, in order to show that
they have equivalent behavior.
use bitvec::prelude::*;
let bits = bits![1, 0, 1, 1, 0, 1, 1, 1, 0];
let iter_zeros = bits.iter_zeros();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if !bit { Some(idx) } else { None });
let all = iter_zeros.zip(known_indices).zip(filter);
for ((iter_zero, known), filtered) in all {
assert_eq!(iter_zero, known);
assert_eq!(known, filtered);
}
sourcepub fn first_one(&self) -> Option<usize>
pub fn first_one(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice set to 1
.
Returns None
if there is no true
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].first_one().is_none());
assert!(bits![0].first_one().is_none());
assert_eq!(bits![0, 1].first_one(), Some(1));
sourcepub fn first_zero(&self) -> Option<usize>
pub fn first_zero(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice cleared to 0
.
Returns None
if there is no false
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].first_zero().is_none());
assert!(bits![1].first_zero().is_none());
assert_eq!(bits![1, 0].first_zero(), Some(1));
sourcepub fn last_one(&self) -> Option<usize>
pub fn last_one(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice set to 1
.
Returns None
if there is no true
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].last_one().is_none());
assert!(bits![0].last_one().is_none());
assert_eq!(bits![1, 0].last_one(), Some(0));
sourcepub fn last_zero(&self) -> Option<usize>
pub fn last_zero(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice cleared to 0
.
Returns None
if there is no false
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].last_zero().is_none());
assert!(bits![1].last_zero().is_none());
assert_eq!(bits![0, 1].last_zero(), Some(0));
sourcepub fn leading_ones(&self) -> usize
pub fn leading_ones(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first
bit set to 0
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_ones(), 0);
assert_eq!(bits![0].leading_ones(), 0);
assert_eq!(bits![1, 0].leading_ones(), 1);
sourcepub fn leading_zeros(&self) -> usize
pub fn leading_zeros(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first
bit set to 1
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_zeros(), 0);
assert_eq!(bits![1].leading_zeros(), 0);
assert_eq!(bits![0, 1].leading_zeros(), 1);
sourcepub fn trailing_ones(&self) -> usize
pub fn trailing_ones(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit
set to 0
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_ones(), 0);
assert_eq!(bits![0].trailing_ones(), 0);
assert_eq!(bits![0, 1].trailing_ones(), 1);
sourcepub fn trailing_zeros(&self) -> usize
pub fn trailing_zeros(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit
set to 1
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_zeros(), 0);
assert_eq!(bits![1].trailing_zeros(), 0);
assert_eq!(bits![1, 0].trailing_zeros(), 1);
sourcepub fn any(&self) -> bool
pub fn any(&self) -> bool
Tests if there is at least one bit set to 1
in the bit-slice.
Returns false
when self
is empty.
Examples
use bitvec::prelude::*;
assert!(!bits![].any());
assert!(!bits![0].any());
assert!(bits![0, 1].any());
sourcepub fn all(&self) -> bool
pub fn all(&self) -> bool
Tests if every bit is set to 1
in the bit-slice.
Returns true
when self
is empty.
Examples
use bitvec::prelude::*;
assert!( bits![].all());
assert!(!bits![0].all());
assert!( bits![1].all());
sourcepub fn not_any(&self) -> bool
pub fn not_any(&self) -> bool
Tests if every bit is cleared to 0
in the bit-slice.
Returns true
when self
is empty.
Examples
use bitvec::prelude::*;
assert!( bits![].not_any());
assert!(!bits![1].not_any());
assert!( bits![0].not_any());
sourcepub fn not_all(&self) -> bool
pub fn not_all(&self) -> bool
Tests if at least one bit is cleared to 0
in the bit-slice.
Returns false
when self
is empty.
Examples
use bitvec::prelude::*;
assert!(!bits![].not_all());
assert!(!bits![1].not_all());
assert!( bits![0].not_all());
sourcepub fn some(&self) -> bool
pub fn some(&self) -> bool
Tests if at least one bit is set to 1
, and at least one bit is cleared
to 0
, in the bit-slice.
Returns false
when self
is empty.
Examples
use bitvec::prelude::*;
assert!(!bits![].some());
assert!(!bits![0].some());
assert!(!bits![1].some());
assert!( bits![0, 1].some());
sourcepub fn shift_left(&mut self, by: usize)
pub fn shift_left(&mut self, by: usize)
Shifts the contents of a bit-slice “left” (towards the zero-index),
clearing the “right” bits to 0
.
This is a strictly-worse analogue to taking bits = &bits[by ..]
: it
has to modify the entire memory region that bits
governs, and destroys
contained information. Unless the actual memory layout and contents of
your bit-slice matters to your program, you should probably prefer to
munch your way forward through a bit-slice handle.
Note also that the “left” here is semantic only, and does not necessarily correspond to a left-shift instruction applied to the underlying integer storage.
This has no effect when by
is 0
. When by
is self.len()
, the
bit-slice is entirely cleared to 0
.
Panics
This panics if by
is not less than self.len()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits are retained ^--------------------------^
bits.shift_left(2);
assert_eq!(bits, bits![1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_left(2);
assert_eq!(bits, bits![0; 2]);
sourcepub fn shift_right(&mut self, by: usize)
pub fn shift_right(&mut self, by: usize)
Shifts the contents of a bit-slice “right” (away from the zero-index),
clearing the “left” bits to 0
.
This is a strictly-worse analogue to taking bits = &bits[.. bits.len() - by]: it must modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matters to your program, you should probably prefer to munch your way backward through a bit-slice handle.
Note also that the “right” here is semantic only, and does not necessarily correspond to a right-shift instruction applied to the underlying integer storage.
This has no effect when by
is 0
. When by
is self.len()
, the
bit-slice is entirely cleared to 0
.
Panics
This panics if by
is not less than self.len()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits stay ^--------------------------^
bits.shift_right(2);
assert_eq!(bits, bits![0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_right(2);
assert_eq!(bits, bits![0; 2]);
sourcepub fn set_aliased(&self, index: usize, value: bool)
pub fn set_aliased(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations.
This is equivalent to .set()
, except that it does not require an
&mut
reference, and allows bit-slices with alias-safe storage to share
write permissions.
Parameters
&self
: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.index
: The bit index to set. It must be in0 .. self.len()
.value
: The new bit-value to write into the bit atindex
.
Panics
This panics if index
is out of bounds.
Examples
use bitvec::prelude::*;
use core::cell::Cell;
let bits: &BitSlice<_, _> = bits![Cell<usize>, Lsb0; 0, 1];
bits.set_aliased(0, true);
bits.set_aliased(1, false);
assert_eq!(bits, bits![1, 0]);
sourcepub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)
pub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations and without bounds checking.
This is equivalent to .set_unchecked()
, except that it does not
require an &mut
reference, and allows bit-slices with alias-safe
storage to share write permissions.
Parameters
&self
: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.index
: The bit index to set. It must be in0 .. self.len()
.value
: The new bit-value to write into the bit atindex
.
Safety
The caller must ensure that index
is not out of bounds.
Examples
use bitvec::prelude::*;
use core::cell::Cell;
let data = Cell::new(0u8);
let bits = &data.view_bits::<Lsb0>()[.. 2];
unsafe {
bits.set_aliased_unchecked(3, true);
}
assert_eq!(data.get(), 8);
pub const MAX_BITS: usize = 2_305_843_009_213_693_951usize
pub const MAX_ELTS: usize = BitSpan<Const, T, O>::REGION_MAX_ELTS
sourcepub fn to_bitvec(&self) -> BitVec<T::Unalias, O>
Copies a bit-slice into an owned bit-vector.
Since the new vector is freshly owned, this gets marked as ::Unalias
to remove any guards that may have been inserted by the bit-slice’s
history.
It does not change the underlying memory type, so a BitSlice<Cell<_>, _> will produce a BitVec<Cell<_>, _>.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let bv = bits.to_bitvec();
assert_eq!(bits, bv);
Trait Implementations
sourceimpl<A, O> AsMut<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourcefn as_mut(&mut self) -> &mut BitSlice<A::Store, O>
Converts this type into a mutable reference of the (usually inferred) input type.
sourceimpl<A, O> AsRef<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourcefn as_ref(&self) -> &BitSlice<A::Store, O>
Converts this type into a shared reference of the (usually inferred) input type.
sourceimpl<A, O> Binary for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
impl<A, O> Binary for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
sourceimpl<A, O, Rhs> BitAnd<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitAndAssign<Rhs>,
impl<A, O, Rhs> BitAnd<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitAndAssign<Rhs>,
sourceimpl<A, O> BitAndAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitAndAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
sourcefn bitand_assign(&mut self, rhs: &BitArray<A, O>)
fn bitand_assign(&mut self, rhs: &BitArray<A, O>)
Performs the &=
operation. Read more
sourceimpl<A, O> BitAndAssign<BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitAndAssign<BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
sourcefn bitand_assign(&mut self, rhs: BitArray<A, O>)
fn bitand_assign(&mut self, rhs: BitArray<A, O>)
Performs the &=
operation. Read more
sourceimpl<A, O, Rhs> BitAndAssign<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitAndAssign<Rhs>,
impl<A, O, Rhs> BitAndAssign<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitAndAssign<Rhs>,
sourcefn bitand_assign(&mut self, rhs: Rhs)
fn bitand_assign(&mut self, rhs: Rhs)
Performs the &=
operation. Read more
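For example, a bit-array can be AND-assigned with another bit-array of the same type, since the bit-slice impls above satisfy the Rhs bound (a minimal sketch):
use bitvec::prelude::*;
let mut a = bitarr![u8, Lsb0; 1, 1, 0, 0];
let b = bitarr![u8, Lsb0; 1, 0, 1, 0];
a &= b;
// Only positions set in both operands remain set.
assert!(a[0]);
assert!(!a[1] && !a[2] && !a[3]);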
sourceimpl<A, O> BitField for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
BitSlice<A::Store, O>: BitField,
impl<A, O> BitField for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
BitSlice<A::Store, O>: BitField,
Bit-Array Implementation of BitField
The BitArray
implementation is only ever called when the entire bit-array is
available for use, which means it can skip the bit-slice memory detection and
instead use the underlying storage elements directly.
The implementation still performs the segmentation for each element contained in the array, in order to maintain value consistency so that viewing the array as a bit-slice is still able to correctly interact with data contained in it.
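A minimal sketch of round-tripping an integer through a bit-array, using the BitField accessors store_le and load_le:
use bitvec::prelude::*;
let mut arr = bitarr![u8, Lsb0; 0; 16];
// The whole 16-bit array is available, so the array-level implementation is used.
arr.store_le::<u16>(0x1234);
assert_eq!(arr.load_le::<u16>(), 0x1234);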
sourceimpl<A, O, Rhs> BitOr<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitOrAssign<Rhs>,
impl<A, O, Rhs> BitOr<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitOrAssign<Rhs>,
sourceimpl<A, O> BitOrAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitOrAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
sourcefn bitor_assign(&mut self, rhs: &BitArray<A, O>)
fn bitor_assign(&mut self, rhs: &BitArray<A, O>)
Performs the |=
operation. Read more
sourceimpl<A, O> BitOrAssign<BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitOrAssign<BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
sourcefn bitor_assign(&mut self, rhs: BitArray<A, O>)
fn bitor_assign(&mut self, rhs: BitArray<A, O>)
Performs the |=
operation. Read more
sourceimpl<A, O, Rhs> BitOrAssign<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitOrAssign<Rhs>,
impl<A, O, Rhs> BitOrAssign<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitOrAssign<Rhs>,
sourcefn bitor_assign(&mut self, rhs: Rhs)
fn bitor_assign(&mut self, rhs: Rhs)
Performs the |=
operation. Read more
sourceimpl<A, O, Rhs> BitXor<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitXorAssign<Rhs>,
impl<A, O, Rhs> BitXor<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitXorAssign<Rhs>,
sourceimpl<A, O> BitXorAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitXorAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
sourcefn bitxor_assign(&mut self, rhs: &BitArray<A, O>)
fn bitxor_assign(&mut self, rhs: &BitArray<A, O>)
Performs the ^=
operation. Read more
sourceimpl<A, O> BitXorAssign<BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> BitXorAssign<BitArray<A, O>> for BitSlice<A::Store, O> where
A: BitViewSized,
O: BitOrder,
sourcefn bitxor_assign(&mut self, rhs: BitArray<A, O>)
fn bitxor_assign(&mut self, rhs: BitArray<A, O>)
Performs the ^=
operation. Read more
sourceimpl<A, O, Rhs> BitXorAssign<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitXorAssign<Rhs>,
impl<A, O, Rhs> BitXorAssign<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: BitXorAssign<Rhs>,
sourcefn bitxor_assign(&mut self, rhs: Rhs)
fn bitxor_assign(&mut self, rhs: Rhs)
Performs the ^=
operation. Read more
sourceimpl<A, O> Borrow<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourcefn borrow(&self) -> &BitSlice<A::Store, O>
Immutably borrows from an owned value. Read more
sourceimpl<A, O> BorrowMut<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourcefn borrow_mut(&mut self) -> &mut BitSlice<A::Store, O>
Mutably borrows from an owned value. Read more
sourceimpl<A, O> Clone for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Clone for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> Debug for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Debug for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> Default for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Default for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> Deref for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Deref for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> DerefMut for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> DerefMut for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<'de, T, O, const N: usize> Deserialize<'de> for BitArray<[T; N], O> where
T: BitStore,
O: BitOrder,
T::Mem: Deserialize<'de>,
impl<'de, T, O, const N: usize> Deserialize<'de> for BitArray<[T; N], O> where
T: BitStore,
O: BitOrder,
T::Mem: Deserialize<'de>,
sourcefn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
sourceimpl<'de, T, O> Deserialize<'de> for BitArray<T, O> where
T: BitStore,
O: BitOrder,
T::Mem: Deserialize<'de>,
impl<'de, T, O> Deserialize<'de> for BitArray<T, O> where
T: BitStore,
O: BitOrder,
T::Mem: Deserialize<'de>,
sourcefn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
sourceimpl<A, O> Display for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
impl<A, O> Display for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
sourceimpl<A, O> From<A> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> From<A> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
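For instance, wrapping an ordinary array by value (a minimal sketch):
use bitvec::prelude::*;
let arr: BitArray<[u8; 2], Msb0> = BitArray::from([0x80u8, 0x01]);
// With Msb0 ordering, the high bit of the first byte is index 0.
assert!(arr[0]);
assert!(arr[15]);
assert_eq!(arr.count_ones(), 2);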
sourceimpl<A, O> Hash for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Hash for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O, Idx> Index<Idx> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: Index<Idx>,
impl<A, O, Idx> Index<Idx> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: Index<Idx>,
sourceimpl<A, O, Idx> IndexMut<Idx> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: IndexMut<Idx>,
impl<A, O, Idx> IndexMut<Idx> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
BitSlice<A::Store, O>: IndexMut<Idx>,
sourceimpl<'a, A, O> IntoIterator for &'a BitArray<A, O> where
O: BitOrder,
A: 'a + BitViewSized,
impl<'a, A, O> IntoIterator for &'a BitArray<A, O> where
O: BitOrder,
A: 'a + BitViewSized,
sourceimpl<'a, A, O> IntoIterator for &'a mut BitArray<A, O> where
O: BitOrder,
A: 'a + BitViewSized,
impl<'a, A, O> IntoIterator for &'a mut BitArray<A, O> where
O: BitOrder,
A: 'a + BitViewSized,
sourceimpl<A, O> IntoIterator for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> IntoIterator for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> LowerHex for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
impl<A, O> LowerHex for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
sourceimpl<A, O> Not for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Not for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
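A minimal sketch; negation consumes the array and flips every bit of its storage:
use bitvec::prelude::*;
let arr = bitarr![u8, Lsb0; 0; 8];
let inv = !arr;
assert!(inv.all());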
sourceimpl<A, O> Octal for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
impl<A, O> Octal for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
sourceimpl<A, O> Ord for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Ord for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<O1, A, O2, T> PartialEq<BitArray<A, O2>> for BitSlice<T, O1> where
O1: BitOrder,
O2: BitOrder,
A: BitViewSized,
T: BitStore,
impl<O1, A, O2, T> PartialEq<BitArray<A, O2>> for BitSlice<T, O1> where
O1: BitOrder,
O2: BitOrder,
A: BitViewSized,
T: BitStore,
sourceimpl<A, O, Rhs> PartialEq<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
Rhs: ?Sized,
BitSlice<A::Store, O>: PartialEq<Rhs>,
impl<A, O, Rhs> PartialEq<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
Rhs: ?Sized,
BitSlice<A::Store, O>: PartialEq<Rhs>,
sourceimpl<A, T, O> PartialOrd<BitArray<A, O>> for BitSlice<T, O> where
A: BitViewSized,
T: BitStore,
O: BitOrder,
impl<A, T, O> PartialOrd<BitArray<A, O>> for BitSlice<T, O> where
A: BitViewSized,
T: BitStore,
O: BitOrder,
sourcefn partial_cmp(&self, other: &BitArray<A, O>) -> Option<Ordering>
fn partial_cmp(&self, other: &BitArray<A, O>) -> Option<Ordering>
This method returns an ordering between self
and other
values if one exists. Read more
1.0.0 · sourcefn lt(&self, other: &Rhs) -> bool
fn lt(&self, other: &Rhs) -> bool
This method tests less than (for self
and other
) and is used by the <
operator. Read more
1.0.0 · sourcefn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for self
and other
) and is used by the <=
operator. Read more
sourceimpl<A, O, Rhs> PartialOrd<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
Rhs: ?Sized,
BitSlice<A::Store, O>: PartialOrd<Rhs>,
impl<A, O, Rhs> PartialOrd<Rhs> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
Rhs: ?Sized,
BitSlice<A::Store, O>: PartialOrd<Rhs>,
sourcefn partial_cmp(&self, other: &Rhs) -> Option<Ordering>
fn partial_cmp(&self, other: &Rhs) -> Option<Ordering>
This method returns an ordering between self
and other
values if one exists. Read more
1.0.0 · sourcefn lt(&self, other: &Rhs) -> bool
fn lt(&self, other: &Rhs) -> bool
This method tests less than (for self
and other
) and is used by the <
operator. Read more
1.0.0 · sourcefn le(&self, other: &Rhs) -> bool
fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for self
and other
) and is used by the <=
operator. Read more
sourceimpl<T, O, const N: usize> Serialize for BitArray<[T; N], O> where
T: BitStore,
O: BitOrder,
T::Mem: Serialize,
impl<T, O, const N: usize> Serialize for BitArray<[T; N], O> where
T: BitStore,
O: BitOrder,
T::Mem: Serialize,
sourcefn serialize<S>(
&self,
serializer: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where
S: Serializer,
fn serialize<S>(
&self,
serializer: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where
S: Serializer,
Serialize this value into the given Serde serializer. Read more
sourceimpl<T, O> Serialize for BitArray<T, O> where
T: BitStore,
O: BitOrder,
T::Mem: Serialize,
impl<T, O> Serialize for BitArray<T, O> where
T: BitStore,
O: BitOrder,
T::Mem: Serialize,
sourcefn serialize<S>(
&self,
serializer: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where
S: Serializer,
fn serialize<S>(
&self,
serializer: S
) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where
S: Serializer,
Serialize this value into the given Serde serializer. Read more
sourceimpl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for &BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for &BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
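For instance, converting a bit-slice of exactly the array's length into an owned bit-array (a minimal sketch; the conversion is assumed to fail when the lengths differ):
use bitvec::prelude::*;
let bits = bits![u8, Msb0; 1; 16];
let arr = BitArray::<[u8; 2], Msb0>::try_from(bits).unwrap();
assert_eq!(arr.count_ones(), 16);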
sourceimpl<A, O> TryFrom<&mut BitSlice<<A as BitView>::Store, O>> for &mut BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> TryFrom<&mut BitSlice<<A as BitView>::Store, O>> for &mut BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
sourceimpl<A, O> UpperHex for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
impl<A, O> UpperHex for BitArray<A, O> where
O: BitOrder,
A: BitViewSized,
impl<A, O> Copy for BitArray<A, O> where
O: BitOrder,
A: BitViewSized + Copy,
impl<A, O> Eq for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
impl<A, O> Unpin for BitArray<A, O> where
A: BitViewSized,
O: BitOrder,
Auto Trait Implementations
impl<A, O> RefUnwindSafe for BitArray<A, O> where
A: RefUnwindSafe,
O: RefUnwindSafe,
impl<A, O> Send for BitArray<A, O> where
A: Send,
O: Send,
impl<A, O> Sync for BitArray<A, O> where
A: Sync,
O: Sync,
impl<A, O> UnwindSafe for BitArray<A, O> where
A: UnwindSafe,
O: UnwindSafe,
Blanket Implementations
sourceimpl<T> BorrowMut<T> for T where
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized,
const: unstable · sourcefn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
sourceimpl<T> FmtForward for T
impl<T> FmtForward for T
sourcefn fmt_binary(self) -> FmtBinary<Self> where
Self: Binary,
fn fmt_binary(self) -> FmtBinary<Self> where
Self: Binary,
Causes self
to use its Binary
implementation when Debug
-formatted. Read more
sourcefn fmt_display(self) -> FmtDisplay<Self> where
Self: Display,
fn fmt_display(self) -> FmtDisplay<Self> where
Self: Display,
Causes self
to use its Display
implementation when
Debug
-formatted. Read more
sourcefn fmt_lower_exp(self) -> FmtLowerExp<Self> where
Self: LowerExp,
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where
Self: LowerExp,
Causes self
to use its LowerExp
implementation when
Debug
-formatted. Read more
sourcefn fmt_lower_hex(self) -> FmtLowerHex<Self> where
Self: LowerHex,
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where
Self: LowerHex,
Causes self
to use its LowerHex
implementation when
Debug
-formatted. Read more
sourcefn fmt_octal(self) -> FmtOctal<Self> where
Self: Octal,
fn fmt_octal(self) -> FmtOctal<Self> where
Self: Octal,
Causes self
to use its Octal
implementation when Debug
-formatted. Read more
sourcefn fmt_pointer(self) -> FmtPointer<Self> where
Self: Pointer,
fn fmt_pointer(self) -> FmtPointer<Self> where
Self: Pointer,
Causes self
to use its Pointer
implementation when
Debug
-formatted. Read more
sourcefn fmt_upper_exp(self) -> FmtUpperExp<Self> where
Self: UpperExp,
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where
Self: UpperExp,
Causes self
to use its UpperExp
implementation when
Debug
-formatted. Read more
sourcefn fmt_upper_hex(self) -> FmtUpperHex<Self> where
Self: UpperHex,
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where
Self: UpperHex,
Causes self
to use its UpperHex
implementation when
Debug
-formatted. Read more
sourceimpl<T> Pipe for T where
T: ?Sized,
impl<T> Pipe for T where
T: ?Sized,
sourcefn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
Pipes by value. This is generally the method you want to use. Read more
sourcefn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a,
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a,
Borrows self
and passes that borrow into the pipe function. Read more
sourcefn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a,
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a,
Mutably borrows self
and passes that borrow into the pipe function. Read more
sourcefn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R where
Self: Borrow<B>,
B: 'a + ?Sized,
R: 'a,
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R where
Self: Borrow<B>,
B: 'a + ?Sized,
R: 'a,
Borrows self
, then passes self.borrow()
into the pipe function. Read more
sourcefn pipe_borrow_mut<'a, B, R>(
&'a mut self,
func: impl FnOnce(&'a mut B) -> R
) -> R where
Self: BorrowMut<B>,
B: 'a + ?Sized,
R: 'a,
fn pipe_borrow_mut<'a, B, R>(
&'a mut self,
func: impl FnOnce(&'a mut B) -> R
) -> R where
Self: BorrowMut<B>,
B: 'a + ?Sized,
R: 'a,
Mutably borrows self
, then passes self.borrow_mut()
into the pipe
function. Read more
sourcefn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R where
Self: AsRef<U>,
U: 'a + ?Sized,
R: 'a,
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R where
Self: AsRef<U>,
U: 'a + ?Sized,
R: 'a,
Borrows self
, then passes self.as_ref()
into the pipe function.
sourcefn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R where
Self: AsMut<U>,
U: 'a + ?Sized,
R: 'a,
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R where
Self: AsMut<U>,
U: 'a + ?Sized,
R: 'a,
Mutably borrows self
, then passes self.as_mut()
into the pipe
function. Read more
sourceimpl<T> Tap for T
impl<T> Tap for T
sourcefn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self where
Self: Borrow<B>,
B: ?Sized,
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self where
Self: Borrow<B>,
B: ?Sized,
Immutable access to the Borrow<B>
of a value. Read more
sourcefn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self where
Self: BorrowMut<B>,
B: ?Sized,
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self where
Self: BorrowMut<B>,
B: ?Sized,
Mutable access to the BorrowMut<B>
of a value. Read more
sourcefn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self where
Self: AsRef<R>,
R: ?Sized,
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self where
Self: AsRef<R>,
R: ?Sized,
Immutable access to the AsRef<R>
view of a value. Read more
sourcefn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self where
Self: AsMut<R>,
R: ?Sized,
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self where
Self: AsMut<R>,
R: ?Sized,
Mutable access to the AsMut<R>
view of a value. Read more
sourcefn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self where
Self: Deref<Target = T>,
T: ?Sized,
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self where
Self: Deref<Target = T>,
T: ?Sized,
Immutable access to the Deref::Target
of a value. Read more
sourcefn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self where
Self: DerefMut<Target = T> + Deref,
T: ?Sized,
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self where
Self: DerefMut<Target = T> + Deref,
T: ?Sized,
Mutable access to the Deref::Target
of a value. Read more
sourcefn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
Calls .tap()
only in debug builds, and is erased in release builds.
sourcefn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
Calls .tap_mut()
only in debug builds, and is erased in release
builds. Read more
sourcefn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self where
Self: Borrow<B>,
B: ?Sized,
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self where
Self: Borrow<B>,
B: ?Sized,
Calls .tap_borrow()
only in debug builds, and is erased in release
builds. Read more
sourcefn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self where
Self: BorrowMut<B>,
B: ?Sized,
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self where
Self: BorrowMut<B>,
B: ?Sized,
Calls .tap_borrow_mut()
only in debug builds, and is erased in release
builds. Read more
sourcefn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self where
Self: AsRef<R>,
R: ?Sized,
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self where
Self: AsRef<R>,
R: ?Sized,
Calls .tap_ref()
only in debug builds, and is erased in release
builds. Read more
sourcefn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self where
Self: AsMut<R>,
R: ?Sized,
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self where
Self: AsMut<R>,
R: ?Sized,
Calls .tap_ref_mut()
only in debug builds, and is erased in release
builds. Read more