Struct bitvec::array::BitArray
An array of individual bits, able to be held by value on the stack.
This type is generic over all Sized implementors of the BitView trait. Due to limitations in the Rust language’s const-generics implementation (it is both unstable and incomplete), this must take an array type parameter, rather than a bit-count integer parameter, making it inconvenient to use. The bitarr! macro is capable of constructing both values and specific types of BitArray, and should be preferred for most use.
The advantage of using this wrapper is that it implements Deref/DerefMut to BitSlice, as well as implementing all of BitSlice’s traits by forwarding to the bit-slice view of its contained data. This allows it to have BitSlice behavior by itself, without requiring explicit .as_bitslice() calls in user code.
Note: Not all traits may be implemented for forwarding, as a matter of effort and perceived need. Please file an issue for any additional traits that you need to be forwarded.
Limitations
This always produces a bit-slice that fully spans its data; you cannot produce, for example, an array of twelve bits.
Type Parameters
- O: The ordering of bits within memory elements.
- V: Some amount of memory which can be used as the basis for a BitSlice view. This will usually be an array [T: BitStore; N].
Examples
This type is useful for marking that some value is always to be used as a bit-slice.
use bitvec::prelude::*;

struct HasBitfields {
    header: u32,
    // creates a type declaration
    fields: bitarr!(for 20, in Msb0, u8),
}

impl HasBitfields {
    pub fn new() -> Self {
        Self {
            header: 0,
            // creates a value object. the type parameters must be repeated.
            fields: bitarr![Msb0, u8; 0; 20],
        }
    }

    /// Access a bit region directly
    pub fn get_subfield(&self) -> &BitSlice<Msb0, u8> {
        &self.fields[.. 4]
    }

    /// Read a 12-bit value out of a region
    pub fn read_value(&self) -> u16 {
        self.fields[4 .. 16].load()
    }

    /// Write a 12-bit value into a region
    pub fn set_value(&mut self, value: u16) {
        self.fields[4 .. 16].store(value);
    }
}
Eventual Obsolescence
When const-generics stabilize, this will be modified to have a signature more like BitArray<O, T: BitStore, const N: usize>([T; elts::<T>(N)]);, to mirror the behavior of ordinary arrays [T; N] as they stand today.
Implementations
impl<O, V> BitArray<O, V>
where
    O: BitOrder,
    V: BitView + Sized,

pub fn zeroed() -> Self
Constructs a new BitArray with zeroed memory.
pub fn new(data: V) -> Self
Wraps the given memory in a bit-array, taking ownership of the data.
pub fn unwrap(self) -> V
Removes the bit-array wrapper, returning the contained data.
Examples
use bitvec::prelude::*;

let bitarr: BitArray<LocalBits, [usize; 1]> = bitarr![0; 30];
let native: [usize; 1] = bitarr.unwrap();
pub fn as_bitslice(&self) -> &BitSlice<O, V::Store>
Views the array as a bit-slice.
pub fn as_mut_bitslice(&mut self) -> &mut BitSlice<O, V::Store>
Views the array as a mutable bit-slice.
pub fn as_slice(&self) -> &[V::Store]
Views the array as a slice of its underlying elements.
pub fn as_mut_slice(&mut self) -> &mut [V::Store]
Views the array as a mutable slice of its underlying elements.
pub fn as_raw_slice(&self) -> &[V::Mem]
Views the array as a slice of its raw underlying memory type.
pub fn as_raw_mut_slice(&mut self) -> &mut [V::Mem]
Views the array as a mutable slice of its raw underlying memory type.
Methods from Deref<Target = BitSlice<O, V::Store>>
pub fn len(&self) -> usize
Returns the number of bits in the slice.
Original
Examples
use bitvec::prelude::*;

let data = 0u32;
let bits = data.view_bits::<LocalBits>();
assert_eq!(bits.len(), 32);
pub fn is_empty(&self) -> bool
Returns true if the slice has a length of 0.
Original
Examples
use bitvec::prelude::*;

assert!(BitSlice::<LocalBits, u8>::empty().is_empty());
assert!(!(0u32.view_bits::<LocalBits>()).is_empty());
pub fn first(&self) -> Option<&bool>
Returns the first bit of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;

let data = 1u8;
let bits = data.view_bits::<Lsb0>();
assert_eq!(Some(&true), bits.first());

let empty = BitSlice::<LocalBits, usize>::empty();
assert_eq!(None, empty.first());
pub fn first_mut(&mut self) -> Option<BitMut<'_, O, T>>
Returns a mutable pointer to the first bit of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
if let Some(mut first) = bits.first_mut() {
    *first = true;
}
assert_eq!(data, 1);
pub fn split_first(&self) -> Option<(&bool, &Self)>
Returns the first and all the rest of the bits of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;

let data = 1u8;
let bits = data.view_bits::<Lsb0>();
if let Some((first, rest)) = bits.split_first() {
    assert!(*first);
}
pub fn split_first_mut(
    &mut self
) -> Option<(BitMut<'_, O, T::Alias>, &mut BitSlice<O, T::Alias>)>
Returns the first and all the rest of the bits of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Because the references are permitted to use the same memory address, they are marked as aliasing in order to satisfy Rust’s requirements about freedom from data races.
Examples
use bitvec::prelude::*;

let mut data = 0usize;
let bits = data.view_bits_mut::<Lsb0>();
if let Some((mut first, rest)) = bits.split_first_mut() {
    *first = true;
    *rest.get_mut(1).unwrap() = true;
}
assert_eq!(data, 5);

assert!(BitSlice::<LocalBits, usize>::empty_mut().split_first_mut().is_none());
pub fn split_last(&self) -> Option<(&bool, &Self)>
Returns the last and all the rest of the bits of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;

let data = 1u8;
let bits = data.view_bits::<Msb0>();
if let Some((last, rest)) = bits.split_last() {
    assert!(*last);
}
pub fn split_last_mut(
    &mut self
) -> Option<(BitMut<'_, O, T::Alias>, &mut BitSlice<O, T::Alias>)>
Returns the last and all the rest of the bits of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Because the references are permitted to use the same memory address, they are marked as aliasing in order to satisfy Rust’s requirements about freedom from data races.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
if let Some((mut last, rest)) = bits.split_last_mut() {
    *last = true;
    *rest.get_mut(5).unwrap() = true;
}
assert_eq!(data, 5);

assert!(BitSlice::<LocalBits, usize>::empty_mut().split_last_mut().is_none());
pub fn last(&self) -> Option<&bool>
Returns the last bit of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;

let data = 1u8;
let bits = data.view_bits::<Msb0>();
assert_eq!(Some(&true), bits.last());

let empty = BitSlice::<LocalBits, usize>::empty();
assert_eq!(None, empty.last());
pub fn last_mut(&mut self) -> Option<BitMut<'_, O, T>>
Returns a mutable pointer to the last bit of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
if let Some(mut last) = bits.last_mut() {
    *last = true;
}
assert_eq!(data, 1);
pub fn get<'a, I>(&'a self, index: I) -> Option<I::Immut>
where
    I: BitSliceIndex<'a, O, T>,
Returns a reference to an element or subslice depending on the type of index.
- If given a position, returns a reference to the element at that position, or None if out of bounds.
- If given a range, returns the subslice corresponding to that range, or None if out of bounds.
Original
Examples
use bitvec::prelude::*;

let data = 2u8;
let bits = data.view_bits::<Lsb0>();
assert_eq!(Some(&true), bits.get(1));
assert_eq!(Some(&bits[1 .. 3]), bits.get(1 .. 3));
assert_eq!(None, bits.get(9));
assert_eq!(None, bits.get(8 .. 10));
pub fn get_mut<'a, I>(&'a mut self, index: I) -> Option<I::Mut>
where
    I: BitSliceIndex<'a, O, T>,
Returns a mutable reference to an element or subslice depending on the type of index (see get) or None if the index is out of bounds.
Original
API Differences
When I is usize, this returns BitMut instead of &mut bool.
Examples
use bitvec::prelude::*;

let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
assert!(!bits.get(1).unwrap());
*bits.get_mut(1).unwrap() = true;
assert!(bits.get(1).unwrap());
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> I::Immut
where
    I: BitSliceIndex<'a, O, T>,
Returns a reference to an element or subslice, without doing bounds checking.
This is generally not recommended; use with caution!
Unlike the original slice function, calling this with an out-of-bounds index is not immediately undefined behavior, as the references produced do not actually describe local memory. However, the use of an out-of-bounds index will eventually cause an out-of-bounds memory read, which is a runtime safety violation. For a safe alternative see get.
Original
Examples
use bitvec::prelude::*;

let data = 2u16;
let bits = data.view_bits::<Lsb0>();
unsafe {
    assert_eq!(bits.get_unchecked(1), &true);
}
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> I::Mut
where
    I: BitSliceIndex<'a, O, T>,
Returns a mutable reference to the output at this location, without doing bounds checking.
This is generally not recommended; use with caution!
Unlike the original slice function, calling this with an out-of-bounds index is not immediately undefined behavior, as the references produced do not actually describe local memory. However, the use of an out-of-bounds index will eventually cause an out-of-bounds memory write, which is a runtime safety violation. For a safe alternative see get_mut.
Original
Examples
use bitvec::prelude::*;

let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
unsafe {
    let mut bit = bits.get_unchecked_mut(1);
    *bit = true;
}
assert_eq!(data, 2);
pub fn as_ptr(&self) -> *const Self
Returns a raw bit-slice pointer to the region.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
The caller must also ensure that the memory the pointer (non-transitively) points to is only written to if T allows shared mutation, using this pointer or any pointer derived from it. If you need to mutate the contents of the slice, use as_mut_ptr.
Modifying the container (such as BitVec) referenced by this slice may cause its buffer to be reällocated, which would also make any pointers to it invalid.
Original
API Differences
This returns *const BitSlice, which is the equivalent of *const [T] instead of *const T. The pointer encoding used requires more than one CPU word of space to address a single bit, so there is no advantage to removing the length information from the encoded pointer value.
Notes
You cannot use any of the methods in the pointer fundamental type or the core::ptr module on the *_ BitSlice type. This pointer retains the bitvec-specific value encoding, and is incomprehensible by the Rust standard library.
The only thing you can do with this pointer is dereference it.
Examples
use bitvec::prelude::*;

let data = 2u16;
let bits = data.view_bits::<Lsb0>();
let bits_ptr = bits.as_ptr();

for i in 0 .. bits.len() {
    assert_eq!(bits[i], unsafe { (&*bits_ptr)[i] });
}
pub fn as_mut_ptr(&mut self) -> *mut Self
Returns an unsafe mutable bit-slice pointer to the region.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
Modifying the container (such as BitVec) referenced by this slice may cause its buffer to be reällocated, which would also make any pointers to it invalid.
Original
API Differences
This returns *mut BitSlice, which is the equivalent of *mut [T] instead of *mut T. The pointer encoding used requires more than one CPU word of space to address a single bit, so there is no advantage to removing the length information from the encoded pointer value.
Notes
You cannot use any of the methods in the pointer fundamental type or the core::ptr module on the *_ BitSlice type. This pointer retains the bitvec-specific value encoding, and is incomprehensible by the Rust standard library.
Examples
use bitvec::prelude::*;

let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
let bits_ptr = bits.as_mut_ptr();

for i in 0 .. bits.len() {
    unsafe { &mut *bits_ptr }.set(i, i % 2 == 0);
}
assert_eq!(data, 0b0101_0101_0101_0101);
pub fn swap(&mut self, a: usize, b: usize)
Swaps two bits in the slice.
Original
Arguments
- a: The index of the first bit.
- b: The index of the second bit.
Panics
Panics if a or b are out of bounds.
Examples
use bitvec::prelude::*;

let mut data = 2u8;
let bits = data.view_bits_mut::<Lsb0>();
bits.swap(1, 3);
assert_eq!(data, 8);
pub fn reverse(&mut self)
Reverses the order of bits in the slice, in place.
Original
Examples
use bitvec::prelude::*;

let mut data = 0b1_1001100u8;
let bits = data.view_bits_mut::<Msb0>();
bits[1 ..].reverse();
assert_eq!(data, 0b1_0011001);
pub fn iter(&self) -> Iter<'_, O, T>
Returns an iterator over the slice.
Original
Examples
use bitvec::prelude::*;

let data = 130u8;
let bits = data.view_bits::<Lsb0>();
let mut iterator = bits.iter();

assert_eq!(iterator.next(), Some(&false));
assert_eq!(iterator.next(), Some(&true));
assert_eq!(iterator.nth(5), Some(&true));
assert_eq!(iterator.next(), None);
pub fn iter_mut(&mut self) -> IterMut<'_, O, T>
Returns an iterator that allows modifying each bit.
Original
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
for (idx, mut elem) in bits.iter_mut().enumerate() {
    *elem = idx % 3 == 0;
}
assert_eq!(data, 0b100_100_10);
pub fn windows(&self, size: usize) -> Windows<'_, O, T>
Returns an iterator over all contiguous windows of length size. The windows overlap. If the slice is shorter than size, the iterator returns no values.
Original
Panics
Panics if size is 0.
Examples
use bitvec::prelude::*;

let data = 0xA5u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.windows(6);
assert_eq!(iter.next().unwrap(), &bits[.. 6]);
assert_eq!(iter.next().unwrap(), &bits[1 .. 7]);
assert_eq!(iter.next().unwrap(), &bits[2 ..]);
assert!(iter.next().is_none());

If the slice is shorter than size:
use bitvec::prelude::*;

let bits = BitSlice::<LocalBits, usize>::empty();
let mut iter = bits.windows(1);
assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See chunks_exact for a variant of this iterator that returns chunks of always exactly chunk_size bits, and rchunks for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.chunks(3);
assert_eq!(iter.next().unwrap(), &bits[.. 3]);
assert_eq!(iter.next().unwrap(), &bits[3 .. 6]);
assert_eq!(iter.next().unwrap(), &bits[6 ..]);
assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See chunks_exact_mut for a variant of this iterator that returns chunks of always exactly chunk_size bits, and rchunks_mut for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.chunks_mut(3).enumerate() {
    chunk.set(2 - idx, true);
}
assert_eq!(data, 0b01_010_100);
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler may optimize the resulting code better than in the case of chunks.
See chunks for a variant of this iterator that also returns the remainder as a smaller chunk, and rchunks_exact for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.chunks_exact(3);
assert_eq!(iter.next().unwrap(), &bits[.. 3]);
assert_eq!(iter.next().unwrap(), &bits[3 .. 6]);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &bits[6 ..]);
pub fn chunks_exact_mut(
    &mut self,
    chunk_size: usize
) -> ChunksExactMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the into_remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler may optimize the resulting code better than in the case of chunks_mut.
See chunks_mut for a variant of this iterator that also returns the remainder as a smaller chunk, and rchunks_exact_mut for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.chunks_exact_mut(3).enumerate() {
    chunk.set(idx, true);
}
assert_eq!(data, 0b00_010_001);
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See rchunks_exact for a variant of this iterator that returns chunks of always exactly chunk_size bits, and chunks for the same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.rchunks(3);
assert_eq!(iter.next().unwrap(), &bits[5 ..]);
assert_eq!(iter.next().unwrap(), &bits[2 .. 5]);
assert_eq!(iter.next().unwrap(), &bits[.. 2]);
assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See rchunks_exact_mut for a variant of this iterator that returns chunks of always exactly chunk_size bits, and chunks_mut for the same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.rchunks_mut(3).enumerate() {
    chunk.set(2 - idx, true);
}
assert_eq!(data, 0b100_010_01);
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler can often optimize the resulting code better than in the case of chunks.
See rchunks for a variant of this iterator that also returns the remainder as a smaller chunk, and chunks_exact for the same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.rchunks_exact(3);
assert_eq!(iter.next().unwrap(), &bits[5 ..]);
assert_eq!(iter.next().unwrap(), &bits[2 .. 5]);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &bits[.. 2]);
pub fn rchunks_exact_mut(
    &mut self,
    chunk_size: usize
) -> RChunksExactMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the into_remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler can often optimize the resulting code better than in the case of chunks_mut.
See rchunks_mut for a variant of this iterator that also returns the remainder as a smaller chunk, and chunks_exact_mut for the same iterator but starting at the beginning of the slice.
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.rchunks_exact_mut(3).enumerate() {
    chunk.set(idx, true);
}
assert_eq!(data, 0b001_010_00);
pub fn split_at(&self, mid: usize) -> (&Self, &Self)
Divides one slice into two at an index.
The first will contain all indices from [0, mid) (excluding the index mid itself) and the second will contain all indices from [mid, len) (excluding the index len itself).
Original
Panics
Panics if mid > len.
Examples
use bitvec::prelude::*;

let data = 0xC3u8;
let bits = data.view_bits::<LocalBits>();

let (left, right) = bits.split_at(0);
assert!(left.is_empty());
assert_eq!(right, bits);

let (left, right) = bits.split_at(2);
assert_eq!(left, &bits[.. 2]);
assert_eq!(right, &bits[2 ..]);

let (left, right) = bits.split_at(8);
assert_eq!(left, bits);
assert!(right.is_empty());
pub fn split_at_mut(
    &mut self,
    mid: usize
) -> (&mut BitSlice<O, T::Alias>, &mut BitSlice<O, T::Alias>)
Divides one mutable slice into two at an index.
The first will contain all indices from [0, mid) (excluding the index mid itself) and the second will contain all indices from [mid, len) (excluding the index len itself).
Original
API Differences
Because the partition point mid is permitted to occur in the interior of a memory element T, this method is required to mark the returned slices as being to aliased memory. This marking ensures that writes to the covered memory use the appropriate synchronization behavior of your build to avoid data races – by default, this makes all writes atomic; on builds with the atomic feature disabled, this uses Cell wrappers and forbids the produced subslices from leaving the current thread.
See the BitStore documentation for more information.
Panics
Panics if mid > len.
Examples
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
// scoped to restrict the lifetime of the borrows
{
    let (left, right) = bits.split_at_mut(3);
    *left.get_mut(1).unwrap() = true;
    *right.get_mut(2).unwrap() = true;
}
assert_eq!(data, 0b010_00100);
pub fn split<F>(&self, pred: F) -> Split<'_, O, T, F>
where
    F: FnMut(usize, &bool) -> bool,
Returns an iterator over subslices separated by bits that match pred. The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*;

let data = 0b01_001_000u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), &bits[.. 1]);
assert_eq!(iter.next().unwrap(), &bits[2 .. 4]);
assert_eq!(iter.next().unwrap(), &bits[5 ..]);
assert!(iter.next().is_none());

If the first bit is matched, an empty slice will be the first item returned by the iterator. Similarly, if the last bit in the slice is matched, an empty slice will be the last item returned by the iterator:
use bitvec::prelude::*;

let data = 1u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), &bits[.. 7]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());

If two matched bits are directly adjacent, an empty slice will be present between them:
use bitvec::prelude::*;

let data = 0b001_100_00u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), &bits[0 .. 2]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), &bits[4 .. 8]);
assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, O, T, F>
where
    F: FnMut(usize, &bool) -> bool,
Returns an iterator over mutable subslices separated by bits that match pred. The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*;

let mut data = 0b001_000_10u8;
let bits = data.view_bits_mut::<Msb0>();
for group in bits.split_mut(|_pos, bit| *bit) {
    *group.get_mut(0).unwrap() = true;
}
assert_eq!(data, 0b101_100_11);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, O, T, F>
where
    F: FnMut(usize, &bool) -> bool,
Returns an iterator over subslices separated by bits that match pred, starting at the end of the slice and working backwards. The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*;

let data = 0b0001_0000u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.rsplit(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), &bits[4 ..]);
assert_eq!(iter.next().unwrap(), &bits[.. 3]);
assert!(iter.next().is_none());

As with split(), if the first or last bit is matched, an empty slice will be the first (or last) item returned by the iterator.
use bitvec::prelude::*;

let data = 0b1001_0001u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.rsplit(|_pos, bit| *bit);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), &bits[4 .. 7]);
assert_eq!(iter.next().unwrap(), &bits[1 .. 3]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, O, T, F>
where
    F: FnMut(usize, &bool) -> bool,
Returns an iterator over mutable subslices separated by bits that match pred, starting at the end of the slice and working backwards. The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*;

let mut data = 0b001_000_10u8;
let bits = data.view_bits_mut::<Msb0>();
for group in bits.rsplit_mut(|_pos, bit| *bit) {
    *group.get_mut(0).unwrap() = true;
}
assert_eq!(data, 0b101_100_11);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, O, T, F>
where
    F: FnMut(usize, &bool) -> bool,
Returns an iterator over subslices separated by bits that match pred, limited to returning at most n items. The matched bit is not contained in the subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*;

let data = 0xA5u8;
let bits = data.view_bits::<Msb0>();
for group in bits.splitn(2, |pos, _bit| pos % 3 == 2) {
    println!("{}", group.len());
}
// 2
// 5
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, O, T, F>
where
    F: FnMut(usize, &bool) -> bool,
Returns an iterator over mutable subslices separated by bits that match pred, limited to returning at most n items. The matched bit is not contained in the subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*;

let mut data = 0b001_000_10u8;
let bits = data.view_bits_mut::<Msb0>();
for group in bits.splitn_mut(2, |_pos, bit| *bit) {
    *group.get_mut(0).unwrap() = true;
}
assert_eq!(data, 0b101_100_10);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
limited to returning at most n
items. This starts at the end of the
slice and works backwards. The matched bit is not contained in the
subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Msb0>(); for group in bits.rsplitn(2, |pos, _bit| pos % 3 == 2) { println!("{}", group.len()); } // 2 // 5
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
limited to returning at most n
items. This starts at the end of the
slice and works backwards. The matched bit is not contained in the
subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.rsplitn_mut(2, |_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_000_11);
pub fn contains<O2, T2>(&self, x: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore,
[src]
Returns true
if the slice contains a subslice that matches the given
span.
Original
API Differences
This searches for a matching subslice (allowing different type
parameters) rather than for a specific bit. Searching for a contained
element with a given value is not as useful on a collection of bool
.
Furthermore, BitSlice
defines any
and not_all
, which are
optimized searchers for any true
or false
bit, respectively, in a
sequence.
Examples
This example uses a palindrome pattern to demonstrate that the slice being searched for does not need to have the same type parameters as the slice being searched.
use bitvec::prelude::*; let data = 0b0101_1010u8; let bits_msb = data.view_bits::<Msb0>(); let bits_lsb = data.view_bits::<Lsb0>(); assert!(bits_msb.contains(&bits_lsb[1 .. 5]));
pub fn starts_with<O2, T2>(&self, needle: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore,
[src]
Returns true
if needle
is a prefix of the slice.
Original
Examples
use bitvec::prelude::*; let data = 0b0100_1011u8; let haystack = data.view_bits::<Msb0>(); let needle = &data.view_bits::<Lsb0>()[2 .. 5]; assert!(haystack.starts_with(&needle[.. 2])); assert!(haystack.starts_with(needle)); assert!(!haystack.starts_with(&haystack[2 .. 4]));
Always returns true
if needle
is an empty slice:
use bitvec::prelude::*; let empty = BitSlice::<LocalBits, usize>::empty(); assert!(0u8.view_bits::<LocalBits>().starts_with(empty)); assert!(empty.starts_with(empty));
pub fn ends_with<O2, T2>(&self, needle: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore,
[src]
Returns true
if needle
is a suffix of the slice.
Original
Examples
use bitvec::prelude::*; let data = 0b0100_1011u8; let haystack = data.view_bits::<Lsb0>(); let needle = &data.view_bits::<Msb0>()[3 .. 6]; assert!(haystack.ends_with(&needle[1 ..])); assert!(haystack.ends_with(needle)); assert!(!haystack.ends_with(&haystack[2 .. 4]));
Always returns true
if needle
is an empty slice:
use bitvec::prelude::*; let empty = BitSlice::<LocalBits, usize>::empty(); assert!(0u8.view_bits::<LocalBits>().ends_with(empty)); assert!(empty.ends_with(empty));
pub fn rotate_left(&mut self, by: usize)
[src]
Rotates the slice in-place such that the first by
bits of the slice
move to the end while the last self.len() - by
bits move to the front.
After calling rotate_left
, the bit previously at index by
will
become the first bit in the slice.
Original
Panics
This function will panic if by
is greater than the length of the
slice. Note that by == self.len()
does not panic and is a no-op
rotation.
Complexity
Takes linear (in self.len()
) time.
Performance
While this is faster than the equivalent rotation on [bool]
, it is
slower than a handcrafted partial-element rotation on [T]
. Because of
the support for custom orderings, and the lack of specialization, this
method can only accelerate by reducing the number of loop iterations
performed on the slice body, and cannot accelerate by using shift-mask
instructions to move multiple bits in one operation.
Examples
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits.rotate_left(2); assert_eq!(data, 0xC3);
Rotating a subslice:
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 .. 5].rotate_left(1); assert_eq!(data, 0b1_1101_000);
pub fn rotate_right(&mut self, by: usize)
[src]
Rotates the slice in-place such that the first self.len() - by
bits of
the slice move to the end while the last by
bits move to the front.
After calling rotate_right
, the bit previously at index self.len() - by
will become the first bit in the slice.
Original
Panics
This function will panic if by
is greater than the length of the
slice. Note that by == self.len()
does not panic and is a no-op
rotation.
Complexity
Takes linear (in self.len()
) time.
Performance
While this is faster than the equivalent rotation on [bool]
, it is
slower than a handcrafted partial-element rotation on [T]
. Because of
the support for custom orderings, and the lack of specialization, this
method can only accelerate by reducing the number of loop iterations
performed on the slice body, and cannot accelerate by using shift-mask
instructions to move multiple bits in one operation.
Examples
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits.rotate_right(2); assert_eq!(data, 0x3C);
Rotate a subslice:
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 .. 5].rotate_right(1); assert_eq!(data, 0b1_0111_000);
pub fn clone_from_bitslice<O2, T2>(&mut self, src: &BitSlice<O2, T2>) where
O2: BitOrder,
T2: BitStore,
[src]
Copies the bits from src
into self
.
The length of src
must be the same as self
.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
Panics
This function will panic if the two slices have different lengths.
Examples
Cloning two bits from a slice into another:
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); let src = 0x0Fu16.view_bits::<Lsb0>(); bits[.. 2].clone_from_bitslice(&src[2 .. 4]); assert_eq!(data, 0xC0);
Rust enforces that there can only be one mutable reference with no
immutable references to a particular piece of data in a particular
scope. Because of this, attempting to use clone_from_bitslice
on a
single slice will result in a compile failure:
use bitvec::prelude::*; let mut data = 3u8; let bits = data.view_bits_mut::<Msb0>(); bits[.. 2].clone_from_bitslice(&bits[6 ..]);
To work around this, we can use split_at_mut
to create two distinct
sub-slices from a slice:
use bitvec::prelude::*; let mut data = 3u8; let bits = data.view_bits_mut::<Msb0>(); let (head, tail) = bits.split_at_mut(4); head.clone_from_bitslice(tail); assert_eq!(data, 0x33);
pub fn copy_from_bitslice(&mut self, src: &Self)
[src]
Copies all bits from src
into self
.
The length of src
must be the same as self
.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
This is unable to guarantee a strictly faster copy behavior than
clone_from_bitslice
. In the future, the implementation may
specialize, as the language allows.
Panics
This function will panic if the two slices have different lengths.
Examples
Copying two bits from a slice into another:
pub fn copy_within<R>(&mut self, src: R, dest: usize) where
R: RangeBounds<usize>,
[src]
Copies bits from one part of the slice to another part of itself.
src
is the range within self
to copy from. dest
is the starting
index of the range within self
to copy to, which will have the same
length as src
. The two ranges may overlap. The ends of the two ranges
must be less than or equal to self.len()
.
Original
Panics
This function will panic if either range exceeds the end of the slice,
or if the end of src
is before the start.
Examples
Copying three bits within a slice:
use bitvec::prelude::*; let mut data = 0x07u8; let bits = data.view_bits_mut::<Msb0>(); bits.copy_within(5 .., 0); assert_eq!(data, 0xE7);
pub fn swap_with_bitslice<O2, T2>(&mut self, other: &mut BitSlice<O2, T2>) where
O2: BitOrder,
T2: BitStore,
[src]
Swaps all bits in self
with those in other
.
The length of other
must be the same as self
.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
Panics
This function will panic if the two slices have different lengths.
Examples
use bitvec::prelude::*; let mut one = [0xA5u8, 0x69]; let mut two = 0x1234u16; let one_bits = one.view_bits_mut::<Msb0>(); let two_bits = two.view_bits_mut::<Lsb0>(); one_bits.swap_with_bitslice(two_bits); assert_eq!(one, [0x2C, 0x48]); assert_eq!(two, 0x96A5);
pub unsafe fn align_to<U>(&self) -> (&Self, &BitSlice<O, U>, &Self) where
U: BitStore,
[src]
Transmute the bitslice to a bitslice of another type, ensuring alignment of the types is maintained.
This method splits the bitslice into three distinct bitslices: prefix, correctly aligned middle bitslice of a new type, and the suffix bitslice. The method may make the middle bitslice the greatest length possible for a given type and input bitslice, but only your algorithm's performance should depend on that, not its correctness. It is permissible for all of the input data to be returned as the prefix or suffix bitslice.
Original
API Differences
Type U
is required to have the same type family as type T
.
Whatever T
is of the fundamental integers, atomics, or Cell
wrappers, U
must be a different width in the same family. Changing the
type family with this method is unsound and strictly forbidden.
Unfortunately, it cannot be guaranteed by this function, so you are
required to abide by this limitation.
Safety
This method is essentially a transmute
with respect to the elements in
the returned middle bitslice, so all the usual caveats pertaining to
transmute::<T, U>
also apply here.
Examples
Basic usage:
use bitvec::prelude::*; unsafe { let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7]; let bits = bytes.view_bits::<LocalBits>(); let (prefix, shorts, suffix) = bits.align_to::<u16>(); match prefix.len() { 0 => { assert_eq!(shorts, bits[.. 48]); assert_eq!(suffix, bits[48 ..]); }, 8 => { assert_eq!(prefix, bits[.. 8]); assert_eq!(shorts, bits[8 ..]); }, _ => unreachable!("This case will not occur") } }
pub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut Self, &mut BitSlice<O, U>, &mut Self) where
U: BitStore,
[src]
Transmute the bitslice to a bitslice of another type, ensuring alignment of the types is maintained.
This method splits the bitslice into three distinct bitslices: prefix, correctly aligned middle bitslice of a new type, and the suffix bitslice. The method may make the middle bitslice the greatest length possible for a given type and input bitslice, but only your algorithm's performance should depend on that, not its correctness. It is permissible for all of the input data to be returned as the prefix or suffix bitslice.
Original
API Differences
Type U
is required to have the same type family as type T
.
Whatever T
is of the fundamental integers, atomics, or Cell
wrappers, U
must be a different width in the same family. Changing the
type family with this method is unsound and strictly forbidden.
Unfortunately, it cannot be guaranteed by this function, so you are
required to abide by this limitation.
Safety
This method is essentially a transmute
with respect to the elements in
the returned middle bitslice, so all the usual caveats pertaining to
transmute::<T, U>
also apply here.
Examples
Basic usage:
use bitvec::prelude::*; unsafe { let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7]; let bits = bytes.view_bits_mut::<LocalBits>(); let (prefix, shorts, suffix) = bits.align_to_mut::<u16>(); // same access and behavior as in `align_to` }
pub fn to_bitvec(&self) -> BitVec<O, T>ⓘ
[src]
Copies self
into a new BitVec
.
Original
Examples
use bitvec::prelude::*; let bits = bits![0, 1, 0, 1]; let bv = bits.to_bitvec(); assert_eq!(bits, bv);
pub fn repeat(&self, n: usize) -> BitVec<O, T>ⓘ where
O: BitOrder,
T: BitStore,
[src]
Creates a vector by repeating a slice n
times.
Original
Panics
This function will panic if the capacity would overflow.
Examples
Basic usage:
use bitvec::prelude::*; assert_eq!(bits![0, 1].repeat(3), bits![0, 1, 0, 1, 0, 1]);
A panic upon overflow:
use bitvec::prelude::*; // this will panic at runtime bits![0, 1].repeat(BitSlice::<LocalBits, usize>::MAX_BITS);
pub fn set(&mut self, index: usize, value: bool)
[src]
Sets the bit value at the given position.
Parameters
&mut self
index
: The bit index to set. It must be in the range 0 .. self.len()
.value
: The value to be set,true
for1
andfalse
for0
.
Effects
If index
is valid, then the bit to which it refers is set to value
.
Panics
This method panics if index
is outside the slice domain.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); assert!(!bits.get(7).unwrap()); bits.set(7, true); assert!(bits.get(7).unwrap()); assert_eq!(data, 1);
This example panics when it attempts to set a bit that is out of bounds.
use bitvec::prelude::*; let bits = BitSlice::<LocalBits, usize>::empty_mut(); bits.set(0, false);
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
[src]
Sets a bit at an index, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see set
.
Parameters
&mut self
index
: The bit index to set. It must be in the range 0 .. self.len()
. It will not be checked.
Effects
The bit at index
is set to value
.
Safety
This method is not safe. It performs raw pointer arithmetic to seek
from the start of the slice to the requested index, and set the bit
there. It does not inspect the length of self
, and it is free to
perform out-of-bounds memory write access.
Use this method only when you have already performed the bounds check, and can guarantee that the call occurs with a safely in-bounds index.
Examples
This example uses a bit slice of length 2, and demonstrates out-of-bounds access to the last bit in the element.
use bitvec::prelude::*; let mut data = 0u8; let bits = &mut data.view_bits_mut::<Msb0>()[2 .. 4]; assert_eq!(bits.len(), 2); unsafe { bits.set_unchecked(5, true); } assert_eq!(data, 1);
pub fn all(&self) -> bool
[src]
Tests if all bits in the slice domain are set (logical ∧
).
Truth Table
0 0 => 0
0 1 => 0
1 0 => 0
1 1 => 1
Parameters
&self
Returns
Whether all bits in the slice domain are set. The empty slice returns
true
.
Examples
use bitvec::prelude::*; let bits = 0xFDu8.view_bits::<Msb0>(); assert!(bits[.. 4].all()); assert!(!bits[4 ..].all());
pub fn any(&self) -> bool
[src]
Tests if any bit in the slice is set (logical ∨
).
Truth Table
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 1
Parameters
&self
Returns
Whether any bit in the slice domain is set. The empty slice returns
false
.
Examples
use bitvec::prelude::*; let bits = 0x40u8.view_bits::<Msb0>(); assert!(bits[.. 4].any()); assert!(!bits[4 ..].any());
pub fn not_all(&self) -> bool
[src]
Tests if any bit in the slice is unset (logical ¬∧
).
Truth Table
0 0 => 1
0 1 => 1
1 0 => 1
1 1 => 0
Parameters
&self
Returns
Whether any bit in the slice domain is unset.
Examples
use bitvec::prelude::*; let bits = 0xFDu8.view_bits::<Msb0>(); assert!(!bits[.. 4].not_all()); assert!(bits[4 ..].not_all());
pub fn not_any(&self) -> bool
[src]
Tests if all bits in the slice are unset (logical ¬∨
).
Truth Table
0 0 => 1
0 1 => 0
1 0 => 0
1 1 => 0
Parameters
&self
Returns
Whether all bits in the slice domain are unset.
Examples
use bitvec::prelude::*; let bits = 0x40u8.view_bits::<Msb0>(); assert!(!bits[.. 4].not_any()); assert!(bits[4 ..].not_any());
pub fn some(&self) -> bool
[src]
Tests whether the slice has some, but not all, bits set and some, but not all, bits unset.
This is false
if either .all
or .not_any
are true
.
Truth Table
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 0
Parameters
&self
Returns
Whether the slice domain has mixed content. The empty slice returns
false
.
Examples
use bitvec::prelude::*; let data = 0b111_000_10u8; let bits = data.view_bits::<Msb0>(); assert!(!bits[.. 3].some()); assert!(!bits[3 .. 6].some()); assert!(bits.some());
pub fn count_ones(&self) -> usize
[src]
Returns the number of ones in the memory region backing self
.
Parameters
&self
Returns
The number of high bits in the slice domain.
Examples
Basic usage:
use bitvec::prelude::*; let data = 0xF0u8; let bits = data.view_bits::<Msb0>(); assert_eq!(bits[.. 4].count_ones(), 4); assert_eq!(bits[4 ..].count_ones(), 0);
pub fn count_zeros(&self) -> usize
[src]
Returns the number of zeros in the memory region backing self
.
Parameters
&self
Returns
The number of low bits in the slice domain.
Examples
Basic usage:
use bitvec::prelude::*; let data = 0xF0u8; let bits = data.view_bits::<Msb0>(); assert_eq!(bits[.. 4].count_zeros(), 0); assert_eq!(bits[4 ..].count_zeros(), 4);
pub fn set_all(&mut self, value: bool)
[src]
Sets all bits in the slice to a value.
Parameters
&mut self
value
: The bit value to which all bits in the slice will be set.
Examples
use bitvec::prelude::*; let mut src = 0u8; let bits = src.view_bits_mut::<Msb0>(); bits[2 .. 6].set_all(true); assert_eq!(bits.as_slice(), &[0b0011_1100]); bits[3 .. 5].set_all(false); assert_eq!(bits.as_slice(), &[0b0010_0100]); bits[.. 1].set_all(true); assert_eq!(bits.as_slice(), &[0b1010_0100]);
pub fn for_each<F>(&mut self, func: F) where
F: FnMut(usize, bool) -> bool,
[src]
Applies a function to each bit in the slice.
BitSlice
cannot implement IndexMut
, as it cannot manifest &mut bool
references, and the BitMut
proxy reference has an unavoidable
overhead. This method bypasses both problems, by applying a function to
each pair of index and value in the slice, without constructing a proxy
reference.
Parameters
&mut self
func
: A function which receives two arguments,index: usize
andvalue: bool
, and returns abool
.
Effects
For each index in the slice, the result of invoking func
with the
index number and current bit value is written into the slice.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); bits.for_each(|idx, _bit| idx % 3 == 0); assert_eq!(data, 0b100_100_10);
pub fn as_slice(&self) -> &[T]
[src]
Accesses the total backing storage of the BitSlice
, as a slice of its
elements.
This method produces a slice over all the memory elements it touches, using the current storage parameter. This is safe to do, as any events that would create an aliasing view into the elements covered by the returned slice will also have caused the slice to use its alias-aware type.
Parameters
&self
Returns
A view of the entire memory region this slice covers, including the edge elements.
pub fn as_raw_slice(&self) -> &[T::Mem]
[src]
Views the wholly-filled elements of the BitSlice
.
This will not include partially-owned edge elements, as they may be
aliased by other handles. To gain access to all elements that the
BitSlice
region covers, use one of the following:
.as_slice
produces a shared slice over all elements, marked aliased as appropriate..domain
produces a view describing each component of the region, marking only the contended edges as aliased and the uncontended interior as unaliased.
Parameters
&self
Returns
A slice of all the wholly-filled elements in the BitSlice
backing
storage.
Examples
use bitvec::prelude::*; let data = [1u8, 66]; let bits = data.view_bits::<Msb0>(); let accum = bits .as_raw_slice() .iter() .copied() .map(u8::count_ones) .sum::<u32>(); assert_eq!(accum, 3);
pub fn as_raw_slice_mut(&mut self) -> &mut [T::Mem]
[src]
Views the wholly-filled elements of the BitSlice
.
This will not include partially-owned edge elements, as they may be
aliased by other handles. To gain access to all elements that the
BitSlice
region covers, use one of the following:
.as_aliased_slice
produces a shared slice over all elements, marked as aliased to allow for the possibility of mutation..domain_mut
produces a view describing each component of the region, marking only the contended edges as aliased and the uncontended interior as unaliased.
Parameters
&mut self
Returns
A mutable slice of all the wholly-filled elements in the BitSlice
backing storage.
Examples
use bitvec::prelude::*; let mut data = [1u8, 64]; let bits = data.view_bits_mut::<Msb0>(); for elt in bits.as_raw_slice_mut() { *elt |= 2; } assert_eq!(&[3, 66], bits.as_slice());
pub fn bit_domain(&self) -> BitDomain<'_, O, T>
[src]
Splits the slice into the logical components of its memory domain.
This produces a set of read-only subslices, marking as much as possible
as affirmatively lacking any write-capable view (T::NoAlias
). The
unaliased view is able to safely perform unsynchronized reads from
memory without causing undefined behavior, as the type system is able to
statically prove that no other write-capable views exist.
Parameters
&self
Returns
A BitDomain
structure representing the logical components of the
memory region.
Safety Exception
The following snippet describes a means of constructing a T::NoAlias
view into memory that is, in fact, aliased:
use bitvec::prelude::*; use core::sync::atomic::AtomicU8; type Bs<T> = BitSlice<LocalBits, T>; let data = [AtomicU8::new(0), AtomicU8::new(0), AtomicU8::new(0)]; let bits: &Bs<AtomicU8> = data.view_bits::<LocalBits>(); let subslice: &Bs<AtomicU8> = &bits[4 .. 20]; let (_, noalias, _): (_, &Bs<u8>, _) = subslice.bit_domain().region().unwrap();
The noalias
reference, which has memory type u8
, assumes that it can
act as an &u8
reference: unsynchronized loads are permitted, as no
handle exists which is capable of modifying the middle bit of data
.
This means that LLVM is permitted to issue loads from memory wherever
it wants in the block during which noalias
is live, as all loads are
equivalent.
Use of the bits
or subslice
handles, which are still live for the
lifetime of noalias
, to issue .set_aliased
calls into the middle
element introduces undefined behavior. bitvec permits safe code to
permits safe code to
introduce this undefined behavior solely because it requires deliberate
opt-in – you must start from atomic data; this cannot occur when data
is non-atomic – and use of the shared-mutation facility simultaneously
with the unaliasing view.
The .set_aliased
method is speculative, and will be marked as
unsafe
or removed at any suspicion that its presence in the library
has any costs.
Examples
This method can be used to accelerate reads from a slice that is marked as aliased.
use bitvec::prelude::*; type Bs<T> = BitSlice<LocalBits, T>; let mut data = [0u8; 3]; let bits = data.view_bits_mut::<LocalBits>(); let (a, b): ( &mut Bs<<u8 as BitStore>::Alias>, &mut Bs<<u8 as BitStore>::Alias>, ) = bits.split_at_mut(4); let (partial, full, _): ( &Bs<<u8 as BitStore>::Alias>, &Bs<<u8 as BitStore>::Mem>, _, ) = b.bit_domain().region().unwrap(); read_from(partial); // uses alias-aware reads read_from(full); // uses ordinary reads
pub fn bit_domain_mut(&mut self) -> BitDomainMut<'_, O, T>
[src]
Splits the slice into the logical components of its memory domain.
This produces a set of mutable subslices, marking as much as possible as
affirmatively lacking any other view (T::Mem
). The bare view is able
to safely perform unsynchronized reads from and writes to memory without
causing undefined behavior, as the type system is able to statically
prove that no other views exist.
Why This Is More Sound Than .bit_domain
The &mut
exclusion rule makes it impossible to construct two
references over the same memory where one of them is marked &mut
. This
makes it impossible to hold a live reference to memory separately from
any references produced from this method. For the duration of all
references produced by this method, all ancestor references used to
reach this method call are either suspended or dead, and the compiler
will not allow you to use them.
As such, this method cannot introduce undefined behavior where a reference incorrectly believes that the referent memory region is immutable.
pub fn domain(&self) -> Domain<'_, T>ⓘ
[src]
Splits the slice into immutable references to its underlying memory components.
Unlike .bit_domain
and .bit_domain_mut
, this does not return
smaller BitSlice
handles but rather appropriately-marked references to
the underlying memory elements.
The aliased references allow mutation of these elements. You are
required to not use mutating methods on these references at all. This
function is not marked unsafe
, but this is a contract you must uphold.
Use .domain_mut
to modify the underlying elements.
It is not currently possible to forbid mutation through these references. This may change in the future.
Safety Exception
As with .bit_domain
, this produces unsynchronized immutable
references over the fully-populated interior elements. If this view is
constructed from a BitSlice
handle over atomic memory, then it will
remove the atomic access behavior for the interior elements. This by
itself is safe, as long as no contemporaneous atomic writes to that
memory can occur. You must not retain and use an atomic reference to the
memory region marked as NoAlias
for the duration of this view’s
existence.
Parameters
&self
Returns
A read-only descriptor of the memory elements backing *self
.
pub fn domain_mut(&mut self) -> DomainMut<'_, T>
[src]
Splits the slice into mutable references to its underlying memory elements.
Like .domain
, this returns appropriately-marked references to the
underlying memory elements. These references are all writable.
The aliased edge references permit modifying memory beyond their bit
marker. You are required to only mutate the region of these edge
elements that you currently govern. This function is not marked
unsafe
, but this is a contract you must uphold.
It is not currently possible to forbid out-of-bounds mutation through these references. This may change in the future.
Parameters
&mut self
Returns
A descriptor of the memory elements underneath *self
, permitting
mutation.
pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&Self, &Self)
[src]
Splits a slice at some mid-point, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see split_at
.
Parameters
&self
mid
: The index at which to split the slice. This must be in the range 0 .. self.len()
.
Returns
.0
:&self[.. mid]
.1
:&self[mid ..]
Safety
This function is not safe. It performs raw pointer arithmetic to
construct two new references. If mid
is out of bounds, then the first
slice will be too large, and the second will be catastrophically
incorrect. As both are references to invalid memory, constructing them
is undefined behavior, and they must never be used.
Examples
use bitvec::prelude::*; let data = 0x0180u16; let bits = data.view_bits::<Msb0>(); let (one, two) = unsafe { bits.split_at_unchecked(8) }; assert!(one[7]); assert!(two[0]);
pub unsafe fn split_at_unchecked_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<O, T::Alias>, &mut BitSlice<O, T::Alias>)
[src]
Splits a mutable slice at some mid-point, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see split_at_mut
.
Parameters
&mut self
mid
: The index at which to split the slice. This must be in the range 0 .. self.len()
.
Returns
.0
:&mut self[.. mid]
.1
:&mut self[mid ..]
Safety
This function is not safe. It performs raw pointer arithmetic to
construct two new references. If mid
is out of bounds, then the first
slice will be too large, and the second will be catastrophically
incorrect. As both are references to invalid memory, constructing them
is undefined behavior, and they must never be used.
Examples
use bitvec::prelude::*; let mut data = 0u16; let bits = data.view_bits_mut::<Msb0>(); let (one, two) = unsafe { bits.split_at_unchecked_mut(8) }; one.set(7, true); two.set(0, true); assert_eq!(data, 0x0180u16);
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
[src]
Swaps the bits at two indices without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see swap
.
Parameters
&mut self
a
: One index to swap.
b
: The other index to swap.
Effects
The bit at index a
is written into index b
, and the bit at index b
is written into a
.
Safety
Both a
and b
must be less than self.len()
. Indices greater than
the length will cause out-of-bounds memory access, which can lead to
memory unsafety and a program crash.
Examples
use bitvec::prelude::*; let mut data = 8u8; let bits = data.view_bits_mut::<Msb0>(); unsafe { bits.swap_unchecked(0, 4); } assert_eq!(data, 128);
pub unsafe fn copy_unchecked(&mut self, from: usize, to: usize)
[src]
Copies a bit from one index to another without checking boundary conditions.
Parameters
&mut self
from
: The index whose bit is to be copied.
to
: The index into which the copied bit is written.
Effects
The bit at from
is written into to
.
Safety
Both from
and to
must be less than self.len()
, in order for
self
to legally read from and write to them, respectively.
If self
had been split from a larger slice, reading from from
or
writing to to
may not necessarily cause a memory-safety violation in
the Rust model, due to the aliasing system bitvec
employs. However,
writing outside the bounds of a slice reference is always a logical
error, as it causes changes observable by another reference handle.
Examples
use bitvec::prelude::*; let mut data = 1u8; let bits = data.view_bits_mut::<Lsb0>(); unsafe { bits.copy_unchecked(0, 2) }; assert_eq!(data, 5);
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize) where
R: RangeBounds<usize>,
[src]
Copies bits from one part of the slice to another part of itself.
src
is the range within self
to copy from. dest
is the starting
index of the range within self
to copy to, which will have the same
length as src
. The two ranges may overlap. The ends of the two ranges
must be less than or equal to self.len()
.
Effects
self[src]
is copied to self[dest .. dest + src.end() - src.start()]
.
Panics
This function will panic if either range exceeds the end of the slice,
or if the end of src
is before the start.
Safety
Both the src
range and the target range dest .. dest + src.len()
must not exceed the self.len()
slice range.
Examples
use bitvec::prelude::*; let mut data = 0x07u8; let bits = data.view_bits_mut::<Msb0>(); unsafe { bits.copy_within_unchecked(5 .., 0); } assert_eq!(data, 0xE7);
pub fn split_at_aliased_mut(&mut self, mid: usize) -> (&mut Self, &mut Self)
[src]
Splits a mutable slice at some mid-point.
This method has the same behavior as split_at_mut, except that it does not apply an aliasing marker to the partitioned subslices.
Safety
Because this method is defined only on BitSlices whose T type is alias-safe, the subslices do not need to be additionally marked.
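The shape of this API mirrors the standard library's slice splitting, which can illustrate why the two halves may be mutated independently. This sketch uses plain `[u8]` rather than `BitSlice` (which `split_at_aliased_mut` operates on), purely to show the disjoint-borrow pattern.

```rust
fn main() {
    // The standard-slice analogue of `split_at_mut`: the two halves
    // are disjoint regions, so each may be mutated through its own
    // exclusive reference without conflicting with the other.
    let mut data = [0u8; 4];
    let (left, right) = data.split_at_mut(2);
    left[0] = 1;
    right[0] = 2;
    assert_eq!(data, [1, 0, 2, 0]);
}
```

`split_at_aliased_mut` follows the same contract over bits, but skips the alias-marking step because its element type is already alias-safe.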
Trait Implementations
impl<O, V> AsMut<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> AsRef<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Binary for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V, Rhs> BitAnd<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: BitAndAssign<Rhs>,
[src]
type Output = Self
The resulting type after applying the & operator.
fn bitand(self, rhs: Rhs) -> Self::Output
[src]
impl<O, V, Rhs> BitAndAssign<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: BitAndAssign<Rhs>,
[src]
fn bitand_assign(&mut self, rhs: Rhs)
[src]
impl<O, V> BitField for BitArray<O, V> where
O: BitOrder,
V: BitView,
BitSlice<O, V::Store>: BitField,
[src]
fn load_le<M>(&self) -> M where
M: BitMemory,
[src]
fn load_be<M>(&self) -> M where
M: BitMemory,
[src]
fn store_le<M>(&mut self, value: M) where
M: BitMemory,
[src]
fn store_be<M>(&mut self, value: M) where
M: BitMemory,
[src]
fn load<M>(&self) -> M where
M: BitMemory,
[src]
fn store<M>(&mut self, value: M) where
M: BitMemory,
[src]
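The BitField load/store methods treat a sub-span of bits as an unsigned integer, as in the struct-level example's fields[4 .. 16].load(). A plain-integer model of that behavior on a single element (assuming Lsb0-style addressing; the helper names are hypothetical and not part of bitvec) looks like:

```rust
// Model of `BitField::load`/`store` on a single element: treat the
// `len` bits starting at `lsb` (Lsb0 addressing) as an unsigned value.
fn load_field(word: u32, lsb: u32, len: u32) -> u32 {
    (word >> lsb) & ((1u32 << len) - 1)
}

fn store_field(word: u32, lsb: u32, len: u32, value: u32) -> u32 {
    let mask = ((1u32 << len) - 1) << lsb;
    // Clear the field, then write `value` into it, masked to width.
    (word & !mask) | ((value << lsb) & mask)
}

fn main() {
    // Round-trip a 12-bit value through bits 4..16 of a word.
    let w = store_field(0, 4, 12, 0xABC);
    assert_eq!(load_field(w, 4, 12), 0xABC);
}
```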
impl<O, V, Rhs> BitOr<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: BitOrAssign<Rhs>,
[src]
type Output = Self
The resulting type after applying the | operator.
fn bitor(self, rhs: Rhs) -> Self::Output
[src]
impl<O, V, Rhs> BitOrAssign<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: BitOrAssign<Rhs>,
[src]
fn bitor_assign(&mut self, rhs: Rhs)
[src]
impl<O, V, Rhs> BitXor<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: BitXorAssign<Rhs>,
[src]
type Output = Self
The resulting type after applying the ^ operator.
fn bitxor(self, rhs: Rhs) -> Self::Output
[src]
impl<O, V, Rhs> BitXorAssign<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: BitXorAssign<Rhs>,
[src]
fn bitxor_assign(&mut self, rhs: Rhs)
[src]
impl<O, V> Borrow<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> BorrowMut<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
fn borrow_mut(&mut self) -> &mut BitSlice<O, V::Store>
[src]
impl<O: Clone, V: Clone> Clone for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O: Copy, V: Copy> Copy for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Debug for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Default for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Deref for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
type Target = BitSlice<O, V::Store>
The resulting type after dereferencing.
fn deref(&self) -> &Self::Target
[src]
impl<O, V> DerefMut for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<'de, O, T> Deserialize<'de> for BitArray<O, T> where
O: BitOrder,
T: BitStore + BitRegister,
T::Mem: Deserialize<'de>,
[src]
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
[src]
impl<'de, O, T> Deserialize<'de> for BitArray<O, [T; N]> where
O: BitOrder,
T: BitStore,
T::Mem: Deserialize<'de>,
[src]
(an impl of this shape exists for every array length N in 0 ..= 32)
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
[src]
impl<O, V> Display for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Eq for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> From<V> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Hash for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
fn hash<H>(&self, hasher: &mut H) where
H: Hasher,
[src]
fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher,
1.3.0[src]
impl<O, V, Idx> Index<Idx> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: Index<Idx>,
[src]
type Output = <BitSlice<O, V::Store> as Index<Idx>>::Output
The returned type after indexing.
fn index(&self, index: Idx) -> &Self::Output
[src]
impl<O, V, Idx> IndexMut<Idx> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
BitSlice<O, V::Store>: IndexMut<Idx>,
[src]
impl<'a, O, V> IntoIterator for &'a BitArray<O, V> where
O: 'a + BitOrder,
V: 'a + BitView + Sized,
[src]
type IntoIter = <&'a BitSlice<O, V::Store> as IntoIterator>::IntoIter
Which kind of iterator are we turning this into?
type Item = <&'a BitSlice<O, V::Store> as IntoIterator>::Item
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter
[src]
impl<'a, O, V> IntoIterator for &'a mut BitArray<O, V> where
O: 'a + BitOrder,
V: 'a + BitView + Sized,
[src]
type IntoIter = <&'a mut BitSlice<O, V::Store> as IntoIterator>::IntoIter
Which kind of iterator are we turning this into?
type Item = <&'a mut BitSlice<O, V::Store> as IntoIterator>::Item
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter
[src]
impl<O, V> LowerHex for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Not for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
type Output = Self
The resulting type after applying the ! operator.
fn not(self) -> Self::Output
[src]
impl<O, V> Octal for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> Ord for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
fn cmp(&self, other: &Self) -> Ordering
[src]
#[must_use] fn max(self, other: Self) -> Self
1.21.0[src]
#[must_use] fn min(self, other: Self) -> Self
1.21.0[src]
#[must_use] fn clamp(self, min: Self, max: Self) -> Self
[src]
impl<O, V, T> PartialEq<BitArray<O, V>> for BitSlice<O, T> where
O: BitOrder,
V: BitView + Sized,
T: BitStore,
[src]
fn eq(&self, other: &BitArray<O, V>) -> bool
[src]
#[must_use] fn ne(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O, V, Rhs> PartialEq<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
Rhs: ?Sized,
BitSlice<O, V::Store>: PartialEq<Rhs>,
[src]
impl<O, V, T> PartialOrd<BitArray<O, V>> for BitSlice<O, T> where
O: BitOrder,
V: BitView + Sized,
T: BitStore,
[src]
fn partial_cmp(&self, other: &BitArray<O, V>) -> Option<Ordering>
[src]
#[must_use] fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use] fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use] fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use] fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O, V, Rhs> PartialOrd<Rhs> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
Rhs: ?Sized,
BitSlice<O, V::Store>: PartialOrd<Rhs>,
[src]
fn partial_cmp(&self, other: &Rhs) -> Option<Ordering>
[src]
#[must_use] fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use] fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use] fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use] fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O, V> Serialize for BitArray<O, V> where
O: BitOrder,
V: BitView,
V::Mem: Serialize,
[src]
impl<O, O2, T, V, '_> TryFrom<&'_ BitSlice<O2, T>> for BitArray<O, V> where
O: BitOrder,
O2: BitOrder,
T: BitStore,
V: BitView + Sized,
[src]
type Error = TryFromBitSliceError
The type returned in the event of a conversion error.
fn try_from(src: &BitSlice<O2, T>) -> Result<Self, Self::Error>
[src]
impl<'a, O, V> TryFrom<&'a BitSlice<O, <V as BitView>::Store>> for &'a BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
type Error = TryFromBitSliceError
The type returned in the event of a conversion error.
fn try_from(src: &'a BitSlice<O, V::Store>) -> Result<Self, Self::Error>
[src]
impl<'a, O, V> TryFrom<&'a mut BitSlice<O, <V as BitView>::Store>> for &'a mut BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
type Error = TryFromBitSliceError
The type returned in the event of a conversion error.
fn try_from(src: &'a mut BitSlice<O, V::Store>) -> Result<Self, Self::Error>
[src]
impl<O, V> Unpin for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
impl<O, V> UpperHex for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized,
[src]
Auto Trait Implementations
impl<O, V> RefUnwindSafe for BitArray<O, V> where
O: RefUnwindSafe,
V: RefUnwindSafe,
impl<O, V> Send for BitArray<O, V> where
O: Send,
V: Send,
impl<O, V> Sync for BitArray<O, V> where
O: Sync,
V: Sync,
impl<O, V> UnwindSafe for BitArray<O, V> where
O: UnwindSafe,
V: UnwindSafe,
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized,
[src]
impl<T> Borrow<T> for T where
T: ?Sized,
[src]
impl<T> BorrowMut<T> for T where
T: ?Sized,
[src]
fn borrow_mut(&mut self) -> &mut T
[src]
impl<T> Conv for T
[src]
impl<T> Conv for T
[src]
impl<T> DeserializeOwned for T where
T: for<'de> Deserialize<'de>,
[src]
impl<T> FmtForward for T
[src]
fn fmt_binary(self) -> FmtBinary<Self> where
Self: Binary,
[src]
fn fmt_display(self) -> FmtDisplay<Self> where
Self: Display,
[src]
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where
Self: LowerExp,
[src]
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where
Self: LowerHex,
[src]
fn fmt_octal(self) -> FmtOctal<Self> where
Self: Octal,
[src]
fn fmt_pointer(self) -> FmtPointer<Self> where
Self: Pointer,
[src]
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where
Self: UpperExp,
[src]
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where
Self: UpperHex,
[src]
impl<T> From<!> for T
[src]
impl<T> From<T> for T
[src]
impl<T, U> Into<U> for T where
U: From<T>,
[src]
impl<T> Pipe for T where
T: ?Sized,
[src]
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
[src]
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a,
[src]
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a,
[src]
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R where
B: 'a + ?Sized,
R: 'a,
Self: Borrow<B>,
[src]
fn pipe_borrow_mut<'a, B, R>(
&'a mut self,
func: impl FnOnce(&'a mut B) -> R
) -> R where
B: 'a + ?Sized,
R: 'a,
Self: BorrowMut<B>,
[src]
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R where
R: 'a,
Self: AsRef<U>,
U: 'a + ?Sized,
[src]
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R where
R: 'a,
Self: AsMut<U>,
U: 'a + ?Sized,
[src]
fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: Deref<Target = T>,
T: 'a + ?Sized,
[src]
fn pipe_deref_mut<'a, T, R>(
&'a mut self,
func: impl FnOnce(&'a mut T) -> R
) -> R where
R: 'a,
Self: DerefMut<Target = T> + Deref,
T: 'a + ?Sized,
[src]
impl<T> Pipe for T
[src]
impl<T> PipeAsRef for T
[src]
fn pipe_as_ref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: AsRef<T>,
T: 'a,
[src]
fn pipe_as_mut<'a, T, R>(&'a mut self, func: impl FnOnce(&'a mut T) -> R) -> R where
R: 'a,
Self: AsMut<T>,
T: 'a,
[src]
impl<T> PipeBorrow for T
[src]
fn pipe_borrow<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: Borrow<T>,
T: 'a,
[src]
fn pipe_borrow_mut<'a, T, R>(
&'a mut self,
func: impl FnOnce(&'a mut T) -> R
) -> R where
R: 'a,
Self: BorrowMut<T>,
T: 'a,
[src]
impl<T> PipeDeref for T
[src]
fn pipe_deref<'a, R>(&'a self, func: impl FnOnce(&'a Self::Target) -> R) -> R where
R: 'a,
Self: Deref,
[src]
fn pipe_deref_mut<'a, R>(
&'a mut self,
func: impl FnOnce(&'a mut Self::Target) -> R
) -> R where
R: 'a,
Self: DerefMut,
[src]
impl<T> PipeRef for T
[src]
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a,
[src]
fn pipe_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a,
[src]
impl<T> Tap for T
[src]
fn tap(self, func: impl FnOnce(&Self)) -> Self
[src]
fn tap_mut(self, func: impl FnOnce(&mut Self)) -> Self
[src]
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self where
B: ?Sized,
Self: Borrow<B>,
[src]
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self where
B: ?Sized,
Self: BorrowMut<B>,
[src]
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self where
R: ?Sized,
Self: AsRef<R>,
[src]
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self where
R: ?Sized,
Self: AsMut<R>,
[src]
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self where
Self: Deref<Target = T>,
T: ?Sized,
[src]
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self where
Self: DerefMut<Target = T> + Deref,
T: ?Sized,
[src]
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
[src]
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
[src]
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self where
B: ?Sized,
Self: Borrow<B>,
[src]
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self where
B: ?Sized,
Self: BorrowMut<B>,
[src]
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self where
R: ?Sized,
Self: AsRef<R>,
[src]
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self where
R: ?Sized,
Self: AsMut<R>,
[src]
fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self where
Self: Deref<Target = T>,
T: ?Sized,
[src]
fn tap_deref_mut_dbg<T>(self, func: impl FnOnce(&mut T)) -> Self where
Self: DerefMut<Target = T> + Deref,
T: ?Sized,
[src]
impl<T> Tap for T
[src]
fn tap<F, R>(self, func: F) -> Self where
F: FnOnce(&Self) -> R,
[src]
fn tap_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&Self) -> R,
[src]
fn tap_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self) -> R,
[src]
fn tap_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self) -> R,
[src]
impl<T, U> TapAsRef<U> for T where
U: ?Sized,
[src]
fn tap_ref<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: AsRef<T>,
[src]
fn tap_ref_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: AsRef<T>,
[src]
fn tap_ref_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: AsMut<T>,
[src]
fn tap_ref_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: AsMut<T>,
[src]
impl<T, U> TapBorrow<U> for T where
U: ?Sized,
[src]
fn tap_borrow<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: Borrow<T>,
[src]
fn tap_borrow_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: Borrow<T>,
[src]
fn tap_borrow_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>,
[src]
fn tap_borrow_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>,
[src]
impl<T> TapDeref for T
[src]
fn tap_deref<F, R>(self, func: F) -> Self where
F: FnOnce(&Self::Target) -> R,
Self: Deref,
[src]
fn tap_deref_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&Self::Target) -> R,
Self: Deref,
[src]
fn tap_deref_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut,
[src]
fn tap_deref_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut,
[src]
impl<T> ToOwned for T where
T: Clone,
[src]
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T
[src]
fn clone_into(&self, target: &mut T)
[src]
impl<T> ToString for T where
T: Display + ?Sized,
[src]
impl<T> TryConv for T
[src]
impl<T> TryConv for T
[src]
impl<T, U> TryFrom<U> for T where
U: Into<T>,
[src]
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
[src]
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
[src]