Struct bitvec::slice::BitSlice
A slice of individual bits, anywhere in memory.
This is the main working type of the crate. It is analogous to [bool], and is
written to be as close to a drop-in replacement for it as possible. This type
contains most of the methods used to operate on memory, but it will rarely be
named directly in your code. You should generally prefer to use BitArray for
fixed-size arrays or BitVec for dynamic vectors, and use &BitSlice
references only where you would directly use &[bool] or &[u8] references
before using this crate.
As it is a slice wrapper, you are intended to work with this through references
(&BitSlice<O, T> and &mut BitSlice<O, T>) or through the other data
structures provided by bitvec that are implemented atop it. Once created,
references to BitSlice are guaranteed to work just like references to [bool]
to the fullest extent possible in the Rust language.
Every bit-vector crate can give you an opaque type that hides shift/mask
operations from you. BitSlice does far more than this: it offers you the full
Rust guarantees about reference behavior, including lifetime tracking,
mutability and aliasing awareness, and explicit memory control, as well as the
full set of tools and APIs available to the standard [bool] slice type.
BitSlice can arbitrarily split and subslice, just like [bool]. You can write
a linear consuming function and keep the patterns you already know.
For example, to trim all the bits off either edge that match a condition, you could write
use bitvec::prelude::*;

fn trim<O: BitOrder, T: BitStore>(
    bits: &BitSlice<O, T>,
    to_trim: bool,
) -> &BitSlice<O, T> {
    let stop = |b: &bool| *b != to_trim;
    let front = bits.iter().position(stop).unwrap_or(0);
    let back = bits.iter().rposition(stop).unwrap_or(0);
    &bits[front ..= back]
}
to get behavior something like
trim(&BitSlice[0, 0, 1, 1, 0, 1, 0], false) == &BitSlice[1, 1, 0, 1].
Documentation
All APIs that mirror something in the standard library will have an Original
section linking to the corresponding item. All APIs that have a different
signature or behavior than the original will have an API Differences section
explaining what has changed, and how to adapt your existing code to the change.
These sections look like this:
Original
API Differences
The slice type [bool] has no type parameters. BitSlice<O, T> has two: one
for the memory type used as backing storage, and one for the order of bits
within that memory type.
&BitSlice<O, T> is capable of producing &bool references to read bits out
of its memory, but is not capable of producing &mut bool references to write
bits into its memory. Any [bool] API that would produce a &mut bool will
instead produce a BitMut<O, T> proxy reference.
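For illustration, a minimal sketch of writing through the proxy (using the get_mut method documented below) might look like this:
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
// `get_mut` yields a `BitMut` proxy rather than `&mut bool`;
// it must be bound as `mut` in order to assign through it.
if let Some(mut bit) = bits.get_mut(0) {
    *bit = true;
}
assert_eq!(data, 1);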
Behavior
BitSlice is a wrapper over [T]. It describes a region of memory, and must be
handled indirectly. This is most commonly through the reference types
&BitSlice and &mut BitSlice, which borrow memory owned by some other value
in the program. These buffers can be directly owned by the sibling types
BitBox, which behaves like Box<[T]>, and BitVec, which behaves like
Vec<T>. It cannot be used as the type parameter to a standard-library-provided
handle type.
The BitSlice region provides access to each individual bit in the region, as
if each bit had a memory address that you could use to dereference it. It packs
each logical bit into exactly one bit of storage memory, just like
std::bitset and std::vector<bool> in C++.
Type Parameters
BitSlice has two type parameters which propagate through nearly every public
API in the crate. These are very important to its operation, and your choice
of type arguments informs nearly every part of this library’s behavior.
T: BitStore
This is the simpler of the two parameters. It refers to the integer type used to
hold bits. It must be one of the Rust unsigned integer fundamentals: u8,
u16, u32, usize, and on 64-bit systems only, u64. In addition, it can
also be the Cell<N> wrapper over any of those, or their equivalent types in
core::sync::atomic. Unless you know you need to have Cell or atomic
properties, though, you should use a plain integer.
The default type argument is usize.
The argument you choose is used as the basis of a [T] slice, over which the
BitSlice view type is placed. BitSlice<_, T> is subject to all of the rules
about alignment that [T] is. If you are working with in-memory representation
formats, chances are that you already have a T type with which you’ve been
working, and should use it here.
If you are only using this crate to discard the seven wasted bits per bool
of a collection of bools, and are not too concerned about the in-memory
representation, then you should use the default type argument of usize. This
is because most processors work best when moving an entire usize between
memory and the processor itself, and using a smaller type may cause it to slow
down.
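As a brief sketch, the storage parameter simply selects the element type underneath the bit view:
use bitvec::prelude::*;

// The same number of bits can be viewed through different storage types.
let bytes = [0u8; 4];
let by_byte: &BitSlice<Lsb0, u8> = bytes.view_bits::<Lsb0>();
let words = [0u32; 1];
let by_word: &BitSlice<Lsb0, u32> = words.view_bits::<Lsb0>();
assert_eq!(by_byte.len(), by_word.len()); // both views cover 32 bits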
O: BitOrder
This is the more complex parameter. It has a default argument which, like
usize, is the good-enough choice when you do not explicitly need to control
the representation of bits in memory.
This parameter determines how to index the bits within a single memory element
T. Computers all agree that in a slice of elements T, the element with the
lower index has a lower memory address than the element with the higher index.
But the individual bits within an element do not have addresses, and so there is
no uniform standard of which bit is the zeroth, which is the first, which is the
penultimate, and which is the last.
To make matters even more confusing, there are two predominant ideas of
in-element ordering that often correlate with the in-element byte ordering
of integer types, but are in fact wholly unrelated! bitvec provides these two
main orders as types for you, and if you need a different one, it also provides
the tools you need to make your own.
Least Significant Bit Comes First
This ordering, named the Lsb0 type, indexes bits within an element by
placing the 0 index at the least significant bit (numeric value 1) and the
final index at the most significant bit (numeric value T::min_value(), for
signed integers on most machines).
For example, this is the ordering used by most C compilers to lay out bit-field struct members on little-endian byte-ordered machines.
Most Significant Bit Comes First
This ordering, named the Msb0 type, indexes bits within an element by
placing the 0 index at the most significant bit (numeric value T::min_value()
for most signed integers) and the final index at the least significant bit
(numeric value 1).
This is the ordering used by the TCP wire format, and by most C compilers to lay out bit-field struct members on big-endian byte-ordered machines.
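A small sketch of how the two orderings index the same byte differently:
use bitvec::prelude::*;

let byte = 1u8; // only the least significant bit is set
assert!(byte.view_bits::<Lsb0>()[0]);  // Lsb0 places index 0 at the LSB
assert!(byte.view_bits::<Msb0>()[7]);  // Msb0 places index 7 at the LSB
assert!(!byte.view_bits::<Msb0>()[0]); // the MSB of 1u8 is clear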
Default Ordering
The default ordering is Lsb0, as it typically produces shorter object code
than Msb0 does. If you are implementing a collection, then Lsb0 is likely
the more performant ordering; if you are implementing a buffer protocol, then
your choice of ordering is dictated by the protocol definition.
Safety
BitSlice is designed to never introduce new memory unsafety that you did not
provide yourself, either before or during the use of this crate. Bugs can, and
have, occurred, and you are encouraged to submit any discovered flaw as a defect
report.
The &BitSlice reference type uses a private encoding scheme to hold all the
information needed in its stack value. This encoding is not part of the
public API of the library, and is not binary-compatible with &[T].
Furthermore, in order to satisfy Rust’s requirements about alias conditions,
BitSlice performs type transformations on the T parameter to ensure that it
never creates the potential for undefined behavior.
You must never attempt to type-cast a reference to BitSlice in any way. You
must not use mem::transmute with BitSlice anywhere in its type arguments.
You must not use as-casting to convert between *BitSlice and any other type.
You must not attempt to modify the binary representation of a &BitSlice
reference value. These actions will all lead to runtime memory unsafety, are
(hopefully) likely to induce a program crash, and may possibly cause undefined
behavior at compile-time.
Everything in the BitSlice public API, even the unsafe parts, is guaranteed
to have no more unsafety than the equivalent parts of the standard library.
All unsafe APIs will have documentation explicitly detailing what the API
requires you to uphold in order for it to function safely and correctly. All
safe APIs will do so themselves.
Performance
Like the standard library’s [T] slice, BitSlice is designed to be very easy
to use safely, while supporting unsafe when necessary. Rust has a powerful
optimizing engine, and BitSlice will frequently be compiled to have zero
runtime cost. Where it is slower, it will not be significantly slower than a
manual replacement.
As the machine instructions operate on registers rather than bits, your choice
of T: BitStore type parameter can influence your slice’s performance. Using
larger register types means that slices can gallop over completely-filled
interior elements faster, while narrower register types permit more graceful
handling of subslicing and aliased splits.
Construction
BitSlice views of memory can be constructed over borrowed data in a number of
ways. As this is a reference-only type, it can only ever be built by borrowing
an existing memory buffer and taking temporary control of your program’s view of
the region.
Macro Constructor
BitSlice buffers can be constructed at compile-time through the bits!
macro. This macro accepts a superset of the vec! arguments, and creates an
appropriate buffer in your program’s static memory.
use bitvec::prelude::*;

let static_borrow = bits![0, 1, 0, 0, 1, 0, 0, 1];
let mutable_static: &mut BitSlice<_, _> = bits![mut 0; 8];

assert_ne!(static_borrow, mutable_static);
mutable_static.clone_from_bitslice(static_borrow);
assert_eq!(static_borrow, mutable_static);
Note that, despite constructing a static mut binding, the bits![mut …] call
is not unsafe, as the constructed symbol is hidden and only accessible by the
sole &mut reference returned by the macro call.
Borrowing Constructors
The functions [from_element], [from_element_mut], [from_slice], and
[from_slice_mut] take references to existing memory, and construct BitSlice
references over them. These are the most basic ways to borrow memory and view it
as bits.
use bitvec::prelude::*; let data = [0u16; 3]; let local_borrow = BitSlice::<Lsb0, _>::from_slice(&data); let mut data = [0u8; 5]; let local_mut = BitSlice::<Lsb0, _>::from_slice_mut(&mut data);
Trait Method Constructors
The BitView trait implements .view_bits::<O>() and .view_bits_mut::<O>()
methods on elements, on arrays of up to 32 elements, and on slices. This trait,
imported in the crate prelude, is probably the easiest way for you to borrow
memory.
use bitvec::prelude::*; let data = [0u32; 5]; let trait_view = data.view_bits::<Msb0>(); let mut data = 0usize; let trait_mut = data.view_bits_mut::<Msb0>();
Owned Bit Slices
If you wish to take ownership of a memory region and enforce that it is always
viewed as a BitSlice by default, you can use one of the BitArray,
BitBox, or BitVec types, rather than pairing ordinary buffer types with
the borrowing constructors.
use bitvec::prelude::*;

let slice = bits![0; 27];
let array = bitarr![LocalBits, u8; 0; 10];
let boxed = bitbox![0; 10];
let vec = bitvec![0; 20];

// arrays always round up
assert_eq!(array.as_bitslice(), slice[.. 16]);
assert_eq!(boxed.as_bitslice(), slice[.. 10]);
assert_eq!(vec.as_bitslice(), slice[.. 20]);
Implementations
impl<O, T> BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
Port of the [T] inherent API.
pub fn len(&self) -> usize[src]
Returns the number of bits in the slice.
Original
Examples
use bitvec::prelude::*; let data = 0u32; let bits = data.view_bits::<LocalBits>(); assert_eq!(bits.len(), 32);
pub fn is_empty(&self) -> bool[src]
Returns true if the slice has a length of 0.
Original
Examples
use bitvec::prelude::*; assert!(BitSlice::<LocalBits, u8>::empty().is_empty()); assert!(!(0u32.view_bits::<LocalBits>()).is_empty());
pub fn first(&self) -> Option<&bool>[src]
Returns the first bit of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*; let data = 1u8; let bits = data.view_bits::<Lsb0>(); assert_eq!(Some(&true), bits.first()); let empty = BitSlice::<LocalBits, usize>::empty(); assert_eq!(None, empty.first());
pub fn first_mut(&mut self) -> Option<BitMut<'_, O, T>>[src]
Returns a mutable pointer to the first bit of the slice, or None if it
is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the
BitMut proxy type where &mut bool exists in the standard library
API. The proxy value must be bound as mut in order to write through
it.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Lsb0>(); if let Some(mut first) = bits.first_mut() { *first = true; } assert_eq!(data, 1);
pub fn split_first(&self) -> Option<(&bool, &Self)>[src]
Returns the first and all the rest of the bits of the slice, or None
if it is empty.
Original
Examples
use bitvec::prelude::*; let data = 1u8; let bits = data.view_bits::<Lsb0>(); if let Some((first, rest)) = bits.split_first() { assert!(*first); }
pub fn split_first_mut(
&mut self
) -> Option<(BitMut<'_, O, T::Alias>, &mut BitSlice<O, T::Alias>)>[src]
Returns the first and all the rest of the bits of the slice, or None
if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the
BitMut proxy type where &mut bool exists in the standard library
API. The proxy value must be bound as mut in order to write through
it.
Because the references are permitted to use the same memory address, they are marked as aliasing in order to satisfy Rust’s requirements about freedom from data races.
Examples
use bitvec::prelude::*; let mut data = 0usize; let bits = data.view_bits_mut::<Lsb0>(); if let Some((mut first, rest)) = bits.split_first_mut() { *first = true; *rest.get_mut(1).unwrap() = true; } assert_eq!(data, 5); assert!(BitSlice::<LocalBits, usize>::empty_mut().split_first_mut().is_none());
pub fn split_last(&self) -> Option<(&bool, &Self)>[src]
Returns the last and all the rest of the bits of the slice, or None if
it is empty.
Original
Examples
use bitvec::prelude::*; let data = 1u8; let bits = data.view_bits::<Msb0>(); if let Some((last, rest)) = bits.split_last() { assert!(*last); }
pub fn split_last_mut(
&mut self
) -> Option<(BitMut<'_, O, T::Alias>, &mut BitSlice<O, T::Alias>)>[src]
Returns the last and all the rest of the bits of the slice, or None if
it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the
BitMut proxy type where &mut bool exists in the standard library
API. The proxy value must be bound as mut in order to write through
it.
Because the references are permitted to use the same memory address, they are marked as aliasing in order to satisfy Rust’s requirements about freedom from data races.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); if let Some((mut last, rest)) = bits.split_last_mut() { *last = true; *rest.get_mut(5).unwrap() = true; } assert_eq!(data, 5); assert!(BitSlice::<LocalBits, usize>::empty_mut().split_last_mut().is_none());
pub fn last(&self) -> Option<&bool>[src]
Returns the last bit of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*; let data = 1u8; let bits = data.view_bits::<Msb0>(); assert_eq!(Some(&true), bits.last()); let empty = BitSlice::<LocalBits, usize>::empty(); assert_eq!(None, empty.last());
pub fn last_mut(&mut self) -> Option<BitMut<'_, O, T>>[src]
Returns a mutable pointer to the last bit of the slice, or None if it
is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the
BitMut proxy type where &mut bool exists in the standard library
API. The proxy value must be bound as mut in order to write through
it.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); if let Some(mut last) = bits.last_mut() { *last = true; } assert_eq!(data, 1);
pub fn get<'a, I>(&'a self, index: I) -> Option<I::Immut> where
I: BitSliceIndex<'a, O, T>, [src]
Returns a reference to an element or subslice depending on the type of index.
- If given a position, returns a reference to the element at that position, or None if out of bounds.
- If given a range, returns the subslice corresponding to that range, or None if out of bounds.
Original
Examples
use bitvec::prelude::*; let data = 2u8; let bits = data.view_bits::<Lsb0>(); assert_eq!(Some(&true), bits.get(1)); assert_eq!(Some(&bits[1 .. 3]), bits.get(1 .. 3)); assert_eq!(None, bits.get(9)); assert_eq!(None, bits.get(8 .. 10));
pub fn get_mut<'a, I>(&'a mut self, index: I) -> Option<I::Mut> where
I: BitSliceIndex<'a, O, T>, [src]
Returns a mutable reference to an element or subslice depending on the
type of index (see get) or None if the index is out of bounds.
Original
API Differences
When I is usize, this returns BitMut instead of &mut bool.
Examples
use bitvec::prelude::*; let mut data = 0u16; let bits = data.view_bits_mut::<Lsb0>(); assert!(!bits.get(1).unwrap()); *bits.get_mut(1).unwrap() = true; assert!(bits.get(1).unwrap());
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> I::Immut where
I: BitSliceIndex<'a, O, T>, [src]
Returns a reference to an element or subslice, without doing bounds checking.
This is generally not recommended; use with caution!
Unlike the original slice function, calling this with an out-of-bounds
index is not technically compile-time undefined behavior, as the
references produced do not actually describe local memory. However, the
use of an out-of-bounds index will eventually cause an out-of-bounds
memory read, which is a runtime safety violation. For a safe alternative
see get.
Original
Examples
use bitvec::prelude::*; let data = 2u16; let bits = data.view_bits::<Lsb0>(); unsafe{ assert_eq!(bits.get_unchecked(1), &true); }
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> I::Mut where
I: BitSliceIndex<'a, O, T>, [src]
Returns a mutable reference to the output at this location, without doing bounds checking.
This is generally not recommended; use with caution!
Unlike the original slice function, calling this with an out-of-bounds
index is not technically compile-time undefined behavior, as the
references produced do not actually describe local memory. However, the
use of an out-of-bounds index will eventually cause an out-of-bounds
memory write, which is a runtime safety violation. For a safe
alternative see get_mut.
Original
Examples
use bitvec::prelude::*; let mut data = 0u16; let bits = data.view_bits_mut::<Lsb0>(); unsafe { let mut bit = bits.get_unchecked_mut(1); *bit = true; } assert_eq!(data, 2);
pub fn as_ptr(&self) -> *const Self[src]
Returns a raw bit-slice pointer to the region.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
The caller must also ensure that the memory the pointer
(non-transitively) points to is only written to if T allows shared
mutation, using this pointer or any pointer derived from it. If you need
to mutate the contents of the slice, use as_mut_ptr.
Modifying the container (such as BitVec) referenced by this slice may
cause its buffer to be reällocated, which would also make any pointers
to it invalid.
Original
API Differences
This returns *const BitSlice, which is the equivalent of *const [T]
instead of *const T. The pointer encoding used requires more than one
CPU word of space to address a single bit, so there is no advantage to
removing the length information from the encoded pointer value.
Notes
You cannot use any of the methods in the pointer fundamental type
or the core::ptr module on the *_ BitSlice type. This pointer
retains the bitvec-specific value encoding, and is incomprehensible by
the Rust standard library.
The only thing you can do with this pointer is dereference it.
Examples
use bitvec::prelude::*; let data = 2u16; let bits = data.view_bits::<Lsb0>(); let bits_ptr = bits.as_ptr(); for i in 0 .. bits.len() { assert_eq!(bits[i], unsafe { (&*bits_ptr)[i] }); }
pub fn as_mut_ptr(&mut self) -> *mut Self[src]
Returns an unsafe mutable bit-slice pointer to the region.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
Modifying the container (such as BitVec) referenced by this slice may
cause its buffer to be reällocated, which would also make any pointers
to it invalid.
Original
API Differences
This returns *mut BitSlice, which is the equivalent of *mut [T]
instead of *mut T. The pointer encoding used requires more than one
CPU word of space to address a single bit, so there is no advantage to
removing the length information from the encoded pointer value.
Notes
You cannot use any of the methods in the pointer fundamental type
or the core::ptr module on the *_ BitSlice type. This pointer
retains the bitvec-specific value encoding, and is incomprehensible by
the Rust standard library.
Examples
use bitvec::prelude::*; let mut data = 0u16; let bits = data.view_bits_mut::<Lsb0>(); let bits_ptr = bits.as_mut_ptr(); for i in 0 .. bits.len() { unsafe { &mut *bits_ptr }.set(i, i % 2 == 0); } assert_eq!(data, 0b0101_0101_0101_0101);
pub fn swap(&mut self, a: usize, b: usize)[src]
Swaps two bits in the slice.
Original
Arguments
a: The index of the first bit
b: The index of the second bit
Panics
Panics if a or b are out of bounds.
Examples
use bitvec::prelude::*; let mut data = 2u8; let bits = data.view_bits_mut::<Lsb0>(); bits.swap(1, 3); assert_eq!(data, 8);
pub fn reverse(&mut self)[src]
Reverses the order of bits in the slice, in place.
Original
Examples
use bitvec::prelude::*; let mut data = 0b1_1001100u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 ..].reverse(); assert_eq!(data, 0b1_0011001);
pub fn iter(&self) -> Iter<'_, O, T>ⓘ[src]
Returns an iterator over the slice.
Original
Examples
use bitvec::prelude::*; let data = 130u8; let bits = data.view_bits::<Lsb0>(); let mut iterator = bits.iter(); assert_eq!(iterator.next(), Some(&false)); assert_eq!(iterator.next(), Some(&true)); assert_eq!(iterator.nth(5), Some(&true)); assert_eq!(iterator.next(), None);
pub fn iter_mut(&mut self) -> IterMut<'_, O, T>ⓘ[src]
Returns an iterator that allows modifying each bit.
Original
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); for (idx, mut elem) in bits.iter_mut().enumerate() { *elem = idx % 3 == 0; } assert_eq!(data, 0b100_100_10);
pub fn windows(&self, size: usize) -> Windows<'_, O, T>ⓘ[src]
Returns an iterator over all contiguous windows of length size. The
windows overlap. If the slice is shorter than size, the iterator
returns no values.
Original
Panics
Panics if size is 0.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.windows(6); assert_eq!(iter.next().unwrap(), &bits[.. 6]); assert_eq!(iter.next().unwrap(), &bits[1 .. 7]); assert_eq!(iter.next().unwrap(), &bits[2 ..]); assert!(iter.next().is_none());
If the slice is shorter than size:
use bitvec::prelude::*; let bits = BitSlice::<LocalBits, usize>::empty(); let mut iter = bits.windows(1); assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the beginning of the slice.
The chunks are slices and do not overlap. If chunk_size does not
divide the length of the slice, then the last chunk will not have length
chunk_size.
See chunks_exact for a variant of this iterator that returns chunks
of always exactly chunk_size bits, and rchunks for the same
iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Lsb0>(); let mut iter = bits.chunks(3); assert_eq!(iter.next().unwrap(), &bits[.. 3]); assert_eq!(iter.next().unwrap(), &bits[3 .. 6]); assert_eq!(iter.next().unwrap(), &bits[6 ..]); assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does
not divide the length of the slice, then the last chunk will not have
length chunk_size.
See chunks_exact_mut for a variant of this iterator that returns
chunks of always exactly chunk_size bits, and rchunks_mut for the
same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Lsb0>(); for (idx, chunk) in bits.chunks_mut(3).enumerate() { chunk.set(2 - idx, true); } assert_eq!(data, 0b01_010_100);
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the beginning of the slice.
The chunks are slices and do not overlap. If chunk_size does not
divide the length of the slice, then the last up to chunk_size-1 bits
will be omitted and can be retrieved from the remainder function of
the iterator.
Due to each chunk having exactly chunk_size bits, the compiler may
optimize the resulting code better than in the case of chunks.
See chunks for a variant of this iterator that also returns the
remainder as a smaller chunk, and rchunks_exact for the same
iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Lsb0>(); let mut iter = bits.chunks_exact(3); assert_eq!(iter.next().unwrap(), &bits[.. 3]); assert_eq!(iter.next().unwrap(), &bits[3 .. 6]); assert!(iter.next().is_none()); assert_eq!(iter.remainder(), &bits[6 ..]);
pub fn chunks_exact_mut(
&mut self,
chunk_size: usize
) -> ChunksExactMut<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does
not divide the length of the slice, then the last up to
chunk_size-1 bits will be omitted and can be retrieved from the
into_remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler may
optimize the resulting code better than in the case of chunks_mut.
See chunks_mut for a variant of this iterator that also returns the
remainder as a smaller chunk, and rchunks_exact_mut for the same
iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Lsb0>(); for (idx, chunk) in bits.chunks_exact_mut(3).enumerate() { chunk.set(idx, true); } assert_eq!(data, 0b00_010_001);
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the end of the slice.
The chunks are slices and do not overlap. If chunk_size does not
divide the length of the slice, then the last chunk will not have length
chunk_size.
See rchunks_exact for a variant of this iterator that returns chunks
of always exactly chunk_size bits, and chunks for the same
iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Lsb0>(); let mut iter = bits.rchunks(3); assert_eq!(iter.next().unwrap(), &bits[5 ..]); assert_eq!(iter.next().unwrap(), &bits[2 .. 5]); assert_eq!(iter.next().unwrap(), &bits[.. 2]); assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does
not divide the length of the slice, then the last chunk will not have
length chunk_size.
See rchunks_exact_mut for a variant of this iterator that returns
chunks of always exactly chunk_size bits, and chunks_mut for the
same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Lsb0>(); for (idx, chunk) in bits.rchunks_mut(3).enumerate() { chunk.set(2 - idx, true); } assert_eq!(data, 0b100_010_01);
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the end of the slice.
The chunks are slices and do not overlap. If chunk_size does not
divide the length of the slice, then the last up to chunk_size-1 bits
will be omitted and can be retrieved from the remainder function of
the iterator.
Due to each chunk having exactly chunk_size bits, the compiler can
often optimize the resulting code better than in the case of chunks.
See rchunks for a variant of this iterator that also returns the
remainder as a smaller chunk, and chunks_exact for the same iterator
but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Lsb0>(); let mut iter = bits.rchunks_exact(3); assert_eq!(iter.next().unwrap(), &bits[5 ..]); assert_eq!(iter.next().unwrap(), &bits[2 .. 5]); assert!(iter.next().is_none()); assert_eq!(iter.remainder(), &bits[.. 2]);
pub fn rchunks_exact_mut(
&mut self,
chunk_size: usize
) -> RChunksExactMut<'_, O, T>ⓘ[src]
Returns an iterator over chunk_size bits of the slice at a time,
starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does
not divide the length of the slice, then the last up to chunk_size-1
bits will be omitted and can be retrieved from the into_remainder
function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler can
often optimize the resulting code better than in the case of
chunks_mut.
See rchunks_mut for a variant of this iterator that also returns the
remainder as a smaller chunk, and chunks_exact_mut for the same
iterator but starting at the beginning of the slice.
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Lsb0>(); for (idx, chunk) in bits.rchunks_exact_mut(3).enumerate() { chunk.set(idx, true); } assert_eq!(data, 0b001_010_00);
pub fn split_at(&self, mid: usize) -> (&Self, &Self)[src]
Divides one slice into two at an index.
The first will contain all indices from [0, mid) (excluding the index
mid itself) and the second will contain all indices from [mid, len)
(excluding the index len itself).
Original
Panics
Panics if mid > len.
Examples
use bitvec::prelude::*; let data = 0xC3u8; let bits = data.view_bits::<LocalBits>(); let (left, right) = bits.split_at(0); assert!(left.is_empty()); assert_eq!(right, bits); let (left, right) = bits.split_at(2); assert_eq!(left, &bits[.. 2]); assert_eq!(right, &bits[2 ..]); let (left, right) = bits.split_at(8); assert_eq!(left, bits); assert!(right.is_empty());
pub fn split_at_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<O, T::Alias>, &mut BitSlice<O, T::Alias>)[src]
Divides one mutable slice into two at an index.
The first will contain all indices from [0, mid) (excluding the index
mid itself) and the second will contain all indices from [mid, len)
(excluding the index len itself).
Original
API Differences
Because the partition point mid is permitted to occur in the interior
of a memory element T, this method is required to mark the returned
slices as being to aliased memory. This marking ensures that writes to
the covered memory use the appropriate synchronization behavior of your
build to avoid data races – by default, this makes all writes atomic; on
builds with the atomic feature disabled, this uses Cells and
forbids the produced subslices from leaving the current thread.
See the BitStore documentation for more information.
Panics
Panics if mid > len.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); // scoped to restrict the lifetime of the borrows { let (left, right) = bits.split_at_mut(3); *left.get_mut(1).unwrap() = true; *right.get_mut(2).unwrap() = true; } assert_eq!(data, 0b010_00100);
pub fn split<F>(&self, pred: F) -> Split<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over subslices separated by bits that match pred.
The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0b01_001_000u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.split(|_pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[.. 1]); assert_eq!(iter.next().unwrap(), &bits[2 .. 4]); assert_eq!(iter.next().unwrap(), &bits[5 ..]); assert!(iter.next().is_none());
If the first bit is matched, an empty slice will be the first item returned by the iterator. Similarly, if the last element in the slice is matched, an empty slice will be the last item returned by the iterator:
use bitvec::prelude::*; let data = 1u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.split(|_pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[.. 7]); assert!(iter.next().unwrap().is_empty()); assert!(iter.next().is_none());
If two matched bits are directly adjacent, an empty slice will be present between them:
use bitvec::prelude::*; let data = 0b001_100_00u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.split(|pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[0 .. 2]); assert!(iter.next().unwrap().is_empty()); assert_eq!(iter.next().unwrap(), &bits[4 .. 8]); assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over mutable subslices separated by bits that match
pred. The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.split_mut(|_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_100_11);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over subslices separated by bits that match pred,
starting at the end of the slice and working backwards. The matched bit
is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0b0001_0000u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.rsplit(|_pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[4 ..]); assert_eq!(iter.next().unwrap(), &bits[.. 3]); assert!(iter.next().is_none());
As with split(), if the first or last bit is matched, an empty slice
will be the first (or last) item returned by the iterator.
use bitvec::prelude::*; let data = 0b1001_0001u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.rsplit(|_pos, bit| *bit); assert!(iter.next().unwrap().is_empty()); assert_eq!(iter.next().unwrap(), &bits[4 .. 7]); assert_eq!(iter.next().unwrap(), &bits[1 .. 3]); assert!(iter.next().unwrap().is_empty()); assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over mutable subslices separated by bits that match
pred, starting at the end of the slice and working backwards. The
matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.rsplit_mut(|_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_100_11);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over subslices separated by bits that match pred,
limited to returning at most n items. The matched bit is not contained
in the subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Msb0>(); for group in bits.splitn(2, |pos, _bit| pos % 3 == 2) { println!("{}", group.len()); } // 2 // 5
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over subslices separated by bits that match pred,
limited to returning at most n items. The matched element is not
contained in the subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.splitn_mut(2, |_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_100_10);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over subslices separated by bits that match pred
limited to returning at most n items. This starts at the end of the
slice and works backwards. The matched bit is not contained in the
subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Msb0>(); for group in bits.rsplitn(2, |pos, _bit| pos % 3 == 2) { println!("{}", group.len()); } // 2 // 5
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, O, T, F>ⓘ where
F: FnMut(usize, &bool) -> bool, [src]
Returns an iterator over subslices separated by bits that match pred
limited to returning at most n items. This starts at the end of the
slice and works backwards. The matched bit is not contained in the
subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.rsplitn_mut(2, |_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_000_11);
pub fn contains<O2, T2>(&self, x: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore, [src]
Returns true if the slice contains a subslice that matches the given
span.
Original
API Differences
This searches for a matching subslice (allowing different type
parameters) rather than for a specific bit. Searching for a contained
element with a given value is not as useful on a collection of bool.
Furthermore, BitSlice defines any and not_all, which are
optimized searchers for any true or false bit, respectively, in a
sequence.
Examples
use bitvec::prelude::*; let data = 0b0101_1010u8; let bits_msb = data.view_bits::<Msb0>(); let bits_lsb = data.view_bits::<Lsb0>(); assert!(bits_msb.contains(&bits_lsb[1 .. 5]));
This example uses a palindrome pattern to demonstrate that the slice being searched for does not need to have the same type parameters as the slice being searched.
pub fn starts_with<O2, T2>(&self, needle: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore, [src]
Returns true if needle is a prefix of the slice.
Original
Examples
use bitvec::prelude::*; let data = 0b0100_1011u8; let haystack = data.view_bits::<Msb0>(); let needle = &data.view_bits::<Lsb0>()[2 .. 5]; assert!(haystack.starts_with(&needle[.. 2])); assert!(haystack.starts_with(needle)); assert!(!haystack.starts_with(&haystack[2 .. 4]));
Always returns true if needle is an empty slice:
use bitvec::prelude::*; let empty = BitSlice::<LocalBits, usize>::empty(); assert!(0u8.view_bits::<LocalBits>().starts_with(empty)); assert!(empty.starts_with(empty));
pub fn ends_with<O2, T2>(&self, needle: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore, [src]
Returns true if needle is a suffix of the slice.
Original
Examples
use bitvec::prelude::*; let data = 0b0100_1011u8; let haystack = data.view_bits::<Lsb0>(); let needle = &data.view_bits::<Msb0>()[3 .. 6]; assert!(haystack.ends_with(&needle[1 ..])); assert!(haystack.ends_with(needle)); assert!(!haystack.ends_with(&haystack[2 .. 4]));
Always returns true if needle is an empty slice:
use bitvec::prelude::*; let empty = BitSlice::<LocalBits, usize>::empty(); assert!(0u8.view_bits::<LocalBits>().ends_with(empty)); assert!(empty.ends_with(empty));
pub fn rotate_left(&mut self, by: usize)[src]
Rotates the slice in-place such that the first by bits of the slice
move to the end while the last self.len() - by bits move to the front.
After calling rotate_left, the bit previously at index by will
become the first bit in the slice.
Original
Panics
This function will panic if by is greater than the length of the
slice. Note that by == self.len() does not panic and is a no-op
rotation.
Complexity
Takes linear (in self.len()) time.
Performance
While this is faster than the equivalent rotation on [bool], it is
slower than a handcrafted partial-element rotation on [T]. Because of
the support for custom orderings, and the lack of specialization, this
method can only accelerate by reducing the number of loop iterations
performed on the slice body, and cannot accelerate by using shift-mask
instructions to move multiple bits in one operation.
Examples
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits.rotate_left(2); assert_eq!(data, 0xC3);
Rotating a subslice:
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 .. 5].rotate_left(1); assert_eq!(data, 0b1_1101_000);
pub fn rotate_right(&mut self, by: usize)[src]
Rotates the slice in-place such that the first self.len() - by bits of
the slice move to the end while the last by bits move to the front.
After calling rotate_right, the bit previously at index self.len() - by will become the first bit in the slice.
Original
Panics
This function will panic if by is greater than the length of the
slice. Note that by == self.len() does not panic and is a no-op
rotation.
Complexity
Takes linear (in self.len()) time.
Performance
While this is faster than the equivalent rotation on [bool], it is
slower than a handcrafted partial-element rotation on [T]. Because of
the support for custom orderings, and the lack of specialization, this
method can only accelerate by reducing the number of loop iterations
performed on the slice body, and cannot accelerate by using shift-mask
instructions to move multiple bits in one operation.
Examples
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits.rotate_right(2); assert_eq!(data, 0x3C);
Rotate a subslice:
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 .. 5].rotate_right(1); assert_eq!(data, 0b1_0111_000);
pub fn clone_from_bitslice<O2, T2>(&mut self, src: &BitSlice<O2, T2>) where
O2: BitOrder,
T2: BitStore, [src]
Copies the bits from src into self.
The length of src must be the same as self.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
Panics
This function will panic if the two slices have different lengths.
Examples
Cloning two bits from a slice into another:
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); let src = 0x0Fu16.view_bits::<Lsb0>(); bits[.. 2].clone_from_bitslice(&src[2 .. 4]); assert_eq!(data, 0xC0);
Rust enforces that there can only be one mutable reference with no
immutable references to a particular piece of data in a particular
scope. Because of this, attempting to use clone_from_bitslice on a
single slice will result in a compile failure:
use bitvec::prelude::*; let mut data = 3u8; let bits = data.view_bits_mut::<Msb0>(); bits[.. 2].clone_from_bitslice(&bits[6 ..]);
To work around this, we can use split_at_mut to create two distinct
sub-slices from a slice:
use bitvec::prelude::*; let mut data = 3u8; let bits = data.view_bits_mut::<Msb0>(); let (head, tail) = bits.split_at_mut(4); head.clone_from_bitslice(tail); assert_eq!(data, 0x33);
pub fn copy_from_bitslice(&mut self, src: &Self)[src]
Copies all bits from src into self.
The length of src must be the same as self.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
This is unable to guarantee a strictly faster copy behavior than
clone_from_bitslice. In the future, the implementation may
specialize, as the language allows.
Panics
This function will panic if the two slices have different lengths.
Examples
Copying two bits from a slice into another:
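A minimal sketch, mirroring the clone_from_bitslice example above (note that copy_from_bitslice requires both slices to share the same type parameters):
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
let src_data = 0x0Fu8;
let src = src_data.view_bits::<Msb0>();
// both slices are `BitSlice<Msb0, u8>`, so the direct copy is allowed
bits[.. 2].copy_from_bitslice(&src[6 .. 8]);
assert_eq!(data, 0xC0);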
pub fn copy_within<R>(&mut self, src: R, dest: usize) where
R: RangeBounds<usize>, [src]
Copies bits from one part of the slice to another part of itself.
src is the range within self to copy from. dest is the starting
index of the range within self to copy to, which will have the same
length as src. The two ranges may overlap. The ends of the two ranges
must be less than or equal to self.len().
Original
Panics
This function will panic if either range exceeds the end of the slice,
or if the end of src is before the start.
Examples
Copying three bits within a slice:
use bitvec::prelude::*; let mut data = 0x07u8; let bits = data.view_bits_mut::<Msb0>(); bits.copy_within(5 .., 0); assert_eq!(data, 0xE7);
pub fn swap_with_bitslice<O2, T2>(&mut self, other: &mut BitSlice<O2, T2>) where
O2: BitOrder,
T2: BitStore, [src]
Swaps all bits in self with those in other.
The length of other must be the same as self.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
Panics
This function will panic if the two slices have different lengths.
Examples
use bitvec::prelude::*; let mut one = [0xA5u8, 0x69]; let mut two = 0x1234u16; let one_bits = one.view_bits_mut::<Msb0>(); let two_bits = two.view_bits_mut::<Lsb0>(); one_bits.swap_with_bitslice(two_bits); assert_eq!(one, [0x2C, 0x48]); assert_eq!(two, 0x96A5);
pub unsafe fn align_to<U>(&self) -> (&Self, &BitSlice<O, U>, &Self) where
U: BitStore, [src]
Transmute the bitslice to a bitslice of another type, ensuring alignment of the types is maintained.
This method splits the bitslice into three distinct bitslices: prefix, correctly aligned middle bitslice of a new type, and the suffix bitslice. The method may make the middle bitslice the greatest length possible for a given type and input bitslice, but only your algorithm's performance should depend on that, not its correctness. It is permissible for all of the input data to be returned as the prefix or suffix bitslice.
Original
API Differences
Type U is required to have the same type family as type T.
Whatever T is of the fundamental integers, atomics, or Cell
wrappers, U must be a different width in the same family. Changing the
type family with this method is unsound and strictly forbidden.
Unfortunately, it cannot be guaranteed by this function, so you are
required to abide by this limitation.
Safety
This method is essentially a transmute with respect to the elements in
the returned middle bitslice, so all the usual caveats pertaining to
transmute::<T, U> also apply here.
Examples
Basic usage:
use bitvec::prelude::*;

unsafe {
    let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
    let bits = bytes.view_bits::<LocalBits>();
    let (prefix, shorts, suffix) = bits.align_to::<u16>();
    match prefix.len() {
        0 => {
            assert_eq!(shorts, bits[.. 48]);
            assert_eq!(suffix, bits[48 ..]);
        },
        8 => {
            assert_eq!(prefix, bits[.. 8]);
            assert_eq!(shorts, bits[8 ..]);
        },
        _ => unreachable!("This case will not occur"),
    }
}
pub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut Self, &mut BitSlice<O, U>, &mut Self) where
U: BitStore, [src]
Transmute the bitslice to a bitslice of another type, ensuring alignment of the types is maintained.
This method splits the bitslice into three distinct bitslices: prefix, correctly aligned middle bitslice of a new type, and the suffix bitslice. The method may make the middle bitslice the greatest length possible for a given type and input bitslice, but only your algorithm's performance should depend on that, not its correctness. It is permissible for all of the input data to be returned as the prefix or suffix bitslice.
Original
API Differences
Type U is required to have the same type family as type T.
Whatever T is of the fundamental integers, atomics, or Cell
wrappers, U must be a different width in the same family. Changing the
type family with this method is unsound and strictly forbidden.
Unfortunately, it cannot be guaranteed by this function, so you are
required to abide by this limitation.
Safety
This method is essentially a transmute with respect to the elements in
the returned middle bitslice, so all the usual caveats pertaining to
transmute::<T, U> also apply here.
Examples
Basic usage:
use bitvec::prelude::*; unsafe { let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7]; let bits = bytes.view_bits_mut::<LocalBits>(); let (prefix, shorts, suffix) = bits.align_to_mut::<u16>(); // same access and behavior as in `align_to` }
impl<O, T> BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
These functions only exist when BitVec does.
pub fn to_bitvec(&self) -> BitVec<O, T>ⓘ[src]
Copies self into a new BitVec.
Original
Examples
use bitvec::prelude::*; let bits = bits![0, 1, 0, 1]; let bv = bits.to_bitvec(); assert_eq!(bits, bv);
pub fn repeat(&self, n: usize) -> BitVec<O, T>ⓘ where
O: BitOrder,
T: BitStore, [src]
Creates a vector by repeating a slice n times.
Original
Panics
This function will panic if the capacity would overflow.
Examples
Basic usage:
use bitvec::prelude::*; assert_eq!(bits![0, 1].repeat(3), bits![0, 1, 0, 1, 0, 1]);
A panic upon overflow:
use bitvec::prelude::*; // this will panic at runtime bits![0, 1].repeat(BitSlice::<LocalBits, usize>::MAX_BITS);
impl<O, T> BitSlice<O, T> where
O: BitOrder,
T: BitStore + BitMemory, [src]
Constructors are limited to integers only, and not their Cells or atomics.
pub fn from_element(elem: &T) -> &Self[src]
Constructs a shared &BitSlice reference over a shared element.
The BitView trait, implemented on all T elements, provides a
method .view_bits::<O>() which delegates to this function and may be
more convenient for you to write.
Parameters
elem: A shared reference to a memory element.
Returns
A shared &BitSlice over the elem element.
Examples
use bitvec::prelude::*; let elem = 0u8; let bits = BitSlice::<LocalBits, _>::from_element(&elem); assert_eq!(bits.len(), 8);
pub fn from_element_mut(elem: &mut T) -> &mut Self[src]
Constructs an exclusive &mut BitSlice reference over an element.
The BitView trait, implemented on all T elements, provides a
method .view_bits_mut::<O>() which delegates to this function and
may be more convenient for you to write.
Parameters
elem: An exclusive reference to a memory element.
Returns
An exclusive &mut BitSlice over the elem element.
Note that the original elem reference will be inaccessible for the
duration of the returned slice handle’s lifetime.
Examples
use bitvec::prelude::*; let mut elem = 0u16; let bits = BitSlice::<Msb0, _>::from_element_mut(&mut elem); bits.set(15, true); assert!(bits.get(15).unwrap()); assert_eq!(elem, 1);
pub fn from_slice(slice: &[T]) -> Option<&Self>[src]
Constructs a shared &BitSlice reference over a shared element slice.
The BitView trait, implemented on all [T] slices, provides a
method .view_bits::<O>() that is equivalent to this function and may
be more convenient for you to write.
Parameters
slice: A shared reference over a sequence of memory elements.
Returns
If slice has MAX_ELTS or more elements, this returns None. Otherwise, it
returns a shared &BitSlice over the slice
elements.
Conditions
The produced &BitSlice handle always begins at the zeroth bit.
Examples
use bitvec::prelude::*; let slice = &[0u8, 1]; let bits = BitSlice::<Msb0, _>::from_slice(slice).unwrap(); assert!(bits[15]);
An example showing this function failing would require a slice exceeding
!0usize >> 3 bytes in size, which is infeasible to produce.
pub unsafe fn from_slice_unchecked(slice: &[T]) -> &Self[src]
Converts a slice reference into a BitSlice reference without checking
that its size can be safely used.
Safety
If the slice length is too long, then it will be capped at
MAX_BITS. You are responsible for ensuring that the input slice is
not unduly truncated.
Prefer from_slice.
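For illustration, a minimal sketch of a use that trivially satisfies the length condition:
use bitvec::prelude::*;

let slice = &[0u8, 1];
// Safety: two elements are far below the length cap, so nothing is truncated.
let bits = unsafe { BitSlice::<Msb0, _>::from_slice_unchecked(slice) };
assert_eq!(bits.len(), 16);
assert!(bits[15]);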
pub fn from_slice_mut(slice: &mut [T]) -> Option<&mut Self>[src]
Constructs an exclusive &mut BitSlice reference over a slice.
The BitView trait, implemented on all [T] slices, provides a
method .view_bits_mut::<O>() that is equivalent to this function and
may be more convenient for you to write.
Parameters
slice: An exclusive reference over a sequence of memory elements.
Returns
An exclusive &mut BitSlice over the slice elements.
Note that the original slice reference will be inaccessible for the
duration of the returned slice handle’s lifetime.
Panics
This panics if slice has MAX_ELTS or more elements.
Conditions
The produced &mut BitSlice handle always begins at the zeroth bit of
the zeroth element in slice.
Examples
use bitvec::prelude::*; let mut slice = [0u8; 2]; let bits = BitSlice::<Lsb0, _>::from_slice_mut(&mut slice).unwrap(); assert!(!bits[0]); bits.set(0, true); assert!(bits[0]); assert_eq!(slice[0], 1);
This example attempts to construct a &mut BitSlice handle from a slice
that is too large to index. Either the vec! allocation will fail, or
the bit-slice constructor will fail.
use bitvec::prelude::*; let mut data = vec![0usize; BitSlice::<LocalBits, usize>::MAX_ELTS]; let bits = BitSlice::<LocalBits, _>::from_slice_mut(&mut data[..]).unwrap();
pub unsafe fn from_slice_unchecked_mut(slice: &mut [T]) -> &mut Self[src]
Converts a slice reference into a BitSlice reference without checking
that its size can be safely used.
Safety
If the slice length is too long, then it will be capped at
MAX_BITS. You are responsible for ensuring that the input slice is
not unduly truncated.
Prefer from_slice_mut.
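A short sketch of the mutable counterpart, again with a slice far below the size limit:
use bitvec::prelude::*;

let mut slice = [0u8; 2];
// Two elements are comfortably within `MAX_ELTS`, so the missing check is harmless.
let bits = unsafe { BitSlice::<Lsb0, _>::from_slice_unchecked_mut(&mut slice) };
bits.set(0, true);
assert_eq!(slice[0], 1);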
impl<O, T> BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
Methods specific to BitSlice<_, T>, and not present on [T].
pub fn empty<'a>() -> &'a Self[src]
Produces the empty slice. This is equivalent to &[] for ordinary
slices.
Examples
use bitvec::prelude::*; let bits: &BitSlice = BitSlice::empty(); assert!(bits.is_empty());
pub fn empty_mut<'a>() -> &'a mut Self[src]
Produces the empty mutable slice. This is equivalent to &mut [] for
ordinary slices.
Examples
use bitvec::prelude::*; let bits: &mut BitSlice = BitSlice::empty_mut(); assert!(bits.is_empty());
pub fn set(&mut self, index: usize, value: bool)[src]
Sets the bit value at the given position.
Parameters
&mut self
index: The bit index to set. It must be in the range 0 .. self.len().
value: The value to be set: true for 1 and false for 0.
Effects
If index is valid, then the bit to which it refers is set to value.
Panics
This method panics if index is outside the slice domain.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); assert!(!bits.get(7).unwrap()); bits.set(7, true); assert!(bits.get(7).unwrap()); assert_eq!(data, 1);
This example panics when it attempts to set a bit that is out of bounds.
use bitvec::prelude::*; let bits = BitSlice::<LocalBits, usize>::empty_mut(); bits.set(0, false);
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)[src]
Sets a bit at an index, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see set.
Parameters
&mut self
index: The bit index to set. It must be in the range 0 .. self.len(). It will not be checked.
value: The value to which the bit at index will be set.
Effects
The bit at index is set to value.
Safety
This method is not safe. It performs raw pointer arithmetic to seek
from the start of the slice to the requested index, and set the bit
there. It does not inspect the length of self, and it is free to
perform out-of-bounds memory write access.
Use this method only when you have already performed the bounds check, and can guarantee that the call occurs with a safely in-bounds index.
Examples
This example uses a bit slice of length 2, and demonstrates out-of-bounds access to the last bit in the element.
use bitvec::prelude::*; let mut data = 0u8; let bits = &mut data.view_bits_mut::<Msb0>()[2 .. 4]; assert_eq!(bits.len(), 2); unsafe { bits.set_unchecked(5, true); } assert_eq!(data, 1);
pub fn all(&self) -> bool[src]
Tests if all bits in the slice domain are set (logical ∧).
Truth Table
0 0 => 0
0 1 => 0
1 0 => 0
1 1 => 1
Parameters
&self
Returns
Whether all bits in the slice domain are set. The empty slice returns
true.
Examples
use bitvec::prelude::*; let bits = 0xFDu8.view_bits::<Msb0>(); assert!(bits[.. 4].all()); assert!(!bits[4 ..].all());
pub fn any(&self) -> bool[src]
Tests if any bit in the slice is set (logical ∨).
Truth Table
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 1
Parameters
&self
Returns
Whether any bit in the slice domain is set. The empty slice returns
false.
Examples
use bitvec::prelude::*; let bits = 0x40u8.view_bits::<Msb0>(); assert!(bits[.. 4].any()); assert!(!bits[4 ..].any());
pub fn not_all(&self) -> bool[src]
Tests if any bit in the slice is unset (logical ¬∧).
Truth Table
0 0 => 1
0 1 => 1
1 0 => 1
1 1 => 0
Parameters
&self
Returns
Whether any bit in the slice domain is unset.
Examples
use bitvec::prelude::*; let bits = 0xFDu8.view_bits::<Msb0>(); assert!(!bits[.. 4].not_all()); assert!(bits[4 ..].not_all());
pub fn not_any(&self) -> bool[src]
Tests if all bits in the slice are unset (logical ¬∨).
Truth Table
0 0 => 1
0 1 => 0
1 0 => 0
1 1 => 0
Parameters
&self
Returns
Whether all bits in the slice domain are unset.
Examples
use bitvec::prelude::*; let bits = 0x40u8.view_bits::<Msb0>(); assert!(!bits[.. 4].not_any()); assert!(bits[4 ..].not_any());
pub fn some(&self) -> bool[src]
Tests whether the slice has some, but not all, bits set and some, but not all, bits unset.
This is false if either .all or .not_any is true.
Truth Table
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 0
Parameters
&self
Returns
Whether the slice domain has mixed content. The empty slice returns
false.
Examples
use bitvec::prelude::*; let data = 0b111_000_10u8; let bits = data.view_bits::<Msb0>(); assert!(!bits[.. 3].some()); assert!(!bits[3 .. 6].some()); assert!(bits.some());
pub fn count_ones(&self) -> usize[src]
Returns the number of ones in the memory region backing self.
Parameters
&self
Returns
The number of high bits in the slice domain.
Examples
Basic usage:
use bitvec::prelude::*; let data = 0xF0u8; let bits = data.view_bits::<Msb0>(); assert_eq!(bits[.. 4].count_ones(), 4); assert_eq!(bits[4 ..].count_ones(), 0);
pub fn count_zeros(&self) -> usize[src]
Returns the number of zeros in the memory region backing self.
Parameters
&self
Returns
The number of low bits in the slice domain.
Examples
Basic usage:
use bitvec::prelude::*; let data = 0xF0u8; let bits = data.view_bits::<Msb0>(); assert_eq!(bits[.. 4].count_zeros(), 0); assert_eq!(bits[4 ..].count_zeros(), 4);
pub fn set_all(&mut self, value: bool)[src]
Sets all bits in the slice to a value.
Parameters
&mut self
value: The bit value to which all bits in the slice will be set.
Examples
use bitvec::prelude::*; let mut src = 0u8; let bits = src.view_bits_mut::<Msb0>(); bits[2 .. 6].set_all(true); assert_eq!(bits.as_slice(), &[0b0011_1100]); bits[3 .. 5].set_all(false); assert_eq!(bits.as_slice(), &[0b0010_0100]); bits[.. 1].set_all(true); assert_eq!(bits.as_slice(), &[0b1010_0100]);
pub fn for_each<F>(&mut self, func: F) where
F: FnMut(usize, bool) -> bool, [src]
Applies a function to each bit in the slice.
BitSlice cannot implement IndexMut, as it cannot manifest &mut bool references, and the BitMut proxy reference has an unavoidable
overhead. This method bypasses both problems, by applying a function to
each pair of index and value in the slice, without constructing a proxy
reference.
Parameters
&mut self
func: A function which receives two arguments, index: usize and value: bool, and returns a bool.
Effects
For each index in the slice, the result of invoking func with the
index number and current bit value is written into the slice.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); bits.for_each(|idx, _bit| idx % 3 == 0); assert_eq!(data, 0b100_100_10);
pub fn as_slice(&self) -> &[T][src]
Accesses the total backing storage of the BitSlice, as a slice of its
elements.
This method produces a slice over all the memory elements it touches, using the current storage parameter. This is safe to do, as any events that would create an aliasing view into the elements covered by the returned slice will also have caused the slice to use its alias-aware type.
Parameters
&self
Returns
A view of the entire memory region this slice covers, including the edge elements.
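A small illustration: even when the bit-slice covers only part of its edge elements, .as_slice still exposes every element it touches.
use bitvec::prelude::*;

let data = [0x0Fu8, 0xF0];
let bits = &data.view_bits::<Msb0>()[4 .. 12];
assert_eq!(bits.len(), 8);
// Only bits 4 .. 12 are in the slice, but both touched elements are visible.
assert_eq!(bits.as_slice(), &[0x0F, 0xF0]);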
pub fn as_raw_slice(&self) -> &[T::Mem][src]
Views the wholly-filled elements of the BitSlice.
This will not include partially-owned edge elements, as they may be
aliased by other handles. To gain access to all elements that the
BitSlice region covers, use one of the following:
.as_slice produces a shared slice over all elements, marked as aliased as appropriate.
.domain produces a view describing each component of the region, marking only the contended edges as aliased and the uncontended interior as unaliased.
Parameters
&self
Returns
A slice of all the wholly-filled elements in the BitSlice backing
storage.
Examples
use bitvec::prelude::*; let data = [1u8, 66]; let bits = data.view_bits::<Msb0>(); let accum = bits .as_raw_slice() .iter() .copied() .map(u8::count_ones) .sum::<u32>(); assert_eq!(accum, 3);
pub fn as_raw_slice_mut(&mut self) -> &mut [T::Mem][src]
Views the wholly-filled elements of the BitSlice.
This will not include partially-owned edge elements, as they may be
aliased by other handles. To gain access to all elements that the
BitSlice region covers, use one of the following:
.as_aliased_slice produces a shared slice over all elements, marked as aliased to allow for the possibility of mutation.
.domain_mut produces a view describing each component of the region, marking only the contended edges as aliased and the uncontended interior as unaliased.
Parameters
&mut self
Returns
A mutable slice of all the wholly-filled elements in the BitSlice
backing storage.
Examples
use bitvec::prelude::*; let mut data = [1u8, 64]; let bits = data.view_bits_mut::<Msb0>(); for elt in bits.as_raw_slice_mut() { *elt |= 2; } assert_eq!(&[3, 66], bits.as_slice());
pub fn bit_domain(&self) -> BitDomain<'_, O, T>[src]
Splits the slice into the logical components of its memory domain.
This produces a set of read-only subslices, marking as much as possible
as affirmatively lacking any write-capable view (T::NoAlias). The
unaliased view is able to safely perform unsynchronized reads from
memory without causing undefined behavior, as the type system is able to
statically prove that no other write-capable views exist.
Parameters
&self
Returns
A BitDomain structure representing the logical components of the
memory region.
Safety Exception
The following snippet describes a means of constructing a T::NoAlias
view into memory that is, in fact, aliased:
use bitvec::prelude::*; use core::sync::atomic::AtomicU8; type Bs<T> = BitSlice<LocalBits, T>; let data = [AtomicU8::new(0), AtomicU8::new(0), AtomicU8::new(0)]; let bits: &Bs<AtomicU8> = data.view_bits::<LocalBits>(); let subslice: &Bs<AtomicU8> = &bits[4 .. 20]; let (_, noalias, _): (_, &Bs<u8>, _) = subslice.bit_domain().region().unwrap();
The noalias reference, which has memory type u8, assumes that it can
act as an &u8 reference: unsynchronized loads are permitted, as no
handle exists which is capable of modifying the middle bit of data.
This means that LLVM is permitted to issue loads from memory wherever
it wants in the block during which noalias is live, as all loads are
equivalent.
Use of the bits or subslice handles, which are still live for the
lifetime of noalias, to issue .set_aliased calls into the middle
element introduces undefined behavior. bitvec permits safe code to
introduce this undefined behavior solely because it requires deliberate
opt-in – you must start from atomic data; this cannot occur when data
is non-atomic – and use of the shared-mutation facility simultaneously
with the unaliasing view.
The .set_aliased method is speculative, and will be marked as
unsafe or removed at any suspicion that its presence in the library
has any costs.
Examples
This method can be used to accelerate reads from a slice that is marked as aliased.
use bitvec::prelude::*;
type Bs<T> = BitSlice<LocalBits, T>;
let mut data = [0u8; 3];
let bits = data.view_bits_mut::<LocalBits>();
let (a, b): (
    &mut Bs<<u8 as BitStore>::Alias>,
    &mut Bs<<u8 as BitStore>::Alias>,
) = bits.split_at_mut(4);
let (partial, full, _): (
    &Bs<<u8 as BitStore>::Alias>,
    &Bs<<u8 as BitStore>::Mem>,
    _,
) = b.bit_domain().region().unwrap();
read_from(partial); // uses alias-aware reads
read_from(full); // uses ordinary reads
pub fn bit_domain_mut(&mut self) -> BitDomainMut<'_, O, T>[src]
Splits the slice into the logical components of its memory domain.
This produces a set of mutable subslices, marking as much as possible as
affirmatively lacking any other view (T::Mem). The bare view is able
to safely perform unsynchronized reads from and writes to memory without
causing undefined behavior, as the type system is able to statically
prove that no other views exist.
Why This Is More Sound Than .bit_domain
The &mut exclusion rule makes it impossible to construct two
references over the same memory where one of them is marked &mut. This
makes it impossible to hold a live reference to memory separately from
any references produced from this method. For the duration of all
references produced by this method, all ancestor references used to
reach this method call are either suspended or dead, and the compiler
will not allow you to use them.
As such, this method cannot introduce undefined behavior where a reference incorrectly believes that the referent memory region is immutable.
pub fn domain(&self) -> Domain<'_, T>[src]
Splits the slice into immutable references to its underlying memory components.
Unlike .bit_domain and .bit_domain_mut, this does not return
smaller BitSlice handles but rather appropriately-marked references to
the underlying memory elements.
The aliased references allow mutation of these elements. You are
required to not use mutating methods on these references at all. This
function is not marked unsafe, but this is a contract you must uphold.
Use .domain_mut to modify the underlying elements.
It is not currently possible to forbid mutation through these references. This may change in the future.
Safety Exception
As with .bit_domain, this produces unsynchronized immutable
references over the fully-populated interior elements. If this view is
constructed from a BitSlice handle over atomic memory, then it will
remove the atomic access behavior for the interior elements. This by
itself is safe, as long as no contemporaneous atomic writes to that
memory can occur. You must not retain and use an atomic reference to the
memory region marked as NoAlias for the duration of this view’s
existence.
Parameters
&self
Returns
A read-only descriptor of the memory elements backing *self.
pub fn domain_mut(&mut self) -> DomainMut<'_, T>[src]
Splits the slice into mutable references to its underlying memory elements.
Like .domain, this returns appropriately-marked references to the
underlying memory elements. These references are all writable.
The aliased edge references permit modifying memory beyond their bit
marker. You are required to only mutate the region of these edge
elements that you currently govern. This function is not marked
unsafe, but this is a contract you must uphold.
It is not currently possible to forbid out-of-bounds mutation through these references. This may change in the future.
Parameters
&mut self
Returns
A descriptor of the memory elements underneath *self, permitting
mutation.
pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&Self, &Self)[src]
Splits a slice at some mid-point, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see split_at.
Parameters
&self
mid: The index at which to split the slice. This must be in the range 0 .. self.len().
Returns
.0: &self[.. mid]
.1: &self[mid ..]
Safety
This function is not safe. It performs raw pointer arithmetic to
construct two new references. If mid is out of bounds, then the first
slice will be too large, and the second will be catastrophically
incorrect. As both are references to invalid memory, they are undefined
to construct, and may not ever be used.
Examples
use bitvec::prelude::*; let data = 0x0180u16; let bits = data.view_bits::<Msb0>(); let (one, two) = unsafe { bits.split_at_unchecked(8) }; assert!(one[7]); assert!(two[0]);
pub unsafe fn split_at_unchecked_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<O, T::Alias>, &mut BitSlice<O, T::Alias>)[src]
Splits a mutable slice at some mid-point, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see split_at_mut.
Parameters
&mut self
mid: The index at which to split the slice. This must be in the range 0 .. self.len().
Returns
.0: &mut self[.. mid]
.1: &mut self[mid ..]
Safety
This function is not safe. It performs raw pointer arithmetic to
construct two new references. If mid is out of bounds, then the first
slice will be too large, and the second will be catastrophically
incorrect. As both are references to invalid memory, they are undefined
to construct, and may not ever be used.
Examples
use bitvec::prelude::*; let mut data = 0u16; let bits = data.view_bits_mut::<Msb0>(); let (one, two) = unsafe { bits.split_at_unchecked_mut(8) }; one.set(7, true); two.set(0, true); assert_eq!(data, 0x0180u16);
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)[src]
Swaps the bits at two indices without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see swap.
Parameters
&mut self
a: One index to swap.
b: The other index to swap.
Effects
The bit at index a is written into index b, and the bit at index b
is written into a.
Safety
Both a and b must be less than self.len(). Indices greater than
the length will cause out-of-bounds memory access, which can lead to
memory unsafety and a program crash.
Examples
use bitvec::prelude::*; let mut data = 8u8; let bits = data.view_bits_mut::<Msb0>(); unsafe { bits.swap_unchecked(0, 4); } assert_eq!(data, 128);
pub unsafe fn copy_unchecked(&mut self, from: usize, to: usize)[src]
Copies a bit from one index to another without checking boundary conditions.
Parameters
&mut self
from: The index whose bit is to be copied.
to: The index into which the copied bit is written.
Effects
The bit at from is written into to.
Safety
Both from and to must be less than self.len(), in order for
self to legally read from and write to them, respectively.
If self had been split from a larger slice, reading from from or
writing to to may not necessarily cause a memory-safety violation in
the Rust model, due to the aliasing system bitvec employs. However,
writing outside the bounds of a slice reference is always a logical
error, as it causes changes observable by another reference handle.
Examples
use bitvec::prelude::*; let mut data = 1u8; let bits = data.view_bits_mut::<Lsb0>(); unsafe { bits.copy_unchecked(0, 2) }; assert_eq!(data, 5);
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize) where
R: RangeBounds<usize>, [src]
Copies bits from one part of the slice to another part of itself.
src is the range within self to copy from. dest is the starting
index of the range within self to copy to, which will have the same
length as src. The two ranges may overlap. The ends of the two ranges
must be less than or equal to self.len().
Effects
self[src] is copied to self[dest .. dest + src.end() - src.start()].
Panics
This function will panic if either range exceeds the end of the slice,
or if the end of src is before the start.
Safety
Both the src range and the target range dest .. dest + src.len()
must not exceed the self.len() slice range.
Examples
use bitvec::prelude::*; let mut data = 0x07u8; let bits = data.view_bits_mut::<Msb0>(); unsafe { bits.copy_within_unchecked(5 .., 0); } assert_eq!(data, 0xE7);
impl<O, T> BitSlice<O, T> where
O: BitOrder,
T: BitStore + Radium<<T as BitStore>::Mem>, [src]
Methods available only when T allows shared mutability.
pub fn split_at_aliased_mut(&mut self, mid: usize) -> (&mut Self, &mut Self)[src]
Splits a mutable slice at some mid-point.
This method has the same behavior as split_at_mut, except that it
does not apply an aliasing marker to the partitioned subslices.
Safety
Because this method is defined only on BitSlices whose T type is
alias-safe, the subslices do not need to be additionally marked.
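A sketch of one way to reach this method from plain integer storage: the halves produced by an ordinary split_at_mut already carry the alias marker, and, assuming that alias type implements Radium (as the default atomic and Cell alias types do), they can be split again without further marking.
use bitvec::prelude::*;

let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
// `split_at_mut` taints both halves with the aliasing marker.
let (left, _right) = bits.split_at_mut(4);
// The already-aliased half can be split again without another round of marking.
let (a, b) = left.split_at_aliased_mut(2);
a.set(0, true);
b.set(0, true);
assert_eq!(data, 0b1010_0000);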
pub unsafe fn split_at_aliased_unchecked_mut(
&mut self,
mid: usize
) -> (&mut Self, &mut Self)[src]
Splits a mutable slice at some mid-point, without checking boundary conditions.
This method has the same behavior as split_at_unchecked_mut, except
that it does not apply an aliasing marker to the partitioned subslices.
Safety
See split_at_unchecked_mut for safety requirements.
Because this method is defined only on BitSlices whose T type is
alias-safe, the subslices do not need to be additionally marked.
impl<O, T> BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
Miscellaneous information.
pub const MAX_BITS: usize[src]
The inclusive maximum length of a BitSlice<_, T>.
As BitSlice is zero-indexed, the largest possible index is one less
than this value.
| CPU word width | Value |
|---|---|
| 32 bits | 0x1fff_ffff |
| 64 bits | 0x1fff_ffff_ffff_ffff |
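Both rows of the table reduce to the same expression, so the limit can be checked, illustratively, without caring about the word width or the storage type:
use bitvec::prelude::*;

// The limit depends only on the CPU word width, not on the storage type.
assert_eq!(
    BitSlice::<LocalBits, u8>::MAX_BITS,
    BitSlice::<LocalBits, usize>::MAX_BITS,
);
// On both 32- and 64-bit targets, the tabled value is `!0usize >> 3`.
assert_eq!(BitSlice::<LocalBits, usize>::MAX_BITS, !0usize >> 3);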
pub const MAX_ELTS: usize[src]
The inclusive maximum length that a slice [T] can be for
BitSlice<_, T> to cover it.
A BitSlice<_, T> that begins in the interior of an element and
contains the maximum number of bits will extend one element past the
cutoff that would occur if the slice began at the zeroth bit. Such a
slice must be manually constructed, but will not otherwise fail.
| Type Bits | Max Elements (32-bit) | Max Elements (64-bit) |
|---|---|---|
| 8 | 0x0400_0001 | 0x0400_0000_0000_0001 |
| 16 | 0x0200_0001 | 0x0200_0000_0000_0001 |
| 32 | 0x0100_0001 | 0x0100_0000_0000_0001 |
| 64 | 0x0080_0001 | 0x0080_0000_0000_0001 |
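The u8 rows of the table follow the relationship described above: the bit capacity rounded up to whole elements, plus one extra element for a slice that does not begin at bit zero. A small check of that reading:
use bitvec::prelude::*;

let max_bits = BitSlice::<LocalBits, u8>::MAX_BITS;
// Round the bit capacity up to whole `u8` elements, then allow one extra
// element for a slice that starts in the interior of its first element.
assert_eq!(BitSlice::<LocalBits, u8>::MAX_ELTS, (max_bits + 7) / 8 + 1);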
Trait Implementations
impl<O, V> AsMut<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized, [src]
O: BitOrder,
V: BitView + Sized,
impl<O, T> AsMut<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> AsMut<BitSlice<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, V> AsRef<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized, [src]
O: BitOrder,
V: BitView + Sized,
impl<O, T, '_> AsRef<BitSlice<O, T>> for Iter<'_, O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> AsRef<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T, '_> AsRef<BitSlice<O, T>> for Drain<'_, O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> AsRef<BitSlice<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> Binary for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
Render the contents of a BitSlice in a numeric format.
These implementations render the bits of memory contained in a
BitSlice as one of the three numeric bases that the Rust format
system supports:
Binary renders each bit individually as 0 or 1,
Octal renders clusters of three bits as the numbers 0 through 7, and
UpperHex and LowerHex render clusters of four bits as the numbers 0 through 9 and A through F.
The formatters produce a “word” for each element T of memory. The
chunked formats (octal and hexadecimal) operate somewhat peculiarly:
they show the semantic value of the memory, as interpreted by the
ordering parameter’s implementation rather than the raw value of
memory you might observe with a debugger. In order to ease the
process of expanding numbers back into bits, each digit is grouped to
the right edge of the memory element. So, for example, the byte
0xFF would be rendered as 0o377 rather than 0o773.
Rendered words are chunked by memory element, rather than into the cleanest possible number of digits, in order to aid visualization of the slice’s place in memory.
impl<O, T, Rhs> BitAndAssign<Rhs> for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
Rhs: IntoIterator<Item = bool>, [src]
O: BitOrder,
T: BitStore,
Rhs: IntoIterator<Item = bool>,
fn bitand_assign(&mut self, rhs: Rhs)[src]
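The right-hand side of these assignment operators is any IntoIterator<Item = bool>, not another BitSlice; BitOrAssign and BitXorAssign below take the same shape. A small sketch with an equal-length iterator, assuming each bit is combined with the corresponding item:
use bitvec::prelude::*;

let mut data = 0b1010_1100u8;
let bits = data.view_bits_mut::<Msb0>();
// AND each bit with the matching item from the iterator.
*bits &= [true, true, false, false, true, false, true, false].iter().copied();
assert_eq!(data, 0b1000_1000);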
impl<T> BitField for BitSlice<Lsb0, T> where
T: BitStore, [src]
T: BitStore,
fn load_le<M>(&self) -> M where
M: BitMemory, [src]
M: BitMemory,
fn load_be<M>(&self) -> M where
M: BitMemory, [src]
M: BitMemory,
fn store_le<M>(&mut self, value: M) where
M: BitMemory, [src]
M: BitMemory,
fn store_be<M>(&mut self, value: M) where
M: BitMemory, [src]
M: BitMemory,
fn load<M>(&self) -> M where
M: BitMemory, [src]
M: BitMemory,
fn store<M>(&mut self, value: M) where
M: BitMemory, [src]
M: BitMemory,
impl<T> BitField for BitSlice<Msb0, T> where
T: BitStore, [src]
T: BitStore,
fn load_le<M>(&self) -> M where
M: BitMemory, [src]
M: BitMemory,
fn load_be<M>(&self) -> M where
M: BitMemory, [src]
M: BitMemory,
fn store_le<M>(&mut self, value: M) where
M: BitMemory, [src]
M: BitMemory,
fn store_be<M>(&mut self, value: M) where
M: BitMemory, [src]
M: BitMemory,
fn load<M>(&self) -> M where
M: BitMemory, [src]
M: BitMemory,
fn store<M>(&mut self, value: M) where
M: BitMemory, [src]
M: BitMemory,
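BitField lets a bit-slice region act as storage for a narrow integer: store packs a value into the region and load reads it back. A round-trip sketch; since the store/load pair is symmetric, the exact in-memory placement does not matter here:
use bitvec::prelude::*;

let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
// Pack an 8-bit value into bits 4 .. 12, then read it back out.
bits[4 .. 12].store_le::<u8>(0xA5);
assert_eq!(bits[4 .. 12].load_le::<u8>(), 0xA5);
// Bits outside the targeted range are left untouched.
assert!(!bits[0] && !bits[15]);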
impl<O, T, Rhs> BitOrAssign<Rhs> for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
Rhs: IntoIterator<Item = bool>, [src]
O: BitOrder,
T: BitStore,
Rhs: IntoIterator<Item = bool>,
fn bitor_assign(&mut self, rhs: Rhs)[src]
impl<O, T, Rhs> BitXorAssign<Rhs> for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
Rhs: IntoIterator<Item = bool>, [src]
O: BitOrder,
T: BitStore,
Rhs: IntoIterator<Item = bool>,
fn bitxor_assign(&mut self, rhs: Rhs)[src]
impl<O, V> Borrow<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized, [src]
O: BitOrder,
V: BitView + Sized,
impl<O, T> Borrow<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> Borrow<BitSlice<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, V> BorrowMut<BitSlice<O, <V as BitView>::Store>> for BitArray<O, V> where
O: BitOrder,
V: BitView + Sized, [src]
O: BitOrder,
V: BitView + Sized,
fn borrow_mut(&mut self) -> &mut BitSlice<O, V::Store>[src]
impl<O, T> BorrowMut<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn borrow_mut(&mut self) -> &mut BitSlice<O, T>[src]
impl<O, T> BorrowMut<BitSlice<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn borrow_mut(&mut self) -> &mut BitSlice<O, T>[src]
impl<O, T> Debug for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T, '_> Default for &'_ BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T, '_> Default for &'_ mut BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> Display for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> Eq for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<'a, O, T> From<&'a BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<'a, O, T> From<&'a BitSlice<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<'a, O, T> From<&'a mut BitSlice<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> Hash for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
Writes the contents of the BitSlice, in semantic bit order, into a hasher.
fn hash<H>(&self, hasher: &mut H) where
H: Hasher, [src]
H: Hasher,
fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher, 1.3.0[src]
H: Hasher,
impl<O, T> Index<Range<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = Self
The returned type after indexing.
fn index(&self, index: Range<usize>) -> &Self::Output[src]
impl<O, T> Index<RangeFrom<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = Self
The returned type after indexing.
fn index(&self, index: RangeFrom<usize>) -> &Self::Output[src]
impl<O, T> Index<RangeFull> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = Self
The returned type after indexing.
fn index(&self, index: RangeFull) -> &Self::Output[src]
impl<O, T> Index<RangeInclusive<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = Self
The returned type after indexing.
fn index(&self, index: RangeInclusive<usize>) -> &Self::Output[src]
impl<O, T> Index<RangeTo<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = Self
The returned type after indexing.
fn index(&self, index: RangeTo<usize>) -> &Self::Output[src]
impl<O, T> Index<RangeToInclusive<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = Self
The returned type after indexing.
fn index(&self, index: RangeToInclusive<usize>) -> &Self::Output[src]
impl<O, T> Index<usize> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Output = bool
The returned type after indexing.
fn index(&self, index: usize) -> &Self::Output[src]
Looks up a single bit by semantic index.
Examples
use bitvec::prelude::*;
let bits = bits![Msb0, u8; 0, 0, 0, 0, 0, 0, 0, 0, 1, 0];
assert!(!bits[7]);
assert!(bits[8]);
assert!(!bits[9]);
If the index is greater than or equal to the length, indexing will panic.
The below test will panic when accessing index 1, as only index 0 is valid.
use bitvec::prelude::*; let bits = bits![0, ]; bits[1]; // --------^
impl<O, T> IndexMut<Range<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> IndexMut<RangeFrom<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> IndexMut<RangeFull> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> IndexMut<RangeInclusive<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn index_mut(&mut self, index: RangeInclusive<usize>) -> &mut Self::Output[src]
impl<O, T> IndexMut<RangeTo<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
impl<O, T> IndexMut<RangeToInclusive<usize>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn index_mut(&mut self, index: RangeToInclusive<usize>) -> &mut Self::Output[src]
impl<'a, O, T> IntoIterator for &'a BitSlice<O, T> where
O: 'a + BitOrder,
T: 'a + BitStore, [src]
O: 'a + BitOrder,
T: 'a + BitStore,
type IntoIter = Iter<'a, O, T>
Which kind of iterator are we turning this into?
type Item = <Self::IntoIter as Iterator>::Item
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter[src]
impl<'a, O, T> IntoIterator for &'a mut BitSlice<O, T> where
O: 'a + BitOrder,
T: 'a + BitStore, [src]
O: 'a + BitOrder,
T: 'a + BitStore,
type IntoIter = IterMut<'a, O, T>
Which kind of iterator are we turning this into?
type Item = <Self::IntoIter as Iterator>::Item
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter[src]
impl<O, T> LowerHex for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
Render the contents of a BitSlice in a numeric format.
These implementations render the bits of memory contained in a
BitSlice as one of the three numeric bases that the Rust format
system supports:
Binary renders each bit individually as 0 or 1,
Octal renders clusters of three bits as the numbers 0 through 7, and
UpperHex and LowerHex render clusters of four bits as the numbers 0 through 9 and A through F.
The formatters produce a “word” for each element T of memory. The
chunked formats (octal and hexadecimal) operate somewhat peculiarly:
they show the semantic value of the memory, as interpreted by the
ordering parameter’s implementation rather than the raw value of
memory you might observe with a debugger. In order to ease the
process of expanding numbers back into bits, each digit is grouped to
the right edge of the memory element. So, for example, the byte
0xFF would be rendered as 0o377 rather than 0o773.
Rendered words are chunked by memory element, rather than into the cleanest possible number of digits, in order to aid visualization of the slice’s place in memory.
impl<'a, O, T> Not for &'a mut BitSlice<O, T> where
O: BitOrder,
T: 'a + BitStore, [src]
O: BitOrder,
T: 'a + BitStore,
type Output = Self
The resulting type after applying the ! operator.
fn not(self) -> Self::Output[src]
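Not is implemented on the mutable reference; it inverts every bit of the slice in place and returns the same reference. A brief sketch:
use bitvec::prelude::*;

let mut data = 0b1100_0011u8;
let bits = data.view_bits_mut::<Msb0>();
// `!` flips each bit in place; the returned reference is discarded here.
let _ = !bits;
assert_eq!(data, 0b0011_1100);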
impl<O, T> Octal for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
Render the contents of a BitSlice in a numeric format.
These implementations render the bits of memory contained in a
BitSlice as one of the three numeric bases that the Rust format
system supports:
Binary renders each bit individually as 0 or 1,
Octal renders clusters of three bits as the numbers 0 through 7, and
UpperHex and LowerHex render clusters of four bits as the numbers 0 through 9 and A through F.
The formatters produce a “word” for each element T of memory. The
chunked formats (octal and hexadecimal) operate somewhat peculiarly:
they show the semantic value of the memory, as interpreted by the
ordering parameter’s implementation rather than the raw value of
memory you might observe with a debugger. In order to ease the
process of expanding numbers back into bits, each digit is grouped to
the right edge of the memory element. So, for example, the byte
0xFF would be rendered as 0o377 rather than 0o773.
Rendered words are chunked by memory element, rather than into the cleanest possible number of digits, in order to aid visualization of the slice’s place in memory.
impl<O, T> Ord for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn cmp(&self, rhs: &Self) -> Ordering[src]
#[must_use]fn max(self, other: Self) -> Self1.21.0[src]
#[must_use]fn min(self, other: Self) -> Self1.21.0[src]
#[must_use]fn clamp(self, min: Self, max: Self) -> Self[src]
impl<O1, O2, T1, T2, '_> PartialEq<&'_ BitSlice<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, rhs: &&BitSlice<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<&'_ mut BitSlice<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, rhs: &&mut BitSlice<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O, V, T> PartialEq<BitArray<O, V>> for BitSlice<O, T> where
O: BitOrder,
V: BitView + Sized,
T: BitStore, [src]
O: BitOrder,
V: BitView + Sized,
T: BitStore,
fn eq(&self, other: &BitArray<O, V>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2> PartialEq<BitBox<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, other: &BitBox<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitBox<O2, T2>> for &'_ BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, other: &BitBox<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitBox<O2, T2>> for &'_ mut BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, other: &BitBox<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2> PartialEq<BitSlice<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
Tests if two BitSlices are semantically — not bitwise — equal.
It is valid to compare slices of different ordering or memory types.
The equality condition requires that they have the same length and that at each index, the two slices have the same bit value.
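A brief illustration of cross-parameter comparison; both slices hold the same four semantic bits even though their orderings and storage types differ:
use bitvec::prelude::*;

let a = bits![Msb0, u8; 0, 1, 1, 0];
let b = bits![Lsb0, u16; 0, 1, 1, 0];
// Different ordering and storage parameters, same semantic contents.
assert_eq!(a[..], b[..]);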
fn eq(&self, rhs: &BitSlice<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitSlice<O2, T2>> for &'_ BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, rhs: &BitSlice<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitSlice<O2, T2>> for &'_ mut BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, rhs: &BitSlice<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2> PartialEq<BitVec<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, other: &BitVec<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitVec<O2, T2>> for &'_ BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, other: &BitVec<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitVec<O2, T2>> for &'_ mut BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn eq(&self, other: &BitVec<O2, T2>) -> bool[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialOrd<&'_ BitSlice<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, rhs: &&BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_, '_> PartialOrd<&'_ BitSlice<O2, T2>> for &'_ mut BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, rhs: &&BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialOrd<&'_ mut BitSlice<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, rhs: &&mut BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_, '_> PartialOrd<&'_ mut BitSlice<O2, T2>> for &'_ BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, rhs: &&mut BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O, V, T> PartialOrd<BitArray<O, V>> for BitSlice<O, T> where
O: BitOrder,
V: BitView + Sized,
T: BitStore, [src]
O: BitOrder,
V: BitView + Sized,
T: BitStore,
fn partial_cmp(&self, other: &BitArray<O, V>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O, T> PartialOrd<BitBox<O, T>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn partial_cmp(&self, other: &BitBox<O, T>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2> PartialOrd<BitSlice<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
Compares two BitSlices by semantic — not bitwise — ordering.
The comparison sorts by testing at each index whether one slice has a high bit where the other has a low bit. At the first index where the slices differ, the slice with the high bit is greater. If the slices are equal until at least one terminates, then they are compared by length.
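Two short checks of that rule:
use bitvec::prelude::*;

// The first differing index decides: the slice with the set bit is greater.
assert!(bits![0, 1, 0][..] < bits![0, 1, 1][..]);
// Equal prefixes fall back to comparison by length.
assert!(bits![0, 1][..] < bits![0, 1, 0][..]);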
fn partial_cmp(&self, rhs: &BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialOrd<BitSlice<O2, T2>> for &'_ BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, rhs: &BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialOrd<BitSlice<O2, T2>> for &'_ mut BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore, [src]
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, rhs: &BitSlice<O2, T2>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O, T> PartialOrd<BitVec<O, T>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
fn partial_cmp(&self, other: &BitVec<O, T>) -> Option<Ordering>[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool1.0.0[src]
impl<O, T> Pointer for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
Renders a BitSlice handle as its pointer representation.
impl<'a, O, T> Read for &'a BitSlice<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitField, [src]
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitField,
Mirrors the implementation on [u8].
The implementation loads bytes out of the &BitSlice reference until exhaustion
of either the source BitSlice or destination [u8]. When .read() returns,
self will have been updated to no longer include the leading segment copied
out as bytes of buf.
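A sketch of the described behavior, assuming a byte-aligned Msb0, u8 view, where each full element reads back as its own byte value:
use bitvec::prelude::*;
use std::io::Read;

let data = [0x41u8, 0x42, 0x43];
let mut bits = data.view_bits::<Msb0>();
let mut buf = [0u8; 2];
let n = bits.read(&mut buf).unwrap();
assert_eq!(n, 2);
assert_eq!(buf, [0x41, 0x42]);
// The handle now begins at the first unread bit.
assert_eq!(bits.len(), 8);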
fn read(&mut self, buf: &mut [u8]) -> Result<usize>[src]
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>1.36.0[src]
fn is_read_vectored(&self) -> bool[src]
unsafe fn initializer(&self) -> Initializer[src]
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize, Error>1.0.0[src]
fn read_to_string(&mut self, buf: &mut String) -> Result<usize, Error>1.0.0[src]
fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error>1.6.0[src]
fn by_ref(&mut self) -> &mut Self1.0.0[src]
fn bytes(self) -> Bytes<Self>1.0.0[src]
fn chain<R>(self, next: R) -> Chain<Self, R> where
R: Read, 1.0.0[src]
R: Read,
fn take(self, limit: u64) -> Take<Self>1.0.0[src]
impl<O, T> Send for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
T::Threadsafe: Send, [src]
O: BitOrder,
T: BitStore,
T::Threadsafe: Send,
impl<O, T> Serialize for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
T::Mem: Serialize, [src]
O: BitOrder,
T: BitStore,
T::Mem: Serialize,
impl<O, T> Sync for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
T::Threadsafe: Sync, [src]
O: BitOrder,
T: BitStore,
T::Threadsafe: Sync,
impl<O, T> ToOwned for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
type Owned = BitVec<O, T>
The resulting type after obtaining ownership.
fn to_owned(&self) -> Self::Owned[src]
fn clone_into(&self, target: &mut Self::Owned)[src]
impl<O, O2, T, V, '_> TryFrom<&'_ BitSlice<O2, T>> for BitArray<O, V> where
O: BitOrder,
O2: BitOrder,
T: BitStore,
V: BitView + Sized, [src]
O: BitOrder,
O2: BitOrder,
T: BitStore,
V: BitView + Sized,
type Error = TryFromBitSliceError
The type returned in the event of a conversion error.
fn try_from(src: &BitSlice<O2, T>) -> Result<Self, Self::Error>[src]
impl<'a, O, T> TryFrom<&'a [T]> for &'a BitSlice<O, T> where
O: BitOrder,
T: BitStore + BitMemory, [src]
O: BitOrder,
T: BitStore + BitMemory,
type Error = &'a [T]
The type returned in the event of a conversion error.
fn try_from(slice: &'a [T]) -> Result<Self, Self::Error>[src]
impl<'a, O, V> TryFrom<&'a BitSlice<O, <V as BitView>::Store>> for &'a BitArray<O, V> where
O: BitOrder,
V: BitView + Sized, [src]
O: BitOrder,
V: BitView + Sized,
type Error = TryFromBitSliceError
The type returned in the event of a conversion error.
fn try_from(src: &'a BitSlice<O, V::Store>) -> Result<Self, Self::Error>[src]
impl<'a, O, V> TryFrom<&'a mut BitSlice<O, <V as BitView>::Store>> for &'a mut BitArray<O, V> where
O: BitOrder,
V: BitView + Sized, [src]
O: BitOrder,
V: BitView + Sized,
type Error = TryFromBitSliceError
The type returned in the event of a conversion error.
fn try_from(src: &'a mut BitSlice<O, V::Store>) -> Result<Self, Self::Error>[src]
impl<O, T> UpperHex for BitSlice<O, T> where
O: BitOrder,
T: BitStore, [src]
O: BitOrder,
T: BitStore,
Render the contents of a BitSlice in a numeric format.
These implementations render the bits of memory contained in a
BitSlice as one of the three numeric bases that the Rust format
system supports:
Binary renders each bit individually as 0 or 1,
Octal renders clusters of three bits as the numbers 0 through 7, and
UpperHex and LowerHex render clusters of four bits as the numbers 0 through 9 and A through F.
The formatters produce a “word” for each element T of memory. The
chunked formats (octal and hexadecimal) operate somewhat peculiarly:
they show the semantic value of the memory, as interpreted by the
ordering parameter’s implementation rather than the raw value of
memory you might observe with a debugger. In order to ease the
process of expanding numbers back into bits, each digit is grouped to
the right edge of the memory element. So, for example, the byte
0xFF would be rendered as 0o377 rather than 0o773.
Rendered words are chunked by memory element, rather than into the cleanest possible number of digits, in order to aid visualization of the slice’s place in memory.
impl<'a, O, T> Write for &'a mut BitSlice<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T::Alias>: BitField, [src]
O: BitOrder,
T: BitStore,
BitSlice<O, T::Alias>: BitField,
Mirrors the implementation on [u8].
The implementation copies bytes into the &mut BitSlice reference until
exhaustion of either the source [u8] or destination BitSlice. When
.write() returns, self will have been updated to no longer include the
leading segment containing bytes copied in from buf.
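A sketch of the described behavior, again assuming a byte-aligned Msb0, u8 destination:
use bitvec::prelude::*;
use std::io::Write;

let mut data = [0u8; 3];
let mut bits = data.view_bits_mut::<Msb0>();
let n = bits.write(&[0x41, 0x42]).unwrap();
assert_eq!(n, 2);
// The handle now covers only the unwritten remainder.
assert_eq!(bits.len(), 8);
// The source bytes landed in the leading elements.
assert_eq!(data[.. 2], [0x41, 0x42]);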
fn write(&mut self, buf: &[u8]) -> Result<usize>[src]
fn flush(&mut self) -> Result<()>[src]
fn write_vectored(&mut self, bufs: &[IoSlice<'_>]) -> Result<usize, Error>1.36.0[src]
fn is_write_vectored(&self) -> bool[src]
fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>1.0.0[src]
fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>[src]
fn write_fmt(&mut self, fmt: Arguments<'_>) -> Result<(), Error>1.0.0[src]
fn by_ref(&mut self) -> &mut Self1.0.0[src]
Auto Trait Implementations
impl<O, T> RefUnwindSafe for BitSlice<O, T> where
O: RefUnwindSafe,
T: RefUnwindSafe,
O: RefUnwindSafe,
T: RefUnwindSafe,
impl<O, T> Unpin for BitSlice<O, T> where
O: Unpin,
T: Unpin,
O: Unpin,
T: Unpin,
impl<O, T> UnwindSafe for BitSlice<O, T> where
O: UnwindSafe,
T: UnwindSafe,
O: UnwindSafe,
T: UnwindSafe,
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized, [src]
T: 'static + ?Sized,
impl<T> Borrow<T> for T where
T: ?Sized, [src]
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized, [src]
T: ?Sized,
fn borrow_mut(&mut self) -> &mut T[src]
impl<T> Conv for T[src]
impl<T> FmtForward for T[src]
fn fmt_binary(self) -> FmtBinary<Self> where
Self: Binary, [src]
Self: Binary,
fn fmt_display(self) -> FmtDisplay<Self> where
Self: Display, [src]
Self: Display,
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where
Self: LowerExp, [src]
Self: LowerExp,
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where
Self: LowerHex, [src]
Self: LowerHex,
fn fmt_octal(self) -> FmtOctal<Self> where
Self: Octal, [src]
Self: Octal,
fn fmt_pointer(self) -> FmtPointer<Self> where
Self: Pointer, [src]
Self: Pointer,
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where
Self: UpperExp, [src]
Self: UpperExp,
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where
Self: UpperHex, [src]
Self: UpperHex,
impl<T> From<T> for T[src]
impl<T, U> Into<U> for T where
U: From<T>, [src]
U: From<T>,
impl<T> Pipe for T[src]
impl<T> PipeAsRef for T[src]
fn pipe_as_ref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: AsRef<T>,
T: 'a, [src]
R: 'a,
Self: AsRef<T>,
T: 'a,
fn pipe_as_mut<'a, T, R>(&'a mut self, func: impl FnOnce(&'a mut T) -> R) -> R where
R: 'a,
Self: AsMut<T>,
T: 'a, [src]
R: 'a,
Self: AsMut<T>,
T: 'a,
impl<T> PipeBorrow for T[src]
fn pipe_borrow<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: Borrow<T>,
T: 'a, [src]
R: 'a,
Self: Borrow<T>,
T: 'a,
fn pipe_borrow_mut<'a, T, R>(
&'a mut self,
func: impl FnOnce(&'a mut T) -> R
) -> R where
R: 'a,
Self: BorrowMut<T>,
T: 'a, [src]
&'a mut self,
func: impl FnOnce(&'a mut T) -> R
) -> R where
R: 'a,
Self: BorrowMut<T>,
T: 'a,
impl<T> PipeDeref for T[src]
fn pipe_deref<'a, R>(&'a self, func: impl FnOnce(&'a Self::Target) -> R) -> R where
R: 'a,
Self: Deref, [src]
R: 'a,
Self: Deref,
fn pipe_deref_mut<'a, R>(
&'a mut self,
func: impl FnOnce(&'a mut Self::Target) -> R
) -> R where
R: 'a,
Self: DerefMut, [src]
&'a mut self,
func: impl FnOnce(&'a mut Self::Target) -> R
) -> R where
R: 'a,
Self: DerefMut,
impl<T> PipeRef for T[src]
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a, [src]
R: 'a,
fn pipe_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a, [src]
R: 'a,
impl<T> Tap for T[src]
fn tap<F, R>(self, func: F) -> Self where
F: FnOnce(&Self) -> R, [src]
F: FnOnce(&Self) -> R,
fn tap_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&Self) -> R, [src]
F: FnOnce(&Self) -> R,
fn tap_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self) -> R, [src]
F: FnOnce(&mut Self) -> R,
fn tap_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self) -> R, [src]
F: FnOnce(&mut Self) -> R,
impl<T, U> TapAsRef<U> for T where
U: ?Sized, [src]
U: ?Sized,
fn tap_ref<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: AsRef<T>, [src]
F: FnOnce(&T) -> R,
Self: AsRef<T>,
fn tap_ref_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: AsRef<T>, [src]
F: FnOnce(&T) -> R,
Self: AsRef<T>,
fn tap_ref_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: AsMut<T>, [src]
F: FnOnce(&mut T) -> R,
Self: AsMut<T>,
fn tap_ref_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: AsMut<T>, [src]
F: FnOnce(&mut T) -> R,
Self: AsMut<T>,
impl<T, U> TapBorrow<U> for T where
U: ?Sized, [src]
U: ?Sized,
fn tap_borrow<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: Borrow<T>, [src]
F: FnOnce(&T) -> R,
Self: Borrow<T>,
fn tap_borrow_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: Borrow<T>, [src]
F: FnOnce(&T) -> R,
Self: Borrow<T>,
fn tap_borrow_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>, [src]
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>,
fn tap_borrow_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>, [src]
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>,
impl<T> TapDeref for T[src]
fn tap_deref<F, R>(self, func: F) -> Self where
F: FnOnce(&Self::Target) -> R,
Self: Deref, [src]
F: FnOnce(&Self::Target) -> R,
Self: Deref,
fn tap_deref_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&Self::Target) -> R,
Self: Deref, [src]
F: FnOnce(&Self::Target) -> R,
Self: Deref,
fn tap_deref_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut, [src]
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut,
fn tap_deref_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut, [src]
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut,
impl<T> ToOwned for T where
T: Clone, [src]
T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T[src]
fn clone_into(&self, target: &mut T)[src]
impl<T> ToString for T where
T: Display + ?Sized, [src]
T: Display + ?Sized,
impl<T> TryConv for T[src]
impl<T, U> TryFrom<U> for T where
U: Into<T>, [src]
U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>[src]
impl<T, U> TryInto<U> for T where
U: TryFrom<T>, [src]
U: TryFrom<T>,