Struct bitvec::boxed::BitBox
A frozen heap-allocated buffer of individual bits.
This is essentially a BitVec that has frozen its allocation and given up the ability to change size. It is analogous to Box<[bool]>, and is written to be as close as possible to a drop-in replacement for it. This type contains almost no interesting behavior in its own right; it dereferences to BitSlice to manipulate its contents, and it converts to and from BitVec for allocation control.
If you know the length of your bit sequence at compile-time, and it is expressible within the limits of BitArray, you should prefer that type instead. Large BitArrays can be Boxed normally as desired.
Documentation
All APIs that mirror something in the standard library will have an Original section linking to the corresponding item. All APIs that have a different signature or behavior than the original will have an API Differences section explaining what has changed, and how to adapt your existing code to the change. These sections look like this:
Original
API Differences
The buffer type Box<[bool]> has no type parameters. BitBox<O, T> has the same two type parameters as BitSlice<O, T>. Otherwise, BitBox is able to implement the full API surface of Box<[bool]>.
Behavior
Because BitBox is a fully-owned buffer, it is able to operate on its memory without concern for any other views that may alias. This enables it to specialize some BitSlice behavior to be faster or more efficient.
Type Parameters
This takes the same two type parameters, O: BitOrder and T: BitStore, as BitSlice.
Safety
Like BitSlice, BitBox is exactly equal in size to Box<[bool]>, and is also absolutely representation-incompatible with it. You must never attempt to type-cast between Box<[bool]> and BitBox in any way, nor attempt to modify the memory value of a BitBox handle. Doing so will cause allocator and memory errors in your program, likely inducing a panic.
Everything in the BitBox public API, even the unsafe parts, is guaranteed to have no more unsafety than the equivalent items in the standard library. All unsafe APIs will have documentation explicitly detailing what the API requires you to uphold in order for it to function safely and correctly. All safe APIs will do so themselves.
Performance
Iteration over the buffer is governed by the BitSlice characteristics on the type parameter. You are generally better off using larger types when your buffer is a data collection rather than a specific I/O protocol buffer.
Macro Construction
Heap allocation can only occur at runtime, but the bitbox! macro will construct an appropriate BitSlice buffer at compile-time and, at run-time, only copy the buffer into a heap allocation.
Implementations
impl<O, T> BitBox<O, T> where O: BitOrder, T: BitStore
pub fn new(x: &BitSlice<O, T>) -> Self
Prefer ::from_bitslice.
Allocates memory on the heap and copies x into it. This doesn’t actually allocate if x is zero-length.
Original
API Differences
Box::<[T]>::new does not exist, because new cannot take unsized types by value. Instead, this takes a slice reference, and boxes the referent slice.
Examples
use bitvec::prelude::*;
let boxed = BitBox::new(bits![0; 5]);
pub fn pin(x: &BitSlice<O, T>) -> Pin<Self> where O: Unpin, T: Unpin
Constructs a new Pin<BitBox<O, T>>.
BitSlice is always Unpin, so this has no actual immobility effect.
Original
API Differences
As with ::new, this only exists on Box when T is not unsized. This takes a slice reference, and pins the referent slice.
pub unsafe fn from_raw(raw: *mut BitSlice<O, T>) -> Self
Constructs a box from a raw pointer.
After calling this function, the raw pointer is owned by the resulting BitBox. Specifically, the BitBox destructor will free the allocated memory. For this to be safe, the memory must have been allocated in accordance with the memory layout used by Box.
Original
Safety
This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer.
Examples
Recreate a BitBox which was previously converted to a raw pointer using BitBox::into_raw:
use bitvec::prelude::*;
let x = bitbox![0; 10];
let ptr = BitBox::into_raw(x);
let x = unsafe { BitBox::from_raw(ptr) };
pub fn into_raw(b: Self) -> *mut BitSlice<O, T>
Consumes the BitBox, returning a wrapped raw pointer.
The pointer will be properly aligned and non-null.
After calling this function, the caller is responsible for the memory previously managed by the BitBox. In particular, the caller should properly release the memory by converting the pointer back into a BitBox with the BitBox::from_raw function, allowing the BitBox destructor to perform the cleanup.
Note: this is an associated function, which means that you have to call it as BitBox::into_raw(b) instead of b.into_raw(). This is to match layout with the standard library’s Box API; there will never be a name conflict with BitSlice.
Original
Examples
Converting the raw pointer back into a BitBox with BitBox::from_raw for automatic cleanup:
use bitvec::prelude::*;
let b = BitBox::new(bits![Msb0, u32; 0; 32]);
let ptr = BitBox::into_raw(b);
let b = unsafe { BitBox::<Msb0, _>::from_raw(ptr) };
pub fn leak<'a>(b: Self) -> &'a mut BitSlice<O, T> where T: 'a
Consumes and leaks the BitBox, returning a mutable reference, &'a mut BitSlice<O, T>. Note that the memory region [T] must outlive the chosen lifetime 'a.
This function is mainly useful for bit regions that live for the remainder of the program’s life. Dropping the returned reference will cause a memory leak. If this is not acceptable, the reference should first be wrapped with the BitBox::from_raw function, producing a BitBox. This BitBox can then be dropped, which will properly deallocate the memory.
Note: this is an associated function, which means that you have to call it as BitBox::leak(b) instead of b.leak(). This is to match layout with the standard library’s Box API; there will never be a name conflict with BitSlice.
Original
Examples
Simple usage:
use bitvec::prelude::*;
let b = BitBox::new(bits![LocalBits, u32; 0; 32]);
let static_ref: &'static mut BitSlice<LocalBits, u32> = BitBox::leak(b);
static_ref.set(0, true);
assert_eq!(static_ref.count_ones(), 1);
pub fn into_bitvec(self) -> BitVec<O, T>
Converts self into a vector without clones or allocation.
The resulting vector can be converted back into a box via BitVec<O, T>’s into_boxed_bitslice method.
Original
Despite taking a Box<[T]> receiver, this function is written in an impl<T> [T] block. Rust does not allow the text
impl<O, T> BitSlice<O, T> { fn into_bitvec(self: BitBox<O, T>); }
to be written, so this function must be implemented directly on BitBox rather than on BitSlice with a boxed receiver.
Examples
use bitvec::prelude::*;
let bb = bitbox![0, 1, 0, 1];
let bv = bb.into_bitvec();
assert_eq!(bv, bitvec![0, 1, 0, 1]);
impl<O, T> BitBox<O, T> where O: BitOrder, T: BitStore
Methods specific to BitBox<_, T>, and not present on Box<[T]>.
pub fn from_bitslice(slice: &BitSlice<O, T>) -> Self
Clones a &BitSlice into a BitBox.
Original
<Box<T: Clone> as Clone>::clone
Effects
This performs a direct element-wise copy from the source slice to the newly-allocated buffer, then sets the box to have the same starting bit as the slice did. This allows for faster behavior.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1, 1, 0, 1, 1];
let bb = BitBox::from_bitslice(&bits[2 ..]);
assert_eq!(bb, bits[2 ..]);
pub fn from_boxed_slice(boxed: Box<[T]>) -> Self
Converts a Box<[T]> into a BitBox<O, T> without copying its buffer.
Parameters
boxed: A boxed slice to view as bits.
Returns
A BitBox over the boxed buffer.
Panics
This panics if boxed is too long to convert into a BitBox. See BitSlice::MAX_ELTS.
Examples
use bitvec::prelude::*;
let boxed: Box<[u8]> = Box::new([0; 4]);
let bb = BitBox::<LocalBits, _>::from_boxed_slice(boxed);
assert_eq!(bb, bits![0; 32]);
pub fn try_from_boxed_slice(boxed: Box<[T]>) -> Result<Self, Box<[T]>>
Converts a Box<[T]> into a BitBox<O, T> without copying its buffer.
This method takes ownership of a memory buffer and enables it to be used as a bit-box. Because Box<[T]> can be longer than BitBoxes, this is a fallible method, and the original box will be returned if it cannot be converted.
Parameters
boxed: Some boxed slice of memory, to be viewed as bits.
Returns
If boxed is short enough to be viewed as a BitBox, then this returns a BitBox over the boxed buffer. If boxed is too long, then this returns boxed unmodified.
Examples
use bitvec::prelude::*;
let boxed: Box<[u8]> = Box::new([0; 4]);
let bb = BitBox::<LocalBits, _>::try_from_boxed_slice(boxed).unwrap();
assert_eq!(bb[..], bits![0; 32]);
pub fn into_boxed_slice(self) -> Box<[T]>
Converts the box back into an ordinary boxed slice of memory elements.
This does not affect the buffer, only the handle used to control it.
Parameters
self
Returns
An ordinary boxed slice containing all of the bit-box’s memory buffer.
Examples
use bitvec::prelude::*;
let bb = bitbox![0; 5];
let boxed = bb.into_boxed_slice();
assert_eq!(boxed[..], [0][..]);
pub fn as_bitslice(&self) -> &BitSlice<O, T>
Views the buffer’s contents as a BitSlice.
This is equivalent to &bb[..].
Original
<Box<[T]> as AsRef<[T]>>::as_ref
Examples
use bitvec::prelude::*;
let bb = bitbox![0, 1, 1, 0];
let bits = bb.as_bitslice();
pub fn as_mut_bitslice(&mut self) -> &mut BitSlice<O, T>
Extracts a mutable bit-slice of the entire box.
Equivalent to &mut bb[..].
Original
<Box<[T]> as AsMut<[T]>>::as_mut
Examples
use bitvec::prelude::*;
let mut bb = bitbox![0, 1, 0, 1];
let bits = bb.as_mut_bitslice();
bits.set(0, true);
pub fn as_slice(&self) -> &[T]
Extracts an element slice containing the entire box.
Original
<Box<[T]> as AsRef<[T]>>::as_ref
Analogue
See as_bitslice for a &BitBox -> &BitSlice transform.
Examples
use bitvec::prelude::*;
use std::io::{self, Write};
let buffer = bitbox![Msb0, u8; 0, 1, 0, 1, 1, 0, 0, 0];
io::sink().write(buffer.as_slice()).unwrap();
pub fn as_mut_slice(&mut self) -> &mut [T]
Extracts a mutable slice of the entire box.
Original
<Box<[T]> as AsMut<[T]>>::as_mut
Analogue
See as_mut_bitslice for a &mut BitBox -> &mut BitSlice transform.
Examples
use bitvec::prelude::*;
use std::io::{self, Read};
let mut buffer = bitbox![Msb0, u8; 0; 24];
io::repeat(0b101).read_exact(buffer.as_mut_slice()).unwrap();
pub fn set_uninitialized(&mut self, value: bool)
Sets the uninitialized bits of the buffer to a fixed value.
This method modifies all bits in the allocated buffer that are outside the self.as_bitslice() view so that they have a consistent value. This can be used to zero the uninitialized memory so that when viewed as a raw memory slice, bits outside the live region have a predictable value.
Examples
use bitvec::prelude::*;
let mut bb = BitBox::new(&220u8.view_bits::<Lsb0>()[.. 4]);
assert_eq!(bb.count_ones(), 2);
assert_eq!(bb.as_slice(), &[220u8]);
bb.set_uninitialized(false);
assert_eq!(bb.as_slice(), &[12u8]);
bb.set_uninitialized(true);
assert_eq!(bb.as_slice(), &[!3u8]);
Methods from Deref<Target = BitSlice<O, T>>
pub fn len(&self) -> usize
Returns the number of bits in the slice.
Original
Examples
use bitvec::prelude::*;
let data = 0u32;
let bits = data.view_bits::<LocalBits>();
assert_eq!(bits.len(), 32);
pub fn is_empty(&self) -> bool
Returns true if the slice has a length of 0.
Original
Examples
use bitvec::prelude::*;
assert!(BitSlice::<LocalBits, u8>::empty().is_empty());
assert!(!(0u32.view_bits::<LocalBits>()).is_empty());
pub fn first(&self) -> Option<&bool>
Returns the first bit of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;
let data = 1u8;
let bits = data.view_bits::<Lsb0>();
assert_eq!(Some(&true), bits.first());
let empty = BitSlice::<LocalBits, usize>::empty();
assert_eq!(None, empty.first());
pub fn first_mut(&mut self) -> Option<BitMut<'_, O, T>>
Returns a mutable pointer to the first bit of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
if let Some(mut first) = bits.first_mut() {
  *first = true;
}
assert_eq!(data, 1);
pub fn split_first(&self) -> Option<(&bool, &Self)>
Returns the first and all the rest of the bits of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;
let data = 1u8;
let bits = data.view_bits::<Lsb0>();
if let Some((first, rest)) = bits.split_first() {
  assert!(*first);
}
pub fn split_first_mut(&mut self) -> Option<(BitMut<'_, O, T::Alias>, &mut BitSlice<O, T::Alias>)>
Returns the first and all the rest of the bits of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Because the references are permitted to use the same memory address, they are marked as aliasing in order to satisfy Rust’s requirements about freedom from data races.
Examples
use bitvec::prelude::*;
let mut data = 0usize;
let bits = data.view_bits_mut::<Lsb0>();
if let Some((mut first, rest)) = bits.split_first_mut() {
  *first = true;
  *rest.get_mut(1).unwrap() = true;
}
assert_eq!(data, 5);
assert!(BitSlice::<LocalBits, usize>::empty_mut().split_first_mut().is_none());
pub fn split_last(&self) -> Option<(&bool, &Self)>
Returns the last and all the rest of the bits of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;
let data = 1u8;
let bits = data.view_bits::<Msb0>();
if let Some((last, rest)) = bits.split_last() {
  assert!(*last);
}
pub fn split_last_mut(&mut self) -> Option<(BitMut<'_, O, T::Alias>, &mut BitSlice<O, T::Alias>)>
Returns the last and all the rest of the bits of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Because the references are permitted to use the same memory address, they are marked as aliasing in order to satisfy Rust’s requirements about freedom from data races.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
if let Some((mut last, rest)) = bits.split_last_mut() {
  *last = true;
  *rest.get_mut(5).unwrap() = true;
}
assert_eq!(data, 5);
assert!(BitSlice::<LocalBits, usize>::empty_mut().split_last_mut().is_none());
pub fn last(&self) -> Option<&bool>
Returns the last bit of the slice, or None if it is empty.
Original
Examples
use bitvec::prelude::*;
let data = 1u8;
let bits = data.view_bits::<Msb0>();
assert_eq!(Some(&true), bits.last());
let empty = BitSlice::<LocalBits, usize>::empty();
assert_eq!(None, empty.last());
pub fn last_mut(&mut self) -> Option<BitMut<'_, O, T>>
Returns a mutable pointer to the last bit of the slice, or None if it is empty.
Original
API Differences
This crate cannot manifest &mut bool references, and must use the BitMut proxy type where &mut bool exists in the standard library API. The proxy value must be bound as mut in order to write through it.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
if let Some(mut last) = bits.last_mut() {
  *last = true;
}
assert_eq!(data, 1);
pub fn get<'a, I>(&'a self, index: I) -> Option<I::Immut> where I: BitSliceIndex<'a, O, T>
Returns a reference to an element or subslice depending on the type of index.
- If given a position, returns a reference to the element at that position, or None if out of bounds.
- If given a range, returns the subslice corresponding to that range, or None if out of bounds.
Original
Examples
use bitvec::prelude::*;
let data = 2u8;
let bits = data.view_bits::<Lsb0>();
assert_eq!(Some(&true), bits.get(1));
assert_eq!(Some(&bits[1 .. 3]), bits.get(1 .. 3));
assert_eq!(None, bits.get(9));
assert_eq!(None, bits.get(8 .. 10));
pub fn get_mut<'a, I>(&'a mut self, index: I) -> Option<I::Mut> where I: BitSliceIndex<'a, O, T>
Returns a mutable reference to an element or subslice depending on the type of index (see get) or None if the index is out of bounds.
Original
API Differences
When I is usize, this returns BitMut instead of &mut bool.
Examples
use bitvec::prelude::*;
let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
assert!(!bits.get(1).unwrap());
*bits.get_mut(1).unwrap() = true;
assert!(bits.get(1).unwrap());
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> I::Immut where I: BitSliceIndex<'a, O, T>
Returns a reference to an element or subslice, without doing bounds checking.
This is generally not recommended; use with caution!
Unlike the original slice function, calling this with an out-of-bounds index is not technically compile-time undefined behavior, as the references produced do not actually describe local memory. However, the use of an out-of-bounds index will eventually cause an out-of-bounds memory read, which is a runtime safety violation. For a safe alternative see get.
Original
Examples
use bitvec::prelude::*;
let data = 2u16;
let bits = data.view_bits::<Lsb0>();
unsafe {
  assert_eq!(bits.get_unchecked(1), &true);
}
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> I::Mut where I: BitSliceIndex<'a, O, T>
Returns a mutable reference to the output at this location, without doing bounds checking.
This is generally not recommended; use with caution!
Unlike the original slice function, calling this with an out-of-bounds index is not technically compile-time undefined behavior, as the references produced do not actually describe local memory. However, the use of an out-of-bounds index will eventually cause an out-of-bounds memory write, which is a runtime safety violation. For a safe alternative see get_mut.
Original
Examples
use bitvec::prelude::*;
let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
unsafe {
  let mut bit = bits.get_unchecked_mut(1);
  *bit = true;
}
assert_eq!(data, 2);
pub fn as_ptr(&self) -> *const Self
Returns a raw bit-slice pointer to the region.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
The caller must also ensure that the memory the pointer (non-transitively) points to is only written to if T allows shared mutation, using this pointer or any pointer derived from it. If you need to mutate the contents of the slice, use as_mut_ptr.
Modifying the container (such as BitVec) referenced by this slice may cause its buffer to be reällocated, which would also make any pointers to it invalid.
Original
API Differences
This returns *const BitSlice, which is the equivalent of *const [T] instead of *const T. The pointer encoding used requires more than one CPU word of space to address a single bit, so there is no advantage to removing the length information from the encoded pointer value.
Notes
You cannot use any of the methods in the pointer fundamental type or the core::ptr module on the *_ BitSlice type. This pointer retains the bitvec-specific value encoding, and is incomprehensible by the Rust standard library.
The only thing you can do with this pointer is dereference it.
Examples
use bitvec::prelude::*;
let data = 2u16;
let bits = data.view_bits::<Lsb0>();
let bits_ptr = bits.as_ptr();
for i in 0 .. bits.len() {
  assert_eq!(bits[i], unsafe { (&*bits_ptr)[i] });
}
pub fn as_mut_ptr(&mut self) -> *mut Self
Returns an unsafe mutable bit-slice pointer to the region.
The caller must ensure that the slice outlives the pointer this function returns, or else it will end up pointing to garbage.
Modifying the container (such as BitVec) referenced by this slice may cause its buffer to be reällocated, which would also make any pointers to it invalid.
Original
API Differences
This returns *mut BitSlice, which is the equivalent of *mut [T] instead of *mut T. The pointer encoding used requires more than one CPU word of space to address a single bit, so there is no advantage to removing the length information from the encoded pointer value.
Notes
You cannot use any of the methods in the pointer fundamental type or the core::ptr module on the *_ BitSlice type. This pointer retains the bitvec-specific value encoding, and is incomprehensible by the Rust standard library.
Examples
use bitvec::prelude::*;
let mut data = 0u16;
let bits = data.view_bits_mut::<Lsb0>();
let bits_ptr = bits.as_mut_ptr();
for i in 0 .. bits.len() {
  unsafe { &mut *bits_ptr }.set(i, i % 2 == 0);
}
assert_eq!(data, 0b0101_0101_0101_0101);
pub fn swap(&mut self, a: usize, b: usize)
Swaps two bits in the slice.
Original
Arguments
- a: The index of the first bit
- b: The index of the second bit
Panics
Panics if a or b are out of bounds.
Examples
use bitvec::prelude::*;
let mut data = 2u8;
let bits = data.view_bits_mut::<Lsb0>();
bits.swap(1, 3);
assert_eq!(data, 8);
pub fn reverse(&mut self)
Reverses the order of bits in the slice, in place.
Original
Examples
use bitvec::prelude::*;
let mut data = 0b1_1001100u8;
let bits = data.view_bits_mut::<Msb0>();
bits[1 ..].reverse();
assert_eq!(data, 0b1_0011001);
pub fn iter(&self) -> Iter<'_, O, T>
Returns an iterator over the slice.
Original
Examples
use bitvec::prelude::*;
let data = 130u8;
let bits = data.view_bits::<Lsb0>();
let mut iterator = bits.iter();
assert_eq!(iterator.next(), Some(&false));
assert_eq!(iterator.next(), Some(&true));
assert_eq!(iterator.nth(5), Some(&true));
assert_eq!(iterator.next(), None);
pub fn iter_mut(&mut self) -> IterMut<'_, O, T>
Returns an iterator that allows modifying each bit.
Original
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Msb0>();
for (idx, mut elem) in bits.iter_mut().enumerate() {
  *elem = idx % 3 == 0;
}
assert_eq!(data, 0b100_100_10);
pub fn windows(&self, size: usize) -> Windows<'_, O, T>
Returns an iterator over all contiguous windows of length size. The windows overlap. If the slice is shorter than size, the iterator returns no values.
Original
Panics
Panics if size is 0.
Examples
use bitvec::prelude::*;
let data = 0xA5u8;
let bits = data.view_bits::<Msb0>();
let mut iter = bits.windows(6);
assert_eq!(iter.next().unwrap(), &bits[.. 6]);
assert_eq!(iter.next().unwrap(), &bits[1 .. 7]);
assert_eq!(iter.next().unwrap(), &bits[2 ..]);
assert!(iter.next().is_none());
If the slice is shorter than size:
use bitvec::prelude::*;
let bits = BitSlice::<LocalBits, usize>::empty();
let mut iter = bits.windows(1);
assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See chunks_exact for a variant of this iterator that returns chunks of always exactly chunk_size bits, and rchunks for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.chunks(3);
assert_eq!(iter.next().unwrap(), &bits[.. 3]);
assert_eq!(iter.next().unwrap(), &bits[3 .. 6]);
assert_eq!(iter.next().unwrap(), &bits[6 ..]);
assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See chunks_exact_mut for a variant of this iterator that returns chunks of always exactly chunk_size bits, and rchunks_mut for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.chunks_mut(3).enumerate() {
  chunk.set(2 - idx, true);
}
assert_eq!(data, 0b01_010_100);
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler may optimize the resulting code better than in the case of chunks.
See chunks for a variant of this iterator that also returns the remainder as a smaller chunk, and rchunks_exact for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.chunks_exact(3);
assert_eq!(iter.next().unwrap(), &bits[.. 3]);
assert_eq!(iter.next().unwrap(), &bits[3 .. 6]);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &bits[6 ..]);
pub fn chunks_exact_mut(&mut self, chunk_size: usize) -> ChunksExactMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the beginning of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the beginning length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the into_remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler may optimize the resulting code better than in the case of chunks_mut.
See chunks_mut for a variant of this iterator that also returns the remainder as a smaller chunk, and rchunks_exact_mut for the same iterator but starting at the end of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.chunks_exact_mut(3).enumerate() {
  chunk.set(idx, true);
}
assert_eq!(data, 0b00_010_001);
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See rchunks_exact for a variant of this iterator that returns chunks of always exactly chunk_size bits, and chunks for the same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.rchunks(3);
assert_eq!(iter.next().unwrap(), &bits[5 ..]);
assert_eq!(iter.next().unwrap(), &bits[2 .. 5]);
assert_eq!(iter.next().unwrap(), &bits[.. 2]);
assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
See rchunks_exact_mut for a variant of this iterator that returns chunks of always exactly chunk_size bits, and chunks_mut for the same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.rchunks_mut(3).enumerate() {
  chunk.set(2 - idx, true);
}
assert_eq!(data, 0b100_010_01);
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler can often optimize the resulting code better than in the case of chunks.
See rchunks for a variant of this iterator that also returns the remainder as a smaller chunk, and chunks_exact for the same iterator but starting at the beginning of the slice.
Original
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let data = 0xA5u8;
let bits = data.view_bits::<Lsb0>();
let mut iter = bits.rchunks_exact(3);
assert_eq!(iter.next().unwrap(), &bits[5 ..]);
assert_eq!(iter.next().unwrap(), &bits[2 .. 5]);
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), &bits[.. 2]);
pub fn rchunks_exact_mut(&mut self, chunk_size: usize) -> RChunksExactMut<'_, O, T>
Returns an iterator over chunk_size bits of the slice at a time, starting at the end of the slice.
The chunks are mutable slices, and do not overlap. If chunk_size does not divide the length of the slice, then the last up to chunk_size-1 bits will be omitted and can be retrieved from the into_remainder function of the iterator.
Due to each chunk having exactly chunk_size bits, the compiler can often optimize the resulting code better than in the case of chunks_mut.
See rchunks_mut for a variant of this iterator that also returns the remainder as a smaller chunk, and chunks_exact_mut for the same iterator but starting at the beginning of the slice.
Panics
Panics if chunk_size is 0.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = data.view_bits_mut::<Lsb0>();
for (idx, chunk) in bits.rchunks_exact_mut(3).enumerate() {
  chunk.set(idx, true);
}
assert_eq!(data, 0b001_010_00);
pub fn split_at(&self, mid: usize) -> (&Self, &Self)
[src]
Divides one slice into two at an index.
The first will contain all indices from [0, mid)
(excluding the index
mid
itself) and the second will contain all indices from [mid, len)
(excluding the index len
itself).
Original
Panics
Panics if mid > len
.
Examples
use bitvec::prelude::*; let data = 0xC3u8; let bits = data.view_bits::<LocalBits>(); let (left, right) = bits.split_at(0); assert!(left.is_empty()); assert_eq!(right, bits); let (left, right) = bits.split_at(2); assert_eq!(left, &bits[.. 2]); assert_eq!(right, &bits[2 ..]); let (left, right) = bits.split_at(8); assert_eq!(left, bits); assert!(right.is_empty());
pub fn split_at_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<O, T::Alias>, &mut BitSlice<O, T::Alias>)
[src]
Divides one mutable slice into two at an index.
The first will contain all indices from [0, mid)
(excluding the index
mid
itself) and the second will contain all indices from [mid, len)
(excluding the index len
itself).
Original
API Differences
Because the partition point mid
is permitted to occur in the interior
of a memory element T
, this method is required to mark the returned
slices as being to aliased memory. This marking ensures that writes to
the covered memory use the appropriate synchronization behavior of your
build to avoid data races – by default, this makes all writes atomic; on
builds with the atomic
feature disabled, this uses Cell
s and
forbids the produced subslices from leaving the current thread.
See the BitStore
documentation for more information.
Panics
Panics if mid > len
.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); // scoped to restrict the lifetime of the borrows { let (left, right) = bits.split_at_mut(3); *left.get_mut(1).unwrap() = true; *right.get_mut(2).unwrap() = true; } assert_eq!(data, 0b010_00100);
pub fn split<F>(&self, pred: F) -> Split<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
.
The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0b01_001_000u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.split(|_pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[.. 1]); assert_eq!(iter.next().unwrap(), &bits[2 .. 4]); assert_eq!(iter.next().unwrap(), &bits[5 ..]); assert!(iter.next().is_none());
If the first bit is matched, an empty slice will be the first item returned by the iterator. Similarly, if the last element in the slice is matched, an empty slice will be the last item returned by the iterator:
use bitvec::prelude::*; let data = 1u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.split(|_pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[.. 7]); assert!(iter.next().unwrap().is_empty()); assert!(iter.next().is_none());
If two matched bits are directly adjacent, an empty slice will be present between them:
use bitvec::prelude::*; let data = 0b001_100_00u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.split(|pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[0 .. 2]); assert!(iter.next().unwrap().is_empty()); assert_eq!(iter.next().unwrap(), &bits[4 .. 8]); assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over mutable subslices separated by bits that match
pred
. The matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.split_mut(|_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_100_11);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
,
starting at the end of the slice and working backwards. The matched bit
is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0b0001_0000u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.rsplit(|_pos, bit| *bit); assert_eq!(iter.next().unwrap(), &bits[4 ..]); assert_eq!(iter.next().unwrap(), &bits[.. 3]); assert!(iter.next().is_none());
As with split()
, if the first or last bit is matched, an empty slice
will be the first (or last) item returned by the iterator.
use bitvec::prelude::*; let data = 0b1001_0001u8; let bits = data.view_bits::<Msb0>(); let mut iter = bits.rsplit(|_pos, bit| *bit); assert!(iter.next().unwrap().is_empty()); assert_eq!(iter.next().unwrap(), &bits[4 .. 7]); assert_eq!(iter.next().unwrap(), &bits[1 .. 3]); assert!(iter.next().unwrap().is_empty()); assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over mutable subslices separated by bits that match
pred
, starting at the end of the slice and working backwards. The
matched bit is not contained in the subslices.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.rsplit_mut(|_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_100_11);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
,
limited to returning at most n
items. The matched bit is not contained
in the subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Msb0>(); for group in bits.splitn(2, |pos, _bit| pos % 3 == 2) { println!("{}", group.len()); } // 2 // 5
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
,
limited to returning at most n
items. The matched bit is not
contained in the subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.splitn_mut(2, |_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_100_10);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
, limited to returning at most n
items. This starts at the end of the
slice and works backwards. The matched bit is not contained in the
subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let data = 0xA5u8; let bits = data.view_bits::<Msb0>(); for group in bits.rsplitn(2, |pos, _bit| pos % 3 == 2) { println!("{}", group.len()); } // 2 // 5
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, O, T, F> where
F: FnMut(usize, &bool) -> bool,
[src]
Returns an iterator over subslices separated by bits that match pred
, limited to returning at most n
items. This starts at the end of the
slice and works backwards. The matched bit is not contained in the
subslices.
The last item returned, if any, will contain the remainder of the slice.
Original
API Differences
In order to allow more than one bit of information for the split decision, the predicate receives the index of each bit, as well as its value.
Examples
use bitvec::prelude::*; let mut data = 0b001_000_10u8; let bits = data.view_bits_mut::<Msb0>(); for group in bits.rsplitn_mut(2, |_pos, bit| *bit) { *group.get_mut(0).unwrap() = true; } assert_eq!(data, 0b101_000_11);
pub fn contains<O2, T2>(&self, x: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore,
[src]
Returns true
if the slice contains a subslice that matches the given
span.
Original
API Differences
This searches for a matching subslice (allowing different type
parameters) rather than for a specific bit. Searching for a contained
element with a given value is not as useful on a collection of bool
.
Furthermore, BitSlice
defines any
and not_all
, which are
optimized searchers for any true
or false
bit, respectively, in a
sequence.
Examples
This example uses a palindrome pattern to demonstrate that the slice being searched for does not need to have the same type parameters as the slice being searched.
use bitvec::prelude::*; let data = 0b0101_1010u8; let bits_msb = data.view_bits::<Msb0>(); let bits_lsb = data.view_bits::<Lsb0>(); assert!(bits_msb.contains(&bits_lsb[1 .. 5]));
pub fn starts_with<O2, T2>(&self, needle: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore,
[src]
Returns true
if needle
is a prefix of the slice.
Original
Examples
use bitvec::prelude::*; let data = 0b0100_1011u8; let haystack = data.view_bits::<Msb0>(); let needle = &data.view_bits::<Lsb0>()[2 .. 5]; assert!(haystack.starts_with(&needle[.. 2])); assert!(haystack.starts_with(needle)); assert!(!haystack.starts_with(&haystack[2 .. 4]));
Always returns true
if needle
is an empty slice:
use bitvec::prelude::*; let empty = BitSlice::<LocalBits, usize>::empty(); assert!(0u8.view_bits::<LocalBits>().starts_with(empty)); assert!(empty.starts_with(empty));
pub fn ends_with<O2, T2>(&self, needle: &BitSlice<O2, T2>) -> bool where
O2: BitOrder,
T2: BitStore,
[src]
Returns true
if needle
is a suffix of the slice.
Original
Examples
use bitvec::prelude::*; let data = 0b0100_1011u8; let haystack = data.view_bits::<Lsb0>(); let needle = &data.view_bits::<Msb0>()[3 .. 6]; assert!(haystack.ends_with(&needle[1 ..])); assert!(haystack.ends_with(needle)); assert!(!haystack.ends_with(&haystack[2 .. 4]));
Always returns true
if needle
is an empty slice:
use bitvec::prelude::*; let empty = BitSlice::<LocalBits, usize>::empty(); assert!(0u8.view_bits::<LocalBits>().ends_with(empty)); assert!(empty.ends_with(empty));
pub fn rotate_left(&mut self, by: usize)
[src]
Rotates the slice in-place such that the first by
bits of the slice
move to the end while the last self.len() - by
bits move to the front.
After calling rotate_left
, the bit previously at index by
will
become the first bit in the slice.
Original
Panics
This function will panic if by
is greater than the length of the
slice. Note that by == self.len()
does not panic and is a no-op
rotation.
Complexity
Takes linear (in self.len()
) time.
Performance
While this is faster than the equivalent rotation on [bool]
, it is
slower than a handcrafted partial-element rotation on [T]
. Because of
the support for custom orderings, and the lack of specialization, this
method can only accelerate by reducing the number of loop iterations
performed on the slice body, and cannot accelerate by using shift-mask
instructions to move multiple bits in one operation.
Examples
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits.rotate_left(2); assert_eq!(data, 0xC3);
Rotating a subslice:
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 .. 5].rotate_left(1); assert_eq!(data, 0b1_1101_000);
pub fn rotate_right(&mut self, by: usize)
[src]
Rotates the slice in-place such that the first self.len() - by
bits of
the slice move to the end while the last by
bits move to the front.
After calling rotate_right
, the bit previously at index self.len() - by
will become the first bit in the slice.
Original
Panics
This function will panic if by
is greater than the length of the
slice. Note that by == self.len()
does not panic and is a no-op
rotation.
Complexity
Takes linear (in self.len()
) time.
Performance
While this is faster than the equivalent rotation on [bool]
, it is
slower than a handcrafted partial-element rotation on [T]
. Because of
the support for custom orderings, and the lack of specialization, this
method can only accelerate by reducing the number of loop iterations
performed on the slice body, and cannot accelerate by using shift-mask
instructions to move multiple bits in one operation.
Examples
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits.rotate_right(2); assert_eq!(data, 0x3C);
Rotate a subslice:
use bitvec::prelude::*; let mut data = 0xF0u8; let bits = data.view_bits_mut::<Msb0>(); bits[1 .. 5].rotate_right(1); assert_eq!(data, 0b1_0111_000);
pub fn clone_from_bitslice<O2, T2>(&mut self, src: &BitSlice<O2, T2>) where
O2: BitOrder,
T2: BitStore,
[src]
Copies the bits from src
into self
.
The length of src
must be the same as self
.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
Panics
This function will panic if the two slices have different lengths.
Examples
Cloning two bits from a slice into another:
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); let src = 0x0Fu16.view_bits::<Lsb0>(); bits[.. 2].clone_from_bitslice(&src[2 .. 4]); assert_eq!(data, 0xC0);
Rust enforces that there can only be one mutable reference with no
immutable references to a particular piece of data in a particular
scope. Because of this, attempting to use clone_from_bitslice
on a
single slice will result in a compile failure:
use bitvec::prelude::*; let mut data = 3u8; let bits = data.view_bits_mut::<Msb0>(); bits[.. 2].clone_from_bitslice(&bits[6 ..]);
To work around this, we can use split_at_mut
to create two distinct
sub-slices from a slice:
use bitvec::prelude::*; let mut data = 3u8; let bits = data.view_bits_mut::<Msb0>(); let (head, tail) = bits.split_at_mut(4); head.clone_from_bitslice(tail); assert_eq!(data, 0x33);
pub fn copy_from_bitslice(&mut self, src: &Self)
[src]
Copies all bits from src
into self
.
The length of src
must be the same as self
.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
This is unable to guarantee a strictly faster copy behavior than
clone_from_bitslice
. In the future, the implementation may
specialize, as the language allows.
Panics
This function will panic if the two slices have different lengths.
Examples
Copying two bits from a slice into another:
pub fn copy_within<R>(&mut self, src: R, dest: usize) where
R: RangeBounds<usize>,
[src]
Copies bits from one part of the slice to another part of itself.
src
is the range within self
to copy from. dest
is the starting
index of the range within self
to copy to, which will have the same
length as src
. The two ranges may overlap. The ends of the two ranges
must be less than or equal to self.len()
.
Original
Panics
This function will panic if either range exceeds the end of the slice,
or if the end of src
is before the start.
Examples
Copying four bytes within a slice:
use bitvec::prelude::*; let mut data = 0x07u8; let bits = data.view_bits_mut::<Msb0>(); bits.copy_within(5 .., 0); assert_eq!(data, 0xE7);
pub fn swap_with_bitslice<O2, T2>(&mut self, other: &mut BitSlice<O2, T2>) where
O2: BitOrder,
T2: BitStore,
[src]
Swaps all bits in self
with those in other
.
The length of other
must be the same as self
.
Original
API Differences
This method is renamed, as it takes a bit slice rather than an element slice.
Panics
This function will panic if the two slices have different lengths.
Examples
use bitvec::prelude::*; let mut one = [0xA5u8, 0x69]; let mut two = 0x1234u16; let one_bits = one.view_bits_mut::<Msb0>(); let two_bits = two.view_bits_mut::<Lsb0>(); one_bits.swap_with_bitslice(two_bits); assert_eq!(one, [0x2C, 0x48]); assert_eq!(two, 0x96A5);
pub unsafe fn align_to<U>(&self) -> (&Self, &BitSlice<O, U>, &Self) where
U: BitStore,
[src]
Transmute the bitslice to a bitslice of another type, ensuring alignment of the types is maintained.
This method splits the bitslice into three distinct bitslices: prefix, correctly aligned middle bitslice of a new type, and the suffix bitslice. The method may make the middle bitslice the greatest length possible for a given type and input bitslice, but only your algorithm's performance should depend on that, not its correctness. It is permissible for all of the input data to be returned as the prefix or suffix bitslice.
Original
API Differences
Type U
is required to have the same type family as type T
.
Whatever T
is of the fundamental integers, atomics, or Cell
wrappers, U
must be a different width in the same family. Changing the
type family with this method is unsound and strictly forbidden.
Unfortunately, it cannot be guaranteed by this function, so you are
required to abide by this limitation.
Safety
This method is essentially a transmute
with respect to the elements in
the returned middle bitslice, so all the usual caveats pertaining to
transmute::<T, U>
also apply here.
Examples
Basic usage:
use bitvec::prelude::*; unsafe { let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7]; let bits = bytes.view_bits::<LocalBits>(); let (prefix, shorts, suffix) = bits.align_to::<u16>(); match prefix.len() { 0 => { assert_eq!(shorts, bits[.. 48]); assert_eq!(suffix, bits[48 ..]); }, 8 => { assert_eq!(prefix, bits[.. 8]); assert_eq!(shorts, bits[8 ..]); }, _ => unreachable!("This case will not occur") } }
pub unsafe fn align_to_mut<U>(
&mut self
) -> (&mut Self, &mut BitSlice<O, U>, &mut Self) where
U: BitStore,
[src]
Transmute the bitslice to a bitslice of another type, ensuring alignment of the types is maintained.
This method splits the bitslice into three distinct bitslices: prefix, correctly aligned middle bitslice of a new type, and the suffix bitslice. The method may make the middle bitslice the greatest length possible for a given type and input bitslice, but only your algorithm's performance should depend on that, not its correctness. It is permissible for all of the input data to be returned as the prefix or suffix bitslice.
Original
API Differences
Type U
is required to have the same type family as type T
.
Whatever T
is of the fundamental integers, atomics, or Cell
wrappers, U
must be a different width in the same family. Changing the
type family with this method is unsound and strictly forbidden.
Unfortunately, it cannot be guaranteed by this function, so you are
required to abide by this limitation.
Safety
This method is essentially a transmute
with respect to the elements in
the returned middle bitslice, so all the usual caveats pertaining to
transmute::<T, U>
also apply here.
Examples
Basic usage:
use bitvec::prelude::*; unsafe { let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7]; let bits = bytes.view_bits_mut::<LocalBits>(); let (prefix, shorts, suffix) = bits.align_to_mut::<u16>(); // same access and behavior as in `align_to` }
pub fn to_bitvec(&self) -> BitVec<O, T>
[src]
Copies self
into a new BitVec
.
Original
Examples
use bitvec::prelude::*; let bits = bits![0, 1, 0, 1]; let bv = bits.to_bitvec(); assert_eq!(bits, bv);
pub fn repeat(&self, n: usize) -> BitVec<O, T> where
O: BitOrder,
T: BitStore,
[src]
Creates a vector by repeating a slice n
times.
Original
Panics
This function will panic if the capacity would overflow.
Examples
Basic usage:
use bitvec::prelude::*; assert_eq!(bits![0, 1].repeat(3), bits![0, 1, 0, 1, 0, 1]);
A panic upon overflow:
use bitvec::prelude::*; // this will panic at runtime bits![0, 1].repeat(BitSlice::<LocalBits, usize>::MAX_BITS);
pub fn set(&mut self, index: usize, value: bool)
[src]
Sets the bit value at the given position.
Parameters
- &mut self
- index: The bit index to set. It must be in the range 0 .. self.len().
- value: The value to be set, true for 1 and false for 0.
Effects
If index
is valid, then the bit to which it refers is set to value
.
Panics
This method panics if index
is outside the slice domain.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); assert!(!bits.get(7).unwrap()); bits.set(7, true); assert!(bits.get(7).unwrap()); assert_eq!(data, 1);
This example panics when it attempts to set a bit that is out of bounds.
use bitvec::prelude::*; let bits = BitSlice::<LocalBits, usize>::empty_mut(); bits.set(0, false);
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
[src]
Sets a bit at an index, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see set
.
Parameters
- &mut self
- index: The bit index to set. It must be in the range 0 .. self.len(). It will not be checked.
- value: The value to be set.
Effects
The bit at index
is set to value
.
Safety
This method is not safe. It performs raw pointer arithmetic to seek
from the start of the slice to the requested index, and set the bit
there. It does not inspect the length of self
, and it is free to
perform out-of-bounds memory write access.
Use this method only when you have already performed the bounds check, and can guarantee that the call occurs with a safely in-bounds index.
Examples
This example uses a bit slice of length 2, and demonstrates out-of-bounds access to the last bit in the element.
use bitvec::prelude::*; let mut data = 0u8; let bits = &mut data.view_bits_mut::<Msb0>()[2 .. 4]; assert_eq!(bits.len(), 2); unsafe { bits.set_unchecked(5, true); } assert_eq!(data, 1);
pub fn all(&self) -> bool
[src]
Tests if all bits in the slice domain are set (logical ∧
).
Truth Table
0 0 => 0
0 1 => 0
1 0 => 0
1 1 => 1
Parameters
&self
Returns
Whether all bits in the slice domain are set. The empty slice returns
true
.
Examples
use bitvec::prelude::*; let bits = 0xFDu8.view_bits::<Msb0>(); assert!(bits[.. 4].all()); assert!(!bits[4 ..].all());
pub fn any(&self) -> bool
[src]
Tests if any bit in the slice is set (logical ∨
).
Truth Table
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 1
Parameters
&self
Returns
Whether any bit in the slice domain is set. The empty slice returns
false
.
Examples
use bitvec::prelude::*; let bits = 0x40u8.view_bits::<Msb0>(); assert!(bits[.. 4].any()); assert!(!bits[4 ..].any());
pub fn not_all(&self) -> bool
[src]
Tests if any bit in the slice is unset (logical ¬∧
).
Truth Table
0 0 => 1
0 1 => 1
1 0 => 1
1 1 => 0
Parameters
&self
Returns
Whether any bit in the slice domain is unset.
Examples
use bitvec::prelude::*; let bits = 0xFDu8.view_bits::<Msb0>(); assert!(!bits[.. 4].not_all()); assert!(bits[4 ..].not_all());
pub fn not_any(&self) -> bool
[src]
Tests if all bits in the slice are unset (logical ¬∨
).
Truth Table
0 0 => 1
0 1 => 0
1 0 => 0
1 1 => 0
Parameters
&self
Returns
Whether all bits in the slice domain are unset.
Examples
use bitvec::prelude::*; let bits = 0x40u8.view_bits::<Msb0>(); assert!(!bits[.. 4].not_any()); assert!(bits[4 ..].not_any());
pub fn some(&self) -> bool
[src]
Tests whether the slice has some, but not all, bits set and some, but not all, bits unset.
This is false
if either .all
or .not_any
are true
.
Truth Table
0 0 => 0
0 1 => 1
1 0 => 1
1 1 => 0
Parameters
&self
Returns
Whether the slice domain has mixed content. The empty slice returns
false
.
Examples
use bitvec::prelude::*; let data = 0b111_000_10u8; let bits = data.view_bits::<Msb0>(); assert!(!bits[.. 3].some()); assert!(!bits[3 .. 6].some()); assert!(bits.some());
pub fn count_ones(&self) -> usize
[src]
Returns the number of ones in the memory region backing self
.
Parameters
&self
Returns
The number of high bits in the slice domain.
Examples
Basic usage:
use bitvec::prelude::*; let data = 0xF0u8; let bits = data.view_bits::<Msb0>(); assert_eq!(bits[.. 4].count_ones(), 4); assert_eq!(bits[4 ..].count_ones(), 0);
pub fn count_zeros(&self) -> usize
[src]
Returns the number of zeros in the memory region backing self
.
Parameters
&self
Returns
The number of low bits in the slice domain.
Examples
Basic usage:
use bitvec::prelude::*; let data = 0xF0u8; let bits = data.view_bits::<Msb0>(); assert_eq!(bits[.. 4].count_zeros(), 0); assert_eq!(bits[4 ..].count_zeros(), 4);
pub fn set_all(&mut self, value: bool)
[src]
Sets all bits in the slice to a value.
Parameters
&mut self
value
: The bit value to which all bits in the slice will be set.
Examples
use bitvec::prelude::*; let mut src = 0u8; let bits = src.view_bits_mut::<Msb0>(); bits[2 .. 6].set_all(true); assert_eq!(bits.as_slice(), &[0b0011_1100]); bits[3 .. 5].set_all(false); assert_eq!(bits.as_slice(), &[0b0010_0100]); bits[.. 1].set_all(true); assert_eq!(bits.as_slice(), &[0b1010_0100]);
pub fn for_each<F>(&mut self, func: F) where
F: FnMut(usize, bool) -> bool,
[src]
Applies a function to each bit in the slice.
BitSlice
cannot implement IndexMut
, as it cannot manifest &mut bool
references, and the BitMut
proxy reference has an unavoidable
overhead. This method bypasses both problems, by applying a function to
each pair of index and value in the slice, without constructing a proxy
reference.
Parameters
- &mut self
- func: A function which receives two arguments, index: usize and value: bool, and returns a bool.
Effects
For each index in the slice, the result of invoking func
with the
index number and current bit value is written into the slice.
Examples
use bitvec::prelude::*; let mut data = 0u8; let bits = data.view_bits_mut::<Msb0>(); bits.for_each(|idx, _bit| idx % 3 == 0); assert_eq!(data, 0b100_100_10);
pub fn as_slice(&self) -> &[T]
[src]
Accesses the total backing storage of the BitSlice
, as a slice of its
elements.
This method produces a slice over all the memory elements it touches, using the current storage parameter. This is safe to do, as any events that would create an aliasing view into the elements covered by the returned slice will also have caused the slice to use its alias-aware type.
Parameters
&self
Returns
A view of the entire memory region this slice covers, including the edge elements.
pub fn as_raw_slice(&self) -> &[T::Mem]
[src]
Views the wholly-filled elements of the BitSlice
.
This will not include partially-owned edge elements, as they may be
aliased by other handles. To gain access to all elements that the
BitSlice
region covers, use one of the following:
- .as_slice produces a shared slice over all elements, marked aliased as appropriate.
- .domain produces a view describing each component of the region, marking only the contended edges as aliased and the uncontended interior as unaliased.
Parameters
&self
Returns
A slice of all the wholly-filled elements in the BitSlice
backing
storage.
Examples
use bitvec::prelude::*; let data = [1u8, 66]; let bits = data.view_bits::<Msb0>(); let accum = bits .as_raw_slice() .iter() .copied() .map(u8::count_ones) .sum::<u32>(); assert_eq!(accum, 3);
pub fn as_raw_slice_mut(&mut self) -> &mut [T::Mem]
[src]
Views the wholly-filled elements of the BitSlice
.
This will not include partially-owned edge elements, as they may be
aliased by other handles. To gain access to all elements that the
BitSlice
region covers, use one of the following:
- .as_aliased_slice produces a shared slice over all elements, marked as aliased to allow for the possibility of mutation.
- .domain_mut produces a view describing each component of the region, marking only the contended edges as aliased and the uncontended interior as unaliased.
Parameters
&mut self
Returns
A mutable slice of all the wholly-filled elements in the BitSlice
backing storage.
Examples
use bitvec::prelude::*; let mut data = [1u8, 64]; let bits = data.view_bits_mut::<Msb0>(); for elt in bits.as_raw_slice_mut() { *elt |= 2; } assert_eq!(&[3, 66], bits.as_slice());
pub fn bit_domain(&self) -> BitDomain<'_, O, T>
[src]
Splits the slice into the logical components of its memory domain.
This produces a set of read-only subslices, marking as much as possible
as affirmatively lacking any write-capable view (T::NoAlias
). The
unaliased view is able to safely perform unsynchronized reads from
memory without causing undefined behavior, as the type system is able to
statically prove that no other write-capable views exist.
Parameters
&self
Returns
A BitDomain
structure representing the logical components of the
memory region.
Safety Exception
The following snippet describes a means of constructing a T::NoAlias
view into memory that is, in fact, aliased:
use bitvec::prelude::*; use core::sync::atomic::AtomicU8; type Bs<T> = BitSlice<LocalBits, T>; let data = [AtomicU8::new(0), AtomicU8::new(0), AtomicU8::new(0)]; let bits: &Bs<AtomicU8> = data.view_bits::<LocalBits>(); let subslice: &Bs<AtomicU8> = &bits[4 .. 20]; let (_, noalias, _): (_, &Bs<u8>, _) = subslice.bit_domain().region().unwrap();
The noalias
reference, which has memory type u8
, assumes that it can
act as an &u8
reference: unsynchronized loads are permitted, as no
handle exists which is capable of modifying the middle bit of data
.
This means that LLVM is permitted to issue loads from memory wherever
it wants in the block during which noalias
is live, as all loads are
equivalent.
Use of the bits
or subslice
handles, which are still live for the
lifetime of noalias
, to issue .set_aliased
calls into the middle
element introduce undefined behavior. bitvec
permits safe code to
introduce this undefined behavior solely because it requires deliberate
opt-in – you must start from atomic data; this cannot occur when data
is non-atomic – and use of the shared-mutation facility simultaneously
with the unaliasing view.
The .set_aliased
method is speculative, and will be marked as
unsafe
or removed at any suspicion that its presence in the library
has any costs.
Examples
This method can be used to accelerate reads from a slice that is marked as aliased.
use bitvec::prelude::*; type Bs<T> = BitSlice<LocalBits, T>; let mut data = [0u8; 3]; let bits = data.view_bits_mut::<LocalBits>(); let (a, b): ( &mut Bs<<u8 as BitStore>::Alias>, &mut Bs<<u8 as BitStore>::Alias>, ) = bits.split_at_mut(4); let (partial, full, _): ( &Bs<<u8 as BitStore>::Alias>, &Bs<<u8 as BitStore>::Mem>, _, ) = b.bit_domain().region().unwrap(); read_from(partial); // uses alias-aware reads read_from(full); // uses ordinary reads
pub fn bit_domain_mut(&mut self) -> BitDomainMut<'_, O, T>
[src]
Splits the slice into the logical components of its memory domain.
This produces a set of mutable subslices, marking as much as possible
as affirmatively lacking any other view (T::Mem). The bare view is
able to safely perform unsynchronized reads from and writes to memory
without causing undefined behavior, as the type system is able to
statically prove that no other views exist.
Why This Is More Sound Than .bit_domain
The &mut exclusion rule makes it impossible to construct two
references over the same memory where one of them is marked &mut.
This makes it impossible to hold a live reference to memory separately
from any references produced from this method. For the duration of all
references produced by this method, all ancestor references used to
reach this method call are either suspended or dead, and the compiler
will not allow you to use them.
As such, this method cannot introduce undefined behavior where a reference incorrectly believes that the referent memory region is immutable.
pub fn domain(&self) -> Domain<'_, T>
[src]
Splits the slice into immutable references to its underlying memory components.
Unlike .bit_domain and .bit_domain_mut, this does not return smaller
BitSlice handles but rather appropriately-marked references to the
underlying memory elements.
The aliased references allow mutation of these elements. You must not
use mutating methods on these references at all. This function is not
marked unsafe, but this is a contract you must uphold. Use .domain_mut
to modify the underlying elements.
It is not currently possible to forbid mutation through these references. This may change in the future.
Safety Exception
As with .bit_domain, this produces unsynchronized immutable references
over the fully-populated interior elements. If this view is
constructed from a BitSlice handle over atomic memory, then it will
remove the atomic access behavior for the interior elements. This by
itself is safe, as long as no contemporaneous atomic writes to that
memory can occur. You must not retain and use an atomic reference to
the memory region marked as NoAlias for the duration of this view's
existence.
Parameters
&self
Returns
A read-only descriptor of the memory elements backing *self.
pub fn domain_mut(&mut self) -> DomainMut<'_, T>
[src]
Splits the slice into mutable references to its underlying memory elements.
Like .domain, this returns appropriately-marked references to the
underlying memory elements. These references are all writable.
The aliased edge references permit modifying memory beyond their bit
marker. You are required to only mutate the region of these edge
elements that you currently govern. This function is not marked
unsafe, but this is a contract you must uphold.
It is not currently possible to forbid out-of-bounds mutation through these references. This may change in the future.
Parameters
&mut self
Returns
A descriptor of the memory elements underneath *self, permitting mutation.
pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&Self, &Self)
[src]
Splits a slice at some mid-point, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see split_at.
Parameters
&self
mid: The index at which to split the slice. This must be in the range 0 .. self.len().
Returns
.0: &self[.. mid]
.1: &self[mid ..]
Safety
This function is not safe. It performs raw pointer arithmetic to
construct two new references. If mid is out of bounds, then the first
slice will be too large, and the second will be catastrophically
incorrect. As both are references to invalid memory, they are
undefined to construct, and may not ever be used.
Examples
use bitvec::prelude::*;

let data = 0x0180u16;
let bits = data.view_bits::<Msb0>();
let (one, two) = unsafe { bits.split_at_unchecked(8) };
assert!(one[7]);
assert!(two[0]);
pub unsafe fn split_at_unchecked_mut(
&mut self,
mid: usize
) -> (&mut BitSlice<O, T::Alias>, &mut BitSlice<O, T::Alias>)
[src]
Splits a mutable slice at some mid-point, without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see split_at_mut.
Parameters
&mut self
mid: The index at which to split the slice. This must be in the range 0 .. self.len().
Returns
.0: &mut self[.. mid]
.1: &mut self[mid ..]
Safety
This function is not safe. It performs raw pointer arithmetic to
construct two new references. If mid is out of bounds, then the first
slice will be too large, and the second will be catastrophically
incorrect. As both are references to invalid memory, they are
undefined to construct, and may not ever be used.
Examples
use bitvec::prelude::*;

let mut data = 0u16;
let bits = data.view_bits_mut::<Msb0>();
let (one, two) = unsafe { bits.split_at_unchecked_mut(8) };
one.set(7, true);
two.set(0, true);
assert_eq!(data, 0x0180u16);
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
[src]
Swaps the bits at two indices without checking boundary conditions.
This is generally not recommended; use with caution! For a safe
alternative, see swap
.
Parameters
&mut self
a: One index to swap.
b: The other index to swap.
Effects
The bit at index a is written into index b, and the bit at index b is
written into a.
Safety
Both a and b must be less than self.len(). Indices greater than the
length will cause out-of-bounds memory access, which can lead to
memory unsafety and a program crash.
Examples
use bitvec::prelude::*;

let mut data = 8u8;
let bits = data.view_bits_mut::<Msb0>();
unsafe { bits.swap_unchecked(0, 4); }
assert_eq!(data, 128);
pub unsafe fn copy_unchecked(&mut self, from: usize, to: usize)
[src]
Copies a bit from one index to another without checking boundary conditions.
Parameters
&mut self
from: The index whose bit is to be copied.
to: The index into which the copied bit is written.
Effects
The bit at from is written into to.
Safety
Both from and to must be less than self.len(), in order for self to
legally read from and write to them, respectively.
If self had been split from a larger slice, reading from from or
writing to to may not necessarily cause a memory-safety violation in
the Rust model, due to the aliasing system bitvec employs. However,
writing outside the bounds of a slice reference is always a logical
error, as it causes changes observable by another reference handle.
Examples
use bitvec::prelude::*;

let mut data = 1u8;
let bits = data.view_bits_mut::<Lsb0>();
unsafe { bits.copy_unchecked(0, 2) };
assert_eq!(data, 5);
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize) where
R: RangeBounds<usize>,
[src]
Copies bits from one part of the slice to another part of itself.
src is the range within self to copy from. dest is the starting index
of the range within self to copy to, which will have the same length
as src. The two ranges may overlap. The ends of the two ranges must
be less than or equal to self.len().
Effects
self[src] is copied to self[dest .. dest + src.end() - src.start()].
Panics
This function will panic if either range exceeds the end of the slice,
or if the end of src is before the start.
Safety
Both the src range and the target range dest .. dest + src.len() must
not exceed the self.len() slice range.
Examples
use bitvec::prelude::*;

let mut data = 0x07u8;
let bits = data.view_bits_mut::<Msb0>();
unsafe { bits.copy_within_unchecked(5 .., 0); }
assert_eq!(data, 0xE7);
pub fn split_at_aliased_mut(&mut self, mid: usize) -> (&mut Self, &mut Self)
[src]
Splits a mutable slice at some mid-point.
This method has the same behavior as split_at_mut, except that it does
not apply an aliasing marker to the partitioned subslices.
Safety
Because this method is defined only on BitSlices whose T type is
alias-safe, the subslices do not need to be additionally marked.
Trait Implementations
impl<O, T> AsMut<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> AsRef<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Binary for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T, Rhs> BitAnd<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitAndAssign<Rhs>,
[src]
type Output = Self
The resulting type after applying the & operator.
fn bitand(self, rhs: Rhs) -> Self::Output
[src]
impl<O, T, Rhs> BitAndAssign<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitAndAssign<Rhs>,
[src]
fn bitand_assign(&mut self, rhs: Rhs)
[src]
impl<O, T> BitField for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitField,
[src]
fn load_le<M>(&self) -> M where
M: BitMemory,
[src]
fn load_be<M>(&self) -> M where
M: BitMemory,
[src]
fn store_le<M>(&mut self, value: M) where
M: BitMemory,
[src]
fn store_be<M>(&mut self, value: M) where
M: BitMemory,
[src]
fn load<M>(&self) -> M where
M: BitMemory,
[src]
fn store<M>(&mut self, value: M) where
M: BitMemory,
[src]
impl<O, T, Rhs> BitOr<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitOrAssign<Rhs>,
[src]
type Output = Self
The resulting type after applying the | operator.
fn bitor(self, rhs: Rhs) -> Self::Output
[src]
impl<O, T, Rhs> BitOrAssign<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitOrAssign<Rhs>,
[src]
fn bitor_assign(&mut self, rhs: Rhs)
[src]
impl<O, T, Rhs> BitXor<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitXorAssign<Rhs>,
[src]
type Output = Self
The resulting type after applying the ^ operator.
fn bitxor(self, rhs: Rhs) -> Self::Output
[src]
impl<O, T, Rhs> BitXorAssign<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: BitXorAssign<Rhs>,
[src]
fn bitxor_assign(&mut self, rhs: Rhs)
[src]
impl<O, T> Borrow<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> BorrowMut<BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
fn borrow_mut(&mut self) -> &mut BitSlice<O, T>
[src]
impl<O, T> Clone for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
fn clone(&self) -> Self
[src]
fn clone_from(&mut self, source: &Self)
1.0.0[src]
impl<O, T> Debug for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Default for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Deref for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
type Target = BitSlice<O, T>
The resulting type after dereferencing.
fn deref(&self) -> &Self::Target
[src]
impl<O, T> DerefMut for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<'de, O, T> Deserialize<'de> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
T::Mem: Deserialize<'de>,
[src]
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where
D: Deserializer<'de>,
[src]
impl<O, T> Display for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Drop for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Eq for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<'a, O, T> From<&'a BitSlice<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> From<BitBox<O, T>> for BitVec<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> From<BitVec<O, T>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Hash for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
fn hash<H>(&self, state: &mut H) where
H: Hasher,
[src]
fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher,
1.3.0[src]
impl<O, T, Idx> Index<Idx> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: Index<Idx>,
[src]
type Output = <BitSlice<O, T> as Index<Idx>>::Output
The returned type after indexing.
fn index(&self, index: Idx) -> &Self::Output
[src]
impl<O, T, Idx> IndexMut<Idx> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
BitSlice<O, T>: IndexMut<Idx>,
[src]
impl<O, T> Into<Box<[T]>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> LowerHex for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Not for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
type Output = Self
The resulting type after applying the ! operator.
fn not(self) -> Self::Output
[src]
impl<O, T> Octal for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Ord for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
fn cmp(&self, other: &Self) -> Ordering
[src]
#[must_use]fn max(self, other: Self) -> Self
1.21.0[src]
#[must_use]fn min(self, other: Self) -> Self
1.21.0[src]
#[must_use]fn clamp(self, min: Self, max: Self) -> Self
[src]
impl<O1, O2, T1, T2> PartialEq<BitBox<O2, T2>> for BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
[src]
fn eq(&self, other: &BitBox<O2, T2>) -> bool
[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitBox<O2, T2>> for &'_ BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
[src]
fn eq(&self, other: &BitBox<O2, T2>) -> bool
[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O1, O2, T1, T2, '_> PartialEq<BitBox<O2, T2>> for &'_ mut BitSlice<O1, T1> where
O1: BitOrder,
O2: BitOrder,
T1: BitStore,
T2: BitStore,
[src]
fn eq(&self, other: &BitBox<O2, T2>) -> bool
[src]
#[must_use]fn ne(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O, T, Rhs: ?Sized> PartialEq<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
Rhs: PartialEq<BitSlice<O, T>>,
[src]
impl<O, T> PartialOrd<BitBox<O, T>> for BitSlice<O, T> where
O: BitOrder,
T: BitStore,
[src]
fn partial_cmp(&self, other: &BitBox<O, T>) -> Option<Ordering>
[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O, T, Rhs: ?Sized> PartialOrd<Rhs> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
Rhs: PartialOrd<BitSlice<O, T>>,
[src]
fn partial_cmp(&self, other: &Rhs) -> Option<Ordering>
[src]
#[must_use]fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<O, T> Pointer for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Send for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> Serialize for BitBox<O, T> where
O: BitOrder,
T: BitStore,
T::Mem: Serialize,
[src]
impl<O, T> Sync for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> TryFrom<Box<[T]>> for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
type Error = Box<[T]>
The type returned in the event of a conversion error.
fn try_from(boxed: Box<[T]>) -> Result<Self, Self::Error>
[src]
impl<O, T> Unpin for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
impl<O, T> UpperHex for BitBox<O, T> where
O: BitOrder,
T: BitStore,
[src]
Auto Trait Implementations
impl<O, T> RefUnwindSafe for BitBox<O, T> where
O: RefUnwindSafe,
T: RefUnwindSafe,
impl<O, T> UnwindSafe for BitBox<O, T> where
O: RefUnwindSafe,
T: RefUnwindSafe,
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized,
[src]
impl<T> Borrow<T> for T where
T: ?Sized,
[src]
impl<T> BorrowMut<T> for T where
T: ?Sized,
[src]
fn borrow_mut(&mut self) -> &mut T
[src]
impl<T> Conv for T
[src]
impl<T> Conv for T
[src]
impl<T> DeserializeOwned for T where
T: for<'de> Deserialize<'de>,
[src]
impl<T> FmtForward for T
[src]
fn fmt_binary(self) -> FmtBinary<Self> where
Self: Binary,
[src]
fn fmt_display(self) -> FmtDisplay<Self> where
Self: Display,
[src]
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where
Self: LowerExp,
[src]
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where
Self: LowerHex,
[src]
fn fmt_octal(self) -> FmtOctal<Self> where
Self: Octal,
[src]
fn fmt_pointer(self) -> FmtPointer<Self> where
Self: Pointer,
[src]
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where
Self: UpperExp,
[src]
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where
Self: UpperHex,
[src]
impl<T> From<T> for T
[src]
impl<T, U> Into<U> for T where
U: From<T>,
[src]
impl<T> Pipe for T where
T: ?Sized,
[src]
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
[src]
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a,
[src]
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a,
[src]
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R where
B: 'a + ?Sized,
R: 'a,
Self: Borrow<B>,
[src]
fn pipe_borrow_mut<'a, B, R>(
&'a mut self,
func: impl FnOnce(&'a mut B) -> R
) -> R where
B: 'a + ?Sized,
R: 'a,
Self: BorrowMut<B>,
[src]
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R where
R: 'a,
Self: AsRef<U>,
U: 'a + ?Sized,
[src]
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R where
R: 'a,
Self: AsMut<U>,
U: 'a + ?Sized,
[src]
fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: Deref<Target = T>,
T: 'a + ?Sized,
[src]
fn pipe_deref_mut<'a, T, R>(
&'a mut self,
func: impl FnOnce(&'a mut T) -> R
) -> R where
R: 'a,
Self: DerefMut<Target = T> + Deref,
T: 'a + ?Sized,
[src]
impl<T> Pipe for T
[src]
impl<T> PipeAsRef for T
[src]
fn pipe_as_ref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: AsRef<T>,
T: 'a,
[src]
fn pipe_as_mut<'a, T, R>(&'a mut self, func: impl FnOnce(&'a mut T) -> R) -> R where
R: 'a,
Self: AsMut<T>,
T: 'a,
[src]
impl<T> PipeBorrow for T
[src]
fn pipe_borrow<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R where
R: 'a,
Self: Borrow<T>,
T: 'a,
[src]
fn pipe_borrow_mut<'a, T, R>(
&'a mut self,
func: impl FnOnce(&'a mut T) -> R
) -> R where
R: 'a,
Self: BorrowMut<T>,
T: 'a,
[src]
impl<T> PipeDeref for T
[src]
fn pipe_deref<'a, R>(&'a self, func: impl FnOnce(&'a Self::Target) -> R) -> R where
R: 'a,
Self: Deref,
[src]
fn pipe_deref_mut<'a, R>(
&'a mut self,
func: impl FnOnce(&'a mut Self::Target) -> R
) -> R where
R: 'a,
Self: DerefMut,
[src]
impl<T> PipeRef for T
[src]
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where
R: 'a,
[src]
fn pipe_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where
R: 'a,
[src]
impl<T> Tap for T
[src]
fn tap(self, func: impl FnOnce(&Self)) -> Self
[src]
fn tap_mut(self, func: impl FnOnce(&mut Self)) -> Self
[src]
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self where
B: ?Sized,
Self: Borrow<B>,
[src]
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self where
B: ?Sized,
Self: BorrowMut<B>,
[src]
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self where
R: ?Sized,
Self: AsRef<R>,
[src]
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self where
R: ?Sized,
Self: AsMut<R>,
[src]
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self where
Self: Deref<Target = T>,
T: ?Sized,
[src]
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self where
Self: DerefMut<Target = T> + Deref,
T: ?Sized,
[src]
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
[src]
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
[src]
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self where
B: ?Sized,
Self: Borrow<B>,
[src]
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self where
B: ?Sized,
Self: BorrowMut<B>,
[src]
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self where
R: ?Sized,
Self: AsRef<R>,
[src]
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self where
R: ?Sized,
Self: AsMut<R>,
[src]
fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self where
Self: Deref<Target = T>,
T: ?Sized,
[src]
fn tap_deref_mut_dbg<T>(self, func: impl FnOnce(&mut T)) -> Self where
Self: DerefMut<Target = T> + Deref,
T: ?Sized,
[src]
impl<T> Tap for T
[src]
fn tap<F, R>(self, func: F) -> Self where
F: FnOnce(&Self) -> R,
[src]
fn tap_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&Self) -> R,
[src]
fn tap_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self) -> R,
[src]
fn tap_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self) -> R,
[src]
impl<T, U> TapAsRef<U> for T where
U: ?Sized,
[src]
fn tap_ref<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: AsRef<T>,
[src]
fn tap_ref_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: AsRef<T>,
[src]
fn tap_ref_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: AsMut<T>,
[src]
fn tap_ref_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: AsMut<T>,
[src]
impl<T, U> TapBorrow<U> for T where
U: ?Sized,
[src]
fn tap_borrow<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: Borrow<T>,
[src]
fn tap_borrow_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&T) -> R,
Self: Borrow<T>,
[src]
fn tap_borrow_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>,
[src]
fn tap_borrow_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut T) -> R,
Self: BorrowMut<T>,
[src]
impl<T> TapDeref for T
[src]
fn tap_deref<F, R>(self, func: F) -> Self where
F: FnOnce(&Self::Target) -> R,
Self: Deref,
[src]
fn tap_deref_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&Self::Target) -> R,
Self: Deref,
[src]
fn tap_deref_mut<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut,
[src]
fn tap_deref_mut_dbg<F, R>(self, func: F) -> Self where
F: FnOnce(&mut Self::Target) -> R,
Self: DerefMut,
[src]
impl<T> ToOwned for T where
T: Clone,
[src]
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T
[src]
fn clone_into(&self, target: &mut T)
[src]
impl<T> ToString for T where
T: Display + ?Sized,
[src]
impl<T> TryConv for T
[src]
impl<T> TryConv for T
[src]
impl<T, U> TryFrom<U> for T where
U: Into<T>,
[src]
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
[src]
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
[src]