Struct bitvec::prelude::BitPtr

#[repr(C, packed)]
pub struct BitPtr<M, O = Lsb0, T = usize> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
{ /* fields omitted */ }

Pointer to an individual bit in a memory element. Analogous to *bool.

Original

*bool and NonNull<bool>

API Differences

This must be a structure, rather than a raw pointer, for two reasons:

  • It is larger than a raw pointer.
  • Raw pointers are not #[fundamental] and cannot have foreign implementations.

Additionally, rather than create two structures to map to *const bool and *mut bool, respectively, this takes mutability as a type parameter.

Because the encoded span pointer requires that memory addresses are well aligned, this type also imposes the alignment requirement and refuses construction for misaligned element addresses. While this type is used in the API equivalent of ordinary raw pointers, it is restricted in value to only be references to memory elements.

ABI Differences

This has alignment 1, rather than an alignment to the processor word. This is necessary for some crate-internal optimizations.

Type Parameters

  • M: Marks whether the pointer permits mutation of memory through it.
  • O: The ordering of bits within a memory element.
  • T: A memory type used to select both the register size and the access behavior when performing loads/stores.

Usage

This structure is used as the bitvec equivalent to *bool. It is used in all raw-pointer APIs, and provides behavior to emulate raw pointers. It cannot be directly dereferenced, as it is not a pointer; it can only be transformed back into higher referential types, or used in bitvec::ptr free functions.

These pointers can never be null or misaligned.

Implementations

impl<M, O, T> BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

pub const DANGLING: Self[src]

The dangling pointer. This selects the starting bit of the T dangling address.

pub fn try_new<A>(addr: A, head: u8) -> Result<Self, BitPtrError<T>> where
    A: TryInto<Address<M, T>>,
    BitPtrError<T>: From<A::Error>, 
[src]

Tries to construct a BitPtr from a memory location and a bit index.

Type Parameters

  • A: This accepts anything that may be used as a memory address.

Parameters

  • addr: The memory address to use in the BitPtr. If this value violates the Address rules, then its conversion error will be returned.
  • head: The index of the bit in *addr that this pointer selects. If this value violates the BitIdx rules, then its conversion error will be returned.

Returns

A new BitPtr, selecting the memory location addr and the bit head. If either addr or head are invalid values, then this propagates their error.

pub fn new(addr: Address<M, T>, head: BitIdx<T::Mem>) -> Self[src]

Constructs a BitPtr from a memory location and a bit index.

Since this requires that the address and bit index are already well-formed, it can assemble the BitPtr without inspecting their values.

Parameters

  • addr: A well-formed memory address of T.
  • head: A well-formed bit index within T.

Returns

A BitPtr selecting the head bit in the location addr.

pub fn raw_parts(self) -> (Address<M, T>, BitIdx<T::Mem>)[src]

Decomposes the pointer into its element address and bit index.

Parameters

  • self

Returns

  • .0: The memory address in which the referent bit is located.
  • .1: The index of the referent bit within *.0.

pub unsafe fn range(self, count: usize) -> BitPtrRange<M, O, T>

Notable traits for BitPtrRange<M, O, T>

impl<M, O, T> Iterator for BitPtrRange<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
type Item = BitPtr<M, O, T>;
[src]

Produces a pointer range starting at self and running for count bits.

This calls self.add(count), then bundles the resulting pointer as the high end of the produced range.

Parameters

  • self: The starting pointer of the produced range.
  • count: The number of bits that the produced range includes.

Returns

A half-open range of pointers, beginning at (and including) self, running for count bits, and ending at (and excluding) self.add(count).

Safety

count cannot violate the constraints in add.

pub unsafe fn into_bitref<'a>(self) -> BitRef<'a, M, O, T>[src]

Converts a bit-pointer into a proxy bit-reference.

Safety

The pointer must be valid to dereference.

pub fn immut(self) -> BitPtr<Const, O, T>[src]

Removes write permissions from a bit-pointer.

pub unsafe fn assert_mut(self) -> BitPtr<Mut, O, T>[src]

Adds write permissions to a bit-pointer.

Safety

This pointer must have been derived from a *mut pointer.

pub fn is_null(self) -> bool[src]

👎 Deprecated:

BitPtr is never null

Tests if a bit-pointer is the null value.

This is always false, as BitPtr wraps a NonNull internally. Use Option<BitPtr> to express the potential for a null pointer.

Original

pointer::is_null

pub fn cast<U>(self) -> BitPtr<M, O, U> where
    U: BitStore
[src]

Casts to a bit-pointer of another storage type, preserving the bit-ordering and mutability permissions.

Original

pointer::cast

Behavior

This is not a free typecast! It encodes the pointer as a crate-internal span descriptor, casts the span descriptor to the U storage element parameter, then decodes the result. This preserves general correctness, but will likely change both the virtual and physical bits addressed by this pointer.

pub unsafe fn as_ref<'a>(self) -> Option<BitRef<'a, Const, O, T>>[src]

Produces a proxy reference to the referent bit.

Because BitPtr is a non-null, well-aligned pointer, this never returns None.

Original

pointer::as_ref

API Differences

This produces a proxy type rather than a true reference. The proxy implements Deref<Target = bool>, and can be converted to &bool with &*.

Safety

Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer and you must ensure the following conditions are met:

  • the pointer must be dereferenceable as defined in the standard library documentation
  • the pointer must point to an initialized instance of T
  • you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy

Examples

use bitvec::prelude::*;

let data = 1u8;
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
let val = unsafe { ptr.as_ref() }.unwrap();
assert!(*val);

pub unsafe fn offset(self, count: isize) -> Self[src]

Calculates the offset from a pointer.

count is in units of bits.

Original

pointer::offset

Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
  • The computed offset, in bytes, cannot overflow an isize.
  • The offset being in bounds cannot rely on “wrapping around” the address space. That is, the infinite-precision sum, in bytes, must fit in a usize.

These pointers are almost always derived from BitSlice regions, which have an encoding limitation that the high three bits of the length counter are zero, so bitvec pointers are even less likely than ordinary pointers to run afoul of these limitations.

Use wrapping_offset if you expect to risk hitting the high edge of the address space.

Examples

use bitvec::prelude::*;

let data = 5u8;
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
assert!(unsafe { ptr.read() });
assert!(!unsafe { ptr.offset(1).read() });
assert!(unsafe { ptr.offset(2).read() });

pub fn wrapping_offset(self, count: isize) -> Self[src]

Calculates the offset from a pointer using wrapping arithmetic.

count is in units of bits.

Original

pointer::wrapping_offset

Safety

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference.

In particular, the resulting pointer remains attached to the same allocated object that self points to. It may not be used to access a different allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.

In other words, x.wrapping_offset((y as usize).wrapping_sub(x as usize)) is not the same as y, and dereferencing it is undefined behavior unless x and y point into the same allocated object.

Compared to offset, this method basically delays the requirement of staying within the same allocated object: offset is immediate Undefined Behavior when crossing object boundaries; wrapping_offset produces a pointer but still leads to Undefined Behavior if that pointer is dereferenced. offset can be optimized better and is thus preferable in performance-sensitive code.

If you need to cross object boundaries, destructure this pointer into its base address and bit index, cast the base address to an integer, and do the arithmetic in the purely integer space.

Examples

use bitvec::prelude::*;

let data = 0u8;
let mut ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
let end = ptr.wrapping_offset(8);
while ptr < end {
  println!("{}", unsafe { ptr.read() });
  ptr = ptr.wrapping_offset(3);
}

pub unsafe fn offset_from(self, origin: Self) -> isize[src]

Calculates the distance between two pointers. The returned value is in units of bits.

This function is the inverse of offset.

Original

pointer::offset_from

Safety

If any of the following conditions are violated, the result is Undefined Behavior:

  • Both the starting and other pointer must be either in bounds or one byte past the end of the same allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
  • Both pointers must be derived from a pointer to the same object.
  • The distance between the pointers, in bytes, cannot overflow an isize.
  • The distance being in bounds cannot rely on “wrapping around” the address space.

These pointers are almost always derived from BitSlice regions, which have an encoding limitation that the high three bits of the length counter are zero, so bitvec pointers are even less likely than ordinary pointers to run afoul of these limitations.

Examples

Basic usage:

use bitvec::prelude::*;

let data = 0u16;
let base = BitPtr::<_, Lsb0, _>::from_ref(&data);
let low = unsafe { base.add(5) };
let high = unsafe { low.add(6) };
unsafe {
  assert_eq!(high.offset_from(low), 6);
  assert_eq!(low.offset_from(high), -6);
  assert_eq!(low.offset(6), high);
  assert_eq!(high.offset(-6), low);
}

Incorrect usage:

use bitvec::prelude::*;

let a = 0u8;
let b = !0u8;
let a_ptr = BitPtr::<_, Lsb0, _>::from_ref(&a);
let b_ptr = BitPtr::<_, Lsb0, _>::from_ref(&b);
let diff = (b_ptr.pointer() as isize)
  .wrapping_sub(a_ptr.pointer() as isize)
  // Remember: raw pointers are byte-addressed,
  // but these are bit-addressed.
  .wrapping_mul(8);
// Create a pointer to `b`, derived from `a`.
let b_ptr_2 = a_ptr.wrapping_offset(diff);

// The pointers are *arithmetically* equal now
assert_eq!(b_ptr, b_ptr_2);
// Undefined Behavior!
unsafe {
  b_ptr_2.offset_from(b_ptr);
}

pub unsafe fn add(self, count: usize) -> Self[src]

Calculates the offset from a pointer (convenience for .offset(count as isize)).

count is in units of bits.

Original

pointer::add

Safety

See offset.

pub unsafe fn sub(self, count: usize) -> Self[src]

Calculates the offset from a pointer (convenience for .offset((count as isize).wrapping_neg())).

count is in units of bits.

Original

pointer::sub

Safety

See offset.

pub fn wrapping_add(self, count: usize) -> Self[src]

Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset(count as isize)).

Original

pointer::wrapping_add

Safety

See wrapping_offset.

pub fn wrapping_sub(self, count: usize) -> Self[src]

Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset((count as isize).wrapping_neg())).

Original

pointer::wrapping_sub

Safety

See wrapping_offset.

pub unsafe fn read(self) -> bool[src]

Reads the bit from *self.

Original

pointer::read

Safety

See ptr::read for safety concerns and examples.

pub unsafe fn read_volatile(self) -> bool[src]

Performs a volatile read of the bit from self.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reördered by the compiler across other volatile operations.

Original

pointer::read_volatile

Safety

See ptr::read_volatile for safety concerns and examples.

pub unsafe fn copy_to<O2, T2>(self, dest: BitPtr<Mut, O2, T2>, count: usize) where
    O2: BitOrder,
    T2: BitStore
[src]

Copies count bits from self to dest. The source and destination may overlap.

NOTE: this has the same argument order as ptr::copy.

Original

pointer::copy_to

Safety

See ptr::copy for safety concerns and examples.

pub unsafe fn copy_to_nonoverlapping<O2, T2>(
    self,
    dest: BitPtr<Mut, O2, T2>,
    count: usize
) where
    O2: BitOrder,
    T2: BitStore
[src]

Copies count bits from self to dest. The source and destination may not overlap.

NOTE: this has the same argument order as ptr::copy_nonoverlapping.

Original

pointer::copy_to_nonoverlapping

Safety

See ptr::copy_nonoverlapping for safety concerns and examples.

ptr::copy_nonoverlapping

pub fn align_offset(self, align: usize) -> usize[src]

Computes the offset (in bits) that needs to be applied to the pointer in order to make it aligned to align.

“Alignment” here means that the pointer is selecting the start bit of a memory location whose address satisfies the requested alignment.

align is measured in bytes. If you wish to align your bit-pointer to a specific fraction (½, ¼, or ⅛ of one byte), please file an issue and this functionality will be added to BitIdx.

Original

pointer::align_offset

If the base-element address of the pointer is already aligned to align, then this will return the bit-offset required to select the first bit of the successor element.

If it is not possible to align the pointer, the implementation returns usize::MAX. It is permissible for the implementation to always return usize::MAX. Only your algorithm’s performance can depend on getting a usable offset here, not its correctness.

The offset is expressed in number of bits, and not T elements or bytes. The value returned can be used with the wrapping_add method.

Safety

There are no guarantees whatsoëver that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.

Panics

The function panics if align is not a power-of-two.

Examples

use bitvec::prelude::*;

let data = [0u8; 3];
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data[0]);
let ptr = unsafe { ptr.add(2) };
let count = ptr.align_offset(2);
assert!(count > 0);

impl<O, T> BitPtr<Const, O, T> where
    O: BitOrder,
    T: BitStore
[src]

pub fn from_ref(elem: &T) -> Self[src]

Constructs a BitPtr from an element reference.

Parameters

  • elem: A borrowed memory element.

Returns

A read-only bit-pointer to the zeroth bit in the *elem location.

pub fn from_ptr(elem: *const T) -> Result<Self, BitPtrError<T>>[src]

Attempts to construct a BitPtr from an element location.

Parameters

  • elem: A read-only element address.

Returns

A read-only bit-pointer to the zeroth bit in the *elem location, if elem is well-formed.

pub fn from_slice(slice: &[T]) -> Self[src]

Constructs a BitPtr from a slice reference.

This differs from from_ref in that the returned pointer keeps its provenance over the entire slice, whereas producing a pointer to the base bit of a slice with BitPtr::from_ref(&slice[0]) narrows its provenance to the slice[0] element alone. Calling add to leave that element, even while remaining within the slice, may then cause UB.

Parameters

  • slice: An immutably borrowed slice of memory.

Returns

A read-only bit-pointer to the zeroth bit in the base location of the slice.

This pointer has provenance over the entire slice, and may safely use add to traverse memory elements as long as it stays within the slice.

pub fn pointer(&self) -> *const T[src]

Gets the pointer to the base memory location containing the referent bit.

impl<O, T> BitPtr<Mut, O, T> where
    O: BitOrder,
    T: BitStore
[src]

pub fn from_mut(elem: &mut T) -> Self[src]

Constructs a BitPtr from an element reference.

Parameters

  • elem: A mutably borrowed memory element.

Returns

A write-capable bit-pointer to the zeroth bit in the *elem location.

Note that even if elem is an address within a contiguous array or slice, the returned bit-pointer only has provenance for the elem location, and no other.

Safety

The exclusive borrow of elem is released after this function returns. However, you must not use any other pointer than that returned by this function to view or modify *elem, unless the T type supports aliased mutation.

pub fn from_mut_ptr(elem: *mut T) -> Result<Self, BitPtrError<T>>[src]

Attempts to construct a BitPtr from an element location.

Parameters

  • elem: A write-capable element address.

Returns

A write-capable bit-pointer to the zeroth bit in the *elem location, if elem is well-formed.

pub fn from_mut_slice(slice: &mut [T]) -> Self[src]

Constructs a BitPtr from a slice reference.

This differs from from_mut in that the returned pointer keeps its provenance over the entire slice, whereas producing a pointer to the base bit of a slice with BitPtr::from_mut(&mut slice[0]) narrows its provenance to the slice[0] element alone. Calling add to leave that element, even while remaining within the slice, may then cause UB.

Parameters

  • slice: A mutably borrowed slice of memory.

Returns

A write-capable bit-pointer to the zeroth bit in the base location of the slice.

This pointer has provenance over the entire slice, and may safely use add to traverse memory elements as long as it stays within the slice.

pub fn pointer(&self) -> *mut T[src]

Gets the pointer to the base memory location containing the referent bit.

pub unsafe fn as_mut<'a>(self) -> Option<BitRef<'a, Mut, O, T>>[src]

Produces a proxy mutable reference to the referent bit.

Because BitPtr is a non-null, well-aligned pointer, this never returns None.

Original

pointer::as_mut

API Differences

This produces a proxy type rather than a true reference. The proxy implements DerefMut<Target = bool>, and can be converted to &mut bool with &mut *. Writes to the proxy are not reflected in the proxied location until the proxy is destroyed, either through Drop or with its set method.

The proxy must be bound as mut in order to write through the binding.

Safety

Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer and you must ensure the following conditions are met:

  • the pointer must be dereferenceable as defined in the standard library documentation
  • the pointer must point to an initialized instance of T
  • you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy

Examples

use bitvec::prelude::*;

let mut data = 0u8;
let ptr = BitPtr::<_, Lsb0, _>::from_mut(&mut data);
let mut val = unsafe { ptr.as_mut() }.unwrap();
assert!(!*val);
*val = true;
assert!(*val);

pub unsafe fn copy_from<O2, T2>(self, src: BitPtr<Const, O2, T2>, count: usize) where
    O2: BitOrder,
    T2: BitStore
[src]

Copies count bits from src to self. The source and destination may overlap.

Note: this has the opposite argument order of ptr::copy.

Original

pointer::copy_from

Safety

See ptr::copy for safety concerns and examples.

pub unsafe fn copy_from_nonoverlapping<O2, T2>(
    self,
    src: BitPtr<Const, O2, T2>,
    count: usize
) where
    O2: BitOrder,
    T2: BitStore
[src]

Copies count bits from src to self. The source and destination may not overlap.

NOTE: this has the opposite argument order of ptr::copy_nonoverlapping.

Original

pointer::copy_from_nonoverlapping

Safety

See ptr::copy_nonoverlapping for safety concerns and examples.

pub unsafe fn write(self, value: bool)[src]

Overwrites a memory location with the given bit.

See ptr::write for safety concerns and examples.

Original

pointer::write

pub unsafe fn write_volatile(self, val: bool)[src]

Performs a volatile write of a memory location with the given bit.

Because processors do not have single-bit write instructions, this must perform a volatile read of the location, perform the bit modification within the processor register, and then perform a volatile write back to memory. These three steps are guaranteed to be sequential, but are not guaranteed to be atomic.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reördered by the compiler across other volatile operations.

Original

pointer::write_volatile

Safety

See ptr::write_volatile for safety concerns and examples.

pub unsafe fn replace(self, src: bool) -> bool[src]

Replaces the bit at *self with src, returning the old bit.

Original

pointer::replace

Safety

See ptr::replace for safety concerns and examples.

pub unsafe fn swap<O2, T2>(self, with: BitPtr<Mut, O2, T2>) where
    O2: BitOrder,
    T2: BitStore
[src]

Swaps the bits at two mutable locations. They may overlap.

Original

pointer::swap

Safety

See ptr::swap for safety concerns and examples.

Trait Implementations

impl<M, O, T> Clone for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<M, O, T> Copy for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<M, O, T> Debug for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<M, O, T> Eq for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<O, T> From<&'_ T> for BitPtr<Const, O, T> where
    O: BitOrder,
    T: BitStore
[src]

impl<O, T> From<&'_ mut T> for BitPtr<Mut, O, T> where
    O: BitOrder,
    T: BitStore
[src]

impl<M, O, T> Hash for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<M, O, T> Ord for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<M1, M2, O, T1, T2> PartialEq<BitPtr<M2, O, T2>> for BitPtr<M1, O, T1> where
    M1: Mutability,
    M2: Mutability,
    O: BitOrder,
    T1: BitStore,
    T2: BitStore
[src]

impl<M1, M2, O, T1, T2> PartialOrd<BitPtr<M2, O, T2>> for BitPtr<M1, O, T1> where
    M1: Mutability,
    M2: Mutability,
    O: BitOrder,
    T1: BitStore,
    T2: BitStore
[src]

impl<M, O, T> Pointer for BitPtr<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<M, O, T> RangeBounds<BitPtr<M, O, T>> for BitPtrRange<M, O, T> where
    M: Mutability,
    O: BitOrder,
    T: BitStore
[src]

impl<O, T> TryFrom<*const T> for BitPtr<Const, O, T> where
    O: BitOrder,
    T: BitStore
[src]

type Error = BitPtrError<T>

The type returned in the event of a conversion error.

impl<O, T> TryFrom<*mut T> for BitPtr<Mut, O, T> where
    O: BitOrder,
    T: BitStore
[src]

type Error = BitPtrError<T>

The type returned in the event of a conversion error.

Auto Trait Implementations

impl<M, O, T> RefUnwindSafe for BitPtr<M, O, T> where
    M: RefUnwindSafe,
    O: RefUnwindSafe,
    T: RefUnwindSafe,
    <T as BitStore>::Mem: RefUnwindSafe
[src]

impl<M, O = Lsb0, T = usize> !Send for BitPtr<M, O, T>[src]

impl<M, O = Lsb0, T = usize> !Sync for BitPtr<M, O, T>[src]

impl<M, O, T> Unpin for BitPtr<M, O, T> where
    M: Unpin,
    O: Unpin
[src]

impl<M, O, T> UnwindSafe for BitPtr<M, O, T> where
    M: UnwindSafe,
    O: UnwindSafe,
    T: RefUnwindSafe,
    <T as BitStore>::Mem: UnwindSafe
[src]

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> Conv for T[src]

impl<T> FmtForward for T[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> Pipe for T where
    T: ?Sized
[src]

impl<T> Pipe for T[src]

impl<T> PipeAsRef for T[src]

impl<T> PipeBorrow for T[src]

impl<T> PipeDeref for T[src]

impl<T> PipeRef for T[src]

impl<T> Tap for T[src]

impl<T, U> TapAsRef<U> for T where
    U: ?Sized
[src]

impl<T, U> TapBorrow<U> for T where
    U: ?Sized
[src]

impl<T> TapDeref for T[src]

impl<T> ToOwned for T where
    T: Clone
[src]

type Owned = T

The resulting type after obtaining ownership.

impl<T> TryConv for T[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.