#[repr(transparent)]
pub struct Bits { /* private fields */ }
A reference to the bits in an InlAwi, ExtAwi, or other backing
construct. If a function is written just in terms of Bits, it can work on
mixed references to InlAwis, ExtAwis, and FP<B>s.
const big integer arithmetic is possible if the backing type is InlAwi
and the “const_support” flag is enabled.
Bits do not know signedness. Instead, the methods on Bits are
specified to interpret the bits as unsigned or signed two’s complement
integers. If a method’s documentation does not mention signedness, it either
works for both kinds or views the bits as a plain bit string with no
integral properties.
Note
Unless otherwise specified, functions on Bits that return an Option<()>
return None if the input bitwidths are not equal to each other. The Bits
have been left unchanged if None is returned.
Portability
This crate strives to maintain deterministic outputs across architectures
with different usize::BITS and different endiannesses. The
Bits::u8_slice_assign function, the Bits::to_u8_slice functions, the
serialization impls enabled by serde_support, the strings produced by the
const serialization functions, and functions like bits_to_string_radix
in the awint_ext crate are all portable and should be used when sending
representations of Bits between architectures.
The rand_assign_using function enabled by rand_support uses a
deterministic byte oriented implementation to avoid portability issues as
long as the rng itself is portable.
The core::hash::Hash implementation is not deterministic across platforms and may not even be deterministic across compiler versions, due to implementation details; the standard library documentation states that Hash is not intended to be portable anyway.
There are many functions that depend on usize and NonZeroUsize. In cases
where the usize describes the bitwidth, a bit shift, or a bit position,
the user should not need to worry about portability, since if the values are
close to usize::MAX, the user is already close to running out of
memory anyway.
There are a few usages of usize that are not just indexes but are actual
views into a contiguous range of bits inside Bits, such as
Bits::as_slice, Bits::first, and Bits::get_digit (which are all hidden
from the documentation, please refer to the source code of bits.rs if
needed). Most end users should not use these, since they have a strong
dependence on the size of usize. These functions are actual views into the
inner building blocks of this crate that other functions are built around in
such a way that they are portable (e.g. the addition functions may
internally operate on differing numbers of usize digits depending on the
size of usize, but the end result looks the same to users on different
architectures). The only reason these functions are exposed is that someone
may want to write their own custom performant algorithms with as few
abstractions as possible in the way.
Visible functions that are not portable in general, but always start from
the zeroth bit or a given bit position, like Bits::short_cin_mul,
Bits::short_udivide_assign, or Bits::usize_or_assign, are always
portable as long as the digit inputs and/or outputs are restricted to
0..=u16::MAX, or special care is taken.
Implementations
impl<'a> Bits

pub const fn nzbw(&self) -> NonZeroUsize

Returns the bitwidth as a NonZeroUsize

pub const fn u8_slice_assign(&'a mut self, buf: &[u8])

Assigns the bits of buf to self. If (buf.len() * 8) > self.bw()
then the corresponding bits in buf beyond self.bw() are ignored. If
(buf.len() * 8) < self.bw() then the rest of the bits in self are
zeroed. This function is portable across target architecture pointer
sizes and endianness.

pub const fn to_u8_slice(&'a self, buf: &mut [u8])

Assigns the bits of self to buf. If (buf.len() * 8) > self.bw()
then the corresponding bits in buf beyond self.bw() are zeroed. If
(buf.len() * 8) < self.bw() then the remaining bits of self are
ignored. This function is portable across target architecture
pointer sizes and endianness.
impl Bits

pub const fn zero_assign(&mut self)

Zero-assigns. Same as the Unsigned-minimum-value. All bits are set to 0.

pub const fn umax_assign(&mut self)

Unsigned-maximum-value-assigns. All bits are set to 1.

pub const fn imax_assign(&mut self)

Signed-maximum-value-assigns. All bits are set to 1, except for the most significant bit.

pub const fn imin_assign(&mut self)

Signed-minimum-value-assigns. Only the most significant bit is set.

pub const fn uone_assign(&mut self)
Unsigned-one-assigns. Only the least significant bit is set. The unsigned distinction is important, because a positive one value does not exist for signed integers with a bitwidth of 1.
pub const fn not_assign(&mut self)

Not-assigns self

pub const fn copy_assign(&mut self, rhs: &Bits) -> Option<()>

Copy-assigns the bits of rhs to self

pub const fn and_assign(&mut self, rhs: &Bits) -> Option<()>

And-assigns rhs to self

pub const fn xor_assign(&mut self, rhs: &Bits) -> Option<()>

Xor-assigns rhs to self
pub const fn range_and_assign(&mut self, range: Range<usize>) -> Option<()>
And-assigns a range of ones to self. Useful for masking. An empty or
reversed range zeroes self. None is returned if range.start > self.bw() or range.end > self.bw().
pub const fn usize_or_assign(&mut self, rhs: usize, shl: usize)

Or-assigns rhs to self at a position shl. Set bits of rhs that
are shifted beyond the bitwidth of self are truncated.
impl Bits

pub const fn resize_assign(&mut self, rhs: &Bits, extension: bool)
Resize-copy-assigns rhs to self. If self.bw() >= rhs.bw(), the
copied value of rhs will be extended with bits set to extension. If
self.bw() < rhs.bw(), the copied value of rhs will be truncated.
pub const fn zero_resize_assign(&mut self, rhs: &Bits) -> bool
Zero-resize-copy-assigns rhs to self and returns overflow. This is
the same as lhs.resize_assign(rhs, false), but returns true if the
unsigned meaning of the integer is changed.
pub const fn sign_resize_assign(&mut self, rhs: &Bits) -> bool
Sign-resize-copy-assigns rhs to self and returns overflow. This is
the same as lhs.resize_assign(rhs, rhs.msb()), but returns true if
the signed meaning of the integer is changed.
impl Bits

pub const fn ule(&self, rhs: &Bits) -> Option<bool>

Unsigned-less-than-or-equal comparison, self <= rhs

pub const fn ugt(&self, rhs: &Bits) -> Option<bool>

Unsigned-greater-than comparison, self > rhs

pub const fn uge(&self, rhs: &Bits) -> Option<bool>
Unsigned-greater-than-or-equal comparison, self >= rhs
impl Bits
const string representation conversion
Note: the awint_ext crate has higher level allocating functions
ExtAwi::bits_to_string_radix, ExtAwi::bits_to_vec_radix, and
<ExtAwi as FromStr>::from_str
pub const fn bytes_radix_assign(
    &mut self,
    sign: Option<bool>,
    src: &[u8],
    radix: u8,
    pad0: &mut Bits,
    pad1: &mut Bits
) -> Result<(), SerdeError>
Assigns to self the integer value represented by src in the given
radix. If src should be interpreted as unsigned, sign should be
None, otherwise it should be set to the sign. In order for this
function to be const, two scratchpads pad0 and pad1 with the
same bitwidth as self must be supplied, which can be mutated by
the function in arbitrary ways.
Errors
self is not mutated if an error occurs. See crate::SerdeError for
error conditions. The characters 0..=9, a..=z, and A..=Z are
allowed depending on the radix. The char _ is ignored, and all
other chars result in an error. src cannot be empty. The value of
the string must be representable in the bitwidth of self with the
specified sign, otherwise an overflow error is returned.
pub const fn to_bytes_radix(
    &self,
    signed: bool,
    dst: &mut [u8],
    radix: u8,
    upper: bool,
    pad: &mut Bits
) -> Result<(), SerdeError>
Assigns the [u8] representation of self to dst (sign indicators,
prefixes, and postfixes not included). signed specifies if self
should be interpreted as signed. radix specifies the radix, and
upper specifies if letters should be uppercase. In order for this
function to be const, a scratchpad pad with the same bitwidth as
self must be supplied. Note that if dst.len() is more than what
is needed to store the representation, the leading bytes will all be
set to b'0'.
Errors
Note: If an error is returned, dst may be set to anything
This function can fail from NonEqualWidths, InvalidRadix, and
Overflow (if dst cannot represent the value of self). See
crate::SerdeError.
impl Bits
Division
These operations are not inplace unlike many other functions in this crate, because extra mutable space is needed in order to avoid allocation.
Note that signed divisions can overflow when duo.is_imin() and
div.is_umax() (negative one in signed interpretation). The overflow
results in quo.is_imin() and rem.is_zero().
Note about terminology: we like short three letter shorthands, but run into
a problem where the first three letters of “divide”, “dividend”, and
“divisor” all clash with each other. Additionally, the standard Rust
terminology for a function returning a quotient is things such as
i64::wrapping_div, which should have been named i64::wrapping_quo
instead. Here, we choose to type out “divide” in full whenever the operation
involves both quotients and remainders. We don’t use “num” or “den”, because
it may cause confusion later if an awint crate gains rational number
capabilities. We use “quo” for quotient and “rem” for remainder. We use
“div” for divisor. That still leaves a name clash with dividend, so we
choose to use the shorthand “duo”. This originates from the fact that for
inplace division operations (which this crate does not have for performance
purposes and avoiding allocation), the dividend is often subtracted from in
the internal algorithms until it becomes the remainder, so that it serves
two purposes.
pub const fn short_udivide_inplace_assign(&mut self, div: usize) -> Option<usize>
Unsigned-divides self by div, sets self to the quotient, and
returns the remainder. Returns None if div == 0.
pub const fn short_udivide_assign(
    &mut self,
    duo: &Bits,
    div: usize
) -> Option<usize>

Unsigned-divides duo by div, assigns the quotient to self, and
returns the remainder. Returns None if div == 0 or the bitwidths are
not equal.
pub const fn udivide(
    quo: &mut Bits,
    rem: &mut Bits,
    duo: &Bits,
    div: &Bits
) -> Option<()>
Unsigned-divides duo by div and assigns the quotient to quo and
remainder to rem. Returns None if any bitwidths are not equal or
div.is_zero().
pub const fn idivide(
    quo: &mut Bits,
    rem: &mut Bits,
    duo: &mut Bits,
    div: &mut Bits
) -> Option<()>
Signed-divides duo by div and assigns the quotient to quo and
remainder to rem. Returns None if any bitwidths are not equal or
div.is_zero(). duo and div are marked mutable but their values are
not changed by this function.
impl Bits

pub const fn get(&self, inx: usize) -> Option<bool>
Gets the bit at inx bits from the least significant bit, returning
None if inx >= self.bw()
pub const fn set(&mut self, inx: usize, bit: bool) -> Option<()>
Sets the bit at inx bits from the least significant bit, returning
None if inx >= self.bw()
pub const fn count_ones(&self) -> usize
Returns the number of set ones
pub const fn field(
    &mut self,
    to: usize,
    rhs: &Bits,
    from: usize,
    width: usize
) -> Option<()>
“Fielding” bitfields with targeted copy assigns. The bitwidths of self
and rhs do not have to be equal, but the inputs must collectively obey
width <= self.bw() && width <= rhs.bw() && to <= (self.bw() - width) && from <= (rhs.bw() - width),
or else None is returned. width can be zero, in which case this
function just checks the input correctness and does not mutate self.
This function works by copying a width sized bitfield from rhs at
bitposition from and overwriting width bits at bitposition to in
self. Only the width bits in self are mutated, any bits before and
after the bitfield are left unchanged.
use awint::{Bits, inlawi, InlAwi};
// As an example, two hexadecimal digits will be overwritten
// starting with the 12th digit in `y` using a bitfield with
// value 0x42u8 extracted from `x`.
let x = inlawi!(0x11142111u50);
// the underscores are just for emphasis
let mut y = inlawi!(0xfd_ec_ba9876543210u100);
// from `x` digit place 3, we copy 2 digits to `y` digit place 12.
y.field(12 * 4, &x, 3 * 4, 2 * 4).unwrap();
assert_eq!(y, inlawi!(0xfd_42_ba9876543210u100));

pub const fn field_to(&mut self, to: usize, rhs: &Bits, width: usize) -> Option<()>
A specialization of Bits::field with from set to 0.
pub const fn field_from(
    &mut self,
    rhs: &Bits,
    from: usize,
    width: usize
) -> Option<()>
A specialization of Bits::field with to set to 0.
pub const fn field_width(&mut self, rhs: &Bits, width: usize) -> Option<()>
A specialization of Bits::field with to and from set to 0.
pub const fn field_bit(&mut self, to: usize, rhs: &Bits, from: usize) -> Option<()>
A specialization of Bits::field with width set to 1.
pub const fn lut_assign(&mut self, lut: &Bits, inx: &Bits) -> Option<()>
Copy entry from lookup table. Copies a self.bw() sized bitfield from
lut at bit position inx.to_usize() * self.bw(). If lut.bw() != (self.bw() * (2^inx.bw())), None will be returned.
use awint::{Bits, inlawi, InlAwi};
let mut out = inlawi!(0u10);
// lookup table consisting of 4 10-bit entries
let lut = inlawi!(4u10, 3u10, 2u10, 1u10);
// the indexer has to have a bitwidth of 2 to index 2^2 = 4 entries
let mut inx = inlawi!(0u2);
// get the third entry (this is using zero indexing)
inx.usize_assign(2);
out.lut_assign(&lut, &inx).unwrap();
assert_eq!(out, inlawi!(3u10));

pub const fn lut_set(&mut self, entry: &Bits, inx: &Bits) -> Option<()>
Set entry in lookup table. The inverse of Bits::lut_assign, this uses
entry as a bitfield to overwrite part of self at bit position
inx.to_usize() * entry.bw(). If
self.bw() != (entry.bw() * (2^inx.bw())), None will be returned.
impl Bits

pub const fn short_cin_mul(&mut self, cin: usize, rhs: usize) -> usize
Assigns cin + (self * rhs) to self and returns the overflow
pub const fn short_mul_add_assign(
    &mut self,
    lhs: &Bits,
    rhs: usize
) -> Option<bool>
Add-assigns lhs * rhs to self and returns if overflow happened
pub const fn mul_add_assign(&mut self, lhs: &Bits, rhs: &Bits) -> Option<()>
Multiplies lhs by rhs and add-assigns the product to self. Three
operands eliminates the need for an allocating temporary.
pub const fn mul_assign(&mut self, rhs: &Bits, pad: &mut Bits) -> Option<()>
Multiply-assigns self by rhs. pad is a scratchpad that will be
mutated arbitrarily.
pub const fn arb_umul_add_assign(&mut self, lhs: &Bits, rhs: &Bits)
Arbitrarily-unsigned-multiplies lhs by rhs and add-assigns the
product to self. This function is equivalent to:
use awint::prelude::*;
fn arb_umul_assign(add: &mut Bits, lhs: &Bits, rhs: &Bits) {
    let mut resized_lhs = ExtAwi::zero(add.nzbw());
    // Note that this function is specified as unsigned,
    // because we use `zero_resize_assign`
    resized_lhs.zero_resize_assign(lhs);
    let mut resized_rhs = ExtAwi::zero(add.nzbw());
    resized_rhs.zero_resize_assign(rhs);
    add.mul_add_assign(&resized_lhs, &resized_rhs).unwrap();
}

except that it avoids allocation and is more efficient overall.
pub const fn arb_imul_add_assign(&mut self, lhs: &mut Bits, rhs: &mut Bits)

Arbitrarily-signed-multiplies lhs by rhs and add-assigns the product
to self. lhs and rhs are marked mutable, but their values are
not changed by this function.
impl Bits

pub const fn shl_assign(&mut self, s: usize) -> Option<()>
Left-shifts-assigns by s bits. If s >= self.bw(), then
None is returned and the Bits are left unchanged.
Left shifts can act as a very fast multiplication by a power of two for
both the signed and unsigned interpretation of Bits.
pub const fn lshr_assign(&mut self, s: usize) -> Option<()>
Logically-right-shift-assigns by s bits. If s >= self.bw(), then
None is returned and the Bits are left unchanged.
Logical right shifts do not copy the sign bit, and thus can act as a
very fast floored division by a power of two for the unsigned
interpretation of Bits.
pub const fn ashr_assign(&mut self, s: usize) -> Option<()>
Arithmetically-right-shift-assigns by s bits. If s >= self.bw(),
then None is returned and the Bits are left unchanged.
Arithmetic right shifts copy the sign bit, and thus can act as a very
fast floored division by a power of two for the signed interpretation
of Bits.
pub const fn rotl_assign(&mut self, s: usize) -> Option<()>
Left-rotate-assigns by s bits. If s >= self.bw(), then
None is returned and the Bits are left unchanged.
This function is equivalent to the following:

use awint::prelude::*;
let mut input = inlawi!(0x4321u16);
let mut output = inlawi!(0u16);
// rotate left by 4 bits or one hexadecimal digit
let shift = 4;
// temporary clone of the input
let mut tmp = ExtAwi::from(input);
cc!(input; output).unwrap();
if shift != 0 {
    if shift >= input.bw() {
        // the actual function would return `None`
        panic!();
    }
    output.shl_assign(shift).unwrap();
    tmp.lshr_assign(input.bw() - shift).unwrap();
    output.or_assign(&tmp).unwrap();
}
assert_eq!(output, inlawi!(0x3214u16));
let mut using_rotate = ExtAwi::from(input);
using_rotate.rotl_assign(shift).unwrap();
assert_eq!(using_rotate, extawi!(0x3214u16));
// Note that slices are typed in a little-endian order opposite of
// how integers are typed, but they still visually rotate in the
// same way. This means Rust's built-in slice rotation is in the
// opposite direction to integers and `Bits`
let mut array = [4, 3, 2, 1];
array.rotate_left(1);
assert_eq!(array, [3, 2, 1, 4]);
assert_eq!(0x4321u16.rotate_left(4), 0x3214);
let mut x = inlawi!(0x4321u16);
x.rotl_assign(4).unwrap();
// `Bits` has the preferred endianness
assert_eq!(x, inlawi!(0x3214u16));

Unlike the example above which needs cloning, this function avoids any allocation and has many optimized branches for different input sizes and shifts.
pub const fn rotr_assign(&mut self, s: usize) -> Option<()>
Right-rotate-assigns by s bits. If s >= self.bw(), then
None is returned and the Bits are left unchanged.
See Bits::rotl_assign for more details.
pub const fn rev_assign(&mut self)
Reverse-bit-order-assigns self. The least significant bit becomes the
most significant bit, the second least significant bit becomes the
second most significant bit, etc.
pub const fn funnel(&mut self, rhs: &Bits, s: &Bits) -> Option<()>
Funnel shift with power-of-two bitwidths. Returns None if
2*self.bw() != rhs.bw() || 2^s.bw() != self.bw(). A self.bw() sized
field is assigned to self from rhs starting from the bit position
s. The shift cannot overflow because of the restriction on the
bitwidth of s.
use awint::prelude::*;
let mut lhs = inlawi!(0xffff_ffffu32);
let rhs = inlawi!(0xfedc_ba98_7654_3210u64);
// `lhs.bw()` must be a power of two, `s.bw()` here is
// `log_2(32) == 5`. The value of `s` is set to what bit
// of `rhs` should be the starting bit for `lhs`.
let s = inlawi!(12u5);
lhs.funnel(&rhs, &s).unwrap();
assert_eq!(lhs, inlawi!(0xa9876543_u32));

impl Bits
Primitive assignment
If self.bw() is smaller than the primitive bitwidth, truncation will be
used when copying bits from x to self. If the primitive is unsigned (or
is a boolean), then zero extension will be used if self.bw() is larger
than the primitive bitwidth. If the primitive is signed, then sign extension
will be used if self.bw() is larger than the primitive bitwidth.
pub const fn u8_assign(&mut self, x: u8)
pub const fn i8_assign(&mut self, x: i8)
pub const fn u16_assign(&mut self, x: u16)
pub const fn i16_assign(&mut self, x: i16)
pub const fn u32_assign(&mut self, x: u32)
pub const fn i32_assign(&mut self, x: i32)
pub const fn u64_assign(&mut self, x: u64)
pub const fn i64_assign(&mut self, x: i64)
pub const fn u128_assign(&mut self, x: u128)
pub const fn i128_assign(&mut self, x: i128)
pub const fn usize_assign(&mut self, x: usize)
pub const fn isize_assign(&mut self, x: isize)
pub const fn bool_assign(&mut self, x: bool)
impl Bits
Primitive conversion
If self.bw() is larger than the primitive bitwidth, truncation will be
used when copying the bits of self and returning them. If the primitive is
unsigned, then zero extension will be used if self.bw() is smaller than
the primitive bitwidth. If the primitive is signed, then sign extension will
be used if self.bw() is smaller than the primitive bitwidth.
pub const fn to_u8(&self) -> u8
pub const fn to_i8(&self) -> i8
pub const fn to_u16(&self) -> u16
pub const fn to_i16(&self) -> i16
pub const fn to_u32(&self) -> u32
pub const fn to_i32(&self) -> i32
pub const fn to_u64(&self) -> u64
pub const fn to_i64(&self) -> i64
pub const fn to_u128(&self) -> u128
pub const fn to_i128(&self) -> i128
pub const fn to_usize(&self) -> usize
pub const fn to_isize(&self) -> isize
pub const fn to_bool(&self) -> bool
impl Bits

pub const fn inc_assign(&mut self, cin: bool) -> bool
Increment-assigns self with a carry-in cin and returns the carry-out
bit. If cin == true then one is added to self, otherwise nothing
happens. false is always returned unless self.is_umax().
pub const fn dec_assign(&mut self, cin: bool) -> bool
Decrement-assigns self with a carry-in cin and returns the carry-out
bit. If cin == false then one is subtracted from self, otherwise
nothing happens. true is always returned unless self.is_zero().
pub const fn neg_assign(&mut self, neg: bool)
Negate-assigns self if neg is true. Note that signed minimum values
will overflow.
pub const fn abs_assign(&mut self)
Absolute-value-assigns self. Note that signed minimum values will
overflow, unless self is interpreted as unsigned after a call to this
function.
pub const fn add_assign(&mut self, rhs: &Bits) -> Option<()>
Add-assigns by rhs
pub const fn sub_assign(&mut self, rhs: &Bits) -> Option<()>
Subtract-assigns by rhs
pub const fn rsb_assign(&mut self, rhs: &Bits) -> Option<()>
Reverse-subtract-assigns by rhs. Sets self to (-self) + rhs.
pub const fn neg_add_assign(&mut self, neg: bool, rhs: &Bits) -> Option<()>
Negate-add-assigns by rhs. Negates conditionally on neg.
pub const fn cin_sum_assign(
    &mut self,
    cin: bool,
    lhs: &Bits,
    rhs: &Bits
) -> Option<(bool, bool)>
A general summation with carry-in cin and two inputs lhs and rhs.
self is set to the sum. The unsigned overflow (equivalent to the
carry-out bit) and the signed overflow is returned as a tuple. None is
returned if any bitwidths do not match. If subtraction is desired,
one of the operands can be negated.
Trait Implementations
impl BorrowMut<Bits> for ExtAwi

fn borrow_mut(&mut self) -> &mut Bits

Mutably borrows from an owned value.

impl<const BW: usize, const LEN: usize> BorrowMut<Bits> for InlAwi<BW, LEN>

const fn borrow_mut(&mut self) -> &mut Bits

Mutably borrows from an owned value.

impl PartialEq<Bits> for Bits
If self and other have unmatching bit widths, false will be returned.
impl Eq for Bits
impl Send for Bits
Bits is safe to send between threads since it does not own
aliasing memory and has no reference counting mechanism like Rc.
impl Sync for Bits
Bits is safe to share between threads since it does not own
aliasing memory and has no mutable internal state like Cell or RefCell.