Struct staticvec::StaticVec
A Vec-like struct (mostly directly API-compatible where it can be) implemented with const generics around an array of fixed N capacity.
Methods
impl<T, const N: usize> StaticVec<T, N>

pub const fn new() -> Self
Returns a new StaticVec instance.
pub fn new_from_slice(values: &[T]) -> Self where T: Copy
Returns a new StaticVec instance filled with the contents, if any, of a slice reference, which can be either &mut or & (a &mut slice will implicitly coerce to &). If the slice has a length greater than the StaticVec's declared capacity, any contents after that point are ignored.
Locally requires that T implements Copy to avoid soundness issues.
pub fn new_from_array<const N2: usize>(values: [T; N2]) -> Self
Returns a new StaticVec instance filled with the contents, if any, of an array. If the array has a length greater than the StaticVec's declared capacity, any contents after that point are ignored.
The N2 parameter does not need to be provided explicitly, and can be inferred from the array itself.
This function does not leak memory, as any ignored extra elements in the source array are explicitly dropped with drop_in_place after the array is first wrapped in an instance of MaybeUninit to inhibit the automatic calling of any destructors its contents may have.
Example usage:
// Same input length as the declared capacity:
let v = StaticVec::<i32, 3>::new_from_array([1, 2, 3]);
assert_eq!(v, [1, 2, 3]);
// Truncated to fit the declared capacity:
let v2 = StaticVec::<i32, 3>::new_from_array([1, 2, 3, 4, 5, 6]);
assert_eq!(v2, [1, 2, 3]);
Note that StaticVec also implements From for both slices and static arrays, which may prove more ergonomic in some cases as it allows for a greater degree of type inference:
// The StaticVec on the next line is inferred to be of type `StaticVec<&'static str, 4>`.
let v = StaticVec::from(["A", "B", "C", "D"]);
pub const fn new_from_const_array(values: [T; N]) -> Self
A version of new_from_array specifically designed for use as a const fn constructor (although it can of course be used in non-const contexts as well.) Being const necessitates that this function can only accept arrays with a length exactly equal to the declared capacity of the resulting StaticVec, so if you do need flexibility with regards to input lengths it's recommended that you use new_from_array or the From implementations instead.
Note that both forms of the staticvec! macro are implemented using new_from_const_array, so you may also prefer to use them instead of it directly.
pub const fn len(&self) -> usize
Returns the current length of the StaticVec. Just as for a normal Vec, this means the number of elements that have been added to it with push, insert, etc., except in the case that it has been set directly with the unsafe set_len function.
pub const fn capacity(&self) -> usize
Returns the total capacity of the StaticVec. This is always equivalent to the generic N parameter it was declared with, which determines the fixed size of the backing array.
pub const fn cap() -> usize
Does the same thing as capacity, but as an associated function rather than a method.

pub const CAPACITY: usize
Serves the same purpose as capacity, but as an associated constant rather than a method.
pub const fn remaining_capacity(&self) -> usize
Returns the remaining capacity (which is to say, self.capacity() - self.len()) of the StaticVec.
pub const fn size_in_bytes(&self) -> usize
Returns the total size of the inhabited part of the StaticVec (which may be zero if it has a length of zero or contains ZSTs) in bytes. Specifically, the return value of this function amounts to a calculation of size_of::<T>() * self.length.
pub unsafe fn set_len(&mut self, new_len: usize)
Directly sets the length field of the StaticVec to new_len. Useful if you intend to write to it solely element-wise, but marked unsafe due to how it creates the potential for reading from uninitialized memory later on.
Safety
It is up to the caller to ensure that new_len is less than or equal to the StaticVec's constant N parameter, and that the range of elements covered by a length of new_len is actually initialized. Failure to do so will almost certainly result in undefined behavior.
pub const fn is_empty(&self) -> bool
Returns true if the current length of the StaticVec is 0.

pub const fn is_not_empty(&self) -> bool
Returns true if the current length of the StaticVec is greater than 0.

pub const fn is_full(&self) -> bool
Returns true if the current length of the StaticVec is equal to its capacity.

pub const fn is_not_full(&self) -> bool
Returns true if the current length of the StaticVec is less than its capacity.
pub const fn as_ptr(&self) -> *const T
Returns a constant pointer to the first element of the StaticVec's internal array.

pub const fn as_mut_ptr(&mut self) -> *mut T
Returns a mutable pointer to the first element of the StaticVec's internal array.

pub const fn as_slice(&self) -> &[T]
Returns a constant reference to a slice of the StaticVec's inhabited area.

pub const fn as_mut_slice(&mut self) -> &mut [T]
Returns a mutable reference to a slice of the StaticVec's inhabited area.
pub unsafe fn ptr_at_unchecked(&self, index: usize) -> *const T
Returns a constant pointer to the element of the StaticVec at index without doing any checking to ensure that index is actually within any particular bounds. The return value of this function is equivalent to what would be returned from as_ptr().add(index).
Safety
It is up to the caller to ensure that index is within the appropriate bounds such that the function returns a pointer to a location that falls somewhere inside the full span of the StaticVec's backing array, and that if reading from the returned pointer, it has already been initialized properly.
pub unsafe fn mut_ptr_at_unchecked(&mut self, index: usize) -> *mut T
Returns a mutable pointer to the element of the StaticVec at index without doing any checking to ensure that index is actually within any particular bounds. The return value of this function is equivalent to what would be returned from as_mut_ptr().add(index).
Safety
It is up to the caller to ensure that index is within the appropriate bounds such that the function returns a pointer to a location that falls somewhere inside the full span of the StaticVec's backing array.
It is also the responsibility of the caller to ensure that the length field of the StaticVec is adjusted to properly reflect whatever range of elements this function may be used to initialize, and that if reading from the returned pointer, it has already been initialized properly.
pub fn ptr_at(&self, index: usize) -> *const T
Returns a constant pointer to the element of the StaticVec at index if index is within the range 0..self.length, or panics if it is not. The return value of this function is equivalent to what would be returned from as_ptr().add(index).

pub fn mut_ptr_at(&mut self, index: usize) -> *mut T
Returns a mutable pointer to the element of the StaticVec at index if index is within the range 0..self.length, or panics if it is not. The return value of this function is equivalent to what would be returned from as_mut_ptr().add(index).
pub unsafe fn get_unchecked(&self, index: usize) -> &T
Returns a constant reference to the element of the StaticVec at index without doing any checking to ensure that index is actually within any particular bounds.
Note that unlike slice::get_unchecked, this method only supports accessing individual elements via usize; it cannot also produce subslices. To get a subslice without a bounds check, use self.as_slice().get_unchecked(a..b).
Safety
It is up to the caller to ensure that index is within the range 0..self.length.
pub unsafe fn get_unchecked_mut(&mut self, index: usize) -> &mut T
Returns a mutable reference to the element of the StaticVec at index without doing any checking to ensure that index is actually within any particular bounds.
The same differences between this method and the slice method of the same name apply as do for get_unchecked.
Safety
It is up to the caller to ensure that index is within the range 0..self.length.
pub unsafe fn push_unchecked(&mut self, value: T)
Appends a value to the end of the StaticVec without asserting that its current length is less than N.
Safety
It is up to the caller to ensure that the length of the StaticVec prior to using this function is less than N. Failure to do so will result in writing to an out-of-bounds memory region.

pub unsafe fn pop_unchecked(&mut self) -> T
Pops a value from the end of the StaticVec and returns it directly without asserting that the StaticVec's current length is greater than 0.
Safety
It is up to the caller to ensure that the StaticVec contains at least one element prior to using this function. Failure to do so will result in reading from uninitialized memory.
pub fn try_push(&mut self, value: T) -> Result<(), PushCapacityError<T, N>>
Pushes value to the StaticVec if its current length is less than its capacity, or returns a PushCapacityError otherwise.

pub fn push(&mut self, value: T)
Pushes a value to the end of the StaticVec. Panics if the collection is full; that is, if self.len() == self.capacity().

pub fn pop(&mut self) -> Option<T>
Removes the value at the last position of the StaticVec and returns it in Some if the StaticVec has a current length greater than 0, and returns None otherwise.
pub fn first(&self) -> Option<&T>
Returns a constant reference to the first element of the StaticVec in Some if the StaticVec is not empty, or None otherwise.

pub fn first_mut(&mut self) -> Option<&mut T>
Returns a mutable reference to the first element of the StaticVec in Some if the StaticVec is not empty, or None otherwise.

pub fn last(&self) -> Option<&T>
Returns a constant reference to the last element of the StaticVec in Some if the StaticVec is not empty, or None otherwise.

pub fn last_mut(&mut self) -> Option<&mut T>
Returns a mutable reference to the last element of the StaticVec in Some if the StaticVec is not empty, or None otherwise.
pub fn remove(&mut self, index: usize) -> T
Asserts that index is less than the current length of the StaticVec, and if so removes the value at that position and returns it. Any values that exist in later positions are shifted to the left.

pub fn remove_item(&mut self, item: &T) -> Option<T> where T: PartialEq
Removes the first instance of item from the StaticVec if the item exists.
pub fn swap_pop(&mut self, index: usize) -> Option<T>
Returns None if index is greater than or equal to the current length of the StaticVec. Otherwise, removes the value at that position and returns it in Some, and then moves the last value in the StaticVec into the empty slot.

pub fn swap_remove(&mut self, index: usize) -> T
Asserts that index is less than the current length of the StaticVec, and if so removes the value at that position and returns it, and then moves the last value in the StaticVec into the empty slot.
pub fn insert(&mut self, index: usize, value: T)
Asserts that the current length of the StaticVec is less than N and that index is less than the length, and if so inserts value at that position. Any values that exist in positions after index are shifted to the right.
pub fn insert_many<I: IntoIterator<Item = T>>(&mut self, index: usize, iter: I) where I::IntoIter: ExactSizeIterator<Item = T>
Functionally equivalent to insert, except with multiple items provided by an iterator as opposed to just one. This function will return immediately if / when the StaticVec reaches maximum capacity, regardless of whether the iterator still has more items to yield.
For safety reasons, as StaticVec cannot increase in capacity, the iterator is required to implement ExactSizeIterator rather than just Iterator (though this function still does the appropriate checking internally to avoid dangerous outcomes in the event of a blatantly incorrect ExactSizeIterator implementation.)
pub fn try_insert(&mut self, index: usize, value: T) -> Result<(), CapacityError<N>>
Inserts value at index if the current length of the StaticVec is less than N and index is less than the length, or returns a CapacityError otherwise. Any values that exist in positions after index are shifted to the right.
pub fn contains(&self, value: &T) -> bool where T: PartialEq
Returns true if value is present in the StaticVec. Locally requires that T implements PartialEq to make it possible to compare the elements of the StaticVec with value.
pub fn clear(&mut self)
Removes all contents from the StaticVec and sets its length back to 0.

pub fn iter(&self) -> StaticVecIterConst<T, N>
Returns a StaticVecIterConst over the StaticVec's inhabited area.

pub fn iter_mut(&mut self) -> StaticVecIterMut<T, N>
Returns a StaticVecIterMut over the StaticVec's inhabited area.
pub fn sorted(&self) -> Self where T: Copy + Ord
This is supported on feature="std" only.
Returns a separate, stable-sorted StaticVec of the contents of the StaticVec's inhabited area without modifying the original data. Locally requires that T implements Copy to avoid soundness issues, and Ord to make the sorting possible.
Example usage:
const V: StaticVec<StaticVec<i32, 2>, 2> = staticvec![staticvec![1, 3], staticvec![4, 2]];
assert_eq!(
  V.iter().flatten().collect::<StaticVec<i32, 4>>().sorted(),
  [1, 2, 3, 4]
);
pub fn sorted_unstable(&self) -> Self where T: Copy + Ord
Returns a separate, unstable-sorted StaticVec of the contents of the StaticVec's inhabited area without modifying the original data. Locally requires that T implements Copy to avoid soundness issues, and Ord to make the sorting possible.
pub fn quicksorted_unstable(&self) -> Self where T: Copy + PartialOrd
Returns a separate, unstable-quicksorted StaticVec of the contents of the StaticVec's inhabited area without modifying the original data. Locally requires that T implements Copy to avoid soundness issues, and PartialOrd to make the sorting possible.
Unlike sorted and sorted_unstable, this function does not make use of Rust's built-in sorting methods, but instead makes direct use of a fairly unsophisticated recursive quicksort algorithm implemented in this crate. This has the advantage of only needing to have PartialOrd as a constraint as opposed to Ord, but is very likely less performant for most inputs, so if the type you're sorting does derive or implement Ord it's recommended that you use sorted or sorted_unstable instead of this function.
pub fn reversed(&self) -> Self where T: Copy
Returns a separate, reversed StaticVec of the contents of the StaticVec's inhabited area without modifying the original data. Locally requires that T implements Copy to avoid soundness issues.
pub fn filled_with<F>(initializer: F) -> Self where F: FnMut() -> T
Returns a new StaticVec instance filled with the return value of an initializer function. The length field of the newly created StaticVec will be equal to its capacity.
Example usage:
let mut i = 0;
let v = StaticVec::<i32, 64>::filled_with(|| { i += 1; i });
assert_eq!(v.len(), 64);
assert_eq!(v[0], 1);
assert_eq!(v[1], 2);
assert_eq!(v[2], 3);
assert_eq!(v[3], 4);
pub fn filled_with_by_index<F>(initializer: F) -> Self where F: FnMut(usize) -> T
Returns a new StaticVec instance filled with the return value of an initializer function. Unlike for filled_with, the initializer function in this case must take a single usize variable as an input parameter, which will be called with the current index of the 0..N loop that filled_with_by_index is implemented with internally. The length field of the newly created StaticVec will be equal to its capacity.
Example usage:
let v = StaticVec::<usize, 64>::filled_with_by_index(|i| { i + 1 });
assert_eq!(v.len(), 64);
assert_eq!(v[0], 1);
assert_eq!(v[1], 2);
assert_eq!(v[2], 3);
assert_eq!(v[3], 4);
pub fn extend_from_slice(&mut self, other: &[T]) where T: Copy
Copies and appends all elements, if any, of a slice (which can also be &mut as it will coerce implicitly to &) to the StaticVec. If the slice has a length greater than the StaticVec's remaining capacity, any contents after that point are ignored. Locally requires that T implements Copy to avoid soundness issues.
pub fn try_extend_from_slice(&mut self, other: &[T]) -> Result<(), CapacityError<N>> where T: Copy
Copies and appends all elements, if any, of a slice to the StaticVec if the StaticVec's remaining capacity is greater than the length of the slice, or returns a CapacityError otherwise.
pub fn append<const N2: usize>(&mut self, other: &mut StaticVec<T, N2>)
Appends self.remaining_capacity() (or as many as available) items from other to self. The appended items (if any) will no longer exist in other afterwards, as other's length field will be adjusted to indicate.
The N2 parameter does not need to be provided explicitly, and can be inferred directly from the constant N2 constraint of other (which may or may not be the same as the N constraint of self.)
pub fn concat<const N2: usize>(&self, other: &StaticVec<T, N2>) -> StaticVec<T, { N + N2 }> where T: Copy
Returns a new StaticVec consisting of the elements of self and other concatenated in linear fashion such that the first element of other comes immediately after the last element of self.
The N2 parameter does not need to be provided explicitly, and can be inferred directly from the constant N2 constraint of other (which may or may not be the same as the N constraint of self.)
Locally requires that T implements Copy to avoid soundness issues and also allow for a more efficient implementation than would otherwise be possible.
Example usage:
assert_eq!(
  staticvec!["A, B"].concat(&staticvec!["C", "D", "E", "F"]),
  ["A, B", "C", "D", "E", "F"]
);
pub fn concat_clone<const N2: usize>(&self, other: &StaticVec<T, N2>) -> StaticVec<T, { N + N2 }> where T: Clone
A version of concat for scenarios where T does not derive Copy but does implement Clone.
Due to needing to call clone() through each individual element of self and other, this function is less efficient than concat, so concat should be preferred whenever possible.
pub fn intersperse(&self, separator: T) -> StaticVec<T, { N * 2 }> where T: Copy
Returns a new StaticVec consisting of the elements of self in linear order, interspersed with a copy of separator between each one.
Locally requires that T implements Copy to avoid soundness issues and also allow for a more efficient implementation than would otherwise be possible.
Example usage:
assert_eq!(
  staticvec!["A", "B", "C", "D"].intersperse("Z"),
  ["A", "Z", "B", "Z", "C", "Z", "D"]
);
pub fn intersperse_clone(&self, separator: T) -> StaticVec<T, { N * 2 }> where T: Clone
A version of intersperse for scenarios where T does not derive Copy but does implement Clone.
Due to needing to call clone() through each individual element of self and also on separator, this function is less efficient than intersperse, so intersperse should be preferred whenever possible.
pub fn from_vec(vec: Vec<T>) -> Self
This is supported on feature="std" only.
Returns a StaticVec containing the contents of a Vec instance. If the Vec has a length greater than the declared capacity of the resulting StaticVec, any contents after that point are ignored. Note that using this function consumes the source Vec.

pub fn into_vec(self) -> Vec<T>
This is supported on feature="std" only.
Returns a Vec containing the contents of the StaticVec instance. The returned Vec will initially have the same value for len and capacity as the source StaticVec. Note that using this function consumes the source StaticVec.
pub fn drain<R>(&mut self, range: R) -> Self where R: RangeBounds<usize>
Removes the specified range of elements from the StaticVec and returns them in a new one.

pub fn drain_iter<R>(&mut self, range: R) -> StaticVecDrain<T, N> where R: RangeBounds<usize>
Removes the specified range of elements from the StaticVec and returns them in a StaticVecDrain.
pub fn drain_filter<F>(&mut self, filter: F) -> Self where F: FnMut(&mut T) -> bool
Removes all elements in the StaticVec for which filter returns true and returns them in a new one.

pub fn retain<F>(&mut self, filter: F) where F: FnMut(&T) -> bool
Removes all elements in the StaticVec for which filter returns false.
pub fn truncate(&mut self, length: usize)
Shortens the StaticVec, keeping the first length elements and dropping the rest. Does nothing if length is greater than or equal to the current length of the StaticVec.
pub fn split_off(&mut self, at: usize) -> Self
Splits the StaticVec into two at the given index. The original StaticVec will contain elements 0..at, and the new one will contain elements at..self.len().
pub fn dedup_by<F>(&mut self, same_bucket: F) where F: FnMut(&mut T, &mut T) -> bool
Removes all but the first of consecutive elements in the StaticVec satisfying a given equality relation.

pub fn dedup(&mut self) where T: PartialEq
Removes consecutive repeated elements in the StaticVec according to the locally required PartialEq trait implementation for T.

pub fn dedup_by_key<F, K>(&mut self, key: F) where F: FnMut(&mut T) -> K, K: PartialEq<K>
Removes all but the first of consecutive elements in the StaticVec that resolve to the same key.
pub fn difference<const N2: usize>(&self, other: &StaticVec<T, N2>) -> Self where T: Clone + PartialEq
Returns a new StaticVec representing the difference of self and other (that is, all items present in self, but not present in other.)
The N2 parameter does not need to be provided explicitly, and can be inferred from other itself.
Locally requires that T implements Clone to avoid soundness issues while accommodating for more types than Copy would appropriately for this function, and PartialEq to make the item comparisons possible.
Example usage:
assert_eq!(
  staticvec![4, 5, 6, 7].difference(&staticvec![1, 2, 3, 7]),
  [4, 5, 6]
);
pub fn symmetric_difference<const N2: usize>(&self, other: &StaticVec<T, N2>) -> StaticVec<T, { N + N2 }> where T: Clone + PartialEq
Returns a new StaticVec representing the symmetric difference of self and other (that is, all items present in at least one of self or other, but not present in both.)
The N2 parameter does not need to be provided explicitly, and can be inferred from other itself.
Locally requires that T implements Clone to avoid soundness issues while accommodating for more types than Copy would appropriately for this function, and PartialEq to make the item comparisons possible.
Example usage:
assert_eq!(
  staticvec![1, 2, 3].symmetric_difference(&staticvec![3, 4, 5]),
  [1, 2, 4, 5]
);
pub fn intersection<const N2: usize>(&self, other: &StaticVec<T, N2>) -> Self where T: Clone + PartialEq
Returns a new StaticVec representing the intersection of self and other (that is, all items present in both self and other.)
The N2 parameter does not need to be provided explicitly, and can be inferred from other itself.
Locally requires that T implements Clone to avoid soundness issues while accommodating for more types than Copy would appropriately for this function, and PartialEq to make the item comparisons possible.
Example usage:
assert_eq!(
  staticvec![4, 5, 6, 7].intersection(&staticvec![1, 2, 3, 7, 4]),
  [4, 7],
);
pub fn union<const N2: usize>(&self, other: &StaticVec<T, N2>) -> StaticVec<T, { N + N2 }> where T: Clone + PartialEq
Returns a new StaticVec representing the union of self and other (that is, the full contents of both self and other, minus any duplicates.)
The N2 parameter does not need to be provided explicitly, and can be inferred from other itself.
Locally requires that T implements Clone to avoid soundness issues while accommodating for more types than Copy would appropriately for this function, and PartialEq to make the item comparisons possible.
Example usage:
assert_eq!(
  staticvec![1, 2, 3].union(&staticvec![4, 2, 3, 4]),
  [1, 2, 3, 4],
);
pub const fn triple(&self) -> (*const T, usize, usize)
A concept borrowed from the widely-used SmallVec crate, this function returns a tuple consisting of a constant pointer to the first element of the StaticVec, the length of the StaticVec, and the capacity of the StaticVec.

pub const fn triple_mut(&mut self) -> (*mut T, usize, usize)
A mutable version of triple. This implementation differs from the one found in SmallVec in that it only provides the first element of the StaticVec as a mutable pointer, not also the length as a mutable reference.
pub fn added(&self, other: &Self) -> Self where T: Copy + Add<Output = T>
Linearly adds (in a mathematical sense) the contents of two same-capacity StaticVecs and returns the results in a new one of equal capacity.
Locally requires that T implements Copy to allow for an efficient implementation, and Add to make it possible to add the elements.
For both performance and safety reasons, this function requires that both self and other are at full capacity, and will panic if that is not the case (that is, if self.is_full() && other.is_full() is not equal to true.)
Example usage:
const A: StaticVec<f64, 4> = staticvec![4.0, 5.0, 6.0, 7.0];
const B: StaticVec<f64, 4> = staticvec![2.0, 3.0, 4.0, 5.0];
assert_eq!(A.added(&B), [6.0, 8.0, 10.0, 12.0]);
pub fn subtracted(&self, other: &Self) -> Self where T: Copy + Sub<Output = T>
Linearly subtracts (in a mathematical sense) the contents of two same-capacity StaticVecs and returns the results in a new one of equal capacity.
Locally requires that T implements Copy to allow for an efficient implementation, and Sub to make it possible to subtract the elements.
For both performance and safety reasons, this function requires that both self and other are at full capacity, and will panic if that is not the case (that is, if self.is_full() && other.is_full() is not equal to true.)
Example usage:
const A: StaticVec<f64, 4> = staticvec![4.0, 5.0, 6.0, 7.0];
const B: StaticVec<f64, 4> = staticvec![2.0, 3.0, 4.0, 5.0];
assert_eq!(A.subtracted(&B), [2.0, 2.0, 2.0, 2.0]);
pub fn multiplied(&self, other: &Self) -> Self where T: Copy + Mul<Output = T>
Linearly multiplies (in a mathematical sense) the contents of two same-capacity StaticVecs and returns the results in a new one of equal capacity.
Locally requires that T implements Copy to allow for an efficient implementation, and Mul to make it possible to multiply the elements.
For both performance and safety reasons, this function requires that both self and other are at full capacity, and will panic if that is not the case (that is, if self.is_full() && other.is_full() is not equal to true.)
Example usage:
const A: StaticVec<f64, 4> = staticvec![4.0, 5.0, 6.0, 7.0];
const B: StaticVec<f64, 4> = staticvec![2.0, 3.0, 4.0, 5.0];
assert_eq!(A.multiplied(&B), [8.0, 15.0, 24.0, 35.0]);
pub fn divided(&self, other: &Self) -> Self where T: Copy + Div<Output = T>
Linearly divides (in a mathematical sense) the contents of two same-capacity StaticVecs and returns the results in a new one of equal capacity.
Locally requires that T implements Copy to allow for an efficient implementation, and Div to make it possible to divide the elements.
For both performance and safety reasons, this function requires that both self and other are at full capacity, and will panic if that is not the case (that is, if self.is_full() && other.is_full() is not equal to true.)
Example usage:
const A: StaticVec<f64, 4> = staticvec![4.0, 5.0, 6.0, 7.0];
const B: StaticVec<f64, 4> = staticvec![2.0, 3.0, 4.0, 5.0];
assert_eq!(A.divided(&B), [2.0, 1.6666666666666667, 1.5, 1.4]);
Trait Implementations
impl<T, const N: usize> AsMut<[T]> for StaticVec<T, N>
impl<T, const N: usize> AsRef<[T]> for StaticVec<T, N>
impl<T, const N: usize> Borrow<[T]> for StaticVec<T, N>
impl<T, const N: usize> BorrowMut<[T]> for StaticVec<T, N>
fn borrow_mut(&mut self) -> &mut [T]
impl<const N: usize> BufRead for StaticVec<u8, N>
fn fill_buf(&mut self) -> Result<&[u8]>
fn consume(&mut self, amt: usize)
fn read_until(&mut self, byte: u8, buf: &mut Vec<u8>) -> Result<usize, Error>
fn read_line(&mut self, buf: &mut String) -> Result<usize, Error>
fn split(self, byte: u8) -> Split<Self>
fn lines(self) -> Lines<Self>
impl<T: Clone, const N: usize> Clone for StaticVec<T, N>
default fn clone(&self) -> Self
default fn clone_from(&mut self, other: &Self)
impl<T: Copy, const N: usize> Clone for StaticVec<T, N>
fn clone(&self) -> Self
fn clone_from(&mut self, rhs: &Self)
impl<T: Debug, const N: usize> Debug for StaticVec<T, N>
impl<T, const N: usize> Default for StaticVec<T, N>
impl<T, const N: usize> Deref for StaticVec<T, N>
impl<T, const N: usize> DerefMut for StaticVec<T, N>
impl<T, const N: usize> Drop for StaticVec<T, N>
impl<T: Eq, const N: usize> Eq for StaticVec<T, N>
impl<'a, T: 'a + Copy, const N: usize> Extend<&'a T> for StaticVec<T, N>
fn extend<I: IntoIterator<Item = &'a T>>(&mut self, iter: I)
impl<T, const N: usize> Extend<T> for StaticVec<T, N>
fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I)
impl<'_, T: Copy, const N: usize> From<&'_ [T; N]> for StaticVec<T, N>
[src]
fn from(values: &[T; N]) -> Self
[src]
Creates a new StaticVec instance from the contents of values
, using
new_from_slice
internally.
impl<'_, T: Copy, const N1: usize, const N2: usize> From<&'_ [T; N1]> for StaticVec<T, N2>
[src]
default fn from(values: &[T; N1]) -> Self
[src]
Creates a new StaticVec instance from the contents of values
, using
new_from_slice
internally.
impl<'_, T: Copy, const N: usize> From<&'_ [T]> for StaticVec<T, N>
[src]
fn from(values: &[T]) -> Self
[src]
Creates a new StaticVec instance from the contents of values
, using
new_from_slice
internally.
impl<'_, T: Copy, const N: usize> From<&'_ mut [T; N]> for StaticVec<T, N>
[src]
fn from(values: &mut [T; N]) -> Self
[src]
Creates a new StaticVec instance from the contents of values
, using
new_from_slice
internally.
impl<'_, T: Copy, const N1: usize, const N2: usize> From<&'_ mut [T; N1]> for StaticVec<T, N2>
[src]
default fn from(values: &mut [T; N1]) -> Self
[src]
Creates a new StaticVec instance from the contents of values
, using
new_from_slice
internally.
impl<'_, T: Copy, const N: usize> From<&'_ mut [T]> for StaticVec<T, N>
[src]
fn from(values: &mut [T]) -> Self
[src]
Creates a new StaticVec instance from the contents of values, using new_from_slice internally.
impl<T, const N: usize> From<[T; N]> for StaticVec<T, N>
[src]
impl<T, const N1: usize, const N2: usize> From<[T; N1]> for StaticVec<T, N2>
[src]
default fn from(values: [T; N1]) -> Self
[src]
Creates a new StaticVec instance from the contents of values, using new_from_array internally.
impl<T, const N: usize> From<Vec<T>> for StaticVec<T, N>
[src]
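A minimal sketch of the From&lt;Vec&lt;T&gt;&gt; conversion, assuming a source Vec whose length does not exceed the declared capacity:

```rust
use staticvec::StaticVec;

fn main() {
  // The target capacity N comes from the type annotation.
  let v: StaticVec<i32, 3> = StaticVec::from(vec![1, 2, 3]);
  assert_eq!(v, [1, 2, 3]);
}
```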
impl<'a, T: 'a + Copy, const N: usize> FromIterator<&'a T> for StaticVec<T, N>
[src]
fn from_iter<I: IntoIterator<Item = &'a T>>(iter: I) -> Self
[src]
impl<T, const N: usize> FromIterator<T> for StaticVec<T, N>
[src]
fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self
[src]
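The two FromIterator impls make collect work both by value and, for Copy element types, by reference. A sketch:

```rust
use staticvec::StaticVec;

fn main() {
  // FromIterator<T>: collect owned elements.
  let v: StaticVec<i32, 8> = (0..4).collect();
  assert_eq!(v, [0, 1, 2, 3]);

  // FromIterator<&'a T>: collect from an iterator over references when T: Copy.
  let src = [10, 20, 30];
  let v2: StaticVec<i32, 8> = src.iter().collect();
  assert_eq!(v2, [10, 20, 30]);
}
```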
impl<T: Hash, const N: usize> Hash for StaticVec<T, N>
[src]
fn hash<H: Hasher>(&self, state: &mut H)
[src]
fn hash_slice<H>(data: &[Self], state: &mut H) where
H: Hasher,
1.3.0[src]
impl<T, const N: usize> Index<Range<usize>> for StaticVec<T, N>
[src]
type Output = [T]
The returned type after indexing.
fn index(&self, index: Range<usize>) -> &Self::Output
[src]
Asserts that the lower bound of index is less than its upper bound, and that its upper bound is less than or equal to the current length of the StaticVec, and if so returns a constant reference to a slice of elements index.start..index.end.
impl<T, const N: usize> Index<RangeFrom<usize>> for StaticVec<T, N>
[src]
type Output = [T]
The returned type after indexing.
fn index(&self, index: RangeFrom<usize>) -> &Self::Output
[src]
Asserts that the lower bound of index is less than or equal to the current length of the StaticVec, and if so returns a constant reference to a slice of elements index.start..self.length.
impl<T, const N: usize> Index<RangeFull> for StaticVec<T, N>
[src]
type Output = [T]
The returned type after indexing.
fn index(&self, _index: RangeFull) -> &Self::Output
[src]
Returns a constant reference to a slice consisting of 0..self.length
elements of the StaticVec, using as_slice internally.
impl<T, const N: usize> Index<RangeInclusive<usize>> for StaticVec<T, N>
[src]
type Output = [T]
The returned type after indexing.
fn index(&self, index: RangeInclusive<usize>) -> &Self::Output
[src]
Asserts that the lower bound of index is less than or equal to its upper bound, and that its upper bound is less than the current length of the StaticVec, and if so returns a constant reference to a slice of elements index.start()..=index.end().
impl<T, const N: usize> Index<RangeTo<usize>> for StaticVec<T, N>
[src]
type Output = [T]
The returned type after indexing.
fn index(&self, index: RangeTo<usize>) -> &Self::Output
[src]
Asserts that the upper bound of index is less than or equal to the current length of the StaticVec, and if so returns a constant reference to a slice of elements 0..index.end.
impl<T, const N: usize> Index<RangeToInclusive<usize>> for StaticVec<T, N>
[src]
type Output = [T]
The returned type after indexing.
fn index(&self, index: RangeToInclusive<usize>) -> &Self::Output
[src]
Asserts that the upper bound of index is less than the current length of the StaticVec, and if so returns a constant reference to a slice of elements 0..=index.end.
impl<T, const N: usize> Index<usize> for StaticVec<T, N>
[src]
type Output = T
The returned type after indexing.
fn index(&self, index: usize) -> &Self::Output
[src]
Asserts that index
is less than the current length of the StaticVec,
and if so returns the value at that position as a constant reference.
impl<T, const N: usize> IndexMut<Range<usize>> for StaticVec<T, N>
[src]
fn index_mut(&mut self, index: Range<usize>) -> &mut Self::Output
[src]
Asserts that the lower bound of index is less than its upper bound, and that its upper bound is less than or equal to the current length of the StaticVec, and if so returns a mutable reference to a slice of elements index.start..index.end.
impl<T, const N: usize> IndexMut<RangeFrom<usize>> for StaticVec<T, N>
[src]
fn index_mut(&mut self, index: RangeFrom<usize>) -> &mut Self::Output
[src]
Asserts that the lower bound of index is less than or equal to the current length of the StaticVec, and if so returns a mutable reference to a slice of elements index.start..self.length.
impl<T, const N: usize> IndexMut<RangeFull> for StaticVec<T, N>
[src]
fn index_mut(&mut self, _index: RangeFull) -> &mut Self::Output
[src]
Returns a mutable reference to a slice consisting of 0..self.length
elements of the StaticVec, using as_mut_slice internally.
impl<T, const N: usize> IndexMut<RangeInclusive<usize>> for StaticVec<T, N>
[src]
fn index_mut(&mut self, index: RangeInclusive<usize>) -> &mut Self::Output
[src]
Asserts that the lower bound of index is less than or equal to its upper bound, and that its upper bound is less than the current length of the StaticVec, and if so returns a mutable reference to a slice of elements index.start()..=index.end().
impl<T, const N: usize> IndexMut<RangeTo<usize>> for StaticVec<T, N>
[src]
fn index_mut(&mut self, index: RangeTo<usize>) -> &mut Self::Output
[src]
Asserts that the upper bound of index is less than or equal to the current length of the StaticVec, and if so returns a mutable reference to a slice of elements 0..index.end.
impl<T, const N: usize> IndexMut<RangeToInclusive<usize>> for StaticVec<T, N>
[src]
fn index_mut(&mut self, index: RangeToInclusive<usize>) -> &mut Self::Output
[src]
Asserts that the upper bound of index is less than the current length of the StaticVec, and if so returns a mutable reference to a slice of elements 0..=index.end.
impl<T, const N: usize> IndexMut<usize> for StaticVec<T, N>
[src]
fn index_mut(&mut self, index: usize) -> &mut Self::Output
[src]
Asserts that index
is less than the current length of the StaticVec,
and if so returns the value at that position as a mutable reference.
impl<T, const N: usize> Into<Vec<T>> for StaticVec<T, N>
[src]
impl<'a, T: 'a, const N: usize> IntoIterator for &'a StaticVec<T, N>
[src]
type IntoIter = StaticVecIterConst<'a, T, N>
Which kind of iterator are we turning this into?
type Item = &'a T
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter
[src]
Returns a StaticVecIterConst over the StaticVec's inhabited area.
impl<'a, T: 'a, const N: usize> IntoIterator for &'a mut StaticVec<T, N>
[src]
type IntoIter = StaticVecIterMut<'a, T, N>
Which kind of iterator are we turning this into?
type Item = &'a mut T
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter
[src]
Returns a StaticVecIterMut over the StaticVec's inhabited area.
impl<T, const N: usize> IntoIterator for StaticVec<T, N>
[src]
type IntoIter = StaticVecIntoIter<T, N>
Which kind of iterator are we turning this into?
type Item = T
The type of the elements being iterated over.
fn into_iter(self) -> Self::IntoIter
[src]
Returns a by-value StaticVecIntoIter over the StaticVec's inhabited area, which consumes the StaticVec.
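A sketch of the three IntoIterator forms, by reference and by value:

```rust
use staticvec::StaticVec;

fn main() {
  let v = StaticVec::<i32, 3>::new_from_array([1, 2, 3]);
  // By reference: yields &i32 without consuming the StaticVec.
  let sum: i32 = (&v).into_iter().sum();
  assert_eq!(sum, 6);
  // By value: consumes the StaticVec, yielding each element as T.
  let doubled: StaticVec<i32, 3> = v.into_iter().map(|x| x * 2).collect();
  assert_eq!(doubled, [2, 4, 6]);
}
```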
impl<T: Ord, const N: usize> Ord for StaticVec<T, N>
[src]
fn cmp(&self, other: &Self) -> Ordering
[src]
fn max(self, other: Self) -> Self
1.21.0[src]
fn min(self, other: Self) -> Self
1.21.0[src]
fn clamp(self, min: Self, max: Self) -> Self
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<&'_ [T1; N1]> for StaticVec<T2, N2>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N: usize> PartialEq<&'_ [T1]> for StaticVec<T2, N>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<&'_ StaticVec<T1, N1>> for StaticVec<T2, N2>
[src]
fn eq(&self, other: &&StaticVec<T1, N1>) -> bool
[src]
fn ne(&self, other: &&StaticVec<T1, N1>) -> bool
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<&'_ mut [T1; N1]> for StaticVec<T2, N2>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N: usize> PartialEq<&'_ mut [T1]> for StaticVec<T2, N>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<&'_ mut StaticVec<T1, N1>> for StaticVec<T2, N2>
[src]
fn eq(&self, other: &&mut StaticVec<T1, N1>) -> bool
[src]
fn ne(&self, other: &&mut StaticVec<T1, N1>) -> bool
[src]
impl<T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<[T1; N1]> for StaticVec<T2, N2>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<[T1; N1]> for &'_ StaticVec<T2, N2>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<[T1; N1]> for &'_ mut StaticVec<T2, N2>
[src]
impl<T1, T2: PartialEq<T1>, const N: usize> PartialEq<[T1]> for StaticVec<T2, N>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N: usize> PartialEq<[T1]> for &'_ StaticVec<T2, N>
[src]
impl<'_, T1, T2: PartialEq<T1>, const N: usize> PartialEq<[T1]> for &'_ mut StaticVec<T2, N>
[src]
impl<T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<StaticVec<T1, N1>> for StaticVec<T2, N2>
[src]
fn eq(&self, other: &StaticVec<T1, N1>) -> bool
[src]
fn ne(&self, other: &StaticVec<T1, N1>) -> bool
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<StaticVec<T1, N1>> for &'_ StaticVec<T2, N2>
[src]
fn eq(&self, other: &StaticVec<T1, N1>) -> bool
[src]
fn ne(&self, other: &StaticVec<T1, N1>) -> bool
[src]
impl<'_, T1, T2: PartialEq<T1>, const N1: usize, const N2: usize> PartialEq<StaticVec<T1, N1>> for &'_ mut StaticVec<T2, N2>
[src]
fn eq(&self, other: &StaticVec<T1, N1>) -> bool
[src]
fn ne(&self, other: &StaticVec<T1, N1>) -> bool
[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<&'_ [T1; N1]> for StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &&[T1; N1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N: usize> PartialOrd<&'_ [T1]> for StaticVec<T2, N>
[src]
fn partial_cmp(&self, other: &&[T1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<&'_ StaticVec<T1, N1>> for StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &&StaticVec<T1, N1>) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<&'_ mut [T1; N1]> for StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &&mut [T1; N1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N: usize> PartialOrd<&'_ mut [T1]> for StaticVec<T2, N>
[src]
fn partial_cmp(&self, other: &&mut [T1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<&'_ mut StaticVec<T1, N1>> for StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &&mut StaticVec<T1, N1>) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<[T1; N1]> for StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &[T1; N1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<[T1; N1]> for &'_ StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &[T1; N1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<[T1; N1]> for &'_ mut StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &[T1; N1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<T1, T2: PartialOrd<T1>, const N: usize> PartialOrd<[T1]> for StaticVec<T2, N>
[src]
fn partial_cmp(&self, other: &[T1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N: usize> PartialOrd<[T1]> for &'_ StaticVec<T2, N>
[src]
fn partial_cmp(&self, other: &[T1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N: usize> PartialOrd<[T1]> for &'_ mut StaticVec<T2, N>
[src]
fn partial_cmp(&self, other: &[T1]) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<StaticVec<T1, N1>> for StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &StaticVec<T1, N1>) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<StaticVec<T1, N1>> for &'_ StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &StaticVec<T1, N1>) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<'_, T1, T2: PartialOrd<T1>, const N1: usize, const N2: usize> PartialOrd<StaticVec<T1, N1>> for &'_ mut StaticVec<T2, N2>
[src]
fn partial_cmp(&self, other: &StaticVec<T1, N1>) -> Option<Ordering>
[src]
#[must_use]
fn lt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn le(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn gt(&self, other: &Rhs) -> bool
1.0.0[src]
#[must_use]
fn ge(&self, other: &Rhs) -> bool
1.0.0[src]
impl<const N: usize> Read for StaticVec<u8, N>
[src]
Read from a StaticVec. This implementation operates by copying bytes into the destination buffers, then shifting the remaining bytes over.
unsafe fn initializer(&self) -> Initializer
[src]
fn read(&mut self, buf: &mut [u8]) -> Result<usize>
[src]
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize>
[src]
fn read_to_string(&mut self, buf: &mut String) -> Result<usize>
[src]
fn read_exact(&mut self, buf: &mut [u8]) -> Result<()>
[src]
fn read_vectored(&mut self, bufs: &mut [IoSliceMut]) -> Result<usize>
[src]
fn by_ref(&mut self) -> &mut Self
1.0.0[src]
fn bytes(self) -> Bytes<Self>
1.0.0[src]
fn chain<R>(self, next: R) -> Chain<Self, R> where
R: Read,
1.0.0[src]
fn take(self, limit: u64) -> Take<Self>
1.0.0[src]
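A minimal sketch of the Read behavior described above, under the assumption that reading drains bytes from the front of the StaticVec and shifts the remainder over:

```rust
use staticvec::StaticVec;
use std::io::Read;

fn main() {
  let mut v = StaticVec::<u8, 4>::new_from_array([1, 2, 3, 4]);
  let mut buf = [0u8; 2];
  // Copies the first two bytes out, then shifts the remaining bytes over.
  v.read_exact(&mut buf).unwrap();
  assert_eq!(buf, [1, 2]);
  assert_eq!(v, [3, 4]);
}
```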
impl<const N: usize> Write for StaticVec<u8, N>
[src]
fn write(&mut self, buf: &[u8]) -> Result<usize>
[src]
fn write_vectored(&mut self, bufs: &[IoSlice]) -> Result<usize>
[src]
fn write_all(&mut self, buf: &[u8]) -> Result<()>
[src]
fn flush(&mut self) -> Result<()>
[src]
fn write_fmt(&mut self, fmt: Arguments) -> Result<(), Error>
1.0.0[src]
fn by_ref(&mut self) -> &mut Self
1.0.0[src]
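And a matching sketch of the Write impl, assuming writes append to the inhabited area within the remaining capacity:

```rust
use staticvec::StaticVec;
use std::io::Write;

fn main() {
  let mut v = StaticVec::<u8, 8>::new();
  v.write_all(b"abc").unwrap();
  assert_eq!(v, *b"abc");
}
```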
Auto Trait Implementations
impl<const N: usize, T> RefUnwindSafe for StaticVec<T, N> where
T: RefUnwindSafe,
impl<const N: usize, T> Send for StaticVec<T, N> where
T: Send,
impl<const N: usize, T> Sync for StaticVec<T, N> where
T: Sync,
impl<const N: usize, T> Unpin for StaticVec<T, N> where
T: Unpin,
impl<const N: usize, T> UnwindSafe for StaticVec<T, N> where
T: UnwindSafe,
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized,
[src]
impl<T> Borrow<T> for T where
T: ?Sized,
[src]
impl<T> BorrowMut<T> for T where
T: ?Sized,
[src]
fn borrow_mut(&mut self) -> &mut T
[src]
impl<T> From<T> for T
[src]
impl<T, U> Into<U> for T where
U: From<T>,
[src]
impl<I> IntoIterator for I where
I: Iterator,
[src]
type Item = <I as Iterator>::Item
The type of the elements being iterated over.
type IntoIter = I
Which kind of iterator are we turning this into?
fn into_iter(self) -> I
[src]
impl<T> ToOwned for T where
T: Clone,
[src]
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T
[src]
fn clone_into(&self, target: &mut T)
[src]
impl<T, U> TryFrom<U> for T where
U: Into<T>,
[src]
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
[src]
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
[src]