Tensor

Struct Tensor 

Source
pub struct Tensor {
    pub buffer: Buffer<f64>,
    /* private fields */
}

An N-dimensional tensor backed by a Buffer<f64>.

Supports element-wise arithmetic, matrix multiplication (2-D), and numerically-stable reductions via BinnedAccumulator summation.

Fields§

§buffer: Buffer<f64>

Implementations§

Source§

impl Tensor

Source

pub fn zeros(shape: &[usize]) -> Self

Create a tensor filled with zeros.

Source

pub fn ones(shape: &[usize]) -> Self

Create a tensor filled with ones.

Source

pub fn randn(shape: &[usize], rng: &mut Rng) -> Self

Create a tensor filled with samples from the standard normal distribution, drawn deterministically from rng.

Source

pub fn from_vec(data: Vec<f64>, shape: &[usize]) -> Result<Self, RuntimeError>

Create a tensor from raw data and a shape. Returns an error if the number of elements does not match the shape.

Source

pub fn shape(&self) -> &[usize]

The shape of this tensor.

Source

pub fn ndim(&self) -> usize

Number of dimensions.

Source

pub fn len(&self) -> usize

Total number of elements.

Source

pub fn is_empty(&self) -> bool

Whether the tensor has zero elements.

Source

pub fn is_contiguous(&self) -> bool

Whether this tensor is contiguous in memory (row-major, no offset).

Source

pub fn slice(&self, ranges: &[(usize, usize)]) -> Result<Tensor, RuntimeError>

Create a zero-copy slice (view) of this tensor. ranges contains (start, end) for each dimension.

Source

pub fn to_contiguous(&self) -> Tensor

Materialize a contiguous copy if this tensor is non-contiguous.

Source

pub fn broadcast_to( &self, target_shape: &[usize], ) -> Result<Tensor, RuntimeError>

Create a broadcast view of this tensor to target_shape. Uses stride=0 for dimensions that need broadcasting (size 1 -> target size).
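Under stated assumptions (row-major strides, right-aligned shape matching as in NumPy-style broadcasting), the stride-0 mapping can be sketched in plain Rust — illustrative only, not this crate's implementation:

```rust
// Illustrative sketch of broadcast stride computation (not the crate's code).
// Source dims are right-aligned against the target shape; a size-1 source
// dim (or a missing leading dim) gets stride 0 so reads repeat one element.
fn broadcast_strides(
    src_shape: &[usize],
    src_strides: &[usize],
    target: &[usize],
) -> Option<Vec<usize>> {
    let pad = target.len().checked_sub(src_shape.len())?;
    let mut out = vec![0usize; target.len()]; // padded leading dims: stride 0
    for i in 0..src_shape.len() {
        let (s, t) = (src_shape[i], target[pad + i]);
        out[pad + i] = match (s, t) {
            _ if s == t => src_strides[i], // dimension carried over unchanged
            (1, _) => 0,                   // broadcast: stride 0 repeats the value
            _ => return None,              // incompatible shapes
        };
    }
    Some(out)
}
```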

Source

pub fn get(&self, indices: &[usize]) -> Result<f64, RuntimeError>

Read the element at the given multi-dimensional index.

Source

pub fn set(&mut self, indices: &[usize], val: f64) -> Result<(), RuntimeError>

Write the element at the given multi-dimensional index.

Source

pub fn to_vec(&self) -> Vec<f64>

Extract the raw data as a Vec<f64>, respecting strides and offset.

Source

pub fn reshape(&self, new_shape: &[usize]) -> Result<Tensor, RuntimeError>

Reshape to new_shape. The new shape must have the same total number of elements. The returned tensor shares the underlying buffer.

Source

pub fn add(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise addition (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn sub(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise subtraction (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn mul_elem(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise (Hadamard) multiplication (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn div_elem(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise division (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn fused_mul_add( &self, b: &Tensor, c: &Tensor, ) -> Result<Tensor, RuntimeError>

Fused multiply-add: self * b + c element-wise in a single pass.

Eliminates the intermediate tensor that separate mul + add would create. Uses software FMA (a * b + c with two roundings, not hardware FMA) to preserve bit-identity with the non-fused path.
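The difference between the two-rounding path and a true fused path is observable in std Rust, where f64::mul_add performs a single-rounding FMA:

```rust
// Two roundings (round a*b, then round the addition of c) versus one
// fused rounding. The inputs below are chosen so the exact product
// 1 + 2^-26 + 2^-54 loses its low bit when rounded on its own.
fn software_fma(a: f64, b: f64, c: f64) -> f64 {
    a * b + c // rounds twice — bit-identical to the non-fused path
}

fn single_rounding_fma(a: f64, b: f64, c: f64) -> f64 {
    a.mul_add(b, c) // std's fused multiply-add: one rounding
}
```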

Source

pub fn elem_pow(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise power: a^b.

Source

pub fn elem_min(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise minimum.

Source

pub fn elem_max(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise maximum.

Source

pub fn elem_atan2(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise atan2(self, other).

Source

pub fn elem_hypot(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise hypot(self, other).

Source

pub fn map(&self, f: impl Fn(f64) -> f64) -> Tensor

Apply a unary function to every element, returning a new contiguous tensor.

Source

pub fn map_simd(&self, op: UnaryOp) -> Tensor

SIMD-accelerated unary map for known operations (sqrt, abs, neg, relu).

Uses AVX2 (4-wide f64) when available, scalar fallback otherwise. Bit-identical to map(f) for the supported operations.

Source

pub fn sum(&self) -> f64

Sum of all elements (binned accumulation — order-invariant, deterministic).

Source

pub fn binned_sum(&self) -> f64

Sum of all elements using BinnedAccumulator (order-invariant, deterministic).

Bit-identical results regardless of element ordering or reduction schedule.
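As a toy illustration of why accumulating in an exact domain yields order-invariant sums (this is not the crate's BinnedAccumulator — just a fixed-point sketch that shares the property):

```rust
// Toy order-invariant sum via exact fixed-point accumulation in i128.
// NOT the crate's BinnedAccumulator; it only illustrates the principle:
// integer addition is associative, so every reduction schedule and every
// element ordering produces the same bits (at the cost of quantizing
// inputs to a fixed number of fractional bits).
fn fixed_point_sum(xs: &[f64]) -> f64 {
    const SCALE: f64 = (1u64 << 40) as f64; // 40 fractional bits
    let total: i128 = xs.iter().map(|&x| (x * SCALE).round() as i128).sum();
    total as f64 / SCALE
}
```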

Source

pub fn dispatched_sum(&self, ctx: &ReductionContext) -> f64

Sum with dispatched strategy based on execution context.

Uses Kahan summation in serial mode and binned summation in parallel, @nogc, strict, and linalg modes.

Source

pub fn mean(&self) -> f64

Mean of all elements (binned sum / count).

Source

pub fn dispatched_mean(&self, ctx: &ReductionContext) -> f64

Mean with dispatched strategy based on execution context.

Source

pub fn sum_axis(&self, axis: usize) -> Result<Tensor, RuntimeError>

Sum along a specific axis, returning a tensor with that dimension reduced.

Supports N-D tensors. The reduced axis becomes size 1 in the output. Uses BinnedAccumulator for order-invariant, deterministic summation.

Examples:

  • 2D [M, N] with axis=0: result [1, N] (sum columns)
  • 2D [M, N] with axis=1: result [M, 1] (sum rows)
  • 3D [A, B, C] with axis=1: result [A, 1, C]
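For the 2-D cases above, the access pattern can be sketched on plain row-major data (illustrative; the crate's version handles N-D tensors and uses binned accumulation):

```rust
// 2-D axis sum matching the examples above, on row-major data.
// axis=0 sums columns (result length n); axis=1 sums rows (length m).
fn sum_axis_2d(data: &[f64], m: usize, n: usize, axis: usize) -> Vec<f64> {
    match axis {
        0 => (0..n)
            .map(|j| (0..m).map(|i| data[i * n + j]).sum()) // column sums
            .collect(),
        1 => (0..m)
            .map(|i| data[i * n..(i + 1) * n].iter().sum()) // row sums
            .collect(),
        _ => panic!("axis out of range"),
    }
}
```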
Source

pub fn neg(&self) -> Tensor

Negate every element, returning a new tensor.

Source

pub fn transpose(&self) -> Tensor

Transpose a tensor. For 2-D: swaps rows and columns (zero-copy view). For N-D: reverses all axes (zero-copy view).

Source

pub fn transpose_axes(&self, axes: &[usize]) -> Result<Tensor, RuntimeError>

Transpose with explicit axis permutation (N-D). Zero-copy view.

axes must be a permutation of [0, 1, ..., ndim-1].

Source

pub fn scalar_mul(&self, s: f64) -> Tensor

Multiply every element by a scalar, returning a new tensor.

Source

pub fn from_vec_unchecked(data: Vec<f64>, shape: &[usize]) -> Tensor

Create a tensor from raw data and shape. Panics if data.len() does not match the shape.

Source

pub fn add_unchecked(&self, other: &Tensor) -> Tensor

Element-wise addition. Panics on shape mismatch.

Source

pub fn sub_unchecked(&self, other: &Tensor) -> Tensor

Element-wise subtraction. Panics on shape mismatch.

Source

pub fn mul_elem_unchecked(&self, other: &Tensor) -> Tensor

Element-wise multiplication. Panics on shape mismatch.

Source

pub fn div_elem_unchecked(&self, other: &Tensor) -> Tensor

Element-wise division. Panics on shape mismatch.

Source

pub fn matmul_unchecked(&self, other: &Tensor) -> Tensor

Matrix multiplication. Panics on dimension mismatch.

Source

pub fn matmul(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Matrix multiplication for 2-D tensors.

self is (M, K), other is (K, N) => result is (M, N).

Source

pub fn bmm(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Batched matrix multiplication.

self is [..., M, K], other is [..., K, N] => result is [..., M, N]. The batch dimensions must be identical (no broadcast). For 2-D inputs, delegates to matmul.

Source

pub fn softmax(&self) -> Result<Tensor, RuntimeError>

Softmax along the last dimension (two-pass stable algorithm).

  • Pass 1: find the max per row (prevents overflow in exp)
  • Pass 2: compute exp(x - max), accumulate the sum, normalize

For a tensor of shape [..., N], softmax is applied independently to each length-N slice along the last axis.
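A minimal sketch of the two-pass algorithm for a single row, assuming naive f64 summation rather than the crate's accumulators:

```rust
// Two-pass stable softmax over one row. Shifting by the row max means
// exp never sees a large positive argument, so nothing overflows.
fn softmax_row(x: &[f64]) -> Vec<f64> {
    // Pass 1: row maximum.
    let max = x.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    // Pass 2: exponentiate the shifted values, then normalize by the sum.
    let exps: Vec<f64> = x.iter().map(|&v| (v - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}
```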

Source

pub fn layer_norm( &self, gamma: &Tensor, beta: &Tensor, eps: f64, ) -> Result<Tensor, RuntimeError>

Layer normalization over the last dimension.

For each length-D slice along the last axis:

  1. mean = Σx / D (BinnedAccumulator)
  2. var = Σ(x - mean)² / D (BinnedAccumulator)
  3. normalized = (x - mean) / √(var + eps)
  4. output = gamma * normalized + beta

gamma and beta are 1-D tensors of shape [D]. eps is a small constant (typically 1e-5).
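The four steps above, sketched for a single length-D slice with naive summation in place of BinnedAccumulator:

```rust
// Layer norm over one length-D slice, following the numbered steps.
fn layer_norm_row(x: &[f64], gamma: &[f64], beta: &[f64], eps: f64) -> Vec<f64> {
    let d = x.len() as f64;
    let mean = x.iter().sum::<f64>() / d;                            // 1. mean
    let var = x.iter().map(|v| (v - mean).powi(2)).sum::<f64>() / d; // 2. variance
    x.iter()
        .zip(gamma.iter().zip(beta))
        .map(|(&v, (&g, &b))| g * (v - mean) / (var + eps).sqrt() + b) // 3 + 4
        .collect()
}
```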

Source

pub fn relu(&self) -> Tensor

ReLU activation: max(0, x) element-wise.
Source

pub fn sigmoid(&self) -> Tensor

Sigmoid activation: 1 / (1 + exp(-x)) element-wise.

Source

pub fn tanh_activation(&self) -> Tensor

Tanh activation element-wise.

Source

pub fn leaky_relu(&self, alpha: f64) -> Tensor

Leaky ReLU activation: max(alpha*x, x) element-wise.

Source

pub fn silu(&self) -> Tensor

SiLU (Swish) activation: x * sigmoid(x) element-wise.

Source

pub fn mish(&self) -> Tensor

Mish activation: x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x))).

Source

pub fn argmax(&self) -> usize

Argmax: index of the maximum element (first occurrence, deterministic).

Source

pub fn argmin(&self) -> usize

Argmin: index of the minimum element (first occurrence, deterministic).

Source

pub fn clamp(&self, min: f64, max: f64) -> Tensor

Clamp all elements to [min, max].

Source

pub fn one_hot(indices: &[usize], depth: usize) -> Result<Tensor, RuntimeError>

One-hot encoding: given a 1D tensor of integer indices and a depth, returns a 2D tensor of shape [len, depth].

Source

pub fn cat(tensors: &[&Tensor], axis: usize) -> Result<Tensor, RuntimeError>

Concatenate tensors along an existing axis.

Source

pub fn stack(tensors: &[&Tensor], axis: usize) -> Result<Tensor, RuntimeError>

Stack tensors along a new axis.

Source

pub fn topk(&self, k: usize) -> Result<(Tensor, Vec<usize>), RuntimeError>

Top-k values and indices (largest k values from flat data).

Source

pub fn gelu(&self) -> Tensor

GELU activation (approximate): x * 0.5 * (1 + tanh(√(2/π) * (x + 0.044715 * x³)))

Source

pub fn linear( &self, weight: &Tensor, bias: &Tensor, ) -> Result<Tensor, RuntimeError>

Linear layer: output = input @ weight^T + bias

self is [..., in_features], weight is [out_features, in_features], bias is [out_features]. Result is [..., out_features].

Source

pub fn conv1d( &self, filters: &Tensor, bias: &Tensor, ) -> Result<Tensor, RuntimeError>

1D convolution: signal [signal_len] * filters [out_ch, kernel_size] + bias

Returns [out_ch, signal_len - kernel_size + 1] (valid mode, stride=1).
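A single-channel sketch of valid-mode, stride-1 convolution (written in the cross-correlation form conventional in ML libraries; the crate's version adds the out_ch dimension):

```rust
// Valid-mode, stride-1 1-D "convolution" (cross-correlation) for one
// output channel. Output length is signal_len - kernel_size + 1.
fn conv1d_valid(signal: &[f64], kernel: &[f64], bias: f64) -> Vec<f64> {
    let out_len = signal.len() - kernel.len() + 1;
    (0..out_len)
        .map(|i| {
            kernel
                .iter()
                .enumerate()
                .map(|(k, &w)| signal[i + k] * w) // dot of kernel with window
                .sum::<f64>()
                + bias
        })
        .collect()
}
```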

Source

pub fn conv2d( &self, filters: &Tensor, bias: &Tensor, stride: usize, ) -> Result<Tensor, RuntimeError>

2D convolution — NCHW layout, valid mode, configurable stride.

§Arguments
  • self: [N, C_in, H, W] input tensor
  • filters: [C_out, C_in, kH, kW]
  • bias: [C_out]
  • stride: spatial stride (default 1)
§Returns

[N, C_out, H_out, W_out] where H_out = (H - kH) / stride + 1.

Uses BinnedAccumulatorF64 for every dot product — bit-identical results across all runs and hardware configurations.

Source

pub fn maxpool2d(&self, ph: usize, pw: usize) -> Result<Tensor, RuntimeError>

2D max-pooling — NCHW layout, non-overlapping windows.

  • self: [N, C, H, W]
  • ph, pw: pool height/width (stride = window size)

Returns [N, C, H/ph, W/pw].

Source

pub fn scaled_dot_product_attention( queries: &Tensor, keys: &Tensor, values: &Tensor, ) -> Result<Tensor, RuntimeError>

Scaled dot-product attention (single head).

  • queries is [..., T, d_k]
  • keys is [..., S, d_k]
  • values is [..., S, d_v]

Computes softmax(Q × Kᵀ / √d_k) × V. Returns [..., T, d_v].

Source

pub fn transpose_last_two(&self) -> Result<Tensor, RuntimeError>

Transpose the last two dimensions of a tensor.

[..., A, B] → [..., B, A]

Source

pub fn from_bytes( bytes: &[u8], shape: &[usize], dtype: &str, ) -> Result<Tensor, RuntimeError>

Create a tensor view from raw bytes — zero allocation.

Interprets bytes as a contiguous block of f64 (8 bytes each) or f32 (4 bytes each, promoted to f64) values and maps them into a Tensor with the given shape.

dtype must be "f64" or "f32".

For f64: bytes.len() must equal shape_numel * 8. For f32: bytes.len() must equal shape_numel * 4.

The returned tensor owns its buffer (copied from the raw bytes) but performs exactly one allocation for the data vector.
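Assuming little-endian encoding (the byte order is not stated above), the f64 decoding step can be sketched with std's from_le_bytes:

```rust
// Decode raw little-endian bytes into f64s, rejecting lengths that are
// not a whole number of 8-byte values. A sketch of the dtype "f64" path;
// the crate additionally validates against the shape and promotes f32.
fn f64s_from_le_bytes(bytes: &[u8]) -> Option<Vec<f64>> {
    if bytes.len() % 8 != 0 {
        return None; // length must be a multiple of 8
    }
    Some(
        bytes
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .collect(),
    )
}
```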

Source

pub fn split_heads(&self, num_heads: usize) -> Result<Tensor, RuntimeError>

Reshape a 3D tensor [batch, seq, model_dim] into 4D [batch, num_heads, seq, head_dim] by splitting the last dimension.

This is a zero-copy view — it only changes shape/strides metadata. model_dim must be divisible by num_heads.

Source

pub fn merge_heads(&self) -> Result<Tensor, RuntimeError>

Merge heads back: reshape 4D [batch, num_heads, seq, head_dim] into 3D [batch, seq, model_dim]. Materializes if non-contiguous.

Source

pub fn view_reshape(&self, new_shape: &[usize]) -> Result<Tensor, RuntimeError>

View-only reshape: reinterpret shape without copying. Only works on contiguous tensors. Falls back to copy if non-contiguous.

Source

pub fn argsort(&self) -> Tensor

Returns indices that would sort the flattened tensor in ascending order. Uses f64::total_cmp for deterministic ordering of NaN.
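f64::total_cmp is a std total order in which every NaN has a fixed position, so a sketch of this argsort is fully deterministic:

```rust
// Argsort of a flat slice using total_cmp: NaN sorts to a fixed place
// (positive NaN after +inf) instead of having unspecified order.
fn argsort(xs: &[f64]) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..xs.len()).collect();
    idx.sort_by(|&a, &b| xs[a].total_cmp(&xs[b])); // stable, total order
    idx
}
```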

Source

pub fn gather( &self, dim: usize, indices: &Tensor, ) -> Result<Tensor, RuntimeError>

Gather elements from the tensor along a dimension using an index tensor.

  • For 1D: result[i] = self[indices[i]]
  • For 2D dim=0: result[i][j] = self[indices[i][j]][j]
  • For 2D dim=1: result[i][j] = self[i][indices[i][j]]
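The 2-D dim=1 rule can be sketched on row-major data (the helper below is illustrative, not part of the crate API):

```rust
// gather along dim=1 for a rows x cols matrix in row-major layout:
// out[i][j] = data[i][indices[i][j]].
fn gather2d_dim1(
    data: &[f64],
    rows: usize,
    cols: usize,
    indices: &[Vec<usize>],
) -> Vec<Vec<f64>> {
    (0..rows)
        .map(|i| indices[i].iter().map(|&j| data[i * cols + j]).collect())
        .collect()
}
```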

Source

pub fn scatter( &self, dim: usize, indices: &Tensor, src: &Tensor, ) -> Result<Tensor, RuntimeError>

Scatter src values into a tensor of the given shape at indices along a dimension.

  • For 1D: result[indices[i]] = src[i]
  • For 2D dim=0: result[indices[i][j]][j] = src[i][j]
  • For 2D dim=1: result[i][indices[i][j]] = src[i][j]

Source

pub fn index_select( &self, dim: usize, indices: &Tensor, ) -> Result<Tensor, RuntimeError>

Select slices along a dimension by index.

  • For 2D dim=0: selects rows
  • For 2D dim=1: selects columns

Source

pub fn tensor_where( &self, condition: &Tensor, other: &Tensor, ) -> Result<Tensor, RuntimeError>

Element-wise conditional select. For each element, returns self[i] if condition[i] != 0.0, else other[i].

Source

pub fn any(&self) -> bool

Returns true if any element is non-zero.

Source

pub fn all(&self) -> bool

Returns true if all elements are non-zero.

Source

pub fn nonzero(&self) -> Tensor

Returns a 1-D tensor of flat indices where elements are non-zero.

Source

pub fn masked_fill( &self, mask: &Tensor, value: f64, ) -> Result<Tensor, RuntimeError>

Fill elements where mask is non-zero with value.

Source

pub fn mean_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Mean along an axis with optional keepdim.

Source

pub fn max_axis( &self, axis: usize, keepdim: bool, ) -> Result<(Tensor, Tensor), RuntimeError>

Max along an axis with optional keepdim. Returns (values, indices).

Source

pub fn min_axis( &self, axis: usize, keepdim: bool, ) -> Result<(Tensor, Tensor), RuntimeError>

Min along an axis with optional keepdim. Returns (values, indices).

Source

pub fn var_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Variance along an axis with optional keepdim.

Source

pub fn std_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Standard deviation along an axis with optional keepdim.

Source

pub fn prod_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Product along an axis with optional keepdim.

Source

pub fn sort_axis( &self, axis: usize, descending: bool, ) -> Result<Tensor, RuntimeError>

Sort along an axis (stable sort). Returns the sorted tensor. For N-D tensors, sorts slices along the specified axis.

Source

pub fn argsort_axis( &self, axis: usize, descending: bool, ) -> Result<Tensor, RuntimeError>

N-D argsort along an axis. Returns indices tensor.

Source

pub fn einsum( notation: &str, inputs: &[&Tensor], ) -> Result<Tensor, RuntimeError>

Einstein summation notation. Supports patterns like “ij,jk->ik” (matmul), “ii->i” (diagonal), “ij->ji” (transpose), “ijk,ikl->ijl” (batched matmul). Uses BinnedAccumulator for all reductions.

Source

pub fn unsqueeze(&self, dim: usize) -> Result<Tensor, RuntimeError>

Add a dimension of size 1 at position dim.

Source

pub fn squeeze(&self, dim: Option<usize>) -> Result<Tensor, RuntimeError>

Remove a dimension of size 1 at position dim. If dim is None, removes all dimensions of size 1.

Source

pub fn expand(&self, target_shape: &[usize]) -> Result<Tensor, RuntimeError>

Broadcast without copying. Returns a view with stride=0 for broadcasted dims. Same as broadcast_to but named for consistency with the gap-fix plan.

Source

pub fn flatten( &self, start_dim: usize, end_dim: usize, ) -> Result<Tensor, RuntimeError>

Flatten a range of dimensions [start_dim, end_dim] into a single dimension.

Source

pub fn chunk(&self, n: usize, dim: usize) -> Result<Vec<Tensor>, RuntimeError>

Split tensor into n roughly equal chunks along dimension dim.

Source

pub fn split( &self, sizes: &[usize], dim: usize, ) -> Result<Vec<Tensor>, RuntimeError>

Split tensor along dimension dim according to the given sizes.

Source

pub fn scale_add( &self, alpha: f64, other: &Tensor, beta: f64, ) -> Result<Tensor, RuntimeError>

Fused alpha * self + beta * other element-wise. Single pass, one allocation.

Critical for LSTM/GRU gates where f * c_prev + i * g would otherwise create 3 intermediate tensors.
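The fusion amounts to one zipped pass producing one output allocation; a scalar sketch (the crate's path may additionally be SIMD-accelerated):

```rust
// Fused alpha*a + beta*b: one pass over both inputs, one allocation,
// no intermediate scaled tensors.
fn scale_add(a: &[f64], alpha: f64, b: &[f64], beta: f64) -> Vec<f64> {
    a.iter().zip(b).map(|(&x, &y)| alpha * x + beta * y).collect()
}
```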

Source§

impl Tensor

Source

pub fn lu_decompose(&self) -> Result<(Tensor, Tensor, Vec<usize>), RuntimeError>

LU decomposition with partial pivoting. Returns (L, U, pivot_indices). Input must be square 2D.

Determinism contract: Pivot selection uses strict > comparison on absolute values. When two candidates have identical absolute values, the first (lowest row index) is chosen. This is deterministic given identical input bits.
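The stated pivot rule, sketched for one column: strict > on absolute values means ties keep the earliest (lowest) row index.

```rust
// Partial-pivot selection with the strict-> tie-breaking rule described
// above. Given identical input bits, the chosen index is deterministic.
fn select_pivot(col: &[f64]) -> usize {
    let mut best = 0;
    for (i, &v) in col.iter().enumerate().skip(1) {
        if v.abs() > col[best].abs() {
            best = i; // strictly larger only: ties keep the earlier row
        }
    }
    best
}
```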

Source

pub fn qr_decompose(&self) -> Result<(Tensor, Tensor), RuntimeError>

QR decomposition via Modified Gram-Schmidt. Returns (Q, R). Input must be 2D with rows >= cols.

Source

pub fn cholesky(&self) -> Result<Tensor, RuntimeError>

Cholesky decomposition: A = L * L^T. Input must be symmetric positive definite 2D.

Source

pub fn det(&self) -> Result<f64, RuntimeError>

Determinant via LU decomposition: product of U diagonal * parity.

Source

pub fn solve(&self, b: &Tensor) -> Result<Tensor, RuntimeError>

Solve Ax = b via LU decomposition. self = A (n x n), b = vector (n).

Source

pub fn lstsq(&self, b: &Tensor) -> Result<Tensor, RuntimeError>

Least squares solution: min ||Ax - b||_2 via QR decomposition. self = A (m x n, m >= n), b = vector (m).

Source

pub fn trace(&self) -> Result<f64, RuntimeError>

Matrix trace: sum of diagonal elements.

Source

pub fn norm_frobenius(&self) -> Result<f64, RuntimeError>

Frobenius norm: sqrt(sum(aij^2)).

Source

pub fn eigh(&self) -> Result<(Vec<f64>, Tensor), RuntimeError>

Eigenvalue decomposition for symmetric matrices (Jacobi method). Returns (eigenvalues sorted ascending, eigenvectors n x n). DETERMINISM: fixed row-major sweep order, smallest (i,j) tie-breaking.

Source

pub fn matrix_rank(&self) -> Result<usize, RuntimeError>

Matrix rank via SVD: count singular values > tolerance.

Source

pub fn kron(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Kronecker product: A ⊗ B.

Source

pub fn inverse(&self) -> Result<Tensor, RuntimeError>

Matrix inverse via LU decomposition + back-substitution.

Source

pub fn norm_1(&self) -> Result<f64, RuntimeError>

1-norm: maximum absolute column sum.

Source

pub fn norm_inf(&self) -> Result<f64, RuntimeError>

Infinity norm: maximum absolute row sum.

Source

pub fn cond(&self) -> Result<f64, RuntimeError>

Condition number via eigenvalue ratio. For symmetric: |lambda_max| / |lambda_min|. For general: sqrt(sigma_max/sigma_min).

Source

pub fn schur(&self) -> Result<(Tensor, Tensor), RuntimeError>

Real Schur decomposition: A = Q * T * Q^T. Uses Hessenberg reduction + implicit double-shift QR.

Source

pub fn svd(&self) -> Result<(Tensor, Vec<f64>, Tensor), RuntimeError>

Compute the Singular Value Decomposition: A = U @ diag(S) @ Vt. Returns (U, S, Vt) as (Tensor, Vec, Tensor).

Implementation: via eigendecomposition of A^T A (for V and S^2), then U = A V diag(1/s_i).

Determinism contract: All intermediate float reductions use BinnedAccumulatorF64. Iteration order is fixed row-major.

Source

pub fn svd_truncated( &self, k: usize, ) -> Result<(Tensor, Vec<f64>, Tensor), RuntimeError>

Truncated SVD — only the top k singular values/vectors. Returns (U_k, S_k, Vt_k) where U_k is m x k, Vt_k is k x n.

Source

pub fn pinv(&self) -> Result<Tensor, RuntimeError>

Compute the Moore-Penrose pseudoinverse via SVD. A+ = V @ diag(1/s_i) @ Ut (with default tolerance for near-zero singular values).

Source

pub fn pinv_with_tol(&self, tol: f64) -> Result<Tensor, RuntimeError>

Compute the Moore-Penrose pseudoinverse via SVD with explicit tolerance.

Source

pub fn matrix_exp(&self) -> Result<Tensor, RuntimeError>

Matrix exponential via scaling and squaring with Pade(13,13) approximation.

Trait Implementations§

Source§

impl Clone for Tensor

Source§

fn clone(&self) -> Tensor

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Tensor

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Display for Tensor

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

Auto Trait Implementations§

§

impl Freeze for Tensor

§

impl !RefUnwindSafe for Tensor

§

impl !Send for Tensor

§

impl !Sync for Tensor

§

impl Unpin for Tensor

§

impl UnsafeUnpin for Tensor

§

impl !UnwindSafe for Tensor

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T> ToString for T
where T: Display + ?Sized,

Source§

fn to_string(&self) -> String

Converts the given value to a String. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.