
Tensor

Struct Tensor 

Source
pub struct Tensor { /* private fields */ }

A tensor wrapping a libtorch C++ tensor.

Owns the underlying C++ handle. When dropped, the C++ tensor is freed immediately — including any GPU memory. This is the entire VRAM management story.

Operations are chainable and return Result<Tensor>:

let y = x.matmul(&w)?.add(&b)?.relu()?;

Implementations§

Source§

impl Tensor

Source

pub fn zeros(shape: &[i64], opts: TensorOptions) -> Result<Self>

Create a tensor filled with zeros.

let t = Tensor::zeros(&[2, 3], TensorOptions::default())?;
assert_eq!(t.shape(), vec![2, 3]);
Source

pub fn ones(shape: &[i64], opts: TensorOptions) -> Result<Self>

Create a tensor filled with ones. Like torch.ones().

let t = Tensor::ones(&[2, 3], TensorOptions::default())?;
Source

pub fn from_f32(data: &[f32], shape: &[i64], device: Device) -> Result<Self>

Create a tensor from f32 data.

let t = Tensor::from_f32(&[1.0, 2.0, 3.0, 4.0], &[2, 2], Device::CPU)?;
assert_eq!(t.shape(), vec![2, 2]);
Source

pub fn from_f64(data: &[f64], shape: &[i64], device: Device) -> Result<Self>

Create a Float64 tensor from f64 data. Use when full double precision is needed (e.g. loss accumulation, high-precision metrics).

Source

pub fn from_i64(data: &[i64], shape: &[i64], device: Device) -> Result<Self>

Create an Int64 tensor from i64 data. Commonly used for class labels, token indices, and any integer indexing (e.g. cross_entropy_loss targets).
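A typical use, sketched from the signature above (the `DType::Int64` variant name and a `?`-friendly `Result` context are assumed):

```rust
// Class labels for a 4-sample batch — ready as cross_entropy_loss targets.
let labels = Tensor::from_i64(&[0, 2, 1, 2], &[4], Device::CPU)?;
assert_eq!(labels.dtype(), DType::Int64); // variant name assumed
assert_eq!(labels.shape(), vec![4]);
```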

Source

pub fn ndim(&self) -> usize

Number of dimensions (rank). Like tensor.ndim in PyTorch.

Source

pub fn shape(&self) -> Vec<i64>

Shape of each dimension as a Vec. Like tensor.shape in PyTorch.

Source

pub fn numel(&self) -> i64

Total number of elements (product of all dimensions). Like tensor.numel().

Source

pub fn dtype(&self) -> DType

Element data type of this tensor. Like tensor.dtype in PyTorch.

Source

pub fn device(&self) -> Device

Device where this tensor’s data resides (CPU or CUDA). Like tensor.device.

Source

pub fn to_f32_vec(&self) -> Result<Vec<f32>>

Copy tensor data to a Vec<f32>. Transparently moves to CPU first if the tensor lives on CUDA. Non-f32 dtypes are cast via libtorch.

Source

pub fn to_f64_vec(&self) -> Result<Vec<f64>>

Copy tensor data to a Vec<f64>. Moves to CPU if needed. Float64 tensors are copied at full precision; all other dtypes are converted via f32, which is lossless for f16/bf16 and caps the remaining dtypes at f32 precision.

Source

pub fn to_i64_vec(&self) -> Result<Vec<i64>>

Copy tensor data to a Vec<i64>. Moves to CPU if needed. Intended for Int64 tensors (indices, labels).

Source

pub fn item(&self) -> Result<f64>

Extract a scalar value as f64. Like PyTorch’s .item().

The tensor must contain exactly one element (any shape is fine, e.g. [1], [1, 1], or []). Returns an error otherwise. Preserves full precision for Float64 tensors.

let loss_val = loss_tensor.item()?;
println!("loss: {:.4}", loss_val);
Source

pub fn add(&self, other: &Tensor) -> Result<Tensor>

Element-wise addition. Shapes must be broadcastable.

let c = a.add(&b)?; // [2, 3] + [2, 3] → [2, 3]
Source

pub fn sub(&self, other: &Tensor) -> Result<Tensor>

Element-wise subtraction. Shapes must be broadcastable.

Source

pub fn mul(&self, other: &Tensor) -> Result<Tensor>

Element-wise (Hadamard) multiplication. Shapes must be broadcastable. For matrix multiplication, use matmul.
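A sketch contrasting the two products (assumed to run in a `Result`-returning context):

```rust
let a = Tensor::from_f32(&[1.0, 2.0, 3.0, 4.0], &[2, 2], Device::CPU)?;
let b = Tensor::ones(&[2, 2], TensorOptions::default())?;
let h = a.mul(&b)?;    // element-wise: each a[i][j] * b[i][j] → [2, 2]
let m = a.matmul(&b)?; // matrix product: rows of a · columns of b → [2, 2]
```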

Source

pub fn matmul(&self, other: &Tensor) -> Result<Tensor>

Matrix multiplication.

// [batch, M, K] @ [batch, K, N] → [batch, M, N]
let c = a.matmul(&b)?;
Source

pub fn mul_scalar(&self, scalar: f64) -> Result<Tensor>

Multiply every element by a scalar. Like tensor * 0.5 in PyTorch.

Source

pub fn relu(&self) -> Result<Tensor>

ReLU activation: max(0, x).

Source

pub fn sigmoid(&self) -> Result<Tensor>

Sigmoid activation: 1 / (1 + exp(-x)).

Source

pub fn sum(&self) -> Result<Tensor>

Sum of all elements (scalar result).

Source

pub fn mean(&self) -> Result<Tensor>

Mean of all elements (scalar result).

Source

pub fn flatten(&self, start_dim: i32, end_dim: i32) -> Result<Tensor>

Flatten dimensions [start_dim..=end_dim] into one.

Source

pub fn div(&self, other: &Tensor) -> Result<Tensor>

Element-wise division.

Source

pub fn neg(&self) -> Result<Tensor>

Negate every element.

Source

pub fn add_scalar(&self, scalar: f64) -> Result<Tensor>

Add a scalar to every element.

Source

pub fn div_scalar(&self, scalar: f64) -> Result<Tensor>

Divide every element by a scalar.

Source

pub fn tanh(&self) -> Result<Tensor>

Tanh activation: element-wise hyperbolic tangent.

Source

pub fn exp(&self) -> Result<Tensor>

Element-wise exponential.

Source

pub fn log(&self) -> Result<Tensor>

Element-wise natural logarithm.

Source

pub fn sqrt(&self) -> Result<Tensor>

Element-wise square root.

Source

pub fn abs(&self) -> Result<Tensor>

Element-wise absolute value.

Source

pub fn triu(&self, diagonal: i64) -> Result<Tensor>

Upper triangle of a matrix (or batch of matrices). Elements below the diagonal-th diagonal are zeroed. diagonal=0 keeps the main diagonal; diagonal=1 excludes it.
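A common use is building a causal attention mask; a minimal sketch:

```rust
// Ones strictly above the main diagonal mark "future" positions.
// diagonal=1 zeroes the main diagonal and everything below it.
let future = Tensor::ones(&[4, 4], TensorOptions::default())?.triu(1)?;
```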

Source

pub fn pow_scalar(&self, exponent: f64) -> Result<Tensor>

Raise every element to a scalar exponent.

Source

pub fn sum_dim(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Sum along a dimension.

Source

pub fn clamp(&self, min: f64, max: f64) -> Result<Tensor>

Clamp all elements to [min, max].

Source

pub fn gt_scalar(&self, scalar: f64) -> Result<Tensor>

Element-wise greater-than comparison against a scalar.

Source

pub fn reshape(&self, shape: &[i64]) -> Result<Tensor>

Reshape to a new shape (must have same total elements). Use -1 for one inferred dimension.

let flat = t.reshape(&[-1])?; // [2, 3] → [6]
Source

pub fn transpose(&self, dim0: i32, dim1: i32) -> Result<Tensor>

Swap two dimensions.

let t = x.transpose(0, 1)?; // [M, N] → [N, M]
Source

pub fn expand(&self, shape: &[i64]) -> Result<Tensor>

Broadcast to a larger shape.

Source

pub fn narrow(&self, dim: i32, start: i64, length: i64) -> Result<Tensor>

Narrow (slice) along a dimension: returns a view.

Source

pub fn narrow_scatter( &self, src: &Tensor, dim: i32, start: i64, ) -> Result<Tensor>

Scatter a narrow slice back into a tensor (for narrow backward).

Source

pub fn cat(&self, other: &Tensor, dim: i32) -> Result<Tensor>

Concatenate two tensors along a dimension.

Source

pub fn cat_many(tensors: &[&Tensor], dim: i32) -> Result<Tensor>

Concatenate multiple tensors along an existing dimension.

All tensors must have the same shape except in the concatenation dimension. Uses a single kernel launch regardless of the number of tensors.

Source

pub fn stack(tensors: &[&Tensor], dim: i32) -> Result<Tensor>

Stack tensors along a new dimension.

All tensors must have the same shape. A new dimension is inserted at dim.
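For example (contrast with cat_many, which joins along an existing dimension):

```rust
let a = Tensor::zeros(&[2, 3], TensorOptions::default())?;
let b = Tensor::ones(&[2, 3], TensorOptions::default())?;
let s = Tensor::stack(&[&a, &b], 0)?; // two [2, 3] → [2, 2, 3]
assert_eq!(s.shape(), vec![2, 2, 3]);
```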

Source

pub fn softmax(&self, dim: i32) -> Result<Tensor>

Softmax along a dimension.

Source

pub fn log_softmax(&self, dim: i32) -> Result<Tensor>

Log-softmax along a dimension (numerically stable).
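A sketch of both, assuming PyTorch-style dim semantics:

```rust
let logits = Tensor::from_f32(&[1.0, 2.0, 3.0], &[1, 3], Device::CPU)?;
let probs = logits.softmax(1)?;     // each row sums to 1
let logp  = logits.log_softmax(1)?; // log of the above, without the
                                    // underflow risk of log(softmax(x))
```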

Source

pub fn gelu(&self) -> Result<Tensor>

GELU activation (native libtorch).

Source

pub fn silu(&self) -> Result<Tensor>

SiLU activation (native libtorch).

Source

pub fn native_layer_norm( &self, weight: &Tensor, bias: &Tensor, normalized_size: i64, eps: f64, ) -> Result<(Tensor, Tensor, Tensor)>

Native layer normalization. Returns (output, mean, rstd).

Source

pub fn permute(&self, dims: &[i64]) -> Result<Tensor>

Permute dimensions.

Source

pub fn select(&self, dim: i32, index: i64) -> Result<Tensor>

Select a single index along a dimension (reduces that dim).

Source

pub fn mean_dim(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Mean along a dimension.

Source

pub fn index_select(&self, dim: i32, index: &Tensor) -> Result<Tensor>

Select rows/elements along a dimension using an index tensor.

Source

pub fn index_add( &self, dim: i32, index: &Tensor, src: &Tensor, ) -> Result<Tensor>

Scatter-add src into self along dim at positions given by index.

Source

pub fn zeros_like(t: &Tensor) -> Result<Tensor>

Create a tensor of zeros with the same shape, dtype, and device as t.

Source

pub fn ones_like(t: &Tensor) -> Result<Tensor>

Create a tensor of ones with the same shape, dtype, and device as t.

Source

pub fn rand(shape: &[i64], opts: TensorOptions) -> Result<Self>

Create a tensor with uniform random values in [0, 1).

Source

pub fn randn(shape: &[i64], opts: TensorOptions) -> Result<Self>

Create a tensor with standard normal random values (mean=0, std=1).

Source

pub fn conv2d( &self, weight: &Tensor, bias: Option<&Tensor>, stride: [i64; 2], padding: [i64; 2], dilation: [i64; 2], groups: i64, ) -> Result<Tensor>

2D convolution. Pass None for bias to omit the bias term.

Source

pub fn conv_transpose2d( &self, weight: &Tensor, bias: Option<&Tensor>, stride: [i64; 2], padding: [i64; 2], output_padding: [i64; 2], dilation: [i64; 2], groups: i64, ) -> Result<Tensor>

Transposed 2D convolution.

Source

pub fn linear(&self, weight: &Tensor, bias: Option<&Tensor>) -> Result<Tensor>

Fused linear: y = input @ weight^T + bias (single ATen kernel).

Source

pub fn gru_cell( &self, hx: &Tensor, w_ih: &Tensor, w_hh: &Tensor, b_ih: &Tensor, b_hh: &Tensor, ) -> Result<Tensor>

Fused GRU cell: single ATen gru_cell kernel. Returns new hidden state h’.

Source

pub fn lstm_cell( &self, hx: &Tensor, cx: &Tensor, w_ih: &Tensor, w_hh: &Tensor, b_ih: &Tensor, b_hh: &Tensor, ) -> Result<(Tensor, Tensor)>

Fused LSTM cell: single ATen lstm_cell kernel. Returns (h', c').

Source

pub fn mse_loss(&self, target: &Tensor, reduction: i64) -> Result<Tensor>

Fused MSE loss: single libtorch kernel. reduction: 0=None, 1=Mean, 2=Sum.

Source

pub fn cross_entropy_loss( &self, target: &Tensor, reduction: i64, ignore_index: i64, label_smoothing: f64, ) -> Result<Tensor>

Fused cross-entropy loss: single libtorch kernel. self: [N, C] logits. target: [N] Int64 class indices or [N, C] Float probabilities. reduction: 0=None, 1=Mean, 2=Sum.
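A minimal sketch; the -100 ignore_index sentinel mirrors PyTorch convention and is an assumption here, as is label_smoothing=0.0:

```rust
// Logits for N=2 samples over C=3 classes, with Int64 class targets.
let logits = Tensor::from_f32(&[2.0, 0.5, 0.1, 0.2, 1.8, 0.3], &[2, 3], Device::CPU)?;
let target = Tensor::from_i64(&[0, 1], &[2], Device::CPU)?;
let loss = logits.cross_entropy_loss(&target, 1, -100, 0.0)?; // reduction=1 (Mean)
let _ = loss.item()?; // scalar under Mean reduction
```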

Source

pub fn bce_with_logits_loss( &self, target: &Tensor, reduction: i64, ) -> Result<Tensor>

Fused BCE with logits loss: single libtorch kernel. Numerically stable binary cross-entropy from raw logits. reduction: 0=None, 1=Mean, 2=Sum.

Source

pub fn l1_loss(&self, target: &Tensor, reduction: i64) -> Result<Tensor>

Fused L1 loss: single libtorch kernel. reduction: 0=None, 1=Mean, 2=Sum.

Source

pub fn smooth_l1_loss( &self, target: &Tensor, reduction: i64, beta: f64, ) -> Result<Tensor>

Fused Smooth L1 (Huber) loss: single libtorch kernel. reduction: 0=None, 1=Mean, 2=Sum. beta: transition point.

Source

pub fn kl_div_loss( &self, target: &Tensor, reduction: i64, log_target: bool, ) -> Result<Tensor>

Fused KL divergence loss: single libtorch kernel. input: log-probabilities. target: probabilities. reduction: 0=None, 1=Mean, 2=Sum, 5=BatchMean.

Source

pub fn batch_norm( &self, weight: Option<&Tensor>, bias: Option<&Tensor>, running_mean: Option<&Tensor>, running_var: Option<&Tensor>, training: bool, momentum: f64, eps: f64, ) -> Result<Tensor>

Fused batch normalization: single libtorch kernel. When training=true, updates running_mean/running_var in-place.

Source

pub fn dropout(&self, p: f64, training: bool) -> Result<Tensor>

Fused dropout: single libtorch kernel with inverted scaling.

Source

pub fn feature_dropout(&self, p: f64, training: bool) -> Result<Tensor>

Fused 2D feature dropout: drops entire channels.

Source

pub fn linspace( start: f64, end: f64, steps: i64, opts: TensorOptions, ) -> Result<Self>

Create evenly spaced values.

Source

pub fn arange( start: f64, end: f64, step: f64, opts: TensorOptions, ) -> Result<Self>

Create a range of values [start, end) with given step.

Source

pub fn min(&self) -> Result<Tensor>

Scalar minimum.

Source

pub fn max(&self) -> Result<Tensor>

Scalar maximum.

Source

pub fn norm(&self) -> Result<Tensor>

L2 (Frobenius) norm of all elements.

Source

pub fn min_dim(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Minimum along a dimension (values only).

Source

pub fn max_dim(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Maximum along a dimension (values only).

Source

pub fn argmax(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Argmax along a dimension.

Source

pub fn ge_scalar(&self, scalar: f64) -> Result<Tensor>

Element-wise greater-than-or-equal comparison against a scalar.

Source

pub fn le_scalar(&self, scalar: f64) -> Result<Tensor>

Element-wise less-than-or-equal comparison against a scalar.

Source

pub fn lt_scalar(&self, scalar: f64) -> Result<Tensor>

Element-wise less-than comparison against a scalar.

Source

pub fn select_scatter( &self, src: &Tensor, dim: i32, index: i64, ) -> Result<Tensor>

Scatter a selected index back into a tensor.

Source

pub fn where_cond(condition: &Tensor, x: &Tensor, y: &Tensor) -> Result<Tensor>

Conditional select: element-wise where(condition, x, y) — takes from x where condition is non-zero, otherwise from y.
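For instance, a hand-rolled ReLU (x is assumed in scope; gt_scalar supplies the 0.0/1.0 condition mask):

```rust
// Keep positive entries of x, zero the rest.
let mask = x.gt_scalar(0.0)?;
let zeros = Tensor::zeros_like(&x)?;
let y = Tensor::where_cond(&mask, &x, &zeros)?;
```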

Source

pub fn squeeze(&self, dim: i32) -> Result<Tensor>

Squeeze (remove) a dimension of size 1.

Source

pub fn unsqueeze(&self, dim: i32) -> Result<Tensor>

Unsqueeze (insert) a dimension of size 1.

Source

pub fn adaptive_avg_pool2d(&self, output_size: [i64; 2]) -> Result<Tensor>

Adaptive average pooling to target spatial size.

Source

pub fn grid_sample( &self, grid: &Tensor, mode: i32, padding_mode: i32, align_corners: bool, ) -> Result<Tensor>

Grid sampling (bilinear/nearest interpolation).

Source

pub fn to_dtype(&self, dtype: DType) -> Result<Tensor>

Cast to a different dtype.

Source

pub fn all_finite(&self) -> Result<bool>

Check if all elements are finite (no inf/nan).

Source

pub fn gt(&self, other: &Tensor) -> Result<Tensor>

Element-wise greater-than (returns float mask: 0.0 or 1.0).

Source

pub fn lt(&self, other: &Tensor) -> Result<Tensor>

Element-wise less-than (returns float mask: 0.0 or 1.0).

Source

pub fn ge(&self, other: &Tensor) -> Result<Tensor>

Element-wise greater-than-or-equal (returns float mask: 0.0 or 1.0).

Source

pub fn le(&self, other: &Tensor) -> Result<Tensor>

Element-wise less-than-or-equal (returns float mask: 0.0 or 1.0).

Source

pub fn eq_tensor(&self, other: &Tensor) -> Result<Tensor>

Element-wise equality. Returns a mask (0.0 or 1.0) in the input’s dtype for float inputs, or Float32 for integer/bool inputs.

Source

pub fn ne_tensor(&self, other: &Tensor) -> Result<Tensor>

Element-wise not-equal. Returns a mask (0.0 or 1.0) in the input’s dtype for float inputs, or Float32 for integer/bool inputs.

Source

pub fn argmin(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Argmin along a dimension.

Source

pub fn var(&self) -> Result<Tensor>

Variance of all elements (Bessel-corrected).

Source

pub fn std(&self) -> Result<Tensor>

Standard deviation of all elements (Bessel-corrected).

Source

pub fn var_dim(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Variance along a dimension (Bessel-corrected).

Source

pub fn std_dim(&self, dim: i32, keepdim: bool) -> Result<Tensor>

Standard deviation along a dimension (Bessel-corrected).

Source

pub fn sin(&self) -> Result<Tensor>

Element-wise sine.

Source

pub fn cos(&self) -> Result<Tensor>

Element-wise cosine.

Source

pub fn sign(&self) -> Result<Tensor>

Element-wise sign (-1, 0, or +1).

Source

pub fn floor(&self) -> Result<Tensor>

Element-wise floor.

Source

pub fn ceil(&self) -> Result<Tensor>

Element-wise ceiling.

Source

pub fn round(&self) -> Result<Tensor>

Element-wise rounding to nearest integer.

Source

pub fn reciprocal(&self) -> Result<Tensor>

Element-wise reciprocal (1/x).

Source

pub fn gather(&self, dim: i32, index: &Tensor) -> Result<Tensor>

Gather values along a dimension using an index tensor.
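A sketch, assuming PyTorch gather semantics (for dim=1, out[i][j] = self[i][index[i][j]]):

```rust
let x = Tensor::from_f32(&[10.0, 20.0, 30.0, 40.0], &[2, 2], Device::CPU)?;
let idx = Tensor::from_i64(&[1, 0], &[2, 1], Device::CPU)?;
let picked = x.gather(1, &idx)?; // row 0 takes column 1, row 1 takes column 0
assert_eq!(picked.to_f32_vec()?, vec![20.0, 30.0]);
```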

Source

pub fn scatter_add( &self, dim: i32, index: &Tensor, src: &Tensor, ) -> Result<Tensor>

Scatter-add: accumulate src into self at index positions along dim.

Source

pub fn topk( &self, k: i64, dim: i32, largest: bool, sorted: bool, ) -> Result<(Tensor, Tensor)>

Top-k values and indices along a dimension. Returns (values, indices).

Source

pub fn sort(&self, dim: i32, descending: bool) -> Result<(Tensor, Tensor)>

Sort along a dimension. Returns (sorted_values, indices).

Source

pub fn eye(n: i64, opts: TensorOptions) -> Result<Self>

Create an identity matrix of size n x n.

Source

pub fn full(shape: &[i64], value: f64, opts: TensorOptions) -> Result<Self>

Create a tensor filled with a scalar value.

Source

pub fn batches(&self, batch_size: i64) -> Result<Vec<Tensor>>

Split tensor into batches of batch_size along dimension 0. The last batch may be smaller if the tensor size isn’t evenly divisible.

let data = Tensor::randn(&[100, 4], opts)?;
for batch in data.batches(32)? {
    let x = Variable::new(batch, false);
    // ...
}
Source

pub fn chunk(&self, chunks: i32, dim: i32) -> Result<Vec<Tensor>>

Split tensor into chunks along a dimension.

Source

pub fn repeat(&self, repeats: &[i64]) -> Result<Tensor>

Repeat the tensor along each dimension.

Source

pub fn pad(&self, padding: &[i64], value: f64) -> Result<Tensor>

Constant-value padding. Padding format matches PyTorch: [left, right, top, bottom, …].
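For example, one pixel of zero padding on each side of an image's spatial dims (img is an NCHW tensor assumed in scope):

```rust
// Padding pairs apply to the last dimensions first: [left, right, top, bottom].
let padded = img.pad(&[1, 1, 1, 1], 0.0)?; // [1, 1, 2, 2] → [1, 1, 4, 4]
```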

Source

pub fn unsqueeze_many(&self, dims: &[i32]) -> Result<Tensor>

Insert multiple dimensions of size 1. Dims are sorted ascending and applied sequentially.

Source

pub fn meshgrid(tensors: &[&Tensor]) -> Result<Vec<Tensor>>

Compute meshgrid from a slice of 1-D tensors (always “ij” indexing).

Source

pub fn cdist(&self, other: &Tensor) -> Result<Tensor>

Pairwise L2 distance between rows of two batched matrices. Input shapes: [B, P, D] and [B, R, D] → output [B, P, R].

Source

pub fn cdist_p(&self, other: &Tensor, p: f64) -> Result<Tensor>

Pairwise distance with custom p-norm.

Source

pub fn to_device(&self, device: Device) -> Result<Tensor>

Move this tensor to a different device (CPU or CUDA). Returns a new tensor; the original is unchanged.

let gpu = t.to_device(Device::CUDA(0))?;
Source

pub fn to_device_of(&self, other: &Tensor) -> Result<Tensor>

Move this tensor to the same device as other. No-op (returns a clone) if both are already on the same device.

let x = x.to_device_of(&weights)?;  // ensure same device
Source

pub fn set_requires_grad(&self, requires_grad: bool) -> Result<Tensor>

Set requires_grad on this tensor. Returns a new tensor that shares storage but has the grad flag set. This enables libtorch’s native autograd tracking for all subsequent operations.

Source

pub fn requires_grad(&self) -> bool

Check whether this tensor requires gradient computation.

Source

pub fn backward(&self) -> Result<()>

Run backward pass from this scalar tensor. Populates .grad() on all leaf tensors in the computation graph.
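The autograd methods compose as in PyTorch; a sketch (y = Σ w², so dy/dw = 2w):

```rust
let w = Tensor::from_f32(&[1.0, 2.0], &[2], Device::CPU)?
    .set_requires_grad(true)?;
let y = w.mul(&w)?.sum()?; // scalar: 1 + 4 = 5
y.backward()?;             // populates grads on leaf tensors
let g = w.grad().expect("leaf should have a grad"); // dy/dw = 2w → [2.0, 4.0]
```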

Source

pub fn grad(&self) -> Option<Tensor>

Get the accumulated gradient for this tensor, if any. Returns None if no gradient has been computed.

Source

pub fn set_grad(&self, grad: &Tensor) -> Result<()>

Replace the gradient tensor (for gradient clipping / unscaling).

Source

pub fn zero_grad(&self) -> Result<()>

Zero out the accumulated gradient.

Source

pub fn zero_grad_set_to_none(&self)

Null out the gradient pointer instead of zeroing the data. No CUDA kernel is launched — the grad tensor is simply reset to undefined. This matches PyTorch’s optimizer.zero_grad(set_to_none=True), which has been the default since PyTorch 2.0.

Source

pub fn clip_grad_norm_fused(params: &[Tensor], max_norm: f64) -> Result<f64>

Fused clip_grad_norm: compute global L2 norm across all param grads and scale in-place if it exceeds max_norm. Single C++ call. Returns the original total norm before clipping.

Source

pub fn is_leaf(&self) -> bool

Whether this tensor is a leaf in the autograd graph. A tensor is a leaf if it was created by the user (not by an op) or if it doesn’t require grad.

Source

pub fn autograd_node_count(&self) -> i64

Count unique autograd nodes reachable from this tensor’s grad_fn. Returns 0 for leaf tensors or tensors without gradient tracking. This is the number of backward operations libtorch will execute.

Source

pub fn detach(&self) -> Result<Tensor>

Detach from the computation graph. Returns a new tensor that shares storage but has no autograd history.

Source

pub fn detach_(&self) -> Result<()>

In-place detach: sever the grad_fn chain on this tensor without allocating a new handle. After this call the tensor’s autograd_meta no longer references any C++ Node objects, allowing the autograd graph to be freed immediately rather than when the tensor is dropped.

Source

pub fn add_(&self, other: &Tensor) -> Result<()>

In-place add: self += other

Source

pub fn sub_(&self, other: &Tensor) -> Result<()>

In-place subtract: self -= other

Source

pub fn mul_scalar_(&self, scalar: f64) -> Result<()>

In-place scalar multiply: self *= scalar

Source

pub fn add_scalar_(&self, scalar: f64) -> Result<()>

In-place scalar add: self += scalar

Source

pub fn zero_(&self) -> Result<()>

In-place zero: self = 0

Source

pub fn adam_step( &self, grad: &Tensor, m: &Tensor, v: &Tensor, lr: f64, beta1: f64, beta2: f64, eps: f64, weight_decay: f64, step: i64, ) -> Result<()>

Fused Adam/AdamW step: updates param, m, and v tensors in-place.

Performs the full Adam update in a single FFI call (~5 kernel launches instead of ~16), eliminating temporary tensor allocations.

  • self — parameter tensor (updated in-place)
  • grad — gradient (read-only)
  • m, v — moment buffers (updated in-place)
  • weight_decay — 0.0 for Adam, >0 for AdamW (decoupled)
  • step — timestep for bias correction
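One optimizer step for a single parameter, sketched; the lr/beta/eps values are typical defaults, not values mandated by this API, and m, v, and step are assumed to be maintained by the caller:

```rust
// AdamW step: weight_decay=0.01 gives decoupled decay.
// m and v were created once with Tensor::zeros_like(&param)?.
if let Some(g) = param.grad() {
    param.adam_step(&g, &m, &v, 1e-3, 0.9, 0.999, 1e-8, 0.01, step)?;
    param.zero_grad_set_to_none(); // cheap grad reset for the next iteration
}
```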
Source

pub fn adam_step_batched( params: &[Tensor], grads: &[Tensor], ms: &[Tensor], vs: &[Tensor], lrs: &mut [f64], beta1: f64, beta2: f64, eps: f64, weight_decay: f64, step: i64, ) -> Result<()>

Perform Adam/AdamW update on all params in one C++ loop. Eliminates per-param FFI overhead. lrs[i] supports per-group LR.

Source

pub fn pin_memory(&self) -> Result<Tensor>

Copy this CPU tensor into page-locked (pinned) memory.

Pinned memory enables async CPU→GPU transfers via cudaMemcpyAsync. Only valid for CPU tensors. Returns a new tensor in pinned memory.

Source

pub fn is_pinned(&self) -> bool

Returns true if this tensor is stored in pinned (page-locked) memory.

Trait Implementations§

Source§

impl Clone for Tensor

Source§

fn clone(&self) -> Self

Shallow clone: creates a new C++ Tensor handle sharing the same TensorImpl (and thus the same data storage). Cheap — just bumps libtorch’s internal refcount.

1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.
Source§

impl Debug for Tensor

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.
Source§

impl Drop for Tensor

Source§

fn drop(&mut self)

Executes the destructor for this type.
Source§

impl Send for Tensor

Source§

impl Sync for Tensor
