pub struct Tensor { /* private fields */ }
A tensor with optional gradient tracking for automatic differentiation.
§Design
The tensor stores:
- data: The actual numerical values (backed by aprender’s Vector)
- shape: Dimensions of the tensor
- grad: Accumulated gradient (populated after backward())
- requires_grad: Whether this tensor participates in gradient computation
- grad_fn: The operation that created this tensor (for backprop)
- id: Unique identifier for graph tracking
§Thread Safety
Tensors use Arc internally for shared ownership of gradient functions,
making them safe to share across threads for inference (but not training).
Implementations§
impl Tensor
pub fn mul_scalar(&self, scalar: f32) -> Tensor
Scalar multiplication: z = self * scalar
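§Example
A minimal sketch using from_slice (documented below); the expected values follow directly from z = self * scalar:
let x = Tensor::from_slice(&[1.0, 2.0, 3.0]);
let y = x.mul_scalar(2.0);
// y = [2.0, 4.0, 6.0]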
impl Tensor
pub fn leaky_relu(&self, negative_slope: f32) -> Tensor
Leaky ReLU activation: z = max(negative_slope * x, x)
§Arguments
negative_slope - Controls the angle of the negative slope (default: 0.01)
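§Example
A minimal sketch; negative inputs are scaled by negative_slope while positive inputs pass through unchanged:
let x = Tensor::from_slice(&[-2.0, 3.0]);
let y = x.leaky_relu(0.01);
// y = [-0.02, 3.0]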
impl Tensor
pub fn matmul(&self, other: &Tensor) -> Tensor
Matrix multiplication: z = self @ other
Currently supports 2D tensors only. Batched matmul (3D+ tensors) can be added by iterating over batch dimensions and calling 2D matmul.
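§Example
A minimal 2D sketch; the expected result is ordinary row-by-column matrix multiplication:
let a = Tensor::new(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
let b = Tensor::new(&[5.0, 6.0, 7.0, 8.0], &[2, 2]);
let c = a.matmul(&b);
// c = [[19, 22], [43, 50]]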
pub fn broadcast_add(&self, other: &Tensor) -> Tensor
Broadcast addition: z = matrix + vector (broadcasts over rows).
The vector is broadcast to match the matrix’s second dimension. This is useful for adding biases in neural networks.
§Shape
- self: [N, M] (2D matrix)
- other: [M] (1D vector)
- output: [N, M]
§Example
let matrix = Tensor::new(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
let bias = Tensor::new(&[10.0, 20.0], &[2]);
let result = matrix.broadcast_add(&bias);
// result = [[11, 22], [13, 24]]
impl Tensor
pub fn new(data: &[f32], shape: &[usize]) -> Self
Create a new tensor from a slice with the given shape.
By default, gradient tracking is disabled.
§Panics
Panics if the data length doesn’t match the product of shape dimensions.
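§Example
A minimal sketch: six values reshaped as a 2×3 matrix (2 * 3 == 6, so the length check passes):
let t = Tensor::new(&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], &[2, 3]);
// t is a 2x3 matrix with gradient tracking disabled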
pub fn from_slice(data: &[f32]) -> Self
Create a tensor from a 1D slice (vector).
pub fn zeros_like(other: &Tensor) -> Self
Create a tensor with the same shape as another, filled with zeros.
pub fn ones_like(other: &Tensor) -> Self
Create a tensor with the same shape as another, filled with ones.
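§Example
A minimal sketch covering both constructors; only the shape is taken from the source tensor:
let t = Tensor::new(&[1.0, 2.0, 3.0, 4.0], &[2, 2]);
let z = Tensor::zeros_like(&t); // shape [2, 2], all 0.0
let o = Tensor::ones_like(&t);  // shape [2, 2], all 1.0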
pub fn requires_grad(self) -> Self
Enable gradient tracking for this tensor.
Returns self for method chaining.
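§Example
A minimal sketch of the chaining style this builder method enables:
let x = Tensor::new(&[1.0, 2.0], &[2]).requires_grad();
assert!(x.requires_grad_enabled());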
pub fn requires_grad_(&mut self, requires: bool) -> &mut Self
Enable or disable gradient tracking (in-place).
pub fn requires_grad_enabled(&self) -> bool
Check if this tensor requires gradient computation.
pub fn data_mut(&mut self) -> &mut [f32]
Get a mutable reference to the underlying data.
§Warning
Modifying data directly may invalidate gradients.
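§Example
A minimal sketch; per the warning above, do this only on tensors whose gradients you do not need:
let mut t = Tensor::from_slice(&[1.0, 2.0]);
t.data_mut()[0] = 5.0; // t is now [5.0, 2.0]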
pub fn zero_grad_(&mut self)
Zero out the gradient.
pub fn clear_grad(&mut self)
Clear the gradient (alias for zero_grad_).
pub fn detach(&self) -> Tensor
Detach tensor from computation graph.
Returns a new tensor with the same data but no gradient tracking.
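§Example
A minimal sketch; the detached copy no longer participates in autodiff:
let x = Tensor::from_slice(&[1.0]).requires_grad();
let d = x.detach();
assert!(!d.requires_grad_enabled());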
pub fn item(&self) -> f32
Get a scalar value (for 0-d or 1-element tensors).
§Panics
Panics if the tensor has more than one element.
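§Example
A minimal sketch with a 1-element tensor (anything larger would panic):
let t = Tensor::from_slice(&[42.0]);
assert_eq!(t.item(), 42.0);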
pub fn backward(&self)
Compute gradients via backpropagation.
This implements the reverse-mode automatic differentiation algorithm described in Rumelhart et al. (1986).
§Panics
Panics if called on a tensor with more than one element
(use backward_with_grad for non-scalar outputs).
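§Example
A minimal sketch, assuming mul_scalar records a grad_fn as described in §Design; the output has a single element, so backward() is allowed:
let x = Tensor::from_slice(&[3.0]).requires_grad();
let y = x.mul_scalar(2.0);
y.backward(); // dy/dx = 2.0 is accumulated into x's grad field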
pub fn backward_with_grad(&self, grad_output: Tensor)
Compute gradients with a specified output gradient.
§Arguments
grad_output - Gradient of the loss with respect to this tensor
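§Example
A minimal sketch for a non-scalar output; seeding with ones is equivalent to differentiating the sum of y's elements:
let x = Tensor::new(&[1.0, 2.0], &[2]).requires_grad();
let y = x.mul_scalar(3.0);
y.backward_with_grad(Tensor::from_slice(&[1.0, 1.0]));
// each element of x's grad becomes 3.0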
Auto Trait Implementations§
impl Freeze for Tensor
impl !RefUnwindSafe for Tensor
impl Send for Tensor
impl Sync for Tensor
impl Unpin for Tensor
impl !UnwindSafe for Tensor