Crate ferrotorch
- autograd
- creation
- data
- Data loading, datasets, samplers, and transforms.
- device
- distributions
- Probability distributions for sampling and variational inference.
- dtype
- einops
- Einops-style tensor rearrangement operations.
- einsum
- Einstein summation (`einsum`) for ferrotorch tensors.
- error
- fft
- FFT operations for tensors, powered by rustfft.
- gpu_dispatch
- GPU backend dispatch layer.
- grad_fns
- hub
- Model hub for downloading and caching pretrained models.
- jit
- JIT tracing, IR graph, optimization passes, and code generation.
- linalg
- Advanced linear algebra operations bridging to ferray-linalg.
- nn
- Neural network modules and layers.
- ops
- optim
- Optimizers and learning rate schedulers.
- prelude
- Prelude module — import everything commonly needed.
- profiler
- Performance profiling and Chrome trace export.
- quantize
- Post-training quantization (PTQ) for ferrotorch tensors.
- serialize
- Model serialization: ONNX export, PyTorch import, safetensors, GGUF.
- shape
- sparse
- special
- Special mathematical functions (`torch.special` equivalent).
- storage
- tensor
- train
- Training loop, Learner, callbacks, and metrics.
- vision
- Computer vision models, datasets, and transforms.
- vmap
- Vectorized map (vmap) — apply a function over a batch dimension.
- QuantizedTensor
- A tensor stored in quantized (integer) representation.
- SparseTensor
- A sparse tensor in COO (Coordinate List) format.
- Tensor
- The central type. A dynamically-shaped tensor with gradient tracking
and device placement.
- TensorId
- A unique, monotonically increasing tensor identifier.
- TensorStorage
- The underlying data buffer for a tensor, tagged with its device.
- AutocastDtype
- The reduced-precision dtype used during autocast regions.
- DType
- Runtime descriptor for the element type stored in an array.
- Device
- Device on which a tensor’s data resides.
- EinopsReduction
- Reduction operation for `reduce`.
- FerrotorchError
- Errors produced by ferrotorch operations.
- QuantDtype
- Target integer dtype for quantized storage.
- QuantScheme
- Granularity of quantization parameters (scale / zero_point).
- StorageBuffer
- Device-specific data buffer.
- Element
- Trait bound for types that can be stored in a ferray array.
- Float
- Marker trait for float element types that support autograd.
- GradFn
- The backward function trait for reverse-mode automatic differentiation.
- arange
- Create a 1-D tensor with values from `start` to `end` (exclusive) with step `step`.
- autocast
- Execute a closure with mixed-precision autocast enabled.
- autocast_dtype
- Returns the target dtype for autocast regions on this thread.
- backward
- Compute gradients of all leaf tensors that contribute to `root`.
- backward_with_grad
- Run backward pass through the computation graph.
- broadcast_shapes
- Compute the broadcasted shape of two shapes, following NumPy/PyTorch rules.
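`broadcast_shapes` follows the standard NumPy/PyTorch rule: align shapes from the trailing dimension, and two sizes are compatible when they are equal or one of them is 1. A standalone sketch of that rule (not the crate's actual implementation):

```rust
// Broadcast two shapes per NumPy/PyTorch rules: align from the right;
// paired dimensions must be equal, or one of them must be 1.
fn broadcast_shapes(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let n = a.len().max(b.len());
    let mut out = Vec::with_capacity(n);
    for i in 0..n {
        // Missing leading dimensions are treated as size 1.
        let x = if i < a.len() { a[a.len() - 1 - i] } else { 1 };
        let y = if i < b.len() { b[b.len() - 1 - i] } else { 1 };
        match (x, y) {
            (x, y) if x == y => out.push(x),
            (1, y) => out.push(y),
            (x, 1) => out.push(x),
            _ => return None, // incompatible shapes
        }
    }
    out.reverse();
    Some(out)
}

fn main() {
    assert_eq!(broadcast_shapes(&[3, 1, 5], &[4, 5]), Some(vec![3, 4, 5]));
    assert_eq!(broadcast_shapes(&[2, 3], &[3, 2]), None);
    println!("broadcasting rules ok");
}
```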
- cat
- Concatenate tensors along an axis.
- chunk_t
- Split tensor into `chunks` roughly equal pieces along `dim`.
- clamp
- Differentiable elementwise clamp: `c[i] = x[i].clamp(min, max)`.
- contiguous_t
- Make tensor contiguous (copy data if needed).
- cos
- Differentiable elementwise cosine: `c[i] = cos(x[i])`.
- dequantize
- Dequantize back to a floating-point tensor.
- digamma
- Digamma function: psi(x) = d/dx ln(Gamma(x)).
- einsum
- Einstein summation.
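Einsum notation names each axis with a letter; repeated letters are summed, and letters after `->` survive in the output. What `"ij,jk->ik"` (matrix multiplication) denotes can be shown with explicit loops over row-major data, independent of the crate:

```rust
// What einsum("ij,jk->ik", a, b) computes: the repeated index j is summed
// out, the free indices i and k survive in the output. Row-major 2-D data.
fn einsum_ij_jk_ik(a: &[f64], b: &[f64], i: usize, j: usize, k: usize) -> Vec<f64> {
    let mut out = vec![0.0; i * k];
    for ii in 0..i {
        for kk in 0..k {
            for jj in 0..j {
                out[ii * k + kk] += a[ii * j + jj] * b[jj * k + kk];
            }
        }
    }
    out
}

fn main() {
    // [[1, 2], [3, 4]] @ [[5, 6], [7, 8]] = [[19, 22], [43, 50]]
    let c = einsum_ij_jk_ik(&[1.0, 2.0, 3.0, 4.0], &[5.0, 6.0, 7.0, 8.0], 2, 2, 2);
    assert_eq!(c, vec![19.0, 22.0, 43.0, 50.0]);
    println!("einsum ok");
}
```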
- einsum_differentiable
- Differentiable Einstein summation. If any input requires grad and grad is enabled, attaches `EinsumBackward`.
- enable_grad
- Re-enable gradient computation inside a `no_grad` block.
- erf
- Error function: erf(x) = (2/sqrt(pi)) * integral(0, x, exp(-t^2) dt).
- erfc
- Complementary error function: erfc(x) = 1 - erf(x).
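The error function has no closed form, so implementations evaluate an approximation. As an illustration (this is a classical textbook approximation, not necessarily what ferrotorch uses), here is the Abramowitz & Stegun 7.1.26 rational fit, accurate to about 1.5e-7:

```rust
// Abramowitz & Stegun 7.1.26 rational approximation of erf
// (max absolute error ~1.5e-7). erfc(x) is then just 1 - erf(x).
fn erf(x: f64) -> f64 {
    let sign = if x < 0.0 { -1.0 } else { 1.0 }; // erf is odd
    let x = x.abs();
    let t = 1.0 / (1.0 + 0.3275911 * x);
    let poly = t * (0.254829592
        + t * (-0.284496736
        + t * (1.421413741
        + t * (-1.453152027
        + t * 1.061405429))));
    sign * (1.0 - poly * (-x * x).exp())
}

fn main() {
    assert!(erf(0.0).abs() < 1e-6);
    assert!((erf(1.0) - 0.8427007929).abs() < 1e-6);
    assert!((erf(-1.0) + 0.8427007929).abs() < 1e-6); // odd symmetry
    println!("erf ok");
}
```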
- erfinv
- Inverse error function: erfinv(erf(x)) = x.
- exp
- Differentiable elementwise exponential: `c[i] = exp(x[i])`.
- expm1
- exp(x) - 1, numerically stable for small x.
- eye
- Create an identity matrix of size `n x n`.
- fft
- 1-D complex-to-complex FFT along the last dimension.
- fft2
- 2-D FFT (complex-to-complex) along the last two spatial dimensions.
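The FFT functions compute the discrete Fourier transform; rustfft does it in O(n log n), but the reference definition is the naive O(n^2) sum, sketched here with (re, im) pairs to stay dependency-free:

```rust
// Naive O(n^2) DFT, the reference definition the fft functions implement
// efficiently: X[k] = sum_j x[j] * exp(-2*pi*i*j*k / n).
fn dft(x: &[(f64, f64)]) -> Vec<(f64, f64)> {
    let n = x.len();
    (0..n)
        .map(|k| {
            let mut acc = (0.0, 0.0);
            for (j, &(re, im)) in x.iter().enumerate() {
                let ang = -2.0 * std::f64::consts::PI * (j * k) as f64 / n as f64;
                let (c, s) = (ang.cos(), ang.sin());
                // complex multiply: (re + i*im) * (c + i*s)
                acc.0 += re * c - im * s;
                acc.1 += re * s + im * c;
            }
            acc
        })
        .collect()
}

fn main() {
    // The DFT of an impulse is a flat, all-ones spectrum.
    let spec = dft(&[(1.0, 0.0), (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]);
    for &(re, im) in &spec {
        assert!((re - 1.0).abs() < 1e-12 && im.abs() < 1e-12);
    }
    println!("dft ok");
}
```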
- fft_differentiable
- Differentiable 1-D FFT. Attaches `FftBackward` when grad is needed.
- fixed_point
- Find a fixed point of `f` starting from `x0`, then compute its derivative w.r.t. `params` using the implicit function theorem.
- from_slice
- Create a tensor from a slice, copying the data.
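The `fixed_point` entry above relies on the implicit function theorem: if x* = f(x*, p), then in the scalar case dx*/dp = f_p / (1 - f_x), with the partials evaluated at the fixed point. A scalar sketch of the idea (not the crate's API), using the Babylonian square-root iteration:

```rust
// Fixed-point differentiation via the implicit function theorem.
// With f(x, p) = 0.5 * (x + p / x), the fixed point is x* = sqrt(p),
// and dx*/dp = f_p / (1 - f_x) evaluated at x*.
fn fixed_point(p: f64, x0: f64) -> f64 {
    let mut x = x0;
    for _ in 0..50 {
        x = 0.5 * (x + p / x); // iterate to convergence
    }
    x
}

fn main() {
    let p = 4.0;
    let x = fixed_point(p, 1.0); // converges to sqrt(4) = 2
    // Partial derivatives of f at the fixed point.
    let f_x = 0.5 * (1.0 - p / (x * x)); // vanishes at x = sqrt(p)
    let f_p = 0.5 / x;
    let dxdp = f_p / (1.0 - f_x);
    // Analytic check: d sqrt(p)/dp = 0.5 / sqrt(p) = 0.25 at p = 4.
    assert!((x - 2.0).abs() < 1e-12);
    assert!((dxdp - 0.25).abs() < 1e-12);
    println!("implicit function theorem ok");
}
```

The appeal of this route is that the derivative comes from one linear solve at the converged point, with no need to backpropagate through the iteration itself.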
- from_vec
- Create a tensor from a `Vec<T>`, taking ownership.
- full
- Create a tensor filled with a given value.
- full_like
- Create a tensor filled with `value` with the same shape as `other`.
- grad
- Compute gradients of `outputs` with respect to `inputs`.
- grad_norm
- Compute the L2 norm of gradients of `outputs` with respect to `inputs`.
- gradient_penalty
- Compute the gradient penalty for WGAN-GP.
- hessian
- Compute the Hessian matrix of a scalar function at a point.
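An autodiff `hessian` computes second derivatives exactly; its numerical counterpart, useful for checking, is the central-difference stencil H[i][j] = (f(x+h_i+h_j) - f(x+h_i-h_j) - f(x-h_i+h_j) + f(x-h_i-h_j)) / (4h^2). A self-contained sketch:

```rust
// Central-difference Hessian of a scalar function: the numerical
// analogue of what an autodiff hessian computes exactly.
fn hessian(f: &dyn Fn(&[f64]) -> f64, x: &[f64], h: f64) -> Vec<Vec<f64>> {
    let n = x.len();
    let mut out = vec![vec![0.0; n]; n];
    for i in 0..n {
        for j in 0..n {
            let mut pp = x.to_vec();
            let mut pm = x.to_vec();
            let mut mp = x.to_vec();
            let mut mm = x.to_vec();
            pp[i] += h; pp[j] += h;
            pm[i] += h; pm[j] -= h;
            mp[i] -= h; mp[j] += h;
            mm[i] -= h; mm[j] -= h;
            out[i][j] = (f(&pp) - f(&pm) - f(&mp) + f(&mm)) / (4.0 * h * h);
        }
    }
    out
}

fn main() {
    // f(x, y) = x^2 * y has Hessian [[2y, 2x], [2x, 0]].
    let f = |v: &[f64]| v[0] * v[0] * v[1];
    let h = hessian(&f, &[1.0, 2.0], 1e-4);
    assert!((h[0][0] - 4.0).abs() < 1e-4);
    assert!((h[0][1] - 2.0).abs() < 1e-4);
    assert!((h[1][0] - 2.0).abs() < 1e-4);
    assert!(h[1][1].abs() < 1e-4);
    println!("hessian ok");
}
```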
- ifft
- 1-D inverse FFT along the last dimension.
- ifft2
- 2-D inverse FFT (complex-to-complex) along the last two spatial dimensions.
- ifft_differentiable
- Differentiable 1-D inverse FFT. Attaches `IfftBackward` when grad is needed.
- irfft
- 1-D complex-to-real inverse FFT.
- irfft_differentiable
- Differentiable 1-D inverse real FFT. Attaches `IrfftBackward` when grad is needed.
- is_autocast_enabled
- Returns `true` if mixed-precision autocast is currently enabled on this thread.
- is_grad_enabled
- Returns `true` if gradient tracking is currently enabled on this thread.
- jacobian
- Compute the Jacobian matrix of a function at a point.
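The Jacobian J[i][j] = df_i/dx_j is the matrix that `jvp` contracts from the right (J @ v) and `vjp` from the left (v^T @ J), each without materializing J. As a numerical cross-check for all three, here is a central-difference Jacobian sketch, independent of the crate:

```rust
// Central-difference Jacobian J[i][j] = d f_i / d x_j, the numerical
// analogue of an autodiff jacobian; jvp/vjp contract it with a vector.
fn jacobian(f: &dyn Fn(&[f64]) -> Vec<f64>, x: &[f64], h: f64) -> Vec<Vec<f64>> {
    let m = f(x).len();
    let n = x.len();
    let mut out = vec![vec![0.0; n]; m];
    for j in 0..n {
        let mut xp = x.to_vec();
        let mut xm = x.to_vec();
        xp[j] += h;
        xm[j] -= h;
        let (fp, fm) = (f(&xp), f(&xm));
        for i in 0..m {
            out[i][j] = (fp[i] - fm[i]) / (2.0 * h);
        }
    }
    out
}

fn main() {
    // f(x0, x1) = [x0^2, x0 * x1] has Jacobian [[2*x0, 0], [x1, x0]].
    let f = |v: &[f64]| vec![v[0] * v[0], v[0] * v[1]];
    let j = jacobian(&f, &[2.0, 3.0], 1e-6);
    assert!((j[0][0] - 4.0).abs() < 1e-6);
    assert!(j[0][1].abs() < 1e-6);
    assert!((j[1][0] - 3.0).abs() < 1e-6);
    assert!((j[1][1] - 2.0).abs() < 1e-6);
    println!("jacobian ok");
}
```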
- jvp
- Compute the Jacobian-vector product (JVP): `J @ v`.
- lgamma
- Log-gamma function: lgamma(x) = log(|Gamma(x)|).
- linspace
- Create a 1-D tensor of `num` evenly spaced values from `start` to `end` (inclusive).
- log
- Differentiable elementwise natural log: `c[i] = ln(x[i])`.
- log1p
- log(1 + x), numerically stable for small x.
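Why `log1p` and `expm1` exist: for tiny x, the sum 1.0 + x rounds to exactly 1.0 in f64, so naively computing (1 + x).ln() collapses to 0, while the fused forms keep full precision. Rust's standard library demonstrates this directly:

```rust
// For x below machine epsilon, 1.0 + x rounds to 1.0, so (1.0 + x).ln()
// loses everything; ln_1p and exp_m1 avoid forming 1 + x explicitly.
fn main() {
    let x = 1e-16_f64;
    assert_eq!((1.0 + x) - 1.0, 0.0);       // the addend is already gone
    assert_eq!((1.0 + x).ln(), 0.0);        // catastrophic rounding
    assert!((x.ln_1p() - x).abs() < 1e-30); // ln(1+x) ~ x for small x
    assert!((x.exp_m1() - x).abs() < 1e-30); // exp(x)-1 ~ x for small x
    println!("log1p/expm1 ok");
}
```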
- mean_dim
- Mean along a specific dimension.
- no_grad
- Execute a closure with gradient tracking disabled.
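The `no_grad` / `enable_grad` / `is_grad_enabled` trio suggests the usual design: a thread-local flag that ops consult before recording backward nodes, saved and restored around each closure so regions nest. A minimal sketch of that pattern (an assumption about the design, not ferrotorch's actual internals):

```rust
use std::cell::Cell;

thread_local! {
    // Per-thread flag that ops would consult before recording backward nodes.
    static GRAD_ENABLED: Cell<bool> = Cell::new(true);
}

fn is_grad_enabled() -> bool {
    GRAD_ENABLED.with(|g| g.get())
}

// Run `f` with gradient tracking disabled, restoring the previous state
// afterwards so no_grad blocks nest correctly.
fn no_grad<R>(f: impl FnOnce() -> R) -> R {
    GRAD_ENABLED.with(|g| {
        let prev = g.replace(false);
        let out = f();
        g.set(prev);
        out
    })
}

fn main() {
    assert!(is_grad_enabled());
    let x = no_grad(|| {
        assert!(!is_grad_enabled()); // tracking is off inside the block
        42
    });
    assert_eq!(x, 42);
    assert!(is_grad_enabled()); // previous state restored on exit
    println!("no_grad ok");
}
```

A production version would restore the flag via an RAII guard so the state also recovers on panic.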
- normalize_axis
- Normalize a possibly-negative axis index to a positive one.
- ones
- Create a tensor filled with ones.
- ones_like
- Create a tensor of ones with the same shape as `other`.
- permute_t
- Permute tensor dimensions. Like PyTorch’s `tensor.permute(dims)`.
- quantize
- Quantize a floating-point tensor.
- quantize_named_tensors
- Quantize every weight tensor in a module, returning a name -> QuantizedTensor
map suitable for serialization or quantized inference.
- quantized_matmul
- Multiply two quantized 2-D matrices and return a quantized result.
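Post-training quantization of the affine kind maps floats to integers as q = round(x / scale) + zero_point and back as x ≈ (q - zero_point) * scale, with round-trip error bounded by scale / 2. A per-tensor i8 sketch of that scheme (illustrative; the crate's exact scheme is configured via QuantScheme/QuantDtype):

```rust
// Per-tensor affine quantization to i8 and back.
fn quantize(x: &[f32], scale: f32, zero_point: i32) -> Vec<i8> {
    x.iter()
        .map(|&v| ((v / scale).round() as i32 + zero_point).clamp(-128, 127) as i8)
        .collect()
}

fn dequantize(q: &[i8], scale: f32, zero_point: i32) -> Vec<f32> {
    q.iter().map(|&v| (v as i32 - zero_point) as f32 * scale).collect()
}

fn main() {
    let x = [0.0_f32, 0.5, -1.0, 2.0];
    // Scale chosen so the data range [-2, 2] maps inside i8's range.
    let (scale, zp) = (2.0 / 127.0, 0);
    let q = quantize(&x, scale, zp);
    let y = dequantize(&q, scale, zp);
    for (a, b) in x.iter().zip(&y) {
        // Round-trip error is at most half a quantization step.
        assert!((a - b).abs() <= scale / 2.0 + 1e-6);
    }
    println!("quantize ok");
}
```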
- rand
- Create a tensor with random values uniformly distributed in [0, 1).
- rand_like
- Create a random tensor in [0, 1) with the same shape as `other`.
- randn
- Create a tensor with random values from a standard normal distribution.
- randn_like
- Create a random normal tensor with the same shape as `other`.
- rearrange
- Rearrange tensor dimensions using an einops-style pattern.
- rearrange_with
- Rearrange with explicit axis sizes for ambiguous splits.
- reduce
- Reduce along axes that appear on the left but not the right.
- repeat
- Repeat tensor elements along new or existing axes.
- rfft
- 1-D real-to-complex FFT along the last dimension.
- rfft_differentiable
- Differentiable 1-D real FFT. Attaches `RfftBackward` when grad is needed.
- scalar
- Create a scalar (0-D) tensor.
- select
- Extract a single slice along `dim` at position `index`, removing the dimension.
- set_grad_enabled
- Programmatically set whether gradients are enabled.
- sigmoid
- Compute `sigmoid(x)`, attaching a backward node when gradients are enabled.
- sin
- Differentiable elementwise sine: `c[i] = sin(x[i])`.
- sinc
- Normalized sinc function: sinc(x) = sin(pi*x) / (pi*x), with sinc(0) = 1.
- split_t
- Split tensor into pieces of given sizes along `dim`.
- stack
- Stack a slice of tensors along a new dimension `dim`.
- sum_dim
- Sum along a specific dimension.
- tanh
- Compute `tanh(x)`, attaching a backward node when gradients are enabled.
- tensor
- Create a 1-D tensor from a slice (shape inferred).
- view_t
- View tensor with new shape. Like PyTorch’s `tensor.view(shape)`.
- vjp
- Compute the vector-Jacobian product (VJP): `v^T @ J`.
- vmap
- Vectorize a function over a batch dimension.
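At its core, vmap lifts a per-example function into a batched one by mapping it over the leading (batch) dimension; a real implementation pushes the batch axis through each primitive instead, but the semantics are those of this sketch:

```rust
// The semantics of vmap: turn a per-example function into one that runs
// over every element of a batch (here, the outer Vec is the batch axis).
fn vmap<T, U>(f: impl Fn(&T) -> U) -> impl Fn(&[T]) -> Vec<U> {
    move |batch: &[T]| batch.iter().map(&f).collect()
}

fn main() {
    // Per-example function: squared L2 norm of one sample.
    let sq_norm = |x: &Vec<f64>| x.iter().map(|v| v * v).sum::<f64>();
    let batched = vmap(sq_norm);
    let batch = vec![vec![3.0, 4.0], vec![1.0, 0.0]];
    assert_eq!(batched(&batch), vec![25.0, 1.0]);
    println!("vmap ok");
}
```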
- vmap2
- Vectorize a two-argument function over batch dimensions.
- xlogy
- x * log(y), with the convention that xlogy(0, y) = 0 for any y.
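The xlogy convention matters because naive `x * y.ln()` produces NaN when both arguments are 0, which would poison entropy and KL-divergence sums over distributions containing exact zeros. A scalar sketch:

```rust
// xlogy(x, y) = x * ln(y), with xlogy(0, y) = 0 by convention so that
// entropy-style sums stay finite when a probability is exactly zero.
fn xlogy(x: f64, y: f64) -> f64 {
    if x == 0.0 { 0.0 } else { x * y.ln() }
}

fn main() {
    assert_eq!(xlogy(0.0, 0.0), 0.0); // 0 * ln(0): convention, not NaN
    assert_eq!(xlogy(1.0, 1.0), 0.0);
    assert!((xlogy(2.0, std::f64::consts::E) - 2.0).abs() < 1e-12);
    // Entropy of a one-hot distribution is 0, not NaN.
    let p = [1.0, 0.0, 0.0];
    let h = -p.iter().map(|&pi| xlogy(pi, pi)).sum::<f64>();
    assert_eq!(h, 0.0);
    println!("xlogy ok");
}
```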
- zeros
- Create a tensor filled with zeros.
- zeros_like
- Create a tensor of zeros with the same shape as
other.
- FerrotorchResult
- Convenience alias for ferrotorch results.