pub struct Vector<T> { /* private fields */ }
High-performance vector with multi-backend support
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[1.0, 2.0, 3.0]);
let b = Vector::from_slice(&[4.0, 5.0, 6.0]);
let result = a.add(&b).unwrap();
assert_eq!(result.as_slice(), &[5.0, 7.0, 9.0]);
Implementations§
impl Vector<f32>
pub fn leaky_relu(&self, negative_slope: f32) -> Result<Self>
Leaky ReLU activation function
Computes the element-wise Leaky ReLU with a configurable negative slope. Leaky ReLU addresses the “dying ReLU” problem by allowing small negative values.
§Formula
leaky_relu(x, α)[i] = max(α·x[i], x[i])
                    = x[i]   if x[i] > 0
                    = α·x[i] if x[i] ≤ 0
§Parameters
negative_slope: The slope for negative values (typically 0.01)
- Must be in range [0.0, 1.0)
- Common values: 0.01 (default), 0.1, 0.2
- α = 0 reduces to standard ReLU
- α = 1 reduces to identity function
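The piecewise formula above can be sketched as a plain-Rust scalar reference (a standalone helper for illustration, not part of this crate's API):

```rust
/// Scalar reference for Leaky ReLU: keeps positive inputs,
/// scales non-positive inputs by `negative_slope` (α).
fn leaky_relu_scalar(xs: &[f32], negative_slope: f32) -> Vec<f32> {
    xs.iter()
        .map(|&x| if x > 0.0 { x } else { negative_slope * x })
        .collect()
}
```

The SIMD backends compute the same `max(αx, x)` selection lane-wise.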
§Properties
- Fixes dying ReLU: Neurons can’t completely die (always has gradient)
- Non-zero gradient: Gradient is α for negative inputs (not zero)
- Unbounded positive: No saturation for positive values
- Parameterized: Negative slope can be tuned or learned (PReLU)
§Applications
- Deep networks: Prevents dying neurons in very deep networks
- GANs: Often used in generator and discriminator networks
- Better gradient flow: Helps with vanishing gradient problem
- Empirical improvements: Often outperforms ReLU in practice
§Performance
This operation is memory-bound (simple multiplication and comparison). SIMD provides modest speedups.
§Errors
Returns EmptyVector if the input vector is empty.
Returns InvalidInput if negative_slope is not in [0.0, 1.0).
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.leaky_relu(0.01)?;
// Negative values multiplied by 0.01, positive unchanged
assert_eq!(result.as_slice(), &[-0.02, -0.01, 0.0, 1.0, 2.0]);
pub fn elu(&self, alpha: f32) -> Result<Self>
ELU (Exponential Linear Unit) activation function
Computes the element-wise ELU with a configurable alpha parameter. ELU pushes mean activations closer to zero, improving learning.
§Formula
elu(x, α)[i] = x[i]           if x[i] > 0
             = α(e^x[i] - 1)  if x[i] ≤ 0
§Parameters
alpha: Controls the saturation value for negative inputs (typically 1.0)
- Must be > 0
- Common value: 1.0 (original ELU paper)
- Larger α → slower saturation for negative inputs
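As a plain-Rust sketch of the same formula (a standalone helper, not the crate's implementation):

```rust
/// Scalar reference for ELU: identity for positive inputs,
/// α(e^x − 1) for non-positive inputs (saturates to −α as x → −∞).
fn elu_scalar(xs: &[f32], alpha: f32) -> Vec<f32> {
    xs.iter()
        .map(|&x| if x > 0.0 { x } else { alpha * (x.exp() - 1.0) })
        .collect()
}
```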
§Properties
- Smooth: Unlike ReLU/Leaky ReLU, has smooth gradients everywhere
- Negative values: Allows negative outputs (pushes mean closer to zero)
- Bounded below: Saturates to -α for very negative inputs
- Unbounded above: No saturation for positive values
- Non-zero gradient: Has gradient everywhere (no dead neurons)
§Applications
- Deep networks: Better gradient flow than ReLU
- Mean activation near zero: Reduces internal covariate shift
- Noise robustness: Smooth activation helps with noisy gradients
- Empirical improvements: Often outperforms ReLU and Leaky ReLU
§Performance
This operation is compute-bound due to exp() for negative values. More expensive than ReLU/Leaky ReLU but provides better properties.
§Errors
Returns EmptyVector if the input vector is empty.
Returns InvalidInput if alpha <= 0.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.elu(1.0)?;
// Negative values: α(e^x - 1), positive unchanged
// elu(-2, 1) ≈ -0.865, elu(-1, 1) ≈ -0.632
assert!((result.as_slice()[0] - (-0.865)).abs() < 0.01);
assert!((result.as_slice()[1] - (-0.632)).abs() < 0.01);
assert_eq!(result.as_slice()[2], 0.0);
assert_eq!(result.as_slice()[3], 1.0);
assert_eq!(result.as_slice()[4], 2.0);
pub fn selu(&self) -> Result<Self>
SELU (Scaled Exponential Linear Unit) activation function
Computes selu(x) = λ * (x if x > 0 else α * (exp(x) - 1)) where λ ≈ 1.0507 and α ≈ 1.6733
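A plain-Rust sketch of this formula, using the λ and α constants from the SELU paper (standalone helper for illustration):

```rust
/// SELU constants from Klambauer et al. (2017).
const SELU_LAMBDA: f32 = 1.050_701;
const SELU_ALPHA: f32 = 1.673_263_2;

/// Scalar reference: λx for x > 0, λα(e^x − 1) otherwise.
fn selu_scalar(x: f32) -> f32 {
    if x > 0.0 {
        SELU_LAMBDA * x
    } else {
        SELU_LAMBDA * SELU_ALPHA * (x.exp() - 1.0)
    }
}
```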
§Properties
- Self-normalizing: Activations converge to zero mean and unit variance
- Vanishing gradient prevention: Non-zero gradient for negative inputs
- Automatic normalization: Reduces need for batch normalization
§Performance
Uses scalar implementation (GPU disabled for element-wise ops).
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.selu()?;
// Positive values scaled by λ ≈ 1.0507
assert!((result.as_slice()[3] - 1.0507).abs() < 0.001);
assert!((result.as_slice()[4] - 2.1014).abs() < 0.001);
// Zero stays zero
assert!(result.as_slice()[2].abs() < 1e-5);
// Negative values use ELU-like formula
assert!(result.as_slice()[0] < 0.0);
§Errors
Returns EmptyVector if the input vector is empty.
§References
- Klambauer et al. (2017): “Self-Normalizing Neural Networks”
impl Vector<f32>
pub fn gelu(&self) -> Result<Self>
GELU (Gaussian Error Linear Unit) activation function
Computes the element-wise GELU activation using the tanh approximation. GELU is the activation function used in transformers (BERT, GPT, etc.).
§Formula
gelu(x) ≈ 0.5 * x * (1 + tanh(√(2/π) * (x + 0.044715 * x³)))
This is the tanh approximation, which is faster than the exact form involving the error function (erf).
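The tanh approximation translates directly into plain Rust (a standalone sketch, not the crate's implementation):

```rust
use std::f32::consts::PI;

/// tanh-approximation GELU, matching the formula above.
fn gelu_tanh(x: f32) -> f32 {
    let c = (2.0 / PI).sqrt(); // √(2/π) ≈ 0.7979
    0.5 * x * (1.0 + (c * (x + 0.044715 * x.powi(3))).tanh())
}
```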
§Properties
- Smooth: Infinitely differentiable everywhere
- Non-monotonic: Unlike ReLU variants, has slight non-monotonicity near zero
- Stochastic regularizer: Can be viewed as adaptive dropout
- Zero-centered: Mean activation close to zero
- Bounded below: Approaches 0 as x → -∞
- Unbounded above: Linear growth for large positive x
§Applications
- Transformers: BERT, GPT-2, GPT-3, GPT-4 (default activation)
- Vision transformers: ViT, DINO, MAE
- Modern architectures: State-of-the-art NLP and vision models
- Better than ReLU: Empirically outperforms ReLU in many tasks
§Performance
This operation is compute-intensive (tanh, x³ calculations). More expensive than ReLU but comparable to ELU.
§Errors
Returns EmptyVector if the input vector is empty.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.gelu()?;
// GELU is smooth and non-monotonic near zero
assert!(result.as_slice()[0] < 0.0); // Negative inputs → small negative outputs
assert_eq!(result.as_slice()[2], 0.0); // gelu(0) = 0
assert!(result.as_slice()[4] > 1.5); // Large positive → ~linear
pub fn swish(&self) -> Result<Self>
Swish activation function (also known as SiLU - Sigmoid Linear Unit)
Applies the Swish activation element-wise: swish(x) = x * sigmoid(x) = x / (1 + e^(-x)).
Swish is a smooth, non-monotonic activation function that consistently matches or outperforms ReLU in deep networks. It’s used in EfficientNet, MobileNet v3, and many modern architectures. The function is self-gated: it adaptively gates the input based on its value.
Properties:
- Smooth and differentiable everywhere
- Non-monotonic: has a slight “dip” for negative values
- swish(0) = 0
- swish(x) ≈ x for large positive x (linear)
- swish(x) ≈ 0 for large negative x
- Unbounded above, bounded below by ≈ -0.278 at x ≈ -1.278
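The self-gating x · σ(x) form can be sketched in a few lines of plain Rust (standalone helper, for illustration only):

```rust
/// Scalar reference for swish/SiLU: x · sigmoid(x) = x / (1 + e^(−x)).
fn swish_scalar(x: f32) -> f32 {
    x / (1.0 + (-x).exp())
}
```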
§Performance
Compute-bound operation requiring exponential and division. Future SIMD optimizations planned for Phase 9 (GPU backend).
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.swish()?;
// Swish is smooth and self-gated
assert!(result.as_slice()[0] < 0.0); // Negative inputs → small negative outputs
assert_eq!(result.as_slice()[2], 0.0); // swish(0) = 0
assert!(result.as_slice()[4] > 1.5); // Large positive → ~linear
§Errors
Returns EmptyVector if the input vector is empty.
§References
- Ramachandran et al. (2017): “Searching for Activation Functions”
- Also known as SiLU (Sigmoid Linear Unit): Elfwing et al. (2018)
pub fn hardswish(&self) -> Result<Self>
Hard Swish activation function
Applies the hardswish activation element-wise: hardswish(x) = x * relu6(x + 3) / 6
Hardswish is a piece-wise linear approximation to swish, designed for efficient computation in mobile neural networks. It’s used in MobileNetV3 and avoids the expensive sigmoid computation of standard swish.
Properties:
- Piece-wise linear: efficient to compute
- hardswish(x) = 0 for x ≤ -3
- hardswish(x) = x for x ≥ 3
- hardswish(x) = x * (x + 3) / 6 for -3 < x < 3
- hardswish(0) = 0
- Continuous at the boundaries x = ±3 (though not differentiable there)
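The piecewise definition collapses to a single clamp expression in plain Rust (standalone sketch, not the crate's implementation):

```rust
/// Scalar reference for hardswish: x · clamp(x + 3, 0, 6) / 6,
/// which is relu6(x + 3) / 6 scaled by x.
fn hardswish_scalar(x: f32) -> f32 {
    x * (x + 3.0).clamp(0.0, 6.0) / 6.0
}
```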
§Performance
More efficient than swish as it uses only multiply/divide operations instead of expensive exponential functions. Ideal for inference on resource-constrained devices.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-4.0, -3.0, 0.0, 3.0, 4.0]);
let result = v.hardswish()?;
// Piece-wise linear behavior
assert_eq!(result.as_slice()[0], 0.0); // x ≤ -3 → 0
assert_eq!(result.as_slice()[1], 0.0); // x = -3 → 0
assert_eq!(result.as_slice()[2], 0.0); // x = 0 → 0
assert_eq!(result.as_slice()[3], 3.0); // x = 3 → x
assert_eq!(result.as_slice()[4], 4.0); // x ≥ 3 → x
§Errors
Returns EmptyVector if the input vector is empty.
§References
- Howard et al. (2019): “Searching for MobileNetV3”
pub fn mish(&self) -> Result<Self>
Mish activation function
Applies the mish activation element-wise: mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))
Mish is a self-regularizing non-monotonic activation function that often outperforms ReLU and swish in computer vision tasks. It’s used in YOLOv4 and many modern architectures.
Properties:
- Smooth and non-monotonic (similar to swish)
- Self-regularizing: prevents dying neurons
- mish(0) = 0
- mish(x) ≈ x for large positive x (nearly linear)
- mish(x) ≈ 0 for large negative x
- Bounded below by ≈ -0.31 at x ≈ -1.19
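The x · tanh(softplus(x)) composition reads naturally in plain Rust (standalone sketch; `ln_1p` gives an accurate softplus):

```rust
/// Scalar reference for mish: x · tanh(softplus(x)),
/// with softplus(x) = ln(1 + e^x) computed via `ln_1p`.
fn mish_scalar(x: f32) -> f32 {
    x * x.exp().ln_1p().tanh()
}
```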
§Performance
Compute-bound operation requiring exponential, logarithm, and tanh. More expensive than ReLU/swish but often provides better accuracy.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.mish()?;
// Mish is smooth and self-gated
assert!(result.as_slice()[0] < 0.0); // Small negative output for negative inputs
assert!(result.as_slice()[2].abs() < 1e-5); // mish(0) = 0
assert!(result.as_slice()[4] > 1.5); // Large positive → near linear
§Errors
Returns EmptyVector if the input vector is empty.
§References
- Misra (2019): “Mish: A Self Regularized Non-Monotonic Neural Activation Function”
impl Vector<f32>
pub fn softmax(&self) -> Result<Self>
Softmax activation function
Converts a vector of real values into a probability distribution. Formula: softmax(x)[i] = exp(x[i] - max(x)) / sum(exp(x[j] - max(x)))
Uses the numerically stable version with max subtraction to prevent overflow. The output is a probability distribution: all values in [0, 1] and sum to 1.
This is the standard activation function for multi-class classification in neural networks.
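The max-subtraction trick can be sketched in plain Rust (a standalone reference, not the SIMD implementation):

```rust
/// Numerically stable softmax: subtracting max(x) before exp()
/// prevents overflow without changing the result.
fn softmax_scalar(xs: &[f32]) -> Vec<f32> {
    let m = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - m).exp()).collect();
    let total: f32 = exps.iter().sum();
    exps.into_iter().map(|e| e / total).collect()
}
```

Without the subtraction, inputs like `[1000.0, 1001.0]` would overflow `exp` to infinity; with it, the largest exponent is exactly 0.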
§Examples
use trueno::Vector;
let logits = Vector::from_slice(&[1.0, 2.0, 3.0]);
let probs = logits.softmax()?;
// Verify sum ≈ 1
let sum: f32 = probs.as_slice().iter().sum();
assert!((sum - 1.0).abs() < 1e-5);
// Verify all values in [0, 1]
for &p in probs.as_slice() {
assert!(p >= 0.0 && p <= 1.0);
}
§Empty vectors
Returns EmptyVector error for empty vectors (cannot compute softmax).
pub fn log_softmax(&self) -> Result<Self>
Log-softmax activation function
Computes the logarithm of the softmax function in a numerically stable way. Formula: log_softmax(x)[i] = x[i] - max(x) - log(sum(exp(x[j] - max(x))))
This is more numerically stable than computing log(softmax(x)) and is commonly used in neural networks for computing cross-entropy loss.
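The formula above is the log-sum-exp trick; a plain-Rust sketch (standalone helper, for illustration):

```rust
/// Stable log-softmax: x[i] − max(x) − ln(Σ exp(x[j] − max(x))).
fn log_softmax_scalar(xs: &[f32]) -> Vec<f32> {
    let m = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let lse = xs.iter().map(|&x| (x - m).exp()).sum::<f32>().ln();
    xs.iter().map(|&x| x - m - lse).collect()
}
```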
§Examples
use trueno::Vector;
let logits = Vector::from_slice(&[1.0, 2.0, 3.0]);
let log_probs = logits.log_softmax()?;
// Verify exp(log_softmax) = softmax
let probs_from_log: Vec<f32> = log_probs.as_slice().iter().map(|&x| x.exp()).collect();
let sum: f32 = probs_from_log.iter().sum();
assert!((sum - 1.0).abs() < 1e-5);
§Empty vectors
Returns EmptyVector error for empty vectors.
pub fn relu(&self) -> Result<Self>
ReLU (Rectified Linear Unit) activation function
Computes the element-wise ReLU: max(0, x). ReLU is one of the most widely used activation functions in neural networks.
§Formula
relu(x)[i] = max(0, x[i])
           = x[i] if x[i] > 0
           = 0    otherwise
§Properties
- Non-linearity: Introduces non-linearity while preserving linearity for positive values
- Sparsity: Produces exactly zero for negative inputs (sparse activations)
- Gradient: Derivative is 1 for positive inputs, 0 for negative (solves vanishing gradient)
- Computational efficiency: Simple max operation, no exponentials
§Applications
- Deep neural networks: Default activation for hidden layers
- Convolutional networks: Standard activation in CNNs
- Feature learning: Encourages sparse representations
§Performance
This operation is memory-bound. SIMD provides modest speedups since the computation (comparison and selection) is simpler than memory access.
§Errors
Returns EmptyVector if the input vector is empty.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, -1.0, 0.0, 1.0, 2.0]);
let result = v.relu()?;
assert_eq!(result.as_slice(), &[0.0, 0.0, 0.0, 1.0, 2.0]);
pub fn sigmoid(&self) -> Result<Self>
Sigmoid (logistic) activation function
Computes the element-wise sigmoid: σ(x) = 1 / (1 + e^(-x)). Sigmoid is a classic activation function that squashes inputs to the range (0, 1).
§Formula
sigmoid(x)[i] = 1 / (1 + exp(-x[i]))
              = exp(x[i]) / (1 + exp(x[i]))
§Properties
- Bounded output: Maps all inputs to (0, 1) range
- Smooth: Infinitely differentiable (C^∞)
- Symmetric: σ(-x) = 1 - σ(x)
- Derivative: σ’(x) = σ(x) * (1 - σ(x))
- Interpretable: Output can be interpreted as probability
§Applications
- Binary classification: Final layer for binary output (0 or 1)
- Logistic regression: Traditional ML algorithm
- Gating mechanisms: LSTM/GRU gates (input, forget, output)
- Attention mechanisms: Soft attention weights
§Numerical Considerations
For very large negative inputs (x < -50), exp(-x) overflows to infinity. However, sigmoid(x) approaches 0, so we return 0 for numerical stability. For very large positive inputs (x > 50), exp(-x) underflows to 0, and sigmoid(x) approaches 1.
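A common way to realize this stability in scalar code is to branch on the sign so that `exp` only ever sees a non-positive argument (a standalone sketch, not necessarily how this crate implements it):

```rust
/// Branch on sign: exp() is only taken of a non-positive argument,
/// so it never overflows for large |x|.
fn sigmoid_stable(x: f32) -> f32 {
    if x >= 0.0 {
        1.0 / (1.0 + (-x).exp())
    } else {
        let e = x.exp(); // e ∈ (0, 1], no overflow
        e / (1.0 + e)
    }
}
```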
§Performance
This operation is compute-bound due to the exp() operation. SIMD provides modest speedups, but the exponential is the bottleneck.
§Errors
Returns EmptyVector if the input vector is empty.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-2.0, 0.0, 2.0]);
let result = v.sigmoid()?;
// sigmoid(-2) ≈ 0.119, sigmoid(0) = 0.5, sigmoid(2) ≈ 0.881
assert!((result.as_slice()[0] - 0.119).abs() < 0.001);
assert!((result.as_slice()[1] - 0.5).abs() < 0.001);
assert!((result.as_slice()[2] - 0.881).abs() < 0.001);
impl Vector<f32>
pub fn add(&self, other: &Self) -> Result<Self>
Element-wise addition
§Performance
Auto-selects the best available backend:
- AVX2: ~4x faster than scalar for 1K+ elements
- GPU: ~50x faster than scalar for 10M+ elements
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[1.0, 2.0, 3.0]);
let b = Vector::from_slice(&[4.0, 5.0, 6.0]);
let result = a.add(&b)?;
assert_eq!(result.as_slice(), &[5.0, 7.0, 9.0]);
§Errors
Returns TruenoError::SizeMismatch if vectors have different lengths.
pub fn sub(&self, other: &Self) -> Result<Self>
Element-wise subtraction
§Performance
Auto-selects the best available backend:
- AVX2: ~4x faster than scalar for 1K+ elements
- GPU: ~50x faster than scalar for 10M+ elements
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[5.0, 7.0, 9.0]);
let b = Vector::from_slice(&[1.0, 2.0, 3.0]);
let result = a.sub(&b)?;
assert_eq!(result.as_slice(), &[4.0, 5.0, 6.0]);
§Errors
Returns TruenoError::SizeMismatch if vectors have different lengths.
pub fn mul(&self, other: &Self) -> Result<Self>
Element-wise multiplication
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[2.0, 3.0, 4.0]);
let b = Vector::from_slice(&[5.0, 6.0, 7.0]);
let result = a.mul(&b)?;
assert_eq!(result.as_slice(), &[10.0, 18.0, 28.0]);
pub fn div(&self, other: &Self) -> Result<Self>
Element-wise division
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[10.0, 20.0, 30.0]);
let b = Vector::from_slice(&[2.0, 4.0, 5.0]);
let result = a.div(&b)?;
assert_eq!(result.as_slice(), &[5.0, 5.0, 6.0]);
pub fn scale(&self, scalar: f32) -> Result<Vector<f32>>
Scalar multiplication (scale all elements by a scalar value)
Returns a new vector where each element is multiplied by the scalar.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
let result = v.scale(2.0)?;
assert_eq!(result.as_slice(), &[2.0, 4.0, 6.0, 8.0]);
§Scaling by Zero
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0]);
let result = v.scale(0.0)?;
assert_eq!(result.as_slice(), &[0.0, 0.0, 0.0]);
§Negative Scaling
use trueno::Vector;
let v = Vector::from_slice(&[1.0, -2.0, 3.0]);
let result = v.scale(-2.0)?;
assert_eq!(result.as_slice(), &[-2.0, 4.0, -6.0]);
pub fn fma(&self, b: &Vector<f32>, c: &Vector<f32>) -> Result<Vector<f32>>
Fused multiply-add: result[i] = self[i] * b[i] + c[i]
Computes element-wise fused multiply-add operation. On hardware with FMA support (AVX2, NEON), this is a single instruction with better performance and numerical accuracy (no intermediate rounding). On platforms without FMA (SSE2, WASM), uses separate multiply and add operations.
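The scalar building block here is `f32::mul_add`, which rounds only once and lowers to a hardware FMA instruction on targets that support it. A plain-Rust sketch of the element-wise operation (standalone helper, for illustration):

```rust
/// Element-wise fused multiply-add: out[i] = a[i] * b[i] + c[i].
fn fma_scalar(a: &[f32], b: &[f32], c: &[f32]) -> Vec<f32> {
    assert!(a.len() == b.len() && b.len() == c.len());
    a.iter()
        .zip(b.iter())
        .zip(c.iter())
        .map(|((&x, &y), &z)| x.mul_add(y, z)) // single rounding per element
        .collect()
}
```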
§Arguments
b - The second vector to multiply with
c - The vector to add to the product
§Returns
A new vector where each element is self[i] * b[i] + c[i]
§Errors
Returns SizeMismatch if vector lengths don’t match
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[2.0, 3.0, 4.0]);
let b = Vector::from_slice(&[5.0, 6.0, 7.0]);
let c = Vector::from_slice(&[1.0, 2.0, 3.0]);
let result = a.fma(&b, &c)?;
assert_eq!(result.as_slice(), &[11.0, 20.0, 31.0]); // [2*5+1, 3*6+2, 4*7+3]
§Use Cases
- Neural networks: matrix multiplication, backpropagation
- Scientific computing: polynomial evaluation, numerical integration
- Graphics: transformation matrices, shader computations
- Physics simulations: force calculations, particle systems
impl Vector<f32>
pub fn zscore(&self) -> Result<Self>
Z-score normalization (standardization)
Transforms the vector to have mean = 0 and standard deviation = 1. Each element is transformed as: z[i] = (x[i] - μ) / σ
This is a fundamental preprocessing step in machine learning and statistics, ensuring features have comparable scales and are centered around zero.
§Performance
Uses optimized SIMD implementations via mean() and stddev(), then applies element-wise operations (sub, scale) which also use SIMD.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0, 5.0]);
let z = v.zscore()?;
// Verify mean ≈ 0
let mean = z.mean()?;
assert!(mean.abs() < 1e-5);
// Verify stddev ≈ 1
let std = z.stddev()?;
assert!((std - 1.0).abs() < 1e-5);
§Empty vectors
Returns EmptyVector error for empty vectors (cannot compute mean/stddev).
§Division by zero
Returns DivisionByZero error if the vector has zero standard deviation (i.e., all elements are identical/constant).
use trueno::{Vector, TruenoError};
let v = Vector::from_slice(&[5.0, 5.0, 5.0]); // Constant
assert!(matches!(v.zscore(), Err(TruenoError::DivisionByZero)));
pub fn minmax_normalize(&self) -> Result<Self>
Min-max normalization (scaling to [0, 1] range)
Transforms the vector so that the minimum value becomes 0 and the maximum value becomes 1, with all other values scaled proportionally. Formula: x'[i] = (x[i] - min) / (max - min)
This is a fundamental preprocessing technique in machine learning, especially for algorithms sensitive to feature magnitudes (e.g., neural networks, k-NN).
§Performance
Uses optimized SIMD implementations via min() and max() operations, then applies element-wise transformation.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0, 5.0]);
let normalized = v.minmax_normalize()?;
// Verify range [0, 1]
let min = normalized.min()?;
let max = normalized.max()?;
assert!((min - 0.0).abs() < 1e-5);
assert!((max - 1.0).abs() < 1e-5);
§Empty vectors
Returns EmptyVector error for empty vectors (cannot compute min/max).
§Division by zero
Returns DivisionByZero error if the vector has all identical elements (i.e., min = max, causing division by zero in the normalization formula).
use trueno::{Vector, TruenoError};
let v = Vector::from_slice(&[5.0, 5.0, 5.0]); // Constant
assert!(matches!(v.minmax_normalize(), Err(TruenoError::DivisionByZero)));
pub fn layer_norm(&self, gamma: &Self, beta: &Self, eps: f32) -> Result<Self>
Layer normalization with learnable parameters (Issue #61: ML primitives)
Applies layer normalization: y = gamma * (x - mean) / sqrt(variance + eps) + beta
This is a fundamental normalization technique in transformers and other modern neural network architectures. Unlike batch normalization, layer norm normalizes across the feature dimension, making it suitable for sequence models.
§Arguments
gamma - Scale parameter (typically learned, initialized to 1.0)
beta - Shift parameter (typically learned, initialized to 0.0)
eps - Small constant for numerical stability (typically 1e-5 or 1e-6)
§Returns
Normalized vector with the same shape as input
§Errors
Returns SizeMismatch if gamma or beta have different lengths than self
Returns EmptyVector if input is empty
§Example
use trueno::Vector;
let x = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
let gamma = Vector::from_slice(&[1.0, 1.0, 1.0, 1.0]); // Scale = 1
let beta = Vector::from_slice(&[0.0, 0.0, 0.0, 0.0]); // Shift = 0
let y = x.layer_norm(&gamma, &beta, 1e-5).unwrap();
// Output should be approximately standardized (mean ≈ 0, std ≈ 1)
let mean: f32 = y.as_slice().iter().sum::<f32>() / y.len() as f32;
assert!(mean.abs() < 1e-5);
§Performance
Single-pass computation using Welford’s algorithm for numerical stability. Time complexity: O(n), Space complexity: O(n).
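Welford's update can be sketched in plain Rust (a standalone reference for the mean/variance pass, not this method's actual implementation):

```rust
/// Single-pass Welford update: running mean plus the sum of squared
/// deviations (M2), avoiding the cancellation of the naive
/// E[X²] − μ² formula.
fn welford_mean_var(xs: &[f32]) -> (f32, f32) {
    let (mut mean, mut m2) = (0.0_f32, 0.0_f32);
    for (i, &x) in xs.iter().enumerate() {
        let delta = x - mean;
        mean += delta / (i + 1) as f32;
        m2 += delta * (x - mean);
    }
    (mean, m2 / xs.len() as f32) // population variance; NaN for empty input
}
```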
pub fn layer_norm_simple(&self, eps: f32) -> Result<Self>
Layer normalization without learnable parameters
Simplified version that just standardizes the input: y = (x - mean) / sqrt(variance + eps)
This is equivalent to calling layer_norm with gamma=1 and beta=0.
§Arguments
eps- Small constant for numerical stability (typically 1e-5)
§Example
use trueno::Vector;
let x = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
let y = x.layer_norm_simple(1e-5).unwrap();
// Output should be standardized
let mean: f32 = y.as_slice().iter().sum::<f32>() / y.len() as f32;
assert!(mean.abs() < 1e-5);
pub fn normalize(&self) -> Result<Vector<f32>>
Normalize the vector to unit length (L2 norm = 1)
Returns a new vector in the same direction but with magnitude 1.
§Errors
Returns TruenoError::DivisionByZero if the vector has zero norm (cannot normalize zero vector).
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.0, 4.0]);
let unit = v.normalize().unwrap();
// Result is [0.6, 0.8] (a unit vector)
assert!((unit.as_slice()[0] - 0.6).abs() < 1e-5);
assert!((unit.as_slice()[1] - 0.8).abs() < 1e-5);
// Verify it's a unit vector (norm = 1)
assert!((unit.norm_l2().unwrap() - 1.0).abs() < 1e-5);
§Zero Vector Error
use trueno::{Vector, TruenoError};
let v = Vector::from_slice(&[0.0, 0.0]);
assert!(matches!(v.normalize(), Err(TruenoError::DivisionByZero)));
impl Vector<f32>
pub fn norm_l2(&self) -> Result<f32>
L2 norm (Euclidean norm)
Computes the Euclidean length of the vector: sqrt(sum(a[i]^2)). This is mathematically equivalent to sqrt(dot(self, self)).
§Performance
Uses optimized SIMD implementations via the dot product operation.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.0, 4.0]);
let norm = v.norm_l2()?;
assert!((norm - 5.0).abs() < 1e-5); // sqrt(3^2 + 4^2) = 5
§Empty vectors
Returns 0.0 for empty vectors (consistent with the mathematical definition).
use trueno::Vector;
let v: Vector<f32> = Vector::from_slice(&[]);
assert_eq!(v.norm_l2()?, 0.0);
pub fn norm_l1(&self) -> Result<f32>
Compute the L1 norm (Manhattan norm) of the vector
Returns the sum of absolute values: ||v||₁ = sum(|v[i]|)
The L1 norm is used in:
- Machine learning (L1 regularization, Lasso regression)
- Distance metrics (Manhattan distance)
- Sparse modeling and feature selection
- Signal processing
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.0, -4.0, 5.0]);
let norm = v.norm_l1().unwrap();
// |3| + |-4| + |5| = 12
assert!((norm - 12.0).abs() < 1e-5);
§Empty Vector
use trueno::Vector;
let v: Vector<f32> = Vector::from_slice(&[]);
assert_eq!(v.norm_l1().unwrap(), 0.0);
pub fn norm_linf(&self) -> Result<f32>
Compute the L∞ norm (infinity norm / max norm) of the vector
Returns the maximum absolute value: ||v||∞ = max(|v[i]|)
The L∞ norm is used in:
- Numerical analysis (error bounds, stability analysis)
- Optimization (Chebyshev approximation)
- Signal processing (peak detection)
- Distance metrics (Chebyshev distance)
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.0, -7.0, 5.0, -2.0]);
let norm = v.norm_linf().unwrap();
// max(|3|, |-7|, |5|, |-2|) = 7
assert!((norm - 7.0).abs() < 1e-5);
§Empty Vector
use trueno::Vector;
let v: Vector<f32> = Vector::from_slice(&[]);
assert_eq!(v.norm_linf().unwrap(), 0.0);
impl Vector<f32>
pub fn sum_kahan(&self) -> Result<f32>
Kahan summation (numerically stable sum)
Uses the Kahan summation algorithm to reduce floating-point rounding errors when summing many numbers. This is more accurate than the standard sum() method for vectors with many elements or elements of vastly different magnitudes.
§Performance
Note: Kahan summation is inherently sequential and cannot be effectively parallelized with SIMD. All backends use the scalar implementation.
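The algorithm itself fits in a few lines of plain Rust (a standalone reference, not this crate's code); the compensation variable `c` carries the low-order bits lost at each addition:

```rust
/// Kahan (compensated) summation.
fn kahan_sum(xs: &[f32]) -> f32 {
    let (mut sum, mut c) = (0.0_f32, 0.0_f32);
    for &x in xs {
        let y = x - c;     // apply the stored compensation
        let t = sum + y;   // big + small: low bits of y may be lost
        c = (t - sum) - y; // recover exactly what was lost
        sum = t;
    }
    sum
}
```

Summing 100,000 copies of 1e-4 shows the benefit: the compensated result stays close to the true value 10.0, while a naive f32 running sum drifts.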
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
assert_eq!(v.sum_kahan()?, 10.0);
pub fn sum_of_squares(&self) -> Result<f32>
Sum of squared elements
Computes the sum of squares: sum(a[i]^2). This is the building block for computing L2 norm and variance.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0]);
let sum_sq = v.sum_of_squares()?;
assert_eq!(sum_sq, 14.0); // 1^2 + 2^2 + 3^2 = 1 + 4 + 9 = 14
§Empty vectors
Returns 0.0 for empty vectors.
pub fn mean(&self) -> Result<f32>
Arithmetic mean (average)
Computes the arithmetic mean of all elements: sum(a[i]) / n.
§Performance
Uses optimized SIMD sum() implementation, then divides by length.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
let avg = v.mean()?;
assert!((avg - 2.5).abs() < 1e-5); // (1+2+3+4)/4 = 2.5
§Empty vectors
Returns an error for empty vectors (division by zero).
use trueno::{Vector, TruenoError};
let v: Vector<f32> = Vector::from_slice(&[]);
assert!(matches!(v.mean(), Err(TruenoError::EmptyVector)));
pub fn variance(&self) -> Result<f32>
Population variance
Computes the population variance: Var(X) = E[(X - μ)²] = E[X²] - μ². Uses the computational formula to avoid two passes over the data.
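The computational formula reads as follows in plain Rust (a standalone sketch; note it can lose precision when μ² ≈ E[X²], which is where Welford's method helps):

```rust
/// Single-pass computational formula: Var(X) = E[X²] − μ².
fn variance_scalar(xs: &[f32]) -> f32 {
    let n = xs.len() as f32;
    let mean = xs.iter().sum::<f32>() / n;
    let mean_sq = xs.iter().map(|&x| x * x).sum::<f32>() / n;
    mean_sq - mean * mean
}
```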
§Performance
Uses optimized SIMD implementations via sum_of_squares() and mean().
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0, 5.0]);
let var = v.variance()?;
assert!((var - 2.0).abs() < 1e-5); // Population variance
§Empty vectors
Returns an error for empty vectors.
use trueno::{Vector, TruenoError};
let v: Vector<f32> = Vector::from_slice(&[]);
assert!(matches!(v.variance(), Err(TruenoError::EmptyVector)));
pub fn stddev(&self) -> Result<f32>
Population standard deviation
Computes the population standard deviation: σ = sqrt(Var(X)). This is the square root of the variance.
§Performance
Uses optimized SIMD implementations via variance().
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0, 5.0]);
let sd = v.stddev()?;
assert!((sd - 1.4142135).abs() < 1e-5); // sqrt(2) ≈ 1.414
§Empty vectors
Returns an error for empty vectors.
use trueno::{Vector, TruenoError};
let v: Vector<f32> = Vector::from_slice(&[]);
assert!(matches!(v.stddev(), Err(TruenoError::EmptyVector)));
pub fn covariance(&self, other: &Self) -> Result<f32>
Population covariance between two vectors
Computes the population covariance: Cov(X,Y) = E[(X - μx)(Y - μy)] Uses the computational formula: Cov(X,Y) = E[XY] - μx·μy
§Performance
Uses optimized SIMD implementations via dot() and mean().
§Examples
use trueno::Vector;
let x = Vector::from_slice(&[1.0, 2.0, 3.0]);
let y = Vector::from_slice(&[2.0, 4.0, 6.0]);
let cov = x.covariance(&y)?;
assert!((cov - 1.333).abs() < 0.01); // Positive covariance (y = 2x)
§Size mismatch
Returns an error if vectors have different lengths.
use trueno::{Vector, TruenoError};
let x = Vector::from_slice(&[1.0, 2.0]);
let y = Vector::from_slice(&[1.0, 2.0, 3.0]);
assert!(matches!(x.covariance(&y), Err(TruenoError::SizeMismatch { .. })));
§Empty vectors
Returns an error for empty vectors.
use trueno::{Vector, TruenoError};
let x: Vector<f32> = Vector::from_slice(&[]);
let y: Vector<f32> = Vector::from_slice(&[]);
assert!(matches!(x.covariance(&y), Err(TruenoError::EmptyVector)));
pub fn correlation(&self, other: &Self) -> Result<f32>
Pearson correlation coefficient
Computes the Pearson correlation coefficient: ρ(X,Y) = Cov(X,Y) / (σx·σy) Normalized covariance in range [-1, 1].
§Performance
Uses optimized SIMD implementations via covariance() and stddev().
§Examples
use trueno::Vector;
let x = Vector::from_slice(&[1.0, 2.0, 3.0]);
let y = Vector::from_slice(&[2.0, 4.0, 6.0]);
let corr = x.correlation(&y)?;
assert!((corr - 1.0).abs() < 1e-5); // Perfect positive correlation
§Size mismatch
Returns an error if vectors have different lengths.
§Division by zero
Returns DivisionByZero error if either vector has zero standard deviation (i.e., is constant).
use trueno::{Vector, TruenoError};
let x = Vector::from_slice(&[5.0, 5.0, 5.0]); // Constant
let y = Vector::from_slice(&[1.0, 2.0, 3.0]);
assert!(matches!(x.correlation(&y), Err(TruenoError::DivisionByZero)));
impl Vector<f32>
pub fn dot(&self, other: &Self) -> Result<f32>
Dot product
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[1.0, 2.0, 3.0]);
let b = Vector::from_slice(&[4.0, 5.0, 6.0]);
let result = a.dot(&b)?;
assert_eq!(result, 32.0); // 1*4 + 2*5 + 3*6 = 4 + 10 + 18 = 32
pub fn sum(&self) -> Result<f32>
Sum all elements
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
assert_eq!(v.sum()?, 10.0);
pub fn max(&self) -> Result<f32>
Find maximum element
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 5.0, 3.0, 2.0]);
assert_eq!(v.max()?, 5.0);
§Errors
Returns TruenoError::InvalidInput if vector is empty.
Sourcepub fn min(&self) -> Result<f32>
pub fn min(&self) -> Result<f32>
Find minimum value in the vector
Returns the smallest element in the vector using SIMD optimization.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 5.0, 3.0, 2.0]);
assert_eq!(v.min()?, 1.0);
§Errors
Returns TruenoError::InvalidInput if vector is empty.
Sourcepub fn argmax(&self) -> Result<usize>
pub fn argmax(&self) -> Result<usize>
Find index of maximum value in the vector
Returns the index of the first occurrence of the maximum value using SIMD optimization.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 5.0, 3.0, 2.0]);
assert_eq!(v.argmax()?, 1); // max value 5.0 is at index 1
§Errors
Returns TruenoError::InvalidInput if vector is empty.
Sourcepub fn argmin(&self) -> Result<usize>
pub fn argmin(&self) -> Result<usize>
Find index of minimum value in the vector
Returns the index of the first occurrence of the minimum value using SIMD optimization.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 5.0, 3.0, 2.0]);
assert_eq!(v.argmin()?, 0); // min value 1.0 is at index 0
§Errors
Returns TruenoError::InvalidInput if vector is empty.
Source§impl Vector<f32>
impl Vector<f32>
Sourcepub fn floor(&self) -> Result<Vector<f32>>
pub fn floor(&self) -> Result<Vector<f32>>
Computes the floor (round down to nearest integer) of each element.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.7, -2.3, 5.0]);
let result = v.floor()?;
assert_eq!(result.as_slice(), &[3.0, -3.0, 5.0]);
pub fn ceil(&self) -> Result<Vector<f32>>
Computes the ceiling (round up to nearest integer) of each element.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.2, -2.7, 5.0]);
let result = v.ceil()?;
assert_eq!(result.as_slice(), &[4.0, -2.0, 5.0]);
pub fn round(&self) -> Result<Vector<f32>>
Rounds each element to the nearest integer.
Uses “round half away from zero” strategy:
- 0.5 rounds to 1.0, 1.5 rounds to 2.0, -1.5 rounds to -2.0, etc.
- Positive halfway cases round up, negative halfway cases round down.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.2, 3.7, -2.3, -2.8]);
let result = v.round()?;
assert_eq!(result.as_slice(), &[3.0, 4.0, -2.0, -3.0]);
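The documented "round half away from zero" strategy matches std's `f32::round`, so the halfway cases can be checked directly against the std method:

```rust
fn main() {
    // std f32::round implements the same "round half away from zero"
    // strategy documented for Vector::round.
    assert_eq!(0.5_f32.round(), 1.0);
    assert_eq!(1.5_f32.round(), 2.0);
    assert_eq!((-1.5_f32).round(), -2.0);
    // Not banker's rounding: 2.5 goes to 3.0, not 2.0.
    assert_eq!(2.5_f32.round(), 3.0);
}
```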
pub fn trunc(&self) -> Result<Vector<f32>>
Truncates each element toward zero (removes fractional part).
Truncation always moves toward zero:
- Positive values: equivalent to floor() (e.g., 3.7 → 3.0)
- Negative values: equivalent to ceil() (e.g., -3.7 → -3.0)
- This differs from floor() which always rounds down
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.7, -2.7, 5.0]);
let result = v.trunc()?;
assert_eq!(result.as_slice(), &[3.0, -2.0, 5.0]);
pub fn fract(&self) -> Result<Vector<f32>>
Returns the fractional part of each element.
The fractional part has the same sign as the original value:
- Positive: fract(3.7) = 0.7
- Negative: fract(-3.7) = -0.7
- Decomposition property: x = trunc(x) + fract(x)
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.7, -2.3, 5.0]);
let result = v.fract()?;
// Fractional parts: 0.7, -0.3, 0.0
assert!((result.as_slice()[0] - 0.7).abs() < 1e-5);
assert!((result.as_slice()[1] - (-0.3)).abs() < 1e-5);
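The decomposition property x = trunc(x) + fract(x), and the sign-preserving behavior of the fractional part, can be verified with the std `f32` methods whose semantics the documentation above describes:

```rust
fn main() {
    // Decomposition property: x = trunc(x) + fract(x).
    for &x in &[3.7_f32, -2.3, 5.0] {
        assert!((x - (x.trunc() + x.fract())).abs() < 1e-6);
    }
    // fract keeps the sign of the input.
    assert!((3.7_f32.fract() - 0.7).abs() < 1e-6);
    assert!(((-3.7_f32).fract() + 0.7).abs() < 1e-6);
}
```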
pub fn signum(&self) -> Result<Vector<f32>>
Returns the sign of each element.
Returns:
- 1.0 if the value is positive (including +0.0 and +∞)
- -1.0 if the value is negative (including -0.0 and -∞)
- NaN if the value is NaN
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[5.0, -3.0, 0.0, -0.0]);
let result = v.signum()?;
assert_eq!(result.as_slice(), &[1.0, -1.0, 1.0, -1.0]);
pub fn copysign(&self, sign: &Self) -> Result<Vector<f32>>
Returns a vector with the magnitude of self and the sign of sign.
For each element pair, takes the magnitude from self and the sign from sign.
Equivalent to abs(self[i]) with the sign of sign[i].
§Arguments
- sign: Vector providing the sign for each element
§Errors
Returns TruenoError::SizeMismatch if vectors have different lengths.
§Examples
use trueno::Vector;
let magnitude = Vector::from_slice(&[5.0, 3.0, 2.0]);
let sign = Vector::from_slice(&[-1.0, 1.0, -1.0]);
let result = magnitude.copysign(&sign)?;
assert_eq!(result.as_slice(), &[-5.0, 3.0, -2.0]);
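The per-element semantics (magnitude from the receiver, sign from the argument) mirror std's `f32::copysign`, including the treatment of negative zero:

```rust
fn main() {
    // std f32::copysign: magnitude from self, sign from the argument.
    assert_eq!(5.0_f32.copysign(-1.0), -5.0);
    assert_eq!((-3.0_f32).copysign(1.0), 3.0);
    // Negative zero counts as a negative sign.
    assert_eq!(2.0_f32.copysign(-0.0), -2.0);
}
```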
pub fn minimum(&self, other: &Self) -> Result<Vector<f32>>
Element-wise minimum of two vectors.
Returns a new vector where each element is the minimum of the corresponding elements from self and other.
NaN handling: Prefers non-NaN values (NAN.min(x) = x).
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[1.0, 5.0, 3.0]);
let b = Vector::from_slice(&[2.0, 3.0, 4.0]);
let result = a.minimum(&b)?;
assert_eq!(result.as_slice(), &[1.0, 3.0, 3.0]);
pub fn maximum(&self, other: &Self) -> Result<Vector<f32>>
Element-wise maximum of two vectors.
Returns a new vector where each element is the maximum of the corresponding elements from self and other.
NaN handling: Prefers non-NaN values (NAN.max(x) = x).
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[1.0, 5.0, 3.0]);
let b = Vector::from_slice(&[2.0, 3.0, 4.0]);
let result = a.maximum(&b)?;
assert_eq!(result.as_slice(), &[2.0, 5.0, 4.0]);
pub fn neg(&self) -> Result<Vector<f32>>
Element-wise negation (unary minus).
Returns a new vector where each element is the negation of the corresponding element from self.
Properties: Double negation is identity: -(-x) = x
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[1.0, -2.0, 3.0]);
let result = a.neg()?;
assert_eq!(result.as_slice(), &[-1.0, 2.0, -3.0]);
impl Vector<f32>
Sourcepub fn exp(&self) -> Result<Vector<f32>>
pub fn exp(&self) -> Result<Vector<f32>>
Element-wise exponential: result[i] = e^x[i]
Computes the natural exponential (e^x) for each element. Uses Rust’s optimized f32::exp() method.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[0.0, 1.0, 2.0]);
let result = v.exp()?;
// result ≈ [1.0, 2.718, 7.389]
§Special Cases
- exp(0.0) returns 1.0
- exp(1.0) returns e ≈ 2.71828
- exp(-∞) returns 0.0
- exp(+∞) returns +∞
§Applications
- Machine learning: Softmax activation, sigmoid, exponential loss
- Statistics: Exponential distribution, log-normal distribution
- Physics: Radioactive decay, population growth models
- Signal processing: Exponential smoothing, envelope detection
- Numerical methods: Solving differential equations
Sourcepub fn ln(&self) -> Result<Vector<f32>>
pub fn ln(&self) -> Result<Vector<f32>>
Element-wise natural logarithm: result[i] = ln(x[i])
Computes the natural logarithm (base e) for each element. Uses Rust’s optimized f32::ln() method.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, std::f32::consts::E, std::f32::consts::E.powi(2)]);
let result = v.ln()?;
// result ≈ [0.0, 1.0, 2.0]
§Special Cases
- ln(1.0) returns 0.0
- ln(e) returns 1.0
- ln(x) for x < 0 returns NaN
- ln(0.0) returns -∞
- ln(+∞) returns +∞
§Applications
- Machine learning: Log loss, log-likelihood, softmax normalization
- Statistics: Log-normal distribution, log transformation for skewed data
- Information theory: Entropy calculation, mutual information
- Economics: Log returns, elasticity calculations
- Signal processing: Decibel conversion, log-frequency analysis
Sourcepub fn log2(&self) -> Result<Vector<f32>>
pub fn log2(&self) -> Result<Vector<f32>>
Element-wise base-2 logarithm: result[i] = log₂(x[i])
Computes the base-2 logarithm for each element. Uses Rust’s optimized f32::log2() method.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 4.0, 8.0]);
let result = v.log2()?;
// result ≈ [0.0, 1.0, 2.0, 3.0]
§Special Cases
- log2(1.0) returns 0.0
- log2(2.0) returns 1.0
- log2(x) for x < 0 returns NaN
- log2(0.0) returns -∞
- log2(+∞) returns +∞
§Applications
- Information theory: Entropy in bits, mutual information
- Computer science: Bit manipulation, binary search complexity
- Audio: Octave calculations, pitch detection
- Data compression: Huffman coding, arithmetic coding
Sourcepub fn log10(&self) -> Result<Vector<f32>>
pub fn log10(&self) -> Result<Vector<f32>>
Element-wise base-10 logarithm: result[i] = log₁₀(x[i])
Computes the base-10 (common) logarithm for each element. Uses Rust’s optimized f32::log10() method.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 10.0, 100.0, 1000.0]);
let result = v.log10()?;
// result ≈ [0.0, 1.0, 2.0, 3.0]
§Special Cases
- log10(1.0) returns 0.0
- log10(10.0) returns 1.0
- log10(x) for x < 0 returns NaN
- log10(0.0) returns -∞
- log10(+∞) returns +∞
§Applications
- Audio: Decibel calculations (dB = 20 * log10(amplitude))
- Chemistry: pH calculations (-log10(H+ concentration))
- Seismology: Richter scale
- Scientific notation: Order of magnitude calculations
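The decibel application listed above follows directly from the formula dB = 20 · log10(amplitude); a short std-only sketch:

```rust
fn main() {
    // Decibel conversion from an amplitude ratio: dB = 20 * log10(amplitude).
    let db = |amplitude: f32| 20.0 * amplitude.log10();
    assert!(db(1.0).abs() < 1e-4);            // unity gain -> 0 dB
    assert!((db(10.0) - 20.0).abs() < 1e-4);  // 10x amplitude -> +20 dB
    assert!((db(0.5) + 6.0206).abs() < 1e-3); // half amplitude -> about -6 dB
}
```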
Source§impl Vector<f32>
impl Vector<f32>
Sourcepub fn sinh(&self) -> Result<Vector<f32>>
pub fn sinh(&self) -> Result<Vector<f32>>
Computes the hyperbolic sine (sinh) of each element.
§Mathematical Definition
sinh(x) = (e^x - e^(-x)) / 2
§Properties
- Domain: (-∞, +∞)
- Range: (-∞, +∞)
- Odd function: sinh(-x) = -sinh(x)
- sinh(0) = 0
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[0.0, 1.0, -1.0]);
let result = v.sinh()?;
assert!((result.as_slice()[0] - 0.0).abs() < 1e-5);
pub fn cosh(&self) -> Result<Vector<f32>>
Computes the hyperbolic cosine (cosh) of each element.
§Mathematical Definition
cosh(x) = (e^x + e^(-x)) / 2
§Properties
- Domain: (-∞, +∞)
- Range: [1, +∞)
- Even function: cosh(-x) = cosh(x)
- cosh(0) = 1
- Always positive: cosh(x) ≥ 1 for all x
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[0.0, 1.0, -1.0]);
let result = v.cosh()?;
assert!((result.as_slice()[0] - 1.0).abs() < 1e-5);
pub fn tanh(&self) -> Result<Vector<f32>>
Computes the hyperbolic tangent (tanh) of each element.
§Mathematical Definition
tanh(x) = sinh(x) / cosh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
§Properties
- Domain: (-∞, +∞)
- Range: (-1, 1)
- Odd function: tanh(-x) = -tanh(x)
- tanh(0) = 0
- Bounded: -1 < tanh(x) < 1 for all x
- Commonly used as activation function in neural networks
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[0.0, 1.0, -1.0]);
let result = v.tanh()?;
assert!((result.as_slice()[0] - 0.0).abs() < 1e-5);
// All values are in range (-1, 1)
assert!(result.as_slice().iter().all(|&x| x > -1.0 && x < 1.0));
pub fn asinh(&self) -> Result<Vector<f32>>
Computes the inverse hyperbolic sine (asinh) of each element.
§Mathematical Definition
asinh(x) = ln(x + sqrt(x² + 1))
§Properties
- Domain: (-∞, +∞)
- Range: (-∞, +∞)
- Odd function: asinh(-x) = -asinh(x)
- asinh(0) = 0
- Inverse of sinh: asinh(sinh(x)) = x
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[0.0, 1.0, -1.0]);
let result = v.asinh()?;
assert!((result.as_slice()[0] - 0.0).abs() < 1e-5);
pub fn acosh(&self) -> Result<Vector<f32>>
Computes the inverse hyperbolic cosine (acosh) of each element.
§Mathematical Definition
acosh(x) = ln(x + sqrt(x² - 1))
§Properties
- Domain: [1, +∞)
- Range: [0, +∞)
- acosh(1) = 0
- Inverse of cosh: acosh(cosh(x)) = x for x >= 0
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0]);
let result = v.acosh()?;
assert!((result.as_slice()[0] - 0.0).abs() < 1e-5);
pub fn atanh(&self) -> Result<Vector<f32>>
Computes the inverse hyperbolic tangent (atanh) of each element.
- Domain: (-1, 1)
- Range: (-∞, +∞)
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[0.0, 0.5, -0.5]);
let result = v.atanh()?;
// atanh(0) = 0, atanh(0.5) ≈ 0.549, atanh(-0.5) ≈ -0.549
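On its domain, atanh satisfies the logarithmic identity atanh(x) = 0.5 · ln((1 + x) / (1 - x)); a std-only check of that identity:

```rust
fn main() {
    // atanh(x) = 0.5 * ln((1 + x) / (1 - x)) on the domain (-1, 1).
    for &x in &[0.0_f32, 0.5, -0.5, 0.9] {
        let via_ln = 0.5 * ((1.0 + x) / (1.0 - x)).ln();
        assert!((x.atanh() - via_ln).abs() < 1e-5);
    }
}
```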
impl Vector<f32>
Sourcepub fn sin(&self) -> Result<Vector<f32>>
pub fn sin(&self) -> Result<Vector<f32>>
Element-wise sine: result[i] = sin(x[i])
Computes the sine for each element (input in radians). Uses Rust’s optimized f32::sin() method.
§Examples
use trueno::Vector;
use std::f32::consts::PI;
let v = Vector::from_slice(&[0.0, PI / 2.0, PI]);
let result = v.sin()?;
// result ≈ [0.0, 1.0, 0.0]
§Special Cases
- sin(0) returns 0.0
- sin(π/2) returns 1.0
- sin(π) returns 0.0 (approximately)
- sin(-x) returns -sin(x) (odd function)
- Periodic with period 2π: sin(x + 2π) = sin(x)
§Applications
- Signal processing: Waveform generation, oscillators, modulation
- Physics: Harmonic motion, wave propagation, pendulums
- Audio: Synthesizers, tone generation, effects processing
- Graphics: Animation, rotation transformations, procedural generation
- Fourier analysis: Frequency decomposition, spectral analysis
Sourcepub fn cos(&self) -> Result<Vector<f32>>
pub fn cos(&self) -> Result<Vector<f32>>
Element-wise cosine: result[i] = cos(x[i])
Computes the cosine for each element (input in radians). Uses Rust’s optimized f32::cos() method.
§Examples
use trueno::Vector;
use std::f32::consts::PI;
let v = Vector::from_slice(&[0.0, PI / 2.0, PI]);
let result = v.cos()?;
// result ≈ [1.0, 0.0, -1.0]
§Special Cases
- cos(0) returns 1.0
- cos(π/2) returns 0.0 (approximately)
- cos(π) returns -1.0
- cos(-x) returns cos(x) (even function)
- Periodic with period 2π: cos(x + 2π) = cos(x)
- Relation to sine: cos(x) = sin(x + π/2)
§Applications
- Signal processing: Phase-shifted waveforms, I/Q modulation, quadrature signals
- Physics: Projectile motion, wave interference, damped oscillations
- Graphics: Rotation matrices, camera transforms, circular motion
- Audio: Stereo panning, spatial audio, frequency synthesis
- Engineering: Control systems, frequency response, AC circuits
Sourcepub fn tan(&self) -> Result<Vector<f32>>
pub fn tan(&self) -> Result<Vector<f32>>
Computes element-wise tangent (tan) of the vector.
Returns a new vector where each element is the tangent of the corresponding input element. tan(x) = sin(x) / cos(x)
§Returns
Ok(Vector<f32>): New vector with tan(x) for each element
§Properties
- Odd function: tan(-x) = -tan(x)
- Period: π (tan repeats every π, half the period of sin and cos)
- Undefined at x = π/2 + nπ (where n is any integer)
- tan(x) = sin(x) / cos(x)
- Range: (-∞, +∞)
§Performance
- Iterator map pattern for cache efficiency
- Leverages Rust’s optimized f32::tan()
- Auto-vectorized by LLVM on supporting platforms
§Examples
use trueno::Vector;
use std::f32::consts::PI;
let angles = Vector::from_slice(&[0.0, PI / 4.0, -PI / 4.0]);
let result = angles.tan()?;
// Result: [0.0, 1.0, -1.0] (approximately)
§Use Cases
- Trigonometry: Slope calculations, angle relationships
- Signal processing: Phase analysis, modulation
- Physics: Projectile trajectories, optics (Snell’s law angles)
- Graphics: Perspective projection, field of view calculations
- Engineering: Slope gradients, tangent lines to curves
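The π period of tan can be checked numerically against std's `f32::tan` and `f32::sin` (a sketch using std methods only):

```rust
use std::f32::consts::PI;

fn main() {
    // tan is periodic with period π: tan(x + π) = tan(x).
    for &x in &[0.1_f32, 0.5, 1.0] {
        assert!((x.tan() - (x + PI).tan()).abs() < 1e-4);
    }
    // sin alone needs the full 2π: sin(x + π) = -sin(x).
    assert!((0.5_f32.sin() + (0.5_f32 + PI).sin()).abs() < 1e-6);
}
```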
Sourcepub fn asin(&self) -> Result<Vector<f32>>
pub fn asin(&self) -> Result<Vector<f32>>
Computes element-wise arcsine (asin/sin⁻¹) of the vector.
Returns a new vector where each element is the inverse sine of the corresponding input element. This is the inverse function of sin: if y = sin(x), then x = asin(y).
§Returns
Ok(Vector<f32>): New vector with asin(x) for each element
§Properties
- Domain: [-1, 1] (inputs outside this range produce NaN)
- Range: [-π/2, π/2]
- Odd function: asin(-x) = -asin(x)
- Inverse relation: asin(sin(x)) = x for x ∈ [-π/2, π/2]
- asin(0) = 0
- asin(1) = π/2
- asin(-1) = -π/2
§Performance
- Iterator map pattern for cache efficiency
- Leverages Rust’s optimized f32::asin()
- Auto-vectorized by LLVM on supporting platforms
§Examples
use trueno::Vector;
use std::f32::consts::PI;
let values = Vector::from_slice(&[0.0, 0.5, 1.0]);
let result = values.asin()?;
// Result: [0.0, π/6, π/2] (approximately)
§Use Cases
- Physics: Calculating angles from sine values in mechanics, optics
- Signal processing: Phase recovery, demodulation
- Graphics: Inverse transformations, angle calculations
- Navigation: GPS calculations, spherical trigonometry
- Control systems: Inverse kinematics, servo positioning
Sourcepub fn acos(&self) -> Result<Vector<f32>>
pub fn acos(&self) -> Result<Vector<f32>>
Computes element-wise arccosine (acos/cos⁻¹) of the vector.
Returns a new vector where each element is the inverse cosine of the corresponding input element. This is the inverse function of cos: if y = cos(x), then x = acos(y).
§Returns
Ok(Vector<f32>): New vector with acos(x) for each element
§Properties
- Domain: [-1, 1] (inputs outside this range produce NaN)
- Range: [0, π]
- Symmetry: acos(-x) = π - acos(x)
- Inverse relation: acos(cos(x)) = x for x ∈ [0, π]
- acos(0) = π/2
- acos(1) = 0
- acos(-1) = π
§Performance
- Iterator map pattern for cache efficiency
- Leverages Rust’s optimized f32::acos()
- Auto-vectorized by LLVM on supporting platforms
§Examples
use trueno::Vector;
use std::f32::consts::PI;
let values = Vector::from_slice(&[0.0, 0.5, 1.0]);
let result = values.acos()?;
// Result: [π/2, π/3, 0.0] (approximately)
§Use Cases
- Physics: Angle calculations in mechanics, optics, reflections
- Signal processing: Phase analysis, correlation functions
- Graphics: View angle calculations, lighting models
- Navigation: Bearing calculations, great circle distances
- Robotics: Joint angle solving, orientation calculations
Sourcepub fn atan(&self) -> Result<Vector<f32>>
pub fn atan(&self) -> Result<Vector<f32>>
Computes element-wise arctangent (atan/tan⁻¹) of the vector.
Returns a new vector where each element is the inverse tangent of the corresponding input element. This is the inverse function of tan: if y = tan(x), then x = atan(y).
§Returns
Ok(Vector<f32>): New vector with atan(x) for each element
§Properties
- Domain: All real numbers (-∞, +∞)
- Range: (-π/2, π/2)
- Odd function: atan(-x) = -atan(x)
- Inverse relation: atan(tan(x)) = x for x ∈ (-π/2, π/2)
- atan(0) = 0
- atan(1) = π/4
- atan(-1) = -π/4
- lim(x→∞) atan(x) = π/2
- lim(x→-∞) atan(x) = -π/2
§Performance
- Iterator map pattern for cache efficiency
- Leverages Rust’s optimized f32::atan()
- Auto-vectorized by LLVM on supporting platforms
§Examples
use trueno::Vector;
use std::f32::consts::PI;
let values = Vector::from_slice(&[0.0, 1.0, -1.0]);
let result = values.atan()?;
// Result: [0.0, π/4, -π/4] (approximately)
§Use Cases
- Physics: Angle calculations from slopes, velocity components
- Signal processing: Phase unwrapping, FM demodulation
- Graphics: Rotation calculations, camera orientation
- Robotics: Inverse kinematics, steering angles
- Navigation: Heading calculations from coordinates
Source§impl Vector<f32>
impl Vector<f32>
Sourcepub fn sqrt(&self) -> Result<Vector<f32>>
pub fn sqrt(&self) -> Result<Vector<f32>>
Element-wise square root: result[i] = sqrt(self[i])
Computes the square root of each element. For negative values, returns NaN following IEEE 754 floating-point semantics.
§Returns
A new vector where each element is the square root of the corresponding input element
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[4.0, 9.0, 16.0, 25.0]);
let result = a.sqrt()?;
assert_eq!(result.as_slice(), &[2.0, 3.0, 4.0, 5.0]);
Negative values produce NaN:
use trueno::Vector;
let a = Vector::from_slice(&[-1.0, 4.0]);
let result = a.sqrt()?;
assert!(result.as_slice()[0].is_nan());
assert_eq!(result.as_slice()[1], 2.0);
§Use Cases
- Distance calculations: Euclidean distance computation
- Statistics: Standard deviation, RMS (root mean square)
- Machine learning: Normalization, gradient descent with adaptive learning rates
- Signal processing: Amplitude calculations, power spectrum analysis
- Physics simulations: Velocity from kinetic energy, wave propagation
Sourcepub fn recip(&self) -> Result<Vector<f32>>
pub fn recip(&self) -> Result<Vector<f32>>
Element-wise reciprocal: result[i] = 1 / self[i]
Computes the reciprocal (multiplicative inverse) of each element. For zero values, returns infinity following IEEE 754 floating-point semantics.
§Returns
A new vector where each element is the reciprocal of the corresponding input element
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[2.0, 4.0, 5.0, 10.0]);
let result = a.recip().unwrap();
assert_eq!(result.as_slice(), &[0.5, 0.25, 0.2, 0.1]);
Zero values produce infinity:
use trueno::Vector;
let a = Vector::from_slice(&[0.0, 2.0]);
let result = a.recip().unwrap();
assert!(result.as_slice()[0].is_infinite());
assert_eq!(result.as_slice()[1], 0.5);
§Use Cases
- Division optimization: a / b → a * recip(b) (multiplication is faster)
- Neural networks: Learning rate schedules, weight normalization
- Statistics: Harmonic mean calculations, inverse transformations
- Physics: Resistance (R = 1/G), optical power (P = 1/f)
- Signal processing: Frequency to period conversion, filter design
Sourcepub fn pow(&self, n: f32) -> Result<Vector<f32>>
pub fn pow(&self, n: f32) -> Result<Vector<f32>>
Element-wise power: result[i] = base[i]^n
Raises each element to the given power n.
Uses Rust’s optimized f32::powf() method.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[2.0, 3.0, 4.0]);
let squared = v.pow(2.0).unwrap();
assert_eq!(squared.as_slice(), &[4.0, 9.0, 16.0]);
let sqrt = v.pow(0.5).unwrap(); // Fractional power = root
§Special Cases
- x.pow(0.0) returns 1.0 for all x (even x = 0)
- x.pow(1.0) returns x (identity)
- x.pow(-1.0) returns 1/x (reciprocal)
- x.pow(0.5) returns sqrt(x) (square root)
§Applications
- Statistics: Power transformations (Box-Cox, Yeo-Johnson)
- Machine learning: Polynomial features, activation functions
- Physics: Inverse square law (1/r^2), power laws
- Signal processing: Power spectral density, root mean square
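The special cases listed above come from std's `f32::powf`, which the documentation says this method uses, and can be checked directly:

```rust
fn main() {
    // std f32::powf special cases, which element-wise pow inherits.
    assert!((0.0_f32.powf(0.0) - 1.0).abs() < 1e-6);   // x^0 = 1, even at x = 0
    assert!((7.0_f32.powf(1.0) - 7.0).abs() < 1e-6);   // identity
    assert!((4.0_f32.powf(-1.0) - 0.25).abs() < 1e-6); // reciprocal
    assert!((9.0_f32.powf(0.5) - 3.0).abs() < 1e-6);   // square root
}
```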
Source§impl Vector<f32>
impl Vector<f32>
Sourcepub fn abs(&self) -> Result<Vector<f32>>
pub fn abs(&self) -> Result<Vector<f32>>
Compute element-wise absolute value
Returns a new vector where each element is the absolute value of the corresponding input element.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[3.0, -4.0, 5.0, -2.0]);
let result = v.abs()?;
assert_eq!(result.as_slice(), &[3.0, 4.0, 5.0, 2.0]);
§Empty Vector
use trueno::Vector;
let v: Vector<f32> = Vector::from_slice(&[]);
let result = v.abs()?;
assert_eq!(result.len(), 0);
pub fn clip(&self, min_val: f32, max_val: f32) -> Result<Self>
Clip values to a specified range [min_val, max_val]
Constrains each element to be within the specified range:
- Values below min_val become min_val
- Values above max_val become max_val
- Values within range stay unchanged
This is useful for outlier handling, gradient clipping in neural networks, and ensuring values stay within valid bounds.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-5.0, 0.0, 5.0, 10.0, 15.0]);
let clipped = v.clip(0.0, 10.0)?;
// Values: [-5, 0, 5, 10, 15] → [0, 0, 5, 10, 10]
assert_eq!(clipped.as_slice(), &[0.0, 0.0, 5.0, 10.0, 10.0]);
§Invalid range
Returns InvalidInput error if min_val > max_val.
use trueno::{Vector, TruenoError};
let v = Vector::from_slice(&[1.0, 2.0, 3.0]);
let result = v.clip(10.0, 5.0); // min > max
assert!(matches!(result, Err(TruenoError::InvalidInput(_))));
pub fn clamp(&self, min_val: f32, max_val: f32) -> Result<Vector<f32>>
Clamp elements to range [min_val, max_val]
Returns a new vector where each element is constrained to the specified range. Elements below min_val become min_val, elements above max_val become max_val.
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[-5.0, 0.0, 5.0, 10.0, 15.0]);
let result = v.clamp(0.0, 10.0)?;
assert_eq!(result.as_slice(), &[0.0, 0.0, 5.0, 10.0, 10.0]);
§Negative Range
use trueno::Vector;
let v = Vector::from_slice(&[-10.0, -5.0, 0.0, 5.0]);
let result = v.clamp(-8.0, -2.0)?;
assert_eq!(result.as_slice(), &[-8.0, -5.0, -2.0, -2.0]);
§Errors
Returns InvalidInput if min_val > max_val.
Sourcepub fn lerp(&self, other: &Vector<f32>, t: f32) -> Result<Vector<f32>>
pub fn lerp(&self, other: &Vector<f32>, t: f32) -> Result<Vector<f32>>
Linear interpolation between two vectors
Computes element-wise linear interpolation: result[i] = a[i] + t * (b[i] - a[i])
- When t = 0.0, returns self
- When t = 1.0, returns other
- Values of t outside [0, 1] perform extrapolation
§Examples
use trueno::Vector;
let a = Vector::from_slice(&[0.0, 10.0, 20.0]);
let b = Vector::from_slice(&[100.0, 110.0, 120.0]);
let result = a.lerp(&b, 0.5)?;
assert_eq!(result.as_slice(), &[50.0, 60.0, 70.0]);
§Extrapolation
use trueno::Vector;
let a = Vector::from_slice(&[0.0, 10.0]);
let b = Vector::from_slice(&[10.0, 20.0]);
// t > 1.0 extrapolates beyond b
let result = a.lerp(&b, 2.0)?;
assert_eq!(result.as_slice(), &[20.0, 30.0]);
§Errors
Returns SizeMismatch if vectors have different lengths.
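The per-element formula, including the t = 0 / t = 1 endpoints and extrapolation for t outside [0, 1], can be sketched as a plain scalar helper (the `lerp` function here is illustrative, not trueno's implementation):

```rust
// Scalar form of the documented element-wise formula:
// result[i] = a[i] + t * (b[i] - a[i]).
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + t * (b - a)
}

fn main() {
    assert_eq!(lerp(0.0, 100.0, 0.0), 0.0);   // t = 0 -> a
    assert_eq!(lerp(0.0, 100.0, 1.0), 100.0); // t = 1 -> b
    assert_eq!(lerp(0.0, 100.0, 0.5), 50.0);  // midpoint
    assert_eq!(lerp(0.0, 10.0, 2.0), 20.0);   // t > 1 extrapolates past b
}
```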
Source§impl<T> Vector<T>where
T: Clone,
impl<T> Vector<T>where
T: Clone,
Sourcepub fn from_slice(data: &[T]) -> Self
pub fn from_slice(data: &[T]) -> Self
Create vector from slice using auto-selected optimal backend
§Performance
Auto-selects the best available backend at creation time based on:
- CPU feature detection (AVX-512 > AVX2 > AVX > SSE2)
- Vector size (GPU for large workloads)
- Platform availability (NEON on ARM, WASM SIMD in browser)
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0]);
assert_eq!(v.len(), 4);
pub fn from_vec(data: Vec<T>) -> Self
Create vector from an existing Vec (takes ownership, no copy)
This is more efficient than from_slice when you already have a Vec
and don’t need to keep it, as it avoids an extra allocation and copy.
§Examples
use trueno::Vector;
let data = vec![1.0, 2.0, 3.0];
let v = Vector::from_vec(data);
assert_eq!(v.len(), 3);
pub fn from_slice_with_backend(data: &[T], backend: Backend) -> Self
Create vector with specific backend (for benchmarking or testing)
§Examples
use trueno::{Vector, Backend};
let v = Vector::from_slice_with_backend(&[1.0, 2.0], Backend::Scalar);
assert_eq!(v.len(), 2);
impl Vector<f32>
Sourcepub fn with_alignment(
size: usize,
backend: Backend,
alignment: usize,
) -> Result<Self>
pub fn with_alignment( size: usize, backend: Backend, alignment: usize, ) -> Result<Self>
Create vector with specified alignment for optimal SIMD performance
This method attempts to create a vector with memory aligned to the specified byte boundary. Note: Rust’s Vec allocator may already provide sufficient alignment for most use cases. This method validates the alignment requirement but uses standard Vec allocation.
§Arguments
- size: Number of elements to allocate
- backend: Backend to use for operations
- alignment: Requested alignment in bytes (must be power of 2: 16, 32, 64)
§Recommended Alignments
- SSE2: 16 bytes (128-bit)
- AVX2: 32 bytes (256-bit)
- AVX-512: 64 bytes (512-bit)
§Note on Implementation
Currently uses Rust’s default Vec allocator, which typically provides 16-byte alignment on modern systems. Custom allocators for specific alignments will be added in future versions.
§Examples
use trueno::{Vector, Backend};
// Create vector with requested 16-byte alignment
let v = Vector::with_alignment(100, Backend::SSE2, 16).unwrap();
assert_eq!(v.len(), 100);
§Errors
Returns TruenoError::InvalidInput if alignment is not a power of 2.
Source§impl<T> Vector<T>where
T: Clone,
impl<T> Vector<T>where
T: Clone,
Sourcepub fn as_slice(&self) -> &[T]
pub fn as_slice(&self) -> &[T]
Get underlying data as slice
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0]);
assert_eq!(v.as_slice(), &[1.0, 2.0, 3.0]);
pub fn len(&self) -> usize
Get vector length
§Examples
use trueno::Vector;
let v = Vector::from_slice(&[1.0, 2.0, 3.0, 4.0, 5.0]);
assert_eq!(v.len(), 5);
Trait Implementations§
impl<T> StructuralPartialEq for Vector<T>
Auto Trait Implementations§
impl<T> Freeze for Vector<T>
impl<T> RefUnwindSafe for Vector<T>where
T: RefUnwindSafe,
impl<T> Send for Vector<T>where
T: Send,
impl<T> Sync for Vector<T>where
T: Sync,
impl<T> Unpin for Vector<T>where
T: Unpin,
impl<T> UnsafeUnpin for Vector<T>
impl<T> UnwindSafe for Vector<T>where
T: UnwindSafe,
Blanket Implementations§
Source§impl<T> BorrowMut<T> for Twhere
T: ?Sized,
impl<T> BorrowMut<T> for Twhere
T: ?Sized,
Source§fn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Source§impl<T> CloneToUninit for Twhere
T: Clone,
impl<T> CloneToUninit for Twhere
T: Clone,
Source§impl<T> FmtForward for T
impl<T> FmtForward for T
Source§fn fmt_binary(self) -> FmtBinary<Self>where
Self: Binary,
fn fmt_binary(self) -> FmtBinary<Self>where
Self: Binary,
self to use its Binary implementation when Debug-formatted.Source§fn fmt_display(self) -> FmtDisplay<Self>where
Self: Display,
fn fmt_display(self) -> FmtDisplay<Self>where
Self: Display,
self to use its Display implementation when
Debug-formatted.Source§fn fmt_lower_exp(self) -> FmtLowerExp<Self>where
Self: LowerExp,
fn fmt_lower_exp(self) -> FmtLowerExp<Self>where
Self: LowerExp,
self to use its LowerExp implementation when
Debug-formatted.Source§fn fmt_lower_hex(self) -> FmtLowerHex<Self>where
Self: LowerHex,
fn fmt_lower_hex(self) -> FmtLowerHex<Self>where
Self: LowerHex,
self to use its LowerHex implementation when
Debug-formatted.Source§fn fmt_octal(self) -> FmtOctal<Self>where
Self: Octal,
fn fmt_octal(self) -> FmtOctal<Self>where
Self: Octal,
self to use its Octal implementation when Debug-formatted.Source§fn fmt_pointer(self) -> FmtPointer<Self>where
Self: Pointer,
fn fmt_pointer(self) -> FmtPointer<Self>where
Self: Pointer,
self to use its Pointer implementation when
Debug-formatted.Source§fn fmt_upper_exp(self) -> FmtUpperExp<Self>where
Self: UpperExp,
fn fmt_upper_exp(self) -> FmtUpperExp<Self>where
Self: UpperExp,
self to use its UpperExp implementation when
Debug-formatted.Source§fn fmt_upper_hex(self) -> FmtUpperHex<Self>where
Self: UpperHex,
fn fmt_upper_hex(self) -> FmtUpperHex<Self>where
Self: UpperHex,
self to use its UpperHex implementation when
Debug-formatted.Source§impl<T> Instrument for T
impl<T> Instrument for T
Source§fn instrument(self, span: Span) -> Instrumented<Self>
fn instrument(self, span: Span) -> Instrumented<Self>
Source§fn in_current_span(self) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
Source§impl<T> IntoEither for T
impl<T> IntoEither for T
Source§fn into_either(self, into_left: bool) -> Either<Self, Self>
fn into_either(self, into_left: bool) -> Either<Self, Self>
self into a Left variant of Either<Self, Self>
if into_left is true.
Converts self into a Right variant of Either<Self, Self>
otherwise. Read moreSource§fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
self into a Left variant of Either<Self, Self>
if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self>
otherwise. Read moreSource§impl<T> Pipe for Twhere
T: ?Sized,
impl<T> Pipe for Twhere
T: ?Sized,
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
where
    Self: Sized,

Pipes by value. This is generally the method you want to use.

fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R
where
    R: 'a,

Borrows self and passes that borrow into the pipe function.

fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R
where
    R: 'a,

Mutably borrows self and passes that borrow into the pipe function.

fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R

Borrows self, then passes self.borrow() into the pipe function.

fn pipe_borrow_mut<'a, B, R>(
    &'a mut self,
    func: impl FnOnce(&'a mut B) -> R,
) -> R

Mutably borrows self, then passes self.borrow_mut() into the pipe function.

fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R

Borrows self, then passes self.as_ref() into the pipe function.

fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R

Mutably borrows self, then passes self.as_mut() into the pipe function.

fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R

Borrows self, then passes self.deref() into the pipe function.
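The real Pipe trait comes from the tap crate; this self-contained sketch redefines a minimal version to show what piping buys you: function application in suffix position, so nested calls read left to right:

```rust
// Minimal stand-in for tap::Pipe, blanket-implemented for all sized types.
trait Pipe: Sized {
    /// Pipes self by value into func.
    fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R {
        func(self)
    }

    /// Borrows self and passes that borrow into func.
    fn pipe_ref<'a, R: 'a>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R {
        func(self)
    }
}

impl<T> Pipe for T {}

fn main() {
    // f(g(x)) becomes x.pipe(g).pipe(f): parse, then double.
    let n = "42".pipe(|s| s.parse::<i32>().unwrap()).pipe(|x| x * 2);
    assert_eq!(n, 84);

    // pipe_ref lends the value out without consuming it.
    let s = String::from("abc");
    assert_eq!(s.pipe_ref(|s| s.len()), 3);
    println!("ok");
}
```

For a crate like trueno this means any free function over Vector can be slotted into a method chain without interrupting it.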
impl<T> Pointable for T
impl<T> Tap for T
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self

Immutable access to the Borrow<B> of a value.

fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self

Mutable access to the BorrowMut<B> of a value.

fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self

Immutable access to the AsRef<R> view of a value.

fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self

Mutable access to the AsMut<R> view of a value.

fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self

Immutable access to the Deref::Target of a value.

fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self

Mutable access to the Deref::Target of a value.

fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self

Calls .tap() only in debug builds, and is erased in release builds.

fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self

Calls .tap_mut() only in debug builds, and is erased in release builds.

fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self

Calls .tap_borrow() only in debug builds, and is erased in release builds.

fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self

Calls .tap_borrow_mut() only in debug builds, and is erased in release builds.

fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self

Calls .tap_ref() only in debug builds, and is erased in release builds.

fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self

Calls .tap_ref_mut() only in debug builds, and is erased in release builds.

fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self

Calls .tap_deref() only in debug builds, and is erased in release builds.
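The real Tap trait also comes from the tap crate; a minimal self-contained sketch of the core pair shows the idea, which is to run a side effect on a borrow and hand the value back unchanged so inspection never breaks a method chain:

```rust
// Minimal stand-in for tap::Tap, blanket-implemented for all sized types.
trait Tap: Sized {
    /// Runs func on a shared borrow of self, then returns self unchanged.
    fn tap(self, func: impl FnOnce(&Self)) -> Self {
        func(&self);
        self
    }

    /// Runs func on a mutable borrow of self, then returns (possibly modified) self.
    fn tap_mut(mut self, func: impl FnOnce(&mut Self)) -> Self {
        func(&mut self);
        self
    }
}

impl<T> Tap for T {}

fn main() {
    // Inspect mid-chain, mutate in place, and keep chaining.
    let v = vec![3, 1, 2]
        .tap(|v| println!("before sort: {v:?}"))
        .tap_mut(|v| v.sort());
    assert_eq!(v, vec![1, 2, 3]);
    println!("ok");
}
```

The _dbg variants listed above behave the same way but compile to nothing in release builds, making them suitable for debug-only tracing of intermediate values.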