pub struct InProcessProvider { /* private fields */ }

Implementations

Trait Implementations
impl AccelProvider for InProcessProvider
fn device_id(&self) -> u32
fn gather_linear(
    &self,
    source: &GpuTensorHandle,
    indices: &[u32],
    output_shape: &[usize],
) -> Result<GpuTensorHandle>
Gather elements from source at the provided zero-based linear indices, materialising a dense tensor with the specified output_shape.

fn scatter_linear(
    &self,
    target: &GpuTensorHandle,
    indices: &[u32],
    values: &GpuTensorHandle,
) -> Result<()>
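The tensor handles are opaque here, but the index contract is easy to pin down host-side. The sketch below shows what gather_linear and scatter_linear compute over a flat buffer; the `*_ref` helpers are hypothetical references, not part of the trait.

```rust
/// Host-side reference for the linear-index contract (sketch only; the
/// real methods operate on device buffers via GpuTensorHandle).
fn gather_linear_ref(source: &[f64], indices: &[u32]) -> Vec<f64> {
    // Each output element is source[index]; indices are zero-based.
    indices.iter().map(|&i| source[i as usize]).collect()
}

fn scatter_linear_ref(target: &mut [f64], indices: &[u32], values: &[f64]) {
    // Write values[k] into target at the k-th linear index.
    for (&i, &v) in indices.iter().zip(values) {
        target[i as usize] = v;
    }
}

fn main() {
    let src = [10.0, 20.0, 30.0, 40.0];
    assert_eq!(gather_linear_ref(&src, &[3, 0, 0]), vec![40.0, 10.0, 10.0]);

    let mut tgt = [0.0; 4];
    scatter_linear_ref(&mut tgt, &[1, 3], &[5.0, 7.0]);
    assert_eq!(tgt, [0.0, 5.0, 0.0, 7.0]);
}
```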
fn precision(&self) -> ProviderPrecision
fn upload(&self, host: &HostTensorView<'_>) -> Result<GpuTensorHandle>
fn download(&self, h: &GpuTensorHandle) -> Result<HostTensorOwned>
fn free(&self, h: &GpuTensorHandle) -> Result<()>
fn device_info(&self) -> String
fn device_info_struct(&self) -> ApiDeviceInfo
Structured device information (optional to override). The default adapts from device_info().

fn telemetry_snapshot(&self) -> ProviderTelemetry
Returns a snapshot of provider telemetry counters, if supported.

fn reset_telemetry(&self)
Reset all telemetry counters maintained by the provider, if supported.
fn sort_rows( &self, handle: &GpuTensorHandle, columns: &[SortRowsColumnSpec], comparison: SortComparison, ) -> Result<SortResult>
fn polyder_single(
    &self,
    polynomial: &GpuTensorHandle,
) -> Result<GpuTensorHandle>
Differentiate a polynomial represented as a vector of coefficients.
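Coefficients follow MATLAB's convention of descending powers. A host-side sketch of the rule the method implements (polyder_ref is a hypothetical reference, not the provider kernel):

```rust
/// Derivative of a polynomial given as coefficients in descending powers
/// (MATLAB convention), e.g. [3, 2, 1] = 3x^2 + 2x + 1.
fn polyder_ref(p: &[f64]) -> Vec<f64> {
    let n = p.len();
    if n <= 1 {
        return vec![0.0]; // derivative of a constant
    }
    // Coefficient of x^k becomes k * coefficient in the derivative;
    // the trailing constant term drops out.
    p[..n - 1]
        .iter()
        .enumerate()
        .map(|(i, &c)| c * (n - 1 - i) as f64)
        .collect()
}

fn main() {
    // d/dx (3x^2 + 2x + 1) = 6x + 2
    assert_eq!(polyder_ref(&[3.0, 2.0, 1.0]), vec![6.0, 2.0]);
    assert_eq!(polyder_ref(&[5.0]), vec![0.0]);
}
```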
fn polyder_product(
    &self,
    p: &GpuTensorHandle,
    q: &GpuTensorHandle,
) -> Result<GpuTensorHandle>
Apply the product rule to polynomials p and q.

fn polyder_quotient(
    &self,
    u: &GpuTensorHandle,
    v: &GpuTensorHandle,
) -> Result<ProviderPolyderQuotient>
Apply the quotient rule to polynomials u and v.

fn polyint(
    &self,
    polynomial: &GpuTensorHandle,
    constant: f64,
) -> Result<GpuTensorHandle>
Integrate a polynomial represented as a vector of coefficients and append a constant term.
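This mirrors MATLAB's polyint(p, k): each descending-power coefficient is divided by its new exponent and the integration constant is appended. A hypothetical host-side reference:

```rust
/// Antiderivative of a descending-coefficient polynomial, with `constant`
/// appended as the integration constant (MATLAB polyint convention).
fn polyint_ref(p: &[f64], constant: f64) -> Vec<f64> {
    let n = p.len();
    let mut out: Vec<f64> = p
        .iter()
        .enumerate()
        // Coefficient of x^k integrates to coeff / (k + 1) on x^(k + 1).
        .map(|(i, &c)| c / (n - i) as f64)
        .collect();
    out.push(constant);
    out
}

fn main() {
    // ∫ (6x + 2) dx = 3x^2 + 2x + C, with C = 1
    assert_eq!(polyint_ref(&[6.0, 2.0], 1.0), vec![3.0, 2.0, 1.0]);
}
```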
fn diag_from_vector(
    &self,
    vector: &GpuTensorHandle,
    offset: isize,
) -> Result<GpuTensorHandle>
Construct a diagonal matrix from a vector-like tensor. offset matches MATLAB semantics.

fn diag_extract(
    &self,
    matrix: &GpuTensorHandle,
    offset: isize,
) -> Result<GpuTensorHandle>
Extract a diagonal from a matrix-like tensor. The result is always a column vector.
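The offset convention can be illustrated on a plain row-of-rows matrix. The helpers below are hypothetical host-side references assuming MATLAB's diag(v, k) semantics: positive k selects a super-diagonal, negative k a sub-diagonal.

```rust
/// diag_from_vector places v on diagonal `offset` of a square matrix of
/// side len(v) + |offset|; diag_extract reads diagonal `offset` back.
fn diag_from_vector_ref(v: &[f64], offset: isize) -> Vec<Vec<f64>> {
    let n = v.len() + offset.unsigned_abs();
    let mut m = vec![vec![0.0; n]; n];
    for (i, &x) in v.iter().enumerate() {
        let (r, c) = if offset >= 0 {
            (i, i + offset as usize) // super-diagonal (or main)
        } else {
            (i + (-offset) as usize, i) // sub-diagonal
        };
        m[r][c] = x;
    }
    m
}

fn diag_extract_ref(m: &[Vec<f64>], offset: isize) -> Vec<f64> {
    let rows = m.len();
    let cols = if rows == 0 { 0 } else { m[0].len() };
    let mut out = Vec::new();
    let mut r = if offset < 0 { (-offset) as usize } else { 0 };
    let mut c = if offset > 0 { offset as usize } else { 0 };
    while r < rows && c < cols {
        out.push(m[r][c]);
        r += 1;
        c += 1;
    }
    out
}

fn main() {
    let m = diag_from_vector_ref(&[1.0, 2.0], 1);
    assert_eq!(m.len(), 3); // side = 2 + |1|
    assert_eq!(m[0][1], 1.0);
    assert_eq!(m[1][2], 2.0);
    assert_eq!(diag_extract_ref(&m, 1), vec![1.0, 2.0]);
}
```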
fn tril(
    &self,
    handle: &GpuTensorHandle,
    offset: isize,
) -> Result<GpuTensorHandle>
Apply a lower-triangular mask to the first two dimensions of a tensor.

fn triu(
    &self,
    handle: &GpuTensorHandle,
    offset: isize,
) -> Result<GpuTensorHandle>
Apply an upper-triangular mask to the first two dimensions of a tensor.
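Over the first two dimensions these masks keep entries with col - row <= offset (tril) or col - row >= offset (triu). A host-side sketch under that assumption, on a 2-D slice:

```rust
/// Lower-triangular mask: keep a[r][c] when c - r <= offset, else 0.
fn tril_ref(m: &[Vec<f64>], offset: isize) -> Vec<Vec<f64>> {
    m.iter()
        .enumerate()
        .map(|(r, row)| {
            row.iter()
                .enumerate()
                .map(|(c, &v)| if c as isize - r as isize <= offset { v } else { 0.0 })
                .collect()
        })
        .collect()
}

/// Upper-triangular mask: keep a[r][c] when c - r >= offset, else 0.
fn triu_ref(m: &[Vec<f64>], offset: isize) -> Vec<Vec<f64>> {
    m.iter()
        .enumerate()
        .map(|(r, row)| {
            row.iter()
                .enumerate()
                .map(|(c, &v)| if c as isize - r as isize >= offset { v } else { 0.0 })
                .collect()
        })
        .collect()
}

fn main() {
    let m = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    assert_eq!(tril_ref(&m, 0), vec![vec![1.0, 0.0], vec![3.0, 4.0]]);
    assert_eq!(triu_ref(&m, 0), vec![vec![1.0, 2.0], vec![0.0, 4.0]]);
}
```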
fn issymmetric(
    &self,
    matrix: &GpuTensorHandle,
    kind: ProviderSymmetryKind,
    tolerance: f64,
) -> Result<bool>
Determine if a matrix is symmetric (or skew-symmetric) without gathering it to the host.
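For real data, a tolerance-based check amounts to bounding the elementwise mismatch between A and ±Aᵀ. A hypothetical host-side reference (the provider performs the same comparison on the device):

```rust
/// Symmetry test with tolerance (real case): symmetric when
/// max|A - A^T| <= tol, skew-symmetric when max|A + A^T| <= tol.
fn is_symmetric_ref(a: &[Vec<f64>], skew: bool, tol: f64) -> bool {
    let n = a.len();
    if a.iter().any(|row| row.len() != n) {
        return false; // must be square
    }
    for i in 0..n {
        for j in 0..n {
            let diff = if skew { a[i][j] + a[j][i] } else { a[i][j] - a[j][i] };
            if diff.abs() > tol {
                return false;
            }
        }
    }
    true
}

fn main() {
    let s = vec![vec![1.0, 2.0], vec![2.0, 3.0]];
    assert!(is_symmetric_ref(&s, false, 0.0));

    let k = vec![vec![0.0, -2.0], vec![2.0, 0.0]];
    assert!(is_symmetric_ref(&k, true, 0.0));
    assert!(!is_symmetric_ref(&k, false, 1e-12));
}
```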
fn ishermitian(
    &self,
    matrix: &GpuTensorHandle,
    kind: ProviderHermitianKind,
    tolerance: f64,
) -> Result<bool>
Determine if a matrix is Hermitian (or skew-Hermitian) without gathering it to the host.
fn bandwidth(&self, matrix: &GpuTensorHandle) -> Result<ProviderBandwidth>
Inspect the bandwidth of a matrix without gathering it back to the host.
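Assuming MATLAB bandwidth() semantics, the lower and upper bandwidths are the largest sub- and super-diagonal distances that carry a nonzero entry. A host-side sketch:

```rust
/// Lower/upper bandwidth: the largest i - j (lower) and j - i (upper)
/// over nonzero entries a[i][j].
fn bandwidth_ref(a: &[Vec<f64>]) -> (usize, usize) {
    let (mut lower, mut upper) = (0usize, 0usize);
    for (i, row) in a.iter().enumerate() {
        for (j, &v) in row.iter().enumerate() {
            if v != 0.0 {
                if i > j { lower = lower.max(i - j); }
                if j > i { upper = upper.max(j - i); }
            }
        }
    }
    (lower, upper)
}

fn main() {
    // Tridiagonal matrix: bandwidth (1, 1).
    let t = vec![
        vec![2.0, 1.0, 0.0],
        vec![1.0, 2.0, 1.0],
        vec![0.0, 1.0, 2.0],
    ];
    assert_eq!(bandwidth_ref(&t), (1, 1));
}
```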
fn sym_rcm(&self, matrix: &GpuTensorHandle) -> Result<Vec<usize>>
Compute the symmetric reverse Cuthill-McKee permutation for the matrix.
fn read_scalar(&self, h: &GpuTensorHandle, linear_index: usize) -> Result<f64>
Read a single scalar at linear index from a device tensor, returning it as f64.
fn zeros(&self, shape: &[usize]) -> Result<GpuTensorHandle>
Allocate a zero-initialised tensor with the provided shape on the device.
fn zeros_like(&self, prototype: &GpuTensorHandle) -> Result<GpuTensorHandle>
Allocate a zero-initialised tensor matching the prototype tensor.
fn ones(&self, shape: &[usize]) -> Result<GpuTensorHandle>
Allocate a one-initialised tensor with the provided shape on the device.
fn ones_like(&self, prototype: &GpuTensorHandle) -> Result<GpuTensorHandle>
Allocate a one-initialised tensor matching the prototype tensor.
fn eye(&self, shape: &[usize]) -> Result<GpuTensorHandle>
Allocate an identity tensor with ones along the leading diagonal of the first two axes.
fn eye_like(&self, prototype: &GpuTensorHandle) -> Result<GpuTensorHandle>
Allocate an identity tensor matching the prototype tensor’s shape.
fn linspace( &self, start: f64, stop: f64, count: usize, ) -> Result<GpuTensorHandle>
fn random_uniform(&self, shape: &[usize]) -> Result<GpuTensorHandle>
Allocate a tensor filled with random values drawn from U(0, 1).
fn random_normal(&self, shape: &[usize]) -> Result<GpuTensorHandle>
Allocate a tensor filled with standard normal (mean 0, stddev 1) random values.
fn set_rng_state(&self, state: u64) -> Result<()>
Set the provider RNG state to align with the host RNG.
fn fspecial(&self, request: &FspecialRequest) -> Result<GpuTensorHandle>
Generate a 2-D correlation kernel matching MATLAB’s fspecial builtin.

fn imfilter(
    &self,
    image: &GpuTensorHandle,
    kernel: &GpuTensorHandle,
    options: &ImfilterOptions,
) -> Result<GpuTensorHandle>
Apply an N-D correlation/convolution with padding semantics matching MATLAB’s imfilter.

fn random_integer_range(
    &self,
    lower: i64,
    upper: i64,
    shape: &[usize],
) -> Result<GpuTensorHandle>
Allocate a tensor filled with random integers over an inclusive range.
fn random_permutation(&self, n: usize, k: usize) -> Result<GpuTensorHandle>
Allocate a random permutation of 1..=n, returning the first k elements.
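The semantics match MATLAB's randperm(n, k): shuffle 1..=n uniformly and keep the first k entries. A host-side sketch with a toy LCG standing in for the provider RNG (the generator choice is an assumption for illustration only):

```rust
/// randperm(n, k) semantics via a partial Fisher-Yates shuffle: only the
/// first k slots need to be shuffled before truncating.
fn randperm_ref(n: usize, k: usize, seed: u64) -> Vec<u64> {
    let mut v: Vec<u64> = (1..=n as u64).collect();
    let mut state = seed;
    for i in 0..k.min(n) {
        // Toy 64-bit LCG (hypothetical stand-in for the provider RNG).
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let j = i + (state >> 33) as usize % (n - i);
        v.swap(i, j);
    }
    v.truncate(k.min(n));
    v
}

fn main() {
    let p = randperm_ref(10, 4, 42);
    assert_eq!(p.len(), 4);
    // All entries come from 1..=10 and are distinct.
    assert!(p.iter().all(|&x| (1..=10).contains(&x)));
    let mut q = p.clone();
    q.sort();
    q.dedup();
    assert_eq!(q.len(), 4);
}
```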
fn random_permutation_like(
    &self,
    _prototype: &GpuTensorHandle,
    n: usize,
    k: usize,
) -> Result<GpuTensorHandle>
Allocate a random permutation matching the prototype residency.
fn covariance(
    &self,
    matrix: &GpuTensorHandle,
    second: Option<&GpuTensorHandle>,
    weights: Option<&GpuTensorHandle>,
    options: &CovarianceOptions,
) -> Result<GpuTensorHandle>
Compute a covariance matrix across the columns of matrix.

fn corrcoef(
    &self,
    matrix: &GpuTensorHandle,
    options: &CorrcoefOptions,
) -> Result<GpuTensorHandle>
Compute a correlation coefficient matrix across the columns of matrix.

fn elem_add( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_mul( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_sub( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_div( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_pow( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_ne( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_ge( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_le( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_lt( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_gt( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_eq( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn logical_and( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn logical_or( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn logical_xor( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn logical_not(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn logical_isreal(&self, a: &GpuTensorHandle) -> Result<bool>
fn logical_isfinite(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn logical_isnan(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn logical_isinf(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn elem_hypot( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn elem_atan2( &self, y: &GpuTensorHandle, x: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn unary_sin(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_gamma(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_factorial(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_asinh(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_sinh(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_cosh(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_asin(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_acos(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_acosh(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_tan(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_tanh(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_atan(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_atanh(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_ceil(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_floor(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_round(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_fix(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_cos(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_abs(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_exp(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_log(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_sqrt(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_double(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_single(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn unary_pow2(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn pow2_scale( &self, mantissa: &GpuTensorHandle, exponent: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn scalar_add( &self, a: &GpuTensorHandle, scalar: f64, ) -> Result<GpuTensorHandle>
fn scalar_sub( &self, a: &GpuTensorHandle, scalar: f64, ) -> Result<GpuTensorHandle>
fn scalar_mul( &self, a: &GpuTensorHandle, scalar: f64, ) -> Result<GpuTensorHandle>
fn scalar_div( &self, a: &GpuTensorHandle, scalar: f64, ) -> Result<GpuTensorHandle>
fn scalar_rsub( &self, a: &GpuTensorHandle, scalar: f64, ) -> Result<GpuTensorHandle>
fn scalar_rdiv( &self, a: &GpuTensorHandle, scalar: f64, ) -> Result<GpuTensorHandle>
fn transpose(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn conv1d( &self, signal: &GpuTensorHandle, kernel: &GpuTensorHandle, options: ProviderConv1dOptions, ) -> Result<GpuTensorHandle>
fn conv2d( &self, signal: &GpuTensorHandle, kernel: &GpuTensorHandle, mode: ProviderConvMode, ) -> Result<GpuTensorHandle>
fn iir_filter( &self, b: &GpuTensorHandle, a: &GpuTensorHandle, x: &GpuTensorHandle, options: ProviderIirFilterOptions, ) -> Result<ProviderIirFilterResult>
fn permute(
    &self,
    handle: &GpuTensorHandle,
    order: &[usize],
) -> Result<GpuTensorHandle>
Reorder tensor dimensions according to order, expressed as zero-based indices.

fn flip( &self, handle: &GpuTensorHandle, axes: &[usize], ) -> Result<GpuTensorHandle>
fn circshift( &self, handle: &GpuTensorHandle, shifts: &[isize], ) -> Result<GpuTensorHandle>
fn diff_dim( &self, handle: &GpuTensorHandle, order: usize, dim: usize, ) -> Result<GpuTensorHandle>
fn unique( &self, handle: &GpuTensorHandle, options: &UniqueOptions, ) -> Result<UniqueResult>
fn setdiff( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, options: &SetdiffOptions, ) -> Result<SetdiffResult>
fn repmat( &self, handle: &GpuTensorHandle, reps: &[usize], ) -> Result<GpuTensorHandle>
fn dot( &self, lhs: &GpuTensorHandle, rhs: &GpuTensorHandle, dim: Option<usize>, ) -> Result<GpuTensorHandle>
fn reduce_sum(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn reduce_sum_dim( &self, a: &GpuTensorHandle, dim: usize, ) -> Result<GpuTensorHandle>
fn reduce_prod(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn reduce_prod_dim( &self, a: &GpuTensorHandle, dim: usize, ) -> Result<GpuTensorHandle>
fn reduce_mean(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn reduce_mean_dim( &self, a: &GpuTensorHandle, dim: usize, ) -> Result<GpuTensorHandle>
fn reduce_any( &self, a: &GpuTensorHandle, omit_nan: bool, ) -> Result<GpuTensorHandle>
fn reduce_any_dim( &self, a: &GpuTensorHandle, dim: usize, omit_nan: bool, ) -> Result<GpuTensorHandle>
fn reduce_all( &self, a: &GpuTensorHandle, omit_nan: bool, ) -> Result<GpuTensorHandle>
fn reduce_all_dim( &self, a: &GpuTensorHandle, dim: usize, omit_nan: bool, ) -> Result<GpuTensorHandle>
fn reduce_median(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn reduce_median_dim( &self, a: &GpuTensorHandle, dim: usize, ) -> Result<GpuTensorHandle>
fn reduce_min(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn reduce_min_dim( &self, a: &GpuTensorHandle, dim: usize, ) -> Result<ReduceDimResult>
fn reduce_max(&self, a: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn reduce_max_dim( &self, a: &GpuTensorHandle, dim: usize, ) -> Result<ReduceDimResult>
fn cumsum_scan( &self, _input: &GpuTensorHandle, _dim: usize, _direction: ProviderScanDirection, _nan_mode: ProviderNanMode, ) -> Result<GpuTensorHandle>
fn cummin_scan( &self, _input: &GpuTensorHandle, _dim: usize, _direction: ProviderScanDirection, _nan_mode: ProviderNanMode, ) -> Result<ProviderCumminResult>
fn find( &self, a: &GpuTensorHandle, limit: Option<usize>, direction: FindDirection, ) -> Result<ProviderFindResult>
fn lu(&self, a: &GpuTensorHandle) -> Result<ProviderLuResult>
fn chol(&self, a: &GpuTensorHandle, lower: bool) -> Result<ProviderCholResult>
fn qr( &self, handle: &GpuTensorHandle, options: ProviderQrOptions, ) -> Result<ProviderQrResult>
fn matmul( &self, a: &GpuTensorHandle, b: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn matmul_epilogue(
    &self,
    a: &GpuTensorHandle,
    b: &GpuTensorHandle,
    ep: &MatmulEpilogue,
) -> Result<GpuTensorHandle>
Optional: matrix multiplication with an epilogue applied before store.
fn matmul_power_step( &self, lhs: &GpuTensorHandle, rhs: &GpuTensorHandle, ep: &PowerStepEpilogue, ) -> Result<GpuTensorHandle>
fn image_normalize( &self, input: &GpuTensorHandle, desc: &ImageNormalizeDescriptor, ) -> Result<GpuTensorHandle>
fn pagefun(&self, _request: &PagefunRequest) -> Result<GpuTensorHandle>
fn linsolve( &self, lhs: &GpuTensorHandle, rhs: &GpuTensorHandle, options: &ProviderLinsolveOptions, ) -> Result<ProviderLinsolveResult>
fn inv( &self, matrix: &GpuTensorHandle, _options: ProviderInvOptions, ) -> Result<GpuTensorHandle>
fn pinv( &self, matrix: &GpuTensorHandle, options: ProviderPinvOptions, ) -> Result<GpuTensorHandle>
fn cond( &self, matrix: &GpuTensorHandle, norm: ProviderCondNorm, ) -> Result<GpuTensorHandle>
fn norm( &self, tensor: &GpuTensorHandle, order: ProviderNormOrder, ) -> Result<GpuTensorHandle>
fn rank( &self, matrix: &GpuTensorHandle, tolerance: Option<f64>, ) -> Result<GpuTensorHandle>
fn rcond(&self, matrix: &GpuTensorHandle) -> Result<GpuTensorHandle>
fn mldivide( &self, lhs: &GpuTensorHandle, rhs: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn mrdivide( &self, lhs: &GpuTensorHandle, rhs: &GpuTensorHandle, ) -> Result<GpuTensorHandle>
fn eig( &self, _a: &GpuTensorHandle, _compute_left: bool, ) -> Result<ProviderEigResult>
fn sub2ind( &self, dims: &[usize], strides: &[usize], inputs: &[&GpuTensorHandle], scalar_mask: &[bool], len: usize, output_shape: &[usize], ) -> Result<GpuTensorHandle>
fn fused_elementwise( &self, _shader: &str, _inputs: &[GpuTensorHandle], _output_shape: &[usize], _len: usize, ) -> Result<GpuTensorHandle>
fn fill(&self, shape: &[usize], value: f64) -> Result<GpuTensorHandle, Error>
Allocate a tensor filled with a constant value on the device.
fn fill_like(
    &self,
    prototype: &GpuTensorHandle,
    value: f64,
) -> Result<GpuTensorHandle, Error>
Allocate a tensor filled with a constant value, matching a prototype’s residency.
fn meshgrid(
    &self,
    _axes: &[MeshgridAxisView<'_>],
) -> Result<ProviderMeshgridResult, Error>
Construct MATLAB-style coordinate grids from axis vectors.
fn polyval(
    &self,
    _coefficients: &GpuTensorHandle,
    _points: &GpuTensorHandle,
    _options: &ProviderPolyvalOptions,
) -> Result<GpuTensorHandle, Error>
Evaluate a polynomial expressed by coefficients at each element in points.

fn polyfit(
    &self,
    _x: &GpuTensorHandle,
    _y: &GpuTensorHandle,
    _degree: usize,
    _weights: Option<&GpuTensorHandle>,
) -> Result<ProviderPolyfitResult, Error>
Fit a polynomial of degree degree to (x, y) samples. Optional weights must match x.

fn random_uniform_like(
    &self,
    prototype: &GpuTensorHandle,
) -> Result<GpuTensorHandle, Error>
Allocate a tensor filled with random values matching the prototype shape.
fn random_normal_like(
    &self,
    prototype: &GpuTensorHandle,
) -> Result<GpuTensorHandle, Error>
Allocate a tensor of standard normal values matching a prototype’s shape.
fn stochastic_evolution( &self, _state: &GpuTensorHandle, _drift: f64, _scale: f64, _steps: u32, ) -> Result<GpuTensorHandle, Error>
fn random_integer_like(
    &self,
    prototype: &GpuTensorHandle,
    lower: i64,
    upper: i64,
) -> Result<GpuTensorHandle, Error>
Allocate a random integer tensor matching the prototype shape.
fn elem_max( &self, _a: &GpuTensorHandle, _b: &GpuTensorHandle, ) -> Result<GpuTensorHandle, Error>
fn elem_min( &self, _a: &GpuTensorHandle, _b: &GpuTensorHandle, ) -> Result<GpuTensorHandle, Error>
fn logical_islogical(&self, a: &GpuTensorHandle) -> Result<bool, Error>
fn unary_angle(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_imag(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_real(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_conj(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_sign(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_expm1(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_log2(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_log10(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn unary_log1p(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn scalar_max( &self, _a: &GpuTensorHandle, _scalar: f64, ) -> Result<GpuTensorHandle, Error>
fn scalar_min( &self, _a: &GpuTensorHandle, _scalar: f64, ) -> Result<GpuTensorHandle, Error>
fn sort_dim( &self, _a: &GpuTensorHandle, _dim: usize, _order: SortOrder, _comparison: SortComparison, ) -> Result<SortResult, Error>
fn syrk(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn take_matmul_sources( &self, _product: &GpuTensorHandle, ) -> Option<(GpuTensorHandle, GpuTensorHandle)>
fn qr_power_iter( &self, product: &GpuTensorHandle, _product_lhs: Option<&GpuTensorHandle>, q_handle: &GpuTensorHandle, options: &ProviderQrOptions, ) -> Result<Option<ProviderQrPowerIterResult>, Error>
fn fft_dim(
    &self,
    _handle: &GpuTensorHandle,
    _len: Option<usize>,
    _dim: usize,
) -> Result<GpuTensorHandle, Error>
Perform an in-place FFT along a zero-based dimension, optionally padding/truncating to len.

fn ifft_dim( &self, _handle: &GpuTensorHandle, _len: Option<usize>, _dim: usize, ) -> Result<GpuTensorHandle, Error>
fn union( &self, _a: &GpuTensorHandle, _b: &GpuTensorHandle, _options: &UnionOptions, ) -> Result<UnionResult, Error>
fn ismember( &self, _a: &GpuTensorHandle, _b: &GpuTensorHandle, _options: &IsMemberOptions, ) -> Result<IsMemberResult, Error>
fn reshape( &self, handle: &GpuTensorHandle, new_shape: &[usize], ) -> Result<GpuTensorHandle, Error>
fn cat(
    &self,
    _dim: usize,
    _inputs: &[GpuTensorHandle],
) -> Result<GpuTensorHandle, Error>
Concatenate the provided tensors along the 1-based dimension dim.

fn kron(
    &self,
    _a: &GpuTensorHandle,
    _b: &GpuTensorHandle,
) -> Result<GpuTensorHandle, Error>
Compute the Kronecker product of two tensors, matching MATLAB semantics.
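MATLAB's kron() tiles b scaled by each entry of a: the (i, j) block of the result is a[i][j] * b. A host-side 2-D reference (kron_ref is hypothetical):

```rust
/// Kronecker product of 2-D matrices: for a (m x n) and b (p x q), the
/// result is (m*p x n*q), with block (i, j) equal to a[i][j] * b.
fn kron_ref(a: &[Vec<f64>], b: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let (ar, ac) = (a.len(), a[0].len());
    let (br, bc) = (b.len(), b[0].len());
    let mut out = vec![vec![0.0; ac * bc]; ar * br];
    for i in 0..ar {
        for j in 0..ac {
            for p in 0..br {
                for q in 0..bc {
                    out[i * br + p][j * bc + q] = a[i][j] * b[p][q];
                }
            }
        }
    }
    out
}

fn main() {
    let a = vec![vec![1.0, 2.0]];
    let b = vec![vec![0.0, 1.0], vec![1.0, 0.0]];
    let k = kron_ref(&a, &b);
    assert_eq!(k, vec![
        vec![0.0, 1.0, 0.0, 2.0],
        vec![1.0, 0.0, 2.0, 0.0],
    ]);
}
```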
fn reduce_nnz(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
fn reduce_nnz_dim( &self, _a: &GpuTensorHandle, _dim: usize, ) -> Result<GpuTensorHandle, Error>
fn reduce_mean_nd(
    &self,
    _a: &GpuTensorHandle,
    _dims_zero_based: &[usize],
) -> Result<GpuTensorHandle, Error>
Reduce mean across multiple zero-based dimensions in one device pass.
fn reduce_moments_nd(
    &self,
    _a: &GpuTensorHandle,
    _dims_zero_based: &[usize],
) -> Result<ProviderMoments2, Error>
Reduce moments across multiple zero-based dimensions in one device pass.
Returns mean (E[x]) and mean of squares (E[x^2]).
fn reduce_std( &self, _a: &GpuTensorHandle, _normalization: ProviderStdNormalization, _nan_mode: ProviderNanMode, ) -> Result<GpuTensorHandle, Error>
fn reduce_std_dim( &self, _a: &GpuTensorHandle, _dim: usize, _normalization: ProviderStdNormalization, _nan_mode: ProviderNanMode, ) -> Result<GpuTensorHandle, Error>
fn cumprod_scan( &self, _input: &GpuTensorHandle, _dim: usize, _direction: ProviderScanDirection, _nan_mode: ProviderNanMode, ) -> Result<GpuTensorHandle, Error>
fn cummax_scan( &self, _input: &GpuTensorHandle, _dim: usize, _direction: ProviderScanDirection, _nan_mode: ProviderNanMode, ) -> Result<ProviderCumminResult, Error>
fn map_nan_to_zero(
    &self,
    _a: &GpuTensorHandle,
) -> Result<GpuTensorHandle, Error>
Build a numeric tensor where NaNs in a are replaced with 0.0 (device side).

fn not_nan_mask(&self, _a: &GpuTensorHandle) -> Result<GpuTensorHandle, Error>
Build a numeric mask tensor with 1.0 where value is not NaN and 0.0 where value is NaN.
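Both NaN helpers are simple elementwise maps. Hypothetical host-side references of the two contracts:

```rust
/// map_nan_to_zero: replace every NaN with 0.0, keep everything else.
fn map_nan_to_zero_ref(a: &[f64]) -> Vec<f64> {
    a.iter().map(|&x| if x.is_nan() { 0.0 } else { x }).collect()
}

/// not_nan_mask: 1.0 where the value is not NaN, 0.0 where it is.
fn not_nan_mask_ref(a: &[f64]) -> Vec<f64> {
    a.iter().map(|&x| if x.is_nan() { 0.0 } else { 1.0 }).collect()
}

fn main() {
    let a = [1.0, f64::NAN, 3.0];
    assert_eq!(map_nan_to_zero_ref(&a), vec![1.0, 0.0, 3.0]);
    assert_eq!(not_nan_mask_ref(&a), vec![1.0, 0.0, 1.0]);
}
```

Masking like this is what lets NaN-omitting reductions be built from plain sums: sum the zero-filled tensor and divide by the mask's sum.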
fn fused_reduction(
    &self,
    _shader: &str,
    _inputs: &[GpuTensorHandle],
    _output_shape: &[usize],
    _reduce_len: usize,
    _num_slices: usize,
    _workgroup_size: u32,
    _flavor: ReductionFlavor,
) -> Result<GpuTensorHandle, Error>
Generic fused reduction entrypoint.

fn warmup(&self)
Optionally pre-compile commonly used pipelines to amortize first-dispatch costs.
fn fused_cache_counters(&self) -> (u64, u64)
Returns (cache_hits, cache_misses) for fused pipeline cache, if supported.
fn last_warmup_millis(&self) -> Option<u64>
Returns the duration of the last provider warmup in milliseconds, if known.
fn default_reduction_workgroup_size(&self) -> u32
Default reduction workgroup size the provider prefers.
fn two_pass_threshold(&self) -> usize
Threshold above which the provider prefers a two-pass reduction.
fn reduction_two_pass_mode(&self) -> ReductionTwoPassMode
Current two-pass mode preference (auto/forced on/off).
fn scatter_column(
    &self,
    _matrix: &GpuTensorHandle,
    _col_index: usize,
    _values: &GpuTensorHandle,
) -> Result<GpuTensorHandle, Error>
Fast path: write a GPU column in a matrix from a GPU vector, returning a new handle. Expected: values.shape == [rows, 1] (or [rows]) and col_index < cols.

fn scatter_row(
    &self,
    _matrix: &GpuTensorHandle,
    _row_index: usize,
    _values: &GpuTensorHandle,
) -> Result<GpuTensorHandle, Error>
Fast path: write a GPU row in a matrix from a GPU vector, returning a new handle. Expected: values.shape == [1, cols] (or [cols]) and row_index < rows.
fn supports_ind2sub(&self) -> bool
Returns true if the provider offers a device-side ind2sub implementation.

Auto Trait Implementations
impl !Freeze for InProcessProvider
impl RefUnwindSafe for InProcessProvider
impl Send for InProcessProvider
impl Sync for InProcessProvider
impl Unpin for InProcessProvider
impl UnwindSafe for InProcessProvider
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Downcast for T
where
    T: Any,
fn into_any(self: Box<T>) -> Box<dyn Any>
Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.

fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.

fn as_any(&self) -> &(dyn Any + 'static)
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s.

fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s.

impl<T> DowncastSync for T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Pointable for T
impl<T> PolicyExt for T
where
    T: ?Sized,
impl<R, P> ReadPrimitive<R> for P
fn read_from_little_endian(read: &mut R) -> Result<Self, Error>
Read this value from the supplied reader. Same as ReadEndian::read_from_little_endian().

impl<SS, SP> SupersetOf<SS> for SP
where
    SS: SubsetOf<SP>,
fn to_subset(&self) -> Option<SS>
The inverse inclusion map: attempts to construct self from the equivalent element of its superset.

fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).

fn to_subset_unchecked(&self) -> SS
Use with care! Same as self.to_subset but without any property checks. Always succeeds.

fn from_subset(element: &SS) -> SP
The inclusion map: converts self to the equivalent element of its superset.