Struct cv_convert::TensorFromMat
pub struct TensorFromMat { /* fields omitted */ }
Implementations
Methods from Deref<Target = Tensor>
Returns a pointer to the underlying C++ tensor.
The caller must ensure that the Rust tensor object outlives this pointer.
Returns a mutable pointer to the underlying C++ tensor.
The caller must ensure that the Rust tensor object outlives this pointer.
Returns the tensor size for single dimension tensors.
Returns the tensor sizes for two dimension tensors.
Returns the tensor sizes for three dimension tensors.
Returns the tensor sizes for four dimension tensors.
Returns the tensor sizes for five dimension tensors.
Returns the tensor sizes for six dimension tensors.
Returns the tensor strides for single dimension tensors.
Returns the tensor strides for two dimension tensors.
Returns the tensor strides for three dimension tensors.
Returns the tensor strides for four dimension tensors.
Returns the tensor strides for five dimension tensors.
Returns the tensor strides for six dimension tensors.
Returns the kind of elements stored in the input tensor. Returns an error on undefined tensors and unsupported data types.
Returns the kind of elements stored in the input tensor. Panics on undefined tensors and unsupported data types.
Prints the input tensor.
Caution: this uses the C++ printer, which prints the whole tensor even if it is very large.
Returns a double value on tensors holding a single element. An error is returned otherwise.
Returns an int value on tensors holding a single element. An error is returned otherwise.
Returns a double value on tensors holding a single element. Panics otherwise.
Returns an int value on tensors holding a single element. Panics otherwise.
Returns true if gradients are currently tracked for this tensor.
Runs the backward pass, populating the gradient tensors for tensors whose gradients are tracked.
Gradient tracking can be turned on via set_requires_grad.
Runs the backward pass, populating the gradient tensors for tensors whose gradients are tracked.
Gradient tracking can be turned on via set_requires_grad.
Panics if the C++ API returns an exception.
Copies numel elements from self to dst.
Unscales the tensor while checking for infinities. found_inf is a singleton tensor used to record the presence of infinite values. inv_scale is a scalar containing the inverse scaling factor. This method is only available for CUDA tensors.
pub fn internal_amp_non_finite_check_and_unscale(
&mut self,
found_inf: &mut Tensor,
inv_scale: &Tensor
)
Returns a new tensor that shares storage with the input tensor.
Gets the sub-tensor at the given index.
Copies values from the argument tensor to the input tensor.
Saves a tensor to a file.
The file format is the same as the one used by the PyTorch C++ API.
pub fn f_internal_ilshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_irshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_lshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_rshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_adaptive_avg_pool2d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_adaptive_avg_pool3d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_aminmax_dim(
&self,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_amp_update_scale_(
&mut self,
growth_tracker: &Tensor,
found_inf: &Tensor,
scale_growth_factor: f64,
scale_backoff_factor: f64,
growth_interval: i64
) -> Result<Tensor, TchError>
pub fn f_internal_baddbmm_mkl_(
&mut self,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_bmm_out(
&self,
out: &Tensor,
mat2: &Tensor,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_internal_cholesky_solve_helper(
&self,
a: &Tensor,
upper: bool
) -> Result<Tensor, TchError>
pub fn f_internal_compute_linear_combination(
&self,
coefficients: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_compute_linear_combination_out(
&self,
out: &Tensor,
coefficients: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool,
allow_tf32: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_convolution_deprecated<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_convolution_mode<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_convolution_nogroup<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_cudnn_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
weight_buf: Option<T>,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> Result<(Tensor, Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_internal_fake_quantize_learnable_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_channel_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_fake_quantize_learnable_per_tensor_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_tensor_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_fft_c2c(
&self,
dim: &[i64],
normalization: i64,
forward: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
forward: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2r(
&self,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2r_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Result<Tensor, TchError>
pub fn f_internal_fft_r2c(
&self,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_r2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Result<Tensor, TchError>
pub fn f_internal_gather_sparse_backward(
&self,
dim: i64,
index: &Tensor,
grad: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_grid_sampler_2d_cpu_fallback(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_internal_grid_sampler_2d_cpu_fallback_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_has_compatible_shallow_copy_type(
&self,
from: &Tensor
) -> Result<bool, TchError>
pub fn f_internal_index_copy_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_index_put_impl_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool,
unsafe_: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_linalg_inv_out_helper_(
&mut self,
infos_lu: &Tensor,
infos_getri: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_linalg_solve_out_helper_(
&mut self,
other: &Tensor,
infos: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_lu_with_info(
&self,
pivot: bool,
check_errors: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_make_per_channel_quantized_tensor(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64
) -> Result<Tensor, TchError>
pub fn f_internal_make_per_tensor_quantized_tensor(
&self,
scale: f64,
zero_point: i64
) -> Result<Tensor, TchError>
pub fn f_internal_nnpack_spatial_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_nnpack_spatial_convolution_backward_input(
&self,
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_nnpack_spatial_convolution_backward_weight(
&self,
weightsize: &[i64],
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_pack_padded_sequence(
&self,
lengths: &Tensor,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_pdist_backward(
&self,
grad: &Tensor,
p: f64,
pdist: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_remove_batch_dim(
&self,
level: i64,
batch_size: i64,
out_dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_ff_(
&mut self,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_initialize_state_(
&mut self,
dimension: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_scramble_(
&mut self,
ltm: &Tensor,
dimension: i64
) -> Result<Tensor, TchError>
pub fn f_internal_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax(
&self,
dim: i64,
half_to_float: bool
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_sum_backward(
&self,
grad: &Tensor,
dim: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_sum_dim_dtype(
&self,
dim: &[i64],
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_internal_svd_helper(
&self,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_symeig_helper(
&self,
eigenvectors: bool,
upper: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_test_serialization_subcmul(
&self,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_unique(
&self,
sorted: bool,
return_inverse: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_unique2(
&self,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_adaptive_avg_pool2d_out(
&self,
out: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_backward(
&self,
grad_input: &Tensor,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_out(
&self,
out: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_max_pool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_addbmm_out(
&self,
out: &Tensor,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addcdiv_out(
&self,
out: &Tensor,
tensor1: &Tensor,
tensor2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addcmul_out(
&self,
out: &Tensor,
tensor1: &Tensor,
tensor2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_allclose(
&self,
other: &Tensor,
rtol: f64,
atol: f64,
equal_nan: bool
) -> Result<bool, TchError>
pub fn f_argmax_out(
&self,
out: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_argmin_out(
&self,
out: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_as_strided(
&self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_as_strided_(
&mut self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_baddbmm_out(
&self,
out: &Tensor,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_backward_elemt<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
mean_dy: &Tensor,
mean_dy_xmu: &Tensor,
count: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_backward_reduce<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
input_g: bool,
weight_g: bool,
bias_g: bool
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_elemt<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_elemt_out<T>(
&self,
out: &Tensor,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_gather_stats<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
count: i64
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_gather_stats_with_counts<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
counts: &Tensor
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_update_stats<T>(
&self,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_backward_grad_input<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_with_logits<T>(
&self,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_with_logits_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_bincount<T>(
&self,
weights: Option<T>,
minlength: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_bitwise_and_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_or_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_xor_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bucketize(
&self,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_bucketize_tensor_out(
&self,
out: &Tensor,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_cholesky_solve_out(
&self,
out: &Tensor,
input2: &Tensor,
upper: bool
) -> Result<Tensor, TchError>
pub fn f_choose_qparams_optimized(
&self,
numel: i64,
n_bins: i64,
ratio: f64,
bit_width: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_clamp_max_out<S>(
&self,
out: &Tensor,
max: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clamp_min_out<S>(
&self,
out: &Tensor,
min: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clamp_out<S>(
&self,
out: &Tensor,
min: S,
max: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clamp_tensor<T>(
&self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_clamp_tensor_<T>(
&mut self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_clamp_tensor_out<T>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_clip_out<S>(
&self,
out: &Tensor,
min: S,
max: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clip_tensor<T>(
&self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_clip_tensor_<T>(
&mut self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_clip_tensor_out<T>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_col2im(
&self,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_col2im_out(
&self,
out: &Tensor,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_conv1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv1d_padding<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv2d_padding<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv3d_padding<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_depthwise3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_depthwise3d_backward(
&self,
grad_input: &Tensor,
grad_weight: &Tensor,
grad_bias: &Tensor,
grad_output: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_conv_tbc_backward(
&self,
input: &Tensor,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_conv_transpose1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_transpose2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_transpose3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_convolution_overrideable<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_copy_sparse_to_sparse_(
&mut self,
src: &Tensor,
non_blocking: bool
) -> Result<Tensor, TchError>
pub fn f_copysign_scalar_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_copysign_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_cross_entropy_loss<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_cross_out(
&self,
out: &Tensor,
other: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_cudnn_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64,
reservespace: &Tensor
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_add_relu<T, S>(
&self,
weight: &Tensor,
z: &Tensor,
alpha: S,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn f_cudnn_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_deprecated<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_relu<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution_transpose(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_deprecated<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution_transpose_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_grid_sampler_backward(
&self,
grid: &Tensor,
grad_output: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cummax_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cummaxmin_backward(
&self,
grad: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_cummin_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cumprod_backward(
&self,
grad: &Tensor,
dim: i64,
output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_diff<T>(
&self,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_diff_out<T>(
&self,
out: &Tensor,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_div_out_mode(
&self,
out: &Tensor,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_div_scalar_mode<S>(
&self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_div_scalar_mode_<S>(
&mut self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_div_tensor_mode_(
&mut self,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_out_mode(
&self,
out: &Tensor,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_scalar_mode<S>(
&self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_divide_scalar_mode_<S>(
&mut self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_divide_tensor_mode(
&self,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_tensor_mode_(
&mut self,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_eig_e(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_embedding_renorm_(
&mut self,
indices: &Tensor,
max_norm: f64,
norm_type: f64
) -> Result<Tensor, TchError>
pub fn f_eq_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fake_quantize_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_channel_affine_cachemask(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fake_quantize_per_tensor_affine(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_tensor_affine_cachemask(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fbgemm_linear_fp16_weight(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_fp16_weight_fp32_activation(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_int8_weight<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fbgemm_linear_int8_weight_fp32_activation<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fft_fft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_hfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_hfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ihfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ihfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fill_diagonal_<S>(
&mut self,
fill_value: S,
wrap: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_float_power_tensor_scalar<S>(
&self,
exponent: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_float_power_tensor_scalar_out<S>(
&self,
out: &Tensor,
exponent: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_float_power_tensor_tensor_out(
&self,
out: &Tensor,
exponent: &Tensor
) -> Result<Tensor, TchError>
pub fn f_floor_divide_scalar_<S>(
&mut self,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fmod_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fractional_max_pool2d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fractional_max_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool2d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fractional_max_pool3d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fractional_max_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool3d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_frexp_tensor_out(
&self,
mantissa: &Tensor,
exponent: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_frobenius_norm_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_gather_backward(
&self,
grad: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
pub fn f_gather_out(
&self,
out: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
pub fn f_ge_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_glu_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_greater_equal_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_greater_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_grid_sampler(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_grid_sampler_2d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_grid_sampler_2d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_grid_sampler_3d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_grid_sampler_3d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_group_norm<T>(
&self,
num_groups: i64,
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_gru<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_gru_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_gt_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_hardshrink_backward<S>(
&self,
grad_out: &Tensor,
lambd: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_hardtanh_backward<S>(
&self,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_hardtanh_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_hinge_embedding_loss(
&self,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_huber_loss(
&self,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_huber_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_huber_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_huber_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_im2col(
&self,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_im2col_out(
&self,
out: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_index<T>(&self, indices: &[Option<T>]) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_index_add_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_add_alpha<S>(
&self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_index_add_alpha_<S>(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_index_copy_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_fill<S>(
&self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_index_fill_<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_index_fill_int_tensor(
&self,
dim: i64,
index: &Tensor,
value: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_fill_int_tensor_(
&mut self,
dim: i64,
index: &Tensor,
value: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_put<T>(
&self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_index_put_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_index_select_out(
&self,
out: &Tensor,
dim: i64,
index: &Tensor
) -> Result<Tensor, TchError>
pub fn f_infinitely_differentiable_gelu_backward(
&self,
grad: &Tensor
) -> Result<Tensor, TchError>
pub fn f_instance_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
use_input_stats: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_isclose(
&self,
other: &Tensor,
rtol: f64,
atol: f64,
equal_nan: bool
) -> Result<Tensor, TchError>
pub fn f_istft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
center: bool,
normalized: bool,
onesided: bool,
length: impl Into<Option<i64>>,
return_complex: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_kl_div(
&self,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Result<Tensor, TchError>
pub fn f_kl_div_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Result<Tensor, TchError>
pub fn f_kthvalue_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enable: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_le_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_leaky_relu_backward<S>(
&self,
grad_output: &Tensor,
negative_slope: S,
self_is_result: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_lerp_<S>(
&mut self,
end: &Tensor,
weight: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_lerp_scalar_out<S>(
&self,
out: &Tensor,
end: &Tensor,
weight: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_lerp_tensor_out(
&self,
out: &Tensor,
end: &Tensor,
weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_less_equal_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_less_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_linalg_cholesky_ex_l(
&self,
l: &Tensor,
info: &Tensor,
check_errors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_cond_out<S>(
&self,
out: &Tensor,
p: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_linalg_eig_out(
&self,
eigenvalues: &Tensor,
eigenvectors: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_eigh_eigvals(
&self,
eigvals: &Tensor,
eigvecs: &Tensor,
uplo: &str
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_householder_product_out(
&self,
out: &Tensor,
tau: &Tensor
) -> Result<Tensor, TchError>
pub fn f_linalg_inv_ex_inverse(
&self,
inverse: &Tensor,
info: &Tensor,
check_errors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_lstsq(
&self,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_linalg_lstsq_out(
&self,
solution: &Tensor,
residuals: &Tensor,
rank: &Tensor,
singular_values: &Tensor,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_linalg_matrix_norm<S>(
&self,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_linalg_matrix_norm_out<S>(
&self,
out: &Tensor,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_linalg_matrix_norm_str_ord(
&self,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_norm_str_ord_out(
&self,
out: &Tensor,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank(
&self,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank_out(
&self,
out: &Tensor,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank_out_tol_tensor(
&self,
out: &Tensor,
tol: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank_tol_tensor(
&self,
tol: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_norm<'a, S>(
&self,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_linalg_norm_ord_str<'a>(
&self,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_norm_ord_str_out<'a>(
&self,
out: &Tensor,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_norm_out<'a, S>(
&self,
out: &Tensor,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_linalg_pinv_out(
&self,
out: &Tensor,
rcond: f64,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_pinv_out_rcond_tensor(
&self,
out: &Tensor,
rcond: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_pinv_rcond_tensor(
&self,
rcond: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_qr_out(
&self,
q: &Tensor,
r: &Tensor,
mode: &str
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_slogdet_out(
&self,
sign: &Tensor,
logabsdet: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_svd_u(
&self,
u: &Tensor,
s: &Tensor,
vh: &Tensor,
full_matrices: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_linalg_tensorsolve<'a>(
&self,
other: &Tensor,
dims: impl Into<Option<&'a [i64]>>
) -> Result<Tensor, TchError>
pub fn f_linalg_tensorsolve_out<'a>(
&self,
out: &Tensor,
other: &Tensor,
dims: impl Into<Option<&'a [i64]>>
) -> Result<Tensor, TchError>
pub fn f_linear<T>(
&self,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_log_sigmoid_backward(
&self,
grad_output: &Tensor,
buffer: &Tensor
) -> Result<Tensor, TchError>
pub fn f_log_sigmoid_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
buffer: &Tensor
) -> Result<Tensor, TchError>
pub fn f_logit_backward(
&self,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_logit_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_logsumexp_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_lstm<T>(
&self,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_lstm_cell<T>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_lt_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_lu_solve_out(
&self,
out: &Tensor,
lu_data: &Tensor,
lu_pivots: &Tensor
) -> Result<Tensor, TchError>
pub fn f_masked_fill<S>(
&self,
mask: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_masked_fill_<S>(
&mut self,
mask: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_max_dim_max(
&self,
max: &Tensor,
max_values: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_max_pool1d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_max_pool2d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool2d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool2d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool2d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_max_pool3d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool3d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool3d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool3d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_unpool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d(
&self,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_mean_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_median_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_min_dim_min(
&self,
min: &Tensor,
min_indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_miopen_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_miopen_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_miopen_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_miopen_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_convolution_transpose<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_miopen_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_depthwise_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_miopen_depthwise_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> Result<(Tensor, Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_mkldnn_adaptive_avg_pool2d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_mkldnn_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_mkldnn_convolution_backward_weights(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_mkldnn_linear<T>(
&self,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_mkldnn_linear_backward_weights(
&self,
grad_output: &Tensor,
weight: &Tensor,
bias_defined: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_mkldnn_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_max_pool2d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_max_pool3d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_reorder_conv2d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_mkldnn_reorder_conv3d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_mode_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_mse_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_mse_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_mse_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multi_margin_loss_backward<T, S>(
&self,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn f_multi_margin_loss_backward_grad_input<T, S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn f_multilabel_margin_loss(
&self,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multinomial_out(
&self,
out: &Tensor,
num_samples: i64,
replacement: bool
) -> Result<Tensor, TchError>
pub fn f_multiply_scalar_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_nan_to_num(
&self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_nan_to_num_(
&mut self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_nan_to_num_out(
&self,
out: &Tensor,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_nanmedian_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_nanquantile(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nanquantile_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nanquantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nansum_dim_intlist(
&self,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_nansum_intlist_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_narrow_copy_out(
&self,
out: &Tensor,
dim: i64,
start: i64,
length: i64
) -> Result<Tensor, TchError>
pub fn f_native_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_native_batch_norm_out<T>(
&self,
out: &Tensor,
save_mean: &Tensor,
save_invstd: &Tensor,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_native_group_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
n: i64,
c: i64,
hxw: i64,
group: i64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_native_layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_native_norm_scalaropt_dim_dtype<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_ne_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_new_empty_strided(
&self,
size: &[i64],
stride: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_new_full<S>(
&self,
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_nll_loss<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss2d<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss2d_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss2d_backward_grad_input<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss2d_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss_backward_grad_input<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss_nd<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_nll_loss_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_norm_dtype_out<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_norm_out<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_norm_scalaropt_dim<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_norm_scalaropt_dim_dtype<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_norm_scalaropt_dtype<S>(
&self,
p: S,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_not_equal_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_nuclear_norm_dim_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_ormqr(
&self,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Result<Tensor, TchError>
pub fn f_ormqr_out(
&self,
out: &Tensor,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Result<Tensor, TchError>
pub fn f_poisson_nll_loss(
&self,
target: &Tensor,
log_input: bool,
full: bool,
eps: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_pow_tensor_scalar_out<S>(
&self,
out: &Tensor,
exponent: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_prelu_backward(
&self,
grad_output: &Tensor,
weight: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_prod_int_out(
&self,
out: &Tensor,
dim: i64,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_put_(
&mut self,
index: &Tensor,
source: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError>
pub fn f_quantile(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantile_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantize_per_channel(
&self,
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_quantize_per_tensor(
&self,
scale: f64,
zero_point: i64,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_quantized_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
var: &Tensor,
eps: f64,
output_scale: f64,
output_zero_point: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_quantized_gru_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_quantized_lstm_cell<T, S>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn f_quantized_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_quantized_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_quantized_rnn_relu_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_quantized_rnn_tanh_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_random_from_(
&mut self,
from: i64,
to: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_reflection_pad1d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_reflection_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_reflection_pad2d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_reflection_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_remainder_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_renorm<S>(
&self,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_renorm_<S>(
&mut self,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_renorm_out<S>(
&self,
out: &Tensor,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_repeat_interleave_self_int(
&self,
repeats: i64,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_repeat_interleave_self_tensor(
&self,
repeats: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_replication_pad1d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad2d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad3d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_rnn_relu<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_rnn_relu_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_rnn_tanh<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_rnn_tanh_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_rrelu_with_noise_backward<S>(
&self,
grad_output: &Tensor,
noise: &Tensor,
lower: S,
upper: S,
training: bool,
self_is_result: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_rrelu_with_noise_out(
&self,
out: &Tensor,
noise: &Tensor,
training: bool
) -> Result<Tensor, TchError>
pub fn f_scatter_add_(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor
) -> Result<Tensor, TchError>
pub fn f_scatter_reduce_(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor,
reduce: &str
) -> Result<Tensor, TchError>
pub fn f_scatter_value<S>(
&self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_scatter_value_<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_scatter_value_reduce_<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S,
reduce: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_searchsorted(
&self,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_searchsorted_tensor_out(
&self,
out: &Tensor,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_slice(
&self,
dim: i64,
start: impl Into<Option<i64>>,
end: impl Into<Option<i64>>,
step: i64
) -> Result<Tensor, TchError>
pub fn f_slow_conv3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv_dilated2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv_dilated3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv_transpose2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv_transpose2d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv_transpose3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_slow_conv_transpose3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_smooth_l1_loss(
&self,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss(
&self,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_softplus_backward<S>(
&self,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_softplus_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_softshrink_backward<S>(
&self,
grad_output: &Tensor,
lambd: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_softshrink_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
lambd: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_solve_solution(
&self,
solution: &Tensor,
lu: &Tensor,
a: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sort_stable(
&self,
stable: bool,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sort_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sort_values_stable(
&self,
values: &Tensor,
indices: &Tensor,
stable: bool,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sparse_resize_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Result<Tensor, TchError>
pub fn f_sparse_resize_and_clear_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Result<Tensor, TchError>
pub fn f_special_logit_out(
&self,
out: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_special_xlog1py_other_scalar<S>(
&self,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_special_xlog1py_other_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_split_with_sizes(
&self,
split_sizes: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
pub fn f_sspaddmm_out(
&self,
out: &Tensor,
mat1: &Tensor,
mat2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_std_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_std_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_std_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_std_mean_dim(
&self,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_std_out(
&self,
out: &Tensor,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_stft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
normalized: bool,
onesided: bool,
return_complex: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_subtract_scalar_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_sum_dim_intlist(
&self,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_sum_intlist_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_svd_u(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_symeig_e(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool,
upper: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_take_along_dim(
&self,
indices: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_take_along_dim_out(
&self,
out: &Tensor,
indices: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_tensor_split_indices(
&self,
indices: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
pub fn f_tensor_split_tensor_indices_or_sections(
&self,
tensor_indices_or_sections: &Tensor,
dim: i64
) -> Result<Vec<Tensor>, TchError>
pub fn f_tensordot(
&self,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Result<Tensor, TchError>
pub fn f_tensordot_out(
&self,
out: &Tensor,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Result<Tensor, TchError>
pub fn f_threshold<S>(&self, threshold: S, value: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_threshold_<S>(
&mut self,
threshold: S,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_threshold_backward<S>(
&self,
grad_output: &Tensor,
threshold: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_threshold_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
threshold: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_threshold_out<S>(
&self,
out: &Tensor,
threshold: S,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_to_device_(
&self,
device: Device,
dtype: Kind,
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
pub fn f_to_dtype_layout(
&self,
options: (Kind, Device),
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
pub fn f_to_other(
&self,
other: &Tensor,
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
pub fn f_topk(
&self,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_topk_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_triangular_solve(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_triangular_solve_x(
&self,
x: &Tensor,
m: &Tensor,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_true_divide_scalar_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_unique_consecutive(
&self,
return_inverse: bool,
return_counts: bool,
dim: impl Into<Option<i64>>
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_unique_dim(
&self,
dim: i64,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_unique_dim_consecutive(
&self,
dim: i64,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_unsafe_split_with_sizes(
&self,
split_sizes: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
pub fn f_upsample_bicubic2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d(
&self,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d(
&self,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d(
&self,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d(
&self,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d(
&self,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_var_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_var_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_var_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_var_mean_dim(
&self,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_var_out(
&self,
out: &Tensor,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_where_scalarother<S>(
&self,
condition: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_xlogy_outscalar_other<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn internal_amp_update_scale_(
&mut self,
growth_tracker: &Tensor,
found_inf: &Tensor,
scale_growth_factor: f64,
scale_backoff_factor: f64,
growth_interval: i64
) -> Tensor
pub fn internal_compute_linear_combination_out(
&self,
out: &Tensor,
coefficients: &Tensor
) -> Tensor
pub fn internal_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool,
allow_tf32: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn internal_convolution_deprecated<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn internal_convolution_mode<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn internal_convolution_nogroup<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn internal_cudnn_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
weight_buf: Option<T>,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> (Tensor, Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn internal_fake_quantize_learnable_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Tensor
pub fn internal_fake_quantize_learnable_per_channel_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> (Tensor, Tensor, Tensor)
pub fn internal_fake_quantize_learnable_per_tensor_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Tensor
pub fn internal_fake_quantize_learnable_per_tensor_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> (Tensor, Tensor, Tensor)
pub fn internal_fft_c2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
forward: bool
) -> Tensor
pub fn internal_fft_c2r_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Tensor
pub fn internal_fft_r2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Tensor
pub fn internal_gather_sparse_backward(
&self,
dim: i64,
index: &Tensor,
grad: &Tensor
) -> Tensor
pub fn internal_grid_sampler_2d_cpu_fallback(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn internal_grid_sampler_2d_cpu_fallback_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
pub fn internal_index_put_impl_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool,
unsafe_: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn internal_linalg_inv_out_helper_(
&mut self,
infos_lu: &Tensor,
infos_getri: &Tensor
) -> Tensor
pub fn internal_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_make_per_channel_quantized_tensor(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64
) -> Tensor
pub fn internal_make_per_tensor_quantized_tensor(
&self,
scale: f64,
zero_point: i64
) -> Tensor
pub fn internal_nnpack_spatial_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn internal_nnpack_spatial_convolution_backward_input(
&self,
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64]
) -> Tensor
pub fn internal_nnpack_spatial_convolution_backward_weight(
&self,
weightsize: &[i64],
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn internal_pack_padded_sequence(
&self,
lengths: &Tensor,
batch_first: bool
) -> (Tensor, Tensor)
pub fn internal_sobol_engine_ff_(
&mut self,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64
) -> Tensor
pub fn internal_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_sparse_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_sparse_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_unique2(
&self,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
pub fn adaptive_avg_pool3d_backward(
&self,
grad_input: &Tensor,
grad_output: &Tensor
) -> Tensor
pub fn adaptive_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Tensor
pub fn adaptive_max_pool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> (Tensor, Tensor)
pub fn adaptive_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Tensor
pub fn adaptive_max_pool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> (Tensor, Tensor)
pub fn as_strided(
&self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Tensor
pub fn as_strided_(
&mut self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool
) -> Tensor
pub fn avg_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool2d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn batch_norm_backward_elemt<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
mean_dy: &Tensor,
mean_dy_xmu: &Tensor,
count: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
pub fn batch_norm_backward_reduce<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
input_g: bool,
weight_g: bool,
bias_g: bool
) -> (Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn batch_norm_elemt<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Tensor where
T: Borrow<Tensor>,
pub fn batch_norm_elemt_out<T>(
&self,
out: &Tensor,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Tensor where
T: Borrow<Tensor>,
pub fn batch_norm_gather_stats<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
count: i64
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn batch_norm_gather_stats_with_counts<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
counts: &Tensor
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn batch_norm_update_stats<T>(
&self,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn binary_cross_entropy<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
pub fn binary_cross_entropy_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
pub fn binary_cross_entropy_backward_grad_input<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
pub fn binary_cross_entropy_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
pub fn binary_cross_entropy_with_logits<T>(
&self,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
pub fn binary_cross_entropy_with_logits_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
pub fn bitwise_and_scalar_out<S>(&self, out: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
pub fn bitwise_or_scalar_out<S>(&self, out: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
pub fn bitwise_xor_scalar_out<S>(&self, out: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
pub fn bucketize_tensor_out(
&self,
out: &Tensor,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
pub fn choose_qparams_optimized(
&self,
numel: i64,
n_bins: i64,
ratio: f64,
bit_width: i64
) -> (Tensor, Tensor)
pub fn clamp_tensor_<T>(&mut self, min: Option<T>, max: Option<T>) -> Tensor where
T: Borrow<Tensor>,
pub fn clamp_tensor_out<T>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn clip_tensor_<T>(&mut self, min: Option<T>, max: Option<T>) -> Tensor where
T: Borrow<Tensor>,
pub fn clip_tensor_out<T>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn col2im(
&self,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn col2im_out(
&self,
out: &Tensor,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn conv1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv1d_padding<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv2d_padding<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv3d_padding<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv_depthwise3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv_depthwise3d_backward(
&self,
grad_input: &Tensor,
grad_weight: &Tensor,
grad_bias: &Tensor,
grad_output: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> (Tensor, Tensor, Tensor)
pub fn conv_tbc_backward(
&self,
input: &Tensor,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> (Tensor, Tensor, Tensor)
pub fn conv_transpose1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv_transpose2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn conv_transpose3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn convolution_overrideable<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn cross_entropy_loss<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn cudnn_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> (Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn cudnn_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64,
reservespace: &Tensor
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn cudnn_convolution(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_add_relu<T, S>(
&self,
weight: &Tensor,
z: &Tensor,
alpha: S,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn cudnn_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_deprecated<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn cudnn_convolution_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn cudnn_convolution_relu<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn cudnn_convolution_transpose(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_transpose_deprecated<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn cudnn_convolution_transpose_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn diff<T>(
&self,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn diff_out<T>(
&self,
out: &Tensor,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn g_div_scalar_mode<S>(&self, other: S, rounding_mode: &str) -> Tensor where
S: Into<Scalar>,
pub fn g_div_scalar_mode_<S>(&mut self, other: S, rounding_mode: &str) -> Tensor where
S: Into<Scalar>,
pub fn divide_scalar_mode<S>(&self, other: S, rounding_mode: &str) -> Tensor where
S: Into<Scalar>,
pub fn divide_scalar_mode_<S>(
&mut self,
other: S,
rounding_mode: &str
) -> Tensor where
S: Into<Scalar>,
pub fn embedding_renorm_(
&mut self,
indices: &Tensor,
max_norm: f64,
norm_type: f64
) -> Tensor
pub fn fake_quantize_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Tensor
pub fn fake_quantize_per_channel_affine_cachemask(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> (Tensor, Tensor)
pub fn fake_quantize_per_tensor_affine(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Tensor
pub fn fake_quantize_per_tensor_affine_cachemask(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> (Tensor, Tensor)
pub fn fbgemm_linear_fp16_weight_fp32_activation(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Tensor
pub fn fbgemm_linear_int8_weight<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Tensor where
S: Into<Scalar>,
pub fn fbgemm_linear_int8_weight_fp32_activation<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Tensor where
S: Into<Scalar>,
pub fn fft_fft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_fftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_fftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_hfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_ifft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_ifft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_ifftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_ifftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_ihfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_irfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_irfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_irfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_irfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_rfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_rfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_rfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_rfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fill_diagonal_<S>(&mut self, fill_value: S, wrap: bool) -> Tensor where
S: Into<Scalar>,
pub fn float_power_tensor_scalar_out<S>(
&self,
out: &Tensor,
exponent: S
) -> Tensor where
S: Into<Scalar>,
pub fn fractional_max_pool2d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn fractional_max_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool2d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn fractional_max_pool3d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn fractional_max_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool3d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn gather_backward(
&self,
grad: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Tensor
pub fn glu_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
dim: i64
) -> Tensor
pub fn greater_equal_scalar_out<S>(&self, out: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
pub fn grid_sampler(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn grid_sampler_2d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn grid_sampler_2d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
pub fn grid_sampler_3d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn grid_sampler_3d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
pub fn group_norm<T>(
&self,
num_groups: i64,
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn gru<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn gru_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn hardshrink_backward<S>(&self, grad_out: &Tensor, lambd: S) -> Tensor where
S: Into<Scalar>,
pub fn hardtanh_backward<S>(
&self,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Tensor where
S: Into<Scalar>,
pub fn hardtanh_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Tensor where
S: Into<Scalar>,
pub fn hinge_embedding_loss(
&self,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Tensor
pub fn huber_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Tensor
pub fn huber_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Tensor
pub fn huber_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Tensor
pub fn im2col(
&self,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn im2col_out(
&self,
out: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn index_add_alpha<S>(
&self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Tensor where
S: Into<Scalar>,
pub fn index_add_alpha_<S>(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Tensor where
S: Into<Scalar>,
pub fn index_fill_<S>(&mut self, dim: i64, index: &Tensor, value: S) -> Tensor where
S: Into<Scalar>,
pub fn index_put<T>(
&self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn index_put_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn instance_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
use_input_stats: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn istft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
center: bool,
normalized: bool,
onesided: bool,
length: impl Into<Option<i64>>,
return_complex: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn kl_div_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Tensor
pub fn kthvalue_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enable: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn leaky_relu_backward<S>(
&self,
grad_output: &Tensor,
negative_slope: S,
self_is_result: bool
) -> Tensor where
S: Into<Scalar>,
pub fn lerp_scalar_out<S>(
&self,
out: &Tensor,
end: &Tensor,
weight: S
) -> Tensor where
S: Into<Scalar>,
pub fn less_equal_scalar_out<S>(&self, out: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
pub fn linalg_cholesky_ex_l(
&self,
l: &Tensor,
info: &Tensor,
check_errors: bool
) -> (Tensor, Tensor)
pub fn linalg_eigh_eigvals(
&self,
eigvals: &Tensor,
eigvecs: &Tensor,
uplo: &str
) -> (Tensor, Tensor)
pub fn linalg_inv_ex_inverse(
&self,
inverse: &Tensor,
info: &Tensor,
check_errors: bool
) -> (Tensor, Tensor)
pub fn linalg_lstsq(
&self,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn linalg_lstsq_out(
&self,
solution: &Tensor,
residuals: &Tensor,
rank: &Tensor,
singular_values: &Tensor,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn linalg_matrix_norm<S>(
&self,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn linalg_matrix_norm_out<S>(
&self,
out: &Tensor,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn linalg_matrix_norm_str_ord(
&self,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_matrix_norm_str_ord_out(
&self,
out: &Tensor,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_matrix_rank_out(
&self,
out: &Tensor,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Tensor
pub fn linalg_matrix_rank_out_tol_tensor(
&self,
out: &Tensor,
tol: &Tensor,
hermitian: bool
) -> Tensor
pub fn linalg_norm<'a, S>(
&self,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn linalg_norm_ord_str<'a>(
&self,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_norm_ord_str_out<'a>(
&self,
out: &Tensor,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_norm_out<'a, S>(
&self,
out: &Tensor,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn linalg_pinv_out_rcond_tensor(
&self,
out: &Tensor,
rcond: &Tensor,
hermitian: bool
) -> Tensor
pub fn linalg_svd_u(
&self,
u: &Tensor,
s: &Tensor,
vh: &Tensor,
full_matrices: bool
) -> (Tensor, Tensor, Tensor)
pub fn linalg_tensorsolve_out<'a>(
&self,
out: &Tensor,
other: &Tensor,
dims: impl Into<Option<&'a [i64]>>
) -> Tensor
pub fn log_sigmoid_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
buffer: &Tensor
) -> Tensor
pub fn logit_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Tensor
pub fn lstm<T>(
&self,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn lstm_cell<T>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn max_dim_max(
&self,
max: &Tensor,
max_values: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn max_pool1d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn max_pool2d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool2d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool2d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool2d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn max_pool3d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool3d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool3d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool3d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_unpool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Tensor
pub fn max_unpool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Tensor
pub fn max_unpool3d(
&self,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn max_unpool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn max_unpool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn max_unpool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn median_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn min_dim_min(
&self,
min: &Tensor,
min_indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn miopen_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn miopen_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn miopen_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn miopen_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_convolution_transpose<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn miopen_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_depthwise_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn miopen_depthwise_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> (Tensor, Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn mkldnn_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn mkldnn_convolution_backward_weights(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> (Tensor, Tensor)
pub fn mkldnn_linear_backward_weights(
&self,
grad_output: &Tensor,
weight: &Tensor,
bias_defined: bool
) -> (Tensor, Tensor)
pub fn mkldnn_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_max_pool2d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_max_pool3d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_reorder_conv2d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn mkldnn_reorder_conv3d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn mode_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn mse_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn mse_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn multi_margin_loss_backward<T, S>(
&self,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn multi_margin_loss_backward_grad_input<T, S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn multilabel_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Tensor
pub fn multilabel_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Tensor
pub fn multilabel_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn nan_to_num(
&self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
pub fn nan_to_num_(
&mut self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
pub fn nan_to_num_out(
&self,
out: &Tensor,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
pub fn nanmedian_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn nanquantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn nanquantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn native_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn native_batch_norm_out<T>(
&self,
out: &Tensor,
save_mean: &Tensor,
save_invstd: &Tensor,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn native_group_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
n: i64,
c: i64,
hxw: i64,
group: i64,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn native_layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn native_norm_scalaropt_dim_dtype<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn new_full<S>(
&self,
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Tensor where
S: Into<Scalar>,
pub fn g_nll_loss<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss2d<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss2d_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss2d_backward_grad_input<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss2d_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss_backward_grad_input<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss_nd<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn nll_loss_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn norm_dtype_out<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn norm_out<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool
) -> Tensor where
S: Into<Scalar>,
pub fn norm_scalaropt_dim<S>(&self, p: S, dim: &[i64], keepdim: bool) -> Tensor where
S: Into<Scalar>,
pub fn norm_scalaropt_dim_dtype<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
pub fn ormqr_out(
&self,
out: &Tensor,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Tensor
pub fn poisson_nll_loss(
&self,
target: &Tensor,
log_input: bool,
full: bool,
eps: f64,
reduction: Reduction
) -> Tensor
pub fn pow_tensor_scalar_out<S>(&self, out: &Tensor, exponent: S) -> Tensor where
S: Into<Scalar>,
pub fn quantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn quantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
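The quantile methods above accept an interpolation mode. A hypothetical plain-Rust sketch of the "linear" mode, assuming sorted input: the q-quantile sits at virtual index q * (n - 1), interpolated between its two integer neighbours.

```rust
// Sketch of linear-interpolated quantiles over a sorted slice.
fn quantile_linear(sorted: &[f64], q: f64) -> f64 {
    assert!(!sorted.is_empty() && (0.0..=1.0).contains(&q));
    let pos = q * (sorted.len() - 1) as f64;
    let lo = pos.floor() as usize;
    let hi = pos.ceil() as usize;
    let frac = pos - lo as f64;
    // Interpolate between the two nearest order statistics.
    sorted[lo] + (sorted[hi] - sorted[lo]) * frac
}

fn main() {
    let data = [1.0, 2.0, 3.0, 4.0];
    assert_eq!(quantile_linear(&data, 0.5), 2.5);
    assert_eq!(quantile_linear(&data, 0.0), 1.0);
    assert_eq!(quantile_linear(&data, 1.0), 4.0);
}
```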
pub fn quantize_per_channel(
&self,
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
dtype: Kind
) -> Tensor
pub fn quantized_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
var: &Tensor,
eps: f64,
output_scale: f64,
output_zero_point: i64
) -> Tensor where
T: Borrow<Tensor>,
pub fn quantized_gru_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor where
S: Into<Scalar>,
pub fn quantized_lstm_cell<T, S>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
S: Into<Scalar>,
pub fn quantized_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn quantized_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn quantized_rnn_relu_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor where
S: Into<Scalar>,
pub fn quantized_rnn_tanh_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor where
S: Into<Scalar>,
pub fn reflection_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn reflection_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn renorm_out<S>(&self, out: &Tensor, p: S, dim: i64, maxnorm: S) -> Tensor where
S: Into<Scalar>,
pub fn repeat_interleave_self_tensor(
&self,
repeats: &Tensor,
dim: impl Into<Option<i64>>
) -> Tensor
pub fn replication_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn replication_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn replication_pad3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn rnn_relu<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn rnn_relu_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn rnn_tanh<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
pub fn rnn_tanh_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
pub fn rrelu_with_noise_backward<S>(
&self,
grad_output: &Tensor,
noise: &Tensor,
lower: S,
upper: S,
training: bool,
self_is_result: bool
) -> Tensor where
S: Into<Scalar>,
pub fn scatter_value_<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Tensor where
S: Into<Scalar>,
pub fn scatter_value_reduce_<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S,
reduce: &str
) -> Tensor where
S: Into<Scalar>,
pub fn searchsorted_tensor_out(
&self,
out: &Tensor,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
pub fn slice(
&self,
dim: i64,
start: impl Into<Option<i64>>,
end: impl Into<Option<i64>>,
step: i64
) -> Tensor
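slice takes optional start and end indices plus a step along one dimension. A hypothetical one-dimensional sketch of those semantics in plain Rust (omitting negative-index handling):

```rust
// Sketch of slice semantics for a 1-D sequence: elements from
// start (default 0) up to end (default len), stepping by step.
fn slice_1d(v: &[i64], start: Option<i64>, end: Option<i64>, step: i64) -> Vec<i64> {
    let n = v.len() as i64;
    let s = start.unwrap_or(0).clamp(0, n);
    let e = end.unwrap_or(n).clamp(0, n);
    (s..e).step_by(step as usize).map(|i| v[i as usize]).collect()
}

fn main() {
    let v = [10, 20, 30, 40, 50];
    assert_eq!(slice_1d(&v, Some(1), Some(4), 1), vec![20, 30, 40]);
    assert_eq!(slice_1d(&v, None, None, 2), vec![10, 30, 50]);
}
```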
pub fn slow_conv3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv_dilated2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv_dilated3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv_transpose2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv_transpose2d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv_transpose3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn slow_conv_transpose3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
pub fn smooth_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
pub fn smooth_l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
pub fn smooth_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
pub fn soft_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn soft_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn soft_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn softplus_backward<S>(
&self,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Tensor where
S: Into<Scalar>,
pub fn softplus_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Tensor where
S: Into<Scalar>,
pub fn softshrink_backward<S>(&self, grad_output: &Tensor, lambd: S) -> Tensor where
S: Into<Scalar>,
pub fn softshrink_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
lambd: S
) -> Tensor where
S: Into<Scalar>,
pub fn sort_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
descending: bool
) -> (Tensor, Tensor)
pub fn sort_values_stable(
&self,
values: &Tensor,
indices: &Tensor,
stable: bool,
dim: i64,
descending: bool
) -> (Tensor, Tensor)
pub fn sparse_resize_and_clear_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Tensor
pub fn special_xlog1py_other_scalar_out<S>(
&self,
out: &Tensor,
other: S
) -> Tensor where
S: Into<Scalar>,
pub fn std_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn std_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn std_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> (Tensor, Tensor)
pub fn stft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
normalized: bool,
onesided: bool,
return_complex: bool
) -> Tensor where
T: Borrow<Tensor>,
pub fn svd_u(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
some: bool,
compute_uv: bool
) -> (Tensor, Tensor, Tensor)
pub fn symeig_e(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool,
upper: bool
) -> (Tensor, Tensor)
pub fn take_along_dim_out(
&self,
out: &Tensor,
indices: &Tensor,
dim: impl Into<Option<i64>>
) -> Tensor
pub fn tensor_split_tensor_indices_or_sections(
&self,
tensor_indices_or_sections: &Tensor,
dim: i64
) -> Vec<Tensor, Global>
pub fn tensordot_out(
&self,
out: &Tensor,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Tensor
pub fn threshold_backward<S>(
&self,
grad_output: &Tensor,
threshold: S
) -> Tensor where
S: Into<Scalar>,
pub fn threshold_backward_grad_input<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
threshold: S
) -> Tensor where
S: Into<Scalar>,
pub fn threshold_out<S>(&self, out: &Tensor, threshold: S, value: S) -> Tensor where
S: Into<Scalar>,
pub fn topk_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> (Tensor, Tensor)
pub fn triangular_solve(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
pub fn triangular_solve_x(
&self,
x: &Tensor,
m: &Tensor,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
pub fn unique_consecutive(
&self,
return_inverse: bool,
return_counts: bool,
dim: impl Into<Option<i64>>
) -> (Tensor, Tensor, Tensor)
pub fn unique_dim(
&self,
dim: i64,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
pub fn unique_dim_consecutive(
&self,
dim: i64,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
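The unique_consecutive variants above collapse runs of equal adjacent values rather than deduplicating globally. A hypothetical plain-Rust sketch of that core, with run counts:

```rust
// Sketch of unique_consecutive: keep one value per run of equal
// adjacent elements, tracking how long each run was.
fn unique_consecutive(xs: &[i64]) -> (Vec<i64>, Vec<usize>) {
    let mut values = Vec::new();
    let mut counts = Vec::new();
    for &x in xs {
        if values.last() == Some(&x) {
            // Same run: bump its count.
            *counts.last_mut().unwrap() += 1;
        } else {
            values.push(x);
            counts.push(1);
        }
    }
    (values, counts)
}

fn main() {
    let (v, c) = unique_consecutive(&[1, 1, 2, 2, 3, 1]);
    // Note the trailing 1 survives: runs, not global uniqueness.
    assert_eq!(v, vec![1, 2, 3, 1]);
    assert_eq!(c, vec![2, 2, 1, 1]);
}
```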
pub fn upsample_bicubic2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bicubic2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bicubic2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_bilinear2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bilinear2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bilinear2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_linear1d(
&self,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_linear1d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_linear1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest1d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest2d(
&self,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest2d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest3d(
&self,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_trilinear3d(
&self,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
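The upsample family above interpolates a tensor to a new spatial size. A hypothetical one-dimensional sketch of the nearest-neighbour mode, assuming the usual floor-based index mapping: output index i reads from floor(i / scale) in the input.

```rust
// Sketch of 1-D nearest-neighbour upsampling by a scale factor.
fn upsample_nearest_1d(v: &[f64], scale: f64) -> Vec<f64> {
    let out_len = (v.len() as f64 * scale) as usize;
    (0..out_len)
        .map(|i| v[((i as f64) / scale) as usize]) // map back and truncate
        .collect()
}

fn main() {
    assert_eq!(
        upsample_nearest_1d(&[1.0, 2.0], 2.0),
        vec![1.0, 1.0, 2.0, 2.0]
    );
}
```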
pub fn var_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn var_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn var_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> (Tensor, Tensor)
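The correction parameter in the std/var methods above controls the divisor in the variance: dividing by n - correction, so correction = 1 gives the unbiased sample variance and correction = 0 the population variance. A hypothetical plain-Rust sketch over a flat slice:

```rust
// Sketch of variance with a Bessel-style correction term.
fn var_correction(xs: &[f64], correction: i64) -> f64 {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let ss: f64 = xs.iter().map(|x| (x - mean).powi(2)).sum();
    ss / (n - correction as f64)
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    // Sum of squared deviations from the mean (2.5) is 5.0.
    assert!((var_correction(&xs, 1) - 5.0 / 3.0).abs() < 1e-12); // sample
    assert!((var_correction(&xs, 0) - 1.25).abs() < 1e-12); // population
}
```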
pub fn where_scalarother<S>(&self, condition: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
pub fn xlogy_outscalar_other<S>(&self, out: &Tensor, other: S) -> Tensor where
S: Into<Scalar>,
Computes the cross-entropy loss based on some logits and targets.
Returns the average accuracy for the given logits, assuming that the targets represent the ground-truth labels.
Flattens a tensor.
This returns a flattened version of the given tensor. The first dimension is preserved as it is assumed to be the mini-batch dimension.
Converts a tensor to a one-hot encoded version.
If the input has a size [N1, N2, …, Nk], the returned tensor has a size [N1, …, Nk, labels]. The returned tensor uses float values. Elements of the input vector are expected to be between 0 and labels-1.
Copies a tensor to a newly allocated tensor using the same shape and device.
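The one-hot conversion described above can be sketched in plain Rust for the 1-D case (a hypothetical illustration of the shape and float semantics, not the crate's own implementation):

```rust
// Sketch of one-hot encoding: each label in 0..labels becomes a
// float row with a single 1.0 at the label's position.
fn onehot(input: &[usize], labels: usize) -> Vec<Vec<f32>> {
    input
        .iter()
        .map(|&i| {
            let mut row = vec![0.0_f32; labels];
            row[i] = 1.0;
            row
        })
        .collect()
}

fn main() {
    // Input of size [2] with 3 labels yields output of size [2, 3].
    let out = onehot(&[2, 0], 3);
    assert_eq!(out, vec![vec![0.0, 0.0, 1.0], vec![1.0, 0.0, 0.0]]);
}
```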
Trait Implementations
Auto Trait Implementations
impl RefUnwindSafe for TensorFromMat
impl Send for TensorFromMat
impl !Sync for TensorFromMat
impl Unpin for TensorFromMat
impl UnwindSafe for TensorFromMat
Blanket Implementations
Mutably borrows from an owned value. Read more
The inverse inclusion map: attempts to construct self from the equivalent element of its superset. Read more
pub fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).
pub fn to_subset_unchecked(&self) -> SS
Use with care! Same as self.to_subset but without any property checks. Always succeeds.
pub fn from_subset(element: &SS) -> SP
The inclusion map: converts self to the equivalent element of its superset.