Struct cv_convert::TensorFromMat
pub struct TensorFromMat { /* fields omitted */ }
Implementations
Methods from Deref<Target = Tensor>
Returns a pointer to the underlying C++ tensor.
The caller must ensure that the Rust tensor object outlives this pointer.
Returns a mutable pointer to the underlying C++ tensor.
The caller must ensure that the Rust tensor object outlives this pointer.
Returns the tensor size for single dimension tensors.
Returns the tensor sizes for two dimension tensors.
Returns the tensor sizes for three dimension tensors.
Returns the tensor sizes for four dimension tensors.
Returns the tensor sizes for five dimension tensors.
Returns the tensor sizes for six dimension tensors.
Returns the tensor strides for single dimension tensors.
Returns the tensor strides for two dimension tensors.
Returns the tensor strides for three dimension tensors.
Returns the tensor strides for four dimension tensors.
Returns the tensor strides for five dimension tensors.
Returns the tensor strides for six dimension tensors.
Returns the kind of elements stored in the input tensor. Returns an error on undefined tensors and unsupported data types.
Returns the kind of elements stored in the input tensor. Panics on undefined tensors and unsupported data types.
Prints the input tensor.
Caution: this uses the C++ printer, which prints the whole tensor even if it is very large.
Returns a double value on tensors holding a single element. An error is returned otherwise.
Returns an int value on tensors holding a single element. An error is returned otherwise.
Returns a double value on tensors holding a single element. Panics otherwise.
Returns an int value on tensors holding a single element. Panics otherwise.
Returns true if gradients are currently tracked for this tensor.
Runs the backward pass, populating the gradient tensors for tensors whose gradients are tracked.
Gradient tracking can be turned on via set_requires_grad.
Runs the backward pass, populating the gradient tensors for tensors whose gradients are tracked.
Gradient tracking can be turned on via set_requires_grad.
Panics if the C++ API returns an exception.
Copies numel elements from self to dst.
Unscales the tensor while checking for infinities. found_inf is a singleton tensor used to record the presence of infinite values. inv_scale is a scalar containing the inverse scaling factor. This method is only available for CUDA tensors.
pub fn internal_amp_non_finite_check_and_unscale(
&mut self,
found_inf: &mut Tensor,
inv_scale: &Tensor
)
Returns a new tensor that shares storage with the input tensor.
Gets the sub-tensor at the given index.
Copies values from the argument tensor to the input tensor.
Copies values from the argument tensor to the input tensor.
Saves a tensor to a file.
The file format is the same as the one used by the PyTorch C++ API.
pub fn f_internal_and_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_iand_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_ilshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_ior_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_irshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_ixor_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_lshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_or_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_rshift_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_xor_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_internal_adaptive_avg_pool2d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_add_batch_dim(
&self,
batch_dim: i64,
level: i64
) -> Result<Tensor, TchError>
pub fn f_internal_add_relu_out(
&self,
out: &Tensor,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_addmv_impl_(
&mut self,
self2: &Tensor,
mat: &Tensor,
vec: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_aminmax1(
&self,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_baddbmm_mkl_(
&mut self,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_bmm_out(
&self,
out: &Tensor,
mat2: &Tensor,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_internal_cholesky_solve_helper(
&self,
a: &Tensor,
upper: bool
) -> Result<Tensor, TchError>
pub fn f_internal_compute_linear_combination(
&self,
coefficients: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_compute_linear_combination_out(
&self,
out: &Tensor,
coefficients: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_convolution1<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool,
allow_tf32: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_convolution_nogroup<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_copy_from(
&self,
dst: &Tensor,
non_blocking: bool
) -> Result<Tensor, TchError>
pub fn f_internal_cudnn_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
weight_buf: Option<T>,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> Result<(Tensor, Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_internal_fake_quantize_learnable_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_channel_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_fake_quantize_learnable_per_tensor_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_tensor_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_fft_c2c(
&self,
dim: &[i64],
normalization: i64,
forward: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
forward: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2r(
&self,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2r_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Result<Tensor, TchError>
pub fn f_internal_fft_r2c(
&self,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_r2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Result<Tensor, TchError>
pub fn f_internal_gather_sparse_backward(
&self,
dim: i64,
index: &Tensor,
grad: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_grid_sampler_2d_cpu_fallback(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_internal_grid_sampler_2d_cpu_fallback_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_index_copy_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_index_put_impl_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool,
unsafe_: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_linalg_inv_out_helper_(
&mut self,
infos_lu: &Tensor,
infos_getri: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_linalg_solve_out_helper_(
&mut self,
other: &Tensor,
infos: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_log_softmax(
&self,
dim: i64,
half_to_float: bool
) -> Result<Tensor, TchError>
pub fn f_internal_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_lu_solve_helper(
&self,
lu_data: &Tensor,
lu_pivots: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_lu_with_info(
&self,
pivot: bool,
check_errors: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_make_per_channel_quantized_tensor(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64
) -> Result<Tensor, TchError>
pub fn f_internal_make_per_tensor_quantized_tensor(
&self,
scale: f64,
zero_point: i64
) -> Result<Tensor, TchError>
pub fn f_internal_mkldnn_transpose_(
&mut self,
dim0: i64,
dim1: i64
) -> Result<Tensor, TchError>
pub fn f_internal_mode_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_nnpack_spatial_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_internal_nnpack_spatial_convolution_backward_input(
&self,
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_nnpack_spatial_convolution_backward_weight(
&self,
weightsize: &[i64],
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_pack_padded_sequence(
&self,
lengths: &Tensor,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_pdist_backward(
&self,
grad: &Tensor,
p: f64,
pdist: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_remove_batch_dim(
&self,
level: i64,
batch_size: i64,
out_dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_s_where(
&self,
condition: &Tensor,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_ff_(
&mut self,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_initialize_state_(
&mut self,
dimension: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_scramble_(
&mut self,
ltm: &Tensor,
dimension: i64
) -> Result<Tensor, TchError>
pub fn f_internal_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_addmm(
&self,
sparse: &Tensor,
dense: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax(
&self,
dim: i64,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax1(
&self,
dim: i64,
half_to_float: bool
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_softmax1(
&self,
dim: i64,
half_to_float: bool
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_sum_backward(
&self,
grad: &Tensor,
dim: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_svd_helper(
&self,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_syevd_helper(
&self,
compute_eigenvectors: bool,
uplo: &str
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_symeig_helper(
&self,
eigenvectors: bool,
upper: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_test_serialization_subcmul(
&self,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_triangular_solve_helper(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_unique(
&self,
sorted: bool,
return_inverse: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_unique2(
&self,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_adaptive_avg_pool2d_out(
&self,
out: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_out(
&self,
out: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool1d(
&self,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_max_pool2d(
&self,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_max_pool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_max_pool3d(
&self,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_max_pool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_addbmm_out(
&self,
out: &Tensor,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addcdiv_out(
&self,
out: &Tensor,
tensor1: &Tensor,
tensor2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addcmul_out(
&self,
out: &Tensor,
tensor1: &Tensor,
tensor2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addmm_out(
&self,
out: &Tensor,
mat1: &Tensor,
mat2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addmv_out(
&self,
out: &Tensor,
mat: &Tensor,
vec: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addr_out(
&self,
out: &Tensor,
vec1: &Tensor,
vec2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_amax_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_amin_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_argmax(
&self,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_argmax_out(
&self,
out: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_argmin(
&self,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_argmin_out(
&self,
out: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_as_strided(
&self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_as_strided_(
&mut self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_baddbmm_out(
&self,
out: &Tensor,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_backward_elemt<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
mean_dy: &Tensor,
mean_dy_xmu: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_backward_reduce<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
input_g: bool,
weight_g: bool,
bias_g: bool
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_elemt<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_elemt_out<T>(
&self,
out: &Tensor,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_gather_stats<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
count: i64
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_gather_stats_with_counts<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
counts: &Tensor
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_batch_norm_update_stats<T>(
&self,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_backward_out<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_with_logits<T>(
&self,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_binary_cross_entropy_with_logits_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_bincount<T>(
&self,
weights: Option<T>,
minlength: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_bitwise_and_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_and_out1<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_or_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_or_out1<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_xor_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bitwise_xor_out1<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_bucketize(
&self,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_bucketize_out(
&self,
out: &Tensor,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_cholesky_solve_out(
&self,
out: &Tensor,
input2: &Tensor,
upper: bool
) -> Result<Tensor, TchError>
pub fn f_choose_qparams_optimized(
&self,
numel: i64,
n_bins: i64,
ratio: f64,
bit_width: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_clamp_<S>(&mut self, min: S, max: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clamp_max_out<S>(
&self,
out: &Tensor,
max: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clamp_min_out<S>(
&self,
out: &Tensor,
min: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clamp_out<S>(
&self,
out: &Tensor,
min: S,
max: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clip_<S>(&mut self, min: S, max: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_clip_out<S>(
&self,
out: &Tensor,
min: S,
max: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_col2im(
&self,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_col2im_out(
&self,
out: &Tensor,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_conv1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_tbc(
&self,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> Result<Tensor, TchError>
pub fn f_conv_tbc_backward(
&self,
input: &Tensor,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_conv_transpose1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_transpose2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_conv_transpose3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_convolution_overrideable<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_copy_sparse_to_sparse_(
&mut self,
src: &Tensor,
non_blocking: bool
) -> Result<Tensor, TchError>
pub fn f_copysign_1<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_cross(
&self,
other: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_cross_out(
&self,
out: &Tensor,
other: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_cudnn_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64,
reservespace: &Tensor
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution1<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution2(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose1<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_cudnn_convolution_transpose2(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_grid_sampler_backward(
&self,
grid: &Tensor,
grad_output: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cummax_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cummaxmin_backward(
&self,
grad: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_cummin_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_diff<T>(
&self,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_diff_out<T>(
&self,
out: &Tensor,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
pub fn f_div3<S>(
&self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_div_3<S>(
&mut self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_div_out1(
&self,
out: &Tensor,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide3<S>(
&self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_divide_3<S>(
&mut self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_divide_out1(
&self,
out: &Tensor,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_eig_out(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_embedding_renorm_(
&mut self,
indices: &Tensor,
max_norm: f64,
norm_type: f64
) -> Result<Tensor, TchError>
pub fn f_eq_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fake_quantize_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_channel_affine_cachemask(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fake_quantize_per_tensor_affine(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_tensor_affine_cachemask(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fbgemm_linear_fp16_weight(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_fp16_weight_fp32_activation(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_int8_weight<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fbgemm_linear_int8_weight_fp32_activation<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
pub fn f_fft_fft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
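What `f_fft_fft` computes for the default `norm = "backward"` (no scaling on the forward transform) can be illustrated with a naive O(n²) DFT; this sketch models complex numbers as `(re, im)` pairs to stay dependency-free and is not how the binding is implemented.

```rust
// Plain-Rust sketch of an unnormalized 1-D DFT, the transform that
// `f_fft_fft` computes (via FFT) when norm = "backward".
fn dft(xs: &[f64]) -> Vec<(f64, f64)> {
    let n = xs.len();
    (0..n)
        .map(|k| {
            xs.iter().enumerate().fold((0.0, 0.0), |(re, im), (t, &x)| {
                let angle = -2.0 * std::f64::consts::PI * (k * t) as f64 / n as f64;
                (re + x * angle.cos(), im + x * angle.sin())
            })
        })
        .collect()
}

fn main() {
    // The DFT of a constant signal concentrates all energy in bin 0.
    let spec = dft(&[1.0, 1.0, 1.0, 1.0]);
    assert!((spec[0].0 - 4.0).abs() < 1e-9 && spec[0].1.abs() < 1e-9);
    assert!(spec[1].0.abs() < 1e-9 && spec[1].1.abs() < 1e-9);
}
```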
[src]pub fn f_fft_fft2_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_fft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_fftn_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_hfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_hfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_ifft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_ifft2_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_ifft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_ifftn_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_ihfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_ihfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_irfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_irfft2_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_irfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_irfftn_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_rfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_rfft2_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_rfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fft_rfftn_out(
&self,
out: &Tensor,
s: &[i64],
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
[src]pub fn f_fill_diagonal_<S>(
&mut self,
fill_value: S,
wrap: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
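The `wrap` flag of `f_fill_diagonal_` is easiest to see on a row-major 2-D matrix: PyTorch walks flat indices with stride `ncols + 1`, and with `wrap = true` the diagonal restarts below the main block for tall matrices. A plain-Rust sketch of that behavior (not the tch code itself):

```rust
// Plain-Rust sketch of fill_diagonal_ on a row-major nrows x ncols matrix.
// The diagonal lives at flat indices 0, ncols+1, 2*(ncols+1), ...; with
// wrap = true the walk continues past the square block for tall matrices.
fn fill_diagonal(data: &mut [f64], nrows: usize, ncols: usize, fill: f64, wrap: bool) {
    let step = ncols + 1;
    let end = if wrap { nrows * ncols } else { nrows.min(ncols) * step };
    let mut i = 0;
    while i < end && i < nrows * ncols {
        data[i] = fill;
        i += step;
    }
}

fn main() {
    let mut m = vec![0.0; 15]; // 5 x 3, tall
    fill_diagonal(&mut m, 5, 3, 1.0, true);
    assert_eq!(m.iter().filter(|&&x| x == 1.0).count(), 4);
    assert_eq!(m[12], 1.0); // wrapped entry at row 4, col 0
}
```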
[src]pub fn f_float_power2<S>(&self, exponent: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_float_power_<S>(&mut self, exponent: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_float_power_out2<S>(
&self,
out: &Tensor,
exponent: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_floor_divide_1<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_fmod_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_fractional_max_pool2d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_fractional_max_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_fractional_max_pool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_fractional_max_pool2d_out(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_fractional_max_pool3d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_fractional_max_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_fractional_max_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_fractional_max_pool3d_out(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_frobenius_norm_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_full_like<S>(&self, fill_value: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_gather(
&self,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
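The indexing rule behind `f_gather` can be sketched for the 2-D, `dim = 1` case (an illustrative re-implementation on nested `Vec`s, not the tch API): `out[i][j] = input[i][index[i][j]]`, and the output takes the shape of `index`.

```rust
// Plain-Rust sketch of gather along dim 1 for a 2-D tensor:
// out[i][j] = input[i][index[i][j]].
fn gather_dim1(input: &[Vec<f64>], index: &[Vec<usize>]) -> Vec<Vec<f64>> {
    input
        .iter()
        .zip(index)
        .map(|(row, idx)| idx.iter().map(|&j| row[j]).collect())
        .collect()
}

fn main() {
    let input = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let index = vec![vec![0, 0], vec![1, 0]];
    assert_eq!(gather_dim1(&input, &index), vec![vec![1.0, 1.0], vec![4.0, 3.0]]);
}
```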
[src]pub fn f_gather_backward(
&self,
grad: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
[src]pub fn f_gather_out(
&self,
out: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
[src]pub fn f_ge_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_glu_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
[src]pub fn f_greater_equal_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_greater_equal_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_greater_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_grid_sampler(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
[src]pub fn f_grid_sampler_2d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
[src]pub fn f_grid_sampler_2d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_grid_sampler_3d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
[src]pub fn f_grid_sampler_3d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_group_norm<T>(
&self,
num_groups: i64,
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
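The normalization step behind `f_group_norm` can be sketched for one sample laid out as per-channel slices (the optional `weight`/`bias` affine step is omitted here, and this is an illustration of the math rather than the tch implementation): channels are split into `num_groups` groups, and each group is normalized jointly with its own mean and biased variance.

```rust
// Plain-Rust sketch of group normalization for a single sample:
// each group of channels shares one mean and one (biased) variance.
fn group_norm(channels: &[Vec<f64>], num_groups: usize, eps: f64) -> Vec<Vec<f64>> {
    let per_group = channels.len() / num_groups;
    let mut out = Vec::with_capacity(channels.len());
    for group in channels.chunks(per_group) {
        let xs: Vec<f64> = group.iter().flatten().copied().collect();
        let n = xs.len() as f64;
        let mean = xs.iter().sum::<f64>() / n;
        let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
        let inv_std = 1.0 / (var + eps).sqrt();
        for ch in group {
            out.push(ch.iter().map(|x| (x - mean) * inv_std).collect());
        }
    }
    out
}

fn main() {
    // One group covering both channels: mean 2, variance 1.
    let out = group_norm(&[vec![1.0, 1.0], vec![3.0, 3.0]], 1, 0.0);
    assert_eq!(out, vec![vec![-1.0, -1.0], vec![1.0, 1.0]]);
}
```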
[src]pub fn f_gru<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_gru_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_gt_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_hardshrink_backward<S>(
&self,
grad_out: &Tensor,
lambd: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_hardtanh_backward<S>(
&self,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_hardtanh_backward_out<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_hinge_embedding_loss(
&self,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_im2col(
&self,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_im2col_out(
&self,
out: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_index<T>(&self, indices: &[Option<T>]) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_index_add(
&self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_index_add_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_index_copy(
&self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_index_copy_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_index_fill<S>(
&self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_index_fill1(
&self,
dim: i64,
index: &Tensor,
value: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_index_fill_<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_index_fill_1(
&mut self,
dim: i64,
index: &Tensor,
value: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_index_put<T>(
&self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_index_put_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_index_select_out(
&self,
out: &Tensor,
dim: i64,
index: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_infinitely_differentiable_gelu_backward(
&self,
grad: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_instance_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
use_input_stats: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_isclose(
&self,
other: &Tensor,
rtol: f64,
atol: f64,
equal_nan: bool
) -> Result<Tensor, TchError>
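The elementwise predicate behind `f_isclose` follows PyTorch's rule `|a - b| <= atol + rtol * |b|`, with NaNs optionally treated as equal. A scalar sketch:

```rust
// Plain-Rust sketch of the isclose predicate for one element pair.
fn isclose(a: f64, b: f64, rtol: f64, atol: f64, equal_nan: bool) -> bool {
    if a.is_nan() || b.is_nan() {
        return equal_nan && a.is_nan() && b.is_nan();
    }
    (a - b).abs() <= atol + rtol * b.abs()
}

fn main() {
    assert!(isclose(1.0, 1.0 + 5e-9, 1e-5, 1e-8, false));
    assert!(!isclose(1.0, 1.1, 1e-5, 1e-8, false));
    assert!(isclose(f64::NAN, f64::NAN, 0.0, 0.0, true));
    assert!(!isclose(f64::NAN, f64::NAN, 0.0, 0.0, false));
}
```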
[src]pub fn f_istft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
center: bool,
normalized: bool,
onesided: bool,
length: impl Into<Option<i64>>,
return_complex: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_kl_div(
&self,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Result<Tensor, TchError>
[src]pub fn f_kl_div_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Result<Tensor, TchError>
[src]pub fn f_kthvalue(
&self,
k: i64,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
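`f_kthvalue` returns both the k-th smallest value along a dimension and its index; as in PyTorch, `k` is 1-based. A plain-Rust sketch on a single slice (not the tch implementation):

```rust
// Plain-Rust sketch of kthvalue on one slice: the k-th smallest element
// (k is 1-based) together with its index in the original data.
fn kthvalue(xs: &[f64], k: usize) -> (f64, usize) {
    let mut order: Vec<usize> = (0..xs.len()).collect();
    order.sort_by(|&a, &b| xs[a].partial_cmp(&xs[b]).unwrap());
    let idx = order[k - 1];
    (xs[idx], idx)
}

fn main() {
    // The 2nd smallest of [3, 1, 2] is 2.0, which sits at index 2.
    assert_eq!(kthvalue(&[3.0, 1.0, 2.0], 2), (2.0, 2));
    assert_eq!(kthvalue(&[3.0, 1.0, 2.0], 1), (1.0, 1));
}
```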
[src]pub fn f_kthvalue_out(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_l1_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enable: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_le_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_leaky_relu_backward<S>(
&self,
grad_output: &Tensor,
negative_slope: S,
self_is_result: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_lerp<S>(&self, end: &Tensor, weight: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
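The lerp family computes an elementwise linear interpolation, `out = start + weight * (end - start)`; a plain-Rust sketch of the scalar-weight case:

```rust
// Plain-Rust sketch of lerp with a scalar weight, applied elementwise.
fn lerp(start: &[f64], end: &[f64], weight: f64) -> Vec<f64> {
    start
        .iter()
        .zip(end)
        .map(|(&s, &e)| s + weight * (e - s))
        .collect()
}

fn main() {
    // weight = 0 returns start, weight = 1 returns end, 0.5 is the midpoint.
    assert_eq!(lerp(&[0.0, 10.0], &[1.0, 20.0], 0.5), vec![0.5, 15.0]);
    assert_eq!(lerp(&[0.0, 10.0], &[1.0, 20.0], 1.0), vec![1.0, 20.0]);
}
```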
[src]pub fn f_lerp_<S>(
&mut self,
end: &Tensor,
weight: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_lerp_out<S>(
&self,
out: &Tensor,
end: &Tensor,
weight: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_lerp_out1(
&self,
out: &Tensor,
end: &Tensor,
weight: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_less_equal_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_less_equal_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_less_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_linalg_cond_out<S>(
&self,
out: &Tensor,
p: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_linalg_eigh_out(
&self,
eigvals: &Tensor,
eigvecs: &Tensor,
uplo: &str
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_linalg_matrix_rank(
&self,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Result<Tensor, TchError>
[src]pub fn f_linalg_matrix_rank_out(
&self,
out: &Tensor,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Result<Tensor, TchError>
[src]pub fn f_linalg_norm<S>(
&self,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_linalg_norm1(
&self,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_linalg_norm_out<S>(
&self,
out: &Tensor,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_linalg_norm_out1(
&self,
out: &Tensor,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_linalg_pinv_out(
&self,
out: &Tensor,
rcond: f64,
hermitian: bool
) -> Result<Tensor, TchError>
[src]pub fn f_linalg_pinv_out1(
&self,
out: &Tensor,
rcond: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
[src]pub fn f_linalg_qr_out(
&self,
q: &Tensor,
r: &Tensor,
mode: &str
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_linalg_slogdet_out(
&self,
sign: &Tensor,
logabsdet: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_linalg_svd(
&self,
full_matrices: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_linalg_svd_out(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
full_matrices: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_linalg_tensorsolve_out(
&self,
out: &Tensor,
other: &Tensor,
dims: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_linear<T>(
&self,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_log_sigmoid_backward(
&self,
grad_output: &Tensor,
buffer: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_log_sigmoid_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
buffer: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_logit_backward(
&self,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_logit_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_logit_out(
&self,
out: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_logsumexp_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
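The reduction behind `f_logsumexp_out` is the numerically stable log-sum-exp: shifting by the maximum before exponentiating avoids overflow for large inputs. A plain-Rust sketch over one slice:

```rust
// Plain-Rust sketch of a stable logsumexp: m + ln(sum(exp(x - m)))
// where m is the maximum of the inputs.
fn logsumexp(xs: &[f64]) -> f64 {
    let m = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    m + xs.iter().map(|x| (x - m).exp()).sum::<f64>().ln()
}

fn main() {
    // logsumexp([0, 0]) = ln 2; large inputs stay finite thanks to the shift.
    assert!((logsumexp(&[0.0, 0.0]) - 2.0f64.ln()).abs() < 1e-12);
    assert!((logsumexp(&[1000.0, 1000.0]) - (1000.0 + 2.0f64.ln())).abs() < 1e-9);
}
```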
[src]pub fn f_lstm<T>(
&self,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_lstm_cell<T>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_lstsq_out(
&self,
x: &Tensor,
qr: &Tensor,
a: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_lt_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_lu_solve_out(
&self,
out: &Tensor,
lu_data: &Tensor,
lu_pivots: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_masked_fill<S>(
&self,
mask: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
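`f_masked_fill` replaces elements wherever a boolean mask is true and passes the rest through; the in-place `f_masked_fill_` variant below does the same on `&mut self`. A plain-Rust sketch:

```rust
// Plain-Rust sketch of masked_fill: mask-true positions take `value`.
fn masked_fill(xs: &[f64], mask: &[bool], value: f64) -> Vec<f64> {
    xs.iter()
        .zip(mask)
        .map(|(&x, &m)| if m { value } else { x })
        .collect()
}

fn main() {
    let out = masked_fill(&[1.0, 2.0, 3.0], &[false, true, false], 0.0);
    assert_eq!(out, vec![1.0, 0.0, 3.0]);
}
```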
[src]pub fn f_masked_fill_<S>(
&mut self,
mask: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_masked_scatter_(
&mut self,
mask: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_masked_select_backward(
&self,
grad: &Tensor,
mask: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_max_out1(
&self,
max: &Tensor,
max_values: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool1d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
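The `_with_indices` pooling variants return both the pooled values and the positions of the maxima (the indices PyTorch later feeds to unpooling and backward passes). A plain-Rust sketch of the simplest 1-D case, with no padding, dilation 1 and `ceil_mode = false`:

```rust
// Plain-Rust sketch of 1-D max pooling with indices: each window yields its
// maximum and the flat index where that maximum sits in the input.
fn max_pool1d(xs: &[f64], kernel: usize, stride: usize) -> (Vec<f64>, Vec<usize>) {
    let mut vals = Vec::new();
    let mut idxs = Vec::new();
    let mut start = 0;
    while start + kernel <= xs.len() {
        let (i, &v) = xs[start..start + kernel]
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .unwrap();
        vals.push(v);
        idxs.push(start + i);
        start += stride;
    }
    (vals, idxs)
}

fn main() {
    let (vals, idxs) = max_pool1d(&[1.0, 3.0, 2.0, 5.0, 4.0], 2, 2);
    assert_eq!(vals, vec![3.0, 5.0]);
    assert_eq!(idxs, vec![1, 3]);
}
```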
[src]pub fn f_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool2d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_max_pool2d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool2d_with_indices_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool2d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool3d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_max_pool3d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool3d_with_indices_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_max_pool3d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_max_unpool2d(
&self,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool3d(
&self,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_max_unpool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_mean_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_median_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_min_out1(
&self,
min: &Tensor,
min_indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_miopen_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_miopen_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_miopen_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_miopen_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
[src]pub fn f_miopen_convolution_transpose<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_miopen_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
[src]pub fn f_miopen_depthwise_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_miopen_depthwise_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
[src]pub fn f_miopen_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> Result<(Tensor, Tensor, Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_mkldnn_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_mkldnn_convolution_backward_weights(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_mkldnn_linear<T>(
&self,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_mkldnn_linear_backward_weights(
&self,
grad_output: &Tensor,
weight: &Tensor,
bias_defined: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_mkldnn_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_mkldnn_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_mkldnn_reorder_conv2d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
[src]pub fn f_mkldnn_reorder_conv3d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
[src]pub fn f_mode_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_mse_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_mse_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_mse_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_multi_margin_loss_backward<T, S>(
&self,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
[src]pub fn f_multi_margin_loss_backward_out<T, S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
[src]pub fn f_multilabel_margin_loss(
&self,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_multilabel_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_multilabel_margin_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_multilabel_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_multinomial_out(
&self,
out: &Tensor,
num_samples: i64,
replacement: bool
) -> Result<Tensor, TchError>
[src]pub fn f_multiply_1<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_nan_to_num(
&self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
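`f_nan_to_num` substitutes NaN, +inf and -inf; passing `None` falls back to PyTorch's defaults (0 for NaN, the largest and smallest finite values for the infinities). A scalar sketch in plain Rust:

```rust
// Plain-Rust sketch of nan_to_num for one element, with PyTorch's defaults
// when a substitute is not given.
fn nan_to_num(x: f64, nan: Option<f64>, posinf: Option<f64>, neginf: Option<f64>) -> f64 {
    if x.is_nan() {
        nan.unwrap_or(0.0)
    } else if x == f64::INFINITY {
        posinf.unwrap_or(f64::MAX)
    } else if x == f64::NEG_INFINITY {
        neginf.unwrap_or(f64::MIN)
    } else {
        x
    }
}

fn main() {
    assert_eq!(nan_to_num(f64::NAN, None, None, None), 0.0);
    assert_eq!(nan_to_num(f64::INFINITY, None, Some(1.0), None), 1.0);
    assert_eq!(nan_to_num(2.5, None, None, None), 2.5);
}
```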
[src]pub fn f_nan_to_num_(
&mut self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_nan_to_num_out(
&self,
out: &Tensor,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_nanmedian_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_nanquantile(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_nanquantile1(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_nanquantile_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_nanquantile_out1(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_nansum_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_narrow_copy_out(
&self,
out: &Tensor,
dim: i64,
start: i64,
length: i64
) -> Result<Tensor, TchError>
[src]pub fn f_native_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_native_batch_norm_out<T>(
&self,
out: &Tensor,
save_mean: &Tensor,
save_invstd: &Tensor,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_native_group_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
n: i64,
c: i64,
hxw: i64,
group: i64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_native_layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_native_norm1<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_ne_out<S>(&self, out: &Tensor, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_new_empty_strided(
&self,
size: &[i64],
stride: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
[src]pub fn f_new_full<S>(
&self,
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_nll_loss<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss2d<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss2d_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss2d_backward_out<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss2d_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss_backward_out<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_nll_loss_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_norm1<S>(&self, p: S, dtype: Kind) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_norm2<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_norm3<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_norm_out<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_norm_out1<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_not_equal_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_not_equal_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_nuclear_norm_out1(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_ormqr(
&self,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Result<Tensor, TchError>
[src]pub fn f_ormqr_out(
&self,
out: &Tensor,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Result<Tensor, TchError>
[src]pub fn f_poisson_nll_loss(
&self,
target: &Tensor,
log_input: bool,
full: bool,
eps: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
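The pointwise term behind `f_poisson_nll_loss` can be sketched in plain Rust (reduction and the optional Stirling `full` term are omitted): with `log_input = true` the loss is `exp(x) - target * x`, otherwise `x - target * ln(x + eps)`, where `eps` guards the logarithm.

```rust
// Plain-Rust sketch of the per-element Poisson negative log-likelihood.
fn poisson_nll(input: f64, target: f64, log_input: bool, eps: f64) -> f64 {
    if log_input {
        input.exp() - target * input
    } else {
        input - target * (input + eps).ln()
    }
}

fn main() {
    // log-space input 0 (rate 1) with target 1: exp(0) - 1 * 0 = 1.
    assert_eq!(poisson_nll(0.0, 1.0, true, 1e-8), 1.0);
    // direct rate 1 with target 0: 1 - 0 * ln(1) = 1.
    assert_eq!(poisson_nll(1.0, 0.0, false, 0.0), 1.0);
}
```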
[src]pub fn f_pow_out2<S>(
&self,
out: &Tensor,
exponent: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_prelu_backward(
&self,
grad_output: &Tensor,
weight: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_prod_out(
&self,
out: &Tensor,
dim: i64,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_put_(
&mut self,
index: &Tensor,
source: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError>
[src]pub fn f_qr_out(
&self,
q: &Tensor,
r: &Tensor,
some: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_quantile(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
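`f_quantile` with a scalar `q` reduces over the (flattened or given) dimension using linear interpolation: the q-quantile sits at position `q * (n - 1)` in the sorted data. A plain-Rust sketch over one slice:

```rust
// Plain-Rust sketch of the q-quantile with linear interpolation between
// the two nearest order statistics.
fn quantile(xs: &[f64], q: f64) -> f64 {
    let mut sorted = xs.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let pos = q * (sorted.len() - 1) as f64;
    let lo = pos.floor() as usize;
    let hi = pos.ceil() as usize;
    sorted[lo] + (pos - lo as f64) * (sorted[hi] - sorted[lo])
}

fn main() {
    // Median of two points interpolates halfway between them.
    assert_eq!(quantile(&[0.0, 10.0], 0.5), 5.0);
    assert_eq!(quantile(&[3.0, 1.0, 2.0], 1.0), 3.0);
    assert_eq!(quantile(&[3.0, 1.0, 2.0], 0.0), 1.0);
}
```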
[src]pub fn f_quantile1(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_quantile_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_quantile_out1(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_quantize_per_channel(
&self,
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_quantize_per_tensor(
&self,
scale: f64,
zero_point: i64,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_quantized_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
var: &Tensor,
eps: f64,
output_scale: f64,
output_zero_point: i64
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_quantized_gru_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_quantized_lstm_cell<T, S>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
S: Into<Scalar>,
[src]pub fn f_quantized_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_quantized_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
[src]pub fn f_quantized_rnn_relu_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_quantized_rnn_tanh_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_random_2(
&mut self,
from: i64,
to: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
[src]pub fn f_reflection_pad1d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_reflection_pad1d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_reflection_pad1d_out(
&self,
out: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_reflection_pad2d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_reflection_pad2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_reflection_pad2d_out(
&self,
out: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_remainder_<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_remainder_out<S>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_renorm<S>(
&self,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_renorm_<S>(
&mut self,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_renorm_out<S>(
&self,
out: &Tensor,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_repeat_interleave1(
&self,
repeats: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
[src]pub fn f_repeat_interleave2(
&self,
repeats: i64,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad1d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad1d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad1d_out(
&self,
out: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad2d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad2d_out(
&self,
out: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad3d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_replication_pad3d_out(
&self,
out: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_rnn_relu<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_rnn_relu_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_rnn_tanh<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError> where
T: Borrow<Tensor>,
[src]pub fn f_rnn_tanh_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_rrelu_with_noise_(
&mut self,
noise: &Tensor,
training: bool
) -> Result<Tensor, TchError>
[src]pub fn f_rrelu_with_noise_backward<S>(
&self,
grad_output: &Tensor,
noise: &Tensor,
lower: S,
upper: S,
training: bool,
self_is_result: bool
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_rrelu_with_noise_out(
&self,
out: &Tensor,
noise: &Tensor,
training: bool
) -> Result<Tensor, TchError>
[src]pub fn f_scatter1<S>(
&self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_scatter_(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_scatter_1<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_scatter_2(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor,
reduce: &str
) -> Result<Tensor, TchError>
[src]pub fn f_scatter_3<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S,
reduce: &str
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_scatter_add(
&self,
dim: i64,
index: &Tensor,
src: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_scatter_add_(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_searchsorted(
&self,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
[src]pub fn f_searchsorted_out(
&self,
out: &Tensor,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
[src]pub fn f_slice(
&self,
dim: i64,
start: impl Into<Option<i64>>,
end: impl Into<Option<i64>>,
step: i64
) -> Result<Tensor, TchError>
[src]pub fn f_slow_conv3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv_dilated2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv_dilated3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv_transpose2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv_transpose2d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv_transpose3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_slow_conv_transpose3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_smooth_l1_loss(
&self,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
[src]pub fn f_smooth_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
[src]pub fn f_smooth_l1_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
[src]pub fn f_smooth_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
[src]pub fn f_soft_margin_loss(
&self,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_soft_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_soft_margin_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_soft_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
[src]pub fn f_softplus_backward<S>(
&self,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_softplus_backward_out<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_softshrink_backward<S>(
&self,
grad_output: &Tensor,
lambd: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_softshrink_backward_out<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
lambd: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_solve_out(
&self,
solution: &Tensor,
lu: &Tensor,
a: &Tensor
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_sort_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_sparse_resize_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Result<Tensor, TchError>
[src]pub fn f_sparse_resize_and_clear_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Result<Tensor, TchError>
[src]pub fn f_split_with_sizes(
&self,
split_sizes: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
[src]pub fn f_sspaddmm_out(
&self,
out: &Tensor,
mat1: &Tensor,
mat2: &Tensor
) -> Result<Tensor, TchError>
[src]pub fn f_std_mean1(
&self,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_std_out(
&self,
out: &Tensor,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_stft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
normalized: bool,
onesided: bool,
return_complex: bool
) -> Result<Tensor, TchError> where
T: Borrow<Tensor>,
[src]pub fn f_subtract_1<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_sum_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
[src]pub fn f_svd(
&self,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_svd_out(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_symeig_out(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool,
upper: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_tensor_split(
&self,
sections: i64,
dim: i64
) -> Result<Vec<Tensor>, TchError>
[src]pub fn f_tensor_split1(
&self,
indices: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
[src]pub fn f_tensor_split2(
&self,
tensor_indices_or_sections: &Tensor,
dim: i64
) -> Result<Vec<Tensor>, TchError>
[src]pub fn f_tensordot(
&self,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_tensordot_out(
&self,
out: &Tensor,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Result<Tensor, TchError>
[src]pub fn f_threshold<S>(&self, threshold: S, value: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_threshold_<S>(
&mut self,
threshold: S,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_threshold_backward<S>(
&self,
grad_output: &Tensor,
threshold: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_threshold_out<S>(
&self,
out: &Tensor,
threshold: S,
value: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_to1(
&self,
options: (Kind, Device),
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
[src]pub fn f_to3(
&self,
other: &Tensor,
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
[src]pub fn f_to4(
&self,
device: Device,
dtype: Kind,
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
[src]pub fn f_topk(
&self,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_topk_out(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_triangular_solve(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_triangular_solve_out(
&self,
x: &Tensor,
m: &Tensor,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_true_divide_1<S>(&mut self, other: S) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn f_unique_consecutive(
&self,
return_inverse: bool,
return_counts: bool,
dim: impl Into<Option<i64>>
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_unique_dim(
&self,
dim: i64,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_unique_dim_consecutive(
&self,
dim: i64,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
[src]pub fn f_unsafe_split(
&self,
split_size: i64,
dim: i64
) -> Result<Vec<Tensor>, TchError>
[src]pub fn f_unsafe_split_with_sizes(
&self,
split_sizes: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
[src]pub fn f_upsample_bicubic2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_bicubic2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_bilinear2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_bilinear2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_linear1d(
&self,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_linear1d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_nearest1d(
&self,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_nearest1d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_nearest2d(
&self,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_nearest2d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_nearest3d(
&self,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_nearest3d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_trilinear3d(
&self,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_upsample_trilinear3d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
[src]pub fn f_var_mean1(
&self,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
[src]pub fn f_var_out(
&self,
out: &Tensor,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<Tensor, TchError>
[src]pub fn f_where3<S>(
&self,
condition: &Tensor,
other: S
) -> Result<Tensor, TchError> where
S: Into<Scalar>,
[src]pub fn internal_compute_linear_combination_out(
&self,
out: &Tensor,
coefficients: &Tensor
) -> Tensor
[src]pub fn internal_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn internal_convolution1<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool,
allow_tf32: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn internal_convolution_nogroup<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn internal_cudnn_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
weight_buf: Option<T>,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> (Tensor, Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn internal_fake_quantize_learnable_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Tensor
[src]pub fn internal_fake_quantize_learnable_per_channel_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> (Tensor, Tensor, Tensor)
[src]pub fn internal_fake_quantize_learnable_per_tensor_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Tensor
[src]pub fn internal_fake_quantize_learnable_per_tensor_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> (Tensor, Tensor, Tensor)
[src]pub fn internal_fft_c2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
forward: bool
) -> Tensor
[src]pub fn internal_fft_c2r_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Tensor
[src]pub fn internal_fft_r2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Tensor
[src]pub fn internal_gather_sparse_backward(
&self,
dim: i64,
index: &Tensor,
grad: &Tensor
) -> Tensor
[src]pub fn internal_grid_sampler_2d_cpu_fallback(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
[src]pub fn internal_grid_sampler_2d_cpu_fallback_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
[src]pub fn internal_index_put_impl_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool,
unsafe_: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn internal_linalg_inv_out_helper_(
&mut self,
infos_lu: &Tensor,
infos_getri: &Tensor
) -> Tensor
[src]pub fn internal_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
[src]pub fn internal_lu_with_info(
&self,
pivot: bool,
check_errors: bool
) -> (Tensor, Tensor, Tensor)
[src]pub fn internal_make_per_channel_quantized_tensor(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64
) -> Tensor
[src]pub fn internal_make_per_tensor_quantized_tensor(
&self,
scale: f64,
zero_point: i64
) -> Tensor
[src]pub fn internal_mode_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn internal_nnpack_spatial_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn internal_nnpack_spatial_convolution_backward_input(
&self,
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn internal_nnpack_spatial_convolution_backward_weight(
&self,
weightsize: &[i64],
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn internal_pack_padded_sequence(
&self,
lengths: &Tensor,
batch_first: bool
) -> (Tensor, Tensor)
[src]pub fn internal_sobol_engine_ff_(
&mut self,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64
) -> Tensor
[src]pub fn internal_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
[src]pub fn internal_sparse_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
[src]pub fn internal_sparse_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
[src]pub fn internal_syevd_helper(
&self,
compute_eigenvectors: bool,
uplo: &str
) -> (Tensor, Tensor)
[src]pub fn internal_triangular_solve_helper(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
[src]pub fn internal_unique2(
&self,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
[src]pub fn adaptive_avg_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor
) -> Tensor
[src]pub fn adaptive_max_pool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Tensor
[src]pub fn adaptive_max_pool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> (Tensor, Tensor)
[src]pub fn adaptive_max_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Tensor
[src]pub fn adaptive_max_pool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> (Tensor, Tensor)
[src]pub fn as_strided(
&self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Tensor
[src]pub fn as_strided_(
&mut self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool
) -> Tensor
[src]pub fn avg_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool2d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn avg_pool3d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
[src]pub fn batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn batch_norm_backward_elemt<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
mean_dy: &Tensor,
mean_dy_xmu: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn batch_norm_backward_reduce<T>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
input_g: bool,
weight_g: bool,
bias_g: bool
) -> (Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn batch_norm_elemt<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn batch_norm_elemt_out<T>(
&self,
out: &Tensor,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn batch_norm_gather_stats<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
count: i64
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn batch_norm_gather_stats_with_counts<T>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
counts: &Tensor
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn batch_norm_update_stats<T>(
&self,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn binary_cross_entropy<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn binary_cross_entropy_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn binary_cross_entropy_backward_out<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn binary_cross_entropy_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn binary_cross_entropy_with_logits<T>(
&self,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn binary_cross_entropy_with_logits_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn bincount<T>(&self, weights: Option<T>, minlength: i64) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn bucketize_out(
&self,
out: &Tensor,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
[src]pub fn choose_qparams_optimized(
&self,
numel: i64,
n_bins: i64,
ratio: f64,
bit_width: i64
) -> (Tensor, Tensor)
[src]pub fn col2im(
&self,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
[src]pub fn col2im_out(
&self,
out: &Tensor,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
[src]pub fn conv1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn conv2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn conv3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn conv_tbc_backward(
&self,
input: &Tensor,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> (Tensor, Tensor, Tensor)
[src]pub fn conv_transpose1d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn conv_transpose2d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn conv_transpose3d<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn convolution_overrideable<T>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn cudnn_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> (Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn cudnn_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64,
reservespace: &Tensor
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn cudnn_convolution(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
[src]pub fn cudnn_convolution1<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn cudnn_convolution2(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
[src]pub fn cudnn_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
[src]pub fn cudnn_convolution_transpose(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
[src]pub fn cudnn_convolution_transpose1<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn cudnn_convolution_transpose2(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
[src]pub fn cudnn_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
[src]pub fn cudnn_grid_sampler_backward(
&self,
grid: &Tensor,
grad_output: &Tensor
) -> (Tensor, Tensor)
[src]pub fn diff<T>(
&self,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
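`diff` computes the n-th forward difference along a dimension. A pure-Rust sketch for a 1-D input, with the optional `prepend`/`append` tensors omitted; `diff_n` is a hypothetical helper:

```rust
// n-th forward difference: apply out[i] = v[i + 1] - v[i] repeatedly.
// Each pass shortens the vector by one element.
fn diff_n(mut v: Vec<f64>, n: usize) -> Vec<f64> {
    for _ in 0..n {
        v = v.windows(2).map(|w| w[1] - w[0]).collect();
    }
    v
}

fn main() {
    assert_eq!(diff_n(vec![1.0, 4.0, 9.0, 16.0], 1), vec![3.0, 5.0, 7.0]);
    // second difference of squares is constant
    assert_eq!(diff_n(vec![1.0, 4.0, 9.0, 16.0], 2), vec![2.0, 2.0]);
}
```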
[src]pub fn diff_out<T>(
&self,
out: &Tensor,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn embedding_renorm_(
&mut self,
indices: &Tensor,
max_norm: f64,
norm_type: f64
) -> Tensor
[src]pub fn fake_quantize_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Tensor
[src]pub fn fake_quantize_per_channel_affine_cachemask(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> (Tensor, Tensor)
[src]pub fn fake_quantize_per_tensor_affine(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Tensor
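The element-wise semantics behind `fake_quantize_per_tensor_affine`: snap each value to a clamped integer grid, then dequantize back to floats, so training sees the rounding error that real quantization would introduce. A hedged pure-Rust sketch; `fake_quantize` is a hypothetical helper:

```rust
// Quantize to the affine integer grid, clamp to [quant_min, quant_max],
// then map back to floats. The round trip keeps values on the grid.
fn fake_quantize(x: f64, scale: f64, zero_point: i64, quant_min: i64, quant_max: i64) -> f64 {
    let q = (x / scale).round() as i64 + zero_point;
    let q = q.clamp(quant_min, quant_max);
    (q - zero_point) as f64 * scale
}

fn main() {
    // scale 0.1, zero point 0, int8 range: 0.34 snaps to the grid point 0.3
    assert!((fake_quantize(0.34, 0.1, 0, -128, 127) - 0.3).abs() < 1e-9);
    // values past the representable range saturate at quant_max * scale
    assert!((fake_quantize(100.0, 0.1, 0, -128, 127) - 12.7).abs() < 1e-9);
}
```

The `_cachemask` variant additionally returns a boolean mask of the elements that fell inside the clamp range, which the backward pass uses to zero gradients for saturated elements.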
[src]pub fn fake_quantize_per_tensor_affine_cachemask(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> (Tensor, Tensor)
[src]pub fn fbgemm_linear_fp16_weight_fp32_activation(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Tensor
[src]pub fn fbgemm_linear_int8_weight<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Tensor where
S: Into<Scalar>,
[src]pub fn fbgemm_linear_int8_weight_fp32_activation<S>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Tensor where
S: Into<Scalar>,
[src]pub fn fft_fft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
[src]pub fn fft_hfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
[src]pub fn fft_ifft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
[src]pub fn fft_ihfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
[src]pub fn fft_irfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
[src]pub fn fft_rfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
[src]pub fn fill_diagonal_<S>(&mut self, fill_value: S, wrap: bool) -> Tensor where
S: Into<Scalar>,
[src]pub fn fractional_max_pool2d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
[src]pub fn fractional_max_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
[src]pub fn fractional_max_pool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
[src]pub fn fractional_max_pool2d_out(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
[src]pub fn fractional_max_pool3d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
[src]pub fn fractional_max_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
[src]pub fn fractional_max_pool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
[src]pub fn fractional_max_pool3d_out(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
[src]pub fn gather_backward(
&self,
grad: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Tensor
[src]pub fn grid_sampler(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
[src]pub fn grid_sampler_2d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
[src]pub fn grid_sampler_2d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
[src]pub fn grid_sampler_3d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
[src]pub fn grid_sampler_3d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
[src]pub fn group_norm<T>(
&self,
num_groups: i64,
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn gru<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn gru_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn hardshrink_backward<S>(&self, grad_out: &Tensor, lambd: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn hardtanh_backward<S>(
&self,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn hardtanh_backward_out<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn hinge_embedding_loss(
&self,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Tensor
[src]pub fn im2col(
&self,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
[src]pub fn im2col_out(
&self,
out: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
[src]pub fn index_fill<S>(&self, dim: i64, index: &Tensor, value: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn index_fill_<S>(&mut self, dim: i64, index: &Tensor, value: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn index_put<T>(
&self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn index_put_<T>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn instance_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
use_input_stats: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn istft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
center: bool,
normalized: bool,
onesided: bool,
length: impl Into<Option<i64>>,
return_complex: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn kl_div_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Tensor
[src]pub fn kthvalue_out(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn l1_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enable: bool
) -> Tensor where
T: Borrow<Tensor>,
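What `layer_norm` computes over the normalized trailing dimension: standardize with the biased variance plus `eps`, then apply the optional affine transform from `weight` and `bias`. A pure-Rust sketch for a single 1-D slice; `layer_norm_1d` is a hypothetical helper:

```rust
// Standardize one slice with its own mean and (biased) variance,
// then scale and shift element-wise.
fn layer_norm_1d(x: &[f64], weight: &[f64], bias: &[f64], eps: f64) -> Vec<f64> {
    let n = x.len() as f64;
    let mean = x.iter().sum::<f64>() / n;
    let var = x.iter().map(|v| (v - mean).powi(2)).sum::<f64>() / n; // biased variance
    x.iter()
        .zip(weight)
        .zip(bias)
        .map(|((v, w), b)| (v - mean) / (var + eps).sqrt() * w + b)
        .collect()
}

fn main() {
    let y = layer_norm_1d(&[1.0, 2.0, 3.0], &[1.0, 1.0, 1.0], &[0.0, 0.0, 0.0], 0.0);
    // mean 2.0: the middle element standardizes to zero, and the output sums to zero
    assert!(y[1].abs() < 1e-12);
    assert!((y[0] + y[2]).abs() < 1e-12);
}
```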
[src]pub fn leaky_relu_backward<S>(
&self,
grad_output: &Tensor,
negative_slope: S,
self_is_result: bool
) -> Tensor where
S: Into<Scalar>,
[src]pub fn lerp_out<S>(&self, out: &Tensor, end: &Tensor, weight: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn linalg_eigh_out(
&self,
eigvals: &Tensor,
eigvecs: &Tensor,
uplo: &str
) -> (Tensor, Tensor)
[src]pub fn linalg_matrix_rank_out(
&self,
out: &Tensor,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Tensor
[src]pub fn linalg_norm<S>(
&self,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
[src]pub fn linalg_norm_out<S>(
&self,
out: &Tensor,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
[src]pub fn linalg_norm_out1(
&self,
out: &Tensor,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
[src]pub fn linalg_svd_out(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
full_matrices: bool,
compute_uv: bool
) -> (Tensor, Tensor, Tensor)
[src]pub fn log_sigmoid_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
buffer: &Tensor
) -> Tensor
[src]pub fn logit_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Tensor
[src]pub fn lstm<T>(
&self,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn lstm_cell<T>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn max_out1(
&self,
max: &Tensor,
max_values: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
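The sliding-window semantics of `max_pool1d` for one channel, with `stride` and `dilation` but no padding and `ceil_mode = false`. A hedged pure-Rust sketch; `max_pool1d_simple` is a hypothetical helper:

```rust
// Take the maximum over each dilated window, advancing by `stride`.
fn max_pool1d_simple(input: &[f64], kernel_size: usize, stride: usize, dilation: usize) -> Vec<f64> {
    let span = dilation * (kernel_size - 1) + 1;
    if input.len() < span {
        return Vec::new();
    }
    let out_len = (input.len() - span) / stride + 1;
    (0..out_len)
        .map(|o| {
            (0..kernel_size)
                .map(|j| input[o * stride + j * dilation])
                .fold(f64::NEG_INFINITY, f64::max)
        })
        .collect()
}

fn main() {
    assert_eq!(max_pool1d_simple(&[1.0, 3.0, 2.0, 5.0, 4.0], 2, 2, 1), vec![3.0, 5.0]);
}
```

The `_with_indices` variants additionally return where each maximum came from, which the corresponding `_backward` methods use to route gradients.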
[src]pub fn max_pool1d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
[src]pub fn max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
[src]pub fn max_pool2d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
[src]pub fn max_pool2d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
[src]pub fn max_pool2d_with_indices_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
[src]pub fn max_pool2d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
[src]pub fn max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
[src]pub fn max_pool3d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
[src]pub fn max_pool3d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
[src]pub fn max_pool3d_with_indices_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
[src]pub fn max_pool3d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
[src]pub fn max_unpool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Tensor
[src]pub fn max_unpool2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Tensor
[src]pub fn max_unpool3d(
&self,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
[src]pub fn max_unpool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
[src]pub fn max_unpool3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
[src]pub fn max_unpool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
[src]pub fn median_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn min_out1(
&self,
min: &Tensor,
min_indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn miopen_batch_norm<T>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn miopen_batch_norm_backward<T>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn miopen_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn miopen_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
[src]pub fn miopen_convolution_transpose<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn miopen_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
[src]pub fn miopen_depthwise_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn miopen_depthwise_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
[src]pub fn miopen_rnn<T>(
&self,
weight: &[T],
weight_stride0: i64,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> (Tensor, Tensor, Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn mkldnn_convolution<T>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn mkldnn_convolution_backward_weights(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> (Tensor, Tensor)
[src]pub fn mkldnn_linear<T>(&self, weight: &Tensor, bias: Option<T>) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn mkldnn_linear_backward_weights(
&self,
grad_output: &Tensor,
weight: &Tensor,
bias_defined: bool
) -> (Tensor, Tensor)
[src]pub fn mkldnn_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
[src]pub fn mkldnn_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
[src]pub fn mkldnn_reorder_conv2d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
[src]pub fn mkldnn_reorder_conv3d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
[src]pub fn mode_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn mse_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn mse_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn multi_margin_loss_backward<T, S>(
&self,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
S: Into<Scalar>,
[src]pub fn multi_margin_loss_backward_out<T, S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Tensor where
T: Borrow<Tensor>,
S: Into<Scalar>,
[src]pub fn multilabel_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Tensor
[src]pub fn multilabel_margin_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Tensor
[src]pub fn multilabel_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn nan_to_num(
&self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
[src]pub fn nan_to_num_(
&mut self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
[src]pub fn nan_to_num_out(
&self,
out: &Tensor,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
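The element-wise semantics of `nan_to_num`: replace NaN and the two infinities with the given substitutes, defaulting to 0 and the type's finite extremes when an argument is `None`. A pure-Rust sketch mirroring the `Into<Option<f64>>` arguments; `nan_to_num_scalar` is a hypothetical helper:

```rust
// Substitute non-finite values, falling back to 0.0 / f64::MAX / f64::MIN.
fn nan_to_num_scalar(x: f64, nan: Option<f64>, posinf: Option<f64>, neginf: Option<f64>) -> f64 {
    if x.is_nan() {
        nan.unwrap_or(0.0)
    } else if x == f64::INFINITY {
        posinf.unwrap_or(f64::MAX)
    } else if x == f64::NEG_INFINITY {
        neginf.unwrap_or(f64::MIN)
    } else {
        x
    }
}

fn main() {
    assert_eq!(nan_to_num_scalar(f64::NAN, None, None, None), 0.0);
    assert_eq!(nan_to_num_scalar(f64::INFINITY, None, Some(1e6), None), 1e6);
    assert_eq!(nan_to_num_scalar(2.5, None, None, None), 2.5);
}
```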
[src]pub fn nanmedian_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
[src]pub fn nanquantile_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
[src]pub fn nanquantile_out1(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
[src]pub fn native_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn native_batch_norm_out<T>(
&self,
out: &Tensor,
save_mean: &Tensor,
save_invstd: &Tensor,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn native_group_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
n: i64,
c: i64,
hxw: i64,
group: i64,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn native_layer_norm<T>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64
) -> (Tensor, Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn native_norm1<S>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
[src]pub fn new_empty_strided(
&self,
size: &[i64],
stride: &[i64],
options: (Kind, Device)
) -> Tensor
[src]pub fn new_full<S>(
&self,
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Tensor where
S: Into<Scalar>,
[src]pub fn g_nll_loss<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss2d<T>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss2d_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss2d_backward_out<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss2d_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss_backward<T>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss_backward_out<T>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn nll_loss_out<T>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn norm3<S>(&self, p: S, dim: &[i64], keepdim: bool, dtype: Kind) -> Tensor where
S: Into<Scalar>,
[src]pub fn norm_out<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool
) -> Tensor where
S: Into<Scalar>,
[src]pub fn norm_out1<S>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor where
S: Into<Scalar>,
[src]pub fn ormqr_out(
&self,
out: &Tensor,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Tensor
[src]pub fn poisson_nll_loss(
&self,
target: &Tensor,
log_input: bool,
full: bool,
eps: f64,
reduction: Reduction
) -> Tensor
[src]pub fn quantile_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
[src]pub fn quantile_out1(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
[src]pub fn quantize_per_channel(
&self,
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
dtype: Kind
) -> Tensor
[src]pub fn quantized_batch_norm<T>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
var: &Tensor,
eps: f64,
output_scale: f64,
output_zero_point: i64
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn quantized_gru_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn quantized_lstm_cell<T, S>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
S: Into<Scalar>,
[src]pub fn quantized_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
[src]pub fn quantized_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
[src]pub fn quantized_rnn_relu_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn quantized_rnn_tanh_cell<S>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn reflection_pad1d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn reflection_pad2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn renorm_out<S>(&self, out: &Tensor, p: S, dim: i64, maxnorm: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn replication_pad1d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn replication_pad2d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn replication_pad3d_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
[src]pub fn rnn_relu<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn rnn_relu_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn rnn_tanh<T>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor) where
T: Borrow<Tensor>,
[src]pub fn rnn_tanh_cell<T>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn rrelu_with_noise_backward<S>(
&self,
grad_output: &Tensor,
noise: &Tensor,
lower: S,
upper: S,
training: bool,
self_is_result: bool
) -> Tensor where
S: Into<Scalar>,
[src]pub fn scatter1<S>(&self, dim: i64, index: &Tensor, value: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn scatter_1<S>(&mut self, dim: i64, index: &Tensor, value: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn scatter_3<S>(
&mut self,
dim: i64,
index: &Tensor,
value: S,
reduce: &str
) -> Tensor where
S: Into<Scalar>,
[src]pub fn searchsorted_out(
&self,
out: &Tensor,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
[src]pub fn slice(
&self,
dim: i64,
start: impl Into<Option<i64>>,
end: impl Into<Option<i64>>,
step: i64
) -> Tensor
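The index arithmetic behind `slice` along one dimension: the optional `start`/`end` default to the full range, negative indices count from the end, and `step` picks every step-th element. A pure-Rust sketch over a plain slice; `slice_1d` is a hypothetical helper:

```rust
// Resolve optional and negative bounds, then stride through the range.
fn slice_1d(v: &[i64], start: Option<i64>, end: Option<i64>, step: usize) -> Vec<i64> {
    let len = v.len() as i64;
    let clamp = |i: i64| -> usize {
        let i = if i < 0 { i + len } else { i }; // negative indices count from the end
        i.clamp(0, len) as usize
    };
    let s = clamp(start.unwrap_or(0));
    let e = clamp(end.unwrap_or(len));
    v[s..e.max(s)].iter().copied().step_by(step).collect()
}

fn main() {
    let v = [10, 11, 12, 13, 14];
    assert_eq!(slice_1d(&v, Some(1), Some(4), 1), vec![11, 12, 13]);
    assert_eq!(slice_1d(&v, None, None, 2), vec![10, 12, 14]);
    assert_eq!(slice_1d(&v, Some(-2), None, 1), vec![13, 14]);
}
```

Unlike this sketch, the real `slice` returns a view that shares storage with the original tensor rather than a copy.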
[src]pub fn slow_conv3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv_dilated2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv_dilated3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv_transpose2d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv_transpose2d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv_transpose3d<T>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn slow_conv_transpose3d_out<T>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn smooth_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
[src]pub fn smooth_l1_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
[src]pub fn smooth_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
[src]pub fn soft_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn soft_margin_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn soft_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
[src]pub fn softplus_backward<S>(
&self,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Tensor where
S: Into<Scalar>,
[src]pub fn softplus_backward_out<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Tensor where
S: Into<Scalar>,
[src]pub fn softshrink_backward<S>(&self, grad_output: &Tensor, lambd: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn softshrink_backward_out<S>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
lambd: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn sort_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
descending: bool
) -> (Tensor, Tensor)
[src]pub fn sparse_resize_and_clear_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Tensor
[src]pub fn stft<T>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
normalized: bool,
onesided: bool,
return_complex: bool
) -> Tensor where
T: Borrow<Tensor>,
[src]pub fn svd_out(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
some: bool,
compute_uv: bool
) -> (Tensor, Tensor, Tensor)
[src]pub fn symeig_out(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool,
upper: bool
) -> (Tensor, Tensor)
[src]pub fn tensor_split2(
&self,
tensor_indices_or_sections: &Tensor,
dim: i64
) -> Vec<Tensor, Global>
[src]pub fn tensordot_out(
&self,
out: &Tensor,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Tensor
[src]pub fn threshold_backward<S>(
&self,
grad_output: &Tensor,
threshold: S
) -> Tensor where
S: Into<Scalar>,
[src]pub fn threshold_out<S>(&self, out: &Tensor, threshold: S, value: S) -> Tensor where
S: Into<Scalar>,
[src]pub fn topk_out(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> (Tensor, Tensor)
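What `topk` (and its `_out` variant above) returns along a dimension: the k largest, or smallest when `largest` is false, values together with their original indices. A pure-Rust sketch via a full argsort; `topk` here is a hypothetical helper, not the crate method:

```rust
// Sort indices by value, take the first k from the chosen end,
// and return (values, indices) like the tensor op does.
fn topk(v: &[f64], k: usize, largest: bool) -> (Vec<f64>, Vec<usize>) {
    let mut idx: Vec<usize> = (0..v.len()).collect();
    idx.sort_by(|&a, &b| v[a].partial_cmp(&v[b]).unwrap());
    if largest {
        idx.reverse();
    }
    idx.truncate(k);
    (idx.iter().map(|&i| v[i]).collect(), idx)
}

fn main() {
    let (vals, idx) = topk(&[1.0, 5.0, 3.0, 4.0], 2, true);
    assert_eq!(vals, vec![5.0, 4.0]);
    assert_eq!(idx, vec![1, 3]);
}
```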
[src]pub fn triangular_solve(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
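The `upper = false`, non-transposed case of `triangular_solve` reduces to forward substitution on a lower-triangular system L·x = b. A pure-Rust sketch with the matrix as rows; `solve_lower` is a hypothetical helper:

```rust
// Forward substitution: each x[i] depends only on the already-solved
// components x[0..i], so a single top-to-bottom pass suffices.
fn solve_lower(l: &[Vec<f64>], b: &[f64]) -> Vec<f64> {
    let n = b.len();
    let mut x = vec![0.0; n];
    for i in 0..n {
        let partial: f64 = (0..i).map(|j| l[i][j] * x[j]).sum();
        x[i] = (b[i] - partial) / l[i][i];
    }
    x
}

fn main() {
    // [2 0; 1 3] · x = [4, 7]  =>  x = [2, 5/3]
    let l = vec![vec![2.0, 0.0], vec![1.0, 3.0]];
    let x = solve_lower(&l, &[4.0, 7.0]);
    assert!((x[0] - 2.0).abs() < 1e-12);
    assert!((x[1] - 5.0 / 3.0).abs() < 1e-12);
}
```

With `upper = true` the same idea runs bottom-up (back substitution), and `unitriangular` skips the division by assuming unit diagonal entries.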
[src]pub fn triangular_solve_out(
&self,
x: &Tensor,
m: &Tensor,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
[src]pub fn unique_consecutive(
&self,
return_inverse: bool,
return_counts: bool,
dim: impl Into<Option<i64>>
) -> (Tensor, Tensor, Tensor)
[src]pub fn unique_dim(
&self,
dim: i64,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
[src]pub fn unique_dim_consecutive(
&self,
dim: i64,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
[src]pub fn upsample_bicubic2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_bicubic2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_bilinear2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_bilinear2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_linear1d(
&self,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_linear1d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_nearest1d(
&self,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Tensor
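The default index mapping used by `upsample_nearest1d` when no explicit scale is given: each output position i reads input index floor(i * in_len / out_len). A pure-Rust sketch; `upsample_nearest1d_simple` is a hypothetical helper:

```rust
// Nearest-neighbor upsampling: integer division implements the floor
// of i * (input_len / output_size).
fn upsample_nearest1d_simple(input: &[f64], output_size: usize) -> Vec<f64> {
    (0..output_size)
        .map(|i| input[i * input.len() / output_size])
        .collect()
}

fn main() {
    assert_eq!(
        upsample_nearest1d_simple(&[1.0, 2.0], 4),
        vec![1.0, 1.0, 2.0, 2.0]
    );
}
```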
[src]pub fn upsample_nearest1d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_nearest2d(
&self,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
[src]pub fn upsample_nearest2d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d(
&self,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d(
&self,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
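All of the upsample_* methods above take their optional scale factors as impl Into<Option<f64>>. A minimal sketch of that calling convention, using a hypothetical stand-in function (not the real binding), shows that a bare f64 converts implicitly to Some(..) while omitting a scale requires an annotated None:

```rust
// Hypothetical stand-in that mirrors the `impl Into<Option<f64>>`
// parameter style of the upsample_* signatures above.
fn upsample(
    output_size: &[i64],
    scales_h: impl Into<Option<f64>>,
    scales_w: impl Into<Option<f64>>,
) -> (Vec<i64>, Option<f64>, Option<f64>) {
    (output_size.to_vec(), scales_h.into(), scales_w.into())
}

fn main() {
    // A bare f64 converts to Some(..) automatically via `From<T> for Option<T>`.
    let (size, h, w) = upsample(&[32, 32], 2.0, 2.0);
    assert_eq!((h, w), (Some(2.0), Some(2.0)));
    // None needs a type annotation because the parameter is generic.
    let (_, h, _) = upsample(&[64, 64], None::<f64>, None::<f64>);
    assert_eq!(h, None);
    println!("{:?}", size);
}
```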
Computes the cross-entropy loss from the given logits and targets.
Returns the average accuracy for the given logits, assuming that the targets represent the ground truth.
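The two quantities described above can be sketched numerically in plain Rust, with no tch dependency: cross-entropy here is the mean negative log-softmax of each row's target logit, and accuracy is the fraction of rows whose argmax matches the target. This is an illustrative model of the semantics, not the library implementation:

```rust
// Mean cross-entropy over a mini-batch of logit rows and integer targets.
fn cross_entropy(logits: &[Vec<f64>], targets: &[usize]) -> f64 {
    let mut total = 0.0;
    for (row, &t) in logits.iter().zip(targets) {
        // log-sum-exp with max subtraction for numerical stability
        let max = row.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
        let lse = max + row.iter().map(|x| (x - max).exp()).sum::<f64>().ln();
        total += lse - row[t]; // -log softmax(row)[t]
    }
    total / logits.len() as f64 // averaged over the mini-batch
}

// Fraction of rows whose argmax matches the target index.
fn accuracy(logits: &[Vec<f64>], targets: &[usize]) -> f64 {
    let mut correct = 0usize;
    for (row, &t) in logits.iter().zip(targets) {
        let argmax = row
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap();
        if argmax == t {
            correct += 1;
        }
    }
    correct as f64 / targets.len() as f64
}

fn main() {
    let logits = vec![vec![2.0, 0.0], vec![0.0, 3.0]];
    let targets = [0usize, 1];
    assert_eq!(accuracy(&logits, &targets), 1.0);
    // Uniform logits over two classes give a loss of ln(2).
    assert!((cross_entropy(&[vec![0.0, 0.0]], &[0]) - (2.0f64).ln()).abs() < 1e-9);
}
```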
Flattens a tensor.
This returns a flattened version of the given tensor. The first dimension is preserved as it is assumed to be the mini-batch dimension.
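At the shape level, the flattening described above keeps the first dimension and collapses all remaining dimensions into one; a small sketch, assuming shapes are modeled as slices of i64:

```rust
// Computes the output shape of the flatten described above:
// [N1, N2, ..., Nk] -> [N1, N2 * ... * Nk].
fn flattened_shape(shape: &[i64]) -> Vec<i64> {
    match shape.split_first() {
        Some((&batch, rest)) => vec![batch, rest.iter().product()],
        None => vec![],
    }
}

fn main() {
    // A batch of 32 images of shape 3 x 28 x 28 flattens to [32, 2352].
    assert_eq!(flattened_shape(&[32, 3, 28, 28]), vec![32, 2352]);
    println!("{:?}", flattened_shape(&[32, 3, 28, 28]));
}
```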
Converts a tensor to a one-hot encoded version.
If the input has a size [N1, N2, …, Nk], the returned tensor has a size [N1, …, Nk, labels]. The returned tensor uses float values. Elements of the input vector are expected to be between 0 and labels-1.
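A sketch of the described one-hot semantics for the one-dimensional case, modeling the [N] -> [N, labels] float output with nested Vecs (illustrative only, not the library implementation):

```rust
// One-hot encodes a slice of class indices into float rows of width `labels`.
fn onehot(input: &[usize], labels: usize) -> Vec<Vec<f32>> {
    input
        .iter()
        .map(|&i| {
            // Elements of the input are expected to lie in 0..labels.
            assert!(i < labels, "index {} out of range for {} labels", i, labels);
            let mut row = vec![0.0f32; labels];
            row[i] = 1.0;
            row
        })
        .collect()
}

fn main() {
    let encoded = onehot(&[2, 0], 3);
    assert_eq!(encoded, vec![vec![0.0, 0.0, 1.0], vec![1.0, 0.0, 0.0]]);
    println!("{:?}", encoded);
}
```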
Copies a tensor to a newly allocated tensor using the same shape and device.
Trait Implementations
Auto Trait Implementations
impl RefUnwindSafe for TensorFromMat
impl Send for TensorFromMat
impl !Sync for TensorFromMat
impl Unpin for TensorFromMat
impl UnwindSafe for TensorFromMat
Blanket Implementations
Mutably borrows from an owned value.
type Output = T
Should always be Self
The inverse inclusion map: attempts to construct self from the equivalent element of its superset.
pub fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).
pub fn to_subset_unchecked(&self) -> SS
Use with care! Same as self.to_subset but without any property checks. Always succeeds.
pub fn from_subset(element: &SS) -> SP
The inclusion map: converts self to the equivalent element of its superset.
pub fn vzip(self) -> V