Struct tch::Tensor
pub struct Tensor { /* fields omitted */ }
A tensor object.
Implementations
Creates a new tensor from the pointer to an existing C++ tensor.
Safety
The caller must ensure that the pointer outlives the Rust object.
Creates a new tensor from the pointer to an existing C++ tensor.
Safety
A shallow copy of the pointer is made so there is no need for this pointer to remain valid for the whole lifetime of the Rust object.
Returns a pointer to the underlying C++ tensor.
The caller must ensure that the Rust tensor object outlives this pointer.
Returns a mutable pointer to the underlying C++ tensor.
The caller must ensure that the Rust tensor object outlives this pointer.
Returns the tensor size for single dimension tensors.
Returns the tensor sizes for two dimension tensors.
Returns the tensor sizes for three dimension tensors.
Returns the tensor sizes for four dimension tensors.
Returns the tensor sizes for five dimension tensors.
Returns the tensor sizes for six dimension tensors.
Returns the tensor strides for single dimension tensors.
Returns the tensor strides for two dimension tensors.
Returns the tensor strides for three dimension tensors.
Returns the tensor strides for four dimension tensors.
Returns the tensor strides for five dimension tensors.
Returns the tensor strides for six dimension tensors.
Returns the kind of elements stored in the input tensor. Returns an error on undefined tensors and unsupported data types.
Returns the kind of elements stored in the input tensor. Panics on undefined tensors and unsupported data types.
Prints the input tensor.
Caution: this uses the C++ printer which prints the whole tensor even if it is very large.
Returns a double value on tensors holding a single element. An error is returned otherwise.
Returns an int value on tensors holding a single element. An error is returned otherwise.
Returns a double value on tensors holding a single element. Panics otherwise.
Returns an int value on tensors holding a single element. Panics otherwise.
Returns true if gradients are currently tracked for this tensor.
Runs the backward pass, populating the gradient tensors for tensors whose gradients are tracked.
Gradient tracking can be turned on via set_requires_grad.
Runs the backward pass, populating the gradient tensors for tensors whose gradients are tracked.
Gradient tracking can be turned on via set_requires_grad.
Panics if the C++ API returns an exception.
pub fn f_run_backward<T1, T2>(
tensors: &[T1],
inputs: &[T2],
keep_graph: bool,
create_graph: bool
) -> Result<Vec<Tensor>, TchError> where
T1: Borrow<Tensor>,
T2: Borrow<Tensor>,
pub fn run_backward<T1, T2>(
tensors: &[T1],
inputs: &[T2],
keep_graph: bool,
create_graph: bool
) -> Vec<Tensor> where
T1: Borrow<Tensor>,
T2: Borrow<Tensor>,
Copies numel elements from self to dst.
Unscale tensor while checking for infinities.
found_inf is a singleton tensor that is used to record the presence of infinite values. inv_scale is a scalar containing the inverse scaling factor. This method is only available for CUDA tensors.
pub fn f_internal_amp_non_finite_check_and_unscale(
&mut self,
found_inf: &mut Tensor,
inv_scale: &Tensor
) -> Result<(), TchError>
pub fn internal_amp_non_finite_check_and_unscale(
&mut self,
found_inf: &mut Tensor,
inv_scale: &Tensor
)
Unscale tensor while checking for infinities.
found_inf is a singleton tensor that is used to record the presence of infinite values. inv_scale is a scalar containing the inverse scaling factor. This method is only available for CUDA tensors.
Copies numel elements from self to dst.
Copies numel elements from self to dst.
Copies numel elements from self to dst.
Converts a slice to a tensor.
Converts some byte data to a tensor with some specified kind and shape.
Creates a tensor from data that is assumed to be initialized. Resize operations are not allowed on this tensor without copying the data first.
Safety
This will panic if data points to invalid data.
Creates a tensor from data that is assumed to be initialized. Resize operations are not allowed on this tensor without copying the data first.
Safety
This will panic if data points to invalid data.
Converts some byte data to a tensor with some specified kind and shape.
Returns a new tensor that shares storage with the input tensor.
Gets the sub-tensor at the given index.
Copies values from the argument tensor to the input tensor.
Copies values from the argument tensor to the input tensor.
Loads a tensor from a file.
The file format is the same as the one used by the PyTorch C++ API.
Saves a tensor to a file.
The file format is the same as the one used by the PyTorch C++ API.
Saves some named tensors to a file.
The file format is the same as the one used by the PyTorch C++ API.
Loads some named tensors from a file.
The file format is the same as the one used by the PyTorch C++ API.
Loads some named tensors from a file to a given device.
The file format is the same as the one used by the PyTorch C++ API.
pub fn f_internal_adaptive_avg_pool2d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_adaptive_avg_pool3d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_aminmax_dim(
&self,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_amp_update_scale_(
&mut self,
growth_tracker: &Tensor,
found_inf: &Tensor,
scale_growth_factor: f64,
scale_backoff_factor: f64,
growth_interval: i64
) -> Result<Tensor, TchError>
pub fn f_internal_baddbmm_mkl_(
&mut self,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_bmm_out(
&self,
out: &Tensor,
mat2: &Tensor,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_internal_cat_out<T: Borrow<Tensor>>(
out: &Tensor,
tensors: &[T],
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_cdist_backward(
grad: &Tensor,
x1: &Tensor,
x2: &Tensor,
p: f64,
cdist: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_cholesky_solve_helper(
&self,
a: &Tensor,
upper: bool
) -> Result<Tensor, TchError>
pub fn f_internal_compute_linear_combination(
&self,
coefficients: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_compute_linear_combination_out(
&self,
out: &Tensor,
coefficients: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_internal_convolution_deprecated<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool
) -> Result<Tensor, TchError>
pub fn f_internal_convolution_mode<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_internal_convolution_nogroup<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64,
zero_infinity: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_ctc_loss_backward(
grad: &Tensor,
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
neg_log_likelihood: &Tensor,
log_alpha: &Tensor,
blank: i64,
zero_infinity: bool
) -> Result<Tensor, TchError>
pub fn f_internal_cudnn_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64,
deterministic: bool,
zero_infinity: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_cudnn_init_dropout_state(
dropout: f64,
train: bool,
dropout_seed: i64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_cudnn_rnn<T: Borrow<Tensor>>(
&self,
weight: &[T],
weight_stride0: i64,
weight_buf: Option<T>,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> Result<(Tensor, Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_internal_cudnn_rnn_flatten_weight<T: Borrow<Tensor>>(
weight_arr: &[T],
weight_stride0: i64,
input_size: i64,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
bidirectional: bool
) -> Result<Tensor, TchError>
pub fn f_internal_dirichlet_grad(
x: &Tensor,
alpha: &Tensor,
total: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_embedding_bag<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool,
padding_idx: i64
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_internal_embedding_bag_backward<T: Borrow<Tensor>>(
grad: &Tensor,
indices: &Tensor,
offsets: &Tensor,
offset2bag: &Tensor,
bag_size: &Tensor,
maximum_indices: &Tensor,
num_weights: i64,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
padding_idx: i64
) -> Result<Tensor, TchError>
pub fn f_internal_embedding_bag_dense_backward<T: Borrow<Tensor>>(
grad: &Tensor,
indices: &Tensor,
offset2bag: &Tensor,
bag_size: &Tensor,
maximum_indices: &Tensor,
num_weights: i64,
scale_grad_by_freq: bool,
mode: i64,
per_sample_weights: Option<T>,
padding_idx: i64
) -> Result<Tensor, TchError>
pub fn f_internal_embedding_bag_forward_only<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool,
padding_idx: i64
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_internal_embedding_bag_per_sample_weights_backward(
grad: &Tensor,
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
offset2bag: &Tensor,
mode: i64,
padding_idx: i64
) -> Result<Tensor, TchError>
pub fn f_internal_embedding_bag_sparse_backward<T: Borrow<Tensor>>(
grad: &Tensor,
indices: &Tensor,
offsets: &Tensor,
offset2bag: &Tensor,
bag_size: &Tensor,
num_weights: i64,
scale_grad_by_freq: bool,
mode: i64,
per_sample_weights: Option<T>,
padding_idx: i64
) -> Result<Tensor, TchError>
pub fn f_internal_empty_affine_quantized(
size: &[i64],
options: (Kind, Device),
scale: f64,
zero_point: i64
) -> Result<Tensor, TchError>
pub fn f_internal_empty_per_channel_affine_quantized(
size: &[i64],
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_channel_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_fake_quantize_learnable_per_tensor_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<Tensor, TchError>
pub fn f_internal_fake_quantize_learnable_per_tensor_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_fft_c2c(
&self,
dim: &[i64],
normalization: i64,
forward: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
forward: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2r(
&self,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Result<Tensor, TchError>
pub fn f_internal_fft_c2r_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Result<Tensor, TchError>
pub fn f_internal_fft_r2c(
&self,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Result<Tensor, TchError>
pub fn f_internal_fft_r2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Result<Tensor, TchError>
pub fn f_internal_gather_sparse_backward(
&self,
dim: i64,
index: &Tensor,
grad: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_grid_sampler_2d_cpu_fallback(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_internal_grid_sampler_2d_cpu_fallback_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_has_compatible_shallow_copy_type(
&self,
from: &Tensor
) -> Result<bool, TchError>
pub fn f_internal_index_copy_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_index_put_impl_<T: Borrow<Tensor>>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool,
unsafe_: bool
) -> Result<Tensor, TchError>
pub fn f_internal_linalg_inv_out_helper_(
&mut self,
infos_lu: &Tensor,
infos_getri: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_linalg_solve_out_helper_(
&mut self,
other: &Tensor,
infos: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_lu_with_info(
&self,
pivot: bool,
check_errors: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_make_dual(
primal: &Tensor,
tangent: &Tensor,
level: i64
) -> Result<Tensor, TchError>
pub fn f_internal_make_per_channel_quantized_tensor(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64
) -> Result<Tensor, TchError>
pub fn f_internal_make_per_tensor_quantized_tensor(
&self,
scale: f64,
zero_point: i64
) -> Result<Tensor, TchError>
pub fn f_internal_nnpack_spatial_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_nnpack_spatial_convolution_backward_input(
&self,
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_nnpack_spatial_convolution_backward_weight(
&self,
weightsize: &[i64],
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_pack_padded_sequence(
&self,
lengths: &Tensor,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_pack_padded_sequence_backward(
grad: &Tensor,
input_size: &[i64],
batch_sizes: &Tensor,
batch_first: bool
) -> Result<Tensor, TchError>
pub fn f_internal_pad_packed_sequence<S: Into<Scalar>>(
data: &Tensor,
batch_sizes: &Tensor,
batch_first: bool,
padding_value: S,
total_length: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_pdist_backward(
&self,
grad: &Tensor,
p: f64,
pdist: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_remove_batch_dim(
&self,
level: i64,
batch_size: i64,
out_dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_rowwise_prune(
weight: &Tensor,
mask: &Tensor,
compressed_indices_dtype: Kind
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_sobol_engine_draw(
quasi: &Tensor,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64,
dtype: Kind
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_sobol_engine_ff_(
&mut self,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_initialize_state_(
&mut self,
dimension: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sobol_engine_scramble_(
&mut self,
ltm: &Tensor,
dimension: i64
) -> Result<Tensor, TchError>
pub fn f_internal_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_coo_tensor_unsafe(
indices: &Tensor,
values: &Tensor,
size: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_coo_tensor_with_dims(
sparse_dim: i64,
dense_dim: i64,
size: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_coo_tensor_with_dims_and_tensors(
sparse_dim: i64,
dense_dim: i64,
size: &[i64],
indices: &Tensor,
values: &Tensor,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_csr_tensor(
crow_indices: &Tensor,
col_indices: &Tensor,
values: &Tensor,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_csr_tensor_crow_col_value_size(
crow_indices: &Tensor,
col_indices: &Tensor,
values: &Tensor,
size: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax(
&self,
dim: i64,
half_to_float: bool
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_mask_helper(
tr: &Tensor,
mask_indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_sum_backward(
&self,
grad: &Tensor,
dim: &[i64]
) -> Result<Tensor, TchError>
pub fn f_internal_sparse_sum_dim_dtype(
&self,
dim: &[i64],
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_internal_stack_out<T: Borrow<Tensor>>(
out: &Tensor,
tensors: &[T],
dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_svd_helper(
&self,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_symeig_helper(
&self,
eigenvectors: bool,
upper: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_test_ambiguous_defaults(
dummy: &Tensor,
a: i64,
b: i64
) -> Result<Tensor, TchError>
pub fn f_internal_test_ambiguous_defaults_b(
dummy: &Tensor,
a: i64,
b: &str
) -> Result<Tensor, TchError>
pub fn f_internal_test_optional_filled_intlist<'a>(
values: &Tensor,
addends: impl Into<Option<&'a [i64]>>
) -> Result<Tensor, TchError>
pub fn f_internal_test_optional_floatlist(
values: &Tensor,
addends: &[f64]
) -> Result<Tensor, TchError>
pub fn f_internal_test_optional_intlist<'a>(
values: &Tensor,
addends: impl Into<Option<&'a [i64]>>
) -> Result<Tensor, TchError>
pub fn f_internal_test_serialization_subcmul(
&self,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_internal_test_string_default(
dummy: &Tensor,
a: &str,
b: &str
) -> Result<Tensor, TchError>
pub fn f_internal_trilinear(
i1: &Tensor,
i2: &Tensor,
i3: &Tensor,
expand1: &[i64],
expand2: &[i64],
expand3: &[i64],
sumdim: &[i64],
unroll_dim: i64
) -> Result<Tensor, TchError>
pub fn f_internal_unique(
&self,
sorted: bool,
return_inverse: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_unique2(
&self,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_internal_use_cudnn_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64
) -> Result<bool, TchError>
pub fn f_internal_weight_norm_cuda_interface(
v: &Tensor,
g: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_weight_norm_cuda_interface_backward(
grad_w: &Tensor,
saved_v: &Tensor,
saved_g: &Tensor,
saved_norms: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_internal_weight_norm_differentiable_backward(
grad_w: &Tensor,
saved_v: &Tensor,
saved_g: &Tensor,
saved_norms: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_avg_pool2d_out(
&self,
out: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_backward(
&self,
grad_input: &Tensor,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_avg_pool3d_out(
&self,
out: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_adaptive_max_pool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_adaptive_max_pool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<(Tensor, Tensor), TchError>
pub fn f_addbmm_out(
&self,
out: &Tensor,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addcdiv_out(
&self,
out: &Tensor,
tensor1: &Tensor,
tensor2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_addcmul_out(
&self,
out: &Tensor,
tensor1: &Tensor,
tensor2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_affine_grid_generator(
theta: &Tensor,
size: &[i64],
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_affine_grid_generator_backward(
grad: &Tensor,
size: &[i64],
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_allclose(
&self,
other: &Tensor,
rtol: f64,
atol: f64,
equal_nan: bool
) -> Result<bool, TchError>
pub fn f_arange_start<S: Into<Scalar>>(
start: S,
end: S,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_arange_start_out<S: Into<Scalar>>(
out: &Tensor,
start: S,
end: S
) -> Result<Tensor, TchError>
pub fn f_arange_start_step<S: Into<Scalar>>(
start: S,
end: S,
step: S,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_argmax_out(
&self,
out: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_argmin_out(
&self,
out: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_as_strided(
&self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_as_strided_(
&mut self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool2d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_avg_pool3d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_baddbmm_out(
&self,
out: &Tensor,
batch1: &Tensor,
batch2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_bartlett_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_batch_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError>
pub fn f_batch_norm_backward_elemt<T: Borrow<Tensor>>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
mean_dy: &Tensor,
mean_dy_xmu: &Tensor,
count: &Tensor
) -> Result<Tensor, TchError>
pub fn f_batch_norm_backward_reduce<T: Borrow<Tensor>>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
input_g: bool,
weight_g: bool,
bias_g: bool
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_batch_norm_elemt<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Result<Tensor, TchError>
pub fn f_batch_norm_elemt_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Result<Tensor, TchError>
pub fn f_batch_norm_gather_stats<T: Borrow<Tensor>>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
count: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_batch_norm_gather_stats_with_counts<T: Borrow<Tensor>>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
counts: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_batch_norm_update_stats<T: Borrow<Tensor>>(
&self,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_bilinear<T: Borrow<Tensor>>(
input1: &Tensor,
input2: &Tensor,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError>
pub fn f_binary_cross_entropy<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_binary_cross_entropy_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_binary_cross_entropy_backward_grad_input<T: Borrow<Tensor>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_binary_cross_entropy_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_binary_cross_entropy_with_logits<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_binary_cross_entropy_with_logits_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_bincount<T: Borrow<Tensor>>(
&self,
weights: Option<T>,
minlength: i64
) -> Result<Tensor, TchError>
pub fn f_bitwise_and_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_bitwise_or_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_bitwise_xor_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_blackman_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_bucketize(
&self,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_bucketize_scalar<S: Into<Scalar>>(
self_scalar: S,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_bucketize_tensor_out(
&self,
out: &Tensor,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_cat_out<T: Borrow<Tensor>>(
out: &Tensor,
tensors: &[T],
dim: i64
) -> Result<Tensor, TchError>
pub fn f_cdist(
x1: &Tensor,
x2: &Tensor,
p: f64,
compute_mode: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_chain_matmul_out<T: Borrow<Tensor>>(
out: &Tensor,
matrices: &[T]
) -> Result<Tensor, TchError>
pub fn f_cholesky_solve_out(
&self,
out: &Tensor,
input2: &Tensor,
upper: bool
) -> Result<Tensor, TchError>
pub fn f_choose_qparams_optimized(
&self,
numel: i64,
n_bins: i64,
ratio: f64,
bit_width: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_clamp_out<S: Into<Scalar>>(
&self,
out: &Tensor,
min: S,
max: S
) -> Result<Tensor, TchError>
pub fn f_clamp_tensor<T: Borrow<Tensor>>(
&self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError>
pub fn f_clamp_tensor_<T: Borrow<Tensor>>(
&mut self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError>
pub fn f_clamp_tensor_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError>
pub fn f_clip_out<S: Into<Scalar>>(
&self,
out: &Tensor,
min: S,
max: S
) -> Result<Tensor, TchError>
pub fn f_clip_tensor<T: Borrow<Tensor>>(
&self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError>
pub fn f_clip_tensor_<T: Borrow<Tensor>>(
&mut self,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError>
pub fn f_clip_tensor_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Result<Tensor, TchError>
pub fn f_col2im(
&self,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_col2im_backward(
grad_output: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_col2im_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_col2im_out(
&self,
out: &Tensor,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_column_stack_out<T: Borrow<Tensor>>(
out: &Tensor,
tensors: &[T]
) -> Result<Tensor, TchError>
pub fn f_conv1d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_conv1d_padding<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_conv2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_conv2d_padding<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_conv3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_conv3d_padding<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_conv_depthwise3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_conv_depthwise3d_backward(
&self,
grad_input: &Tensor,
grad_weight: &Tensor,
grad_bias: &Tensor,
grad_output: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_conv_tbc_backward(
&self,
input: &Tensor,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_conv_transpose1d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_conv_transpose2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_conv_transpose3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_convolution_overrideable<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_copy_sparse_to_sparse_(
&mut self,
src: &Tensor,
non_blocking: bool
) -> Result<Tensor, TchError>
pub fn f_copysign_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_cosine_embedding_loss(
input1: &Tensor,
input2: &Tensor,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_cosine_similarity(
x1: &Tensor,
x2: &Tensor,
dim: i64,
eps: f64
) -> Result<Tensor, TchError>
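For reference, `cosine_similarity` computes `dot(x1, x2) / max(||x1|| * ||x2||, eps)` along `dim`. A minimal one-dimensional sketch in plain Rust (illustrative helper, not the tch kernel; `dim` handling is omitted):

```rust
// Cosine similarity of two flat vectors; eps guards against division by
// zero when either vector has (near-)zero norm.
fn cosine_similarity(x1: &[f64], x2: &[f64], eps: f64) -> f64 {
    let dot: f64 = x1.iter().zip(x2).map(|(a, b)| a * b).sum();
    let n1: f64 = x1.iter().map(|a| a * a).sum::<f64>().sqrt();
    let n2: f64 = x2.iter().map(|a| a * a).sum::<f64>().sqrt();
    dot / (n1 * n2).max(eps)
}

fn main() {
    // Parallel vectors have similarity 1, orthogonal vectors 0.
    assert!((cosine_similarity(&[1.0, 2.0], &[2.0, 4.0], 1e-8) - 1.0).abs() < 1e-12);
    assert!(cosine_similarity(&[1.0, 0.0], &[0.0, 2.0], 1e-8).abs() < 1e-12);
    println!("ok");
}
```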
pub fn f_cross_entropy_loss<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError>
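The core of `cross_entropy_loss` is `-log_softmax(logits)[target]`. A single-sample, unweighted sketch in plain Rust (class `weight`, `ignore_index`, batching, and `reduction` are omitted; the numerically stable log-sum-exp trick of subtracting the max is the standard one):

```rust
// Cross entropy for one sample: log_sum_exp(logits) - logits[target].
fn cross_entropy(logits: &[f64], target: usize) -> f64 {
    let m = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let log_sum_exp = m + logits.iter().map(|&x| (x - m).exp()).sum::<f64>().ln();
    log_sum_exp - logits[target]
}

fn main() {
    // Uniform logits over 2 classes: loss is ln(2) regardless of the target.
    assert!((cross_entropy(&[0.0, 0.0], 0) - (2.0f64).ln()).abs() < 1e-12);
    // A confident correct prediction has a small loss.
    assert!(cross_entropy(&[10.0, 0.0], 0) < 1e-4);
    println!("ok");
}
```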
pub fn f_cross_out(
&self,
out: &Tensor,
other: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64,
reduction: Reduction,
zero_infinity: bool
) -> Result<Tensor, TchError>
pub fn f_ctc_loss_tensor(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &Tensor,
target_lengths: &Tensor,
blank: i64,
reduction: Reduction,
zero_infinity: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_affine_grid_generator(
theta: &Tensor,
n: i64,
c: i64,
h: i64,
w: i64
) -> Result<Tensor, TchError>
pub fn f_cudnn_affine_grid_generator_backward(
grad: &Tensor,
n: i64,
c: i64,
h: i64,
w: i64
) -> Result<Tensor, TchError>
pub fn f_cudnn_batch_norm<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_cudnn_batch_norm_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64,
reservespace: &Tensor
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_cudnn_convolution(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_add_relu<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
weight: &Tensor,
z: &Tensor,
alpha: S,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_deprecated<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_relu<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_backward_input(
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_deprecated<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_convolution_transpose_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_cudnn_grid_sampler_backward(
&self,
grid: &Tensor,
grad_output: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cummax_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cummaxmin_backward(
&self,
grad: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_cummin_out(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_cumprod_backward(
&self,
grad: &Tensor,
dim: i64,
output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_diag_backward(
grad: &Tensor,
input_sizes: &[i64],
diagonal: i64
) -> Result<Tensor, TchError>
pub fn f_diagonal_backward(
grad: &Tensor,
input_sizes: &[i64],
offset: i64,
dim1: i64,
dim2: i64
) -> Result<Tensor, TchError>
pub fn f_diff<T: Borrow<Tensor>>(
&self,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Result<Tensor, TchError>
pub fn f_diff_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Result<Tensor, TchError>
pub fn f_div_out_mode(
&self,
out: &Tensor,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_div_scalar_mode<S: Into<Scalar>>(
&self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_div_scalar_mode_<S: Into<Scalar>>(
&mut self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_div_tensor_mode_(
&mut self,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_out_mode(
&self,
out: &Tensor,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_scalar_mode<S: Into<Scalar>>(
&self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_scalar_mode_<S: Into<Scalar>>(
&mut self,
other: S,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_tensor_mode(
&self,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_divide_tensor_mode_(
&mut self,
other: &Tensor,
rounding_mode: &str
) -> Result<Tensor, TchError>
pub fn f_eig_e(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_elu_backward<S: Into<Scalar>>(
grad_output: &Tensor,
alpha: S,
scale: S,
input_scale: S,
is_result: bool,
self_or_result: &Tensor
) -> Result<Tensor, TchError>
pub fn f_embedding(
weight: &Tensor,
indices: &Tensor,
padding_idx: i64,
scale_grad_by_freq: bool,
sparse: bool
) -> Result<Tensor, TchError>
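Conceptually, `embedding` is a row lookup into the `weight` matrix, one row per index; `padding_idx`, `scale_grad_by_freq`, and `sparse` only affect gradients, not the forward value. A forward-only sketch (illustrative helper name, indices assumed in range):

```rust
// Forward pass of an embedding lookup: gather rows of `weight`.
fn embedding(weight: &[Vec<f64>], indices: &[usize]) -> Vec<Vec<f64>> {
    indices.iter().map(|&i| weight[i].clone()).collect()
}

fn main() {
    let weight = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let out = embedding(&weight, &[1, 0, 1]);
    assert_eq!(out, vec![vec![3.0, 4.0], vec![1.0, 2.0], vec![3.0, 4.0]]);
    println!("{:?}", out);
}
```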
pub fn f_embedding_backward(
grad: &Tensor,
indices: &Tensor,
num_weights: i64,
padding_idx: i64,
scale_grad_by_freq: bool,
sparse: bool
) -> Result<Tensor, TchError>
pub fn f_embedding_bag<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_embedding_bag_padding_idx<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool,
padding_idx: impl Into<Option<i64>>
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_embedding_dense_backward(
grad_output: &Tensor,
indices: &Tensor,
num_weights: i64,
padding_idx: i64,
scale_grad_by_freq: bool
) -> Result<Tensor, TchError>
pub fn f_embedding_renorm_(
&mut self,
indices: &Tensor,
max_norm: f64,
norm_type: f64
) -> Result<Tensor, TchError>
pub fn f_embedding_sparse_backward(
grad: &Tensor,
indices: &Tensor,
num_weights: i64,
padding_idx: i64,
scale_grad_by_freq: bool
) -> Result<Tensor, TchError>
pub fn f_empty_strided(
size: &[i64],
stride: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_channel_affine_cachemask(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fake_quantize_per_channel_affine_cachemask_backward(
grad: &Tensor,
mask: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_tensor_affine(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Result<Tensor, TchError>
pub fn f_fake_quantize_per_tensor_affine_cachemask(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fake_quantize_per_tensor_affine_cachemask_backward(
grad: &Tensor,
mask: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_fp16_weight(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_fp16_weight_fp32_activation(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_int8_weight<S: Into<Scalar>>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fbgemm_linear_int8_weight_fp32_activation<S: Into<Scalar>>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fft_fft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_fftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_hfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_hfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ifftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ihfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_ihfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_irfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft(
&self,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft2<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fft_rfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Result<Tensor, TchError>
pub fn f_fill_diagonal_<S: Into<Scalar>>(
&mut self,
fill_value: S,
wrap: bool
) -> Result<Tensor, TchError>
pub fn f_float_power_scalar<S: Into<Scalar>>(
self_scalar: S,
exponent: &Tensor
) -> Result<Tensor, TchError>
pub fn f_float_power_scalar_out<S: Into<Scalar>>(
out: &Tensor,
self_scalar: S,
exponent: &Tensor
) -> Result<Tensor, TchError>
pub fn f_float_power_tensor_scalar<S: Into<Scalar>>(
&self,
exponent: S
) -> Result<Tensor, TchError>
pub fn f_float_power_tensor_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
exponent: S
) -> Result<Tensor, TchError>
pub fn f_float_power_tensor_tensor_out(
&self,
out: &Tensor,
exponent: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fmod_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool2d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fractional_max_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool2d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fractional_max_pool3d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_fractional_max_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_fractional_max_pool3d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_frexp_tensor_out(
&self,
mantissa: &Tensor,
exponent: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_frobenius_norm_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_from_file(
filename: &str,
shared: bool,
size: impl Into<Option<i64>>,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_full<S: Into<Scalar>>(
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_full_out<S: Into<Scalar>>(
out: &Tensor,
size: &[i64],
fill_value: S
) -> Result<Tensor, TchError>
pub fn f_gather_backward(
&self,
grad: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
pub fn f_gather_out(
&self,
out: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Result<Tensor, TchError>
pub fn f_glu_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
dim: i64
) -> Result<Tensor, TchError>
pub fn f_greater_equal_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_greater_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_grid_sampler(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_grid_sampler_2d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_grid_sampler_2d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_grid_sampler_3d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<Tensor, TchError>
pub fn f_grid_sampler_3d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_group_norm<T: Borrow<Tensor>>(
&self,
num_groups: i64,
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError>
pub fn f_gru<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_gru_cell<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError>
pub fn f_gru_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_hamming_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_hamming_window_periodic_alpha(
window_length: i64,
periodic: bool,
alpha: f64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_hamming_window_periodic_alpha_beta(
window_length: i64,
periodic: bool,
alpha: f64,
beta: f64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
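The Hamming window family above evaluates `w[n] = alpha - beta * cos(2*pi*n / D)`, where `D` is `window_length` for the periodic variant and `window_length - 1` for the symmetric one (PyTorch's defaults are `alpha = 0.54`, `beta = 0.46`). A sketch of that formula in plain Rust (assumes `window_length > 1` for the symmetric case):

```rust
// Hamming window samples; periodic windows are meant for spectral
// analysis (FFT), symmetric ones for filter design.
fn hamming_window(window_length: usize, periodic: bool, alpha: f64, beta: f64) -> Vec<f64> {
    let denom = if periodic { window_length } else { window_length - 1 } as f64;
    (0..window_length)
        .map(|n| alpha - beta * (2.0 * std::f64::consts::PI * n as f64 / denom).cos())
        .collect()
}

fn main() {
    let w = hamming_window(5, false, 0.54, 0.46);
    // Symmetric window: endpoints at alpha - beta, peak of 1.0 in the middle.
    assert!((w[0] - 0.08).abs() < 1e-12);
    assert!((w[2] - 1.0).abs() < 1e-12);
    println!("{:?}", w);
}
```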
pub fn f_hann_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_hardshrink_backward<S: Into<Scalar>>(
&self,
grad_out: &Tensor,
lambd: S
) -> Result<Tensor, TchError>
pub fn f_hardtanh_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Result<Tensor, TchError>
pub fn f_hardtanh_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Result<Tensor, TchError>
pub fn f_hinge_embedding_loss(
&self,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_huber_loss(
&self,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
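`huber_loss` is quadratic for small residuals and linear for large ones, with `delta` as the crossover: `0.5 * d^2` when `|d| < delta`, else `delta * (|d| - 0.5 * delta)`. A sketch with mean reduction (the `Reduction::Mean` case; `None` and `Sum` differ only in the final division):

```rust
// Elementwise Huber loss averaged over the inputs.
fn huber_loss(input: &[f64], target: &[f64], delta: f64) -> f64 {
    let n = input.len() as f64;
    input
        .iter()
        .zip(target)
        .map(|(&x, &y)| {
            let d = (x - y).abs();
            if d < delta { 0.5 * d * d } else { delta * (d - 0.5 * delta) }
        })
        .sum::<f64>()
        / n
}

fn main() {
    // Small residual (0.5 < delta): quadratic branch, 0.5 * 0.25 = 0.125.
    assert_eq!(huber_loss(&[0.0], &[0.5], 1.0), 0.125);
    // Large residual (2.0 >= delta): linear branch, 1.0 * (2.0 - 0.5) = 1.5.
    assert_eq!(huber_loss(&[0.0], &[2.0], 1.0), 1.5);
    println!("ok");
}
```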
pub fn f_huber_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_huber_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_huber_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Result<Tensor, TchError>
pub fn f_im2col(
&self,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_im2col_backward(
grad_output: &Tensor,
input_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_im2col_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
input_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_im2col_out(
&self,
out: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Result<Tensor, TchError>
pub fn f_index_add_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_add_alpha<S: Into<Scalar>>(
&self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Result<Tensor, TchError>
pub fn f_index_add_alpha_<S: Into<Scalar>>(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Result<Tensor, TchError>
pub fn f_index_copy_(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_fill<S: Into<Scalar>>(
&self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError>
pub fn f_index_fill_<S: Into<Scalar>>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError>
pub fn f_index_fill_int_tensor(
&self,
dim: i64,
index: &Tensor,
value: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_fill_int_tensor_(
&mut self,
dim: i64,
index: &Tensor,
value: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_put<T: Borrow<Tensor>>(
&self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError>
pub fn f_index_put_<T: Borrow<Tensor>>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError>
pub fn f_index_select_backward(
grad: &Tensor,
self_sizes: &[i64],
dim: i64,
index: &Tensor
) -> Result<Tensor, TchError>
pub fn f_index_select_out(
&self,
out: &Tensor,
dim: i64,
index: &Tensor
) -> Result<Tensor, TchError>
pub fn f_infinitely_differentiable_gelu_backward(
&self,
grad: &Tensor
) -> Result<Tensor, TchError>
pub fn f_instance_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
use_input_stats: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Result<Tensor, TchError>
pub fn f_isclose(
&self,
other: &Tensor,
rtol: f64,
atol: f64,
equal_nan: bool
) -> Result<Tensor, TchError>
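`isclose` applies the usual combined tolerance test `|a - b| <= atol + rtol * |b|`, with `equal_nan` controlling whether two NaNs count as close. A scalar sketch (the real op broadcasts elementwise):

```rust
// Scalar version of the isclose predicate.
fn isclose(a: f64, b: f64, rtol: f64, atol: f64, equal_nan: bool) -> bool {
    if a.is_nan() || b.is_nan() {
        return equal_nan && a.is_nan() && b.is_nan();
    }
    // Exact equality also covers matching infinities, where a - b would be NaN.
    if a == b {
        return true;
    }
    (a - b).abs() <= atol + rtol * b.abs()
}

fn main() {
    assert!(isclose(1.0, 1.0 + 1e-9, 1e-5, 1e-8, false));
    assert!(!isclose(1.0, 2.0, 1e-5, 1e-8, false));
    assert!(!isclose(f64::NAN, f64::NAN, 1e-5, 1e-8, false));
    assert!(isclose(f64::NAN, f64::NAN, 1e-5, 1e-8, true));
    println!("ok");
}
```

Note the tolerance is relative to `other` (`b`), so the predicate is not symmetric in its arguments.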
pub fn f_istft<T: Borrow<Tensor>>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
center: bool,
normalized: bool,
onesided: bool,
length: impl Into<Option<i64>>,
return_complex: bool
) -> Result<Tensor, TchError>
pub fn f_kaiser_window_beta(
window_length: i64,
periodic: bool,
beta: f64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_kaiser_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_kl_div(
&self,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Result<Tensor, TchError>
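As in PyTorch, `kl_div` expects `self` to hold log-probabilities; the pointwise term is `target * (ln(target) - input)`, or `exp(target) * (target - input)` when `log_target` is true. A sketch with mean-over-elements reduction (PyTorch's `mean` divides by the element count, not the batch size):

```rust
// Pointwise KL divergence terms averaged over all elements.
// `input_logp` holds log-probabilities, matching the op's contract.
fn kl_div(input_logp: &[f64], target: &[f64], log_target: bool) -> f64 {
    let n = input_logp.len() as f64;
    input_logp
        .iter()
        .zip(target)
        .map(|(&lp, &t)| if log_target { t.exp() * (t - lp) } else { t * (t.ln() - lp) })
        .sum::<f64>()
        / n
}

fn main() {
    // Identical distributions: divergence is zero.
    let lp = [(0.5f64).ln(), (0.5f64).ln()];
    assert!(kl_div(&lp, &[0.5, 0.5], false).abs() < 1e-12);
    // Diverging distributions give a positive value.
    assert!(kl_div(&lp, &[0.9, 0.1], false) > 0.0);
    println!("ok");
}
```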
pub fn f_kl_div_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Result<Tensor, TchError>
pub fn f_kthvalue_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_layer_norm<T: Borrow<Tensor>>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enable: bool
) -> Result<Tensor, TchError>
pub fn f_leaky_relu_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
negative_slope: S,
self_is_result: bool
) -> Result<Tensor, TchError>
pub fn f_lerp_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
end: &Tensor,
weight: S
) -> Result<Tensor, TchError>
pub fn f_lerp_tensor_out(
&self,
out: &Tensor,
end: &Tensor,
weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_less_equal_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_less_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_linalg_cholesky_ex_l(
&self,
l: &Tensor,
info: &Tensor,
check_errors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_eig_out(
&self,
eigenvalues: &Tensor,
eigenvectors: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_eigh_eigvals(
&self,
eigvals: &Tensor,
eigvecs: &Tensor,
uplo: &str
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_householder_product_out(
&self,
out: &Tensor,
tau: &Tensor
) -> Result<Tensor, TchError>
pub fn f_linalg_inv_ex_inverse(
&self,
inverse: &Tensor,
info: &Tensor,
check_errors: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_lstsq(
&self,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_linalg_lstsq_out(
&self,
solution: &Tensor,
residuals: &Tensor,
rank: &Tensor,
singular_values: &Tensor,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> Result<(Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_linalg_matrix_norm<S: Into<Scalar>>(
&self,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_norm_out<S: Into<Scalar>>(
&self,
out: &Tensor,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_norm_str_ord(
&self,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_norm_str_ord_out(
&self,
out: &Tensor,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank(
&self,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank_out(
&self,
out: &Tensor,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank_out_tol_tensor(
&self,
out: &Tensor,
tol: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_matrix_rank_tol_tensor(
&self,
tol: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_multi_dot_out<T: Borrow<Tensor>>(
out: &Tensor,
tensors: &[T]
) -> Result<Tensor, TchError>
pub fn f_linalg_norm<'a, S: Into<Scalar>>(
&self,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_norm_ord_str<'a>(
&self,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_norm_ord_str_out<'a>(
&self,
out: &Tensor,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_norm_out<'a, S: Into<Scalar>>(
&self,
out: &Tensor,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_linalg_pinv_out(
&self,
out: &Tensor,
rcond: f64,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_pinv_out_rcond_tensor(
&self,
out: &Tensor,
rcond: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_pinv_rcond_tensor(
&self,
rcond: &Tensor,
hermitian: bool
) -> Result<Tensor, TchError>
pub fn f_linalg_qr_out(
&self,
q: &Tensor,
r: &Tensor,
mode: &str
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_slogdet_out(
&self,
sign: &Tensor,
logabsdet: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_linalg_svd_u(
&self,
u: &Tensor,
s: &Tensor,
vh: &Tensor,
full_matrices: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_linalg_tensorsolve<'a>(
&self,
other: &Tensor,
dims: impl Into<Option<&'a [i64]>>
) -> Result<Tensor, TchError>
pub fn f_linalg_tensorsolve_out<'a>(
&self,
out: &Tensor,
other: &Tensor,
dims: impl Into<Option<&'a [i64]>>
) -> Result<Tensor, TchError>
pub fn f_linear<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError>
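`linear` computes `y = x @ weight^T + bias`, with `weight` laid out as `[out_features, in_features]` and `bias` optional, mirroring the `Option<T>` parameter above. A single-sample sketch in plain Rust:

```rust
// y[o] = dot(weight[o], x) + bias[o]; bias is optional.
fn linear(x: &[f64], weight: &[Vec<f64>], bias: Option<&[f64]>) -> Vec<f64> {
    weight
        .iter()
        .enumerate()
        .map(|(o, row)| {
            let dot: f64 = row.iter().zip(x).map(|(w, xi)| w * xi).sum();
            dot + bias.map_or(0.0, |b| b[o])
        })
        .collect()
}

fn main() {
    let weight = vec![vec![1.0, 0.0], vec![0.0, 1.0]]; // identity, 2 -> 2
    let y = linear(&[1.0, 2.0], &weight, Some(&[10.0, 20.0][..]));
    assert_eq!(y, vec![11.0, 22.0]);
    // Passing None for the bias leaves the matrix product unchanged.
    assert_eq!(linear(&[1.0, 2.0], &weight, None), vec![1.0, 2.0]);
    println!("{:?}", y);
}
```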
pub fn f_linspace<S: Into<Scalar>>(
start: S,
end: S,
steps: impl Into<Option<i64>>,
options: (Kind, Device)
) -> Result<Tensor, TchError>
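`linspace` produces `steps` evenly spaced values from `start` to `end` inclusive. A sketch of the spacing rule (assumes `steps >= 1`; the tensor options are irrelevant to the values themselves):

```rust
// `steps` evenly spaced points with both endpoints included.
fn linspace(start: f64, end: f64, steps: usize) -> Vec<f64> {
    if steps == 1 {
        return vec![start];
    }
    let step = (end - start) / (steps - 1) as f64;
    (0..steps).map(|i| start + step * i as f64).collect()
}

fn main() {
    assert_eq!(linspace(0.0, 1.0, 5), vec![0.0, 0.25, 0.5, 0.75, 1.0]);
    assert_eq!(linspace(3.0, 3.0, 1), vec![3.0]);
    println!("ok");
}
```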
pub fn f_linspace_out<S: Into<Scalar>>(
out: &Tensor,
start: S,
end: S,
steps: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_log_sigmoid_backward(
&self,
grad_output: &Tensor,
buffer: &Tensor
) -> Result<Tensor, TchError>
pub fn f_log_sigmoid_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
buffer: &Tensor
) -> Result<Tensor, TchError>
pub fn f_logit_backward(
&self,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_logit_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_logspace<S: Into<Scalar>>(
start: S,
end: S,
steps: impl Into<Option<i64>>,
base: f64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_logspace_out<S: Into<Scalar>>(
out: &Tensor,
start: S,
end: S,
steps: impl Into<Option<i64>>,
base: f64
) -> Result<Tensor, TchError>
pub fn f_logsumexp_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_lstm<T: Borrow<Tensor>>(
&self,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_lstm_cell<T: Borrow<Tensor>>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<(Tensor, Tensor), TchError>
pub fn f_lstm_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_lu_solve_out(
&self,
out: &Tensor,
lu_data: &Tensor,
lu_pivots: &Tensor
) -> Result<Tensor, TchError>
pub fn f_lu_unpack(
lu_data: &Tensor,
lu_pivots: &Tensor,
unpack_data: bool,
unpack_pivots: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_lu_unpack_out(
p: &Tensor,
l: &Tensor,
u: &Tensor,
lu_data: &Tensor,
lu_pivots: &Tensor,
unpack_data: bool,
unpack_pivots: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_margin_ranking_loss(
input1: &Tensor,
input2: &Tensor,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_masked_fill_<S: Into<Scalar>>(
&mut self,
mask: &Tensor,
value: S
) -> Result<Tensor, TchError>
pub fn f_max_dim_max(
&self,
max: &Tensor,
max_values: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_max_pool1d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_max_pool2d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool2d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool2d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool2d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_max_pool3d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_pool3d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool3d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Result<Tensor, TchError>
pub fn f_max_pool3d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_max_unpool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d(
&self,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_max_unpool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_mean_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_median_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_min_dim_min(
&self,
min: &Tensor,
min_indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_miopen_batch_norm<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_miopen_batch_norm_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_miopen_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_convolution_transpose<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_convolution_transpose_backward_input(
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_depthwise_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_depthwise_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_depthwise_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Result<Tensor, TchError>
pub fn f_miopen_rnn<T: Borrow<Tensor>>(
&self,
weight: &[T],
weight_stride0: i64,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> Result<(Tensor, Tensor, Tensor, Tensor, Tensor), TchError>
pub fn f_mkldnn_adaptive_avg_pool2d_backward(
&self,
grad_output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_mkldnn_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_mkldnn_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_convolution_backward_weights(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_mkldnn_linear<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>
) -> Result<Tensor, TchError>
pub fn f_mkldnn_linear_backward_input(
input_size: &[i64],
grad_output: &Tensor,
weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_mkldnn_linear_backward_weights(
&self,
grad_output: &Tensor,
weight: &Tensor,
bias_defined: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_mkldnn_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_max_pool2d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_max_pool3d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_mkldnn_reorder_conv2d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_mkldnn_reorder_conv3d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Result<Tensor, TchError>
pub fn f_mode_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_mse_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_mse_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_mse_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multi_margin_loss_backward<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multi_margin_loss_backward_grad_input<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss(
&self,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Result<Tensor, TchError>
pub fn f_multilabel_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_multinomial_out(
&self,
out: &Tensor,
num_samples: i64,
replacement: bool
) -> Result<Tensor, TchError>
pub fn f_nan_to_num(
&self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_nan_to_num_(
&mut self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_nan_to_num_out(
&self,
out: &Tensor,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_nanmedian_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_nanquantile(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_nanquantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nanquantile_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nanquantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_nansum_dim_intlist(
&self,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_nansum_intlist_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_narrow_copy_out(
&self,
out: &Tensor,
dim: i64,
start: i64,
length: i64
) -> Result<Tensor, TchError>
pub fn f_native_batch_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_native_batch_norm_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
save_mean: &Tensor,
save_invstd: &Tensor,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_native_group_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
n: i64,
c: i64,
hxw: i64,
group: i64,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_native_layer_norm<T: Borrow<Tensor>>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_native_norm_scalaropt_dim_dtype<S: Into<Scalar>>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_new_empty_strided(
&self,
size: &[i64],
stride: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_new_full<S: Into<Scalar>>(
&self,
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_nll_loss<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError>
pub fn f_nll_loss2d<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError>
pub fn f_nll_loss2d_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_nll_loss2d_backward_grad_input<T: Borrow<Tensor>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_nll_loss2d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError>
pub fn f_nll_loss_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_nll_loss_backward_grad_input<T: Borrow<Tensor>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Result<Tensor, TchError>
pub fn f_nll_loss_nd<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError>
pub fn f_nll_loss_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Result<Tensor, TchError>
pub fn f_norm_dtype_out<S: Into<Scalar>>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_norm_out<S: Into<Scalar>>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_norm_scalaropt_dim<S: Into<Scalar>>(
&self,
p: S,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_norm_scalaropt_dim_dtype<S: Into<Scalar>>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_norm_scalaropt_dtype<S: Into<Scalar>>(
&self,
p: S,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_normal_float_float_out(
out: &Tensor,
mean: f64,
std: f64,
size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_normal_float_tensor_out(
out: &Tensor,
mean: f64,
std: &Tensor
) -> Result<Tensor, TchError>
pub fn f_normal_tensor_tensor_out(
out: &Tensor,
mean: &Tensor,
std: &Tensor
) -> Result<Tensor, TchError>
pub fn f_not_equal_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_nuclear_norm_dim_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_ormqr(
&self,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Result<Tensor, TchError>
pub fn f_ormqr_out(
&self,
out: &Tensor,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Result<Tensor, TchError>
pub fn f_pad_sequence<T: Borrow<Tensor>>(
sequences: &[T],
batch_first: bool,
padding_value: f64
) -> Result<Tensor, TchError>
pub fn f_pairwise_distance(
x1: &Tensor,
x2: &Tensor,
p: f64,
eps: f64,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_poisson_nll_loss(
&self,
target: &Tensor,
log_input: bool,
full: bool,
eps: f64,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_pow_scalar<S: Into<Scalar>>(
self_scalar: S,
exponent: &Tensor
) -> Result<Tensor, TchError>
pub fn f_pow_scalar_out<S: Into<Scalar>>(
out: &Tensor,
self_scalar: S,
exponent: &Tensor
) -> Result<Tensor, TchError>
pub fn f_pow_tensor_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
exponent: S
) -> Result<Tensor, TchError>
pub fn f_prelu_backward(
&self,
grad_output: &Tensor,
weight: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_prod_int_out(
&self,
out: &Tensor,
dim: i64,
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_put_(
&mut self,
index: &Tensor,
source: &Tensor,
accumulate: bool
) -> Result<Tensor, TchError>
pub fn f_quantile(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Result<Tensor, TchError>
pub fn f_quantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantile_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_quantize_per_channel(
&self,
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_quantize_per_tensor(
&self,
scale: f64,
zero_point: i64,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_quantize_per_tensor_tensors<T: Borrow<Tensor>>(
tensors: &[T],
scales: &Tensor,
zero_points: &Tensor,
dtype: Kind
) -> Result<Vec<Tensor>, TchError>
pub fn f_quantized_batch_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
var: &Tensor,
eps: f64,
output_scale: f64,
output_zero_point: i64
) -> Result<Tensor, TchError>
pub fn f_quantized_gru_cell<S: Into<Scalar>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError>
pub fn f_quantized_lstm_cell<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<(Tensor, Tensor), TchError>
pub fn f_quantized_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_quantized_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Result<Tensor, TchError>
pub fn f_quantized_rnn_relu_cell<S: Into<Scalar>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError>
pub fn f_quantized_rnn_tanh_cell<S: Into<Scalar>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Result<Tensor, TchError>
pub fn f_randint_low(
low: i64,
high: i64,
size: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_randint_low_out(
out: &Tensor,
low: i64,
high: i64,
size: &[i64]
) -> Result<Tensor, TchError>
pub fn f_random_from_(
&mut self,
from: i64,
to: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_range<S: Into<Scalar>>(
start: S,
end: S,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_range_step<S: Into<Scalar>>(
start: S,
end: S,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_reflection_pad1d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_reflection_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_reflection_pad2d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_reflection_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_remainder_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_renorm_<S: Into<Scalar>>(
&mut self,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError>
pub fn f_renorm_out<S: Into<Scalar>>(
&self,
out: &Tensor,
p: S,
dim: i64,
maxnorm: S
) -> Result<Tensor, TchError>
pub fn f_repeat_interleave_self_int(
&self,
repeats: i64,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_repeat_interleave_self_tensor(
&self,
repeats: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_replication_pad1d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad2d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad3d_backward(
&self,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_replication_pad3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_rnn_relu<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_rnn_relu_cell<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError>
pub fn f_rnn_relu_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_rnn_tanh<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_rnn_tanh_cell<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Result<Tensor, TchError>
pub fn f_rnn_tanh_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_rrelu_with_noise_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
noise: &Tensor,
lower: S,
upper: S,
training: bool,
self_is_result: bool
) -> Result<Tensor, TchError>
pub fn f_rrelu_with_noise_out(
&self,
out: &Tensor,
noise: &Tensor,
training: bool
) -> Result<Tensor, TchError>
pub fn f_scatter_add_(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor
) -> Result<Tensor, TchError>
pub fn f_scatter_reduce_(
&mut self,
dim: i64,
index: &Tensor,
src: &Tensor,
reduce: &str
) -> Result<Tensor, TchError>
pub fn f_scatter_value<S: Into<Scalar>>(
&self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError>
pub fn f_scatter_value_<S: Into<Scalar>>(
&mut self,
dim: i64,
index: &Tensor,
value: S
) -> Result<Tensor, TchError>
pub fn f_scatter_value_reduce_<S: Into<Scalar>>(
&mut self,
dim: i64,
index: &Tensor,
value: S,
reduce: &str
) -> Result<Tensor, TchError>
pub fn f_searchsorted(
&self,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_searchsorted_scalar<S: Into<Scalar>>(
sorted_sequence: &Tensor,
self_scalar: S,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_searchsorted_tensor_out(
&self,
out: &Tensor,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Result<Tensor, TchError>
pub fn f_segment_reduce<T: Borrow<Tensor>, S: Into<Scalar>>(
data: &Tensor,
reduce: &str,
lengths: Option<T>,
indices: Option<T>,
axis: i64,
unsafe_: bool,
initial: S
) -> Result<Tensor, TchError>
pub fn f_segment_reduce_backward<T: Borrow<Tensor>>(
grad: &Tensor,
output: &Tensor,
data: &Tensor,
lengths: Option<T>
) -> Result<Tensor, TchError>
pub fn f_select_backward(
grad: &Tensor,
input_sizes: &[i64],
dim: i64,
index: i64
) -> Result<Tensor, TchError>
pub fn f_sigmoid_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_slice(
&self,
dim: i64,
start: impl Into<Option<i64>>,
end: impl Into<Option<i64>>,
step: i64
) -> Result<Tensor, TchError>
pub fn f_slice_backward(
grad: &Tensor,
input_sizes: &[i64],
dim: i64,
start: i64,
end: i64,
step: i64
) -> Result<Tensor, TchError>
pub fn f_slow_conv3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv3d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv_dilated2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv_dilated3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv_transpose2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv_transpose2d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv_transpose3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_slow_conv_transpose3d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss(
&self,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_smooth_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss(
&self,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_soft_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Result<Tensor, TchError>
pub fn f_softplus_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_softplus_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_softshrink_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
lambd: S
) -> Result<Tensor, TchError>
pub fn f_softshrink_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
lambd: S
) -> Result<Tensor, TchError>
pub fn f_solve_solution(
&self,
solution: &Tensor,
lu: &Tensor,
a: &Tensor
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sort_stable(
&self,
stable: bool,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sort_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sort_values_stable(
&self,
values: &Tensor,
indices: &Tensor,
stable: bool,
dim: i64,
descending: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_sparse_coo_tensor_indices(
indices: &Tensor,
values: &Tensor,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_sparse_coo_tensor_indices_size(
indices: &Tensor,
values: &Tensor,
size: &[i64],
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_sparse_resize_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Result<Tensor, TchError>
pub fn f_sparse_resize_and_clear_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Result<Tensor, TchError>
pub fn f_special_logit_out(
&self,
out: &Tensor,
eps: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_special_xlog1py_other_scalar<S: Into<Scalar>>(
&self,
other: S
) -> Result<Tensor, TchError>
pub fn f_special_xlog1py_other_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_special_xlog1py_self_scalar<S: Into<Scalar>>(
self_scalar: S,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_special_xlog1py_self_scalar_out<S: Into<Scalar>>(
out: &Tensor,
self_scalar: S,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_sspaddmm_out(
&self,
out: &Tensor,
mat1: &Tensor,
mat2: &Tensor
) -> Result<Tensor, TchError>
pub fn f_stack_out<T: Borrow<Tensor>>(
out: &Tensor,
tensors: &[T],
dim: i64
) -> Result<Tensor, TchError>
pub fn f_std_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_std_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_std_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_std_mean_dim(
&self,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_std_out(
&self,
out: &Tensor,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_stft<T: Borrow<Tensor>>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
normalized: bool,
onesided: bool,
return_complex: bool
) -> Result<Tensor, TchError>
pub fn f_sum_dim_intlist(
&self,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
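To illustrate what `f_sum_dim_intlist` computes, here is a plain-Rust sketch (no `tch` dependency) of summing a row-major 2-D matrix along one dimension: dim 0 sums down columns, dim 1 sums across rows. `keepdim` and `dtype` handling are omitted.

```rust
// Sketch of sum-along-a-dimension for a 2-D matrix stored as rows.
// dim 0 collapses rows (column sums); any other dim collapses columns (row sums).
fn sum_dim(m: &[Vec<f64>], dim: usize) -> Vec<f64> {
    match dim {
        0 => {
            let cols = m[0].len();
            (0..cols).map(|c| m.iter().map(|row| row[c]).sum()).collect()
        }
        _ => m.iter().map(|row| row.iter().sum()).collect(),
    }
}

fn main() {
    let m = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    println!("{:?}", sum_dim(&m, 0)); // column sums: [4.0, 6.0]
    println!("{:?}", sum_dim(&m, 1)); // row sums: [3.0, 7.0]
}
```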
pub fn f_sum_intlist_out(
&self,
out: &Tensor,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Result<Tensor, TchError>
pub fn f_svd_u(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
some: bool,
compute_uv: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_symeig_e(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool,
upper: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_take_along_dim(
&self,
indices: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_take_along_dim_out(
&self,
out: &Tensor,
indices: &Tensor,
dim: impl Into<Option<i64>>
) -> Result<Tensor, TchError>
pub fn f_tanh_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output: &Tensor
) -> Result<Tensor, TchError>
pub fn f_tensor_split_tensor_indices_or_sections(
&self,
tensor_indices_or_sections: &Tensor,
dim: i64
) -> Result<Vec<Tensor>, TchError>
pub fn f_tensordot(
&self,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Result<Tensor, TchError>
pub fn f_tensordot_out(
&self,
out: &Tensor,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Result<Tensor, TchError>
pub fn f_threshold_<S: Into<Scalar>>(
&mut self,
threshold: S,
value: S
) -> Result<Tensor, TchError>
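The element-wise rule behind `f_threshold_` can be sketched in plain Rust (mirroring the in-place flavour): values at or below `threshold` are replaced by `value`, the rest pass through unchanged.

```rust
// In-place threshold sketch: keep x where x > threshold, else write `value`.
fn threshold_(xs: &mut [f64], threshold: f64, value: f64) {
    for x in xs.iter_mut() {
        if *x <= threshold {
            *x = value;
        }
    }
}

fn main() {
    let mut v = vec![-1.0, 0.5, 2.0];
    threshold_(&mut v, 0.0, 7.0);
    println!("{:?}", v); // [7.0, 0.5, 2.0]
}
```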
pub fn f_threshold_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
threshold: S
) -> Result<Tensor, TchError>
pub fn f_threshold_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
threshold: S
) -> Result<Tensor, TchError>
pub fn f_threshold_out<S: Into<Scalar>>(
&self,
out: &Tensor,
threshold: S,
value: S
) -> Result<Tensor, TchError>
pub fn f_to_device_(
&self,
device: Device,
dtype: Kind,
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
pub fn f_to_dtype_layout(
&self,
options: (Kind, Device),
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
pub fn f_to_other(
&self,
other: &Tensor,
non_blocking: bool,
copy: bool
) -> Result<Tensor, TchError>
pub fn f_topk(
&self,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> Result<(Tensor, Tensor), TchError>
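A plain-Rust sketch of the `topk` semantics on a 1-D input, returning the k largest (or smallest) values together with their indices. The behaviour when `sorted` is false is an assumption here (original order is kept); the real operator only promises the values are unordered in that mode.

```rust
// topk sketch: k extreme values and their source indices from a 1-D slice.
fn topk(xs: &[f64], k: usize, largest: bool, sorted: bool) -> (Vec<f64>, Vec<i64>) {
    let mut idx: Vec<usize> = (0..xs.len()).collect();
    idx.sort_by(|&a, &b| xs[a].partial_cmp(&xs[b]).unwrap());
    if largest {
        idx.reverse();
    }
    idx.truncate(k);
    if !sorted {
        idx.sort(); // assumption: unsorted mode keeps original index order
    }
    let values = idx.iter().map(|&i| xs[i]).collect();
    let indices = idx.iter().map(|&i| i as i64).collect();
    (values, indices)
}

fn main() {
    let (v, i) = topk(&[1.0, 4.0, 2.0, 3.0], 2, true, true);
    println!("{:?} {:?}", v, i); // [4.0, 3.0] [1, 3]
}
```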
pub fn f_topk_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_triangular_solve(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_triangular_solve_x(
&self,
x: &Tensor,
m: &Tensor,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_tril_indices(
row: i64,
col: i64,
offset: i64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_triplet_margin_loss(
anchor: &Tensor,
positive: &Tensor,
negative: &Tensor,
margin: f64,
p: f64,
eps: f64,
swap: bool,
reduction: Reduction
) -> Result<Tensor, TchError>
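The triplet margin loss above evaluates, per sample, `max(||a - p||_p - ||a - n||_p + margin, 0)`. A minimal single-sample sketch in plain Rust (the `eps` stabiliser, `swap` option, and batch reduction are omitted):

```rust
// p-norm distance between two equal-length vectors.
fn p_dist(a: &[f64], b: &[f64], p: f64) -> f64 {
    a.iter().zip(b).map(|(x, y)| (x - y).abs().powf(p)).sum::<f64>().powf(1.0 / p)
}

// Triplet margin loss for one (anchor, positive, negative) triple.
fn triplet_margin_loss(anchor: &[f64], pos: &[f64], neg: &[f64], margin: f64, p: f64) -> f64 {
    (p_dist(anchor, pos, p) - p_dist(anchor, neg, p) + margin).max(0.0)
}

fn main() {
    // d(a,p)=1, d(a,n)=3, margin=1 -> max(1 - 3 + 1, 0) = 0: the negative is
    // already further from the anchor than the positive by more than the margin.
    let loss = triplet_margin_loss(&[0.0, 0.0], &[1.0, 0.0], &[3.0, 0.0], 1.0, 2.0);
    println!("{}", loss);
}
```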
pub fn f_triu_indices(
row: i64,
col: i64,
offset: i64,
options: (Kind, Device)
) -> Result<Tensor, TchError>
pub fn f_unflatten_dense_tensors<T: Borrow<Tensor>>(
flat: &Tensor,
tensors: &[T]
) -> Result<Vec<Tensor>, TchError>
pub fn f_unfold_backward(
grad_in: &Tensor,
input_sizes: &[i64],
dim: i64,
size: i64,
step: i64
) -> Result<Tensor, TchError>
pub fn f_unique_consecutive(
&self,
return_inverse: bool,
return_counts: bool,
dim: impl Into<Option<i64>>
) -> Result<(Tensor, Tensor, Tensor), TchError>
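`unique_consecutive` collapses runs of equal neighbours rather than deduplicating globally. A 1-D sketch in plain Rust, returning the run values and their counts (the inverse-index output is omitted):

```rust
// Collapse consecutive duplicates, counting the length of each run.
fn unique_consecutive(xs: &[i64]) -> (Vec<i64>, Vec<i64>) {
    let mut values = Vec::new();
    let mut counts = Vec::new();
    for &x in xs {
        if values.last() == Some(&x) {
            *counts.last_mut().unwrap() += 1;
        } else {
            values.push(x);
            counts.push(1);
        }
    }
    (values, counts)
}

fn main() {
    let (v, c) = unique_consecutive(&[1, 1, 2, 2, 2, 1]);
    println!("{:?} {:?}", v, c); // [1, 2, 1] [2, 3, 1] -- note the repeated 1
}
```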
pub fn f_unique_dim(
&self,
dim: i64,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_unique_dim_consecutive(
&self,
dim: i64,
return_inverse: bool,
return_counts: bool
) -> Result<(Tensor, Tensor, Tensor), TchError>
pub fn f_unsafe_split_with_sizes(
&self,
split_sizes: &[i64],
dim: i64
) -> Result<Vec<Tensor>, TchError>
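The `split_with_sizes` family partitions a dimension into consecutive chunks of the given lengths. A 1-D sketch:

```rust
// Partition a slice into consecutive chunks whose lengths come from split_sizes.
fn split_with_sizes(xs: &[i64], split_sizes: &[usize]) -> Vec<Vec<i64>> {
    let mut out = Vec::new();
    let mut start = 0;
    for &len in split_sizes {
        out.push(xs[start..start + len].to_vec());
        start += len;
    }
    out
}

fn main() {
    let parts = split_with_sizes(&[1, 2, 3, 4, 5], &[2, 3]);
    println!("{:?}", parts); // [[1, 2], [3, 4, 5]]
}
```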
pub fn f_upsample_bicubic2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bicubic2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_bilinear2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d(
&self,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_linear1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d(
&self,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
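Nearest-neighbour upsampling maps each output index back to a source index by the size ratio. A 1-D sketch of that index rule (the optional `scales` override is ignored and the ratio is derived from the lengths):

```rust
// 1-D nearest upsampling: output i reads input floor(i * in_len / out_len).
fn upsample_nearest1d(xs: &[f64], out_len: usize) -> Vec<f64> {
    let in_len = xs.len();
    (0..out_len).map(|i| xs[i * in_len / out_len]).collect()
}

fn main() {
    println!("{:?}", upsample_nearest1d(&[1.0, 2.0], 4)); // [1.0, 1.0, 2.0, 2.0]
}
```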
pub fn f_upsample_nearest1d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d(
&self,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d(
&self,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_nearest3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d(
&self,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Result<Tensor, TchError>
pub fn f_upsample_trilinear3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Result<Tensor, TchError>
pub fn f_value_selecting_reduction_backward(
grad: &Tensor,
dim: i64,
indices: &Tensor,
sizes: &[i64],
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_vander(
x: &Tensor,
n: impl Into<Option<i64>>,
increasing: bool
) -> Result<Tensor, TchError>
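`vander` builds a Vandermonde matrix: row i holds powers of `x[i]`, with `increasing` choosing whether the exponents grow left-to-right or right-to-left. A plain-Rust sketch with `n` required (the operator defaults `n` to the input length when omitted):

```rust
// Vandermonde matrix sketch: with increasing = true, entry (i, j) is x[i]^j.
fn vander(x: &[f64], n: usize, increasing: bool) -> Vec<Vec<f64>> {
    x.iter()
        .map(|&v| {
            let mut row: Vec<f64> = (0..n).map(|j| v.powi(j as i32)).collect();
            if !increasing {
                row.reverse();
            }
            row
        })
        .collect()
}

fn main() {
    println!("{:?}", vander(&[1.0, 2.0, 3.0], 3, true));
    // [[1, 1, 1], [1, 2, 4], [1, 3, 9]]
}
```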
pub fn f_var_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_var_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_var_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
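The `correction` parameter in the `var`/`std` family is the divisor offset: the sum of squared deviations is divided by `n - correction`, so 0 gives the population variance and 1 the unbiased sample variance. A 1-D sketch of `var_mean`:

```rust
// Variance and mean of a slice with a Bessel-style correction term.
fn var_mean(xs: &[f64], correction: usize) -> (f64, f64) {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let ss: f64 = xs.iter().map(|x| (x - mean).powi(2)).sum();
    (ss / (n - correction as f64), mean)
}

fn main() {
    let (var, mean) = var_mean(&[1.0, 2.0, 3.0], 1);
    println!("var={} mean={}", var, mean); // var=1 mean=2
}
```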
pub fn f_var_mean_dim(
&self,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<(Tensor, Tensor), TchError>
pub fn f_var_out(
&self,
out: &Tensor,
dim: &[i64],
unbiased: bool,
keepdim: bool
) -> Result<Tensor, TchError>
pub fn f_where_scalar<S: Into<Scalar>>(
condition: &Tensor,
self_scalar: S,
other: S
) -> Result<Tensor, TchError>
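The scalar-scalar `where` variant selects one of two constants per element of the condition. A one-line sketch of that rule:

```rust
// where-with-scalars sketch: self_scalar where the condition holds, other elsewhere.
fn where_scalar(condition: &[bool], self_scalar: f64, other: f64) -> Vec<f64> {
    condition.iter().map(|&c| if c { self_scalar } else { other }).collect()
}

fn main() {
    println!("{:?}", where_scalar(&[true, false, true], 1.0, 0.0)); // [1.0, 0.0, 1.0]
}
```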
pub fn f_where_scalarother<S: Into<Scalar>>(
&self,
condition: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_where_scalarself<S: Into<Scalar>>(
condition: &Tensor,
self_scalar: S,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn f_xlogy_outscalar_other<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Result<Tensor, TchError>
pub fn f_xlogy_outscalar_self<S: Into<Scalar>>(
out: &Tensor,
self_scalar: S,
other: &Tensor
) -> Result<Tensor, TchError>
pub fn internal_amp_update_scale_(
&mut self,
growth_tracker: &Tensor,
found_inf: &Tensor,
scale_growth_factor: f64,
scale_backoff_factor: f64,
growth_interval: i64
) -> Tensor
pub fn internal_cdist_backward(
grad: &Tensor,
x1: &Tensor,
x2: &Tensor,
p: f64,
cdist: &Tensor
) -> Tensor
pub fn internal_compute_linear_combination_out(
&self,
out: &Tensor,
coefficients: &Tensor
) -> Tensor
pub fn internal_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool,
allow_tf32: bool
) -> Tensor
pub fn internal_convolution_deprecated<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
cudnn_enabled: bool
) -> Tensor
pub fn internal_convolution_mode<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor
pub fn internal_convolution_nogroup<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64]
) -> Tensor
pub fn internal_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64,
zero_infinity: bool
) -> (Tensor, Tensor)
pub fn internal_ctc_loss_backward(
grad: &Tensor,
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
neg_log_likelihood: &Tensor,
log_alpha: &Tensor,
blank: i64,
zero_infinity: bool
) -> Tensor
pub fn internal_cudnn_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64,
deterministic: bool,
zero_infinity: bool
) -> (Tensor, Tensor)
pub fn internal_cudnn_init_dropout_state(
dropout: f64,
train: bool,
dropout_seed: i64,
options: (Kind, Device)
) -> Tensor
pub fn internal_cudnn_rnn<T: Borrow<Tensor>>(
&self,
weight: &[T],
weight_stride0: i64,
weight_buf: Option<T>,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> (Tensor, Tensor, Tensor, Tensor, Tensor)
pub fn internal_cudnn_rnn_flatten_weight<T: Borrow<Tensor>>(
weight_arr: &[T],
weight_stride0: i64,
input_size: i64,
mode: i64,
hidden_size: i64,
proj_size: i64,
num_layers: i64,
batch_first: bool,
bidirectional: bool
) -> Tensor
pub fn internal_embedding_bag<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool,
padding_idx: i64
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn internal_embedding_bag_backward<T: Borrow<Tensor>>(
grad: &Tensor,
indices: &Tensor,
offsets: &Tensor,
offset2bag: &Tensor,
bag_size: &Tensor,
maximum_indices: &Tensor,
num_weights: i64,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
padding_idx: i64
) -> Tensor
pub fn internal_embedding_bag_dense_backward<T: Borrow<Tensor>>(
grad: &Tensor,
indices: &Tensor,
offset2bag: &Tensor,
bag_size: &Tensor,
maximum_indices: &Tensor,
num_weights: i64,
scale_grad_by_freq: bool,
mode: i64,
per_sample_weights: Option<T>,
padding_idx: i64
) -> Tensor
pub fn internal_embedding_bag_forward_only<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool,
padding_idx: i64
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn internal_embedding_bag_per_sample_weights_backward(
grad: &Tensor,
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
offset2bag: &Tensor,
mode: i64,
padding_idx: i64
) -> Tensor
pub fn internal_embedding_bag_sparse_backward<T: Borrow<Tensor>>(
grad: &Tensor,
indices: &Tensor,
offsets: &Tensor,
offset2bag: &Tensor,
bag_size: &Tensor,
num_weights: i64,
scale_grad_by_freq: bool,
mode: i64,
per_sample_weights: Option<T>,
padding_idx: i64
) -> Tensor
pub fn internal_empty_affine_quantized(
size: &[i64],
options: (Kind, Device),
scale: f64,
zero_point: i64
) -> Tensor
pub fn internal_empty_per_channel_affine_quantized(
size: &[i64],
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
options: (Kind, Device)
) -> Tensor
pub fn internal_fake_quantize_learnable_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Tensor
pub fn internal_fake_quantize_learnable_per_channel_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> (Tensor, Tensor, Tensor)
pub fn internal_fake_quantize_learnable_per_tensor_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> Tensor
pub fn internal_fake_quantize_learnable_per_tensor_affine_backward(
&self,
grad: &Tensor,
scale: &Tensor,
zero_point: &Tensor,
quant_min: i64,
quant_max: i64,
grad_factor: f64
) -> (Tensor, Tensor, Tensor)
pub fn internal_fft_c2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
forward: bool
) -> Tensor
pub fn internal_fft_c2r_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
last_dim_size: i64
) -> Tensor
pub fn internal_fft_r2c_out(
&self,
out: &Tensor,
dim: &[i64],
normalization: i64,
onesided: bool
) -> Tensor
pub fn internal_gather_sparse_backward(
&self,
dim: i64,
index: &Tensor,
grad: &Tensor
) -> Tensor
pub fn internal_grid_sampler_2d_cpu_fallback(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn internal_grid_sampler_2d_cpu_fallback_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
pub fn internal_index_put_impl_<T: Borrow<Tensor>>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool,
unsafe_: bool
) -> Tensor
pub fn internal_linalg_inv_out_helper_(
&mut self,
infos_lu: &Tensor,
infos_getri: &Tensor
) -> Tensor
pub fn internal_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_make_per_channel_quantized_tensor(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64
) -> Tensor
pub fn internal_make_per_tensor_quantized_tensor(
&self,
scale: f64,
zero_point: i64
) -> Tensor
pub fn internal_nnpack_spatial_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn internal_nnpack_spatial_convolution_backward_input(
&self,
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64]
) -> Tensor
pub fn internal_nnpack_spatial_convolution_backward_weight(
&self,
weightsize: &[i64],
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn internal_pack_padded_sequence(
&self,
lengths: &Tensor,
batch_first: bool
) -> (Tensor, Tensor)
pub fn internal_pack_padded_sequence_backward(
grad: &Tensor,
input_size: &[i64],
batch_sizes: &Tensor,
batch_first: bool
) -> Tensor
pub fn internal_pad_packed_sequence<S: Into<Scalar>>(
data: &Tensor,
batch_sizes: &Tensor,
batch_first: bool,
padding_value: S,
total_length: i64
) -> (Tensor, Tensor)
pub fn internal_rowwise_prune(
weight: &Tensor,
mask: &Tensor,
compressed_indices_dtype: Kind
) -> (Tensor, Tensor)
pub fn internal_sobol_engine_draw(
quasi: &Tensor,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64,
dtype: Kind
) -> (Tensor, Tensor)
pub fn internal_sobol_engine_ff_(
&mut self,
n: i64,
sobolstate: &Tensor,
dimension: i64,
num_generated: i64
) -> Tensor
pub fn internal_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_sparse_coo_tensor_unsafe(
indices: &Tensor,
values: &Tensor,
size: &[i64],
options: (Kind, Device)
) -> Tensor
pub fn internal_sparse_coo_tensor_with_dims(
sparse_dim: i64,
dense_dim: i64,
size: &[i64],
options: (Kind, Device)
) -> Tensor
pub fn internal_sparse_coo_tensor_with_dims_and_tensors(
sparse_dim: i64,
dense_dim: i64,
size: &[i64],
indices: &Tensor,
values: &Tensor,
options: (Kind, Device)
) -> Tensor
pub fn internal_sparse_csr_tensor(
crow_indices: &Tensor,
col_indices: &Tensor,
values: &Tensor,
options: (Kind, Device)
) -> Tensor
pub fn internal_sparse_csr_tensor_crow_col_value_size(
crow_indices: &Tensor,
col_indices: &Tensor,
values: &Tensor,
size: &[i64],
options: (Kind, Device)
) -> Tensor
pub fn internal_sparse_log_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_sparse_softmax_backward_data(
&self,
grad_output: &Tensor,
output: &Tensor,
dim: i64
) -> Tensor
pub fn internal_test_optional_filled_intlist<'a>(
values: &Tensor,
addends: impl Into<Option<&'a [i64]>>
) -> Tensor
pub fn internal_test_optional_intlist<'a>(
values: &Tensor,
addends: impl Into<Option<&'a [i64]>>
) -> Tensor
pub fn internal_trilinear(
i1: &Tensor,
i2: &Tensor,
i3: &Tensor,
expand1: &[i64],
expand2: &[i64],
expand3: &[i64],
sumdim: &[i64],
unroll_dim: i64
) -> Tensor
pub fn internal_unique2(
&self,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
pub fn internal_use_cudnn_ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64
) -> bool
pub fn internal_weight_norm_cuda_interface_backward(
grad_w: &Tensor,
saved_v: &Tensor,
saved_g: &Tensor,
saved_norms: &Tensor,
dim: i64
) -> (Tensor, Tensor)
pub fn internal_weight_norm_differentiable_backward(
grad_w: &Tensor,
saved_v: &Tensor,
saved_g: &Tensor,
saved_norms: &Tensor,
dim: i64
) -> (Tensor, Tensor)
pub fn adaptive_avg_pool3d_backward(
&self,
grad_input: &Tensor,
grad_output: &Tensor
) -> Tensor
pub fn adaptive_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Tensor
pub fn adaptive_max_pool2d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> (Tensor, Tensor)
pub fn adaptive_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor
) -> Tensor
pub fn adaptive_max_pool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> (Tensor, Tensor)
pub fn arange_start_step<S: Into<Scalar>>(
start: S,
end: S,
step: S,
options: (Kind, Device)
) -> Tensor
pub fn as_strided(
&self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Tensor
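`as_strided` reinterprets the underlying buffer without copying: element `(i, j)` of the view reads the flat storage at `storage_offset + i * stride[0] + j * stride[1]`. A copying 2-D sketch of that addressing rule (the real operator aliases the storage instead of materialising):

```rust
// Materialise a 2-D strided view of a flat buffer to show the index arithmetic.
fn as_strided_2d(
    buf: &[f64],
    size: (usize, usize),
    stride: (usize, usize),
    offset: usize,
) -> Vec<Vec<f64>> {
    (0..size.0)
        .map(|i| (0..size.1).map(|j| buf[offset + i * stride.0 + j * stride.1]).collect())
        .collect()
}

fn main() {
    let buf = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0];
    // A 2x2 view with strides (3, 1): consecutive rows start 3 elements apart.
    println!("{:?}", as_strided_2d(&buf, (2, 2), (3, 1), 0)); // [[0, 1], [3, 4]]
}
```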
pub fn as_strided_(
&mut self,
size: &[i64],
stride: &[i64],
storage_offset: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool
) -> Tensor
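A plain-Rust sketch of 1-D average pooling with zero padding and no ceil mode (so `count_include_pad` has no effect): each window of `kernel_size` elements is averaged, and the window advances by `stride`.

```rust
// 1-D average pooling over full windows only (no padding, floor mode).
fn avg_pool1d(xs: &[f64], kernel_size: usize, stride: usize) -> Vec<f64> {
    let mut out = Vec::new();
    let mut start = 0;
    while start + kernel_size <= xs.len() {
        let window = &xs[start..start + kernel_size];
        out.push(window.iter().sum::<f64>() / kernel_size as f64);
        start += stride;
    }
    out
}

fn main() {
    println!("{:?}", avg_pool1d(&[1.0, 3.0, 5.0, 7.0], 2, 2)); // [2.0, 6.0]
}
```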
pub fn avg_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool2d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn avg_pool3d_out(
&self,
out: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
ceil_mode: bool,
count_include_pad: bool,
divisor_override: impl Into<Option<i64>>
) -> Tensor
pub fn bartlett_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Tensor
pub fn batch_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Tensor
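In training mode, batch norm normalises each feature by the batch statistics and then applies the affine `weight` and `bias`. A single-feature sketch of that transform (running-statistics updates, `momentum`, and the cudnn path are omitted):

```rust
// One-feature batch-norm sketch: y = weight * (x - mean) / sqrt(var + eps) + bias,
// using the biased batch variance as the normaliser.
fn batch_norm_1d(xs: &[f64], weight: f64, bias: f64, eps: f64) -> Vec<f64> {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    xs.iter().map(|x| weight * (x - mean) / (var + eps).sqrt() + bias).collect()
}

fn main() {
    let out = batch_norm_1d(&[1.0, 3.0], 1.0, 0.0, 0.0);
    println!("{:?}", out); // mean 2, std 1 -> [-1.0, 1.0]
}
```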
pub fn batch_norm_backward_elemt<T: Borrow<Tensor>>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
mean_dy: &Tensor,
mean_dy_xmu: &Tensor,
count: &Tensor
) -> Tensor
pub fn batch_norm_backward_reduce<T: Borrow<Tensor>>(
&self,
grad_out: &Tensor,
mean: &Tensor,
invstd: &Tensor,
weight: Option<T>,
input_g: bool,
weight_g: bool,
bias_g: bool
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn batch_norm_elemt<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Tensor
pub fn batch_norm_elemt_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
invstd: &Tensor,
eps: f64
) -> Tensor
pub fn batch_norm_gather_stats<T: Borrow<Tensor>>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
count: i64
) -> (Tensor, Tensor)
pub fn batch_norm_gather_stats_with_counts<T: Borrow<Tensor>>(
&self,
mean: &Tensor,
invstd: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64,
eps: f64,
counts: &Tensor
) -> (Tensor, Tensor)
pub fn batch_norm_update_stats<T: Borrow<Tensor>>(
&self,
running_mean: Option<T>,
running_var: Option<T>,
momentum: f64
) -> (Tensor, Tensor)
pub fn bilinear<T: Borrow<Tensor>>(
input1: &Tensor,
input2: &Tensor,
weight: &Tensor,
bias: Option<T>
) -> Tensor
pub fn binary_cross_entropy<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor
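Binary cross-entropy with mean reduction evaluates `-mean(t * ln(p) + (1 - t) * ln(1 - p))` over the batch. A sketch without the optional per-element `weight`:

```rust
// Mean-reduced binary cross-entropy between probabilities and 0/1 targets.
fn bce(probs: &[f64], targets: &[f64]) -> f64 {
    let n = probs.len() as f64;
    probs
        .iter()
        .zip(targets)
        .map(|(&p, &t)| -(t * p.ln() + (1.0 - t) * (1.0 - p).ln()))
        .sum::<f64>()
        / n
}

fn main() {
    // Confident, correct predictions give a small loss (~0.105 here).
    println!("{}", bce(&[0.9, 0.1], &[1.0, 0.0]));
}
```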
pub fn binary_cross_entropy_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn binary_cross_entropy_backward_grad_input<T: Borrow<Tensor>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn binary_cross_entropy_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn binary_cross_entropy_with_logits<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn binary_cross_entropy_with_logits_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
pos_weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn blackman_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Tensor
pub fn bucketize_scalar<S: Into<Scalar>>(
self_scalar: S,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
pub fn bucketize_tensor_out(
&self,
out: &Tensor,
boundaries: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
pub fn choose_qparams_optimized(
&self,
numel: i64,
n_bins: i64,
ratio: f64,
bit_width: i64
) -> (Tensor, Tensor)
pub fn clamp_tensor_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Tensor
pub fn clip_tensor_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
min: Option<T>,
max: Option<T>
) -> Tensor
pub fn col2im(
&self,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn col2im_backward(
grad_output: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn col2im_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn col2im_out(
&self,
out: &Tensor,
output_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn conv1d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
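Like the framework it binds, `conv1d` actually computes a cross-correlation (no kernel flip). A single-channel, stride-1, no-padding, no-bias sketch of that sliding dot product:

```rust
// Valid 1-D cross-correlation: each output is the dot product of the kernel
// with the input window starting at that position.
fn conv1d(input: &[f64], weight: &[f64]) -> Vec<f64> {
    let k = weight.len();
    (0..=input.len() - k)
        .map(|i| (0..k).map(|j| input[i + j] * weight[j]).sum())
        .collect()
}

fn main() {
    // A [1, 1] kernel sums adjacent pairs.
    println!("{:?}", conv1d(&[1.0, 2.0, 3.0], &[1.0, 1.0])); // [3.0, 5.0]
}
```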
pub fn conv1d_padding<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor
pub fn conv2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn conv2d_padding<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor
pub fn conv3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn conv3d_padding<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &str,
dilation: &[i64],
groups: i64
) -> Tensor
pub fn conv_depthwise3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn conv_depthwise3d_backward(
&self,
grad_input: &Tensor,
grad_weight: &Tensor,
grad_bias: &Tensor,
grad_output: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> (Tensor, Tensor, Tensor)
pub fn conv_tbc_backward(
&self,
input: &Tensor,
weight: &Tensor,
bias: &Tensor,
pad: i64
) -> (Tensor, Tensor, Tensor)
pub fn conv_transpose1d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor
pub fn conv_transpose2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor
pub fn conv_transpose3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
groups: i64,
dilation: &[i64]
) -> Tensor
pub fn convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Tensor
pub fn convolution_overrideable<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
transposed: bool,
output_padding: &[i64],
groups: i64
) -> Tensor
pub fn cosine_embedding_loss(
input1: &Tensor,
input2: &Tensor,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Tensor
pub fn cross_entropy_loss<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor
pub fn ctc_loss(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &[i64],
target_lengths: &[i64],
blank: i64,
reduction: Reduction,
zero_infinity: bool
) -> Tensor
pub fn ctc_loss_tensor(
log_probs: &Tensor,
targets: &Tensor,
input_lengths: &Tensor,
target_lengths: &Tensor,
blank: i64,
reduction: Reduction,
zero_infinity: bool
) -> Tensor
pub fn cudnn_affine_grid_generator_backward(
grad: &Tensor,
n: i64,
c: i64,
h: i64,
w: i64
) -> Tensor
pub fn cudnn_batch_norm<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn cudnn_batch_norm_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64,
reservespace: &Tensor
) -> (Tensor, Tensor, Tensor)
pub fn cudnn_convolution(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_add_relu<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
weight: &Tensor,
z: &Tensor,
alpha: S,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn cudnn_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_deprecated<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn cudnn_convolution_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn cudnn_convolution_relu<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn cudnn_convolution_transpose(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_transpose_backward_input(
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool,
allow_tf32: bool
) -> Tensor
pub fn cudnn_convolution_transpose_deprecated<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn cudnn_convolution_transpose_deprecated2(
&self,
weight: &Tensor,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn diagonal_backward(
grad: &Tensor,
input_sizes: &[i64],
offset: i64,
dim1: i64,
dim2: i64
) -> Tensor
pub fn diff<T: Borrow<Tensor>>(
&self,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Tensor
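`diff` computes the `n`-th forward difference along a dimension; a minimal 1-D sketch of that recurrence (ignoring the optional `prepend`/`append` tensors, which would extend the input before differencing; `diff_ref` is a hypothetical name):

```rust
// Repeated forward difference: v[i+1] - v[i], applied n times.
fn diff_ref(xs: &[f64], n: usize) -> Vec<f64> {
    let mut v = xs.to_vec();
    for _ in 0..n {
        v = v.windows(2).map(|w| w[1] - w[0]).collect();
    }
    v
}

fn main() {
    println!("{:?}", diff_ref(&[1.0, 2.0, 4.0, 7.0], 1)); // [1.0, 2.0, 3.0]
    println!("{:?}", diff_ref(&[1.0, 2.0, 4.0, 7.0], 2)); // [1.0, 1.0]
}
```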
pub fn diff_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
n: i64,
dim: i64,
prepend: Option<T>,
append: Option<T>
) -> Tensor
pub fn g_div_scalar_mode_<S: Into<Scalar>>(
&mut self,
other: S,
rounding_mode: &str
) -> Tensor
pub fn divide_scalar_mode_<S: Into<Scalar>>(
&mut self,
other: S,
rounding_mode: &str
) -> Tensor
pub fn elu_backward<S: Into<Scalar>>(
grad_output: &Tensor,
alpha: S,
scale: S,
input_scale: S,
is_result: bool,
self_or_result: &Tensor
) -> Tensor
pub fn embedding(
weight: &Tensor,
indices: &Tensor,
padding_idx: i64,
scale_grad_by_freq: bool,
sparse: bool
) -> Tensor
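The forward pass of `embedding` is a row lookup into `weight`; `padding_idx`, `scale_grad_by_freq`, and `sparse` only affect the backward pass. A sketch of the lookup (hypothetical `embedding_ref` helper, dense `Vec` representation assumed):

```rust
// Each index selects one row of the weight matrix.
fn embedding_ref(weight: &[Vec<f64>], indices: &[usize]) -> Vec<Vec<f64>> {
    indices.iter().map(|&i| weight[i].clone()).collect()
}

fn main() {
    let weight = vec![vec![1.0, 2.0], vec![3.0, 4.0], vec![5.0, 6.0]];
    println!("{:?}", embedding_ref(&weight, &[2, 0]));
}
```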
pub fn embedding_backward(
grad: &Tensor,
indices: &Tensor,
num_weights: i64,
padding_idx: i64,
scale_grad_by_freq: bool,
sparse: bool
) -> Tensor
pub fn embedding_bag<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn embedding_bag_padding_idx<T: Borrow<Tensor>>(
weight: &Tensor,
indices: &Tensor,
offsets: &Tensor,
scale_grad_by_freq: bool,
mode: i64,
sparse: bool,
per_sample_weights: Option<T>,
include_last_offset: bool,
padding_idx: impl Into<Option<i64>>
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn embedding_dense_backward(
grad_output: &Tensor,
indices: &Tensor,
num_weights: i64,
padding_idx: i64,
scale_grad_by_freq: bool
) -> Tensor
pub fn embedding_renorm_(
&mut self,
indices: &Tensor,
max_norm: f64,
norm_type: f64
) -> Tensor
pub fn embedding_sparse_backward(
grad: &Tensor,
indices: &Tensor,
num_weights: i64,
padding_idx: i64,
scale_grad_by_freq: bool
) -> Tensor
pub fn fake_quantize_per_channel_affine(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> Tensor
pub fn fake_quantize_per_channel_affine_cachemask(
&self,
scale: &Tensor,
zero_point: &Tensor,
axis: i64,
quant_min: i64,
quant_max: i64
) -> (Tensor, Tensor)
pub fn fake_quantize_per_channel_affine_cachemask_backward(
grad: &Tensor,
mask: &Tensor
) -> Tensor
pub fn fake_quantize_per_tensor_affine(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> Tensor
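`fake_quantize_per_tensor_affine` simulates integer quantization while staying in floating point: quantize with `scale`/`zero_point`, clamp to `[quant_min, quant_max]`, then dequantize. A per-element sketch (hypothetical `fake_quant` name):

```rust
// quantize -> clamp -> dequantize, elementwise.
fn fake_quant(x: f64, scale: f64, zero_point: i64, quant_min: i64, quant_max: i64) -> f64 {
    let q = (x / scale).round() as i64 + zero_point;
    let q = q.clamp(quant_min, quant_max);
    ((q - zero_point) as f64) * scale
}

fn main() {
    // 0.23 snaps to the nearest multiple of scale: 0.2
    println!("{}", fake_quant(0.23, 0.1, 0, -128, 127));
    // out-of-range values saturate at quant_max * scale = 12.7
    println!("{}", fake_quant(100.0, 0.1, 0, -128, 127));
}
```

The `_per_channel_` variant applies the same formula with a separate `scale`/`zero_point` per slice along `axis`.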
pub fn fake_quantize_per_tensor_affine_cachemask(
&self,
scale: f64,
zero_point: i64,
quant_min: i64,
quant_max: i64
) -> (Tensor, Tensor)
pub fn fake_quantize_per_tensor_affine_cachemask_backward(
grad: &Tensor,
mask: &Tensor
) -> Tensor
pub fn fbgemm_linear_fp16_weight_fp32_activation(
&self,
packed_weight: &Tensor,
bias: &Tensor
) -> Tensor
pub fn fbgemm_linear_int8_weight<S: Into<Scalar>>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Tensor
pub fn fbgemm_linear_int8_weight_fp32_activation<S: Into<Scalar>>(
&self,
weight: &Tensor,
packed: &Tensor,
col_offsets: &Tensor,
weight_scale: S,
weight_zero_point: S,
bias: &Tensor
) -> Tensor
pub fn fft_fft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_fftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_fftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_hfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_ifft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_ifft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_ifftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_ifftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_ihfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_irfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_irfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_irfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_irfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_rfft2_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: &[i64],
norm: &str
) -> Tensor
pub fn fft_rfft_out(
&self,
out: &Tensor,
n: impl Into<Option<i64>>,
dim: i64,
norm: &str
) -> Tensor
pub fn fft_rfftn<'a>(
&self,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn fft_rfftn_out<'a>(
&self,
out: &Tensor,
s: impl Into<Option<&'a [i64]>>,
dim: impl Into<Option<&'a [i64]>>,
norm: &str
) -> Tensor
pub fn float_power_scalar_out<S: Into<Scalar>>(
out: &Tensor,
self_scalar: S,
exponent: &Tensor
) -> Tensor
pub fn float_power_tensor_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
exponent: S
) -> Tensor
pub fn fractional_max_pool2d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn fractional_max_pool2d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool2d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn fractional_max_pool3d(
&self,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn fractional_max_pool3d_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
indices: &Tensor
) -> Tensor
pub fn fractional_max_pool3d_output(
&self,
output: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
output_size: &[i64],
random_samples: &Tensor
) -> (Tensor, Tensor)
pub fn from_file(
filename: &str,
shared: bool,
size: impl Into<Option<i64>>,
options: (Kind, Device)
) -> Tensor
pub fn gather_backward(
&self,
grad: &Tensor,
dim: i64,
index: &Tensor,
sparse_grad: bool
) -> Tensor
pub fn glu_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
dim: i64
) -> Tensor
pub fn grid_sampler(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn grid_sampler_2d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn grid_sampler_2d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
pub fn grid_sampler_3d(
&self,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> Tensor
pub fn grid_sampler_3d_backward(
&self,
grad_output: &Tensor,
grid: &Tensor,
interpolation_mode: i64,
padding_mode: i64,
align_corners: bool
) -> (Tensor, Tensor)
pub fn group_norm<T: Borrow<Tensor>>(
&self,
num_groups: i64,
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enabled: bool
) -> Tensor
pub fn gru<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor)
pub fn gru_cell<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor
pub fn gru_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> (Tensor, Tensor)
pub fn hamming_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Tensor
pub fn hamming_window_periodic_alpha(
window_length: i64,
periodic: bool,
alpha: f64,
options: (Kind, Device)
) -> Tensor
pub fn hamming_window_periodic_alpha_beta(
window_length: i64,
periodic: bool,
alpha: f64,
beta: f64,
options: (Kind, Device)
) -> Tensor
pub fn hardtanh_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Tensor
pub fn hardtanh_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
min_val: S,
max_val: S
) -> Tensor
pub fn hinge_embedding_loss(
&self,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Tensor
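Before reduction, the hinge embedding loss is piecewise in the `+1`/`-1` target: the raw value for positive pairs, a hinged margin for negative ones. A per-element sketch (hypothetical `hinge_embedding` name):

```rust
// x for target == 1, max(0, margin - x) for target == -1.
fn hinge_embedding(x: f64, target: i64, margin: f64) -> f64 {
    if target == 1 { x } else { (margin - x).max(0.0) }
}

fn main() {
    println!("{}", hinge_embedding(0.3, 1, 1.0));  // 0.3
    println!("{}", hinge_embedding(0.3, -1, 1.0)); // 0.7
}
```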
pub fn huber_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Tensor
pub fn huber_loss_backward_out(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Tensor
pub fn huber_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
delta: f64
) -> Tensor
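The Huber loss that `huber_loss_out` and its backward variants operate on is quadratic for residuals below `delta` and linear above it. A per-element sketch (hypothetical `huber` name):

```rust
// 0.5 * r^2 for |r| < delta, delta * (|r| - 0.5 * delta) otherwise.
fn huber(pred: f64, target: f64, delta: f64) -> f64 {
    let r = (pred - target).abs();
    if r < delta {
        0.5 * r * r
    } else {
        delta * (r - 0.5 * delta)
    }
}

fn main() {
    println!("{}", huber(1.0, 0.5, 1.0)); // quadratic branch: 0.125
    println!("{}", huber(3.0, 0.0, 1.0)); // linear branch: 2.5
}
```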
pub fn im2col(
&self,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
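`im2col` rearranges sliding local blocks into columns; the number of blocks per spatial dimension follows the usual unfold formula, sketched below (the real op also performs the data rearrangement; `n_blocks` is a hypothetical helper):

```rust
// Sliding-block count per dimension for im2col / col2im.
fn n_blocks(input: i64, kernel: i64, dilation: i64, padding: i64, stride: i64) -> i64 {
    (input + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1
}

fn main() {
    println!("{}", n_blocks(5, 3, 1, 0, 1)); // 3
    println!("{}", n_blocks(5, 3, 1, 1, 2)); // 3
}
```

`col2im` is the inverse rearrangement: it folds the columns back into an image of `output_size`, summing where blocks overlap.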
pub fn im2col_backward(
grad_output: &Tensor,
input_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn im2col_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
input_size: &[i64],
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn im2col_out(
&self,
out: &Tensor,
kernel_size: &[i64],
dilation: &[i64],
padding: &[i64],
stride: &[i64]
) -> Tensor
pub fn index_add_alpha<S: Into<Scalar>>(
&self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Tensor
pub fn index_add_alpha_<S: Into<Scalar>>(
&mut self,
dim: i64,
index: &Tensor,
source: &Tensor,
alpha: S
) -> Tensor
pub fn index_put<T: Borrow<Tensor>>(
&self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Tensor
pub fn index_put_<T: Borrow<Tensor>>(
&mut self,
indices: &[Option<T>],
values: &Tensor,
accumulate: bool
) -> Tensor
pub fn index_select_backward(
grad: &Tensor,
self_sizes: &[i64],
dim: i64,
index: &Tensor
) -> Tensor
pub fn instance_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
use_input_stats: bool,
momentum: f64,
eps: f64,
cudnn_enabled: bool
) -> Tensor
pub fn istft<T: Borrow<Tensor>>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
center: bool,
normalized: bool,
onesided: bool,
length: impl Into<Option<i64>>,
return_complex: bool
) -> Tensor
pub fn kaiser_window_beta(
window_length: i64,
periodic: bool,
beta: f64,
options: (Kind, Device)
) -> Tensor
pub fn kaiser_window_periodic(
window_length: i64,
periodic: bool,
options: (Kind, Device)
) -> Tensor
pub fn kl_div_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
log_target: bool
) -> Tensor
pub fn kthvalue_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn layer_norm<T: Borrow<Tensor>>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64,
cudnn_enable: bool
) -> Tensor
pub fn leaky_relu_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
negative_slope: S,
self_is_result: bool
) -> Tensor
pub fn linalg_cholesky_ex_l(
&self,
l: &Tensor,
info: &Tensor,
check_errors: bool
) -> (Tensor, Tensor)
pub fn linalg_eigh_eigvals(
&self,
eigvals: &Tensor,
eigvecs: &Tensor,
uplo: &str
) -> (Tensor, Tensor)
pub fn linalg_inv_ex_inverse(
&self,
inverse: &Tensor,
info: &Tensor,
check_errors: bool
) -> (Tensor, Tensor)
pub fn linalg_lstsq(
&self,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn linalg_lstsq_out(
&self,
solution: &Tensor,
residuals: &Tensor,
rank: &Tensor,
singular_values: &Tensor,
b: &Tensor,
rcond: impl Into<Option<f64>>,
driver: &str
) -> (Tensor, Tensor, Tensor, Tensor)
pub fn linalg_matrix_norm<S: Into<Scalar>>(
&self,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_matrix_norm_out<S: Into<Scalar>>(
&self,
out: &Tensor,
ord: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_matrix_norm_str_ord(
&self,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_matrix_norm_str_ord_out(
&self,
out: &Tensor,
ord: &str,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_matrix_rank_out(
&self,
out: &Tensor,
tol: impl Into<Option<f64>>,
hermitian: bool
) -> Tensor
pub fn linalg_matrix_rank_out_tol_tensor(
&self,
out: &Tensor,
tol: &Tensor,
hermitian: bool
) -> Tensor
pub fn linalg_norm<'a, S: Into<Scalar>>(
&self,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_norm_ord_str<'a>(
&self,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_norm_ord_str_out<'a>(
&self,
out: &Tensor,
ord: &str,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_norm_out<'a, S: Into<Scalar>>(
&self,
out: &Tensor,
ord: S,
dim: impl Into<Option<&'a [i64]>>,
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn linalg_pinv_out_rcond_tensor(
&self,
out: &Tensor,
rcond: &Tensor,
hermitian: bool
) -> Tensor
pub fn linalg_svd_u(
&self,
u: &Tensor,
s: &Tensor,
vh: &Tensor,
full_matrices: bool
) -> (Tensor, Tensor, Tensor)
pub fn linalg_tensorsolve_out<'a>(
&self,
out: &Tensor,
other: &Tensor,
dims: impl Into<Option<&'a [i64]>>
) -> Tensor
pub fn linspace<S: Into<Scalar>>(
start: S,
end: S,
steps: impl Into<Option<i64>>,
options: (Kind, Device)
) -> Tensor
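`linspace` produces `steps` evenly spaced values over the closed interval `[start, end]`. A plain-Rust sketch of the spacing (hypothetical `linspace_ref` name; a single step yields just `start`):

```rust
// Evenly spaced values, endpoints inclusive.
fn linspace_ref(start: f64, end: f64, steps: usize) -> Vec<f64> {
    if steps == 1 {
        return vec![start];
    }
    let step = (end - start) / (steps as f64 - 1.0);
    (0..steps).map(|i| start + step * i as f64).collect()
}

fn main() {
    println!("{:?}", linspace_ref(0.0, 1.0, 5)); // [0.0, 0.25, 0.5, 0.75, 1.0]
}
```

`logspace` applies the same spacing in the exponent, returning `base` raised to each of these values.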
pub fn linspace_out<S: Into<Scalar>>(
out: &Tensor,
start: S,
end: S,
steps: impl Into<Option<i64>>
) -> Tensor
pub fn log_sigmoid_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
buffer: &Tensor
) -> Tensor
pub fn logit_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
eps: impl Into<Option<f64>>
) -> Tensor
pub fn logspace<S: Into<Scalar>>(
start: S,
end: S,
steps: impl Into<Option<i64>>,
base: f64,
options: (Kind, Device)
) -> Tensor
pub fn logspace_out<S: Into<Scalar>>(
out: &Tensor,
start: S,
end: S,
steps: impl Into<Option<i64>>,
base: f64
) -> Tensor
pub fn lstm<T: Borrow<Tensor>>(
&self,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor, Tensor)
pub fn lstm_cell<T: Borrow<Tensor>>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> (Tensor, Tensor)
pub fn lstm_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &[T],
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> (Tensor, Tensor, Tensor)
pub fn lu_unpack(
lu_data: &Tensor,
lu_pivots: &Tensor,
unpack_data: bool,
unpack_pivots: bool
) -> (Tensor, Tensor, Tensor)
pub fn lu_unpack_out(
p: &Tensor,
l: &Tensor,
u: &Tensor,
lu_data: &Tensor,
lu_pivots: &Tensor,
unpack_data: bool,
unpack_pivots: bool
) -> (Tensor, Tensor, Tensor)
pub fn margin_ranking_loss(
input1: &Tensor,
input2: &Tensor,
target: &Tensor,
margin: f64,
reduction: Reduction
) -> Tensor
pub fn max_dim_max(
&self,
max: &Tensor,
max_values: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
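A minimal 1-D max-pool sketch matching `max_pool1d`'s windowing, with `padding`/`dilation` omitted for brevity and `ceil_mode` assumed false (`max_pool1d_ref` is a hypothetical name):

```rust
// Maximum over each stride-spaced window of length `kernel`.
fn max_pool1d_ref(input: &[f64], kernel: usize, stride: usize) -> Vec<f64> {
    let l_out = (input.len() - kernel) / stride + 1;
    (0..l_out)
        .map(|o| {
            input[o * stride..o * stride + kernel]
                .iter()
                .cloned()
                .fold(f64::NEG_INFINITY, f64::max)
        })
        .collect()
}

fn main() {
    println!("{:?}", max_pool1d_ref(&[1.0, 3.0, 2.0, 5.0, 4.0], 2, 2)); // [3.0, 5.0]
}
```

The `_with_indices` variants additionally return, per window, the flat index of the element that won, which the pooling backward and `max_unpool*` ops consume.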
pub fn max_pool1d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn max_pool2d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool2d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool2d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool2d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn max_pool3d_with_indices(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_pool3d_with_indices_backward(
&self,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool3d_with_indices_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool,
indices: &Tensor
) -> Tensor
pub fn max_pool3d_with_indices_out(
&self,
out: &Tensor,
indices: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> (Tensor, Tensor)
pub fn max_unpool2d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Tensor
pub fn max_unpool2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64]
) -> Tensor
pub fn max_unpool3d(
&self,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn max_unpool3d_backward(
&self,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn max_unpool3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn max_unpool3d_out(
&self,
out: &Tensor,
indices: &Tensor,
output_size: &[i64],
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn median_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn min_dim_min(
&self,
min: &Tensor,
min_indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn miopen_batch_norm<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
exponential_average_factor: f64,
epsilon: f64
) -> (Tensor, Tensor, Tensor)
pub fn miopen_batch_norm_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
weight: &Tensor,
running_mean: Option<T>,
running_var: Option<T>,
save_mean: Option<T>,
save_var: Option<T>,
epsilon: f64
) -> (Tensor, Tensor, Tensor)
pub fn miopen_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_convolution_transpose<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
output_padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_convolution_transpose_backward_input(
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_convolution_transpose_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_depthwise_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_depthwise_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_depthwise_convolution_backward_weight(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
benchmark: bool,
deterministic: bool
) -> Tensor
pub fn miopen_rnn<T: Borrow<Tensor>>(
&self,
weight: &[T],
weight_stride0: i64,
hx: &Tensor,
cx: Option<T>,
mode: i64,
hidden_size: i64,
num_layers: i64,
batch_first: bool,
dropout: f64,
train: bool,
bidirectional: bool,
batch_sizes: &[i64],
dropout_state: Option<T>
) -> (Tensor, Tensor, Tensor, Tensor, Tensor)
pub fn mkldnn_convolution<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
bias: Option<T>,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn mkldnn_convolution_backward_input(
self_size: &[i64],
grad_output: &Tensor,
weight: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> Tensor
pub fn mkldnn_convolution_backward_weights(
&self,
weight_size: &[i64],
grad_output: &Tensor,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64,
bias_defined: bool
) -> (Tensor, Tensor)
pub fn mkldnn_linear_backward_input(
input_size: &[i64],
grad_output: &Tensor,
weight: &Tensor
) -> Tensor
pub fn mkldnn_linear_backward_weights(
&self,
grad_output: &Tensor,
weight: &Tensor,
bias_defined: bool
) -> (Tensor, Tensor)
pub fn mkldnn_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_max_pool2d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_max_pool3d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_max_pool3d_backward(
&self,
grad_output: &Tensor,
output: &Tensor,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn mkldnn_reorder_conv2d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn mkldnn_reorder_conv3d_weight(
&self,
padding: &[i64],
stride: &[i64],
dilation: &[i64],
groups: i64
) -> Tensor
pub fn mode_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn mse_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
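For `Reduction::None`, the gradient that `mse_loss_backward` propagates per element is the derivative of the squared error scaled by the incoming `grad_output`; mean reduction would further divide by the element count. A per-element sketch (hypothetical `mse_grad` name):

```rust
// d/dpred (pred - target)^2 = 2 * (pred - target), chained with grad_output.
fn mse_grad(pred: f64, target: f64, grad_output: f64) -> f64 {
    2.0 * (pred - target) * grad_output
}

fn main() {
    println!("{}", mse_grad(3.0, 1.0, 1.0)); // 4.0
}
```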
pub fn mse_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn multi_margin_loss_backward<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn multi_margin_loss_backward_grad_input<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
p: S,
margin: S,
weight: Option<T>,
reduction: Reduction
) -> Tensor
pub fn multilabel_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Tensor
pub fn multilabel_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
is_target: &Tensor
) -> Tensor
pub fn multilabel_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn nan_to_num(
&self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
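`nan_to_num` substitutes the three non-finite cases elementwise; passing `None` falls back to `0` for NaN and the finite extremes of the dtype for the infinities. An `f64` sketch of that substitution (hypothetical `nan_to_num_ref` name):

```rust
// NaN -> nan (default 0), +inf -> posinf (default f64::MAX),
// -inf -> neginf (default f64::MIN); finite values pass through.
fn nan_to_num_ref(x: f64, nan: Option<f64>, posinf: Option<f64>, neginf: Option<f64>) -> f64 {
    if x.is_nan() {
        nan.unwrap_or(0.0)
    } else if x == f64::INFINITY {
        posinf.unwrap_or(f64::MAX)
    } else if x == f64::NEG_INFINITY {
        neginf.unwrap_or(f64::MIN)
    } else {
        x
    }
}

fn main() {
    println!("{}", nan_to_num_ref(f64::NAN, None, None, None));          // 0
    println!("{}", nan_to_num_ref(f64::INFINITY, None, Some(1e9), None)); // 1e9
}
```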
pub fn nan_to_num_(
&mut self,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
pub fn nan_to_num_out(
&self,
out: &Tensor,
nan: impl Into<Option<f64>>,
posinf: impl Into<Option<f64>>,
neginf: impl Into<Option<f64>>
) -> Tensor
pub fn nanmedian_dim_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
keepdim: bool
) -> (Tensor, Tensor)
pub fn nanquantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn nanquantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn nanquantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn native_batch_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> (Tensor, Tensor, Tensor)
pub fn native_batch_norm_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
save_mean: &Tensor,
save_invstd: &Tensor,
weight: Option<T>,
bias: Option<T>,
running_mean: Option<T>,
running_var: Option<T>,
training: bool,
momentum: f64,
eps: f64
) -> (Tensor, Tensor, Tensor)
pub fn native_group_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
n: i64,
c: i64,
hxw: i64,
group: i64,
eps: f64
) -> (Tensor, Tensor, Tensor)
pub fn native_layer_norm<T: Borrow<Tensor>>(
&self,
normalized_shape: &[i64],
weight: Option<T>,
bias: Option<T>,
eps: f64
) -> (Tensor, Tensor, Tensor)
pub fn native_norm_scalaropt_dim_dtype<S: Into<Scalar>>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn new_full<S: Into<Scalar>>(
&self,
size: &[i64],
fill_value: S,
options: (Kind, Device)
) -> Tensor
pub fn g_nll_loss<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor
pub fn nll_loss2d<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor
pub fn nll_loss2d_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor
pub fn nll_loss2d_backward_grad_input<T: Borrow<Tensor>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor
pub fn nll_loss2d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor
pub fn nll_loss_backward<T: Borrow<Tensor>>(
&self,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor
pub fn nll_loss_backward_grad_input<T: Borrow<Tensor>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64,
total_weight: &Tensor
) -> Tensor
pub fn nll_loss_nd<T: Borrow<Tensor>>(
&self,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor
pub fn nll_loss_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
target: &Tensor,
weight: Option<T>,
reduction: Reduction,
ignore_index: i64
) -> Tensor
pub fn norm_dtype_out<S: Into<Scalar>>(
&self,
out: &Tensor,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn norm_scalaropt_dim_dtype<S: Into<Scalar>>(
&self,
p: S,
dim: &[i64],
keepdim: bool,
dtype: Kind
) -> Tensor
pub fn ormqr_out(
&self,
out: &Tensor,
input2: &Tensor,
input3: &Tensor,
left: bool,
transpose: bool
) -> Tensor
pub fn pad_sequence<T: Borrow<Tensor>>(
sequences: &[T],
batch_first: bool,
padding_value: f64
) -> Tensor
pub fn poisson_nll_loss(
&self,
target: &Tensor,
log_input: bool,
full: bool,
eps: f64,
reduction: Reduction
) -> Tensor
pub fn quantile_new(
&self,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_new_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_new_scalar(
&self,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_new_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool,
interpolation: &str
) -> Tensor
pub fn quantile_out(
&self,
out: &Tensor,
q: &Tensor,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn quantile_scalar_out(
&self,
out: &Tensor,
q: f64,
dim: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn quantize_per_channel(
&self,
scales: &Tensor,
zero_points: &Tensor,
axis: i64,
dtype: Kind
) -> Tensor
pub fn quantize_per_tensor_tensors<T: Borrow<Tensor>>(
tensors: &[T],
scales: &Tensor,
zero_points: &Tensor,
dtype: Kind
) -> Vec<Tensor>
pub fn quantized_batch_norm<T: Borrow<Tensor>>(
&self,
weight: Option<T>,
bias: Option<T>,
mean: &Tensor,
var: &Tensor,
eps: f64,
output_scale: f64,
output_zero_point: i64
) -> Tensor
pub fn quantized_gru_cell<S: Into<Scalar>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor
pub fn quantized_lstm_cell<T: Borrow<Tensor>, S: Into<Scalar>>(
&self,
hx: &[T],
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> (Tensor, Tensor)
pub fn quantized_max_pool1d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn quantized_max_pool2d(
&self,
kernel_size: &[i64],
stride: &[i64],
padding: &[i64],
dilation: &[i64],
ceil_mode: bool
) -> Tensor
pub fn quantized_rnn_relu_cell<S: Into<Scalar>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor
pub fn quantized_rnn_tanh_cell<S: Into<Scalar>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: &Tensor,
b_hh: &Tensor,
packed_ih: &Tensor,
packed_hh: &Tensor,
col_offsets_ih: &Tensor,
col_offsets_hh: &Tensor,
scale_ih: S,
scale_hh: S,
zero_point_ih: S,
zero_point_hh: S
) -> Tensor
pub fn reflection_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn reflection_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn repeat_interleave_self_tensor(
&self,
repeats: &Tensor,
dim: impl Into<Option<i64>>
) -> Tensor
pub fn replication_pad1d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn replication_pad2d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn replication_pad3d_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
padding: &[i64]
) -> Tensor
pub fn rnn_relu<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor)
pub fn rnn_relu_cell<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor
pub fn rnn_relu_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> (Tensor, Tensor)
pub fn rnn_tanh<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool,
batch_first: bool
) -> (Tensor, Tensor)
pub fn rnn_tanh_cell<T: Borrow<Tensor>>(
&self,
hx: &Tensor,
w_ih: &Tensor,
w_hh: &Tensor,
b_ih: Option<T>,
b_hh: Option<T>
) -> Tensor
pub fn rnn_tanh_data<T: Borrow<Tensor>>(
data: &Tensor,
batch_sizes: &Tensor,
hx: &Tensor,
params: &[T],
has_biases: bool,
num_layers: i64,
dropout: f64,
train: bool,
bidirectional: bool
) -> (Tensor, Tensor)
pub fn rrelu_with_noise_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
noise: &Tensor,
lower: S,
upper: S,
training: bool,
self_is_result: bool
) -> Tensor
pub fn scatter_value_reduce_<S: Into<Scalar>>(
&mut self,
dim: i64,
index: &Tensor,
value: S,
reduce: &str
) -> Tensor
pub fn searchsorted_scalar<S: Into<Scalar>>(
sorted_sequence: &Tensor,
self_scalar: S,
out_int32: bool,
right: bool
) -> Tensor
pub fn searchsorted_tensor_out(
&self,
out: &Tensor,
sorted_sequence: &Tensor,
out_int32: bool,
right: bool
) -> Tensor
pub fn segment_reduce<T: Borrow<Tensor>, S: Into<Scalar>>(
data: &Tensor,
reduce: &str,
lengths: Option<T>,
indices: Option<T>,
axis: i64,
unsafe_: bool,
initial: S
) -> Tensor
pub fn segment_reduce_backward<T: Borrow<Tensor>>(
grad: &Tensor,
output: &Tensor,
data: &Tensor,
lengths: Option<T>
) -> Tensor
pub fn sigmoid_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output: &Tensor
) -> Tensor
pub fn slice(
&self,
dim: i64,
start: impl Into<Option<i64>>,
end: impl Into<Option<i64>>,
step: i64
) -> Tensor
pub fn slice_backward(
grad: &Tensor,
input_sizes: &[i64],
dim: i64,
start: i64,
end: i64,
step: i64
) -> Tensor
pub fn slow_conv3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn slow_conv3d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64]
) -> Tensor
pub fn slow_conv_dilated2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn slow_conv_dilated3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn slow_conv_transpose2d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn slow_conv_transpose2d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn slow_conv_transpose3d<T: Borrow<Tensor>>(
&self,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn slow_conv_transpose3d_out<T: Borrow<Tensor>>(
&self,
out: &Tensor,
weight: &Tensor,
kernel_size: &[i64],
bias: Option<T>,
stride: &[i64],
padding: &[i64],
output_padding: &[i64],
dilation: &[i64]
) -> Tensor
pub fn smooth_l1_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
pub fn smooth_l1_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
pub fn smooth_l1_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction,
beta: f64
) -> Tensor
pub fn soft_margin_loss_backward(
&self,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn soft_margin_loss_backward_grad_input(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn soft_margin_loss_out(
&self,
out: &Tensor,
target: &Tensor,
reduction: Reduction
) -> Tensor
pub fn softplus_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Tensor
pub fn softplus_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
beta: S,
threshold: S,
output: &Tensor
) -> Tensor
pub fn softshrink_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
lambd: S
) -> Tensor
pub fn sort_values(
&self,
values: &Tensor,
indices: &Tensor,
dim: i64,
descending: bool
) -> (Tensor, Tensor)
pub fn sort_values_stable(
&self,
values: &Tensor,
indices: &Tensor,
stable: bool,
dim: i64,
descending: bool
) -> (Tensor, Tensor)
pub fn sparse_coo_tensor_indices(
indices: &Tensor,
values: &Tensor,
options: (Kind, Device)
) -> Tensor
pub fn sparse_coo_tensor_indices_size(
indices: &Tensor,
values: &Tensor,
size: &[i64],
options: (Kind, Device)
) -> Tensor
pub fn sparse_resize_and_clear_(
&mut self,
size: &[i64],
sparse_dim: i64,
dense_dim: i64
) -> Tensor
pub fn special_xlog1py_other_scalar_out<S: Into<Scalar>>(
&self,
out: &Tensor,
other: S
) -> Tensor
pub fn special_xlog1py_self_scalar_out<S: Into<Scalar>>(
out: &Tensor,
self_scalar: S,
other: &Tensor
) -> Tensor
pub fn std_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn std_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn std_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> (Tensor, Tensor)
pub fn stft<T: Borrow<Tensor>>(
&self,
n_fft: i64,
hop_length: impl Into<Option<i64>>,
win_length: impl Into<Option<i64>>,
window: Option<T>,
normalized: bool,
onesided: bool,
return_complex: bool
) -> Tensor
pub fn svd_u(
&self,
u: &Tensor,
s: &Tensor,
v: &Tensor,
some: bool,
compute_uv: bool
) -> (Tensor, Tensor, Tensor)
pub fn symeig_e(
&self,
e: &Tensor,
v: &Tensor,
eigenvectors: bool,
upper: bool
) -> (Tensor, Tensor)
pub fn take_along_dim_out(
&self,
out: &Tensor,
indices: &Tensor,
dim: impl Into<Option<i64>>
) -> Tensor
pub fn tanh_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output: &Tensor
) -> Tensor
pub fn tensor_split_tensor_indices_or_sections(
&self,
tensor_indices_or_sections: &Tensor,
dim: i64
) -> Vec<Tensor>
pub fn tensordot_out(
&self,
out: &Tensor,
other: &Tensor,
dims_self: &[i64],
dims_other: &[i64]
) -> Tensor
pub fn threshold_backward<S: Into<Scalar>>(
&self,
grad_output: &Tensor,
threshold: S
) -> Tensor
pub fn threshold_backward_grad_input<S: Into<Scalar>>(
&self,
grad_input: &Tensor,
grad_output: &Tensor,
threshold: S
) -> Tensor
pub fn topk_values(
&self,
values: &Tensor,
indices: &Tensor,
k: i64,
dim: i64,
largest: bool,
sorted: bool
) -> (Tensor, Tensor)
pub fn triangular_solve(
&self,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
pub fn triangular_solve_x(
&self,
x: &Tensor,
m: &Tensor,
a: &Tensor,
upper: bool,
transpose: bool,
unitriangular: bool
) -> (Tensor, Tensor)
pub fn triplet_margin_loss(
anchor: &Tensor,
positive: &Tensor,
negative: &Tensor,
margin: f64,
p: f64,
eps: f64,
swap: bool,
reduction: Reduction
) -> Tensor
pub fn unfold_backward(
grad_in: &Tensor,
input_sizes: &[i64],
dim: i64,
size: i64,
step: i64
) -> Tensor
pub fn unique_consecutive(
&self,
return_inverse: bool,
return_counts: bool,
dim: impl Into<Option<i64>>
) -> (Tensor, Tensor, Tensor)
pub fn unique_dim(
&self,
dim: i64,
sorted: bool,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
pub fn unique_dim_consecutive(
&self,
dim: i64,
return_inverse: bool,
return_counts: bool
) -> (Tensor, Tensor, Tensor)
pub fn upsample_bicubic2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bicubic2d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bicubic2d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bicubic2d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_bicubic2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bicubic2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_bilinear2d(
&self,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bilinear2d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bilinear2d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bilinear2d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_bilinear2d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_bilinear2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_linear1d(
&self,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_linear1d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_linear1d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_linear1d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_linear1d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_linear1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest1d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest1d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest1d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest1d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest1d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest2d(
&self,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest2d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest2d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest2d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest2d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest2d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest3d(
&self,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
scale_factors: &[f64]
) -> Tensor
pub fn upsample_nearest3d_out(
&self,
out: &Tensor,
output_size: &[i64],
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_nearest3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_trilinear3d(
&self,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d_backward(
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d_backward_grad_input(
grad_input: &Tensor,
grad_output: &Tensor,
output_size: &[i64],
input_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d_backward_vec<'a>(
grad_output: &Tensor,
output_size: impl Into<Option<&'a [i64]>>,
input_size: &[i64],
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn upsample_trilinear3d_out(
&self,
out: &Tensor,
output_size: &[i64],
align_corners: bool,
scales_d: impl Into<Option<f64>>,
scales_h: impl Into<Option<f64>>,
scales_w: impl Into<Option<f64>>
) -> Tensor
pub fn upsample_trilinear3d_vec<'a>(
&self,
output_size: impl Into<Option<&'a [i64]>>,
align_corners: bool,
scale_factors: &[f64]
) -> Tensor
pub fn value_selecting_reduction_backward(
grad: &Tensor,
dim: i64,
indices: &Tensor,
sizes: &[i64],
keepdim: bool
) -> Tensor
pub fn var_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn var_correction_out<'a>(
&self,
out: &Tensor,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> Tensor
pub fn var_mean_correction<'a>(
&self,
dim: impl Into<Option<&'a [i64]>>,
correction: impl Into<Option<i64>>,
keepdim: bool
) -> (Tensor, Tensor)
pub fn where_scalarself<S: Into<Scalar>>(
condition: &Tensor,
self_scalar: S,
other: &Tensor
) -> Tensor
Reads an npy file and returns the stored tensor.
Reads an npz file and returns some named tensors.
Writes a tensor in the npy format so that it can be read using Python.
Computes the cross-entropy loss based on some logits and targets.
Returns the average accuracy for some given logits, assuming that targets represent the ground truth.
Flattens a tensor.
This returns a flattened version of the given tensor. The first dimension is preserved as it is assumed to be the mini-batch dimension.
Converts a tensor to a one-hot encoded version.
If the input has a size [N1, N2, …, Nk], the returned tensor has a size [N1, …, Nk, labels]. The returned tensor uses float values. Elements of the input vector are expected to be between 0 and labels-1.
Copies a tensor to a newly allocated tensor using the same shape and device.
Trait Implementations
Performs the += operation.
Performs the /= operation.
impl<A, B, C, D, E, F, G> IndexOp<(A, B, C, D, E, F, G)> for Tensor where
A: Into<TensorIndexer>,
B: Into<TensorIndexer>,
C: Into<TensorIndexer>,
D: Into<TensorIndexer>,
E: Into<TensorIndexer>,
F: Into<TensorIndexer>,
G: Into<TensorIndexer>,
impl<A, B, C, D, E, F> IndexOp<(A, B, C, D, E, F)> for Tensor where
A: Into<TensorIndexer>,
B: Into<TensorIndexer>,
C: Into<TensorIndexer>,
D: Into<TensorIndexer>,
E: Into<TensorIndexer>,
F: Into<TensorIndexer>,
impl<A, B, C, D, E> IndexOp<(A, B, C, D, E)> for Tensor where
A: Into<TensorIndexer>,
B: Into<TensorIndexer>,
C: Into<TensorIndexer>,
D: Into<TensorIndexer>,
E: Into<TensorIndexer>,
impl<A, B, C, D> IndexOp<(A, B, C, D)> for Tensor where
A: Into<TensorIndexer>,
B: Into<TensorIndexer>,
C: Into<TensorIndexer>,
D: Into<TensorIndexer>,
impl<A, B, C> IndexOp<(A, B, C)> for Tensor where
A: Into<TensorIndexer>,
B: Into<TensorIndexer>,
C: Into<TensorIndexer>,
Performs the *= operation.
Performs the -= operation.