Module tch::nn


A small neural-network library based on Torch.

This library tries to stay as close as possible to the original Python and C++ implementations.
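
As a quick orientation, here is a minimal sketch of the typical workflow: create a variable store on a device, register a layer under a path derived from its root, and run a forward pass. The layer sizes and variable names are arbitrary placeholders.

```rust
use tch::{nn, nn::Module, Device, Kind, Tensor};

fn main() {
    // All variables created below live on the device owned by the store.
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();
    // A fully-connected layer registered under the name "lin".
    let lin = nn::linear(&root / "lin", 784, 10, Default::default());
    // Forward pass over a random batch of 8 inputs.
    let xs = Tensor::randn(&[8, 784], (Kind::Float, Device::Cpu));
    let ys = lin.forward(&xs);
    assert_eq!(ys.size(), [8, 10]);
}
```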

Modules

  • Variable initialization (see the sketch below).
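
To show how initialization hooks into layer construction, here is a hedged sketch assuming the `ws_init`/`bs_init` fields of `LinearConfig` and the `Init::Const` variant; the names and sizes are placeholders.

```rust
use tch::{nn, nn::Module, Device, Kind, Tensor};

fn main() {
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();
    // Keep the default weight initialization, but start the bias at zero.
    let cfg = nn::LinearConfig { bs_init: Some(nn::Init::Const(0.0)), ..Default::default() };
    let lin = nn::linear(&root / "lin", 8, 2, cfg);
    let ys = lin.forward(&Tensor::zeros(&[1, 8], (Kind::Float, Device::Cpu)));
    assert_eq!(ys.size(), [1, 2]);
}
```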

Structs

  • Parameters for the Adam optimizer.
  • Parameters for the AdamW optimizer.
  • A batch-normalization layer.
  • Batch-normalization config.
  • An N-dimensional convolution layer.
  • Generic convolution config.
  • A generic transposed convolution configuration.
  • A generic transposed convolution layer.
  • An embedding layer.
  • Configuration option for an embedding layer.
  • A layer defined by a simple closure.
  • A layer defined by a closure with an additional training parameter.
  • A Gated Recurrent Unit (GRU) layer.
  • A GRU state, containing a single tensor.
  • A group-normalization layer.
  • Group-normalization config.
  • An identity layer. This just propagates its tensor input as output.
  • A Long Short-Term Memory (LSTM) layer.
  • The state for an LSTM network, containing two tensors.
  • A layer-normalization layer.
  • Layer-normalization config.
  • A linear fully-connected layer.
  • Configuration for a linear layer.
  • An optimizer to run gradient descent.
  • A variable store with an associated path for variable naming.
  • Configuration for the GRU and LSTM layers.
  • Parameters for the RmsProp optimizer.
  • A sequential layer combining multiple other layers (see the sketch after this list).
  • A sequential layer combining new layers with support for a training mode.
  • Parameters for the SGD optimizer.
  • A VarStore is used to store variables used by one or multiple layers. It specifies a single device where all variables are stored.
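
To illustrate how several of these structs fit together, here is a minimal sketch of a small MLP built from a variable store, two linear layers, and the sequential container; the layer names and sizes are placeholders.

```rust
use tch::{nn, nn::Module, Device, Kind, Tensor};

fn main() {
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();
    // A small MLP: two linear layers with a ReLU in between, composed
    // with the sequential container.
    let net = nn::seq()
        .add(nn::linear(&root / "fc1", 32, 64, Default::default()))
        .add_fn(|xs| xs.relu())
        .add(nn::linear(&root / "fc2", 64, 4, Default::default()));
    let xs = Tensor::randn(&[16, 32], (Kind::Float, Device::Cpu));
    assert_eq!(net.forward(&xs).size(), [16, 4]);
}
```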

Enums

  • How padding is performed by convolution operations at the edges of the input tensor (see the sketch below).
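
The edge behavior ties in with the convolution configuration. The following hedged sketch assumes the `padding` field of `ConvConfig`, using the default zero padding with one pixel on each edge; the channel counts are placeholders.

```rust
use tch::{nn, nn::Module, Device, Kind, Tensor};

fn main() {
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();
    // A 3x3 convolution with one pixel of padding on each edge, so the
    // 28x28 spatial dimensions are preserved.
    let cfg = nn::ConvConfig { padding: 1, ..Default::default() };
    let conv = nn::conv2d(&root / "conv", 3, 16, 3, cfg);
    let xs = Tensor::randn(&[1, 3, 28, 28], (Kind::Float, Device::Cpu));
    assert_eq!(conv.forward(&xs).size(), [1, 16, 28, 28]);
}
```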

Traits

  • The simplest module trait, defining a forward function (see the sketch after this list).
  • Module trait with an additional train parameter.
  • Optimizer configurations. These configs can be used to build optimizers.
  • Trait for Recurrent Neural Networks.
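
As a sketch of the simplest trait, here is a hypothetical layer implementing the forward function by hand; the `Scale` type and its `factor` field are illustrative, not part of the library.

```rust
use tch::{nn::Module, Tensor};

// A hypothetical layer that multiplies its input by a fixed factor.
#[derive(Debug)]
struct Scale {
    factor: f64,
}

impl Module for Scale {
    fn forward(&self, xs: &Tensor) -> Tensor {
        xs * self.factor
    }
}
```

A module defined this way composes with the sequential containers above in the same way as the built-in layers.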

Functions

  • Creates the configuration for the Adam optimizer (used in the training sketch after this list).
  • Creates the configuration for the AdamW optimizer.
  • Applies Batch Normalization over a three-dimensional input.
  • Applies Batch Normalization over a four-dimensional input.
  • Applies Batch Normalization over a five-dimensional input.
  • Creates a new convolution layer for any number of dimensions.
  • Creates a new one-dimensional convolution layer.
  • Creates a new two-dimensional convolution layer.
  • Creates a new three-dimensional convolution layer.
  • Creates a one-dimensional transposed convolution layer.
  • Creates a two-dimensional transposed convolution layer.
  • Creates a three-dimensional transposed convolution layer.
  • Creates a new GRU layer.
  • Creates a new linear layer.
  • Creates an LSTM layer.
  • The default convolution config without bias.
  • Creates the configuration for the RmsProp optimizer.
  • Creates a new empty sequential layer.
  • Creates a new empty sequential layer with support for a training mode.
  • Creates the configuration for a Stochastic Gradient Descent (SGD) optimizer.
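
Putting the layer constructors and optimizer configurations together, here is a minimal training sketch on a toy regression target; the learning rate, sizes, and iteration count are arbitrary.

```rust
use tch::{nn, nn::{Module, OptimizerConfig}, Device, Kind, Tensor};

fn main() -> Result<(), tch::TchError> {
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();
    let net = nn::linear(&root / "lin", 4, 1, Default::default());
    // Adam over all variables in the store, with a learning rate of 1e-3.
    let mut opt = nn::Adam::default().build(&vs, 1e-3)?;
    let xs = Tensor::randn(&[64, 4], (Kind::Float, Device::Cpu));
    let ys = Tensor::randn(&[64, 1], (Kind::Float, Device::Cpu));
    for _step in 0..100 {
        let loss = net.forward(&xs).mse_loss(&ys, tch::Reduction::Mean);
        // Zeroes the gradients, backpropagates, and applies an update.
        opt.backward_step(&loss);
    }
    Ok(())
}
```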

Type Aliases

  • One-dimensional convolution layer.
  • Two-dimensional convolution layer (used in the sketch after this list).
  • Three-dimensional convolution layer.
  • Convolution config using the same parameters on all dimensions.
  • A one-dimensional transposed convolution layer.
  • A two-dimensional transposed convolution layer.
  • A three-dimensional transposed convolution layer.
  • A transposed convolution configuration using the same values on each dimension.
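
The aliases are convenient for naming concrete layer types in struct fields and function signatures. A short sketch; the `Backbone` type and its dimensions are hypothetical placeholders.

```rust
use tch::nn;

// A hypothetical struct whose fields use the concrete alias types.
#[derive(Debug)]
struct Backbone {
    conv: nn::Conv2D,
    fc: nn::Linear,
}

fn backbone(p: &nn::Path) -> Backbone {
    Backbone {
        conv: nn::conv2d(p / "conv", 3, 8, 3, Default::default()),
        fc: nn::linear(p / "fc", 8, 10, Default::default()),
    }
}
```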