Module autograph::autograd

Structs

Gradient

A wrapper around a RwTensor.
Gradient lazily allocates its tensor with zeros to minimize memory footprint: if the backward pass is never run, no allocation occurs.
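The lazy-zeros idea can be sketched in plain Rust. This is an illustrative stand-in, not autograph's actual Gradient API; the names `LazyGradient`, `accumulate`, and `is_allocated` are assumptions for the example.

```rust
use std::sync::{Arc, RwLock};

/// Hypothetical sketch of a lazily allocated gradient buffer.
pub struct LazyGradient {
    // None until the backward pass first touches it.
    data: Arc<RwLock<Option<Vec<f32>>>>,
    len: usize,
}

impl LazyGradient {
    pub fn new(len: usize) -> Self {
        Self { data: Arc::new(RwLock::new(None)), len }
    }
    /// Allocates zeros on first access, then accumulates `delta` into the buffer.
    pub fn accumulate(&self, delta: &[f32]) {
        let mut guard = self.data.write().unwrap();
        let buf = guard.get_or_insert_with(|| vec![0.0; self.len]);
        for (g, d) in buf.iter_mut().zip(delta) {
            *g += *d;
        }
    }
    /// True only after the first accumulate; before that no memory is used.
    pub fn is_allocated(&self) -> bool {
        self.data.read().unwrap().is_some()
    }
    pub fn get(&self) -> Option<Vec<f32>> {
        self.data.read().unwrap().clone()
    }
}
```

If `accumulate` is never called (no backward pass), the inner `Option` stays `None` and the tensor storage is never allocated.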

Graph

Stores backward ops during the forward pass. On the backward pass, executes the variable ops in first-in, last-out (i.e. reverse) order, then executes the parameter ops, also in first-in, last-out order.
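The two-phase, first-in-last-out execution can be sketched as follows. The type and method names (`BackwardGraph`, `enqueue_variable_op`, etc.) are illustrative assumptions, not autograph's actual Graph API.

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Hypothetical sketch: records backward closures on the forward pass,
/// then runs them in reverse order on the backward pass.
pub struct BackwardGraph {
    variable_ops: Vec<Box<dyn FnOnce()>>,
    parameter_ops: Vec<Box<dyn FnOnce()>>,
}

impl BackwardGraph {
    pub fn new() -> Self {
        Self { variable_ops: Vec::new(), parameter_ops: Vec::new() }
    }
    pub fn enqueue_variable_op(&mut self, op: impl FnOnce() + 'static) {
        self.variable_ops.push(Box::new(op));
    }
    pub fn enqueue_parameter_op(&mut self, op: impl FnOnce() + 'static) {
        self.parameter_ops.push(Box::new(op));
    }
    /// All variable ops run first, newest to oldest; then all parameter ops,
    /// also newest to oldest.
    pub fn backward(self) {
        for op in self.variable_ops.into_iter().rev() {
            op();
        }
        for op in self.parameter_ops.into_iter().rev() {
            op();
        }
    }
}

/// Demonstrates the execution order by logging which op ran when.
pub fn demo_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut graph = BackwardGraph::new();
    for &name in &["var1", "var2"] {
        let log = log.clone();
        graph.enqueue_variable_op(move || log.borrow_mut().push(name));
    }
    for &name in &["param1", "param2"] {
        let log = log.clone();
        graph.enqueue_parameter_op(move || log.borrow_mut().push(name));
    }
    graph.backward();
    Rc::try_unwrap(log).unwrap().into_inner()
}
```

Running `demo_order` yields `["var2", "var1", "param2", "param1"]`: variable ops in reverse recording order first, then parameter ops in reverse recording order.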

Parameter

A trainable parameter of a neural network model. Cloning copies the pointer, not the data, so clones share access. Note that if the gradient is None, or is set to None via set_training(false), the gradient is not shared between clones.
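The pointer-copying clone semantics can be sketched with Arc. `SharedParameter` and its methods are illustrative stand-ins, not autograph's actual Parameter API.

```rust
use std::sync::{Arc, RwLock};

/// Hypothetical sketch: cloning copies Arc pointers, so all clones
/// see the same weight data.
#[derive(Clone)]
pub struct SharedParameter {
    value: Arc<RwLock<Vec<f32>>>,
    // Optional gradient; None means this handle is not in training mode.
    grad: Option<Arc<RwLock<Vec<f32>>>>,
}

impl SharedParameter {
    pub fn new(value: Vec<f32>) -> Self {
        let len = value.len();
        Self {
            value: Arc::new(RwLock::new(value)),
            grad: Some(Arc::new(RwLock::new(vec![0.0; len]))),
        }
    }
    /// Setting training to false drops this handle's gradient; re-enabling
    /// allocates a fresh gradient that is NOT shared with existing clones.
    pub fn set_training(&mut self, training: bool) {
        if training {
            let len = self.value.read().unwrap().len();
            self.grad = Some(Arc::new(RwLock::new(vec![0.0; len])));
        } else {
            self.grad = None;
        }
    }
    pub fn shares_value_with(&self, other: &Self) -> bool {
        Arc::ptr_eq(&self.value, &other.value)
    }
}
```

The weight data stays shared across clones regardless of training mode; only the gradient link is affected by `set_training`.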

Variable

Variable is the struct that represents the inputs and outputs of a model. Operations on a Variable enqueue backward ops, which are executed in reverse so that the parameter gradients are evaluated; those gradients are then used to optimize the model. Like ArcTensor, a Variable can be cloned to copy the pointer to its data, as well as the pointers to the graph and the gradient data.
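An end-to-end sketch of this mechanism, for a scalar multiply `y = w * x`: the forward op records a closure that later fills in the parameter gradient. All names here are illustrative assumptions, not autograph's actual Variable API.

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// A recorded backward op: takes d(loss)/dy and accumulates into a gradient.
type BackwardOp = Box<dyn FnOnce(f32)>;

/// Hypothetical forward op for y = w * x that enqueues its backward op.
pub fn forward_mul(w: f32, x: f32, grad_w: Rc<RefCell<f32>>) -> (f32, BackwardOp) {
    let y = w * x;
    // Recorded on the forward pass, executed later on the backward pass:
    // d(loss)/dw = x * d(loss)/dy.
    let op: BackwardOp = Box::new(move |dy| {
        *grad_w.borrow_mut() += x * dy;
    });
    (y, op)
}

/// Runs forward then backward, returning (output, parameter gradient).
pub fn demo() -> (f32, f32) {
    let grad_w = Rc::new(RefCell::new(0.0));
    let (y, backward) = forward_mul(3.0, 2.0, grad_w.clone());
    backward(1.0); // seed d(loss)/dy = 1
    let g = *grad_w.borrow();
    (y, g)
}
```

With `w = 3` and `x = 2`, the forward pass gives `y = 6`, and running the recorded backward op with seed 1 gives `dw = x = 2`, which an optimizer would then consume.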

Type Definitions

Gradient0
Gradient1
Gradient2
Gradient3
Gradient4
GradientD
Parameter1
Parameter2
Parameter4
ParameterD
Variable0
Variable2
Variable4
VariableD