Struct leaf::layer::Layer

pub struct Layer<B: IBackend> {
    pub name: String,
    pub config: Box<LayerConfig>,
    pub worker: Box<ILayer<B>>,
    pub weights_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub weights_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blobs_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blob_names: Vec<String>,
    pub output_blobs_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub output_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub blob_names: HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>)>,
    // some fields omitted
}

The generic Layer

Fields

name: String

Identifies the Layer.

The name is mainly used for logging purposes.

config: Box<LayerConfig>

The configuration of the Layer.

worker: Box<ILayer<B>>

The implementation of the Layer.

This is the part that does most of the work (forward/backward).

weights_data: Vec<ArcLock<SharedTensor<f32>>>

The vector that stores shared references to the weights in the form of blobs.

weights_gradient: Vec<ArcLock<SharedTensor<f32>>>

The vector that stores shared references to the gradients of the weights in the form of blobs.

input_blobs_data: Vec<ArcLock<SharedTensor<f32>>>

References to all the input blobs of the layer.

input_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>

References to the gradients of all the input blobs of the layer.

input_blob_names: Vec<String>

Names for all the input blobs of the layer.

output_blobs_data: Vec<ArcLock<SharedTensor<f32>>>

References to all the output blobs of the layer.

output_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>

References to the gradients of all the output blobs of the layer.

blob_names: HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>)>

All the blobs of the layer that can be addressed by name.

Does not contain anonymous blobs.
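For example, a named blob can be looked up directly through blob_names. The snippet below is a minimal sketch: the blob name "data" is an assumption, and since ArcLock is an Arc<RwLock<...>> alias the tensor has to be locked before it can be read.

// A minimal sketch; `layer` is an initialized Layer and "data" is an assumed blob name.
if let Some((data, gradient)) = layer.blob_names.get("data") {
    // ArcLock is an Arc<RwLock<SharedTensor<f32>>> alias, so acquire a read lock first.
    let data_tensor = data.read().unwrap();
    println!("data blob shape: {:?}", data_tensor.desc());
    let _ = gradient; // the second element of the tuple is the matching gradient blob
}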

Methods

impl<B: IBackend> Layer<B>

Connect the layer to other layers and set up tensors for intermediate results and weights.

Connects to the outputs provided by other layers via the registry. Adds output blobs to the layer and then adds them to the registry, so the next layers can connect to them as their inputs. In the end it initializes the underlying layer implementation.

Called during initialization of container layers.

Initializes the layer for backpropagation.

Go through all the blobs of a layer to determine which blobs contribute to the loss of the next layer. We can skip backward computation for blobs that don't contribute to the loss. If all of the blobs skip backpropagation we set a flag to skip backpropagation of the whole layer.

Set backpropagation flags to force this layer to backpropagate.

Is executed during network initialization if NetworkConfig.force_backward is true. Forcing backpropagation is useful for debugging.

Uses the underlying layer implementation to compute a forward step.

See ILayer.forward

Uses the underlying layer implementation to compute a backward step.

See ILayer.backward
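Taken together, a forward and a backward step might look like the following sketch; `input` and `output_gradient` are placeholder ArcLock<SharedTensor<f32>> blobs and the exact signatures are simplified assumptions based on the descriptions above.

// A minimal sketch, assuming `layer` is a connected Layer and `input` /
// `output_gradient` are ArcLock<SharedTensor<f32>> blobs.
let outputs = layer.forward(&[input.clone()]);
// ... compute the loss and its gradient w.r.t. the outputs ...
let input_gradients = layer.backward(&[output_gradient.clone()]);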

Calculate the gradient w.r.t. input.

This method is mostly used when doing backpropagation.

Calculate the gradient w.r.t. parameters.

"Parameters" here refers to weights and also possibly bias, depending on the layer.

This method is mostly used when doing backpropagation.

Synchronize the layer's backend.

Updates the weights with the weight update computed by the Solver.

Updating the weights is the last step of computing a Solver minibatch. The update value is computed in previous steps according to the learning rate policy.

Clears the weight gradients and zero-initializes them.

The gradients for the weights accumulate over the backpropagation steps of a Solver minibatch and are cleared between minibatches to start over with a clean slate.
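Together these two steps frame a Solver minibatch. The sketch below is illustrative only; the method names and the compute_loss_gradient helper are assumptions matching the descriptions above rather than verbatim signatures.

// A minimal sketch of one minibatch; `minibatch`, `solver_backend` and
// `compute_loss_gradient` are hypothetical.
layer.clear_weights_gradients(); // start with zeroed weight gradients
for (input, target) in minibatch {
    let outputs = layer.forward(&[input]);
    let output_gradients = compute_loss_gradient(&outputs, &target); // hypothetical helper
    layer.backward(&output_gradients); // weight gradients accumulate across steps
}
layer.update_weights(&*solver_backend); // apply the update computed by the Solver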

Serialize the Layer and its weights to a Cap'n Proto file at the specified path.

You can find the capnp schema in the Leaf repository.

use std::rc::Rc;
use leaf::layer::*;
use leaf::layers::*;
use leaf::util;

let mut net_cfg = SequentialConfig::default();
// ... set up network ...
let cfg = LayerConfig::new("network", net_cfg);

let native_backend = Rc::new(util::native_backend());
let mut layer = Layer::from_config(native_backend, &cfg);
// ... do stuff with the layer ...
// ... and save it
layer.save("mynetwork").unwrap();

Read a Cap'n Proto file at the specified path and deserialize the Layer inside it.

You can find the capnp schema in the Leaf repository.

use std::rc::Rc;
use collenchyma::prelude::*;
use leaf::layer::Layer;
use leaf::util;

let native_backend = Rc::new(util::native_backend());
// Load layer from file "mynetwork"
let layer = Layer::<Backend<Native>>::load(native_backend, "mynetwork").unwrap();

Sets whether the layer should compute gradients w.r.t. a weight at a particular index given by weight_id.

See weight_propagate_down.

Returns true when the layer is using in-place computation.

For a layer to use in-place computation it needs to support it via compute_in_place and the names of the first input and output tensor have to match.

Returns the names of all the input blobs.

Returns the loss weight associated with the weight blob with id weight_id.

Returns all the learnable weights in the layer.

If the layer is a container layer it will return all the weights of the layers inside it.

Returns the gradients for all the learnable weights in the layer.

If the layer is a container layer it will return all the gradients of the layers inside it.

Returns the names of all the learnable weights in the layer.

If the layer is a container layer it will return the names of all the weights of the layers inside it.

Returns the learning rate for all the learnable weights in the layer.

If the layer is a container layer it will return all learning rates of the layers inside it.
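A small sketch combining these getters to inspect the weights of a (possibly container) layer; the getter names learnable_weights_names and learnable_weights_data are assumptions based on the descriptions above.

// A minimal sketch; the getter names are assumed from the descriptions above.
let names = layer.learnable_weights_names();
let weights = layer.learnable_weights_data();
for (name, weight) in names.iter().zip(weights.iter()) {
    // Each weight is an ArcLock<SharedTensor<f32>>, so lock it before reading.
    let tensor = weight.read().unwrap();
    println!("weight {} has shape {:?}", name, tensor.desc());
}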

impl<B: IBackend + LayerOps<f32> + 'static> Layer<B>

Creates a new Layer from a LayerConfig.
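For a single concrete layer (rather than the sequential container used in the examples above), construction might look like the sketch below; LinearConfig and its output_size field are assumptions about one of the bundled layer configs.

// A minimal sketch, assuming `LinearConfig { output_size }` is one of the
// available layer configs; the backend setup mirrors the examples above.
let cfg = LayerConfig::new("linear", LinearConfig { output_size: 10 });
let backend = Rc::new(util::native_backend());
let linear_layer = Layer::from_config(backend, &cfg);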

Trait Implementations

impl<B: Debug + IBackend> Debug for Layer<B>

Formats the value using the given formatter.

impl<B: IBackend> Send for Layer<B>