Trait opencv::dnn::LSTMLayer

pub trait LSTMLayer: LayerTrait {
    pub fn as_raw_LSTMLayer(&self) -> *const c_void;
    pub fn as_raw_mut_LSTMLayer(&mut self) -> *mut c_void;

    pub fn set_weights(&mut self, wh: &Mat, wx: &Mat, b: &Mat) -> Result<()> { ... }
    pub fn set_out_shape(&mut self, out_tail_shape: &MatShape) -> Result<()> { ... }
    pub fn set_use_timstamps_dim(&mut self, use_: bool) -> Result<()> { ... }
    pub fn set_produce_cell_output(&mut self, produce: bool) -> Result<()> { ... }
    pub fn input_name_to_index(&mut self, input_name: &str) -> Result<i32> { ... }
    pub fn output_name_to_index(&mut self, output_name: &str) -> Result<i32> { ... }
}

LSTM recurrent layer

Required methods

pub fn as_raw_LSTMLayer(&self) -> *const c_void

pub fn as_raw_mut_LSTMLayer(&mut self) -> *mut c_void

Provided methods

pub fn set_weights(&mut self, wh: &Mat, wx: &Mat, b: &Mat) -> Result<()>

👎 Deprecated: Use LayerParams::blobs instead.

Set trained weights for the LSTM layer.

LSTM behavior on each step is defined by the current input, the previous output, the previous cell state, and the learned weights.

Let $x_t$ be the current input, $h_t$ the current output, and $c_t$ the current cell state. Then the current output and cell state are computed as follows:

$$
\begin{aligned}
h_t &= o_t \odot \tanh(c_t), \\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t,
\end{aligned}
$$

where $\odot$ is per-element multiplication and $i_t, f_t, o_t, g_t$ are internal gates computed from the learned weights.

The gates are computed as follows:

$$
\begin{aligned}
i_t &= \mathrm{sigmoid}(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \\
f_t &= \mathrm{sigmoid}(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \\
o_t &= \mathrm{sigmoid}(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \\
g_t &= \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g),
\end{aligned}
$$

where $W_{x?}$, $W_{h?}$ and $b_?$ are learned weights represented as matrices: $W_{x?} \in \mathbb{R}^{N_h \times N_x}$, $W_{h?} \in \mathbb{R}^{N_h \times N_h}$, $b_? \in \mathbb{R}^{N_h}$.

For simplicity and performance, we use $W_x = [W_{xi}; W_{xf}; W_{xo}; W_{xg}]$ (i.e. $W_x$ is the vertical concatenation of the $W_{x?}$ blocks), so $W_x \in \mathbb{R}^{4N_h \times N_x}$. The same holds for $W_h = [W_{hi}; W_{hf}; W_{ho}; W_{hg}]$, $W_h \in \mathbb{R}^{4N_h \times N_h}$, and for $b = [b_i; b_f; b_o; b_g]$, $b \in \mathbb{R}^{4N_h}$.
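The equations above can be sketched as a single LSTM step in plain Rust. This is an illustrative, self-contained implementation of the math, not the opencv crate's internal code; the names `lstm_step` and `matvec` are made up for this example.

```rust
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// Naive matrix-vector product: `m` is row-major with `cols` columns.
fn matvec(m: &[f64], cols: usize, v: &[f64]) -> Vec<f64> {
    m.chunks(cols)
        .map(|row| row.iter().zip(v).map(|(a, b)| a * b).sum::<f64>())
        .collect()
}

/// One LSTM step: returns (h_t, c_t) given input `x`, previous output
/// `h_prev` and previous cell state `c_prev`. `wx` is [4*nh x nx],
/// `wh` is [4*nh x nh], `b` has length 4*nh, all stacked in the
/// i, f, o, g gate order described above.
fn lstm_step(
    wx: &[f64], wh: &[f64], b: &[f64],
    x: &[f64], h_prev: &[f64], c_prev: &[f64],
    nh: usize,
) -> (Vec<f64>, Vec<f64>) {
    let nx = x.len();
    let gx = matvec(wx, nx, x);      // W_x * x_t, length 4*nh
    let gh = matvec(wh, nh, h_prev); // W_h * h_{t-1}, length 4*nh
    let pre: Vec<f64> = (0..4 * nh).map(|k| gx[k] + gh[k] + b[k]).collect();

    let mut h = vec![0.0; nh];
    let mut c = vec![0.0; nh];
    for j in 0..nh {
        let i_t = sigmoid(pre[j]);          // input gate
        let f_t = sigmoid(pre[nh + j]);     // forget gate
        let o_t = sigmoid(pre[2 * nh + j]); // output gate
        let g_t = pre[3 * nh + j].tanh();   // candidate update
        c[j] = f_t * c_prev[j] + i_t * g_t; // c_t = f ⊙ c_{t-1} + i ⊙ g
        h[j] = o_t * c[j].tanh();           // h_t = o ⊙ tanh(c_t)
    }
    (h, c)
}
```

With all weights and biases zero, every sigmoid gate evaluates to 0.5 and the candidate to 0, so the cell state simply halves: `c_t = 0.5 * c_{t-1}`.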

Parameters

  • wh: matrix defining how the previous output is transformed to the internal gates (i.e. $W_h$ in the notation above)
  • wx: matrix defining how the current input is transformed to the internal gates (i.e. $W_x$ in the notation above)
  • b: bias vector (i.e. $b$ in the notation above)
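The expected argument shapes follow directly from the concatenation described above. The helper below is purely illustrative (it is not an opencv function); it returns the shapes of Wh, Wx and b for a given hidden size and input size.

```rust
/// Returns ((rows, cols) of Wh, (rows, cols) of Wx, len of b) for hidden
/// size `nh` and input size `nx`. The four per-gate blocks are stacked
/// vertically, hence the factor of 4 in the row counts.
fn lstm_weight_shapes(nh: usize, nx: usize) -> ((usize, usize), (usize, usize), usize) {
    ((4 * nh, nh), (4 * nh, nx), 4 * nh)
}
```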

pub fn set_out_shape(&mut self, out_tail_shape: &MatShape) -> Result<()>

Specifies the shape of the output blob, which will be [[T], N] + outTailShape. If this parameter is empty or unset, then outTailShape = [Wh.size(0)] will be used, where Wh is the parameter from set_weights().

C++ default parameters

  • out_tail_shape: MatShape()
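How the output shape is composed can be sketched in plain Rust. This is an illustrative helper under the rules stated above, not an opencv API; `out_blob_shape` and its parameters are made-up names.

```rust
/// Composes the output blob shape: [T, N] (or just [N] when the timestamp
/// dimension is unused) followed by `out_tail_shape`; when the tail is
/// empty, [Wh.size(0)] (`wh_rows`) is used as the fallback tail.
fn out_blob_shape(
    t: Option<usize>,       // Some(T) when the timestamp dimension is used
    n: usize,               // number of independent streams
    out_tail_shape: &[usize],
    wh_rows: usize,         // Wh.size(0), the fallback tail
) -> Vec<usize> {
    let mut shape = Vec::new();
    if let Some(t) = t {
        shape.push(t);
    }
    shape.push(n);
    if out_tail_shape.is_empty() {
        shape.push(wh_rows);
    } else {
        shape.extend_from_slice(out_tail_shape);
    }
    shape
}
```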

pub fn set_use_timstamps_dim(&mut self, use_: bool) -> Result<()>

👎 Deprecated: Use flag use_timestamp_dim in LayerParams.

Specifies whether to interpret the first dimension of the input blob as the timestamp dimension or as the sample dimension.

If the flag is set to true, the shape of the input blob will be interpreted as [T, N, [data dims]], where T is the number of timestamps and N is the number of independent streams. In this case each forward() call iterates through T timestamps and updates the layer's state T times.

If the flag is set to false, the shape of the input blob will be interpreted as [N, [data dims]]. In this case each forward() call makes one iteration and produces one timestamp with shape [N, [out dims]].

C++ default parameters

  • use_: true
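The two interpretations of the input shape can be sketched as a small Rust helper. `split_input_shape` is illustrative only, not part of the opencv crate.

```rust
/// Splits an input blob shape into (T, N, data_dims): T timestamps over
/// N streams when the timestamp flag is set, otherwise a single timestamp
/// over N samples.
fn split_input_shape(shape: &[usize], use_timestamp_dim: bool) -> (usize, usize, Vec<usize>) {
    if use_timestamp_dim {
        (shape[0], shape[1], shape[2..].to_vec()) // [T, N, data dims...]
    } else {
        (1, shape[0], shape[1..].to_vec())        // [N, data dims...]
    }
}
```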

pub fn set_produce_cell_output(&mut self, produce: bool) -> Result<()>

👎 Deprecated: Use flag produce_cell_output in LayerParams.

If this flag is set to true, the layer will produce $c_t$ as its second output. The shape of the second output is the same as that of the first output.

C++ default parameters

  • produce: false

pub fn input_name_to_index(&mut self, input_name: &str) -> Result<i32>

pub fn output_name_to_index(&mut self, output_name: &str) -> Result<i32>


Implementations

impl<'_> dyn LSTMLayer + '_

pub fn create(params: &LayerParams) -> Result<Ptr<dyn LSTMLayer>>

Creates an instance of the LSTM layer.

Implementors
