Struct opencv::core::Ptr

pub struct Ptr<T: ?Sized>where
    Self: PtrExtern,
{ /* private fields */ }

Implementations§

Get raw pointer to the inner object

Get mutable raw pointer to the inner object

Methods from Deref<Target = f32>§

Return the ordering between self and other.

Unlike the standard partial comparison between floating point numbers, this comparison always produces an ordering in accordance with the totalOrder predicate as defined in the IEEE 754 (2008 revision) floating point standard. The values are ordered in the following sequence:

  • negative quiet NaN
  • negative signaling NaN
  • negative infinity
  • negative numbers
  • negative subnormal numbers
  • negative zero
  • positive zero
  • positive subnormal numbers
  • positive numbers
  • positive infinity
  • positive signaling NaN
  • positive quiet NaN.

The ordering established by this function does not always agree with the PartialOrd and PartialEq implementations of f32. For example, they consider negative and positive zero equal, while total_cmp doesn’t.

The interpretation of the signaling NaN bit follows the definition in the IEEE 754 standard, which may not match the interpretation by some of the older, non-conformant (e.g. MIPS) hardware implementations.

Example
struct GoodBoy {
    name: String,
    weight: f32,
}

let mut bois = vec![
    GoodBoy { name: "Pucci".to_owned(), weight: 0.1 },
    GoodBoy { name: "Woofer".to_owned(), weight: 99.0 },
    GoodBoy { name: "Yapper".to_owned(), weight: 10.0 },
    GoodBoy { name: "Chonk".to_owned(), weight: f32::INFINITY },
    GoodBoy { name: "Abs. Unit".to_owned(), weight: f32::NAN },
    GoodBoy { name: "Floaty".to_owned(), weight: -5.0 },
];

bois.sort_by(|a, b| a.weight.total_cmp(&b.weight));
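The divergence from PartialOrd described above can be checked directly. This is plain std Rust, independent of the opencv wrapper; a positive quiet NaN is built explicitly from its bit pattern because the bit pattern of the `f32::NAN` constant is not guaranteed:

```rust
use std::cmp::Ordering;

fn main() {
    // PartialOrd considers negative and positive zero equal...
    assert_eq!((-0.0_f32).partial_cmp(&0.0), Some(Ordering::Equal));
    // ...while total_cmp orders negative zero strictly before positive zero.
    assert_eq!((-0.0_f32).total_cmp(&0.0), Ordering::Less);

    // NaN is unordered under partial_cmp, but total_cmp gives it a
    // defined position: a positive quiet NaN sorts after positive infinity.
    let pos_qnan = f32::from_bits(0x7FC0_0000); // positive quiet NaN
    assert_eq!(pos_qnan.partial_cmp(&f32::INFINITY), None);
    assert_eq!(pos_qnan.total_cmp(&f32::INFINITY), Ordering::Greater);
}
```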

Trait Implementations§

Sets training method and common parameters. Read more
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM. Read more
Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. Default value is empty Mat. Read more
Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01). Read more
BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1. Read more
BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1. Read more
RPROP: Initial value Δ0 of update-values Δij. Default value is 0.1. Read more
RPROP: Increase factor η+. It must be >1. Default value is 1.2. Read more
RPROP: Decrease factor η−. It must be <1. Default value is 0.5. Read more
RPROP: Update-values lower limit Δmin. It must be positive. Default value is FLT_EPSILON. Read more
RPROP: Update-values upper limit Δmax. It must be >1. Default value is 50. Read more
ANNEAL: Update initial temperature. It must be >=0. Default value is 10. Read more
ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1. Read more
ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95. Read more
ANNEAL: Update iteration per step. It must be >0. Default value is 10. Read more
Set/initialize anneal RNG
Returns current training method
Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. Read more
Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01). Read more
BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1. Read more
BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1. Read more
RPROP: Initial value Δ0 of update-values Δij. Default value is 0.1. Read more
RPROP: Increase factor η+. It must be >1. Default value is 1.2. Read more
RPROP: Decrease factor η−. It must be <1. Default value is 0.5. Read more
RPROP: Update-values lower limit Δmin. It must be positive. Default value is FLT_EPSILON. Read more
RPROP: Update-values upper limit Δmax. It must be >1. Default value is 50. Read more
ANNEAL: Update initial temperature. It must be >=0. Default value is 10. Read more
ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1. Read more
ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95. Read more
ANNEAL: Update iteration per step. It must be >0. Default value is 10. Read more
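The RPROP parameters listed above (initial update-value, increase/decrease factors, lower/upper limits) control the step-size adaptation rule of Riedmiller and Braun's algorithm. A minimal standalone sketch of that rule using the default values from this page; this illustrates the technique only, it is not the opencv API:

```rust
// Core RPROP update-value adaptation: grow the step while the gradient
// keeps its sign, shrink it on a sign flip, clamp to [DW_MIN, DW_MAX].
fn rprop_step(delta: f64, prev_grad: f64, grad: f64) -> f64 {
    const DW_PLUS: f64 = 1.2;         // increase factor η+, must be > 1
    const DW_MINUS: f64 = 0.5;        // decrease factor η−, must be < 1
    const DW_MIN: f64 = f64::EPSILON; // update-values lower limit
    const DW_MAX: f64 = 50.0;         // update-values upper limit
    let s = prev_grad * grad;
    if s > 0.0 {
        (delta * DW_PLUS).min(DW_MAX) // same sign: grow the step
    } else if s < 0.0 {
        (delta * DW_MINUS).max(DW_MIN) // sign flip: shrink the step
    } else {
        delta // zero gradient: keep the step unchanged
    }
}

fn main() {
    let d0 = 0.1; // initial update-value Δ0 (default from this page)
    let d1 = rprop_step(d0, 1.0, 1.0); // same sign: grows by factor 1.2
    let d2 = rprop_step(d1, 1.0, -1.0); // sign flip: shrinks by factor 0.5
    println!("{d1} {d2}");
}
```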
Apply high-dimensional filtering using adaptive manifolds. Read more
See also Read more
See also Read more
See also Read more
See also Read more
See also Read more
See also Read more
Detects keypoints in the image using the wrapped detector and performs affine adaptation to augment them with their elliptic regions. Read more
Detects keypoints and computes descriptors for their surrounding regions, after warping them into circles. Read more
Detects keypoints in the image using the wrapped detector and performs affine adaptation to augment them with their elliptic regions. Read more
Detects keypoints and computes descriptors for their surrounding regions, after warping them into circles. Read more
Clears the algorithm state
Reads algorithm parameters from a file storage
Stores algorithm parameters in a file storage
Simplified API for language bindings. Stores algorithm parameters in a file storage. Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read)
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Stores algorithm parameters in a file storage
simplified API for language bindings Stores algorithm parameters in a file storage Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read
Aligns images Read more
Short version of process that doesn’t take extra arguments. Read more
Calculates the shift between two images, i.e. how to shift the second image to align it with the first. Read more
Helper function that shifts a Mat, filling new regions with zeros. Read more
Computes the median threshold and exclude bitmaps of the given image. Read more
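The median-threshold-bitmap step can be sketched in plain Rust (a hedged illustration of the idea, not the opencv AlignMTB implementation; the flat pixel slice and the `exclude_range` parameter are assumptions):

```rust
// Sketch: pixels above the median become 1 in the threshold bitmap (tb);
// pixels too close to the median are masked out in the exclude bitmap
// (eb = false means "excluded from comparison").
fn median_threshold_bitmaps(pixels: &[u8], exclude_range: u8) -> (Vec<bool>, Vec<bool>) {
    let mut sorted = pixels.to_vec();
    sorted.sort_unstable();
    let median = sorted[sorted.len() / 2];
    let tb: Vec<bool> = pixels.iter().map(|&p| p > median).collect();
    let eb: Vec<bool> = pixels
        .iter()
        .map(|&p| (p as i16 - median as i16).abs() > exclude_range as i16)
        .collect();
    (tb, eb)
}

fn main() {
    let img = [10u8, 200, 90, 100, 110, 250, 5, 120, 100];
    let (tb, eb) = median_threshold_bitmaps(&img, 4);
    println!("{:?}", tb);
    println!("{:?}", eb);
}
```

Comparing threshold bitmaps instead of raw intensities makes the alignment robust to the exposure differences between bracketed shots.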
C++ default parameters Read more
Returns Read more
Computes features from the input image. Read more
Set detection threshold. Read more
Set detection octaves. Read more
Backend identifier.
Target identifier.
Transfer data to CPU host memory.
Indicates that the actual data is on the CPU.
Backend identifier.
Target identifier.
Computes a foreground mask. Read more
C++ default parameters Read more
Sets the number of frames with same pixel color to consider stable.
Sets the maximum allowed credit for a pixel in history.
Sets if we’re giving a pixel credit for being stable for a long time.
Sets if we’re parallelizing the algorithm.
Returns number of frames with same pixel color to consider stable.
Returns maximum allowed credit for a pixel in history.
Returns if we’re giving a pixel credit for being stable for a long time.
Returns if we’re parallelizing the algorithm.
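The stability-credit parameters above can be illustrated with a hedged per-pixel sketch (not the actual BackgroundSubtractorCNT implementation; the value-similarity tolerance of 10 is an assumption):

```rust
// Sketch: a pixel earns one "credit" per frame its value stays nearly
// constant, capped at max_stability, and counts as background once its
// credit reaches min_stability. A sudden change resets the credit.
struct PixelState {
    last_value: u8,
    stability: u32,
}

// Returns true when the pixel is classified as foreground.
fn update_pixel(state: &mut PixelState, value: u8, min_stability: u32, max_stability: u32) -> bool {
    if value.abs_diff(state.last_value) <= 10 {
        state.stability = (state.stability + 1).min(max_stability);
    } else {
        state.stability = 0;
        state.last_value = value;
    }
    state.stability < min_stability
}

fn main() {
    let mut st = PixelState { last_value: 50, stability: 0 };
    for _ in 0..20 {
        update_pixel(&mut st, 52, 15, 100); // stable frames accumulate credit
    }
    assert!(!update_pixel(&mut st, 51, 15, 100)); // enough credit: background
    assert!(update_pixel(&mut st, 200, 15, 100)); // sudden change: foreground
}
```

Capping the credit at the maximum keeps a long-stable pixel from taking arbitrarily long to react when the scene genuinely changes.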
Computes a background image. Read more
Sets total number of distinct colors to maintain in histogram.
Sets the learning rate of the algorithm.
Sets the number of frames used to initialize background model.
Sets the parameter used for quantization of color-space
Sets the prior probability that each individual pixel is a background pixel.
Sets the kernel radius used for morphological operations
Sets the value of decision threshold.
Sets the status of background model update
Sets the minimum value taken on by pixels in image sequence.
Sets the maximum value taken on by pixels in image sequence.
Returns total number of distinct colors to maintain in histogram.
Returns the learning rate of the algorithm. Read more
Returns the number of frames used to initialize background model.
Returns the parameter used for quantization of color-space. Read more
Returns the prior probability that each individual pixel is a background pixel.
Returns the kernel radius used for morphological operations
Returns the value of decision threshold. Read more
Returns the status of background model update
Returns the minimum value taken on by pixels in image sequence. Usually 0.
Returns the maximum value taken on by pixels in image sequence. e.g. 1.0 or 255.
C++ default parameters Read more
Sets the number of last frames that affect the background model
Sets the number of data samples in the background model. Read more
Sets the threshold on the squared distance
Sets the k in the kNN. How many nearest neighbours need to match.
Enables or disables shadow detection
Sets the shadow value
Sets the shadow threshold
Returns the number of last frames that affect the background model
Returns the number of data samples in the background model
Returns the threshold on the squared distance between the pixel and the sample Read more
Returns the number of neighbours, the k in the kNN. Read more
Returns the shadow detection flag Read more
Returns the shadow value Read more
Returns the shadow threshold Read more
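The kNN parameters above (dist2Threshold and the k in the kNN) describe a decision rule that can be sketched as follows (a hedged, single-channel illustration, not the opencv BackgroundSubtractorKNN implementation):

```rust
// Sketch: a pixel is background if at least k samples of its history lie
// within dist2_threshold (a squared distance) of the current value.
fn is_background(sample_history: &[f32], value: f32, dist2_threshold: f32, k: usize) -> bool {
    let matches = sample_history
        .iter()
        .filter(|&&s| (s - value) * (s - value) <= dist2_threshold)
        .count();
    matches >= k
}

fn main() {
    let history = [100.0, 102.0, 98.0, 150.0, 101.0];
    // four samples within distance 5 of 100.0 -> background with k = 2
    assert!(is_background(&history, 100.0, 25.0, 2));
    // only one sample near 150.0 -> foreground with k = 2
    assert!(!is_background(&history, 150.0, 25.0, 2));
}
```

Raising k makes the classifier stricter (more pixels flagged foreground); raising dist2Threshold makes it more tolerant of noise.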
C++ default parameters Read more
Sets the number of last frames that affect the background model
Sets the number of gaussian components in the background model. Read more
Sets the “background ratio” parameter of the algorithm
Sets the variance threshold for the pixel-model match
Sets the variance threshold for the pixel-model match used for new mixture component generation
Sets the initial variance of each gaussian component
Sets the complexity reduction threshold
Enables or disables shadow detection
Sets the shadow value
Sets the shadow threshold
Computes a foreground mask. Read more
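The MOG2 parameters above (variance threshold, background ratio) govern a per-component match test that can be sketched as a hedged, single-channel illustration (not the opencv BackgroundSubtractorMOG2 implementation):

```rust
// Sketch: a pixel matches the background when its squared,
// variance-normalized distance to the mean of one of the dominant
// gaussian components falls below the variance threshold.
struct Gaussian {
    mean: f32,
    variance: f32,
    weight: f32,
}

fn matches_background(components: &[Gaussian], value: f32, var_threshold: f32, background_ratio: f32) -> bool {
    let mut cumulative_weight = 0.0;
    for g in components {
        // only the dominant components (up to background_ratio) model background
        if cumulative_weight > background_ratio {
            break;
        }
        let d2 = (value - g.mean) * (value - g.mean) / g.variance;
        if d2 < var_threshold {
            return true;
        }
        cumulative_weight += g.weight;
    }
    false
}

fn main() {
    let comps = [
        Gaussian { mean: 100.0, variance: 16.0, weight: 0.8 },
        Gaussian { mean: 200.0, variance: 16.0, weight: 0.2 },
    ];
    assert!(matches_background(&comps, 104.0, 16.0, 0.9)); // near dominant mean
    assert!(!matches_background(&comps, 150.0, 16.0, 0.9)); // far from both means
}
```

The background ratio decides how much of the mixture's total weight is considered "background"; components beyond it model transient foreground colors.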
Returns the number of last frames that affect the background model
Returns the number of gaussian components in the background model
Returns the “background ratio” parameter of the algorithm Read more
Returns the variance threshold for the pixel-model match Read more
Returns the variance threshold for the pixel-model match used for new mixture component generation Read more
Returns the initial variance of each gaussian component
Returns the complexity reduction threshold Read more
Returns the shadow detection flag Read more
Returns the shadow value Read more
Returns the shadow threshold Read more
C++ default parameters Read more
See also Read more
For every input query descriptor, retrieve the best matching one from a dataset provided by the user or from the one internal to the class. Read more
For every input query descriptor, retrieve the best k matching ones from a dataset provided by the user or from the one internal to the class. Read more
For every input query descriptor, retrieve, from a dataset provided by the user or from the one internal to the class, all the descriptors that are no farther than maxDist from the input query. Read more
Store locally new descriptors to be inserted into the dataset, without updating the dataset. Read more
Update the dataset by inserting into it all descriptors that were stored locally by the add function. Read more
Clear the dataset and internal data.
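A radius query over binary descriptors, of the kind this matcher answers, can be sketched in plain Rust (a hedged illustration with 32-bit toy descriptors and Hamming distance, not the opencv matcher implementation):

```rust
// Sketch: for each query descriptor, return the indices of all dataset
// descriptors whose Hamming distance is at most max_dist.
fn radius_match(queries: &[u32], dataset: &[u32], max_dist: u32) -> Vec<Vec<usize>> {
    queries
        .iter()
        .map(|q| {
            dataset
                .iter()
                .enumerate()
                .filter(|(_, d)| (q ^ *d).count_ones() <= max_dist)
                .map(|(i, _)| i)
                .collect()
        })
        .collect()
}

fn main() {
    let dataset = [0b1010u32, 0b1011, 0b0000];
    let matches = radius_match(&[0b1010], &dataset, 1);
    assert_eq!(matches[0], vec![0, 1]); // Hamming distances 0, 1, 2
}
```

The XOR-and-popcount distance is what makes binary descriptors cheap to match compared to float descriptors with L2 distance.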
Get current number of octaves
Set number of octaves Read more
Get current width of bands
Set width of bands Read more
Get current reduction ratio (used in Gaussian pyramids)
Set reduction ratio (used in Gaussian pyramids) Read more
Read parameters from a FileNode object and store them Read more
Requires line detection Read more
Store parameters to a FileStorage object Read more
Requires line detection Read more
Requires descriptors computation Read more
Requires descriptors computation Read more
Return descriptor size
Return data type
Returns norm mode.
Create BlockMeanHash object Read more
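The block-mean-hash idea can be sketched in plain Rust (a hedged illustration, not the opencv BlockMeanHash implementation; the nested-Vec image and the thresholding against the mean of block means are assumptions):

```rust
// Sketch: split the image into fixed-size blocks and emit one hash bit
// per block, set when the block mean exceeds the mean of all block means.
fn block_mean_hash(img: &[Vec<u8>], block: usize) -> Vec<bool> {
    let (h, w) = (img.len(), img[0].len());
    let mut means = Vec::new();
    for by in (0..h).step_by(block) {
        for bx in (0..w).step_by(block) {
            let mut sum = 0u32;
            let mut count = 0u32;
            for row in img.iter().take((by + block).min(h)).skip(by) {
                for &px in row.iter().take((bx + block).min(w)).skip(bx) {
                    sum += px as u32;
                    count += 1;
                }
            }
            means.push(sum as f64 / count as f64);
        }
    }
    let overall = means.iter().sum::<f64>() / means.len() as f64;
    means.iter().map(|&m| m > overall).collect()
}

fn main() {
    let img = vec![
        vec![200u8, 200, 10, 10],
        vec![200, 200, 10, 10],
        vec![10, 10, 200, 200],
        vec![10, 10, 200, 200],
    ];
    // block means: 200, 10, 10, 200; overall mean 105 -> bits 1, 0, 0, 1
    assert_eq!(block_mean_hash(&img, 2), vec![true, false, false, true]);
}
```

Because the hash depends only on coarse block means, small shifts and compression noise mostly leave the bits unchanged, which is what makes it useful for near-duplicate image search.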
Array of object points of all the marker corners in the board. Each marker includes its 4 corners in this order: Read more
The dictionary of markers employed for this board.
Vector of the identifiers of the markers in the board (same size as objPoints). The identifiers refer to the board dictionary. Read more
Coordinate of the bottom-right corner of the board; set when calling the function create().
Sets the ids vector. Read more
Array of object points of all the marker corners in the board. Each marker includes its 4 corners in this order: Read more
Vector of the identifiers of the markers in the board (same size as objPoints). The identifiers refer to the board dictionary. Read more
Coordinate of the bottom-right corner of the board; set when calling the function create().
Type of the boosting algorithm. See Boost::Types. Default value is Boost::REAL. Read more
The number of weak classifiers. Default value is 100. Read more
A threshold between 0 and 1 used to save computational time. Samples with summary weight ≤ 1 - weight_trim_rate do not participate in the next iteration of training. Set this parameter to 0 to turn off this functionality. Default value is 0.95. Read more
Type of the boosting algorithm. See Boost::Types. Default value is Boost::REAL. Read more
The number of weak classifiers. Default value is 100. Read more
A threshold between 0 and 1 used to save computational time. Samples with summary weight ≤ 1 - weight_trim_rate do not participate in the next iteration of training. Set this parameter to 0 to turn off this functionality. Default value is 0.95. Read more
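The weight-trimming behaviour described above can be sketched as follows. This is hypothetical standalone Python illustrating the idea (keep the heaviest samples until the requested fraction of total weight is covered), not the OpenCV implementation:

```python
# Sketch of boosting weight trimming: samples carrying the lowest
# (1 - rate) fraction of total weight are skipped in the next iteration.

def trim_indices(weights, rate):
    """Return sorted indices of samples kept for the next boosting iteration."""
    if rate <= 0:  # 0 turns trimming off: keep every sample
        return list(range(len(weights)))
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    total = sum(weights)
    kept, acc = [], 0.0
    for i in order:
        kept.append(i)
        acc += weights[i]
        if acc >= rate * total:  # covered the requested weight fraction
            break
    return sorted(kept)

print(trim_indices([4, 3, 2, 1], 0.9))  # -> [0, 1, 2]: the lightest sample is dropped
```

With `rate = 0.9` and total weight 10, samples are accumulated by descending weight (4, then 7, then 9 ≥ 9.0), so only the lightest sample is trimmed.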
Wrap the specified raw pointer Read more
Return the underlying raw pointer while consuming this wrapper. Read more
Return the underlying raw pointer. Read more
Return the underlying mutable raw pointer Read more
Equalizes the histogram of a grayscale image using Contrast Limited Adaptive Histogram Equalization. Read more
Sets threshold for contrast limiting. Read more
Sets size of grid for histogram equalization. Input image will be divided into equally sized rectangular tiles. Read more
Equalizes the histogram of a grayscale image using Contrast Limited Adaptive Histogram Equalization. Read more
Sets threshold for contrast limiting. Read more
Sets size of grid for histogram equalization. Input image will be divided into equally sized rectangular tiles. Read more
Returns threshold value for contrast limiting.
Returns the Size that defines the number of tiles in rows and columns.
Returns threshold value for contrast limiting.
Returns the Size that defines the number of tiles in rows and columns.
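The contrast-limiting step behind the clip threshold above can be illustrated with a minimal sketch (hypothetical standalone Python, not the OpenCV implementation): each tile's histogram is clipped at the limit and the clipped excess is redistributed uniformly before the equalization mapping is built.

```python
# Sketch of CLAHE's contrast limiting on one tile histogram: bins above
# clip_limit are truncated and the removed mass is spread over all bins.

def clip_histogram(hist, clip_limit):
    excess = 0
    clipped = []
    for h in hist:
        if h > clip_limit:
            excess += h - clip_limit
            clipped.append(clip_limit)
        else:
            clipped.append(h)
    bonus = excess // len(hist)  # uniform redistribution of the excess
    return [h + bonus for h in clipped]

print(clip_histogram([50, 3, 2, 1], 10))  # -> [20, 13, 12, 11]
```

The dominant bin is capped at 10 and its excess of 40 counts is shared equally (10 per bin), which is what prevents CLAHE from over-amplifying noise in near-uniform tiles.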
number of lagged non-linearity iterations (inner loop)
number of warping iterations (number of pyramid levels)
number of linear system solver iterations
Equalizes the histogram of a grayscale image using Contrast Limited Adaptive Histogram Equalization. Read more
Finds edges in an image using the Canny86 algorithm. Read more
Finds edges in an image using the Canny86 algorithm. Read more
Computes the cornerness criteria at each image pixel. Read more
Determines strong corners on an image. Read more
Calculates a dense optical flow. Read more
Calculates a dense optical flow. Read more
Calculates a dense optical flow. Read more
Calculates a dense optical flow. Read more
Adds descriptors to train a descriptor collection. Read more
Clears the train descriptor collection.
Trains a descriptor matcher. Read more
Finds the best match for each descriptor from a query set (blocking version). Read more
Finds the best match for each descriptor from a query set (blocking version). Read more
Finds the best match for each descriptor from a query set (asynchronous version). Read more
Finds the best match for each descriptor from a query set (asynchronous version). Read more
Converts matches array from internal representation to standard matches vector. Read more
Finds the k best matches for each descriptor from a query set (blocking version). Read more
Finds the k best matches for each descriptor from a query set (blocking version). Read more
Finds the k best matches for each descriptor from a query set (asynchronous version). Read more
Finds the k best matches for each descriptor from a query set (asynchronous version). Read more
Converts matches array from internal representation to standard matches vector. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance (blocking version). Read more
For each query descriptor, finds the training descriptors not farther than the specified distance (blocking version). Read more
For each query descriptor, finds the training descriptors not farther than the specified distance (asynchronous version). Read more
For each query descriptor, finds the training descriptors not farther than the specified distance (asynchronous version). Read more
Converts matches array from internal representation to standard matches vector. Read more
Returns true if the descriptor matcher supports masking permissible matches.
Returns a constant link to the train descriptor collection.
Returns true if there are no train descriptors in the collection.
Refines a disparity map using joint bilateral filtering. Read more
truncation of data continuity
truncation of disparity continuity
filter range sigma
Detects keypoints in an image. Read more
Computes the descriptors for a set of keypoints detected in an image. Read more
Detects keypoints and computes the descriptors. Read more
Converts keypoints array from internal representation to standard vector.
Detects keypoints in an image. Read more
Computes the descriptors for a set of keypoints detected in an image. Read more
Detects keypoints and computes the descriptors. Read more
Converts keypoints array from internal representation to standard vector.
Finds circles in a grayscale image using the Hough transform. Read more
Finds lines in a binary image using the classical Hough transform. Read more
Downloads results from cuda::HoughLinesDetector::detect to host memory. Read more
Finds line segments in a binary image using the probabilistic Hough transform. Read more
Calculates Optical Flow using NVIDIA Optical Flow SDK. Read more
Releases all buffers, contexts and device pointers.
Calculates Optical Flow using NVIDIA Optical Flow SDK. Read more
Releases all buffers, contexts and device pointers.
Returns grid size of output buffer as per the hardware’s capability.
Returns grid size of output buffer as per the hardware’s capability.
The NVIDIA optical flow hardware generates flow vectors at granularity gridSize, which can be queried via the function getGridSize(). The upSampler() helper function converts the hardware-generated flow vectors to a dense representation (1 flow vector for each pixel) using the nearest-neighbour upsampling method. Read more
The convertToFloat() helper function converts the hardware-generated flow vectors to a floating-point representation (1 flow vector per gridSize). gridSize can be queried via the function getGridSize(). Read more
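Nearest-neighbour upsampling from the per-grid flow field to a dense field can be sketched as follows (hypothetical standalone Python over a 2-D list of per-cell flow vectors; not the SDK helper itself): every pixel inside a gridSize × gridSize cell receives that cell's flow vector.

```python
# Sketch of nearest-neighbour flow upsampling: replicate each grid cell's
# flow vector over a grid_size x grid_size block of pixels.

def upsample(grid_flow, grid_size):
    """grid_flow: 2-D list of (dx, dy) vectors, one per grid cell."""
    dense = []
    for row in grid_flow:
        dense_row = [vec for vec in row for _ in range(grid_size)]
        dense.extend([dense_row] * grid_size)  # repeat the row vertically
    return dense

flow = [[(1, 0), (0, 2)]]     # a 1x2 grid of flow vectors
print(upsample(flow, 2))      # a 2x4 dense field
```

For a 1×2 grid and grid_size 2 this yields a 2×4 field where each vector appears in a 2×2 block.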
if true, image will be blurred before descriptors calculation
Time step of the numerical scheme.
Weight parameter for the data term, attachment parameter. This is the most relevant parameter, which determines the smoothness of the output. The smaller this parameter is, the smoother the solutions we obtain. It depends on the range of motions of the images, so its value should be adapted to each image sequence. Read more
Weight parameter for (u - v)^2, tightness parameter. It serves as a link between the attachment and the regularization terms. In theory, it should have a small value in order to maintain both parts in correspondence. The method is stable for a large range of values of this parameter. Read more
parameter used for motion estimation. It adds a variable allowing for illumination variations. Set this parameter to 1 if you have varying illumination. See: Chambolle et al., “A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging”, Journal of Mathematical Imaging and Vision, May 2011, Vol. 40, issue 1, pp. 120-145 Read more
Number of scales used to create the pyramid of images.
Number of warpings per scale. Represents the number of times that I1(x+u0) and grad( I1(x+u0) ) are computed per scale. This is a parameter that assures the stability of the method. It also affects the running time, so it is a compromise between speed and accuracy. Read more
Stopping criterion threshold used in the numerical scheme, which is a trade-off between precision and running time. A small value will yield more accurate solutions at the expense of a slower convergence. Read more
Stopping criterion iterations number used in the numerical scheme.
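The parameters above (the data weight lambda, the tightness theta, the number of scales and warps, and the stopping threshold) correspond, as the Zach et al. TV-L1 formulation is commonly written, to the convex relaxation of the optical flow energy; this is a sketch of the standard form, not text from the crate documentation:

```latex
\min_{u,v}\;\int_\Omega \Big( |\nabla u| \;+\; \tfrac{1}{2\theta}\,(u-v)^2 \;+\; \lambda\,|\rho(v)| \Big)\,dx,
\qquad
\rho(v) \;=\; I_1(x+u_0) \;+\; \big\langle \nabla I_1(x+u_0),\, v-u_0 \big\rangle \;-\; I_0(x)
```

Here lambda weights the data (attachment) term, theta couples the flow u to the auxiliary field v, and the linearised residual rho(v) around u_0 is why I1(x+u0) and its gradient are recomputed once per warping at each scale.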
Calculates a sparse optical flow. Read more
Enables the stereo correspondence operator that finds the disparity for the specified data cost. Read more
Enables the stereo correspondence operator that finds the disparity for the specified data cost. Read more
Enables the stereo correspondence operator that finds the disparity for the specified data cost. Read more
Enables the stereo correspondence operator that finds the disparity for the specified data cost. Read more
number of BP iterations on each level
number of levels
truncation of data cost
data weight
truncation of discontinuity cost
discontinuity single jump
type for messages (CV_16SC1 or CV_32FC1)
number of BP iterations on each level
number of levels
truncation of data cost
data weight
truncation of discontinuity cost
discontinuity single jump
type for messages (CV_16SC1 or CV_32FC1)
number of active disparity on the first level
Computes disparity map for the specified stereo pair Read more
Computes disparity map with specified CUDA Stream Read more
Computes a proximity map for a raster template and an image where the template is searched for. Read more
Recovers inverse camera response. Read more
Recovers inverse camera response. Read more
Maximum possible object size. Objects larger than that are ignored. Used for second signature and supported only for LBP cascades. Read more
Minimum possible object size. Objects smaller than that are ignored.
Parameter specifying how much the image size is reduced at each image scale.
Parameter specifying how many neighbors each candidate rectangle should have to retain it. Read more
Detects objects of different sizes in the input image. Read more
Converts objects array from internal representation to standard vector. Read more
Draw a ChArUco board Read more
Resets the algorithm Read more
Process next depth frame Read more
Get current parameters
Renders a volume into an image Read more
Renders a volume into an image Read more
Gets points, normals and colors of current 3d mesh Read more
Gets points of current 3d mesh Read more
Calculates normals for given points Read more
Get current pose in voxel space
frame size in pixels
rgb frame size in pixels
camera intrinsics
rgb camera intrinsics
pre-scale per 1 meter for input values Read more
Depth sigma in meters for bilateral smooth
Spatial sigma in pixels for bilateral smooth
Kernel size in pixels for bilateral smooth
Number of pyramid levels for ICP
Resolution of voxel space Read more
Size of voxel in meters
Minimal camera movement in meters Read more
initial volume pose in meters
distance to truncate in meters Read more
max number of frames per voxel Read more
A length of one raycast step Read more
light pose for rendering in meters
distance threshold for ICP in meters
angle threshold for ICP in radians
number of ICP iterations for each pyramid level
Threshold for depth truncation in meters Read more
Set Initial Volume Pose: sets the initial pose of the TSDF volume. Read more
Set Initial Volume Pose: sets the initial pose of the TSDF volume. Read more
frame size in pixels
rgb frame size in pixels
camera intrinsics
rgb camera intrinsics
pre-scale per 1 meter for input values Read more
Depth sigma in meters for bilateral smooth
Spatial sigma in pixels for bilateral smooth
Kernel size in pixels for bilateral smooth
Number of pyramid levels for ICP
Resolution of voxel space Read more
Size of voxel in meters
Minimal camera movement in meters Read more
initial volume pose in meters
distance to truncate in meters Read more
max number of frames per voxel Read more
A length of one raycast step Read more
light pose for rendering in meters
distance threshold for ICP in meters
angle threshold for ICP in radians
number of ICP iterations for each pyramid level
Threshold for depth truncation in meters Read more
Add zero padding in case of concatenation of blobs with different spatial sizes. Read more
Add zero padding in case of concatenation of blobs with different spatial sizes. Read more
Fit two closed curves using Fourier descriptors. More details in PersoonFu1977 and BergerRaghunathan1998 Read more
Fit two closed curves using Fourier descriptors. More details in PersoonFu1977 and BergerRaghunathan1998 Read more
set number of Fourier descriptors used in estimateTransformation Read more
set number of Fourier descriptors when estimateTransformation is used with vector<Point> Read more
Returns Read more
Returns Read more
Computes a convolution (or cross-correlation) of two images. Read more
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Read more
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Read more
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Read more
Parameters Read more
Parameters Read more
Parameters Read more
Parameters Read more
Computes an FFT of a given image. Read more
Finest level of the Gaussian pyramid on which the flow is computed (zero level corresponds to the original image resolution). The final flow is obtained by bilinear upscaling. Read more
Size of an image patch for matching (in pixels). Normally, default 8x8 patches work well enough in most cases. Read more
Stride between neighbor patches. Must be less than patch size. Lower values correspond to higher flow quality. Read more
Maximum number of gradient descent iterations in the patch inverse search stage. Higher values may improve quality in some cases. Read more
Number of fixed point iterations of variational refinement per scale. Set to zero to disable variational refinement completely. Higher values will typically result in more smooth and high-quality flow. Read more
Weight of the smoothness term Read more
Weight of the color constancy term Read more
Weight of the gradient constancy term Read more
Whether to use mean-normalization of patches when computing patch distance. It is turned on by default as it typically provides a noticeable quality boost because of increased robustness to illumination variations. Turn it off if you are certain that your sequence doesn’t contain any changes in illumination. Read more
Whether to use spatial propagation of good optical flow vectors. This option is turned on by default, as it tends to work better on average and can sometimes help recover from major errors introduced by the coarse-to-fine scheme employed by the DIS optical flow algorithm. Turning this option off can make the output flow field a bit smoother, however. Read more
Finest level of the Gaussian pyramid on which the flow is computed (zero level corresponds to the original image resolution). The final flow is obtained by bilinear upscaling. Read more
Size of an image patch for matching (in pixels). Normally, default 8x8 patches work well enough in most cases. Read more
Stride between neighbor patches. Must be less than patch size. Lower values correspond to higher flow quality. Read more
Maximum number of gradient descent iterations in the patch inverse search stage. Higher values may improve quality in some cases. Read more
Number of fixed point iterations of variational refinement per scale. Set to zero to disable variational refinement completely. Higher values will typically result in more smooth and high-quality flow. Read more
Weight of the smoothness term Read more
Weight of the color constancy term Read more
Weight of the gradient constancy term Read more
Whether to use mean-normalization of patches when computing patch distance. It is turned on by default as it typically provides a noticeable quality boost because of increased robustness to illumination variations. Turn it off if you are certain that your sequence doesn’t contain any changes in illumination. Read more
Whether to use spatial propagation of good optical flow vectors. This option is turned on by default, as it tends to work better on average and can sometimes help recover from major errors introduced by the coarse-to-fine scheme employed by the DIS optical flow algorithm. Turning this option off can make the output flow field a bit smoother, however. Read more
Find rectangular regions in the given image that are likely to contain objects of loaded classes (models) and corresponding confidence levels. Read more
Return the class (model) names that were passed in the constructor or the load method, or extracted from model filenames in those methods. Read more
Return a count of loaded models (classes).
Produce domain transform filtering operation on source image. Read more
Cluster possible values of a categorical variable into K<=maxCategories clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than maxCategories values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including our implementation) try to find a sub-optimal split in this case by clustering all the samples into maxCategories clusters, that is, some categories are merged together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases. Default value is 10. Read more
The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The root node has zero depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure @ref ml_intro_trees “here”), and/or if the tree is pruned. Default value is INT_MAX. Read more
If the number of samples in a node is less than this parameter then the node will not be split. Read more
If CVFolds > 1 then the algorithm prunes the built decision tree using a K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. Read more
If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. Read more
If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. Read more
If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Read more
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f Read more
The array of a priori class probabilities, sorted by the class label value. Read more
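The exponential blow-up that motivates maxCategories is easy to quantify: the number of distinct two-way partitions of a categorical variable with n values is 2^(n-1) - 1, so exhaustive subset search becomes infeasible quickly. A quick illustration (hypothetical standalone Python):

```python
# Number of candidate binary splits of a categorical variable with n values:
# each non-trivial subset and its complement form one split, hence 2**(n-1) - 1.

def n_binary_splits(n):
    return 2 ** (n - 1) - 1

for n in (4, 10, 25):
    print(n, n_binary_splits(n))
# 10 values -> 511 candidate splits; 25 values -> 16,777,215
```

Clustering the values down to at most maxCategories (default 10) caps the search at a few hundred candidate splits regardless of the variable's cardinality.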
Cluster possible values of a categorical variable into K<=maxCategories clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than maxCategories values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including our implementation) try to find a sub-optimal split in this case by clustering all the samples into maxCategories clusters, that is, some categories are merged together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases. Default value is 10. Read more
The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The root node has zero depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure @ref ml_intro_trees “here”), and/or if the tree is pruned. Default value is INT_MAX. Read more
If the number of samples in a node is less than this parameter then the node will not be split. Read more
If CVFolds > 1 then the algorithm prunes the built decision tree using a K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. Read more
If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. Read more
If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. Read more
If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Read more
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f Read more
The array of a priori class probabilities, sorted by the class label value. Read more
Cluster possible values of a categorical variable into K<=maxCategories clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than maxCategories values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including our implementation) try to find a sub-optimal split in this case by clustering all the samples into maxCategories clusters, that is, some categories are merged together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases. Default value is 10. Read more
The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The root node has zero depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure @ref ml_intro_trees “here”), and/or if the tree is pruned. Default value is INT_MAX. Read more
If the number of samples in a node is less than this parameter then the node will not be split. Read more
If CVFolds > 1 then the algorithm prunes the built decision tree using a K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. Read more
If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. Read more
If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. Read more
If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Read more
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f Read more
The array of a priori class probabilities, sorted by the class label value. Read more
Cluster possible values of a categorical variable into K<=maxCategories clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than maxCategories values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including our implementation) try to find a sub-optimal split in this case by clustering all the samples into maxCategories clusters, that is, some categories are merged together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases. Default value is 10. Read more
The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The root node has zero depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure @ref ml_intro_trees “here”), and/or if the tree is pruned. Default value is INT_MAX. Read more
If the number of samples in a node is less than this parameter then the node will not be split. Read more
If CVFolds > 1 then the algorithm prunes the built decision tree using a K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. Read more
If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. Read more
If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. Read more
If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Read more
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f Read more
The array of a priori class probabilities, sorted by the class label value. Read more
Returns indices of root nodes
Returns all the nodes Read more
Returns all the splits Read more
Returns all the bitsets for categorical splits Read more
Cluster possible values of a categorical variable into K<=maxCategories clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than maxCategories values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including our implementation) try to find a sub-optimal split in this case by clustering all the samples into maxCategories clusters, that is, some categories are merged together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases. Default value is 10. Read more
The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The root node has zero depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure @ref ml_intro_trees “here”), and/or if the tree is pruned. Default value is INT_MAX. Read more
If the number of samples in a node is less than this parameter then the node will not be split. Read more
If CVFolds > 1 then the algorithm prunes the built decision tree using a K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. Read more
If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. Read more
If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. Read more
If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Read more
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f Read more
The array of a priori class probabilities, sorted by the class label value. Read more
Returns indices of root nodes
Returns all the nodes Read more
Returns all the splits Read more
Returns all the bitsets for categorical splits Read more
Cluster possible values of a categorical variable into K<=maxCategories clusters to find a suboptimal split. If a discrete variable, on which the training procedure tries to make a split, takes more than maxCategories values, the precise best subset estimation may take a very long time because the algorithm is exponential. Instead, many decision tree engines (including our implementation) try to find a sub-optimal split in this case by clustering all the samples into maxCategories clusters, that is, some categories are merged together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification the optimal split can be found efficiently without employing clustering, thus the parameter is not used in these cases. Default value is 10. Read more
The maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The root node has zero depth. The actual depth may be smaller if the other termination criteria are met (see the outline of the training procedure @ref ml_intro_trees “here”), and/or if the tree is pruned. Default value is INT_MAX. Read more
If the number of samples in a node is less than this parameter then the node will not be split. Read more
If CVFolds > 1 then the algorithm prunes the built decision tree using a K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. Read more
If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. Read more
If true then pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. Read more
If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Read more
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f Read more
The array of a priori class probabilities, sorted by the class label value. Read more
Returns indices of root nodes
Returns all the nodes Read more
Returns all the splits Read more
Returns all the bitsets for categorical splits Read more
Returns the “default value” for a type. Read more
Calculates an optical flow. Read more
Releases all inner buffers.
Calculates an optical flow. Read more
Releases all inner buffers.
Calculates an optical flow. Read more
Releases all inner buffers.
Calculates an optical flow. Read more
Releases all inner buffers.
Calculates an optical flow. Read more
Releases all inner buffers.
Calculates an optical flow. Read more
Releases all inner buffers.
Configuration of the RLOF algorithm. Read more
Threshold for the forward backward confidence check. For each grid point inline formula a motion vector inline formula is computed. If the forward backward error block formula is larger than the threshold given by this function, then the motion vector will not be used by the following vector field interpolation. inline formula denotes the backward flow. Note, the forward backward test will only be applied if the threshold > 0. This may result in a doubled runtime for the motion estimation. see also: getForwardBackward, setGridStep Read more
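The forward-backward test can be sketched in plain Rust (the flow fields here are stand-in closures, and `forward_backward_error` is a hypothetical helper, not part of the crate):

```rust
// Forward-backward error at point p: follow the forward flow, then the
// backward flow from the displaced point; a consistent pair nearly cancels.
fn forward_backward_error(
    p: (f32, f32),
    forward: impl Fn((f32, f32)) -> (f32, f32),
    backward: impl Fn((f32, f32)) -> (f32, f32),
) -> f32 {
    let u = forward(p);
    let q = (p.0 + u.0, p.1 + u.1);
    let v = backward(q);
    // For perfectly consistent flow, v == -u, so the residual is zero.
    ((u.0 + v.0).powi(2) + (u.1 + v.1).powi(2)).sqrt()
}

fn main() {
    // Consistent flow: forward +1px in x, backward -1px in x.
    let err = forward_backward_error((0.0, 0.0), |_| (1.0, 0.0), |_| (-1.0, 0.0));
    assert!(err < 1e-6);
    // Inconsistent flow: this vector would be rejected for any threshold < 2.0.
    let bad = forward_backward_error((0.0, 0.0), |_| (1.0, 0.0), |_| (1.0, 0.0));
    assert!((bad - 2.0).abs() < 1e-6);
    println!("ok");
}
```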
Size of the grid to spawn the motion vectors. For each grid point a motion vector is computed. Some motion vectors will be removed due to the forward backward threshold (if set > 0). The rest will be the base of the vector field interpolation. see also: getForwardBackward, setGridStep, getGridStep Read more
Interpolation used to compute the dense optical flow. Two interpolation algorithms are supported: INTERP_GEO applies the fast geodesic interpolation, see Geistert2016; INTERP_EPIC_RESIDUAL applies the edge-preserving interpolation, see Revaud2015, Geistert2016. see also: ximgproc::EdgeAwareInterpolator, getInterpolation Read more
see ximgproc::EdgeAwareInterpolator() K value. K is the number of nearest-neighbor matches considered when fitting a locally affine model. Usually it should be around 128. However, lower values would make the interpolation noticeably faster. see also: ximgproc::EdgeAwareInterpolator, setEPICK, getEPICK Read more
see ximgproc::EdgeAwareInterpolator() sigma value. Sigma is a parameter defining how fast the weights decrease in the locally-weighted affine fitting. Higher values can help preserve fine details, lower values can help to get rid of noise in the output flow. see also: ximgproc::EdgeAwareInterpolator, setEPICSigma, getEPICSigma Read more
see ximgproc::EdgeAwareInterpolator() lambda value. Lambda is a parameter defining the weight of the edge-aware term in geodesic distance; it should be in the range of 0 to 1000. see also: ximgproc::EdgeAwareInterpolator, setEPICSigma, getEPICLambda Read more
see ximgproc::EdgeAwareInterpolator(). Sets the respective fastGlobalSmootherFilter() parameter. see also: ximgproc::EdgeAwareInterpolator, ximgproc::fastGlobalSmootherFilter, setFgsLambda, getFgsLambda Read more
see ximgproc::EdgeAwareInterpolator(). Sets the respective fastGlobalSmootherFilter() parameter. see also: ximgproc::EdgeAwareInterpolator, ximgproc::fastGlobalSmootherFilter, setFgsSigma, getFgsSigma Read more
fn set_use_post_proc(&mut self, val: bool) -> Result<()>

Enables ximgproc::fastGlobalSmootherFilter Read more
Enables VariationalRefinement Read more
Parameter to tune the approximate size of the superpixel used for oversegmentation. Read more
Parameter to choose superpixel algorithm variant to use: Read more
Configuration of the RLOF algorithm. Read more
Threshold for the forward backward confidence check. For each grid point inline formula a motion vector inline formula is computed. If the forward backward error block formula is larger than the threshold given by this function, then the motion vector will not be used by the following vector field interpolation. inline formula denotes the backward flow. Note, the forward backward test will only be applied if the threshold > 0. This may result in a doubled runtime for the motion estimation. getForwardBackward, setGridStep Read more
Size of the grid to spawn the motion vectors. For each grid point a motion vector is computed. Some motion vectors will be removed due to the forward backward threshold (if set > 0). The rest will be the base of the vector field interpolation. see also: getForwardBackward, setGridStep Read more
Interpolation used to compute the dense optical flow. Two interpolation algorithms are supported: INTERP_GEO applies the fast geodesic interpolation, see Geistert2016; INTERP_EPIC_RESIDUAL applies the edge-preserving interpolation, see Revaud2015, Geistert2016. see also: ximgproc::EdgeAwareInterpolator, getInterpolation, setInterpolation Read more
see ximgproc::EdgeAwareInterpolator() K value. K is the number of nearest-neighbor matches considered when fitting a locally affine model. Usually it should be around 128. However, lower values would make the interpolation noticeably faster. see also: ximgproc::EdgeAwareInterpolator, setEPICK Read more
see ximgproc::EdgeAwareInterpolator() sigma value. Sigma is a parameter defining how fast the weights decrease in the locally-weighted affine fitting. Higher values can help preserve fine details, lower values can help to get rid of noise in the output flow. see also: ximgproc::EdgeAwareInterpolator, setEPICSigma Read more
see ximgproc::EdgeAwareInterpolator() lambda value. Lambda is a parameter defining the weight of the edge-aware term in geodesic distance; it should be in the range of 0 to 1000. see also: ximgproc::EdgeAwareInterpolator, setEPICSigma Read more
see ximgproc::EdgeAwareInterpolator(). Sets the respective fastGlobalSmootherFilter() parameter. see also: ximgproc::EdgeAwareInterpolator, setFgsLambda Read more
see ximgproc::EdgeAwareInterpolator(). Sets the respective fastGlobalSmootherFilter() parameter. see also: ximgproc::EdgeAwareInterpolator, ximgproc::fastGlobalSmootherFilter, setFgsSigma Read more
fn get_use_post_proc(&self) -> Result<bool>

Enables ximgproc::fastGlobalSmootherFilter Read more
Enables VariationalRefinement Read more
Parameter to tune the approximate size of the superpixel used for oversegmentation. Read more
Parameter to choose superpixel algorithm variant to use: Read more
Initializes some data that is cached for later computation. If this function is not called, it will be called the first time normals are computed. Read more
The resulting type after dereferencing.
Dereferences the value.
Mutably dereferences the value.
Adds descriptors to train a CPU (trainDescCollection) or GPU (utrainDescCollection) descriptor collection. Read more
Clears the train descriptor collections.
Trains a descriptor matcher Read more
Finds the best match for each descriptor from a query set. Read more
Finds the k best matches for each descriptor from a query set. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance. Read more
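A radius match of this kind can be sketched in plain Rust as a brute-force filter over binary descriptors under Hamming distance (all names here are illustrative, not the crate API):

```rust
// Hamming distance between two binary descriptors of equal length.
fn hamming(a: &[u8], b: &[u8]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

// For one query descriptor, return (train_index, distance) for every
// training descriptor not farther than max_distance.
fn radius_match(query: &[u8], train: &[Vec<u8>], max_distance: u32) -> Vec<(usize, u32)> {
    train
        .iter()
        .enumerate()
        .filter_map(|(i, t)| {
            let d = hamming(query, t);
            (d <= max_distance).then_some((i, d))
        })
        .collect()
}

fn main() {
    let train = vec![vec![0b1111_0000u8], vec![0b1111_0001], vec![0b0000_0000]];
    // The third descriptor differs in 4 bits and falls outside the radius.
    let matches = radius_match(&[0b1111_0000], &train, 1);
    assert_eq!(matches, vec![(0, 0), (1, 1)]);
    println!("ok");
}
```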
Adds descriptors to train a CPU (trainDescCollection) or GPU (utrainDescCollection) descriptor collection. Read more
Clears the train descriptor collections.
Trains a descriptor matcher Read more
Finds the best match for each descriptor from a query set. Read more
Finds the k best matches for each descriptor from a query set. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance. Read more
Adds descriptors to train a CPU (trainDescCollection) or GPU (utrainDescCollection) descriptor collection. Read more
Clears the train descriptor collections.
Trains a descriptor matcher Read more
Finds the best match for each descriptor from a query set. Read more
Finds the k best matches for each descriptor from a query set. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance. Read more
Returns a constant link to the train descriptor collection trainDescCollection.
Returns true if there are no train descriptors in both collections.
Returns true if the descriptor matcher supports masking permissible matches.
Finds the best match for each descriptor from a query set. Read more
Finds the k best matches for each descriptor from a query set. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance. Read more
Clones the matcher. Read more
C++ default parameters Read more
Returns a constant link to the train descriptor collection trainDescCollection.
Returns true if there are no train descriptors in both collections.
Returns true if the descriptor matcher supports masking permissible matches.
Finds the best match for each descriptor from a query set. Read more
Finds the k best matches for each descriptor from a query set. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance. Read more
Clones the matcher. Read more
C++ default parameters Read more
Returns a constant link to the train descriptor collection trainDescCollection.
Returns true if there are no train descriptors in both collections.
Returns true if the descriptor matcher supports masking permissible matches.
Finds the best match for each descriptor from a query set. Read more
Finds the k best matches for each descriptor from a query set. Read more
For each query descriptor, finds the training descriptors not farther than the specified distance. Read more
Clones the matcher. Read more
C++ default parameters Read more
Prepares the blender for blending. Read more
Prepares the blender for blending. Read more
Processes the image. Read more
Blends and returns the final pano. Read more
Prepares the blender for blending. Read more
Prepares the blender for blending. Read more
Processes the image. Read more
Blends and returns the final pano. Read more
Prepares the blender for blending. Read more
Prepares the blender for blending. Read more
Processes the image. Read more
Blends and returns the final pano. Read more
Parameters Read more
Parameters Read more
Compensate exposure in the specified image. Read more
Parameters Read more
Parameters Read more
Compensate exposure in the specified image. Read more
Parameters Read more
Parameters Read more
Compensate exposure in the specified image. Read more
Parameters Read more
Parameters Read more
Compensate exposure in the specified image. Read more
Parameters Read more
Parameters Read more
Compensate exposure in the specified image. Read more
Creates weight maps for fixed set of source images by their masks and top-left corners. Final image can be obtained by simple weighting of the source images. Read more
Frees unused memory allocated before if there is any.
Frees unused memory allocated before if there is any.
Projects the image point. Read more
Builds the projection maps according to the given camera data. Read more
Projects the image. Read more
Projects the image backward. Read more
Parameters Read more
Estimates seams. Read more
Estimates seams. Read more
Estimates seams. Read more
Estimates seams. Read more
Estimates seams. Read more
Read a new dictionary from FileNode. Format: Read more
Write a dictionary to FileStorage. Format is the same as in readDictionary().
Given a matrix of bits, returns whether the marker is identified or not. It returns by reference the correct id (if any) and the correct rotation Read more
Returns the distance of the input bits to the specific id. If allRotations is true, the four possible bit rotations are considered Read more
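The rotation-aware distance can be sketched in plain Rust for a square bit matrix (illustrative helpers, not the aruco API):

```rust
// Rotate an n x n bit matrix by 90 degrees clockwise.
fn rotate90(bits: &Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let n = bits.len();
    (0..n).map(|r| (0..n).map(|c| bits[n - 1 - c][r]).collect()).collect()
}

// Minimum Hamming distance between `bits` and `id_bits` over all four
// rotations; also returns which rotation achieved it.
fn distance_all_rotations(bits: &Vec<Vec<u8>>, id_bits: &Vec<Vec<u8>>) -> (u32, usize) {
    let mut cur = bits.clone();
    let mut best = (u32::MAX, 0);
    for rot in 0..4 {
        let d: u32 = cur
            .iter()
            .flatten()
            .zip(id_bits.iter().flatten())
            .map(|(a, b)| (a != b) as u32)
            .sum();
        if d < best.0 {
            best = (d, rot);
        }
        cur = rotate90(&cur);
    }
    best
}

fn main() {
    let id = vec![vec![1u8, 0], vec![0, 0]];   // bit set in the top-left corner
    let seen = vec![vec![0u8, 1], vec![0, 0]]; // the same pattern, rotated
    assert_eq!(distance_all_rotations(&seen, &id), (0, 3));
    assert_eq!(distance_all_rotations(&id, &id), (0, 0));
    println!("ok");
}
```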
Draw a canonical marker image Read more
Apply filtering to the disparity map. Read more
Lambda is a parameter defining the amount of regularization during filtering. Larger values force filtered disparity map edges to adhere more to source image edges. Typical value is 8000. Read more
See also Read more
SigmaColor is a parameter defining how sensitive the filtering process is to source image edges. Large values can lead to disparity leakage through low-contrast edges. Small values can make the filter too sensitive to noise and textures in the source image. Typical values range from 0.8 to 2.0. Read more
See also Read more
LRCthresh is a threshold of disparity difference used in left-right-consistency check during confidence map computation. The default value of 24 (1.5 pixels) is virtually always good enough. Read more
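The left-right consistency check itself can be sketched in one dimension in plain Rust (a hypothetical `passes_lrc` helper, not the ximgproc API):

```rust
// One-dimensional sketch of the left-right consistency (LRC) check used in
// confidence computation: a left-view pixel x with disparity d_left[x]
// should map to a right-view pixel whose disparity is (nearly) the same.
fn passes_lrc(d_left: &[f32], d_right: &[f32], x: usize, lrc_thresh: f32) -> bool {
    let d = d_left[x];
    let xr = x as f32 - d; // corresponding column in the right view
    if xr < 0.0 || xr as usize >= d_right.len() {
        return false; // no counterpart to check against
    }
    (d - d_right[xr as usize]).abs() <= lrc_thresh
}

fn main() {
    let d_left = [4.0f32; 7];
    let mut d_right = [4.0f32; 7];
    d_right[2] = 9.0;
    assert!(passes_lrc(&d_left, &d_right, 5, 1.5));  // maps to column 1, consistent
    assert!(!passes_lrc(&d_left, &d_right, 6, 1.5)); // maps to column 2, 5px mismatch
    assert!(!passes_lrc(&d_left, &d_right, 0, 1.5)); // maps outside the right view
    println!("ok");
}
```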
See also Read more
DepthDiscontinuityRadius is a parameter used in confidence computation. It defines the size of low-confidence regions around depth discontinuities. Read more
See also Read more
Get the confidence map that was used in the last filter call. It is a CV_32F one-channel image with values ranging from 0.0 (totally untrusted regions of the raw disparity map) to 255.0 (regions containing correct disparity values with a high degree of confidence). Read more
Get the ROI used in the last filter call
Read the model from the given path Read more
Read the model from the given path Read more
Set desired model Read more
Set computation backend
Set computation target
Upsample via neural network Read more
Upsample via neural network of multiple outputs Read more
Returns the scale factor of the model: Read more
Returns the scale factor of the model: Read more
Sets the initial step that will be used in downhill simplex algorithm. Read more
Returns the initial step that will be used in downhill simplex algorithm. Read more
Executes the destructor for this type. Read more
Time step of the numerical scheme Read more
Weight parameter for the data term, attachment parameter Read more
Weight parameter for (u - v)^2, tightness parameter Read more
Coefficient for additional illumination variation term Read more
Number of scales used to create the pyramid of images Read more
Number of warpings per scale Read more
Stopping criterion threshold used in the numerical scheme, which is a trade-off between precision and running time Read more
Inner iterations (between outlier filtering) used in the numerical scheme Read more
Outer iterations (number of inner loops) used in the numerical scheme Read more
Use initial flow Read more
Step between scales (<1) Read more
Median filter kernel size (1 = no filter) (3 or 5) Read more
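The scale parameters above define a geometric series of pyramid sizes; a plain-Rust sketch with hypothetical names (widths only):

```rust
// Widths of the image pyramid given the number of scales and the step
// between scales (< 1). Levels that would shrink below a minimum are dropped.
fn pyramid_widths(base_width: f64, n_scales: u32, scale_step: f64, min_width: f64) -> Vec<u32> {
    (0..n_scales)
        .map(|i| base_width * scale_step.powi(i as i32))
        .take_while(|w| *w >= min_width)
        .map(|w| w.round() as u32)
        .collect()
}

fn main() {
    // Five scales, halving each level, never below 16 px.
    assert_eq!(pyramid_widths(640.0, 5, 0.5, 16.0), vec![640, 320, 160, 80, 40]);
    // A smaller step runs out of resolution earlier.
    assert_eq!(pyramid_widths(640.0, 5, 0.25, 16.0), vec![640, 160, 40]);
    println!("ok");
}
```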
Time step of the numerical scheme Read more
Weight parameter for the data term, attachment parameter Read more
Weight parameter for (u - v)^2, tightness parameter Read more
Coefficient for additional illumination variation term Read more
Number of scales used to create the pyramid of images Read more
Number of warpings per scale Read more
Stopping criterion threshold used in the numerical scheme, which is a trade-off between precision and running time Read more
Inner iterations (between outlier filtering) used in the numerical scheme Read more
Outer iterations (number of inner loops) used in the numerical scheme Read more
Use initial flow Read more
Step between scales (<1) Read more
Median filter kernel size (1 = no filter) (3 or 5) Read more
Resets the algorithm Read more
Process next depth frame Read more
C++ default parameters Read more
Get current parameters
Renders a volume into an image Read more
Gets points and normals of current 3d mesh Read more
Gets points of current 3d mesh Read more
Calculates normals for given points Read more
Get current pose in voxel space
The number of mixture components in the Gaussian mixture model. Default value of the parameter is EM::DEFAULT_NCLUSTERS=5. Some EM implementations could determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet. Read more
Constraint on covariance matrices which defines type of matrices. See EM::Types. Read more
The termination criteria of the EM algorithm. The EM algorithm can be terminated by the number of iterations termCrit.maxCount (number of M-steps) or when relative change of likelihood logarithm is less than termCrit.epsilon. Default maximum number of iterations is EM::DEFAULT_MAX_ITERS=100. Read more
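The termination rule can be sketched in plain Rust (a hypothetical `should_stop` helper mirroring termCrit.maxCount and termCrit.epsilon):

```rust
// Termination of an iterative EM-style loop: stop after max_count M-steps or
// when the relative change of the log-likelihood drops below epsilon.
fn should_stop(iter: u32, max_count: u32, prev_ll: f64, cur_ll: f64, epsilon: f64) -> bool {
    iter >= max_count || ((cur_ll - prev_ll) / prev_ll.abs()).abs() < epsilon
}

fn main() {
    assert!(should_stop(100, 100, -510.0, -505.0, 1e-6));    // iteration budget spent
    assert!(should_stop(7, 100, -500.0, -500.000_01, 1e-6)); // likelihood has converged
    assert!(!should_stop(7, 100, -510.0, -505.0, 1e-6));     // still improving
    println!("ok");
}
```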
Estimate the Gaussian mixture parameters from a samples set. Read more
Estimate the Gaussian mixture parameters from a samples set. Read more
Estimate the Gaussian mixture parameters from a samples set. Read more
The number of mixture components in the Gaussian mixture model. Default value of the parameter is EM::DEFAULT_NCLUSTERS=5. Some EM implementations could determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet. Read more
Constraint on covariance matrices which defines type of matrices. See EM::Types. Read more
The termination criteria of the EM algorithm. The EM algorithm can be terminated by the number of iterations termCrit.maxCount (number of M-steps) or when relative change of likelihood logarithm is less than termCrit.epsilon. Default maximum number of iterations is EM::DEFAULT_MAX_ITERS=100. Read more
Returns weights of the mixtures Read more
Returns the cluster centers (means of the Gaussian mixture) Read more
Returns covariation matrices Read more
Returns posterior probabilities for the provided samples Read more
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample. Read more
The key method of ERFilter algorithm. Read more
Set/get methods for the algorithm properties.
The classifier must return probability measure for the region. Read more
Interface to provide a more elaborated cost map, i.e. edge map, for the edge-aware term. This implementation is based on a rather simple gradient-based edge map estimation. To use a more complex edge map estimator (e.g. StructuredEdgeDetection, which was used in the original publication) that may lead to improved accuracy, the internal edge map estimation can be bypassed here. Read more
Parameter to tune the approximate size of the superpixel used for oversegmentation. Read more
See also Read more
Sigma is a parameter defining how fast the weights decrease in the locally-weighted affine fitting. Higher values can help preserve fine details, lower values can help to get rid of noise in the output flow. Read more
See also Read more
Lambda is a parameter defining the weight of the edge-aware term in geodesic distance, should be in the range of 0 to 1000. Read more
See also Read more
fn set_use_post_processing(&mut self, _use_post_proc: bool) -> Result<()>

Sets whether the fastGlobalSmootherFilter() post-processing is employed. It is turned on by default. Read more
fn get_use_post_processing(&mut self) -> Result<bool>

See also Read more
Sets the respective fastGlobalSmootherFilter() parameter.
See also Read more
See also Read more
See also Read more
Returns array containing proposal boxes. Read more
Sets the step size of sliding window search.
Sets the nms threshold for object proposals.
Sets the adaptation rate for nms threshold.
Sets the min score of boxes to detect.
Sets max number of boxes to detect.
Sets the edge min magnitude.
Sets the edge merge threshold.
Sets the cluster min magnitude.
Sets the max aspect ratio of boxes.
Sets the minimum area of boxes.
Sets the affinity sensitivity
Sets the scale sensitivity.
Returns the step size of sliding window search.
Returns the nms threshold for object proposals.
Returns adaptation rate for nms threshold.
Returns the min score of boxes to detect.
Returns the max number of boxes to detect.
Returns the edge min magnitude.
Returns the edge merge threshold.
Returns the cluster min magnitude.
Returns the max aspect ratio of boxes.
Returns the minimum area of boxes.
Returns the affinity sensitivity.
Returns the scale sensitivity.
Detects edges in a grayscale image and prepares them to detect lines and ellipses. Read more
Returns the edge image prepared by the detectEdges() function. Read more
Returns the gradient image prepared by the detectEdges() function. Read more
Returns std::vector<std::vector> of detected edge segments, see detectEdges()
Detects lines. Read more
Detects circles and ellipses. Read more
Sets parameters. Read more
Returns for each line found in detectLines() its edge segment index in getSegments()
Callback function to signal the start of bitstream that is to be encoded. Read more
Callback function to signal that the encoded bitstream is ready to be written to file.
Callback function to signal that the encoding operation on the frame has started. Read more
Callback function to signal that the encoding operation on the frame has finished. Read more
Set the size for the network input, which overwrites the input size used when creating the model. Call this method when the size of the input image does not match the input size used when creating the model Read more
Set the score threshold to filter out bounding boxes with a score less than the given value Read more
Set the Non-maximum-suppression threshold to suppress bounding boxes that have IoU greater than the given value Read more
Set the number of bounding boxes preserved before NMS Read more
A simple interface to detect faces from a given image Read more
Trains a FaceRecognizer with given data and associated labels. Read more
Updates a FaceRecognizer with given data and associated labels. Read more
Loads a FaceRecognizer and its model state. Read more
Loads a FaceRecognizer and its model state. Read more
Sets string info for the specified model’s label. Read more
Sets threshold of model
Trains a FaceRecognizer with given data and associated labels. Read more
Updates a FaceRecognizer with given data and associated labels. Read more
Loads a FaceRecognizer and its model state. Read more
Loads a FaceRecognizer and its model state. Read more
Sets string info for the specified model’s label. Read more
Sets threshold of model
Trains a FaceRecognizer with given data and associated labels. Read more
Updates a FaceRecognizer with given data and associated labels. Read more
Loads a FaceRecognizer and its model state. Read more
Loads a FaceRecognizer and its model state. Read more
Sets string info for the specified model’s label. Read more
Sets threshold of model
Predicts a label and associated confidence (e.g. distance) for a given input image. Read more
Predicts a label and associated confidence (e.g. distance) for a given input image. Read more
  • if implemented, sends all prediction results to a collector that can be used for custom result handling
  • Read more
    Saves a FaceRecognizer and its model state. Read more
    Saves a FaceRecognizer and its model state. Read more
    This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
    Gets string information by label. Read more
    Gets vector of labels by string. Read more
    Threshold parameter accessor, required for the default BestMinDist collector
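The role of a BestMinDist-style collector can be sketched in plain Rust: it is fed every (label, distance) candidate and keeps the closest one under the threshold (an illustrative stand-in, not the face module's collector API):

```rust
// Minimal stand-in for a best-min-distance prediction collector: it is fed
// every (label, distance) candidate and retains the closest one that passes
// the threshold.
struct BestMinDist {
    threshold: f64,
    best: Option<(i32, f64)>,
}

impl BestMinDist {
    fn new(threshold: f64) -> Self {
        Self { threshold, best: None }
    }
    fn collect(&mut self, label: i32, distance: f64) {
        if distance <= self.threshold && self.best.map_or(true, |(_, d)| distance < d) {
            self.best = Some((label, distance));
        }
    }
}

fn main() {
    let mut c = BestMinDist::new(100.0);
    c.collect(3, 80.0);
    c.collect(7, 45.0);
    c.collect(9, 120.0); // over the threshold, ignored
    assert_eq!(c.best, Some((7, 45.0)));
    println!("ok");
}
```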
    Predicts a label and associated confidence (e.g. distance) for a given input image. Read more
    Predicts a label and associated confidence (e.g. distance) for a given input image. Read more
  • if implemented, sends all prediction results to a collector that can be used for custom result handling
  • Read more
    Saves a FaceRecognizer and its model state. Read more
    Saves a FaceRecognizer and its model state. Read more
    This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
    Gets string information by label. Read more
    Gets vector of labels by string. Read more
    Threshold parameter accessor, required for the default BestMinDist collector
    Predicts a label and associated confidence (e.g. distance) for a given input image. Read more
    Predicts a label and associated confidence (e.g. distance) for a given input image. Read more
  • if implemented, sends all prediction results to a collector that can be used for custom result handling
  • Read more
    Saves a FaceRecognizer and its model state. Read more
    Saves a FaceRecognizer and its model state. Read more
    This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
    Gets string information by label. Read more
    Gets vector of labels by string. Read more
    Threshold parameter accessor, required for the default BestMinDist collector
    Extracts a face feature from an aligned image Read more
    Aligns the image to put the face in the standard position Read more
    Calculates the distance between two face features Read more
    A function to load the trained model before the fitting process. Read more
    Detect facial landmarks from an image. Read more
    A function to load the trained model before the fitting process. Read more
    Detect facial landmarks from an image. Read more
    A function to load the trained model before the fitting process. Read more
    Detect facial landmarks from an image. Read more
    A function to load the trained model before the fitting process. Read more
    Detect facial landmarks from an image. Read more
    overload with additional Config structures
    This function is used to train the model using gradient boosting to get a cascade of regressors which can then be used to predict shape. Read more
    set the custom face detector
    get faces using the custom detector
    Add one training sample to the trainer. Read more
    Trains a Facemark algorithm using the given dataset. Before the training process, training samples should be added to the trainer using face::addTrainingSample function. Read more
    Set a user defined face detector for the Facemark algorithm. Read more
    Detect faces from a given image using default or user defined face detector. Some Algorithm might not provide a default face detector. Read more
    Get data from an algorithm Read more
    Add one training sample to the trainer. Read more
    Trains a Facemark algorithm using the given dataset. Before the training process, training samples should be added to the trainer using face::addTrainingSample function. Read more
    Set a user defined face detector for the Facemark algorithm. Read more
    Detect faces from a given image using default or user defined face detector. Some Algorithm might not provide a default face detector. Read more
    Get data from an algorithm Read more
    Apply smoothing operation to the source image. Read more
    Apply smoothing operation to the source image. Read more
    An example using the FastLineDetector can be found in fld_lines.cpp Read more
    Draws the line segments on a given image. Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Detects keypoints and computes the descriptors Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Detects keypoints in an image (first variant) or image set (second variant). Read more
    Computes the descriptors for a set of keypoints detected in an image (first variant) or image set (second variant). Read more
    Return true if detector object is empty
    C++ default parameters Read more
    Opens a file. Read more
    Closes the file and releases all the memory buffers. Read more
    Closes the file and releases all the memory buffers. Read more
    Simplified writing API to use with bindings. Read more
    Writes multiple numbers. Read more
    Writes a comment. Read more
    Starts to write a nested structure (sequence or a mapping). Read more
    Finishes writing a nested structure (must be paired with startWriteStruct())
    Checks whether the file is opened. Read more
    Returns the first element of the top-level mapping. Read more
    Returns the top-level mapping. Read more
    Returns the specified element of the top-level mapping. Read more
    Returns the specified element of the top-level mapping. Read more
    Returns the current format. Read more
    Applies the specified filter to the image. Read more
    C++ default parameters Read more
    Load font data. Read more
    Set Split Number from Bezier-curve to line Read more
    Draws a text string. Read more
    Calculates the width and height of a text string. Read more
    Converts to this type from the input type.
    C++ default parameters Read more
    Sets the template to search for. Read more
    C++ default parameters Read more
    Finds the template in the image. Read more
    C++ default parameters Read more
    Canny low threshold.
    Canny high threshold.
    Minimum distance between the centers of the detected objects.
    Inverse ratio of the accumulator resolution to the image resolution.
    Maximal size of inner buffers.
    R-Table levels.
    The accumulator threshold for the template centers at the detection stage. The smaller it is, the more false positives may be detected.
    Angle difference in degrees between two points in feature.
    Feature table levels.
    Maximal difference between angles that are treated as equal.
    Minimal rotation angle to detect in degrees.
    Maximal rotation angle to detect in degrees.
    Angle step in degrees.
    Angle votes threshold.
    Minimal scale to detect.
    Maximal scale to detect.
    Scale step.
    Scale votes threshold.
    Position votes threshold.
    Segment an image and store output in dst Read more
    Sets the value for white threshold, needed for decoding. Read more
    Sets the value for black threshold, needed for decoding (shadow masks computation). Read more
    Get the number of pattern images needed for the graycode pattern. Read more
    Generates the all-black and all-white images needed for shadowMasks computation. Read more
    For a (x,y) pixel of a camera returns the corresponding projector pixel. Read more
    Maximum saturation for a pixel to be included in the gray-world assumption Read more
    Maximum saturation for a pixel to be included in the gray-world assumption Read more
    Draw a GridBoard Read more
    Apply Guided Filter to the filtering image. Read more
    Close and release hdf5 object.
    Create a group. Read more
    Delete an attribute from the root group. Read more
    Write an attribute inside the root group. Read more
    Read an attribute from the root group. Read more
    Write an attribute inside the root group. Read more
    Read an attribute from the root group. Read more
    Write an attribute inside the root group. Read more
    Read an attribute from the root group. Read more
    Write an attribute into the root group. Read more
    Read an attribute from the root group. Read more
    Check if label exists or not. Read more
    Check whether a given attribute exists or not in the root group. Read more
    Create and allocate storage for two dimensional single or multi channel dataset. Read more
    Create and allocate storage for two dimensional single or multi channel dataset. Read more
    Create and allocate storage for two dimensional single or multi channel dataset. Read more
    Create and allocate storage for two dimensional single or multi channel dataset. Read more
    C++ default parameters Read more
    Create and allocate storage for n-dimensional dataset, single or multichannel type. Read more
    Fetch dataset sizes Read more
    Fetch dataset type Read more
    C++ default parameters Read more
    Write or overwrite a Mat object into specified dataset of hdf5 file. Read more
    C++ default parameters Read more
    Insert or overwrite a Mat object into specified dataset and auto expand dataset size if unlimited property allows. Read more
    C++ default parameters Read more
    Read specific dataset from hdf5 file into Mat object. Read more
    Fetch keypoint dataset size Read more
    Create and allocate special storage for cv::KeyPoint dataset. Read more
    Write or overwrite list of KeyPoint into specified dataset of hdf5 file. Read more
    Insert or overwrite list of KeyPoint into specified dataset and autoexpand dataset size if unlimited property allows. Read more
    Read specific keypoint dataset from hdf5 file into vector object. Read more
    Gaussian smoothing window parameter.
    L2-Hys normalization method shrinkage.
    Flag to specify whether the gamma correction preprocessing is required or not.
    Maximum number of detection window increases.
    Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here. Read more
    Window stride. It must be a multiple of block stride.
    Coefficient of the detection window increase.
    Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping. See groupRectangles. Read more
    Descriptor storage format: Read more
    Sets coefficients for the linear SVM classifier.
    Performs object detection without a multi-scale window. Read more
    Performs object detection without a multi-scale window. Read more
    Performs object detection with a multi-scale window. Read more
    Performs object detection with a multi-scale window. Read more
    Returns block descriptors computed for the whole image. Read more
    Returns the number of coefficients required for the classification.
    Returns the block histogram size.
    Returns coefficients of the classifier trained for people detection.
    Set the norm used to compute the Hausdorff value between two shapes. It can be L1 or L2 norm. Read more
    This method sets the rank proportion (or fractional value) that establish the Kth ranked value of the partial Hausdorff distance. Experimentally had been shown that 0.6 is a good value to compare shapes. Read more
    Sets and gets the parameter segEgbThresholdI. This parameter is used in the second stage mentioned above. It is a constant used to threshold edge weights when merging adjacent nodes in the EGB algorithm. The segmentation result tends to retain more regions if this value is large, and vice versa. Read more
    Sets and gets the parameter minRegionSizeI. This parameter is used in the second stage mentioned above. After the EGB segmentation, regions that have fewer pixels than this parameter will be merged into their adjacent region. Read more
    Sets and gets the parameter segEgbThresholdII. This parameter is used in the third stage mentioned above. It serves the same purpose as segEgbThresholdI. The segmentation result tends to retain more regions if this value is large, and vice versa. Read more
    Sets and gets the parameter minRegionSizeII. This parameter is used in the third stage mentioned above. It serves the same purpose as minRegionSizeI. Read more
    Sets and gets the parameter spatialWeight. This parameter is used in the first stage mentioned above (the SLIC stage). It describes how important the role of position is when calculating the distance between each pixel and its center. The segmentation result tends to have more local consistency if this value is larger. Read more
    Sets and gets the parameter slicSpixelSize. This parameter is used in the first stage mentioned above (the SLIC stage). It describes the size of each superpixel when initializing SLIC. Read more
    Sets and gets the parameter numSlicIter. This parameter is used in the first stage. It describes how many iterations to perform when executing SLIC. Read more
    Performs segmentation on the GPU. Read more
    Performs segmentation on the CPU. This method is implemented for reference only; using it is highly discouraged. Read more
    Get the reliability map computed from the wrapped phase map. Read more
    Assumes that [0, size-1) is contained in or equal to [range.first, range.second)
    C++ default parameters Read more
    Computes hash of the input image Read more
    Compare the hash value between inOne and inTwo Read more
    C++ default parameters Read more
    Default number of neighbors to use in predict method. Read more
    Whether classification or regression model should be trained. Read more
    Parameter for KDTree implementation. Read more
    Algorithm type, one of KNearest::Types. Read more
    Finds the neighbors and predicts responses for input vectors. Read more
    C++ default parameters Read more
    Resets the algorithm Read more
    Process next depth frame Read more
    Get current parameters
    Renders a volume into an image Read more
    Renders a volume into an image Read more
    Gets points and normals of current 3d mesh Read more
    Gets points of current 3d mesh Read more
    Calculates normals for given points Read more
    Get current pose in voxel space
    frame size in pixels
    rgb frame size in pixels
    camera intrinsics
    rgb camera intrinsics
    pre-scale per 1 meter for input values Read more
    Depth sigma in meters for bilateral smoothing
    Spatial sigma in pixels for bilateral smoothing
    Kernel size in pixels for bilateral smoothing
    Number of pyramid levels for ICP
    Resolution of voxel space Read more
    Size of voxel in meters
    Minimal camera movement in meters Read more
    initial volume pose in meters
    distance to truncate in meters Read more
    max number of frames per voxel Read more
    A length of one raycast step Read more
    light pose for rendering in meters
    distance threshold for ICP in meters
    angle threshold for ICP in radians
    number of ICP iterations for each pyramid level
    Threshold for depth truncation in meters Read more
    Sets the initial pose of the TSDF volume. Read more
    C++ default parameters Read more
    Type of volume. Values can be TSDF (single volume) or HASHTSDF (hashtable of volume units). Read more
    Resolution of voxel space: the number of voxels in each dimension. Applicable only for the TSDF volume; the HashTSDF volume only supports equal resolution in all three dimensions. Read more
    Resolution of a volumeUnit in voxel space: the number of voxels in each dimension of a volumeUnit. Applicable only for hashTSDF. Read more
    Initial pose of the volume in meters
    Length of voxels in meters
    TSDF truncation distance: distances greater than this value from the surface will be truncated to 1.0. Read more
    Max number of frames to integrate per voxel: the max number of frames over which a running average of the TSDF is calculated for a voxel. Read more
    Threshold for depth truncation in meters: depth values greater than the threshold are truncated to 0. Read more
    Length of a single raycast step: the percentage of voxel length that is skipped per march. Read more
    See also Read more
    Sets the maximum number of iterations Read more
    Runs the Levenberg-Marquardt algorithm using the passed vector of parameters as the start point. The final vector of parameters (whether the algorithm converged or not) is stored in the same vector. The method returns the number of iterations used; if this equals the previously specified maxIters, the algorithm likely did not converge. Read more
    Retrieves the current maximum number of iterations
    Computes the error and Jacobian for the specified vector of parameters. Read more
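    The "iterations == maxIters means probably no convergence" heuristic can be sketched with a minimal scalar Levenberg-Marquardt loop. This is a self-contained toy (solving r(a) = a² − 2 = 0), not the opencv `LMSolver` API; the function name and damping schedule are illustrative assumptions:

```rust
// Minimal 1D Levenberg-Marquardt sketch: returns (solution, iterations used).
// If the returned iteration count equals max_iters, convergence is doubtful.
fn run_lm(mut a: f64, max_iters: usize) -> (f64, usize) {
    let mut lambda = 1e-3; // damping factor
    for iter in 1..=max_iters {
        let r = a * a - 2.0; // residual
        if r.abs() < 1e-12 {
            return (a, iter); // converged before exhausting max_iters
        }
        let j = 2.0 * a; // Jacobian dr/da
        // Damped Gauss-Newton step: delta = -J r / (J^2 + lambda).
        let delta = -j * r / (j * j + lambda);
        let r_new = (a + delta) * (a + delta) - 2.0;
        if r_new.abs() < r.abs() {
            a += delta;
            lambda *= 0.5; // step accepted: trust the Gauss-Newton direction more
        } else {
            lambda *= 10.0; // step rejected: damp harder
        }
    }
    (a, max_iters) // hit the cap: likely did not converge
}

fn main() {
    let (a, iters) = run_lm(1.0, 100);
    assert!((a - 2f64.sqrt()).abs() < 1e-6);
    assert!(iters < 100, "hitting maxIters would suggest non-convergence");
    println!("a = {a}, iterations = {iters}");
}
```

    The callback-style `LMSolver` design above separates this loop from the problem: the solver owns the iteration and damping logic, while the user-supplied callback computes the error and Jacobian.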
    Detect lines inside an image. Read more
    Detect lines inside an image. Read more
    👎Deprecated: Use LayerParams::blobs instead.
    Deprecated: Use LayerParams::blobs instead. Sets trained weights for the LSTM layer. Read more
    Specifies the shape of the output blob, which will be [[T], N] + outTailShape. If this parameter is empty or unset, then outTailShape = [Wh.size(0)] will be used, where Wh is a parameter from setWeights(). Read more
    👎Deprecated: Use flag use_timestamp_dim in LayerParams.
    Deprecated: Use flag use_timestamp_dim in LayerParams. Specifies whether the first dimension of the input blob is interpreted as the timestamp dimension or as the sample dimension. Read more
    👎Deprecated: Use flag produce_cell_output in LayerParams.
    Deprecated: Use flag produce_cell_output in LayerParams. If this flag is set to true, the layer will produce c_t as a second output. The shape of the second output is the same as the first output. Read more
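    The output-shape rule above ([[T], N] + outTailShape, defaulting to [Wh.size(0)] when the tail is empty) can be sketched as a small helper. The function name and parameters are illustrative, not part of the crate's API:

```rust
// Sketch of the LSTM output-shape rule described above: [T, N] followed by
// outTailShape; an empty tail falls back to [Wh.size(0)]. Illustrative only.
fn lstm_output_shape(t: usize, n: usize, out_tail: &[usize], wh_rows: usize) -> Vec<usize> {
    let mut shape = vec![t, n];
    if out_tail.is_empty() {
        shape.push(wh_rows); // default tail: [Wh.size(0)]
    } else {
        shape.extend_from_slice(out_tail);
    }
    shape
}

fn main() {
    // Explicit tail shape.
    assert_eq!(lstm_output_shape(10, 4, &[32], 128), vec![10, 4, 32]);
    // Empty tail falls back to [Wh.size(0)].
    assert_eq!(lstm_output_shape(10, 4, &[], 128), vec![10, 4, 128]);
    println!("ok");
}
```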
    List of learned parameters; they must be stored here so they can be read via Net::getParam().
    Name of the layer instance; can be used for logging or other internal purposes.
    Type name that was used to create the layer via the layer factory.
    Preferred target for layer forwarding.
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the input blobs, computes the output blobs. Read more
    Given the input blobs, computes the output blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the input blobs, computes the output blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in a future release.
    Allocates layer and computes output. Read more
    Returns the index of an input blob in the input array. Read more
    Returns the index of an output blob in the output array. Read more
    Asks the layer whether it supports a specific backend for computations. Read more
    Returns a Halide backend node. Read more
    Returns a CUDA backend node. Read more
    Returns a TimVX backend node. Read more
    Implements layer fusion. Read more
    Tries to attach the subsequent activation layer to this layer, i.e. performs layer fusion in this partial case. Read more
    Tries to fuse the current layer with the next one. Read more
    “Detaches” all the layers attached to a particular layer.
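    The activation-fusion step described above (attaching a subsequent activation layer so it runs inside the preceding layer's forward pass) can be sketched conceptually. The types and method names below are hypothetical, not the opencv dnn API:

```rust
// Conceptual sketch of activation fusion: a layer that absorbs a following
// activation applies it inside its own forward pass, avoiding a separate
// pass over the output. Illustrative only; not the opencv dnn API.

fn relu(x: f32) -> f32 { x.max(0.0) }

struct DenseLayer {
    weight: f32,
    bias: f32,
    fused_activation: Option<fn(f32) -> f32>,
}

impl DenseLayer {
    /// Try to fuse a following activation into this layer; returns true on
    /// success (analogous to the "tries to attach" method above).
    fn try_fuse_activation(&mut self, act: fn(f32) -> f32) -> bool {
        self.fused_activation = Some(act);
        true
    }

    fn forward(&self, x: f32) -> f32 {
        let y = self.weight * x + self.bias;
        match self.fused_activation {
            Some(act) => act(y), // fused: activation applied in the same pass
            None => y,
        }
    }
}

fn main() {
    let mut layer = DenseLayer { weight: 2.0, bias: -1.0, fused_activation: None };
    assert_eq!(layer.forward(-1.0), -3.0); // no activation yet
    assert!(layer.try_fuse_activation(relu));
    assert_eq!(layer.forward(-1.0), 0.0); // relu(-3.0) applied in-place
    println!("ok");
}
```

    When fusion fails (the layer cannot absorb the activation), the framework keeps the two layers separate, which is why these methods report success or failure rather than fusing unconditionally.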
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: This method will be removed in the future release.
    Allocates layer and computes output. Read more
    Returns index of input blob into the input array. Read more
    Returns index of output blob in output array. Read more
    Ask layer if it support specific backend for doing computations. Read more
    Returns Halide backend node. Read more
    Returns a CUDA backend node Read more
    Returns a TimVX backend node Read more
    Implement layers fusing. Read more
    Tries to attach to the layer the subsequent activation layer, i.e. do the layer fusion in a partial case. Read more
    Try to fuse current layer with a next one Read more
    “Detaches” all the layers, attached to particular layer.
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Computes and sets internal parameters according to inputs, outputs and blobs. Read more
    👎Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
    Given the @p input blobs, computes the output @p blobs. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. Read more
    Given the @p input blobs, computes the output @p blobs. Read more
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns the scale and zero point of layers. Read more
    The list of learned parameters must be stored here so they can be read via Net::getParam().
    Name of the layer instance; can be used for logging or other internal purposes.
    Type name that was used to create the layer by the layer factory.
    Preferred target for layer forwarding.
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Automatic Halide scheduling based on layer hyper-parameters. Read more
    Returns parameters of layers with channel-wise multiplication and addition. Read more
    Returns scale and zeropoint of layers Read more
    List of learned parameters must be stored here to allow read them by using Net::getParam().
    Name of the layer instance, can be used for logging or other internal purposes.
    Type name which was used for creating layer by layer factory.
    prefer target for layer forwarding
    Implements the feature extraction part of the algorithm. Read more
    Maximum possible value of the input image (e.g. 255 for 8 bit images, 4095 for 12 bit images) Read more
Threshold used to determine saturated pixels: pixels where at least one channel exceeds the saturation threshold are ignored. Read more
    Defines the size of one dimension of a three-dimensional RGB histogram that is used internally by the algorithm. It often makes sense to increase the number of bins for images with higher bit depth (e.g. 256 bins for a 12 bit image). Read more
    Maximum possible value of the input image (e.g. 255 for 8 bit images, 4095 for 12 bit images) Read more
Threshold used to determine saturated pixels: pixels where at least one channel exceeds the saturation threshold are ignored. Read more
    Defines the size of one dimension of a three-dimensional RGB histogram that is used internally by the algorithm. It often makes sense to increase the number of bins for images with higher bit depth (e.g. 256 bins for a 12 bit image). Read more
    Finds lines in the input image. Read more
    Draws the line segments on a given image. Read more
Draws two groups of lines in blue and red, counting the non-overlapping (mismatching) pixels. Read more
Add new object template. Read more
Add a new object template computed by external means.
C++ default parameters Read more
C++ default parameters Read more
Detect objects by template matching. Read more
Get the modalities used by this detector. Read more
Get sampling step T at pyramid_level.
Get number of pyramid levels used by this detector.
Get the template pyramid identified by template_id. Read more
C++ default parameters Read more
Form a quantized image pyramid from a source image. Read more
Go to the next pyramid level. Read more
Compute quantized image at current pyramid level for online detection. Read more
Extract most discriminant features at current pyramid level to form a new template. Read more
    Learning rate. Read more
    Number of iterations. Read more
    Kind of regularization to be applied. See LogisticRegression::RegKinds. Read more
    Kind of training method used. See LogisticRegression::Methods. Read more
Specifies the number of training samples taken in each step of Mini-Batch Gradient Descent. Used only with the LogisticRegression::MINI_BATCH training method; it must be less than the total number of training samples. Read more
    Termination criteria of the algorithm. Read more
    Learning rate. Read more
    Number of iterations. Read more
    Kind of regularization to be applied. See LogisticRegression::RegKinds. Read more
    Kind of training method used. See LogisticRegression::Methods. Read more
Specifies the number of training samples taken in each step of Mini-Batch Gradient Descent. Used only with the LogisticRegression::MINI_BATCH training method; it must be less than the total number of training samples. Read more
    Termination criteria of the algorithm. Read more
    Predicts responses for input samples and returns a float type. Read more
    This function returns the trained parameters arranged across rows. Read more
Transforms the source matrix into the destination matrix using the given look-up table: dst(I) = lut(src(I)). Read more
Optionally encrypt images with random convolution. Read more
Train it on positive features and compute the MACE filter: h = D^(-1) X (X^(+) D^(-1) X)^(-1) c. Also calculates a minimal threshold for this class: the smallest self-similarity among the training images. Read more
Correlate the query image and threshold against the minimal class value. Read more
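The filter expression above is the standard closed-form MACE (Minimum Average Correlation Energy) solution; the symbols below follow the usual formulation in the literature, not this crate's internal variable names:

```latex
h = D^{-1} X \left( X^{+} D^{-1} X \right)^{-1} c
```

where the columns of $X$ hold the vectorized 2-D DFTs of the training images, $D$ is the diagonal matrix of their average power spectrum, $X^{+}$ denotes the conjugate transpose, and $c$ is the vector of desired correlation-peak values (typically all ones).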
Set the net which will be used to find the approximate bounding boxes for the color charts. Read more
Find the ColorCharts in the given image. Read more
Find the ColorCharts in the given image. Read more
Get the best color checker, i.e. the one detected with the highest confidence. Returns a single colorchecker if at least one colorchecker was detected, 'nullptr' otherwise. Read more
Get the list of all detected colorcheckers. Read more
Draw the checker on the given image (in BGR color space). Read more
Detect MSER regions. Read more
    Set Mh kernel parameters Read more
Self-explanatory.
    Merges images. Read more
    Short version of process, that doesn’t take extra arguments. Read more
    Setter for the optimized function. Read more
    Set terminal criteria for solver. Read more
Actually runs the algorithm and performs the minimization. Read more
    Setter for the optimized function. Read more
    Set terminal criteria for solver. Read more
Actually runs the algorithm and performs the minimization. Read more
    Getter for the optimized function. Read more
    Getter for the previously set terminal criteria for this algorithm. Read more
    Getter for the optimized function. Read more
    Getter for the previously set terminal criteria for this algorithm. Read more
    Sets motion model. Read more
    Estimates global motion between two 2D point clouds. Read more
    C++ default parameters Read more
    C++ default parameters Read more
This is a utility function that allows setting the correct size (taken from the input image) in the corresponding variables that will be used to size the data structures of the algorithm. Read more
This function allows the correct initialization of all data structures that will be used by the algorithm. Read more
    Predicts the response for sample(s). Read more
    Recognize text using Beam Search. Read more
    C++ default parameters Read more
The character classifier must return a (ranked list of) class id(s). Read more
    Recognize text using HMM. Read more
    Recognize text using HMM. Read more
    C++ default parameters Read more
    C++ default parameters Read more
The character classifier must return a (ranked list of) class id(s). Read more
    C++ default parameters Read more
Recognize text using a segmentation-based word-spotting/classifier CNN. Read more
    Recognize text using the tesseract-ocr API. Read more
    C++ default parameters Read more
Return the list of the rectangles' objectness values. Read more
This is a utility function that allows setting the correct path from which the algorithm will load the trained model. Read more
This is a utility function that allows setting an arbitrary path in which the algorithm will save the optional results. Read more
    See also Read more
Method to compute a transformation from the source frame to the destination one. Some odometry algorithms do not use some of the frame data (e.g. ICP does not use images); in such cases the corresponding arguments can be set as empty Mat. The method returns true if all internal computations were possible (e.g. there were enough correspondences, the system of equations has a solution, etc.) and the resulting transformation satisfies any test provided by the Odometry inheritor implementation (e.g. thresholds for maximum translation and rotation). Read more
One more method to compute a transformation from the source frame to the destination one. It is designed to save on computing the frame data (image pyramids, normals, etc.). Read more
Prepare a cache for the frame. The function checks the precomputed/passed data (and throws an error if the data does not satisfy the requirements) and computes all remaining cache data needed for the frame. The returned size is the resolution of the prepared frame. Read more
    See also Read more
    See also Read more
    Color resolution of the greyscale bitmap represented in allocated bits (i.e., value 4 means that 16 shades of grey are used). The greyscale bitmap is used for computing contrast and entropy values. Read more
    Size of the texture sampling window used to compute contrast and entropy (center of the window is always in the pixel selected by x,y coordinates of the corresponding feature sample). Read more
Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy). Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space. Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space. Read more
    Translations of the individual axes of the feature space. Read more
    Translations of the individual axes of the feature space. Read more
    Sets sampling points used to sample the input image. Read more
    Initial seed indexes for the k-means algorithm.
Number of iterations of the k-means clustering. We use a fixed number of iterations, since the modified clustering is pruning clusters (not iteratively refining k clusters). Read more
    Maximal number of generated clusters. If the number is exceeded, the clusters are sorted by their weights and the smallest clusters are cropped. Read more
This parameter, multiplied by the iteration index, gives the lower limit for cluster size. Clusters containing fewer points than this limit have their centroid dismissed and their points reassigned. Read more
Threshold Euclidean distance between two centroids. If two cluster centers are closer than this distance, one of the centroids is dismissed and its points are reassigned. Read more
Remove centroids in k-means whose weight is less than or equal to the given threshold.
Distance function selector used for measuring the distance between two points in k-means. Available: L0_25, L0_5, L1, L2, L2SQUARED, L5, L_INFINITY. Read more
    Computes signature of given image. Read more
    Computes signatures for multiple images in parallel. Read more
    Number of initial samples taken from the image.
    Color resolution of the greyscale bitmap represented in allocated bits (i.e., value 4 means that 16 shades of grey are used). The greyscale bitmap is used for computing contrast and entropy values. Read more
    Size of the texture sampling window used to compute contrast and entropy (center of the window is always in the pixel selected by x,y coordinates of the corresponding feature sample). Read more
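The bit-depth parameter above determines the number of grey shades as a power of two (value 4 gives 16 shades). A minimal sketch of that relation, in plain Rust:

```rust
// Number of grey shades produced by an allocated bit depth: 2^bits.
// Illustrative sketch only, not the PCTSignatures API.
fn grey_shades(bits: u32) -> u32 {
    1u32 << bits
}

fn main() {
    assert_eq!(grey_shades(4), 16); // the example from the description
    assert_eq!(grey_shades(8), 256);
}
```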
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Weights (multiplicative constants) that linearly stretch individual axes of the feature space (x,y = position; L,a,b = color in CIE Lab space; c = contrast; e = entropy) Read more
    Initial samples taken from the image. These sampled features become the input for clustering. Read more
    Number of initial seeds (initial number of clusters) for the k-means algorithm.
    Number of iterations of the k-means clustering. We use a fixed number of iterations, since the modified clustering is pruning clusters (not iteratively refining k clusters). Read more
    Maximal number of generated clusters. If the number is exceeded, the clusters are sorted by their weights and the smallest clusters are cropped. Read more
    This parameter multiplied by the index of iteration gives lower limit for cluster size. Clusters containing fewer points than specified by the limit have their centroid dismissed and points are reassigned. Read more
    Threshold Euclidean distance between two centroids. If two cluster centers are closer than this distance, one of the centroids is dismissed and points are reassigned. Read more
    Remove centroids in k-means whose weight is less than or equal to the given threshold.
    Distance function selector used for measuring distance between two points in k-means.
    Computes Signature Quadratic Form Distance of two signatures. Read more
    Computes Signature Quadratic Form Distance between the reference signature and each of the other image signatures. Read more
    Minimum value of the statmodel parameter. Default value is 0.
    Maximum value of the statmodel parameter. Default value is 0.
    Logarithmic step for iterating the statmodel parameter. Read more
    Minimum value of the statmodel parameter. Default value is 0.
    Maximum value of the statmodel parameter. Default value is 0.
    Logarithmic step for iterating the statmodel parameter. Read more
    frame size in pixels
    camera intrinsics
    rgb camera intrinsics
    Pre-scale per 1 meter for input values. Typical values are: 5000 per 1 meter for the 16-bit PNG files of the TUM database; 1000 per 1 meter for the Kinect 2 device; 1 per 1 meter for the 32-bit float images in the ROS bag files. Read more
    Depth sigma in meters for bilateral smooth
    Spatial sigma in pixels for bilateral smooth
    Kernel size in pixels for bilateral smooth
    Number of pyramid levels for ICP
    Minimal camera movement in meters. Integrate a new depth frame only if camera movement exceeds this value. Read more
    light pose for rendering in meters
    distance threshold for ICP in meters
    angle threshold for ICP in radians
    number of ICP iterations for each pyramid level
    Threshold for depth truncation in meters. All depth values beyond this threshold will be set to zero. Read more
    Volume parameters
    frame size in pixels
    camera intrinsics
    rgb camera intrinsics
    Pre-scale per 1 meter for input values. Typical values are: 5000 per 1 meter for the 16-bit PNG files of the TUM database; 1000 per 1 meter for the Kinect 2 device; 1 per 1 meter for the 32-bit float images in the ROS bag files. Read more
    Depth sigma in meters for bilateral smooth
    Spatial sigma in pixels for bilateral smooth
    Kernel size in pixels for bilateral smooth
    Number of pyramid levels for ICP
    Minimal camera movement in meters. Integrate a new depth frame only if camera movement exceeds this value. Read more
    light pose for rendering in meters
    distance threshold for ICP in meters
    angle threshold for ICP in radians
    number of ICP iterations for each pyramid level
    Threshold for depth truncation in meters. All depth values beyond this threshold will be set to zero. Read more
    Volume parameters
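The truncation threshold above discards implausible depth readings before integration. A hedged sketch of that step in plain Rust (not the kinfu API):

```rust
// Zero out depth values (in meters) beyond the truncation threshold,
// as described for the depth-truncation parameter. Illustrative sketch only.
fn truncate_depth(depth: &mut [f32], truncate_threshold: f32) {
    for d in depth.iter_mut() {
        if *d > truncate_threshold {
            *d = 0.0; // beyond the threshold: treat as "no measurement"
        }
    }
}

fn main() {
    let mut depth = [0.5f32, 2.0, 7.5, 3.9];
    truncate_depth(&mut depth, 4.0);
    assert_eq!(depth, [0.5, 2.0, 0.0, 3.9]);
}
```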
    Unwraps a 2D phase map. Read more
    Switches data visualization mode Read more
    Sets the index of a point whose coordinates will be printed on the top left corner of the plot (if the ShowText flag is true). Read more
    Flag is true if at least one of the axes is globally pooled.
    Flag is true if at least one of the axes is globally pooled.
    Flag is true if at least one of the axes is globally pooled.
    Flag is true if at least one of the axes is globally pooled.
    Updates the pose with the new one. Read more
    Updates the pose with the new one
    Updates the pose with the new one, but this time using quaternions to represent rotation
    Left-multiplies the existing pose in order to update the transformation. Read more
    Adds a new pose to the cluster. The pose should be “close” to the mean poses in order to preserve the consistency. Read more
    Interface method called by face recognizer before results processing Read more
    Interface method called by face recognizer for each result Read more
    Interface method called by face recognizer before results processing Read more
    Interface method called by face recognizer for each result Read more
    Generates QR code from input string. Read more
    Generates QR code from input string in Structured Append mode. The encoded message is split over a number of QR codes. Read more
    Computes BRISQUE quality score for input image Read more
    Compute quality score per channel with the per-channel score in each element of the resulting cv::Scalar. See specific algorithm for interpreting result scores Read more
    Implements Algorithm::clear()
    Compute quality score per channel with the per-channel score in each element of the resulting cv::Scalar. See specific algorithm for interpreting result scores Read more
    Implements Algorithm::clear()
    Compute quality score per channel with the per-channel score in each element of the resulting cv::Scalar. See specific algorithm for interpreting result scores Read more
    Implements Algorithm::clear()
    Compute quality score per channel with the per-channel score in each element of the resulting cv::Scalar. See specific algorithm for interpreting result scores Read more
    Implements Algorithm::clear()
    Compute quality score per channel with the per-channel score in each element of the resulting cv::Scalar. See specific algorithm for interpreting result scores Read more
    Implements Algorithm::clear()
    Returns output quality map that was generated during computation, if supported by the algorithm
    Implements Algorithm::empty()
    Returns output quality map that was generated during computation, if supported by the algorithm
    Implements Algorithm::empty()
    Returns output quality map that was generated during computation, if supported by the algorithm
    Implements Algorithm::empty()
    Returns output quality map that was generated during computation, if supported by the algorithm
    Implements Algorithm::empty()
    Returns output quality map that was generated during computation, if supported by the algorithm
    Implements Algorithm::empty()
    Compute GMSD Read more
    Implements Algorithm::clear()
    Implements Algorithm::empty()
    Computes MSE for reference images supplied in class constructor and provided comparison images Read more
    Implements Algorithm::clear()
    Implements Algorithm::empty()
    Compute the PSNR Read more
    Implements Algorithm::clear()
    sets the maximum pixel value used for PSNR computation Read more
    Implements Algorithm::empty()
    return the maximum pixel value used for PSNR computation
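The PSNR score above is derived from the MSE and the maximum pixel value. A hedged sketch of the standard formula PSNR = 10·log10(R²/MSE) in plain Rust (not the quality module API):

```rust
// Mean squared error between a reference and a comparison signal.
fn mse(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f32>() / a.len() as f32
}

// PSNR = 10 * log10(R^2 / MSE), where R is the maximum pixel value
// (255 for 8-bit images). Illustrative sketch of the formula only.
fn psnr(a: &[f32], b: &[f32], max_pixel_value: f32) -> f32 {
    10.0 * (max_pixel_value * max_pixel_value / mse(a, b)).log10()
}

fn main() {
    let reference = [10.0f32, 20.0, 30.0, 40.0];
    let degraded = [11.0f32, 19.0, 30.0, 42.0];
    // MSE = (1 + 1 + 0 + 4) / 4 = 1.5
    let score = psnr(&reference, &degraded, 255.0);
    assert!((score - 10.0 * (255.0f32 * 255.0 / 1.5).log10()).abs() < 1e-4);
}
```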
    Computes SSIM Read more
    Implements Algorithm::clear()
    Implements Algorithm::empty()
    Load a file containing the configuration parameters of the class. Read more
    Save a file containing all the configuration parameters the class is currently set to. Read more
    Get The sparse corresponding points. Read more
    Get The dense corresponding points. Read more
    Main process of the algorithm. This method computes the sparse seeds and then densifies them. Read more
    Specify pixel coordinates in the left image and get its corresponding location in the right image. Read more
    Compute and return the disparity map based on the correspondences found in the “process” method. Read more
    K is the number of nearest-neighbor matches considered when fitting a locally affine model for a superpixel segment. Lower values make the interpolation noticeably faster. The original implementation of Hu2017 uses 32. Read more
    Interface to provide a more elaborate cost map, i.e. edge map, for the edge-aware term. This implementation is based on a rather simple gradient-based edge map estimation. To use a more complex edge map estimator (e.g. StructuredEdgeDetection, which was used in the original publication) that may lead to improved accuracy, the internal edge map estimation can be bypassed here. Read more
    Get the internal cost, i.e. edge map, used for estimating the edge-aware term. Read more
    Parameter defines the number of nearest-neighbor matches for each superpixel considered, when fitting a locally affine model. Read more
    Parameter to tune enforcement of superpixel smoothness factor used for oversegmentation. Read more
    Parameter to choose superpixel algorithm variant to use: Read more
    Alpha is a parameter defining a global weight for transforming geodesic distance into weight. Read more
    Parameter defining the number of iterations for piece-wise affine model estimation. Read more
    Parameter to choose whether additional refinement of the piece-wise affine models is employed. Read more
    MaxFlow is a threshold to validate the predictions using a certain piece-wise affine model. If the prediction exceeds the threshold, the translational model will be applied instead. Read more
    Parameter to choose whether the VariationalRefinement post-processing is employed. Read more
    Sets whether the fastGlobalSmootherFilter() post-processing is employed. Read more
    Sets the respective fastGlobalSmootherFilter() parameter. Read more
    Sets the respective fastGlobalSmootherFilter() parameter. Read more
    K is the number of nearest-neighbor matches considered when fitting a locally affine model for a superpixel segment. Lower values make the interpolation noticeably faster. The original implementation of Hu2017 uses 32. see also: setK Read more
    Get the internal cost, i.e. edge map, used for estimating the edge-aware term. Read more
    Parameter defines the number of nearest-neighbor matches for each superpixel considered, when fitting a locally affine model. see also: setSuperpixelNNCnt Read more
    Parameter to tune enforcement of superpixel smoothness factor used for oversegmentation. Read more
    Parameter to choose superpixel algorithm variant to use: Read more
    Alpha is a parameter defining a global weight for transforming geodesic distance into weight. Read more
    Parameter defining the number of iterations for piece-wise affine model estimation. Read more
    Parameter to choose whether additional refinement of the piece-wise affine models is employed. Read more
    MaxFlow is a threshold to validate the predictions using a certain piece-wise affine model. If the prediction exceeds the threshold, the translational model will be applied instead. see also: setMaxFlow Read more
    Parameter to choose whether the VariationalRefinement post-processing is employed. Read more
    Sets whether the fastGlobalSmootherFilter() post-processing is employed. Read more
    Sets the respective fastGlobalSmootherFilter() parameter. Read more
    Sets the respective fastGlobalSmootherFilter() parameter. Read more
    Enable the M-estimator, or disable it and use the least-squares estimator. Enables the M-estimator by setting the sigma parameters to (3.2, 7.0). Disabling the M-estimator can reduce runtime, while enabling it can improve accuracy. Read more
    Setups learned weights. Read more
    If this flag is set to true then layer will produce @f$ h_t @f$ as second output. @details Shape of the second output is the same as first output. Read more
    If true then variable importance will be calculated and then it can be retrieved by RTrees::getVarImportance. Default value is false. Read more
    The size of the randomly selected subset of features at each tree node and that are used to find the best split(s). If you set it to 0 then the size will be set to the square root of the total number of features. Default value is 0. Read more
    The termination criteria that specifies when the training algorithm stops. Either when the specified number of trees is trained and added to the ensemble or when sufficient accuracy (measured as OOB error) is achieved. Typically, the more trees you have, the better the accuracy. However, the improvement in accuracy generally diminishes and plateaus past a certain number of trees. Also keep in mind that the number of trees increases the prediction time linearly. Default value is TermCriteria(TermCriteria::MAX_ITERS + TermCriteria::EPS, 50, 0.1) Read more
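The termination criteria above combine a tree budget with an OOB-error target. A hedged sketch of that stopping rule in plain Rust (illustrative only, not the RTrees API):

```rust
// Stop training when either the maximum number of trees is reached or
// the out-of-bag error drops to the requested accuracy (epsilon).
// Illustrative sketch of the TermCriteria semantics described above.
fn should_stop(n_trees: usize, oob_error: f64, max_trees: usize, eps: f64) -> bool {
    n_trees >= max_trees || oob_error <= eps
}

fn main() {
    assert!(!should_stop(10, 0.30, 50, 0.1)); // keep training
    assert!(should_stop(50, 0.30, 50, 0.1));  // tree budget reached
    assert!(should_stop(10, 0.05, 50, 0.1));  // accuracy reached early
}
```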
    If true then variable importance will be calculated and then it can be retrieved by RTrees::getVarImportance. Default value is false. Read more
    The size of the randomly selected subset of features at each tree node and that are used to find the best split(s). If you set it to 0 then the size will be set to the square root of the total number of features. Default value is 0. Read more
    The termination criteria that specifies when the training algorithm stops. Either when the specified number of trees is trained and added to the ensemble or when sufficient accuracy (measured as OOB error) is achieved. Typically, the more trees you have, the better the accuracy. However, the improvement in accuracy generally diminishes and plateaus past a certain number of trees. Also keep in mind that the number of trees increases the prediction time linearly. Default value is TermCriteria(TermCriteria::MAX_ITERS + TermCriteria::EPS, 50, 0.1) Read more
    Returns the variable importance array. The method returns the variable importance vector, computed at the training stage when CalculateVarImportance is set to true. If this flag was set to false, the empty matrix is returned. Read more
    Returns the result of each individual tree in the forest. In case the model is a regression problem, the method will return each of the trees’ results for each of the sample cases. If the model is a classifier, it will return a Mat with samples + 1 rows, where the first row gives the class number and the following rows return the votes each class had for each sample. Read more
    Returns next packet with RAW video frame. Read more
    Updates the coded width and height inside format.
    Returns true if the last packet contained a key frame.
    Returns information about video file format.
    Returns any extra data associated with the video source. Read more
    Retrieves the specified property used by the VideoSource. Read more
    Retrieve retina input buffer size Read more
    Retrieve retina output buffer size that can be different from the input if a spatial log transformation is applied Read more
    Try to open an XML retina parameters file to adjust current retina instance setup Read more
    Try to open an XML retina parameters file to adjust current retina instance setup Read more
    Try to open an XML retina parameters file to adjust current retina instance setup Read more
    Returns Read more
    Outputs a string showing the used parameters setup Read more
    Setup the OPL and IPL parvo channels (see biological model) Read more
    Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel Read more
    Method which allows retina to be applied on an input image, Read more
    Method which processes an image with the aim of correcting its luminance: correct backlight problems, enhance details in shadows. Read more
    Accessor of the details channel of the retina (models foveal vision). Read more
    Accessor of the details channel of the retina (models foveal vision). Read more
    Accessor of the motion channel of the retina (models peripheral vision). Read more
    Accessor of the motion channel of the retina (models peripheral vision). Read more
    Activate color saturation as the final step of the color demultiplexing process; this saturation is a sigmoid function applied to each channel of the demultiplexed image. Read more
    Clears all retina buffers Read more
    Activate/deactivate the Magnocellular pathway processing (motion information extraction); by default, it is activated Read more
    Activate/deactivate the Parvocellular pathway processing (contours information extraction); by default, it is activated Read more
    Write xml/yml formatted parameters information Read more
    Write xml/yml formatted parameters information Read more
    Accessor of the motion channel of the retina (models peripheral vision). Read more
    Accessor of the details channel of the retina (models foveal vision). Read more
    applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) Read more
    updates tone mapping behaviors by adjusting the local luminance computation area Read more
    Initializes some data that is cached for later computation. If this function is not called, it will be called the first time normals are computed. Read more
    Apply Ridge detection filter on input image. Read more
    Calls the pipeline in order to perform Euclidean reconstruction. Read more
    Calls the pipeline in order to perform Euclidean reconstruction. Read more
    Calls the pipeline in order to perform Euclidean reconstruction. Read more
    Calls the pipeline in order to perform Euclidean reconstruction. Read more
    Returns the estimated 3d points. Read more
    Returns the estimated camera extrinsic parameters. Read more
    Setter method for reconstruction options. Read more
    Setter method for camera intrinsic options. Read more
    Returns the computed reprojection error.
    Returns the refined camera calibration matrix.
    max keypoints = min(keypointsRatio * img.size().area(), 65535)
    upload host keypoints to device memory
    download keypoints from device to host memory
    download descriptors from device to host memory
    Finds the keypoints using the fast Hessian detector used in SURF Read more
    Finds the keypoints and computes their descriptors using the fast Hessian detector used in SURF Read more
    max keypoints = min(keypointsRatio * img.size().area(), 65535)
    returns the descriptor size in floats (64 or 128)
    returns the default norm type
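The keypoint budget stated above, max keypoints = min(keypointsRatio * img.size().area(), 65535), can be sketched in plain Rust (illustrative only, not the GPU SURF API):

```rust
// Keypoint budget formula: a fraction of the image area, clamped to
// the 16-bit limit of 65535. Illustrative sketch of the stated formula.
fn max_keypoints(keypoints_ratio: f32, img_area: usize) -> usize {
    ((keypoints_ratio * img_area as f32) as usize).min(65535)
}

fn main() {
    // 0.125 * 640 * 480 = 38400, under the clamp
    assert_eq!(max_keypoints(0.125, 640 * 480), 38400);
    // 1920 * 1080 = 2_073_600, clamped to 65535
    assert_eq!(max_keypoints(1.0, 1920 * 1080), 65535);
}
```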
    Type of an SVM formulation. See SVM::Types. Default value is SVM::C_SVC. Read more
    Parameter gamma of a kernel function. For SVM::POLY, SVM::RBF, SVM::SIGMOID or SVM::CHI2. Default value is 1. Read more
    Parameter coef0 of a kernel function. For SVM::POLY or SVM::SIGMOID. Default value is 0. Read more
    Parameter degree of a kernel function. For SVM::POLY. Default value is 0. Read more
    Parameter C of an SVM optimization problem. For SVM::C_SVC, SVM::EPS_SVR or SVM::NU_SVR. Default value is 0. Read more
    Parameter nu of an SVM optimization problem. For SVM::NU_SVC, SVM::ONE_CLASS or SVM::NU_SVR. Default value is 0. Read more
    Parameter epsilon of an SVM optimization problem. For SVM::EPS_SVR. Default value is 0. Read more
    Optional weights in the SVM::C_SVC problem, assigned to particular classes. They are multiplied by C so the parameter C of class i becomes classWeights(i) * C. Thus these weights affect the misclassification penalty for different classes. The larger the weight, the larger the penalty on misclassification of data from the corresponding class. Default value is empty Mat. Read more
    Termination criteria of the iterative SVM training procedure which solves a partial case of a constrained quadratic optimization problem. You can specify tolerance and/or the maximum number of iterations. Default value is TermCriteria( TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, FLT_EPSILON ). Read more
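The classWeights description above states that the effective C for class i is classWeights(i) * C. A hedged sketch of that scaling in plain Rust (illustrative only, not the ml module API):

```rust
// Per-class misclassification penalty: effective C for class i is
// class_weights[i] * c, as stated in the classWeights description.
// Illustrative sketch only.
fn effective_c(c: f32, class_weights: &[f32]) -> Vec<f32> {
    class_weights.iter().map(|w| w * c).collect()
}

fn main() {
    // Penalize misclassifying the rare class (index 1) five times harder.
    assert_eq!(effective_c(10.0, &[1.0, 5.0]), vec![10.0, 50.0]);
}
```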
    Initialize with one of predefined kernels. See SVM::KernelTypes. Read more
    Initialize with custom kernel. See SVM::Kernel class for implementation details Read more
    Trains an SVM with optimal parameters. Read more
    Trains an SVM with optimal parameters Read more
    Type of an SVM formulation. See SVM::Types. Default value is SVM::C_SVC. Read more
    Parameter gamma of a kernel function. For SVM::POLY, SVM::RBF, SVM::SIGMOID or SVM::CHI2. Default value is 1. Read more
    Parameter coef0 of a kernel function. For SVM::POLY or SVM::SIGMOID. Default value is 0. Read more
    Parameter degree of a kernel function. For SVM::POLY. Default value is 0. Read more
    Parameter C of an SVM optimization problem. For SVM::C_SVC, SVM::EPS_SVR or SVM::NU_SVR. Default value is 0. Read more
    Parameter nu of an SVM optimization problem. For SVM::NU_SVC, SVM::ONE_CLASS or SVM::NU_SVR. Default value is 0. Read more
    Parameter epsilon of an SVM optimization problem. For SVM::EPS_SVR. Default value is 0. Read more
    Optional weights in the SVM::C_SVC problem, assigned to particular classes. They are multiplied by C so the parameter C of class i becomes classWeights(i) * C. Thus these weights affect the misclassification penalty for different classes. The larger the weight, the larger the penalty on misclassification of data from the corresponding class. Default value is empty Mat. Read more
    Termination criteria of the iterative SVM training procedure which solves a partial case of a constrained quadratic optimization problem. You can specify tolerance and/or the maximum number of iterations. Default value is TermCriteria( TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, FLT_EPSILON ). Read more
    Type of an SVM kernel. See SVM::KernelTypes. Default value is SVM::RBF. Read more
    Retrieves all the support vectors Read more
    Retrieves all the uncompressed support vectors of a linear SVM Read more
    Retrieves the decision function Read more
    Returns Read more
    Returns Read more
    Function sets optimal parameter values for the chosen SVM SGD model. Read more
    Algorithm type, one of SVMSGD::SvmsgdType. Read more
    Margin type, one of SVMSGD::MarginType. Read more
    Parameter marginRegularization of an SVMSGD optimization problem. Read more
    Parameter initialStepSize of an SVMSGD optimization problem. Read more
    Parameter stepDecreasingPower of an SVMSGD optimization problem. Read more
    Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Read more
    Algorithm type, one of SVMSGD::SvmsgdType. Read more
    Margin type, one of SVMSGD::MarginType. Read more
    Parameter marginRegularization of an SVMSGD optimization problem. Read more
    Parameter initialStepSize of an SVMSGD optimization problem. Read more
    Parameter stepDecreasingPower of an SVMSGD optimization problem. Read more
    Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Read more
    Computes the saliency map for the given image. Returns true if the saliency map is computed, false otherwise. Read more
    Computes the saliency map for the given image. Returns true if the saliency map is computed, false otherwise. Read more
    Computes the saliency map for the given image. Returns true if the saliency map is computed, false otherwise. Read more
    Computes the saliency map for the given image. Returns true if the saliency map is computed, false otherwise. Read more
    Returns the actual superpixel segmentation from the last image processed using iterate. Read more
    Calculates the superpixel segmentation on a given image with the initialized parameters in the ScanSegment object. Read more
    Returns the segmentation labeling of the image. Read more
    Returns the mask of the superpixel segmentation stored in the ScanSegment object. Read more
    Set an image used by switch* functions to initialize the class Read more
    Initialize the class with the ‘Single strategy’ parameters described in uijlings2013selective. Read more
    Initialize the class with the ‘Selective search fast’ parameters described in uijlings2013selective. Read more
    Initialize the class with the ‘Selective search fast’ parameters described in uijlings2013selective. Read more
    Add a new image in the list of images to process. Read more
    Clear the list of images to process
    Add a new graph segmentation in the list of graph segmentations to process. Read more
    Clear the list of graph segmentations to process.
    Add a new strategy in the list of strategies to process. Read more
    Clear the list of strategies to process.
    Based on all images, graph segmentations and strategies, computes all possible rects and returns them Read more
    Set an initial image, with a segmentation. Read more
    Return the score between two regions (between 0 and 1) Read more
    Inform the strategy that two regions will be merged Read more
    Set an initial image, with a segmentation. Read more
    Return the score between two regions (between 0 and 1) Read more
    Inform the strategy that two regions will be merged Read more
    Set an initial image, with a segmentation. Read more
    Return the score between two regions (between 0 and 1) Read more
    Inform the strategy that two regions will be merged Read more
    Set an initial image, with a segmentation. Read more
    Return the score between two regions (between 0 and 1) Read more
    Inform the strategy that two regions will be merged Read more
    Set an initial image, with a segmentation. Read more
    Return the score between two regions (between 0 and 1) Read more
    Inform the strategy that two regions will be merged Read more
    Set an initial image, with a segmentation. Read more
    Return the score between two regions (between 0 and 1) Read more
    Inform the strategy that two regions will be merged Read more
    Add a new sub-strategy Read more
    Remove all sub-strategies
    Establish the number of angular bins for the Shape Context Descriptor used in the shape matching pipeline. Read more
    Establish the number of radial bins for the Shape Context Descriptor used in the shape matching pipeline. Read more
    Set the inner radius of the shape context descriptor. Read more
    Set the outer radius of the shape context descriptor. Read more
    Set the weight of the shape context distance in the final value of the shape distance. The shape context distance between two shapes is defined as the symmetric sum of shape context matching costs over best matching points. The final value of the shape distance is a user-defined linear combination of the shape context distance, an image appearance distance, and a bending energy. Read more
    Set the weight of the Image Appearance cost in the final value of the shape distance. The image appearance cost is defined as the sum of squared brightness differences in Gaussian windows around corresponding image points. The final value of the shape distance is a user-defined linear combination of the shape context distance, an image appearance distance, and a bending energy. If this value is set to a number different from 0, it is mandatory to set the images that correspond to each shape. Read more
    Set the weight of the Bending Energy in the final value of the shape distance. The bending energy definition depends on what transformation is being used to align the shapes. The final value of the shape distance is a user-defined linear combination of the shape context distance, an image appearance distance, and a bending energy. Read more
    Set the images that correspond to each shape. These images are used in the calculation of the Image Appearance cost. Read more
    Set the algorithm used for building the shape context descriptor cost matrix. Read more
    Set the value of the standard deviation for the Gaussian window for the image appearance cost. Read more
    Set the algorithm used for aligning the shapes. Read more
    Compute the shape distance between two shapes defined by its contours. Read more
    Compute the shape distance between two shapes defined by its contours. Read more
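The weight setters above define the final shape distance as a linear combination of three terms. A hedged sketch of that combination in plain Rust (illustrative only, not the shape module API):

```rust
// Final shape distance as described above: a user-defined linear
// combination of the shape context distance, the image appearance
// distance, and the bending energy. Illustrative sketch only.
fn shape_distance(
    shape_context_weight: f32, shape_context_dist: f32,
    image_appearance_weight: f32, image_appearance_dist: f32,
    bending_energy_weight: f32, bending_energy: f32,
) -> f32 {
    shape_context_weight * shape_context_dist
        + image_appearance_weight * image_appearance_dist
        + bending_energy_weight * bending_energy
}

fn main() {
    // 1.0*0.4 + 0.5*0.2 + 0.3*0.1 = 0.53
    let d = shape_distance(1.0, 0.4, 0.5, 0.2, 0.3, 0.1);
    assert!((d - 0.53).abs() < 1e-6);
}
```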
    Estimate the transformation parameters of the current transformer algorithm, based on point matches. Read more
    Apply a transformation, given a pre-estimated transformation parameters. Read more
    Estimate the transformation parameters of the current transformer algorithm, based on point matches. Read more
    Apply a transformation, given a pre-estimated transformation parameters. Read more
    Estimate the transformation parameters of the current transformer algorithm, based on point matches. Read more
    Apply a transformation, given a pre-estimated transformation parameters. Read more
    Apply a transformation, given a pre-estimated transformation parameters, to an Image. Read more
    Apply a transformation, given a pre-estimated transformation parameters, to an Image. Read more
    Apply a transformation, given a pre-estimated transformation parameters, to an Image. Read more
    Input image range minimum value Read more
    Input image range maximum value Read more
    Output image range minimum value Read more
    Output image range maximum value Read more
    Percent of top/bottom values to ignore Read more
    Input image range minimum value Read more
    Input image range maximum value Read more
    Output image range minimum value Read more
    Output image range maximum value Read more
    Percent of top/bottom values to ignore Read more
    Compute a wrapped phase map from sinusoidal patterns. Read more
    Unwrap the wrapped phase map to remove phase ambiguities. Read more
    Find correspondences between the two devices thanks to unwrapped phase maps. Read more
    Compute the data modulation term. Read more
    Vector of slice ranges. Read more
    Vector of slice ranges. Read more
    Interpolate input sparse matches. Read more
    Interpolate input sparse matches. Read more
    Calculates a sparse optical flow. Read more
    Calculates a sparse optical flow. Read more
    Calculates a sparse optical flow. Read more
    @copydoc DenseRLOFOpticalFlow::setRLOFOpticalFlowParameter
    Threshold for the forward-backward confidence check. For each feature point a motion vector is computed; if the forward-backward error is larger than the threshold given by this function, the status will not be used by the following vector field interpolation. Note that the forward-backward test is only applied if the threshold > 0, which may double the runtime of the motion estimation. See also: setForwardBackward Read more
    @copydoc DenseRLOFOpticalFlow::setRLOFOpticalFlowParameter Read more
    Threshold for the forward-backward confidence check. For each feature point a motion vector is computed; if the forward-backward error is larger than the threshold given by this function, the status will not be used by the following vector field interpolation. Note that the forward-backward test is only applied if the threshold > 0, which may double the runtime of the motion estimation. See also: setForwardBackward Read more
    Number of copies that will be produced (ignored when negative).
    Number of copies that will be produced (ignored when negative).
    overloaded interface method
    overloaded interface method
    Returns label with minimal distance
    Returns minimal distance value
    Return results as vector Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Trains the statistical model Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    Returns the number of variables in training samples
    Returns true if the model is trained
    Returns true if the model is classifier
    Computes error on the training or test dataset Read more
    Predicts response(s) for the provided sample(s) Read more
    This function performs a binary map of the given saliency map. This is obtained as follows: Read more
    This function performs a binary map of the given saliency map. This is obtained as follows: Read more
    Computes disparity map for the specified stereo pair Read more
    Computes disparity map for the specified stereo pair Read more
    Computes disparity map for the specified stereo pair Read more
    Computes disparity map for the specified stereo pair Read more
    Computes disparity map for the specified stereo pair Read more
    Computes disparity map for the specified stereo pair Read more
    Computes disparity map for the specified stereo pair Read more
    These functions try to match the given images and to estimate the rotation of each camera. Read more
    These functions restore the camera rotation and camera intrinsics of each camera, which can be retrieved with the @ref Stitcher::cameras call Read more
    These functions restore the camera rotation and camera intrinsics of each camera, which can be retrieved with the @ref Stitcher::cameras call Read more
    These functions try to compose the given images (or images stored internally from the other function calls) into the final pano under the assumption that the image transformations were estimated before. Read more
    These functions try to compose the given images (or images stored internally from the other function calls) into the final pano under the assumption that the image transformations were estimated before. Read more
    These functions try to stitch the given images. Read more
    These functions try to stitch the given images. Read more
    The function detects edges in src and draws them to dst. Read more
    The function computes orientation from the edge image. Read more
    The function performs non-maximum suppression on the edge image, suppressing edges where the edge response is stronger in the orthogonal direction. Read more
    Generates the structured light pattern to project. Read more
    Generates the structured light pattern to project. Read more
    Decodes the structured light pattern, generating a disparity map Read more
    Decodes the structured light pattern, generating a disparity map Read more
    Calculates the superpixel segmentation on a given image with the initialized parameters in the SuperpixelLSC object. Read more
    Enforce label connectivity. Read more
    Calculates the actual number of superpixels on a given segmentation computed and stored in SuperpixelLSC object. Read more
    Returns the segmentation labeling of the image. Read more
    Returns the mask of the superpixel segmentation stored in SuperpixelLSC object. Read more
    Calculates the superpixel segmentation on a given image stored in SuperpixelSEEDS object. Read more
    Calculates the superpixel segmentation on a given image with the initialized parameters in the SuperpixelSEEDS object. Read more
    Returns the segmentation labeling of the image. Read more
    Returns the mask of the superpixel segmentation stored in SuperpixelSEEDS object. Read more
    Calculates the superpixel segmentation on a given image with the initialized parameters in the SuperpixelSLIC object. Read more
    Enforce label connectivity. Read more
    Calculates the actual number of superpixels on a given segmentation computed and stored in SuperpixelSLIC object. Read more
    Returns the segmentation labeling of the image. Read more
    Returns the mask of the superpixel segmentation stored in SuperpixelSLIC object. Read more
    Flow smoothness Read more
    Gradient constancy importance Read more
    Pyramid scale factor Read more
    Number of lagged non-linearity iterations (inner loop) Read more
    Number of warping iterations (number of pyramid levels) Read more
    Number of linear system solver iterations Read more
    Flow smoothness Read more
    Gradient constancy importance Read more
    Pyramid scale factor Read more
    Number of lagged non-linearity iterations (inner loop) Read more
    Number of warping iterations (number of pyramid levels) Read more
    Number of linear system solver iterations Read more
    C++ default parameters Read more
    C++ default parameters Read more
    C++ default parameters Read more
    C++ default parameters Read more
    C++ default parameters Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    See also Read more
    Set input frame source for Super Resolution algorithm. Read more
    Process next frame from input and return output result. Read more
    Clear all inner buffers.
    Scale factor Read more
    Iterations count Read more
    Asymptotic value of steepest descent method Read more
    Weight parameter to balance data term and smoothness term Read more
    Parameter of spatial distribution in Bilateral-TV Read more
    Kernel size of Bilateral-TV filter Read more
    Gaussian blur kernel size Read more
    Gaussian blur sigma Read more
    Radius of the temporal search area Read more
    Dense optical flow algorithm Read more
    Scale factor Read more
    Iterations count Read more
    Asymptotic value of steepest descent method Read more
    Weight parameter to balance data term and smoothness term Read more
    Parameter of spatial distribution in Bilateral-TV Read more
    Kernel size of Bilateral-TV filter Read more
    Gaussian blur kernel size Read more
    Gaussian blur sigma Read more
    Radius of the temporal search area Read more
    Dense optical flow algorithm Read more
    Obtain the next frame in the sequence. Read more
    Method that provides a quick and simple interface to detect text inside an image Read more
    This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Read more
    Set the regularization parameter for relaxing the exact interpolation requirements of the TPS algorithm. Read more
    C++ default parameters Read more
    Tonemaps image Read more
    Tonemaps image Read more
    Tonemaps image Read more
    Tonemaps image Read more
    Tonemaps image Read more
    C++ default parameters Read more
    C++ default parameters Read more
    Initialize the tracker with a known bounding box that surrounds the target Read more
    Update the tracker and find the new most likely bounding box for the target Read more
    Initialize the tracker with a known bounding box that surrounds the target Read more
    Update the tracker and find the new most likely bounding box for the target Read more
    Initialize the tracker with a known bounding box that surrounds the target Read more
    Update the tracker and find the new most likely bounding box for the target Read more
    Initialize the tracker with a known bounding box that surrounds the target Read more
    Update the tracker and find the new most likely bounding box for the target Read more
    Initialize the tracker with a known bounding box that surrounds the target Read more
    Update the tracker and find the new most likely bounding box for the target Read more
    Return tracking score
    C++ default parameters Read more
    Splits the training data into the training and test parts Read more
    Splits the training data into the training and test parts Read more
    Returns matrix of train samples Read more
    Returns the vector of responses Read more
    Returns the vector of normalized categorical responses Read more
    Returns the vector of class labels Read more
    Returns matrix of test samples
    Returns vector of symbolic names captured in loadFromCSV()
    return the size of the managed input and output images
    try to open an XML segmentation parameters file to adjust current segmentation instance setup Read more
    try to open an XML segmentation parameters file to adjust current segmentation instance setup Read more
    try to open an XML segmentation parameters file to adjust current segmentation instance setup Read more
    return the current parameters setup
    parameters setup display method Read more
    main processing method, get the result using getSegmentationPicture() Read more
    access function returning the last segmentation result: a boolean picture resampled between 0 and 255 for display purposes Read more
    cleans all the buffers of the instance
    write xml/yml formatted parameters information Read more
    write xml/yml formatted parameters information Read more
    @ref calc function overload to handle separate horizontal (u) and vertical (v) flow components (to avoid extra splits/merges) Read more
    Number of outer (fixed-point) iterations in the minimization procedure. Read more
    Number of inner successive over-relaxation (SOR) iterations in the minimization procedure to solve the respective linear system. Read more
    Relaxation factor in SOR Read more
    Weight of the smoothness term Read more
    Weight of the color constancy term Read more
    Weight of the gradient constancy term Read more
    Number of outer (fixed-point) iterations in the minimization procedure. Read more
    Number of inner successive over-relaxation (SOR) iterations in the minimization procedure to solve the respective linear system. Read more
    Relaxation factor in SOR Read more
    Weight of the smoothness term Read more
    Weight of the color constancy term Read more
    Weight of the gradient constancy term Read more
    Grabs, decodes and returns the next video frame. Read more
    Grabs the next frame from the video source. Read more
    Sets a property in the VideoReader. Read more
    Set the desired ColorFormat for the frame returned by nextFrame()/retrieve(). Read more
    Returns information about video file format.
    Returns previously grabbed video data. Read more
    Returns previously grabbed encoded video data. Read more
    Returns the next video frame. Read more
    Returns the specified VideoReader property Read more
    C++ default parameters Read more
    Retrieves the specified property used by the VideoSource. Read more
    Writes the next video frame. Read more
    Read detector from FileNode. Read more
    Train WaldBoost detector Read more
    Detect objects on image using WaldBoost detector Read more
    Write detector to FileStorage. Read more
    Applies white balancing to the input image Read more
    Applies white balancing to the input image Read more
    Applies white balancing to the input image Read more
    set window background to a custom image/color Read more
    set window background to a custom image/color Read more
    enable an ordered chain of full-screen post processing effects Read more
    place an entity of a mesh in the scene Read more
    remove an entity from the scene Read more
    set the property of an entity to the given value Read more
    set the property of an entity to the given value Read more
    get the property of an entity Read more
    convenience method to visualize a camera position Read more
    creates a point light in the scene Read more
    update entity pose by transformation in the parent coordinate space. (pre-rotation) Read more
    set entity pose in the world coordinate space. Read more
    Retrieves the current pose of an entity Read more
    get a list of available entity animations Read more
    play entity animation Read more
    stop entity animation Read more
    read back the image generated by the last call to @ref waitKey
    read back the texture of an active compositor Read more
    get the depth for the current frame. Read more
    convenience method to force the “up” axis to stay fixed Read more
    Sets the current camera pose Read more
    convenience method to orient the camera to a specific entity Read more
    convenience method to orient an entity to a specific entity. If target is an empty string the entity looks at the given offset point Read more
    Retrieves the current camera pose Read more
    set intrinsics of the camera Read more
    render this window, but do not swap buffers. Automatically called by @ref ovis::waitKey
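Many of the methods listed above are reached through `Ptr`'s `Deref` implementation, which forwards method calls to the wrapped inner type (as the `Methods from Deref<Target = f32>` section shows for `Ptr<f32>`). The mechanism can be sketched with a plain-stdlib analogy; `FakePtr` below is a hypothetical stand-in, not the opencv crate's actual implementation:

```rust
use std::cmp::Ordering;
use std::ops::Deref;

// Hypothetical stand-in for opencv's Ptr<T>: a smart pointer whose
// Deref impl forwards method calls to the wrapped value.
pub struct FakePtr<T> {
    inner: Box<T>,
}

impl<T> FakePtr<T> {
    pub fn new(value: T) -> Self {
        FakePtr { inner: Box::new(value) }
    }
}

impl<T> Deref for FakePtr<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.inner
    }
}

// Deref coercion lets us call f32 methods directly on the wrapper,
// just as Ptr<f32> exposes f32::total_cmp.
pub fn demo() -> Ordering {
    let p = FakePtr::new(0.0f32);
    // total_cmp follows the IEEE 754 totalOrder predicate, so it
    // distinguishes +0.0 from -0.0 (unlike PartialOrd).
    p.total_cmp(&-0.0f32)
}
```

Because `Deref` only yields a shared reference here, this analogy covers read-only forwarding; the real `Ptr` also exposes raw-pointer accessors for the inner object.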

    Auto Trait Implementations§

    Blanket Implementations§

    Gets the TypeId of self. Read more
    Immutably borrows from an owned value. Read more
    Mutably borrows from an owned value. Read more

    Returns the argument unchanged.

    Calls U::from(self).

    That is, this conversion is whatever the implementation of From<T> for U chooses to do.

    The type returned in the event of a conversion error.
    Performs the conversion.
    The type returned in the event of a conversion error.
    Performs the conversion.