Module opencv::optflow
Image Processing
This module includes image-processing functions.
Image Filtering
Functions and classes described in this section are used to perform various linear or non-linear
filtering operations on 2D images (represented as Mat's). It means that for each pixel location
(x, y) in the source image (normally, rectangular), its neighborhood is considered and used to
compute the response. In case of a linear filter, it is a weighted sum of pixel values. In case of
morphological operations, it is the minimum or maximum values, and so on. The computed response is
stored in the destination image at the same location (x, y). It means that the output image
will be of the same size as the input image. Normally, the functions support multi-channel arrays,
in which case every channel is processed independently. Therefore, the output image will also have
the same number of channels as the input one.
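For instance, a Gaussian smoothing call with the Rust opencv crate looks like this (a minimal sketch; the input file name is a placeholder):

```rust
use opencv::{core, imgcodecs, imgproc, Result};

fn main() -> Result<()> {
    // Load a 3-channel image; each channel is filtered independently.
    let src = imgcodecs::imread("input.png", imgcodecs::IMREAD_COLOR)?;
    let mut dst = core::Mat::default();
    // 5x5 Gaussian kernel: each output pixel is a weighted sum of its
    // neighborhood; dst has the same size and channel count as src.
    imgproc::gaussian_blur(
        &src,
        &mut dst,
        core::Size::new(5, 5),
        0.0, // sigma_x: derived from the kernel size when 0
        0.0, // sigma_y: follows sigma_x when 0
        core::BORDER_DEFAULT,
    )?;
    Ok(())
}
```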
Another common feature of the functions and classes described in this section is that, unlike
simple arithmetic functions, they need to extrapolate values of some non-existing pixels. For
example, if you want to smooth an image using a Gaussian filter, then, when
processing the left-most pixels in each row, you need pixels to the left of them, that is, outside
of the image. You can let these pixels be the same as the left-most image pixels ("replicated
border" extrapolation method), or assume that all the non-existing pixels are zeros ("constant
border" extrapolation method), and so on. OpenCV enables you to specify the extrapolation method.
For details, see BorderTypes.
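The same border modes can also be applied explicitly by padding an image up front; a minimal sketch with the Rust opencv crate:

```rust
use opencv::{core, Result};

fn pad(src: &core::Mat) -> Result<core::Mat> {
    let mut padded = core::Mat::default();
    // Replicate the outermost pixels into a 2-pixel border
    // ("replicated border" extrapolation).
    core::copy_make_border(
        src,
        &mut padded,
        2, 2, 2, 2, // top, bottom, left, right
        core::BORDER_REPLICATE,
        core::Scalar::default(), // fill value, used only by BORDER_CONSTANT
    )?;
    Ok(padded)
}
```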
Depth combinations
| Input depth (src.depth()) | Output depth (ddepth) |
|---|---|
| CV_8U | -1/CV_16S/CV_32F/CV_64F |
| CV_16U/CV_16S | -1/CV_32F/CV_64F |
| CV_32F | -1/CV_32F/CV_64F |
| CV_64F | -1/CV_64F |
Note: when ddepth=-1, the output image will have the same depth as the source.
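For example, per the table above, an 8-bit source can be filtered into a 16-bit signed destination so that negative filter responses survive; a sketch with the Rust opencv crate:

```rust
use opencv::{core, imgproc, Result};

fn sobel_x(src: &core::Mat) -> Result<core::Mat> {
    let mut dx = core::Mat::default();
    // CV_8U input with ddepth = CV_16S: a valid combination from the
    // table above that keeps negative derivative values.
    imgproc::sobel(
        src,
        &mut dx,
        core::CV_16S, // ddepth
        1, 0,         // first derivative in x, none in y
        3,            // aperture size
        1.0, 0.0,     // scale, delta
        core::BORDER_DEFAULT,
    )?;
    Ok(dx)
}
```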
Geometric Image Transformations
The functions in this section perform various geometrical transformations of 2D images. They do not
change the image content but deform the pixel grid and map this deformed grid to the destination
image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from
destination to the source. That is, for each pixel (x, y) of the destination image, the
functions compute coordinates of the corresponding "donor" pixel in the source image and copy the
pixel value:

dst(x, y) = src(f_x(x, y), f_y(x, y))

In case when you specify the forward mapping <g_x, g_y>: src → dst, the OpenCV functions first compute the corresponding inverse mapping <f_x, f_y>: dst → src and then use the above formula.
The actual implementations of the geometrical transformations, from the most generic remap to the simplest and fastest resize, need to solve two main problems with the above formula:
- Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some (x, y), either one of f_x(x, y), or f_y(x, y), or both of them may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method BORDER_TRANSPARENT. This means that the corresponding pixels in the destination image will not be modified at all.
- Interpolation of pixel values. Usually f_x(x, y) and f_y(x, y) are floating-point numbers. This means that <f_x, f_y> can be either an affine or perspective transformation, or radial lens distortion correction, and so on. So, a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel can be used. This is called a nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated interpolation methods, where a polynomial function is fit into some neighborhood of the computed pixel (f_x(x, y), f_y(x, y)), and then the value of the polynomial at (f_x(x, y), f_y(x, y)) is taken as the interpolated pixel value. In OpenCV, you can choose between several interpolation methods; see resize for details and the sketch after the note below.
Note: The geometrical transformations do not work with CV_8S or CV_32S images.
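As a concrete instance (a minimal sketch with the Rust opencv crate; the helper name and angle are illustrative), a rotation is specified as a forward transform, while the implementation samples the source in reverse using the interpolation and border mode chosen at the call site:

```rust
use opencv::{core, imgproc, prelude::*, Result};

fn rotate(src: &core::Mat) -> Result<core::Mat> {
    let center = core::Point2f::new(src.cols() as f32 / 2.0, src.rows() as f32 / 2.0);
    // 2x3 affine matrix for a 30-degree rotation about the image center.
    let m = imgproc::get_rotation_matrix_2d(center, 30.0, 1.0)?;
    let mut dst = core::Mat::default();
    imgproc::warp_affine(
        src,
        &mut dst,
        &m,
        src.size()?,
        imgproc::INTER_LINEAR, // bilinear interpolation at fractional coordinates
        core::BORDER_CONSTANT, // non-existing source pixels take the border value
        core::Scalar::default(),
    )?;
    Ok(dst)
}
```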
Miscellaneous Image Transformations
Drawing Functions
Drawing functions work with matrices/images of arbitrary depth. The boundaries of the shapes can be rendered with antialiasing (implemented only for 8-bit images for now). All the functions include the parameter color that uses an RGB value (that may be constructed with the Scalar constructor) for color images and brightness for grayscale images. For color images, the channel ordering is normally Blue, Green, Red. This is what imshow, imread, and imwrite expect. So, if you form a color using the Scalar constructor, it should look like:

Scalar(blue_component, green_component, red_component[, alpha_component])
If you are using your own image rendering and I/O functions, you can use any channel ordering. The drawing functions process each channel independently and do not depend on the channel order or even on the used color space. The whole image can be converted from BGR to RGB or to a different color space using cvtColor.
If a drawn figure is partially or completely outside the image, the drawing functions clip it. Also,
many drawing functions can handle pixel coordinates specified with sub-pixel accuracy. This means
that the coordinates can be passed as fixed-point numbers encoded as integers. The number of
fractional bits is specified by the shift parameter and the real point coordinates are calculated as
Point(x, y) → Point2f(x · 2^(-shift), y · 2^(-shift)). This feature is
especially effective when rendering antialiased shapes.
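For example (a sketch with the Rust opencv crate; the coordinates are illustrative), shift = 1 gives one fractional bit, so an integer coordinate of 129 encodes 64.5:

```rust
use opencv::{core, imgproc, Result};

fn draw(img: &mut core::Mat) -> Result<()> {
    // shift = 1: one fractional bit, so 129 encodes 64.5 and 120 encodes 60.0.
    imgproc::circle(
        img,
        core::Point::new(129, 129), // center (64.5, 64.5) in fixed point
        120,                        // radius 60.0 in fixed point
        core::Scalar::new(255.0, 0.0, 0.0, 0.0), // blue, in BGR order
        2,                          // thickness
        imgproc::LINE_AA,           // antialiased boundary
        1,                          // shift: number of fractional bits
    )?;
    Ok(())
}
```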
Note: The functions do not support alpha-transparency when the target image is 4-channel. In this case, the color[3] is simply copied to the repainted pixels. Thus, if you want to paint semi-transparent shapes, you can paint them in a separate buffer and then blend it with the main image.
Color Space Conversions
ColorMaps in OpenCV
Human perception isn't built for observing fine changes in grayscale images. Human eyes are more sensitive to observing changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV now comes with various colormaps to enhance the visualization in your computer vision application.
In OpenCV you only need applyColorMap to apply a colormap on a given image. The bundled sample snippets/imgproc_applyColorMap.cpp reads the path to an image from the command line, applies a Jet colormap on it, and shows the result.
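A minimal Rust equivalent with the opencv crate (a sketch; the window name and grayscale read flag are illustrative choices):

```rust
use opencv::{core, highgui, imgcodecs, imgproc, Result};

fn main() -> Result<()> {
    // Read the image path from the command line and load it as grayscale.
    let path = std::env::args().nth(1).expect("usage: colormap <image>");
    let src = imgcodecs::imread(&path, imgcodecs::IMREAD_GRAYSCALE)?;
    // Apply the Jet colormap and display the result.
    let mut dst = core::Mat::default();
    imgproc::apply_color_map(&src, &mut dst, imgproc::COLORMAP_JET)?;
    highgui::imshow("colorMap", &dst)?;
    highgui::wait_key(0)?;
    Ok(())
}
```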
See also
#ColormapTypes
Planar Subdivision
The Subdiv2D class described in this section is used to perform various planar subdivisions on a set of 2D points (represented as a vector of Point2f). OpenCV subdivides a plane into triangles using Delaunay's algorithm, which corresponds to the dual graph of the Voronoi diagram. In the figure below, the Delaunay triangulation is marked with black lines and the Voronoi diagram with red lines.

The subdivisions can be used for the 3D piece-wise transformation of a plane, morphing, fast location of points on the plane, building special graphs (such as NNG, RNG), and so forth.
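A minimal Rust sketch with the opencv crate (the bounding rectangle and point values are arbitrary):

```rust
use opencv::{core, imgproc, prelude::*, Result};

fn main() -> Result<()> {
    // Subdivide the plane bounded by a 600x600 rectangle.
    let mut subdiv = imgproc::Subdiv2D::new(core::Rect::new(0, 0, 600, 600))?;
    for &(x, y) in &[(100.0, 100.0), (300.0, 200.0), (150.0, 400.0), (450.0, 350.0)] {
        subdiv.insert(core::Point2f::new(x, y))?;
    }
    // Each Vec6f holds the three vertices of one Delaunay triangle.
    let mut triangles = core::Vector::<core::Vec6f>::new();
    subdiv.get_triangle_list(&mut triangles)?;
    println!("{} triangles", triangles.len());
    Ok(())
}
```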
Histograms
Structural Analysis and Shape Descriptors
Motion Analysis and Object Tracking
Feature Detection
Object Detection
C API
Hardware Acceleration Layer
Modules
| prelude |
Structs
| GPCDetails | |
| GPCMatchingParams | Class encapsulating matching parameters. |
| GPCPatchDescriptor | |
| GPCPatchSample | |
| GPCTrainingParams | Class encapsulating training parameters. |
| GPCTrainingSamples | Class encapsulating training samples. |
| GPCTree | Class for individual tree. |
| GPCTree_Node | |
| OpticalFlowPCAFlow | PCAFlow algorithm. |
| PCAPrior | This class can be used for imposing a learned prior on the resulting optical flow. The solution will be regularized according to this prior. You need to generate an appropriate prior file with the "learn_prior.py" script beforehand. |
| RLOFOpticalFlowParameter | This is used to store and set up the parameters of the robust local optical flow (RLOF) algorithm. |
Enums
| GPCDescType | Descriptor types for the Global Patch Collider. |
| InterpolationType | |
| SolverType | |
| SupportRegionType |
Constants
| GPC_DESCRIPTOR_DCT | Better quality but slow |
| GPC_DESCRIPTOR_WHT | Worse quality but much faster |
| INTERP_EPIC | Edge-preserving interpolation using ximgproc::EdgeAwareInterpolator, see Revaud2015, Geistert2016. |
| INTERP_GEO | Fast geodesic interpolation, see Geistert2016. |
| INTERP_RIC | SLIC-based robust interpolation using ximgproc::RICInterpolator, see Hu2017. |
| SR_CROSS | Apply an adaptive support region obtained by cross-based segmentation as described in Senst2014. |
| SR_FIXED | Apply a constant support region. |
| ST_BILINEAR | Apply optimized iterative refinement based on bilinear equation solutions as described in Senst2013. |
| ST_STANDART | Apply standard iterative refinement. |
Traits
| DenseRLOFOpticalFlow | Fast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. |
| DualTVL1OpticalFlow | "Dual TV L1" Optical Flow Algorithm. |
| GPCDetailsTrait | |
| GPCPatchDescriptorTrait | |
| GPCPatchSampleTrait | |
| GPCTrainingSamplesTrait | Class encapsulating training samples. |
| GPCTreeTrait | Class for individual tree. |
| OpticalFlowPCAFlowTrait | PCAFlow algorithm. |
| PCAPriorTrait | This class can be used for imposing a learned prior on the resulting optical flow. The solution will be regularized according to this prior. You need to generate an appropriate prior file with the "learn_prior.py" script beforehand. |
| RLOFOpticalFlowParameterTrait | This is used to store and set up the parameters of the robust local optical flow (RLOF) algorithm. |
| SparseRLOFOpticalFlow | Class used for calculating sparse optical flow and feature tracking with robust local optical flow (RLOF) algorithms. |
Functions
| calc_global_orientation | Calculates a global motion orientation in a selected region. |
| calc_motion_gradient | Calculates a gradient orientation of a motion history image. |
| calc_optical_flow_dense_rlof | Fast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. |
| calc_optical_flow_sf | Calculate an optical flow using the "SimpleFlow" algorithm. |
| calc_optical_flow_sf_1 | Calculate an optical flow using the "SimpleFlow" algorithm. |
| calc_optical_flow_sparse_rlof | Calculates fast optical flow for a sparse feature set using the robust local optical flow (RLOF) similar to optflow::calcOpticalFlowPyrLK(). |
| calc_optical_flow_sparse_to_dense | Fast dense optical flow based on PyrLK sparse matches interpolation. |
| create_opt_flow_deep_flow | DeepFlow optical flow algorithm implementation. |
| create_opt_flow_dense_rlof | Additional interface to the Dense RLOF algorithm - optflow::calcOpticalFlowDenseRLOF() |
| create_opt_flow_dual_tvl1 | Creates an instance of cv::DenseOpticalFlow |
| create_opt_flow_farneback | Additional interface to Farneback's algorithm - calcOpticalFlowFarneback() |
| create_opt_flow_pca_flow | Creates an instance of PCAFlow |
| create_opt_flow_simple_flow | Additional interface to the SimpleFlow algorithm - calcOpticalFlowSF() |
| create_opt_flow_sparse_rlof | Additional interface to the Sparse RLOF algorithm - optflow::calcOpticalFlowSparseRLOF() |
| create_opt_flow_sparse_to_dense | Additional interface to the SparseToDenseFlow algorithm - calcOpticalFlowSparseToDense() |
| read | |
| segment_motion | Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand). |
| update_motion_history | Updates the motion history image by a moving silhouette. |
| write |
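As a usage illustration for this module, a minimal sketch with the Rust opencv crate (it assumes two pre-loaded 8-bit grayscale frames; the factory returns an implementation of the DenseOpticalFlow trait):

```rust
use opencv::{core, optflow, prelude::*, Result};

fn flow_between(prev_gray: &core::Mat, next_gray: &core::Mat) -> Result<core::Mat> {
    // DeepFlow factory; other create_opt_flow_* factories plug in the same way.
    let mut df = optflow::create_opt_flow_deep_flow()?;
    let mut flow = core::Mat::default();
    // `flow` receives a 2-channel float map of per-pixel displacements.
    df.calc(prev_gray, next_gray, &mut flow)?;
    Ok(flow)
}
```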