Module opencv::imgproc

Image Processing

This module includes image-processing functions.

Image Filtering

Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images (represented as Mat). For each pixel location (x, y) in the source image (normally, rectangular), its neighborhood is considered and used to compute the response. In case of a linear filter, it is a weighted sum of pixel values. In case of morphological operations, it is the minimum or maximum value, and so on. The computed response is stored in the destination image at the same location (x, y), so the output image has the same size as the input image. Normally, the functions support multi-channel arrays, in which case every channel is processed independently; the output image therefore also has the same number of channels as the input one.

Another common feature of the functions and classes described in this section is that, unlike simple arithmetic functions, they need to extrapolate values of some non-existing pixels. For example, if you want to smooth an image using a Gaussian 3×3 filter, then, when processing the left-most pixels in each row, you need pixels to the left of them, that is, outside of the image. You can let these pixels be the same as the left-most image pixels (“replicated border” extrapolation method), or assume that all the non-existing pixels are zeros (“constant border” extrapolation method), and so on. OpenCV enables you to specify the extrapolation method. For details, see #BorderTypes.
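The weighted-sum response of a linear filter and the two extrapolation methods above can be sketched in pure Python. This is only an illustrative sketch, not the crate's API; `border_pixel` and `filter2d_ref` are hypothetical names:

```python
def border_pixel(img, r, c, mode="replicate"):
    """Extrapolate a possibly out-of-range pixel: "replicate" clamps to the
    nearest edge pixel; "constant" treats non-existing pixels as zeros."""
    h, w = len(img), len(img[0])
    if mode == "replicate":
        r = min(max(r, 0), h - 1)
        c = min(max(c, 0), w - 1)
        return img[r][c]
    return img[r][c] if 0 <= r < h and 0 <= c < w else 0

def filter2d_ref(img, kernel, mode="replicate"):
    """Weighted sum over each pixel's neighborhood; the output has the same
    size as the input, matching the behavior described above."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ar, ac = kh // 2, kw // 2  # kernel anchor at the center
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = sum(
                kernel[i][j] * border_pixel(img, r + i - ar, c + j - ac, mode)
                for i in range(kh) for j in range(kw)
            )
    return out
```

With a 3×3 box kernel on a constant image, the replicated border leaves edge pixels unchanged, while the constant (zero) border darkens them.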


Depth combinations

Input depth (src.depth()) | Output depth (ddepth)
------------------------- | -----------------------------
CV_8U                     | -1 / CV_16S / CV_32F / CV_64F
CV_16U, CV_16S            | -1 / CV_32F / CV_64F
CV_32F                    | -1 / CV_32F / CV_64F
CV_64F                    | -1 / CV_64F

Note: when ddepth=-1, the output image will have the same depth as the source.
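The table and the ddepth=-1 rule can be expressed as a small lookup. A sketch only; `output_depth` and the `ALLOWED` table are hypothetical names, not part of the crate:

```python
# Allowed explicit output depths per input depth, from the table above.
ALLOWED = {
    "CV_8U":  ["CV_16S", "CV_32F", "CV_64F"],
    "CV_16U": ["CV_32F", "CV_64F"],
    "CV_16S": ["CV_32F", "CV_64F"],
    "CV_32F": ["CV_32F", "CV_64F"],
    "CV_64F": ["CV_64F"],
}

def output_depth(src_depth, ddepth=-1):
    """Resolve ddepth: -1 keeps the source depth; any other value must be
    listed in the depth-combination table for the given input depth."""
    if ddepth == -1:
        return src_depth
    if ddepth not in ALLOWED[src_depth]:
        raise ValueError(f"ddepth {ddepth} not supported for {src_depth}")
    return ddepth
```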

Geometric Image Transformations

The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel (x, y) of the destination image, the functions compute coordinates of the corresponding “donor” pixel in the source image and copy the pixel value:

dst(x, y) = src(f_x(x, y), f_y(x, y))

In case when you specify the forward mapping (g_x, g_y): src → dst, the OpenCV functions first compute the corresponding inverse mapping (f_x, f_y): dst → src and then use the above formula.

The actual implementations of the geometrical transformations, from the most generic remap to the simplest and fastest resize, need to solve two main problems with the above formula:

  • Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some (x, y), either one of f_x(x, y), or f_y(x, y), or both of them may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method #BORDER_TRANSPARENT. This means that the corresponding pixels in the destination image will not be modified at all.

  • Interpolation of pixel values. Usually f_x(x, y) and f_y(x, y) are floating-point numbers. This means that (f_x, f_y) can be either an affine or perspective transformation, or radial lens distortion correction, and so on. So, a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel can be used. This is called nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated interpolation methods, where a polynomial function is fit into some neighborhood of the computed pixel (f_x(x, y), f_y(x, y)), and then the value of the polynomial at (f_x(x, y), f_y(x, y)) is taken as the interpolated pixel value. In OpenCV, you can choose between several interpolation methods. See resize for details.

Note: The geometrical transformations do not work with CV_8S or CV_32S images.
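The reverse dst → src mapping with nearest-neighbor interpolation and a replicated border can be sketched in a few lines of pure Python (an illustrative sketch; `remap_nearest` is a hypothetical name, not the crate's remap):

```python
def remap_nearest(src, fx, fy):
    """Reverse mapping: for each destination pixel (x, y), fetch the donor
    pixel src(fx(x, y), fy(x, y)), rounded to the nearest integer, clamping
    out-of-range coordinates to the border (replicate extrapolation)."""
    h, w = len(src), len(src[0])
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = min(max(round(fx(x, y)), 0), w - 1)
            sy = min(max(round(fy(x, y)), 0), h - 1)
            dst[y][x] = src[sy][sx]
    return dst

# Example: a horizontal flip expressed as a dst -> src mapping.
src = [[1, 2, 3], [4, 5, 6]]
flipped = remap_nearest(src, lambda x, y: 2 - x, lambda x, y: y)
```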

Miscellaneous Image Transformations

Drawing Functions

Drawing functions work with matrices/images of arbitrary depth. The boundaries of the shapes can be rendered with antialiasing (implemented only for 8-bit images for now). All the functions include the parameter color that uses an RGB value (that may be constructed with the Scalar constructor) for color images and brightness for grayscale images. For color images, the channel ordering is normally Blue, Green, Red. This is what imshow, imread, and imwrite expect. So, if you form a color using the Scalar constructor, it should look like:

Scalar(blue_component, green_component, red_component[, alpha_component])

If you are using your own image rendering and I/O functions, you can use any channel ordering. The drawing functions process each channel independently and do not depend on the channel order or even on the used color space. The whole image can be converted from BGR to RGB or to a different color space using cvtColor.

If a drawn figure is partially or completely outside the image, the drawing functions clip it. Also, many drawing functions can handle pixel coordinates specified with sub-pixel accuracy. This means that the coordinates can be passed as fixed-point numbers encoded as integers. The number of fractional bits is specified by the shift parameter and the real point coordinates are calculated as Point(x, y) → Point2f(x·2^-shift, y·2^-shift). This feature is especially effective when rendering antialiased shapes.
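The fixed-point decoding rule can be shown directly. A sketch only; `from_fixed_point` is a hypothetical helper name:

```python
def from_fixed_point(x_int, y_int, shift):
    """Decode sub-pixel coordinates passed as integers with `shift`
    fractional bits: real coordinate = integer value * 2**(-shift)."""
    return (x_int * 2.0 ** -shift, y_int * 2.0 ** -shift)

# With shift=2, the integer point (10, 6) encodes the real point (2.5, 1.5).
pt = from_fixed_point(10, 6, 2)
```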

Note: The functions do not support alpha-transparency when the target image is 4-channel. In this case, the color[3] is simply copied to the repainted pixels. Thus, if you want to paint semi-transparent shapes, you can paint them in a separate buffer and then blend it with the main image.

Color Space Conversions

ColorMaps in OpenCV

Human perception isn’t built for observing fine changes in grayscale images. Human eyes are more sensitive to changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV now comes with various colormaps to enhance the visualization in your computer vision application.

In OpenCV you only need applyColorMap to apply a colormap on a given image. The following sample code reads the path to an image from command line, applies a Jet colormap on it and shows the result:

@include snippets/imgproc_applyColorMap.cpp
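Conceptually, applying a colormap is a per-pixel lookup of an 8-bit gray value in a 256-entry color table. The sketch below illustrates that idea in pure Python; `apply_colormap_lut` and the toy ramp are hypothetical, not the crate's applyColorMap or any built-in colormap:

```python
def apply_colormap_lut(gray, lut):
    """Recolor an 8-bit grayscale image through a 256-entry BGR lookup
    table: every pixel value indexes the table independently."""
    return [[lut[v] for v in row] for row in gray]

# A toy 256-entry ramp: blue for dark values, red for bright ones.
lut = [(255 - v, 0, v) for v in range(256)]
img = [[0, 128, 255]]
colored = apply_colormap_lut(img, lut)
```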

See also

#ColormapTypes

Planar Subdivision

The Subdiv2D class described in this section is used to perform various planar subdivisions on a set of 2D points (represented as a vector of Point2f). OpenCV subdivides a plane into triangles using Delaunay’s algorithm, which corresponds to the dual graph of the Voronoi diagram. In the figure below, the Delaunay triangulation is marked with black lines and the Voronoi diagram with red lines.

Delaunay triangulation (black) and Voronoi (red)

The subdivisions can be used for the 3D piece-wise transformation of a plane, morphing, fast location of points on the plane, building special graphs (such as NNG, RNG), and so forth.

Histograms

Structural Analysis and Shape Descriptors

Motion Analysis and Object Tracking

Feature Detection

Object Detection

Image Segmentation

C API

Hardware Acceleration Layer

Modules

Structs

Intelligent Scissors image segmentation

Line iterator

Enums

adaptive threshold algorithm

the color conversion codes

GNU Octave/MATLAB equivalent colormaps

connected components algorithm

connected components statistics

the contour approximation algorithm

distanceTransform algorithm flags

Mask size for distance transform

Distance types for Distance Transform and M-estimators

floodfill algorithm flags

class of the pixel in GrabCut algorithm

GrabCut algorithm flags

Only a subset of Hershey fonts (https://en.wikipedia.org/wiki/Hershey_fonts) are supported

Histogram comparison methods

Variants of a Hough transform

interpolation algorithm

Variants of Line Segment %Detector

types of line

Possible set of marker types used for the cv::drawMarker function

shape of the structuring element

type of morphological operation

types of intersection between rectangles

mode of the contour retrieval algorithm

Shape matching methods

type of the template matching operation

type of the threshold operation

threshold types

Specify the polar mapping mode

Constants

the threshold value T(x, y) is a weighted sum (cross-correlation with a Gaussian window) of the blockSize × blockSize neighborhood of (x, y), minus C. The default sigma (standard deviation) is used for the specified blockSize. See #getGaussianKernel

the threshold value T(x, y) is the mean of the blockSize × blockSize neighborhood of (x, y), minus C
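The mean-based rule (ADAPTIVE_THRESH_MEAN_C) can be sketched in pure Python. An illustrative sketch only, with a replicated border as in the filtering functions; `adaptive_threshold_mean` is a hypothetical name:

```python
def adaptive_threshold_mean(img, max_value, block_size, c):
    """Per pixel: T(x, y) = mean of the block_size x block_size neighborhood
    of (x, y) minus c; the pixel becomes max_value if it exceeds T(x, y),
    else 0 (the THRESH_BINARY variant). Border pixels are replicated."""
    h, w = len(img), len(img[0])
    r = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    total += img[sy][sx]
            t = total / block_size ** 2 - c
            out[y][x] = max_value if img[y][x] > t else 0
    return out
```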

Same as CCL_GRANA. It is preferable to use the flag with the name of the algorithm (CCL_BBDT) rather than the one with the name of the first author (CCL_GRANA).

Spaghetti Bolelli2019 algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity.

BBDT Grana2010 algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity. The parallel implementation described in Bolelli2017 is available for both BBDT and SAUF.

BBDT Grana2010 algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity. The parallel implementation described in Bolelli2017 is available for both BBDT and SAUF.

Same as CCL_WU. It is preferable to use the flag with the name of the algorithm (CCL_SAUF) rather than the one with the name of the first author (CCL_WU).

Same as CCL_BOLELLI. It is preferable to use the flag with the name of the algorithm (CCL_SPAGHETTI) rather than the one with the name of the first author (CCL_BOLELLI).

SAUF Wu2009 algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity. The parallel implementation described in Bolelli2017 is available for SAUF.

The total area (in pixels) of the connected component

The vertical size of the bounding box

The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction.

Max enumeration value. Used internally only for memory allocation

The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction.

The horizontal size of the bounding box

stores absolutely all the contour points. That is, any 2 subsequent points (x1,y1) and (x2,y2) of the contour will be either horizontal, vertical or diagonal neighbors, that is, max(abs(x1-x2),abs(y2-y1))==1.

compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points.

applies one of the flavors of the Teh-Chin chain approximation algorithm TehChin89

applies one of the flavors of the Teh-Chin chain approximation algorithm TehChin89

twilight shifted

convert between RGB/BGR and BGR555 (16-bit images)

convert between RGB/BGR and BGR565 (16-bit images)

add alpha channel to RGB or BGR image

convert between RGB/BGR and grayscale, @ref color_convert_rgb_gray “color conversions”

convert RGB/BGR to HLS (hue lightness saturation) with H range 0..180 if 8 bit image, @ref color_convert_rgb_hls “color conversions”

convert RGB/BGR to HLS (hue lightness saturation) with H range 0..255 if 8 bit image, @ref color_convert_rgb_hls “color conversions”

convert RGB/BGR to HSV (hue saturation value) with H range 0..180 if 8 bit image, @ref color_convert_rgb_hsv “color conversions”

convert RGB/BGR to HSV (hue saturation value) with H range 0..255 if 8 bit image, @ref color_convert_rgb_hsv “color conversions”

convert RGB/BGR to CIE Lab, @ref color_convert_rgb_lab “color conversions”

convert RGB/BGR to CIE Luv, @ref color_convert_rgb_luv “color conversions”

convert between RGB and BGR color spaces (with or without alpha channel)

convert RGB/BGR to CIE XYZ, @ref color_convert_rgb_xyz “color conversions”

convert RGB/BGR to luma-chroma (aka YCC), @ref color_convert_rgb_ycrcb “color conversions”

convert between RGB/BGR and YUV

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

remove alpha channel from RGB or BGR image

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing

Demosaicing

Demosaicing with alpha channel

Edge-Aware Demosaicing

Demosaicing using Variable Number of Gradients

Demosaicing with alpha channel

convert between grayscale and BGR555 (16-bit images)

convert between grayscale and BGR565 (16-bit images)

backward conversions HLS to RGB/BGR with H range 0..180 if 8 bit image

backward conversions HLS to RGB/BGR with H range 0..255 if 8 bit image

backward conversions HSV to RGB/BGR with H range 0..180 if 8 bit image

backward conversions HSV to RGB/BGR with H range 0..255 if 8 bit image

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

RGB to YUV 4:2:0 family

alpha premultiplication

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:2 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

YUV 4:2:0 family to RGB

alpha premultiplication


distance = max(|x1-x2|,|y1-y2|)

distance = c^2(|x|/c-log(1+|x|/c)), c = 1.3998

distance = |x|<c ? x^2/2 : c(|x|-c/2), c=1.345

distance = |x1-x2| + |y1-y2|

the simple Euclidean distance

L1-L2 metric: distance = 2(sqrt(1+x*x/2) - 1)

each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label

each zero pixel (and all the non-zero pixels closest to it) gets its own label.

User defined distance

distance = c^2/2(1-exp(-(x/c)^2)), c = 2.9846

If set, the difference between the current pixel and seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating).

If set, the function does not change the image ( newVal is ignored), and only fills the mask with the value specified in bits 8-16 of flags as described above. This option only makes sense in function variants that have the mask parameter.

normal size serif font

smaller version of FONT_HERSHEY_COMPLEX

normal size sans-serif font (more complex than FONT_HERSHEY_SIMPLEX)

small size sans-serif font

more complex variant of FONT_HERSHEY_SCRIPT_SIMPLEX

hand-writing style font

normal size sans-serif font

normal size serif font (more complex than FONT_HERSHEY_COMPLEX)

flag for italic font

an obvious background pixel

The value means that the algorithm should just resume.

The value means that the algorithm should just run the grabCut algorithm (a single iteration) with the fixed model

an obvious foreground (object) pixel

The function initializes the state using the provided mask. Note that GC_INIT_WITH_RECT and GC_INIT_WITH_MASK can be combined. Then, all the pixels outside of the ROI are automatically initialized with GC_BGD .

The function initializes the state and the mask using the provided rectangle. After that it runs iterCount iterations of the algorithm.

a possible background pixel

a possible foreground pixel

Bhattacharyya distance (In fact, OpenCV computes Hellinger distance, which is related to Bhattacharyya coefficient.)

d(H1, H2) = sqrt(1 − 1/sqrt(H̄1 · H̄2 · N²) · Σ_I sqrt(H1(I) · H2(I)))

Chi-Square

d(H1, H2) = Σ_I (H1(I) − H2(I))² / H1(I)

Alternative Chi-Square

d(H1, H2) = 2 · Σ_I (H1(I) − H2(I))² / (H1(I) + H2(I))

This alternative formula is regularly used for texture comparison. See e.g. Puzicha1997

Correlation

d(H1, H2) = Σ_I (H1(I) − H̄1) · (H2(I) − H̄2) / sqrt(Σ_I (H1(I) − H̄1)² · Σ_I (H2(I) − H̄2)²)

where

H̄k = (1/N) · Σ_J Hk(J)

and N is the total number of histogram bins.

Synonym for HISTCMP_BHATTACHARYYA

Intersection

d(H1, H2) = Σ_I min(H1(I), H2(I))

Kullback-Leibler divergence

d(H1, H2) = Σ_I H1(I) · log(H1(I) / H2(I))
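The correlation method is just the Pearson correlation of the two histograms treated as vectors (1.0 means a perfect match). A pure-Python sketch, with `compare_hist_correl` as a hypothetical name:

```python
from math import sqrt

def compare_hist_correl(h1, h2):
    """Pearson correlation of two equal-length histograms: subtract each
    histogram's mean, then normalize the dot product by both norms."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = sqrt(sum((a - m1) ** 2 for a in h1) * sum((b - m2) ** 2 for b in h2))
    return num / den
```

Identical histograms score 1.0; a reversed ramp against a ramp scores -1.0.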

basically 21HT, described in Yuen90

variation of HOUGH_GRADIENT to get better accuracy

multi-scale variant of the classical Hough transform. The lines are encoded the same way as HOUGH_STANDARD.

probabilistic Hough transform (more efficient when the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type.

classical or standard Hough transform. Every line is represented by two floating-point numbers (ρ, θ), where ρ is the distance between the (0,0) point and the line, and θ is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type

One of the rectangles is fully enclosed in the other

No intersection

There is a partial intersection

resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method.

bicubic interpolation

Lanczos interpolation over 8x8 neighborhood

bilinear interpolation

Bit exact bilinear interpolation

mask for interpolation codes

nearest neighbor interpolation

Bit exact nearest neighbor interpolation. This will produce same results as the nearest neighbor method in PIL, scikit-image or Matlab.

4-connected line

8-connected line

antialiased line

Advanced refinement. Number of false alarms is calculated, lines are refined through increase of precision, decrement in size, etc.

No refinement applied

Standard refinement is applied. E.g. breaking arches into smaller straighter line approximations.

A crosshair marker shape

A diamond marker shape

A square marker shape

A star marker shape, combination of cross and tilted cross

A 45 degree tilted crosshair marker shape

A downwards pointing triangle marker shape

An upwards pointing triangle marker shape

“black hat”

dst = close(src, element) − src

a closing operation

dst = close(src, element) = erode(dilate(src, element), element)

a cross-shaped structuring element:

E(i, j) = 1 if i == anchor.y or j == anchor.x, otherwise 0

see #dilate

an elliptic structuring element, that is, a filled ellipse inscribed into the rectangle Rect(0, 0, esize.width, esize.height)

see #erode

a morphological gradient

dst = morph_grad(src, element) = dilate(src, element) − erode(src, element)

“hit or miss”. Only supported for CV_8UC1 binary images. A tutorial can be found in the documentation

an opening operation

dst = open(src, element) = dilate(erode(src, element), element)

a rectangular structuring element:

E(i, j) = 1 for all i, j

“top hat”

dst = src − open(src, element)
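Erosion and dilation are a min and a max over the structuring element's footprint; opening is an erosion followed by a dilation. A pure-Python sketch with a replicated border (the helper names `morph` and `opening` are hypothetical):

```python
def morph(img, element, op):
    """Min (erode) or max (dilate) over the structuring element's footprint;
    element is a 0/1 matrix with a centered anchor, border replicated."""
    h, w = len(img), len(img[0])
    eh, ew = len(element), len(element[0])
    ar, ac = eh // 2, ew // 2
    pick = min if op == "erode" else max
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + i - ar, 0), h - 1)][min(max(x + j - ac, 0), w - 1)]
                    for i in range(eh) for j in range(ew) if element[i][j]]
            out[y][x] = pick(vals)
    return out

def opening(img, element):
    """dilate(erode(src)): removes bright specks smaller than the element."""
    return morph(morph(img, element, "erode"), element, "dilate")
```

With a 3×3 rectangular element, a single bright pixel is removed by the opening.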

retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level.

retrieves only the extreme outer contours. It sets hierarchy[i][2]=hierarchy[i][3]=-1 for all the contours.

retrieves all of the contours without establishing any hierarchical relationships.

retrieves all of the contours and reconstructs a full hierarchy of nested contours.

Point location error

Point inside some facet

Point on some edge

Point outside the subdivision bounding rect

Point coincides with one of the subdivision vertices

dst(x, y) = maxval if src(x, y) > thresh, otherwise 0

dst(x, y) = 0 if src(x, y) > thresh, otherwise maxval

flag, use Otsu algorithm to choose the optimal threshold value

dst(x, y) = src(x, y) if src(x, y) > thresh, otherwise 0

dst(x, y) = 0 if src(x, y) > thresh, otherwise src(x, y)

flag, use Triangle algorithm to choose the optimal threshold value

dst(x, y) = thresh if src(x, y) > thresh, otherwise src(x, y)
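The per-pixel rules for the fixed-level threshold types can be collected into one small function. A sketch only; `threshold_pixel` and the string kinds are hypothetical stand-ins for the THRESH_* constants:

```python
def threshold_pixel(v, thresh, maxval, kind):
    """Per-pixel rules of the fixed-level threshold types:
    binary / binary_inv / trunc / tozero / tozero_inv."""
    if kind == "binary":
        return maxval if v > thresh else 0
    if kind == "binary_inv":
        return 0 if v > thresh else maxval
    if kind == "trunc":
        return thresh if v > thresh else v
    if kind == "tozero":
        return v if v > thresh else 0
    if kind == "tozero_inv":
        return 0 if v > thresh else v
    raise ValueError(kind)
```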

R(x, y) = Σ_{x′,y′} (T′(x′, y′) · I′(x + x′, y + y′))

where T′ is the template with its mean subtracted and I′ is the image patch with its mean subtracted:

T′(x′, y′) = T(x′, y′) − 1/(w·h) · Σ_{x″,y″} T(x″, y″), I′(x + x′, y + y′) = I(x + x′, y + y′) − 1/(w·h) · Σ_{x″,y″} I(x + x″, y + y″)

with mask: the means are computed only over the pixels where the mask is non-zero

R(x, y) = Σ_{x′,y′} (T′(x′, y′) · I′(x + x′, y + y′)) / sqrt(Σ_{x′,y′} T′(x′, y′)² · Σ_{x′,y′} I′(x + x′, y + y′)²)

R(x, y) = Σ_{x′,y′} (T(x′, y′) · I(x + x′, y + y′))

with mask:

R(x, y) = Σ_{x′,y′} (T(x′, y′) · I(x + x′, y + y′) · M(x′, y′)²)

R(x, y) = Σ_{x′,y′} (T(x′, y′) · I(x + x′, y + y′)) / sqrt(Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)²)

with mask:

R(x, y) = Σ_{x′,y′} (T(x′, y′) · I(x + x′, y + y′) · M(x′, y′)²) / sqrt(Σ_{x′,y′} (T(x′, y′) · M(x′, y′))² · Σ_{x′,y′} (I(x + x′, y + y′) · M(x′, y′))²)

R(x, y) = Σ_{x′,y′} (T(x′, y′) − I(x + x′, y + y′))²

with mask:

R(x, y) = Σ_{x′,y′} ((T(x′, y′) − I(x + x′, y + y′)) · M(x′, y′))²

R(x, y) = Σ_{x′,y′} (T(x′, y′) − I(x + x′, y + y′))² / sqrt(Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)²)

with mask:

R(x, y) = Σ_{x′,y′} ((T(x′, y′) − I(x + x′, y + y′)) · M(x′, y′))² / sqrt(Σ_{x′,y′} (T(x′, y′) · M(x′, y′))² · Σ_{x′,y′} (I(x + x′, y + y′) · M(x′, y′))²)
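The squared-difference variant is the simplest to sketch: slide the template over the image and sum squared differences at every offset; the best match is the minimum of the result map. An illustrative pure-Python sketch (`match_template_sqdiff` is a hypothetical name):

```python
def match_template_sqdiff(img, tmpl):
    """Sum of squared differences between the template and every fully
    overlapping image patch; result has size (ih-th+1) x (iw-tw+1)."""
    ih, iw = len(img), len(img[0])
    th, tw = len(tmpl), len(tmpl[0])
    return [[sum((tmpl[i][j] - img[y + i][x + j]) ** 2
                 for i in range(th) for j in range(tw))
             for x in range(iw - tw + 1)]
            for y in range(ih - th + 1)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
tmpl = [[2, 3],
        [6, 7]]
res = match_template_sqdiff(img, tmpl)  # exact match gives 0 at offset x=1
```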

flag, fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero

flag, inverse transformation

Remaps an image to/from polar space.

Remaps an image to/from semilog-polar space.

Traits

Base class for Contrast Limited Adaptive Histogram Equalization.

finds arbitrary template in the grayscale image using Generalized Hough Transform

finds arbitrary template in the grayscale image using Generalized Hough Transform

finds arbitrary template in the grayscale image using Generalized Hough Transform

Intelligent Scissors image segmentation

Line segment detector class

Functions

Adds an image to the accumulator image.

Adds the per-element product of two input images to the accumulator image.

Adds the square of a source image to the accumulator image.

Updates a running average.

Applies an adaptive threshold to an array.

Applies a GNU Octave/MATLAB equivalent colormap on a given image.

Applies a user colormap on a given image.

Approximates a polygonal curve(s) with the specified precision.

Calculates a contour perimeter or a curve length.

Draws an arrow segment pointing from the first point to the second one.

Applies the bilateral filter to an image.

Performs linear blending of two images:

dst(i, j) = weights1(i, j) · src1(i, j) + weights2(i, j) · src2(i, j)

Blurs an image using the normalized box filter.

Calculates the up-right bounding rectangle of a point set or non-zero pixels of gray-scale image.

Blurs an image using the box filter.

Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.

Constructs the Gaussian pyramid for an image.

Calculates the back projection of a histogram.

Calculates a histogram of a set of arrays.

Finds edges in an image using the Canny algorithm Canny86 .

Draws a circle.

Clips the line against the image rectangle.

Clips the line against the image rectangle.

Clips the line against the image rectangle.

Compares two histograms.

Compares two histograms.

computes the connected components labeled image of boolean image

computes the connected components labeled image of boolean image

computes the connected components labeled image of boolean image and also produces a statistics output for each label

computes the connected components labeled image of boolean image and also produces a statistics output for each label

Calculates a contour area.

Converts image transformation maps from one representation to another.

Finds the convex hull of a point set.

Finds the convexity defects of a contour.

Calculates eigenvalues and eigenvectors of image blocks for corner detection.

Harris corner detector.

Calculates the minimal eigenvalue of gradient matrices for corner detection.

Refines the corner locations.

Creates a smart pointer to a cv::CLAHE class and initializes it.

Creates a smart pointer to a cv::GeneralizedHoughBallard class and initializes it.

Creates a smart pointer to a cv::GeneralizedHoughGuil class and initializes it.

Computes Hann (Hanning) window coefficients in two dimensions.

Creates a smart pointer to a LineSegmentDetector object and initializes it.

Converts an image from one color space to another.

Converts an image from one color space to another where the source image is stored in two planes.

main function for all demosaicing processes

Dilates an image by using a specific structuring element.

Calculates the distance to the closest zero pixel for each pixel of the source image.

Calculates the distance to the closest zero pixel for each pixel of the source image.

Performs the per-element division of the first Fourier spectrum by the second Fourier spectrum.

Draws contours outlines or filled contours.

Draws a marker on a predefined position in an image.

Draws a simple or thick elliptic arc or fills an ellipse sector.

Approximates an elliptic arc with a polyline.

Approximates an elliptic arc with a polyline.

Draws a simple or thick elliptic arc or fills an ellipse sector.

Computes the “minimal work” distance between two weighted point configurations.

C++ default parameters

Equalizes the histogram of a grayscale image.

Erodes an image by using a specific structuring element.

Fills a convex polygon.

Fills the area bounded by one or more polygons.

Convolves an image with the kernel.

Finds contours in a binary image.

Finds contours in a binary image.

Fits an ellipse around a set of 2D points.

Fits an ellipse around a set of 2D points.

Fits an ellipse around a set of 2D points.

Fits a line to a 2D or 3D point set.

Fills a connected component with the given color.

Fills a connected component with the given color.

Blurs an image using a Gaussian filter.

Calculates an affine transform from three pairs of the corresponding points.

Returns filter coefficients for computing spatial image derivatives.

Calculates the font-specific size to use to achieve a given height in pixels.

Returns Gabor filter coefficients.

Returns Gaussian filter coefficients.

Calculates a perspective transform from four pairs of the corresponding points.

Calculates a perspective transform from four pairs of the corresponding points.

Retrieves a pixel rectangle from an image with sub-pixel accuracy.

Calculates an affine matrix of 2D rotation.

Returns a structuring element of the specified size and shape for morphological operations.

Calculates the width and height of a text string.

Determines strong corners on an image.

Same as above, but also returns the quality measure of the detected corners.

Runs the GrabCut algorithm.

Finds circles in a grayscale image using the Hough transform.

Finds lines in a binary image using the standard Hough transform.

Finds line segments in a binary image using the probabilistic Hough transform.

Finds lines in a set of points using the standard Hough transform.

Calculates seven Hu invariants.

Calculates seven Hu invariants.

Calculates the integral of an image.

Calculates the integral of an image.

Calculates the integral of an image.

Finds intersection of two convex polygons

Inverts an affine transformation.

Tests a contour convexity.

Calculates the Laplacian of an image.

Draws a line segment connecting two points.

linear_polar (deprecated)

Remaps an image to polar coordinates space.

log_polar (deprecated)

Remaps an image to semilog-polar coordinates space.

Compares two shapes.

Compares a template against overlapped image regions.

Blurs an image using the median filter.

Finds a rotated rectangle of the minimum area enclosing the input 2D point set.

Finds a circle of the minimum area enclosing a 2D point set.

Finds a triangle of minimum area enclosing a 2D point set and returns its area.

Calculates all of the moments up to the third order of a polygon or rasterized shape.

returns “magic” border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation.

Performs advanced morphological transformations.

The function is used to detect translational shifts that occur between two images.

Performs a point-in-contour test.

Draws several polygonal curves.

Calculates a feature map for corner detection.

Draws a text string.

Blurs an image and downsamples it.

Performs initial step of meanshift segmentation of an image.

Upsamples an image and then blurs it.

Draws a simple, thick, or filled up-right rectangle.

Draws a simple, thick, or filled up-right rectangle.

Applies a generic geometrical transformation to an image.

Resizes an image.

Finds out if there is any intersection between two rotated rectangles.

Calculates the first x- or y- image derivative using Scharr operator.

Applies a separable linear filter to an image.

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

Calculates the first order image derivative in both x and y using a Sobel operator

Calculates the normalized sum of squares of the pixel values overlapping the filter.

Applies a fixed-level threshold to each array element.

Applies an affine transformation to an image.

Applies a perspective transformation to an image.

Remaps an image to polar or semilog-polar coordinates space

Performs a marker-based image segmentation using the watershed algorithm.