Module opencv::imgproc
Image Processing
This module includes image-processing functions.
Image Filtering
Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images (represented as Mat objects). This means that for each pixel location in the source image, its neighborhood (normally rectangular) is considered and used to compute the response. For a linear filter, the response is a weighted sum of pixel values; for morphological operations, it is the minimum or maximum value, and so on. The computed response is stored in the destination image at the same location, so the output image has the same size as the input image. Normally, the functions support multi-channel arrays, in which case every channel is processed independently; the output image then also has the same number of channels as the input one.
Another common feature of the functions and classes described in this section is that, unlike simple arithmetic functions, they need to extrapolate values of some non-existing pixels. For example, if you want to smooth an image using a Gaussian filter, then, when processing the left-most pixels in each row, you need pixels to the left of them, that is, outside of the image. You can let these pixels be the same as the left-most image pixels ("replicated border" extrapolation method), or assume that all the non-existing pixels are zeros ("constant border" extrapolation method), and so on. OpenCV enables you to specify the extrapolation method. For details, see #BorderTypes.
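The two extrapolation policies mentioned above can be sketched in a few lines of plain Rust. This is an illustration of the idea only, not the crate's API; the helper names are ours. Clamping an out-of-range index mirrors the "replicated border" method, while substituting a fill value mirrors the "constant border" method:

```rust
/// "Replicated border": aaaaaa|abcdefgh|hhhhhhh
/// An out-of-range index is clamped to the nearest valid pixel.
fn replicate_index(i: i32, len: i32) -> i32 {
    i.clamp(0, len - 1)
}

/// "Constant border": out-of-range pixels take a caller-supplied fill value.
fn constant_fetch(row: &[u8], i: i32, fill: u8) -> u8 {
    if i < 0 || i >= row.len() as i32 {
        fill
    } else {
        row[i as usize]
    }
}

fn main() {
    let row: [u8; 5] = [10, 20, 30, 40, 50];
    // Fetch the pixel "two to the left of the image" under each policy:
    assert_eq!(row[replicate_index(-2, 5) as usize], 10); // replicated border
    assert_eq!(constant_fetch(&row, -2, 0), 0); // constant border
    println!("replicated: {}", row[replicate_index(-2, 5) as usize]);
}
```

A real filter would apply such a policy whenever its kernel window hangs over the image edge.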
Depth combinations
Input depth (src.depth()) | Output depth (ddepth)
--------------------------|-------------------------
CV_8U                     | -1/CV_16S/CV_32F/CV_64F
CV_16U/CV_16S             | -1/CV_32F/CV_64F
CV_32F                    | -1/CV_32F/CV_64F
CV_64F                    | -1/CV_64F
Note: when ddepth=-1, the output image will have the same depth as the source.
Geometric Image Transformations
The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to source. That is, for each pixel (x, y) of the destination image, the functions compute the coordinates of the corresponding "donor" pixel in the source image and copy its value:

dst(x, y) = src(f_x(x, y), f_y(x, y))
In case you specify the forward mapping <g_x, g_y>: src → dst instead, the OpenCV functions first compute the corresponding inverse mapping <f_x, f_y>: dst → src and then use the above formula.
The actual implementations of the geometrical transformations, from the most generic remap to the simplest and fastest resize, need to solve two main problems with the above formula:
- Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some (x, y), either one of f_x(x, y) or f_y(x, y), or both of them, may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method #BORDER_TRANSPARENT, which means that the corresponding pixels in the destination image will not be modified at all.
- Interpolation of pixel values. Usually f_x(x, y) and f_y(x, y) are floating-point numbers, since <f_x, f_y> can be an affine or perspective transformation, radial lens distortion correction, and so on. So, a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel used. This is called nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated interpolation methods, where a polynomial function is fit into some neighborhood of the computed pixel (f_x(x, y), f_y(x, y)), and the value of the polynomial at (f_x(x, y), f_y(x, y)) is taken as the interpolated pixel value. In OpenCV, you can choose between several interpolation methods. See resize for details.
Note: The geometrical transformations do not work with CV_8S or CV_32S images.
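The two interpolation strategies contrasted above can be sketched directly from their definitions. A minimal pure-Rust illustration (not the crate's API) over a tiny 3x3 single-channel image: nearest-neighbor rounds the fractional coordinates, while bilinear interpolation fits a linear polynomial over the 2x2 neighborhood:

```rust
/// Nearest-neighbor: round the fractional coordinates and fetch that pixel.
fn nearest(img: &[[f32; 3]; 3], x: f32, y: f32) -> f32 {
    img[y.round() as usize][x.round() as usize]
}

/// Bilinear: blend the 2x2 neighborhood by the fractional parts of (x, y).
fn bilinear(img: &[[f32; 3]; 3], x: f32, y: f32) -> f32 {
    let (x0, y0) = (x.floor() as usize, y.floor() as usize);
    let (fx, fy) = (x - x0 as f32, y - y0 as f32);
    let top = img[y0][x0] * (1.0 - fx) + img[y0][x0 + 1] * fx;
    let bot = img[y0 + 1][x0] * (1.0 - fx) + img[y0 + 1][x0 + 1] * fx;
    top * (1.0 - fy) + bot * fy
}

fn main() {
    let img = [[0.0, 100.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]];
    // Halfway between pixels (0,0)=0 and (1,0)=100:
    assert_eq!(nearest(&img, 0.5, 0.0), 100.0); // 0.5 rounds up to pixel 1
    assert_eq!(bilinear(&img, 0.5, 0.0), 50.0); // linear blend of 0 and 100
}
```

The crate's higher-order methods (INTER_CUBIC, INTER_LANCZOS4) follow the same pattern with larger neighborhoods and higher-degree polynomials.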
Miscellaneous Image Transformations
Drawing Functions
Drawing functions work with matrices/images of arbitrary depth. The boundaries of the shapes can be rendered with antialiasing (implemented only for 8-bit images for now). All the functions include the parameter color that uses an RGB value (which may be constructed with the Scalar constructor) for color images and brightness for grayscale images. For color images, the channel ordering is normally Blue, Green, Red. This is what imshow, imread, and imwrite expect. So, if you form a color using the Scalar constructor, it should look like:

Scalar(blue_component, green_component, red_component[, alpha_component])
If you are using your own image rendering and I/O functions, you can use any channel ordering. The drawing functions process each channel independently and do not depend on the channel order or even on the used color space. The whole image can be converted from BGR to RGB or to a different color space using cvtColor.
If a drawn figure is partially or completely outside the image, the drawing functions clip it. Also, many drawing functions can handle pixel coordinates specified with sub-pixel accuracy. This means that the coordinates can be passed as fixed-point numbers encoded as integers. The number of fractional bits is specified by the shift parameter, and the real point coordinates are calculated as Point2f(x * 2^-shift, y * 2^-shift). This feature is especially effective when rendering antialiased shapes.
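The fixed-point encoding above is simple to demonstrate. A sketch of the conversion (the helper names are ours, not the crate's): with shift fractional bits, an integer coordinate c encodes the real coordinate c * 2^-shift:

```rust
/// Pack a real coordinate into a fixed-point integer with `shift` fractional bits.
fn to_fixed(real: f64, shift: u32) -> i32 {
    (real * f64::from(1u32 << shift)).round() as i32
}

/// Recover the real coordinate a drawing function would use: fixed * 2^-shift.
fn to_real(fixed: i32, shift: u32) -> f64 {
    f64::from(fixed) / f64::from(1u32 << shift)
}

fn main() {
    // shift = 2 gives quarter-pixel precision: 12.5 is encoded as 50.
    assert_eq!(to_fixed(12.5, 2), 50);
    assert_eq!(to_real(50, 2), 12.5);
    assert_eq!(to_real(13, 2), 3.25);
}
```

Passing the encoded integers together with the matching shift value lets an antialiased circle, for example, be centered between pixel grid points.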
Note: The functions do not support alpha-transparency when the target image is 4-channel. In this case, color[3] is simply copied to the repainted pixels. Thus, if you want to paint semi-transparent shapes, you can paint them in a separate buffer and then blend it with the main image.
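The separate-buffer workaround in the note boils down to per-pixel alpha blending: out = alpha * overlay + (1 - alpha) * base. A minimal pure-Rust sketch of that blend (illustrative only; the function name is ours):

```rust
/// Blend one channel of an overlay pixel onto a base pixel with opacity `alpha`.
fn blend(base: u8, overlay: u8, alpha: f32) -> u8 {
    (alpha * f32::from(overlay) + (1.0 - alpha) * f32::from(base)).round() as u8
}

fn main() {
    let image: [u8; 3] = [100, 100, 100]; // a row of the main image
    let shape: [u8; 3] = [0, 255, 0]; // the same row of the shape buffer
    // Composite the shape buffer over the image at 50% opacity:
    let blended: Vec<u8> = image
        .iter()
        .zip(shape.iter())
        .map(|(&b, &o)| blend(b, o, 0.5))
        .collect();
    assert_eq!(blended, vec![50, 178, 50]);
}
```

In practice you would draw the shape into the buffer first, then run such a blend (or a weighted-sum function) over the region of interest.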
Color Space Conversions
ColorMaps in OpenCV
Human perception isn't built for observing fine changes in grayscale images. Human eyes are more sensitive to changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV comes with various colormaps to enhance the visualization in your computer vision application.
In OpenCV you only need applyColorMap to apply a colormap to a given image; the bundled sample snippets/imgproc_applyColorMap.cpp reads the path to an image from the command line, applies a Jet colormap to it, and shows the result.
See also
#ColormapTypes
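Conceptually, applying a colormap is just passing each grayscale intensity through a fixed 256-entry color lookup. A toy pure-Rust sketch of the idea (this simple blue-to-red ramp is ours; the real COLORMAP_JET table is more elaborate):

```rust
/// Toy colormap: map intensity to a (blue, green, red) triple,
/// fading from blue (cold) to red (hot). Not the real Jet table.
fn toy_colormap(intensity: u8) -> (u8, u8, u8) {
    (255 - intensity, 0, intensity)
}

/// Recolor a grayscale buffer the way applyColorMap does: one lookup per pixel.
fn apply_toy_colormap(gray: &[u8]) -> Vec<(u8, u8, u8)> {
    gray.iter().map(|&v| toy_colormap(v)).collect()
}

fn main() {
    let colored = apply_toy_colormap(&[0, 128, 255]);
    assert_eq!(colored[0], (255, 0, 0)); // dark pixel -> blue
    assert_eq!(colored[2], (0, 0, 255)); // bright pixel -> red
}
```

apply_color_map_user works the same way, except the 256-entry lookup table is supplied by the caller instead of chosen from the built-in set.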
Planar Subdivision
The Subdiv2D class described in this section is used to perform various planar subdivisions on a set of 2D points (represented as a vector of Point2f). OpenCV subdivides a plane into triangles using Delaunay's algorithm, which corresponds to the dual graph of the Voronoi diagram. In the figure below, the Delaunay triangulation is marked with black lines and the Voronoi diagram with red lines.
The subdivisions can be used for the 3D piece-wise transformation of a plane, morphing, fast location of points on the plane, building special graphs (such as NNG, RNG), and so forth.
Histograms
Structural Analysis and Shape Descriptors
Motion Analysis and Object Tracking
Feature Detection
Object Detection
C API
Hardware Acceleration Layer
Modules
prelude |
Structs
LineIterator | Line iterator |
Subdiv2D |
Enums
AdaptiveThresholdTypes | adaptive threshold algorithm |
ColorConversionCodes | the color conversion codes |
ColormapTypes | GNU Octave/MATLAB equivalent colormaps |
ConnectedComponentsAlgorithmsTypes | connected components algorithm |
ConnectedComponentsTypes | connected components algorithm output formats |
ContourApproximationModes | the contour approximation algorithm |
DistanceTransformLabelTypes | distanceTransform algorithm flags |
DistanceTransformMasks | Mask size for distance transform |
DistanceTypes | Distance types for Distance Transform and M-estimators |
FloodFillFlags | floodfill algorithm flags |
GrabCutClasses | class of the pixel in GrabCut algorithm |
GrabCutModes | GrabCut algorithm flags |
HersheyFonts | Only a subset of Hershey fonts (https://en.wikipedia.org/wiki/Hershey_fonts) are supported |
HistCompMethods | Histogram comparison methods |
HoughModes | Variants of a Hough transform |
InterpolationFlags | interpolation algorithm |
InterpolationMasks | |
LineSegmentDetectorModes | Variants of Line Segment Detector |
LineTypes | types of line |
MarkerTypes | Possible set of marker types used for the cv::drawMarker function |
MorphShapes | shape of the structuring element |
MorphTypes | type of morphological operation |
RectanglesIntersectTypes | types of intersection between rectangles |
RetrievalModes | mode of the contour retrieval algorithm |
ShapeMatchModes | Shape matching methods |
SpecialFilter | |
TemplateMatchModes | type of the template matching operation |
ThresholdTypes | type of the threshold operation |
WarpPolarMode | Specify the polar mapping mode |
Constants
ADAPTIVE_THRESH_GAUSSIAN_C | the threshold value T(x, y) is a weighted sum (cross-correlation with a Gaussian window) of the blockSize x blockSize neighborhood of (x, y), minus C. The default sigma (standard deviation) is used for the specified blockSize. See #getGaussianKernel |
ADAPTIVE_THRESH_MEAN_C | the threshold value T(x, y) is the mean of the blockSize x blockSize neighborhood of (x, y), minus C |
CCL_DEFAULT | BBDT algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity |
CCL_GRANA | BBDT algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity |
CCL_WU | SAUF algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity |
CC_STAT_AREA | The total area (in pixels) of the connected component |
CC_STAT_HEIGHT | The vertical size of the bounding box |
CC_STAT_LEFT | The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction. |
CC_STAT_MAX | Max enumeration value. Used internally only for memory allocation |
CC_STAT_TOP | The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction. |
CC_STAT_WIDTH | The horizontal size of the bounding box |
CHAIN_APPROX_NONE | stores absolutely all the contour points. That is, any 2 subsequent points (x1,y1) and (x2,y2) of the contour will be either horizontal, vertical or diagonal neighbors, that is, max(abs(x1-x2),abs(y2-y1))==1. |
CHAIN_APPROX_SIMPLE | compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points. |
CHAIN_APPROX_TC89_KCOS | applies one of the flavors of the Teh-Chin chain approximation algorithm TehChin89 |
CHAIN_APPROX_TC89_L1 | applies one of the flavors of the Teh-Chin chain approximation algorithm TehChin89 |
COLORMAP_AUTUMN | autumn |
COLORMAP_BONE | bone |
COLORMAP_CIVIDIS | cividis |
COLORMAP_COOL | cool |
COLORMAP_HOT | hot |
COLORMAP_HSV | HSV |
COLORMAP_INFERNO | inferno |
COLORMAP_JET | jet |
COLORMAP_MAGMA | magma |
COLORMAP_OCEAN | ocean |
COLORMAP_PARULA | parula |
COLORMAP_PINK | pink |
COLORMAP_PLASMA | plasma |
COLORMAP_RAINBOW | rainbow |
COLORMAP_SPRING | spring |
COLORMAP_SUMMER | summer |
COLORMAP_TURBO | turbo |
COLORMAP_TWILIGHT | twilight |
COLORMAP_TWILIGHT_SHIFTED | twilight shifted |
COLORMAP_VIRIDIS | viridis |
COLORMAP_WINTER | winter |
COLOR_BGR2BGRA | add alpha channel to RGB or BGR image |
COLOR_BGR2GRAY | convert between RGB/BGR and grayscale, @ref color_convert_rgb_gray "color conversions" |
COLOR_BGR2HLS | convert RGB/BGR to HLS (hue lightness saturation), @ref color_convert_rgb_hls "color conversions" |
COLOR_BGR2HLS_FULL | |
COLOR_BGR2HSV | convert RGB/BGR to HSV (hue saturation value), @ref color_convert_rgb_hsv "color conversions" |
COLOR_BGR2HSV_FULL | |
COLOR_BGR2Lab | convert RGB/BGR to CIE Lab, @ref color_convert_rgb_lab "color conversions" |
COLOR_BGR2Luv | convert RGB/BGR to CIE Luv, @ref color_convert_rgb_luv "color conversions" |
COLOR_BGR2RGB | |
COLOR_BGR2RGBA | convert between RGB and BGR color spaces (with or without alpha channel) |
COLOR_BGR2XYZ | convert RGB/BGR to CIE XYZ, @ref color_convert_rgb_xyz "color conversions" |
COLOR_BGR2YCrCb | convert RGB/BGR to luma-chroma (aka YCC), @ref color_convert_rgb_ycrcb "color conversions" |
COLOR_BGR2YUV | convert between RGB/BGR and YUV |
COLOR_BGR2YUV_IYUV | RGB to YUV 4:2:0 family |
COLOR_BGR5552BGR | |
COLOR_BGR5552BGRA | |
COLOR_BGR5552GRAY | |
COLOR_BGR5552RGB | |
COLOR_BGR5552RGBA | |
COLOR_BGR5652BGR | |
COLOR_BGR5652BGRA | |
COLOR_BGR5652GRAY | |
COLOR_BGR5652RGB | |
COLOR_BGR5652RGBA | |
COLOR_BGR2BGR555 | convert between RGB/BGR and BGR555 (16-bit images) |
COLOR_BGR2BGR565 | convert between RGB/BGR and BGR565 (16-bit images) |
COLOR_BGR2YUV_I420 | RGB to YUV 4:2:0 family |
COLOR_BGR2YUV_YV12 | RGB to YUV 4:2:0 family |
COLOR_BGRA2BGR | remove alpha channel from RGB or BGR image |
COLOR_BGRA2GRAY | |
COLOR_BGRA2RGB | |
COLOR_BGRA2RGBA | |
COLOR_BGRA2YUV_IYUV | RGB to YUV 4:2:0 family |
COLOR_BGRA2BGR555 | |
COLOR_BGRA2BGR565 | |
COLOR_BGRA2YUV_I420 | RGB to YUV 4:2:0 family |
COLOR_BGRA2YUV_YV12 | RGB to YUV 4:2:0 family |
COLOR_BayerBG2BGR | Demosaicing |
COLOR_BayerBG2BGRA | Demosaicing with alpha channel |
COLOR_BayerBG2BGR_EA | Edge-Aware Demosaicing |
COLOR_BayerBG2BGR_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerBG2GRAY | Demosaicing |
COLOR_BayerBG2RGB | Demosaicing |
COLOR_BayerBG2RGBA | Demosaicing with alpha channel |
COLOR_BayerBG2RGB_EA | Edge-Aware Demosaicing |
COLOR_BayerBG2RGB_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerGB2BGR | Demosaicing |
COLOR_BayerGB2BGRA | Demosaicing with alpha channel |
COLOR_BayerGB2BGR_EA | Edge-Aware Demosaicing |
COLOR_BayerGB2BGR_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerGB2GRAY | Demosaicing |
COLOR_BayerGB2RGB | Demosaicing |
COLOR_BayerGB2RGBA | Demosaicing with alpha channel |
COLOR_BayerGB2RGB_EA | Edge-Aware Demosaicing |
COLOR_BayerGB2RGB_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerGR2BGR | Demosaicing |
COLOR_BayerGR2BGRA | Demosaicing with alpha channel |
COLOR_BayerGR2BGR_EA | Edge-Aware Demosaicing |
COLOR_BayerGR2BGR_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerGR2GRAY | Demosaicing |
COLOR_BayerGR2RGB | Demosaicing |
COLOR_BayerGR2RGBA | Demosaicing with alpha channel |
COLOR_BayerGR2RGB_EA | Edge-Aware Demosaicing |
COLOR_BayerGR2RGB_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerRG2BGR | Demosaicing |
COLOR_BayerRG2BGRA | Demosaicing with alpha channel |
COLOR_BayerRG2BGR_EA | Edge-Aware Demosaicing |
COLOR_BayerRG2BGR_VNG | Demosaicing using Variable Number of Gradients |
COLOR_BayerRG2GRAY | Demosaicing |
COLOR_BayerRG2RGB | Demosaicing |
COLOR_BayerRG2RGBA | Demosaicing with alpha channel |
COLOR_BayerRG2RGB_EA | Edge-Aware Demosaicing |
COLOR_BayerRG2RGB_VNG | Demosaicing using Variable Number of Gradients |
COLOR_COLORCVT_MAX | maximum color conversion code value (used internally) |
COLOR_GRAY2BGR | |
COLOR_GRAY2BGRA | |
COLOR_GRAY2RGB | |
COLOR_GRAY2RGBA | |
COLOR_GRAY2BGR555 | convert between grayscale and BGR555 (16-bit images) |
COLOR_GRAY2BGR565 | convert between grayscale and BGR565 (16-bit images) |
COLOR_HLS2BGR | |
COLOR_HLS2BGR_FULL | |
COLOR_HLS2RGB | |
COLOR_HLS2RGB_FULL | |
COLOR_HSV2BGR | backward conversions to RGB/BGR |
COLOR_HSV2BGR_FULL | |
COLOR_HSV2RGB | |
COLOR_HSV2RGB_FULL | |
COLOR_LBGR2Lab | |
COLOR_LBGR2Luv | |
COLOR_LRGB2Lab | |
COLOR_LRGB2Luv | |
COLOR_Lab2BGR | |
COLOR_Lab2LBGR | |
COLOR_Lab2LRGB | |
COLOR_Lab2RGB | |
COLOR_Luv2BGR | |
COLOR_Luv2LBGR | |
COLOR_Luv2LRGB | |
COLOR_Luv2RGB | |
COLOR_RGB2BGR | |
COLOR_RGB2BGRA | |
COLOR_RGB2GRAY | |
COLOR_RGB2HLS | |
COLOR_RGB2HLS_FULL | |
COLOR_RGB2HSV | |
COLOR_RGB2HSV_FULL | |
COLOR_RGB2Lab | |
COLOR_RGB2Luv | |
COLOR_RGB2RGBA | |
COLOR_RGB2XYZ | |
COLOR_RGB2YCrCb | |
COLOR_RGB2YUV | |
COLOR_RGB2YUV_IYUV | RGB to YUV 4:2:0 family |
COLOR_RGB2BGR555 | |
COLOR_RGB2BGR565 | |
COLOR_RGB2YUV_I420 | RGB to YUV 4:2:0 family |
COLOR_RGB2YUV_YV12 | RGB to YUV 4:2:0 family |
COLOR_RGBA2BGR | |
COLOR_RGBA2BGRA | |
COLOR_RGBA2GRAY | |
COLOR_RGBA2RGB | |
COLOR_RGBA2YUV_IYUV | RGB to YUV 4:2:0 family |
COLOR_RGBA2mRGBA | alpha premultiplication |
COLOR_RGBA2BGR555 | |
COLOR_RGBA2BGR565 | |
COLOR_RGBA2YUV_I420 | RGB to YUV 4:2:0 family |
COLOR_RGBA2YUV_YV12 | RGB to YUV 4:2:0 family |
COLOR_XYZ2BGR | |
COLOR_XYZ2RGB | |
COLOR_YCrCb2BGR | |
COLOR_YCrCb2RGB | |
COLOR_YUV2BGR | |
COLOR_YUV2BGRA_IYUV | YUV 4:2:0 family to RGB |
COLOR_YUV2BGRA_UYNV | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_UYVY | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_YUNV | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_YUYV | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_YVYU | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_IYUV | YUV 4:2:0 family to RGB |
COLOR_YUV2BGR_UYNV | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_UYVY | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_YUNV | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_YUYV | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_YVYU | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_IYUV | YUV 4:2:0 family to RGB |
COLOR_YUV2GRAY_UYNV | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_UYVY | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_YUNV | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_YUYV | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_YVYU | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB | |
COLOR_YUV2RGBA_IYUV | YUV 4:2:0 family to RGB |
COLOR_YUV2RGBA_UYNV | YUV 4:2:2 family to RGB |
COLOR_YUV2RGBA_UYVY | YUV 4:2:2 family to RGB |
COLOR_YUV2RGBA_YUNV | YUV 4:2:2 family to RGB |
COLOR_YUV2RGBA_YUYV | YUV 4:2:2 family to RGB |
COLOR_YUV2RGBA_YVYU | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_IYUV | YUV 4:2:0 family to RGB |
COLOR_YUV2RGB_UYNV | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_UYVY | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_YUNV | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_YUYV | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_YVYU | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_I420 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGRA_NV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGRA_NV21 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGRA_Y422 | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_YUY2 | YUV 4:2:2 family to RGB |
COLOR_YUV2BGRA_YV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGR_I420 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGR_NV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGR_NV21 | YUV 4:2:0 family to RGB |
COLOR_YUV2BGR_Y422 | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_YUY2 | YUV 4:2:2 family to RGB |
COLOR_YUV2BGR_YV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2GRAY_420 | YUV 4:2:0 family to RGB |
COLOR_YUV2GRAY_I420 | YUV 4:2:0 family to RGB |
COLOR_YUV2GRAY_NV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2GRAY_NV21 | YUV 4:2:0 family to RGB |
COLOR_YUV2GRAY_Y422 | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_YUY2 | YUV 4:2:2 family to RGB |
COLOR_YUV2GRAY_YV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGBA_I420 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGBA_NV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGBA_NV21 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGBA_Y422 | YUV 4:2:2 family to RGB |
COLOR_YUV2RGBA_YUY2 | YUV 4:2:2 family to RGB |
COLOR_YUV2RGBA_YV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGB_I420 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGB_NV12 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGB_NV21 | YUV 4:2:0 family to RGB |
COLOR_YUV2RGB_Y422 | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_YUY2 | YUV 4:2:2 family to RGB |
COLOR_YUV2RGB_YV12 | YUV 4:2:0 family to RGB |
COLOR_YUV420p2BGR | YUV 4:2:0 family to RGB |
COLOR_YUV420p2BGRA | YUV 4:2:0 family to RGB |
COLOR_YUV420p2GRAY | YUV 4:2:0 family to RGB |
COLOR_YUV420p2RGB | YUV 4:2:0 family to RGB |
COLOR_YUV420p2RGBA | YUV 4:2:0 family to RGB |
COLOR_YUV420sp2BGR | YUV 4:2:0 family to RGB |
COLOR_YUV420sp2BGRA | YUV 4:2:0 family to RGB |
COLOR_YUV420sp2GRAY | YUV 4:2:0 family to RGB |
COLOR_YUV420sp2RGB | YUV 4:2:0 family to RGB |
COLOR_YUV420sp2RGBA | YUV 4:2:0 family to RGB |
COLOR_mRGBA2RGBA | alpha premultiplication |
CONTOURS_MATCH_I1 | I_1(A,B) = sum_i |1/m_i^A - 1/m_i^B| |
CONTOURS_MATCH_I2 | I_2(A,B) = sum_i |m_i^A - m_i^B| |
CONTOURS_MATCH_I3 | I_3(A,B) = max_i |m_i^A - m_i^B| / |m_i^A|, where m_i^A = sign(h_i^A) * log(h_i^A) and h_i^A are the Hu moments of shape A |
DIST_C | distance = max(|x1-x2|,|y1-y2|) |
DIST_FAIR | distance = c^2(|x|/c-log(1+|x|/c)), c = 1.3998 |
DIST_HUBER | distance = |x|<c ? x^2/2 : c(|x|-c/2), c=1.345 |
DIST_L1 | distance = |x1-x2| + |y1-y2| |
DIST_L2 | the simple Euclidean distance |
DIST_L12 | L1-L2 metric: distance = 2(sqrt(1+x*x/2) - 1) |
DIST_LABEL_CCOMP | each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label |
DIST_LABEL_PIXEL | each zero pixel (and all the non-zero pixels closest to it) gets its own label. |
DIST_MASK_3 | mask=3 |
DIST_MASK_5 | mask=5 |
DIST_MASK_PRECISE | |
DIST_USER | User defined distance |
DIST_WELSCH | distance = c^2/2(1-exp(-(x/c)^2)), c = 2.9846 |
FILLED | |
FILTER_SCHARR | |
FLOODFILL_FIXED_RANGE | If set, the difference between the current pixel and seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating). |
FLOODFILL_MASK_ONLY | If set, the function does not change the image ( newVal is ignored), and only fills the mask with the value specified in bits 8-16 of flags as described above. This option only makes sense in function variants that have the mask parameter. |
FONT_HERSHEY_COMPLEX | normal size serif font |
FONT_HERSHEY_COMPLEX_SMALL | smaller version of FONT_HERSHEY_COMPLEX |
FONT_HERSHEY_DUPLEX | normal size sans-serif font (more complex than FONT_HERSHEY_SIMPLEX) |
FONT_HERSHEY_PLAIN | small size sans-serif font |
FONT_HERSHEY_SCRIPT_COMPLEX | more complex variant of FONT_HERSHEY_SCRIPT_SIMPLEX |
FONT_HERSHEY_SCRIPT_SIMPLEX | hand-writing style font |
FONT_HERSHEY_SIMPLEX | normal size sans-serif font |
FONT_HERSHEY_TRIPLEX | normal size serif font (more complex than FONT_HERSHEY_COMPLEX) |
FONT_ITALIC | flag for italic font |
GC_BGD | an obvious background pixel |
GC_EVAL | The value means that the algorithm should just resume. |
GC_EVAL_FREEZE_MODEL | The value means that the algorithm should just run the grabCut algorithm (a single iteration) with the fixed model |
GC_FGD | an obvious foreground (object) pixel |
GC_INIT_WITH_MASK | The function initializes the state using the provided mask. Note that GC_INIT_WITH_RECT and GC_INIT_WITH_MASK can be combined. Then, all the pixels outside of the ROI are automatically initialized with GC_BGD . |
GC_INIT_WITH_RECT | The function initializes the state and the mask using the provided rectangle. After that it runs iterCount iterations of the algorithm. |
GC_PR_BGD | a possible background pixel |
GC_PR_FGD | a possible foreground pixel |
HISTCMP_BHATTACHARYYA | Bhattacharyya distance (in fact, OpenCV computes the Hellinger distance, which is related to the Bhattacharyya coefficient): d(H1,H2) = sqrt(1 - sum_I sqrt(H1(I)*H2(I)) / sqrt(mean(H1)*mean(H2)*N^2)) |
HISTCMP_CHISQR | Chi-Square: d(H1,H2) = sum_I (H1(I)-H2(I))^2 / H1(I) |
HISTCMP_CHISQR_ALT | Alternative Chi-Square: d(H1,H2) = 2 * sum_I (H1(I)-H2(I))^2 / (H1(I)+H2(I)). This alternative formula is regularly used for texture comparison. See e.g. Puzicha1997 |
HISTCMP_CORREL | Correlation: d(H1,H2) = sum_I (H1(I)-mean(H1))*(H2(I)-mean(H2)) / sqrt(sum_I (H1(I)-mean(H1))^2 * sum_I (H2(I)-mean(H2))^2), where N is the total number of histogram bins |
HISTCMP_HELLINGER | Synonym for HISTCMP_BHATTACHARYYA |
HISTCMP_INTERSECT | Intersection: d(H1,H2) = sum_I min(H1(I), H2(I)) |
HISTCMP_KL_DIV | Kullback-Leibler divergence: d(H1,H2) = sum_I H1(I) * log(H1(I)/H2(I)) |
HOUGH_GRADIENT | basically 21HT, described in Yuen90 |
HOUGH_MULTI_SCALE | multi-scale variant of the classical Hough transform. The lines are encoded the same way as HOUGH_STANDARD. |
HOUGH_PROBABILISTIC | probabilistic Hough transform (more efficient if the picture contains a few long linear segments). It returns line segments rather than whole lines. Each segment is represented by its starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type. |
HOUGH_STANDARD | classical or standard Hough transform. Every line is represented by two floating-point numbers (rho, theta), where rho is the distance between the (0,0) point and the line, and theta is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type |
INTERSECT_FULL | One of the rectangles is fully enclosed in the other |
INTERSECT_NONE | No intersection |
INTERSECT_PARTIAL | There is a partial intersection |
INTER_AREA | resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire'-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method. |
INTER_BITS | |
INTER_BITS2 | |
INTER_CUBIC | bicubic interpolation |
INTER_LANCZOS4 | Lanczos interpolation over 8x8 neighborhood |
INTER_LINEAR | bilinear interpolation |
INTER_LINEAR_EXACT | Bit exact bilinear interpolation |
INTER_MAX | mask for interpolation codes |
INTER_NEAREST | nearest neighbor interpolation |
INTER_TAB_SIZE | |
INTER_TAB_SIZE2 | |
LINE_4 | 4-connected line |
LINE_8 | 8-connected line |
LINE_AA | antialiased line |
LSD_REFINE_ADV | Advanced refinement. Number of false alarms is calculated, lines are refined through increase of precision, decrement in size, etc. |
LSD_REFINE_NONE | No refinement applied |
LSD_REFINE_STD | Standard refinement is applied. E.g. breaking arches into smaller straighter line approximations. |
MARKER_CROSS | A crosshair marker shape |
MARKER_DIAMOND | A diamond marker shape |
MARKER_SQUARE | A square marker shape |
MARKER_STAR | A star marker shape, combination of cross and tilted cross |
MARKER_TILTED_CROSS | A 45 degree tilted crosshair marker shape |
MARKER_TRIANGLE_DOWN | A downwards pointing triangle marker shape |
MARKER_TRIANGLE_UP | An upwards pointing triangle marker shape |
MORPH_BLACKHAT | "black hat": blackhat(src) = close(src) - src |
MORPH_CLOSE | a closing operation: close(src) = erode(dilate(src)) |
MORPH_CROSS | a cross-shaped structuring element |
MORPH_DILATE | see #dilate |
MORPH_ELLIPSE | an elliptic structuring element, that is, a filled ellipse inscribed into the rectangle Rect(0, 0, esize.width, esize.height) |
MORPH_ERODE | see #erode |
MORPH_GRADIENT | a morphological gradient: gradient(src) = dilate(src) - erode(src) |
MORPH_HITMISS | "hit or miss". Only supported for CV_8UC1 binary images. A tutorial can be found in the documentation |
MORPH_OPEN | an opening operation: open(src) = dilate(erode(src)) |
MORPH_RECT | a rectangular structuring element (all elements equal to 1) |
MORPH_TOPHAT | "top hat": tophat(src) = src - open(src) |
RETR_CCOMP | retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level. |
RETR_EXTERNAL | retrieves only the extreme outer contours. It sets hierarchy[i][2] = hierarchy[i][3] = -1 for all the contours. |
RETR_FLOODFILL | |
RETR_LIST | retrieves all of the contours without establishing any hierarchical relationships. |
RETR_TREE | retrieves all of the contours and reconstructs a full hierarchy of nested contours. |
Subdiv2D_NEXT_AROUND_DST | |
Subdiv2D_NEXT_AROUND_LEFT | |
Subdiv2D_NEXT_AROUND_ORG | |
Subdiv2D_NEXT_AROUND_RIGHT | |
Subdiv2D_PREV_AROUND_DST | |
Subdiv2D_PREV_AROUND_LEFT | |
Subdiv2D_PREV_AROUND_ORG | |
Subdiv2D_PREV_AROUND_RIGHT | |
Subdiv2D_PTLOC_ERROR | Point location error |
Subdiv2D_PTLOC_INSIDE | Point inside some facet |
Subdiv2D_PTLOC_ON_EDGE | Point on some edge |
Subdiv2D_PTLOC_OUTSIDE_RECT | Point outside the subdivision bounding rect |
Subdiv2D_PTLOC_VERTEX | Point coincides with one of the subdivision vertices |
THRESH_BINARY | dst(x,y) = maxval if src(x,y) > thresh, else 0 |
THRESH_BINARY_INV | dst(x,y) = 0 if src(x,y) > thresh, else maxval |
THRESH_MASK | |
THRESH_OTSU | flag, use Otsu's algorithm to choose the optimal threshold value |
THRESH_TOZERO | dst(x,y) = src(x,y) if src(x,y) > thresh, else 0 |
THRESH_TOZERO_INV | dst(x,y) = 0 if src(x,y) > thresh, else src(x,y) |
THRESH_TRIANGLE | flag, use the Triangle algorithm to choose the optimal threshold value |
THRESH_TRUNC | dst(x,y) = thresh if src(x,y) > thresh, else src(x,y) |
TM_CCOEFF | R(x,y) = sum_{x',y'} T'(x',y') * I'(x+x', y+y'), where T' and I' are the template and image patch with their means subtracted |
TM_CCOEFF_NORMED | the TM_CCOEFF result normalized by sqrt(sum T'^2 * sum I'^2) |
TM_CCORR | R(x,y) = sum_{x',y'} T(x',y') * I(x+x', y+y') |
TM_CCORR_NORMED | the TM_CCORR result normalized by sqrt(sum T^2 * sum I^2) |
TM_SQDIFF | R(x,y) = sum_{x',y'} (T(x',y') - I(x+x', y+y'))^2 |
TM_SQDIFF_NORMED | the TM_SQDIFF result normalized by sqrt(sum T^2 * sum I^2) |
WARP_FILL_OUTLIERS | flag, fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero |
WARP_INVERSE_MAP | flag, inverse transformation |
WARP_POLAR_LINEAR | Remaps an image to/from polar space. |
WARP_POLAR_LOG | Remaps an image to/from semilog-polar space. |
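Several of the constants above tabulate simple per-pixel rules. As a worked illustration, the five THRESH_* operations can be sketched in pure Rust (this mirrors the formulas in the table, not the crate's threshold call; the function names are ours):

```rust
// Per-pixel rules for the five non-flag threshold types.
fn thresh_binary(src: u8, t: u8, maxval: u8) -> u8 {
    if src > t { maxval } else { 0 }
}
fn thresh_binary_inv(src: u8, t: u8, maxval: u8) -> u8 {
    if src > t { 0 } else { maxval }
}
fn thresh_trunc(src: u8, t: u8) -> u8 {
    if src > t { t } else { src }
}
fn thresh_tozero(src: u8, t: u8) -> u8 {
    if src > t { src } else { 0 }
}
fn thresh_tozero_inv(src: u8, t: u8) -> u8 {
    if src > t { 0 } else { src }
}

fn main() {
    let (t, maxval) = (127, 255);
    assert_eq!(thresh_binary(200, t, maxval), 255); // above t -> maxval
    assert_eq!(thresh_binary_inv(200, t, maxval), 0); // above t -> 0
    assert_eq!(thresh_trunc(200, t), 127); // above t -> clipped to t
    assert_eq!(thresh_tozero(100, t), 0); // at or below t -> 0
    assert_eq!(thresh_tozero_inv(100, t), 100); // at or below t -> kept
}
```

The THRESH_OTSU and THRESH_TRIANGLE flags leave these rules unchanged and only select the thresh value automatically from the image histogram.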
Traits
CLAHE | Base class for Contrast Limited Adaptive Histogram Equalization. |
GeneralizedHough | Finds an arbitrary template in a grayscale image using the Generalized Hough Transform |
GeneralizedHoughBallard | Finds an arbitrary template in a grayscale image using the Generalized Hough Transform (Ballard variant) |
GeneralizedHoughGuil | Finds an arbitrary template in a grayscale image using the Generalized Hough Transform (Guil variant) |
LineIteratorTrait | Line iterator |
LineSegmentDetector | Line segment detector class |
Subdiv2DTrait |
Functions
accumulate | Adds an image to the accumulator image. |
accumulate_product | Adds the per-element product of two input images to the accumulator image. |
accumulate_square | Adds the square of a source image to the accumulator image. |
accumulate_weighted | Updates a running average. |
adaptive_threshold | Applies an adaptive threshold to an array. |
apply_color_map | Applies a GNU Octave/MATLAB equivalent colormap on a given image. |
apply_color_map_user | Applies a user colormap on a given image. |
approx_poly_dp | Approximates a polygonal curve(s) with the specified precision. |
arc_length | Calculates a contour perimeter or a curve length. |
arrowed_line | Draws an arrow segment pointing from the first point to the second one. |
bilateral_filter | Applies the bilateral filter to an image. |
blend_linear | Performs linear blending of two images: dst(i,j) = weights1(i,j)*src1(i,j) + weights2(i,j)*src2(i,j) |
blur | Blurs an image using the normalized box filter. |
bounding_rect | Calculates the up-right bounding rectangle of a point set or non-zero pixels of gray-scale image. |
box_filter | Blurs an image using the box filter. |
box_points | Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle. |
build_pyramid | Constructs the Gaussian pyramid for an image. |
calc_back_project | Calculates the back projection of a histogram. |
calc_hist | Calculates a histogram of a set of arrays. |
canny | Finds edges in an image using the Canny algorithm [Canny86]. |
canny_derivative | Finds edges in an image using the Canny algorithm, taking custom image gradients as input (overload). |
circle | Draws a circle. |
clip_line | Clips the line against the image rectangle. |
clip_line_size | Clips the line against the image rectangle. |
clip_line_size_i64 | Clips the line against the image rectangle. |
compare_hist | Compares two histograms. |
compare_hist_1 | Compares two histograms. |
connected_components | Computes the connected-components labeled image of a boolean image. |
connected_components_with_algorithm | Computes the connected-components labeled image of a boolean image, using the specified algorithm. |
connected_components_with_stats | Computes the connected-components labeled image of a boolean image and also produces a statistics output for each label. |
connected_components_with_stats_with_algorithm | Computes the connected-components labeled image of a boolean image and also produces a statistics output for each label, using the specified algorithm. |
contour_area | Calculates a contour area. |
convert_maps | Converts image transformation maps from one representation to another. |
convex_hull | Finds the convex hull of a point set. |
convexity_defects | Finds the convexity defects of a contour. |
corner_eigen_vals_and_vecs | Calculates eigenvalues and eigenvectors of image blocks for corner detection. |
corner_harris | Harris corner detector. |
corner_min_eigen_val | Calculates the minimal eigenvalue of gradient matrices for corner detection. |
corner_sub_pix | Refines the corner locations. |
create_clahe | Creates a smart pointer to a cv::CLAHE class and initializes it. |
create_generalized_hough_ballard | Creates a smart pointer to a cv::GeneralizedHoughBallard class and initializes it. |
create_generalized_hough_guil | Creates a smart pointer to a cv::GeneralizedHoughGuil class and initializes it. |
create_hanning_window | Computes Hanning window coefficients in two dimensions. |
create_line_segment_detector | Creates a smart pointer to a LineSegmentDetector object and initializes it. |
cvt_color | Converts an image from one color space to another. |
cvt_color_two_plane | Converts an image from one color space to another where the source image is stored in two planes. |
demosaicing | Main function for all demosaicing processes. |
dilate | Dilates an image by using a specific structuring element. |
distance_transform | Calculates the distance to the closest zero pixel for each pixel of the source image. |
distance_transform_with_labels | Calculates the distance to the closest zero pixel for each pixel of the source image. |
draw_contours | Draws contours outlines or filled contours. |
draw_marker | Draws a marker on a predefined position in an image. |
ellipse | Draws a simple or thick elliptic arc or fills an ellipse sector. |
ellipse_2_poly | Approximates an elliptic arc with a polyline. |
ellipse_2_poly_f64 | Approximates an elliptic arc with a polyline. |
ellipse_rotated_rect | Draws a simple or thick elliptic arc or fills an ellipse sector. |
emd | Computes the "minimal work" distance between two weighted point configurations. |
emd_1 | Computes the "minimal work" distance between two weighted point configurations (variant of emd with C++ default parameters). |
equalize_hist | Equalizes the histogram of a grayscale image. |
erode | Erodes an image by using a specific structuring element. |
fill_convex_poly | Fills a convex polygon. |
fill_poly | Fills the area bounded by one or more polygons. |
filter_2d | Convolves an image with the kernel. |
find_contours | Finds contours in a binary image. |
find_contours_with_hierarchy | Finds contours in a binary image. |
fit_ellipse | Fits an ellipse around a set of 2D points. |
fit_ellipse_ams | Fits an ellipse around a set of 2D points. |
fit_ellipse_direct | Fits an ellipse around a set of 2D points. |
fit_line | Fits a line to a 2D or 3D point set. |
flood_fill | Fills a connected component with the given color. |
flood_fill_mask | Fills a connected component with the given color. |
gaussian_blur | Blurs an image using a Gaussian filter. |
get_affine_transform | Calculates an affine transform from three pairs of the corresponding points. |
get_affine_transform_slice | Calculates an affine transform from three pairs of the corresponding points. |
get_deriv_kernels | Returns filter coefficients for computing spatial image derivatives. |
get_font_scale_from_height | Calculates the font-specific size to use to achieve a given height in pixels. |
get_gabor_kernel | Returns Gabor filter coefficients. |
get_gaussian_kernel | Returns Gaussian filter coefficients. |
get_perspective_transform | Calculates a perspective transform from four pairs of the corresponding points. |
get_perspective_transform_slice | Calculates a perspective transform from four pairs of the corresponding points. |
get_rect_sub_pix | Retrieves a pixel rectangle from an image with sub-pixel accuracy. |
get_rotation_matrix_2d | Calculates an affine matrix of 2D rotation. |
get_rotation_matrix_2d_matx | Calculates an affine matrix of 2D rotation, returning a fixed-size matrix (variant of get_rotation_matrix_2d). |
get_structuring_element | Returns a structuring element of the specified size and shape for morphological operations. |
get_text_size | Calculates the width and height of a text string. |
good_features_to_track | Determines strong corners on an image. |
good_features_to_track_with_gradient | Determines strong corners on an image (variant of good_features_to_track taking a gradient size parameter). |
grab_cut | Runs the GrabCut algorithm. |
hough_circles | Finds circles in a grayscale image using the Hough transform. |
hough_lines | Finds lines in a binary image using the standard Hough transform. |
hough_lines_p | Finds line segments in a binary image using the probabilistic Hough transform. |
hough_lines_point_set | Finds lines in a set of points using the standard Hough transform. |
hu_moments | Calculates seven Hu invariants. |
hu_moments_1 | Calculates seven Hu invariants. |
integral | Calculates the integral of an image. |
integral2 | Calculates the integral of an image. |
integral3 | Calculates the integral of an image. |
intersect_convex_convex | Finds the intersection of two convex polygons. |
invert_affine_transform | Inverts an affine transformation. |
is_contour_convex | Tests whether a contour is convex. |
laplacian | Calculates the Laplacian of an image. |
line | Draws a line segment connecting two points. |
linear_polar | Deprecated. Remaps an image to polar coordinate space. |
log_polar | Deprecated. Remaps an image to semilog-polar coordinate space. |
match_shapes | Compares two shapes. |
match_template | Compares a template against overlapped image regions. |
median_blur | Blurs an image using the median filter. |
min_area_rect | Finds a rotated rectangle of the minimum area enclosing the input 2D point set. |
min_enclosing_circle | Finds a circle of the minimum area enclosing a 2D point set. |
min_enclosing_triangle | Finds a triangle of minimum area enclosing a 2D point set and returns its area. |
moments | Calculates all of the moments up to the third order of a polygon or rasterized shape. |
morphology_default_border_value | Returns the "magic" border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation. |
morphology_ex | Performs advanced morphological transformations. |
phase_correlate | The function is used to detect translational shifts that occur between two images. |
point_polygon_test | Performs a point-in-contour test. |
polylines | Draws several polygonal curves. |
pre_corner_detect | Calculates a feature map for corner detection. |
put_text | Draws a text string. |
pyr_down | Blurs an image and downsamples it. |
pyr_mean_shift_filtering | Performs initial step of meanshift segmentation of an image. |
pyr_up | Upsamples an image and then blurs it. |
rectangle | Draws a simple, thick, or filled up-right rectangle. |
rectangle_points | Draws a simple, thick, or filled up-right rectangle. |
remap | Applies a generic geometrical transformation to an image. |
resize | Resizes an image. |
rotated_rectangle_intersection | Finds out if there is any intersection between two rotated rectangles. |
scharr | Calculates the first x- or y- image derivative using Scharr operator. |
sep_filter_2d | Applies a separable linear filter to an image. |
sobel | Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator. |
spatial_gradient | Calculates the first-order image derivatives in both x and y using a Sobel operator. |
sqr_box_filter | Calculates the normalized sum of squares of the pixel values overlapping the filter. |
threshold | Applies a fixed-level threshold to each array element. |
warp_affine | Applies an affine transformation to an image. |
warp_perspective | Applies a perspective transformation to an image. |
warp_polar | Remaps an image to polar or semilog-polar coordinate space. |
watershed | Performs a marker-based image segmentation using the watershed algorithm. |