Module calib3d

Camera Calibration and 3D Reconstruction

The functions in this section use a so-called pinhole camera model. The view of a scene is obtained by projecting a scene's 3D point $P_w$ into the image plane using a perspective transformation which forms the corresponding pixel $p$. Both $P_w$ and $p$ are represented in homogeneous coordinates, i.e. as 3D and 2D homogeneous vectors respectively. You will find a brief introduction to projective geometry, homogeneous vectors and homogeneous transformations at the end of this section's introduction. For more succinct notation, we often drop the 'homogeneous' and say vector instead of homogeneous vector.

The distortion-free projective transformation given by a pinhole camera model is shown below.

$$ s \, p = A \begin{bmatrix} R & t \end{bmatrix} P_w $$

where $P_w$ is a 3D point expressed with respect to the world coordinate system, $p$ is a 2D pixel in the image plane, $A$ is the camera intrinsic matrix, $R$ and $t$ are the rotation and translation that describe the change of coordinates from world to camera coordinate systems (or camera frame) and $s$ is the projective transformation's arbitrary scaling and not part of the camera model.

The camera intrinsic matrix $A$ (notation used as in Zhang2000 and also generally notated as $K$) projects 3D points given in the camera coordinate system to 2D pixel coordinates, i.e.

$$ p = A \, P_c $$

The camera intrinsic matrix $A$ is composed of the focal lengths $f_x$ and $f_y$, which are expressed in pixel units, and the principal point $(c_x, c_y)$, that is usually close to the image center:

$$ A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

and thus

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} $$

The matrix of intrinsic parameters does not depend on the scene viewed. So, once estimated, it can be re-used as long as the focal length is fixed (in the case of a zoom lens). Note, however, that if an image from the camera is scaled by a factor, all of these parameters need to be scaled (multiplied or divided, respectively) by the same factor.

The joint rotation-translation matrix $[R|t]$ is the matrix product of a projective transformation and a homogeneous transformation. The 3-by-4 projective transformation maps 3D points represented in camera coordinates to 2D points in the image plane and represented in normalized camera coordinates $x' = X_c / Z_c$ and $y' = Y_c / Z_c$:

$$ Z_c \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} $$

The homogeneous transformation is encoded by the extrinsic parameters $R$ and $t$ and represents the change of basis from world coordinate system $w$ to the camera coordinate system $c$. Thus, given the representation of the point $P$ in world coordinates, $P_w$, we obtain $P$'s representation in the camera coordinate system, $P_c$, by

$$ P_c = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} P_w $$

This homogeneous transformation is composed of $R$, a 3-by-3 rotation matrix, and $t$, a 3-by-1 translation vector:

$$ \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

and therefore

$$ \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$
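For concreteness, this change of basis can be written out directly; the following is a minimal, dependency-free Rust sketch (the helper name is illustrative, not part of this crate's API):

```rust
/// Apply the extrinsic transform P_c = R * P_w + t (the 4x4 form above,
/// written out with the bottom row [0 0 0 1] dropped).
fn world_to_camera(r: [[f64; 3]; 3], t: [f64; 3], p_w: [f64; 3]) -> [f64; 3] {
    let mut p_c = [0.0; 3];
    for i in 0..3 {
        p_c[i] = r[i][0] * p_w[0] + r[i][1] * p_w[1] + r[i][2] * p_w[2] + t[i];
    }
    p_c
}
```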

Combining the projective transformation and the homogeneous transformation, we obtain the projective transformation that maps 3D points in world coordinates into 2D points in the image plane and in normalized camera coordinates:

$$ Z_c \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

with $x' = X_c / Z_c$ and $y' = Y_c / Z_c$. Putting the equations for intrinsics and extrinsics together, we can write out $s \, p = A \begin{bmatrix} R & t \end{bmatrix} P_w$ as

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

If $Z_c \ne 0$, the transformation above is equivalent to the following,

$$ \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} f_x X_c / Z_c + c_x \\ f_y Y_c / Z_c + c_y \end{bmatrix} $$

with

$$ \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$
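Continuing the sketch, the complete distortion-free projection to pixel coordinates (reusing the illustrative `world_to_camera` helper above) could look like:

```rust
/// Project a world point to pixel coordinates with the ideal pinhole model:
/// u = fx * Xc/Zc + cx, v = fy * Yc/Zc + cy.
fn project_pinhole(
    fx: f64, fy: f64, cx: f64, cy: f64,
    r: [[f64; 3]; 3], t: [f64; 3],
    p_w: [f64; 3],
) -> Option<(f64, f64)> {
    let p_c = world_to_camera(r, t, p_w);
    if p_c[2] == 0.0 {
        return None; // point lies on the camera plane, projection undefined
    }
    let (xp, yp) = (p_c[0] / p_c[2], p_c[1] / p_c[2]); // normalized camera coordinates
    Some((fx * xp + cx, fy * yp + cy))
}
```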

The following figure illustrates the pinhole camera model.

Pinhole camera model

Real lenses usually have some distortion, mostly radial distortion, and slight tangential distortion. So, the above model is extended as:

$$ \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} f_x x'' + c_x \\ f_y y'' + c_y \end{bmatrix} $$

where

$$ \begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} x' \dfrac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4 \\[1ex] y' \dfrac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \end{bmatrix} $$

with

$$ r^2 = x'^2 + y'^2 $$

and

$$ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} X_c / Z_c \\ Y_c / Z_c \end{bmatrix} $$

if $Z_c \ne 0$.

The distortion parameters are the radial coefficients $k_1$, $k_2$, $k_3$, $k_4$, $k_5$, and $k_6$; $p_1$ and $p_2$ are the tangential distortion coefficients; and $s_1$, $s_2$, $s_3$, and $s_4$ are the thin prism distortion coefficients. Higher-order coefficients are not considered in OpenCV.
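As a rough illustration of the extended model, the distortion step can be written out as follows (plain Rust, rational radial + tangential + thin-prism terms only, no sensor tilt; helper names are illustrative):

```rust
/// Map ideal normalized coordinates (x', y') to distorted (x'', y'')
/// using the rational radial, tangential and thin-prism terms above.
fn distort(xp: f64, yp: f64, k: [f64; 6], p: [f64; 2], s: [f64; 4]) -> (f64, f64) {
    let r2 = xp * xp + yp * yp;
    let (r4, r6) = (r2 * r2, r2 * r2 * r2);
    let radial = (1.0 + k[0] * r2 + k[1] * r4 + k[2] * r6)
        / (1.0 + k[3] * r2 + k[4] * r4 + k[5] * r6);
    let xpp = xp * radial + 2.0 * p[0] * xp * yp + p[1] * (r2 + 2.0 * xp * xp)
        + s[0] * r2 + s[1] * r4;
    let ypp = yp * radial + p[0] * (r2 + 2.0 * yp * yp) + 2.0 * p[1] * xp * yp
        + s[2] * r2 + s[3] * r4;
    (xpp, ypp)
}
```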

The next figures show two common types of radial distortion: barrel distortion ($1 + k_1 r^2 + k_2 r^4 + k_3 r^6$ monotonically decreasing) and pincushion distortion ($1 + k_1 r^2 + k_2 r^4 + k_3 r^6$ monotonically increasing). Radial distortion is always monotonic for real lenses, and if the estimator produces a non-monotonic result, this should be considered a calibration failure. More generally, radial distortion must be monotonic and the distortion function must be bijective. A failed estimation result may look deceptively good near the image center but will work poorly in, e.g., AR/SFM applications. The optimization method used in OpenCV camera calibration does not include these constraints, as the framework does not support the required integer programming and polynomial inequalities. See issue #15992 for additional information.

In some cases, the image sensor may be tilted in order to focus an oblique plane in front of the camera (Scheimpflug principle). This can be useful for particle image velocimetry (PIV) or triangulation with a laser fan. The tilt causes a perspective distortion of $x''$ and $y''$. This distortion can be modeled in the following way, see e.g. Louhichi07.

$$ \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} f_x x''' + c_x \\ f_y y''' + c_y \end{bmatrix} $$

where

$$ s \begin{bmatrix} x''' \\ y''' \\ 1 \end{bmatrix} = \begin{bmatrix} R_{33}(\tau_x, \tau_y) & 0 & -R_{13}(\tau_x, \tau_y) \\ 0 & R_{33}(\tau_x, \tau_y) & -R_{23}(\tau_x, \tau_y) \\ 0 & 0 & 1 \end{bmatrix} R(\tau_x, \tau_y) \begin{bmatrix} x'' \\ y'' \\ 1 \end{bmatrix} $$

and the matrix $R(\tau_x, \tau_y)$ is defined by two rotations with angular parameters $\tau_x$ and $\tau_y$, respectively,

$$ R(\tau_x, \tau_y) = \begin{bmatrix} \cos \tau_y & 0 & -\sin \tau_y \\ 0 & 1 & 0 \\ \sin \tau_y & 0 & \cos \tau_y \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos \tau_x & \sin \tau_x \\ 0 & -\sin \tau_x & \cos \tau_x \end{bmatrix} = \begin{bmatrix} \cos \tau_y & \sin \tau_y \sin \tau_x & -\sin \tau_y \cos \tau_x \\ 0 & \cos \tau_x & \sin \tau_x \\ \sin \tau_y & -\cos \tau_y \sin \tau_x & \cos \tau_y \cos \tau_x \end{bmatrix} $$
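A small plain-Rust sketch of this tilt rotation, written as the product above (illustrative only):

```rust
/// Build R(tau_x, tau_y) = R_y(tau_y) * R_x(tau_x) as in the expression above.
fn tilt_rotation(tau_x: f64, tau_y: f64) -> [[f64; 3]; 3] {
    let (sx, cx) = tau_x.sin_cos();
    let (sy, cy) = tau_y.sin_cos();
    [
        [cy, sy * sx, -sy * cx],
        [0.0, cx, sx],
        [sy, -cy * sx, cy * cx],
    ]
}
```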

In the functions below the coefficients are passed or returned as

$$ (k_1,\; k_2,\; p_1,\; p_2 [,\; k_3 [,\; k_4,\; k_5,\; k_6 [,\; s_1,\; s_2,\; s_3,\; s_4 [,\; \tau_x,\; \tau_y]]]]) $$

vector. That is, if the vector contains four elements, it means that $k_3 = 0$. The distortion coefficients do not depend on the scene viewed, so they also belong to the intrinsic camera parameters and remain the same regardless of the captured image resolution. If, for example, a camera has been calibrated on images of 320 x 240 resolution, absolutely the same distortion coefficients can be used for 640 x 480 images from the same camera while $f_x$, $f_y$, $c_x$, and $c_y$ need to be scaled appropriately.
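For example, rescaling previously calibrated intrinsics to a new capture resolution could look like the following sketch (plain Rust, illustrative; the distortion coefficients are left untouched, as stated above):

```rust
/// Scale fx, fy, cx, cy from the calibration resolution to a new resolution.
/// Distortion coefficients stay the same.
fn scale_intrinsics(
    (fx, fy, cx, cy): (f64, f64, f64, f64),
    (old_w, old_h): (f64, f64),
    (new_w, new_h): (f64, f64),
) -> (f64, f64, f64, f64) {
    let (sx, sy) = (new_w / old_w, new_h / old_h);
    (fx * sx, fy * sy, cx * sx, cy * sy)
}

// Going from 320 x 240 to 640 x 480 doubles all four parameters:
// let k = scale_intrinsics((300.0, 300.0, 160.0, 120.0), (320.0, 240.0), (640.0, 480.0));
```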

The functions below use the above model to do the following:

  • Project 3D points to the image plane given intrinsic and extrinsic parameters.
  • Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their projections.
  • Estimate intrinsic and extrinsic camera parameters from several views of a known calibration pattern (every view is described by several 3D-2D point correspondences).
  • Estimate the relative position and orientation of the stereo camera “heads” and compute the rectification transformation that makes the camera optical axes parallel.

Homogeneous Coordinates
Homogeneous Coordinates are a system of coordinates used in projective geometry. They make it possible to represent points at infinity by finite coordinates and simplify formulas compared to their Cartesian counterparts, e.g. they have the advantage that affine transformations can be expressed as linear homogeneous transformations.

One obtains the homogeneous vector $P_h$ by appending a 1 to an n-dimensional Cartesian vector $P$, e.g. for a 3D Cartesian vector the mapping $P \rightarrow P_h$ is:

$$ \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \rightarrow \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

For the inverse mapping $P_h \rightarrow P$, one divides all elements of the homogeneous vector by its last element, e.g. for a 3D homogeneous vector one gets its 2D Cartesian counterpart by:

$$ \begin{bmatrix} X \\ Y \\ W \end{bmatrix} \rightarrow \begin{bmatrix} X / W \\ Y / W \end{bmatrix} $$

if $W \ne 0$.
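These two mappings are trivial to write out; a plain-Rust sketch (for whole point arrays, the module provides convert_points_to_homogeneous and convert_points_from_homogeneous, listed below):

```rust
/// Cartesian 3D point -> homogeneous 4-vector.
fn to_homogeneous(p: [f64; 3]) -> [f64; 4] {
    [p[0], p[1], p[2], 1.0]
}

/// Homogeneous 3-vector -> Cartesian 2D point; None if the last element is 0
/// (a point at infinity).
fn from_homogeneous(p: [f64; 3]) -> Option<[f64; 2]> {
    if p[2] == 0.0 { None } else { Some([p[0] / p[2], p[1] / p[2]]) }
}
```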

Due to this mapping, all multiples $k P_h$, for $k \ne 0$, of a homogeneous point represent the same point $P_h$. An intuitive understanding of this property is that under a projective transformation, all multiples of $P_h$ are mapped to the same point. This is the physical observation one does for pinhole cameras, as all points along a ray through the camera's pinhole are projected to the same image point, e.g. all points along the red ray in the image of the pinhole camera model above would be mapped to the same image coordinate. This property is also the source of the scale ambiguity $s$ in the equation of the pinhole camera model.

As mentioned, by using homogeneous coordinates we can express any change of basis parameterized by $R$ and $t$ as a linear transformation, e.g. the change of basis from coordinate system 0 to coordinate system 1 becomes:

$$ P_1 = R P_0 + t \;\rightarrow\; P_{h_1} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} P_{h_0} $$

Note:

  • Many functions in this module take a camera intrinsic matrix as an input parameter. Although all functions assume the same structure of this parameter, they may name it differently. The parameter's description, however, will make clear that a camera intrinsic matrix with the structure shown above is required.
  • A calibration sample for 3 cameras in a horizontal position can be found at opencv_source_code/samples/cpp/3calibration.cpp
  • A calibration sample based on a sequence of images can be found at opencv_source_code/samples/cpp/calibration.cpp
  • A calibration sample in order to do 3D reconstruction can be found at opencv_source_code/samples/cpp/build3dmodel.cpp
  • A calibration example on stereo calibration can be found at opencv_source_code/samples/cpp/stereo_calib.cpp
  • A calibration example on stereo matching can be found at opencv_source_code/samples/cpp/stereo_match.cpp
  • (Python) A camera calibration sample can be found at opencv_source_code/samples/python/calibrate.py

Fisheye camera model

Definitions: Let P be a point in 3D with coordinates X in the world reference frame (stored in the matrix X). The coordinate vector of P in the camera reference frame is:

$$ X_c = R X + T $$

where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om); call x, y and z the 3 coordinates of Xc:

$$ x = X_{c_1}, \quad y = X_{c_2}, \quad z = X_{c_3} $$

The pinhole projection coordinates of P are [a; b] where

$$ a = x / z, \quad b = y / z, \qquad r^2 = a^2 + b^2, \qquad \theta = \arctan(r) $$

Fisheye distortion:

$$ \theta_d = \theta \, (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8) $$

The distorted point coordinates are [x’; y’] where

$$ x' = (\theta_d / r) \, a, \qquad y' = (\theta_d / r) \, b $$

Finally, conversion into pixel coordinates: the final pixel coordinate vector [u; v] is given by:

$$ u = f_x (x' + \alpha y') + c_x, \qquad v = f_y y' + c_y $$

Summary: Generic camera model Kannala2006 with perspective projection and without distortion correction
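The forward fisheye projection above can be sketched in plain Rust as follows (illustrative only; the fisheye_project_points function listed below provides the actual implementation):

```rust
/// Project a camera-frame 3D point (z > 0 assumed) with the fisheye model above.
fn fisheye_project(
    xc: [f64; 3],
    k: [f64; 4],
    (fx, fy, cx, cy, alpha): (f64, f64, f64, f64, f64),
) -> (f64, f64) {
    let (a, b) = (xc[0] / xc[2], xc[1] / xc[2]); // pinhole projection [a; b]
    let r = (a * a + b * b).sqrt();
    let theta = r.atan();
    let t2 = theta * theta;
    let theta_d =
        theta * (1.0 + k[0] * t2 + k[1] * t2.powi(2) + k[2] * t2.powi(3) + k[3] * t2.powi(4));
    let scale = if r > 1e-12 { theta_d / r } else { 1.0 }; // theta_d / r -> 1 as r -> 0
    let (xd, yd) = (scale * a, scale * b); // distorted coordinates [x'; y']
    (fx * (xd + alpha * yd) + cx, fy * yd + cy)
}
```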

Modules

prelude

Structs

CirclesGridFinderParameters
LMSolver
Levenberg-Marquardt solver. Starting with the specified vector of parameters, it optimizes the target vector criteria “err” (finds a local minimum of each target vector component's absolute value).
LMSolver_Callback
StereoBM
Class for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by K. Konolige.
StereoMatcher
The base class for stereo correspondence algorithms.
StereoSGBM
The class implements the modified H. Hirschmuller algorithm HH08 that differs from the original one as follows:
UsacParams

Enums

CirclesGridFinderParameters_GridType
HandEyeCalibrationMethod
LocalOptimMethod
NeighborSearchMethod
PolishingMethod
RobotWorldHandEyeCalibrationMethod
SamplingMethod
ScoreMethod
SolvePnPMethod
UndistortTypes
cv::undistort mode

Constants

CALIB_CB_ACCURACY
CALIB_CB_ADAPTIVE_THRESH
CALIB_CB_ASYMMETRIC_GRID
CALIB_CB_CLUSTERING
CALIB_CB_EXHAUSTIVE
CALIB_CB_FAST_CHECK
CALIB_CB_FILTER_QUADS
CALIB_CB_LARGER
CALIB_CB_MARKER
CALIB_CB_NORMALIZE_IMAGE
CALIB_CB_PLAIN
CALIB_CB_SYMMETRIC_GRID
CALIB_FIX_ASPECT_RATIO
CALIB_FIX_FOCAL_LENGTH
CALIB_FIX_INTRINSIC
CALIB_FIX_K1
CALIB_FIX_K2
CALIB_FIX_K3
CALIB_FIX_K4
CALIB_FIX_K5
CALIB_FIX_K6
CALIB_FIX_PRINCIPAL_POINT
CALIB_FIX_S1_S2_S3_S4
CALIB_FIX_TANGENT_DIST
CALIB_FIX_TAUX_TAUY
CALIB_HAND_EYE_ANDREFF
On-line Hand-Eye Calibration Andreff99
CALIB_HAND_EYE_DANIILIDIS
Hand-Eye Calibration Using Dual Quaternions Daniilidis98
CALIB_HAND_EYE_HORAUD
Hand-eye Calibration Horaud95
CALIB_HAND_EYE_PARK
Robot Sensor Calibration: Solving AX = XB on the Euclidean Group Park94
CALIB_HAND_EYE_TSAI
A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration Tsai89
CALIB_NINTRINSIC
CALIB_RATIONAL_MODEL
CALIB_ROBOT_WORLD_HAND_EYE_LI
Simultaneous robot-world and hand-eye calibration using dual-quaternions and kronecker product Li2010SimultaneousRA
CALIB_ROBOT_WORLD_HAND_EYE_SHAH
Solving the robot-world/hand-eye calibration problem using the kronecker product Shah2013SolvingTR
CALIB_SAME_FOCAL_LENGTH
CALIB_THIN_PRISM_MODEL
CALIB_TILTED_MODEL
CALIB_USE_EXTRINSIC_GUESS
for stereoCalibrate
CALIB_USE_INTRINSIC_GUESS
CALIB_USE_LU
use LU instead of SVD decomposition for solving. Much faster but potentially less precise
CALIB_USE_QR
use QR instead of SVD decomposition for solving. Faster but potentially less precise
CALIB_ZERO_DISPARITY
CALIB_ZERO_TANGENT_DIST
COV_POLISHER
CirclesGridFinderParameters_ASYMMETRIC_GRID
CirclesGridFinderParameters_SYMMETRIC_GRID
FM_7POINT
7-point algorithm
FM_8POINT
8-point algorithm
FM_LMEDS
least-median algorithm. 7-point algorithm is used.
FM_RANSAC
RANSAC algorithm. It needs at least 15 points. 7-point algorithm is used.
Fisheye_CALIB_CHECK_COND
Fisheye_CALIB_FIX_FOCAL_LENGTH
Fisheye_CALIB_FIX_INTRINSIC
Fisheye_CALIB_FIX_K1
Fisheye_CALIB_FIX_K2
Fisheye_CALIB_FIX_K3
Fisheye_CALIB_FIX_K4
Fisheye_CALIB_FIX_PRINCIPAL_POINT
Fisheye_CALIB_FIX_SKEW
Fisheye_CALIB_RECOMPUTE_EXTRINSIC
Fisheye_CALIB_USE_INTRINSIC_GUESS
Fisheye_CALIB_ZERO_DISPARITY
LMEDS
least-median of squares algorithm
LOCAL_OPTIM_GC
LOCAL_OPTIM_INNER_AND_ITER_LO
LOCAL_OPTIM_INNER_LO
LOCAL_OPTIM_NULL
LOCAL_OPTIM_SIGMA
LSQ_POLISHER
MAGSAC
NEIGH_FLANN_KNN
NEIGH_FLANN_RADIUS
NEIGH_GRID
NONE_POLISHER
PROJ_SPHERICAL_EQRECT
PROJ_SPHERICAL_ORTHO
RANSAC
RANSAC algorithm
RHO
RHO algorithm
SAMPLING_NAPSAC
SAMPLING_PROGRESSIVE_NAPSAC
SAMPLING_PROSAC
SAMPLING_UNIFORM
SCORE_METHOD_LMEDS
SCORE_METHOD_MAGSAC
SCORE_METHOD_MSAC
SCORE_METHOD_RANSAC
SOLVEPNP_AP3P
An Efficient Algebraic Solution to the Perspective-Three-Point Problem Ke17
SOLVEPNP_DLS
Broken implementation. Using this flag will fall back to EPnP.
SOLVEPNP_EPNP
EPnP: Efficient Perspective-n-Point Camera Pose Estimation lepetit2009epnp
SOLVEPNP_IPPE
Infinitesimal Plane-Based Pose Estimation Collins14
SOLVEPNP_IPPE_SQUARE
Infinitesimal Plane-Based Pose Estimation Collins14
SOLVEPNP_ITERATIVE
Pose refinement using non-linear Levenberg-Marquardt minimization scheme Madsen04 Eade13
SOLVEPNP_MAX_COUNT
Used for count
SOLVEPNP_P3P
Complete Solution Classification for the Perspective-Three-Point Problem gao2003complete
SOLVEPNP_SQPNP
SQPnP: A Consistently Fast and Globally Optimal Solution to the Perspective-n-Point Problem Terzakis2020SQPnP
SOLVEPNP_UPNP
Broken implementation. Using this flag will fall back to EPnP.
StereoBM_PREFILTER_NORMALIZED_RESPONSE
StereoBM_PREFILTER_XSOBEL
StereoMatcher_DISP_SCALE
StereoMatcher_DISP_SHIFT
StereoSGBM_MODE_HH
StereoSGBM_MODE_HH4
StereoSGBM_MODE_SGBM
StereoSGBM_MODE_SGBM_3WAY
USAC_ACCURATE
USAC, accurate settings
USAC_DEFAULT
USAC algorithm, default settings
USAC_FAST
USAC, fast settings
USAC_FM_8PTS
USAC, fundamental matrix 8 points
USAC_MAGSAC
USAC, runs MAGSAC++
USAC_PARALLEL
USAC, parallel version
USAC_PROSAC
USAC, sorted points, runs PROSAC

Traits

LMSolverTrait
Mutable methods for crate::calib3d::LMSolver
LMSolverTraitConst
Constant methods for crate::calib3d::LMSolver
LMSolver_CallbackTrait
Mutable methods for crate::calib3d::LMSolver_Callback
LMSolver_CallbackTraitConst
Constant methods for crate::calib3d::LMSolver_Callback
StereoBMTrait
Mutable methods for crate::calib3d::StereoBM
StereoBMTraitConst
Constant methods for crate::calib3d::StereoBM
StereoMatcherTrait
Mutable methods for crate::calib3d::StereoMatcher
StereoMatcherTraitConst
Constant methods for crate::calib3d::StereoMatcher
StereoSGBMTrait
Mutable methods for crate::calib3d::StereoSGBM
StereoSGBMTraitConst
Constant methods for crate::calib3d::StereoSGBM

Functions

calibrate
Performs camera calibration
calibrate_camera
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
calibrate_camera_def
@overload
calibrate_camera_extended
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
calibrate_camera_extended_def
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
calibrate_camera_ro
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
calibrate_camera_ro_def
@overload
calibrate_camera_ro_extended
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
calibrate_camera_ro_extended_def
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
calibrate_def
Performs camera calibration
calibrate_hand_eye
Computes Hand-Eye calibration: inline formula
calibrate_hand_eye_def
Computes Hand-Eye calibration: inline formula
calibrate_robot_world_hand_eye
Computes Robot-World/Hand-Eye calibration: inline formula and inline formula
calibrate_robot_world_hand_eye_def
Computes Robot-World/Hand-Eye calibration: inline formula and inline formula
calibration_matrix_values
Computes useful camera characteristics from the camera intrinsic matrix.
check_chessboard
compose_rt
Combines two rotation-and-shift transformations.
compose_rt_def
Combines two rotation-and-shift transformations.
compute_correspond_epilines
For points in an image of a stereo pair, computes the corresponding epilines in the other image.
convert_points_from_homogeneous
Converts points from homogeneous to Euclidean space.
convert_points_homogeneous
Converts points to/from homogeneous coordinates.
convert_points_to_homogeneous
Converts points from Euclidean to homogeneous space.
correct_matches
Refines coordinates of corresponding points.
decompose_essential_mat
Decompose an essential matrix to possible rotations and translation.
decompose_homography_mat
Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).
decompose_projection_matrix
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
decompose_projection_matrix_def
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
distort_points
Distorts 2D points using fisheye model.
distort_points_def
@overload Overload of distortPoints function to handle cases when undistorted points are obtained with non-identity camera matrix, e.g. output of #estimateNewCameraMatrixForUndistortRectify.
draw_chessboard_corners
Renders the detected chessboard corners.
draw_frame_axes
Draw axes of the world/object coordinate system from pose estimation. see also: solvePnP
draw_frame_axes_def
Draw axes of the world/object coordinate system from pose estimation. see also: solvePnP
estimate_affine_2d
Computes an optimal affine transformation between two 2D point sets.
estimate_affine_2d_1
estimate_affine_2d_def
Computes an optimal affine transformation between two 2D point sets.
estimate_affine_3d
Computes an optimal affine transformation between two 3D point sets.
estimate_affine_3d_1
Computes an optimal affine transformation between two 3D point sets.
estimate_affine_3d_1_def
Computes an optimal affine transformation between two 3D point sets.
estimate_affine_3d_def
Computes an optimal affine transformation between two 3D point sets.
estimate_affine_partial_2d
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
estimate_affine_partial_2d_def
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
estimate_chessboard_sharpness
Estimates the sharpness of a detected chessboard.
estimate_chessboard_sharpness_def
Estimates the sharpness of a detected chessboard.
estimate_new_camera_matrix_for_undistort_rectify
Estimates new camera intrinsic matrix for undistortion or rectification.
estimate_new_camera_matrix_for_undistort_rectify_def
Estimates new camera intrinsic matrix for undistortion or rectification.
estimate_translation_3d
Computes an optimal translation between two 3D point sets.
estimate_translation_3d_def
Computes an optimal translation between two 3D point sets.
filter_homography_decomp_by_visible_refpoints
Filters homography decompositions based on additional information.
filter_homography_decomp_by_visible_refpoints_def
Filters homography decompositions based on additional information.
filter_speckles
Filters off small noise blobs (speckles) in the disparity map
filter_speckles_def
Filters off small noise blobs (speckles) in the disparity map
find4_quad_corner_subpix
finds subpixel-accurate positions of the chessboard corners
find_chessboard_corners
Finds the positions of internal corners of the chessboard.
find_chessboard_corners_def
Finds the positions of internal corners of the chessboard.
find_chessboard_corners_sb
Finds the positions of internal corners of the chessboard using a sector based approach.
find_chessboard_corners_sb_def
@overload
find_chessboard_corners_sb_with_meta
Finds the positions of internal corners of the chessboard using a sector based approach.
find_circles_grid
Finds centers in the grid of circles.
find_circles_grid_1
Finds centers in the grid of circles.
find_circles_grid_1_def
@overload
find_essential_mat
Calculates an essential matrix from the corresponding points in two images.
find_essential_mat_1
Calculates an essential matrix from the corresponding points in two images.
find_essential_mat_2
Calculates an essential matrix from the corresponding points in two images.
find_essential_mat_3
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
find_essential_mat_4
find_essential_mat_1_def
@overload
find_essential_mat_3_def
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
find_essential_mat_def
Calculates an essential matrix from the corresponding points in two images.
find_essential_mat_matrix
Calculates an essential matrix from the corresponding points in two images.
find_fundamental_mat
Calculates a fundamental matrix from the corresponding points in two images.
find_fundamental_mat_1
Calculates a fundamental matrix from the corresponding points in two images.
find_fundamental_mat_2
find_fundamental_mat_1_def
@overload
find_fundamental_mat_def
Calculates a fundamental matrix from the corresponding points in two images.
find_fundamental_mat_mask
Calculates a fundamental matrix from the corresponding points in two images.
find_fundamental_mat_mask_def
@overload
find_homography
Finds a perspective transformation between two planes.
find_homography_1
find_homography_def
@overload
find_homography_ext
Finds a perspective transformation between two planes.
find_homography_ext_def
Finds a perspective transformation between two planes.
fisheye_distort_points
Distorts 2D points using fisheye model.
fisheye_distort_points_def
Distorts 2D points using fisheye model.
fisheye_init_undistort_rectify_map
Computes undistortion and rectification maps for image transform by #remap. If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.
fisheye_project_points
Projects points using fisheye model
fisheye_project_points_def
Projects points using fisheye model
fisheye_project_points_vec
Projects points using fisheye model
fisheye_project_points_vec_def
@overload
fisheye_stereo_calibrate
Performs stereo calibration
fisheye_stereo_calibrate_def
@overload
fisheye_stereo_rectify
Stereo rectification for fisheye camera model
fisheye_stereo_rectify_def
Stereo rectification for fisheye camera model
fisheye_undistort_image
Transforms an image to compensate for fisheye lens distortion.
fisheye_undistort_image_def
Transforms an image to compensate for fisheye lens distortion.
fisheye_undistort_points
Undistorts 2D points using fisheye model
fisheye_undistort_points_def
Undistorts 2D points using fisheye model
get_default_new_camera_matrix
Returns the default new camera matrix.
get_default_new_camera_matrix_def
Returns the default new camera matrix.
get_optimal_new_camera_matrix
Returns the new camera intrinsic matrix based on the free scaling parameter.
get_optimal_new_camera_matrix_def
Returns the new camera intrinsic matrix based on the free scaling parameter.
get_valid_disparity_roi
computes valid disparity ROI from the valid ROIs of the rectified images (that are returned by #stereoRectify)
init_camera_matrix_2d
Finds an initial camera intrinsic matrix from 3D-2D point correspondences.
init_camera_matrix_2d_def
Finds an initial camera intrinsic matrix from 3D-2D point correspondences.
init_inverse_rectification_map
Computes the projection and inverse-rectification transformation map. In essence, this is the inverse of init_undistort_rectify_map to accommodate stereo-rectification of projectors (‘inverse-cameras’) in projector-camera pairs.
init_undistort_rectify_map
Computes the undistortion and rectification transformation map.
init_wide_angle_proj_map
initializes maps for [remap] for wide-angle
init_wide_angle_proj_map_def
initializes maps for [remap] for wide-angle
mat_mul_deriv
Computes partial derivatives of the matrix product for each multiplied matrix.
project_points
Projects 3D points to an image plane.
project_points_def
Projects 3D points to an image plane.
recover_pose
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check. Returns the number of inliers that pass the check.
recover_pose_2_cameras
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check. Returns the number of inliers that pass the check.
recover_pose_2_cameras_def
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check. Returns the number of inliers that pass the check.
recover_pose_def
@overload
recover_pose_estimated
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check. Returns the number of inliers that pass the check.
recover_pose_estimated_def
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check. Returns the number of inliers that pass the check.
recover_pose_triangulated
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check. Returns the number of inliers that pass the check.
recover_pose_triangulated_def
@overload
rectify3_collinear
computes the rectification transformations for a 3-head camera, where all the heads are on the same line.
reproject_image_to_3d
Reprojects a disparity image to 3D space.
reproject_image_to_3d_def
Reprojects a disparity image to 3D space.
rodrigues
Converts a rotation matrix to a rotation vector or vice versa.
rodrigues_def
Converts a rotation matrix to a rotation vector or vice versa.
rq_decomp3x3
Computes an RQ decomposition of 3x3 matrices.
rq_decomp3x3_def
Computes an RQ decomposition of 3x3 matrices.
sampson_distance
Calculates the Sampson Distance between two points.
solve_p3p
Finds an object pose from 3 3D-2D point correspondences.
solve_pnp
Finds an object pose from 3D-2D point correspondences.
solve_pnp_1
Finds an object pose from 3D-2D point correspondences for the fisheye camera model.
solve_pnp_1_def
Finds an object pose from 3D-2D point correspondences for the fisheye camera model.
solve_pnp_def
Finds an object pose from 3D-2D point correspondences.
solve_pnp_generic
Finds an object pose from 3D-2D point correspondences.
solve_pnp_generic_def
Finds an object pose from 3D-2D point correspondences.
solve_pnp_ransac
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
solve_pnp_ransac_1
C++ default parameters
solve_pnp_ransac_1_def
Note
solve_pnp_ransac_def
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
solve_pnp_refine_lm
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
solve_pnp_refine_lm_def
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
solve_pnp_refine_vvs
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
solve_pnp_refine_vvs_def
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
stereo_calibrate
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras.
stereo_calibrate_1
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras.
stereo_calibrate_2
Performs stereo calibration
stereo_calibrate_1_def
@overload
stereo_calibrate_2_def
Performs stereo calibration
stereo_calibrate_def
@overload
stereo_calibrate_extended
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras.
stereo_calibrate_extended_def
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras.
stereo_rectify
Computes rectification transforms for each head of a calibrated stereo camera.
stereo_rectify_def
Computes rectification transforms for each head of a calibrated stereo camera.
stereo_rectify_uncalibrated
Computes a rectification transform for an uncalibrated stereo camera.
stereo_rectify_uncalibrated_def
Computes a rectification transform for an uncalibrated stereo camera.
triangulate_points
This function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera.
undistort
Transforms an image to compensate for lens distortion.
undistort_def
Transforms an image to compensate for lens distortion.
undistort_image_points
Computes undistorted image point positions
undistort_image_points_def
Computes undistorted image point positions
undistort_points
Computes the ideal point coordinates from the observed point coordinates.
undistort_points_def
Computes the ideal point coordinates from the observed point coordinates.
undistort_points_iter
Computes the ideal point coordinates from the observed point coordinates.
validate_disparity
validates disparity using the left-right check. The matrix “cost” should be computed by the stereo correspondence algorithm
validate_disparity_def
validates disparity using the left-right check. The matrix “cost” should be computed by the stereo correspondence algorithm

Type Aliases

CirclesGridFinderParameters2