Module opencv::rgbd


RGB-Depth Processing


Modules

Structs

Object that can clean a noisy depth image

A faster version of ICPOdometry, used in the KinectFusion implementation. Partial list of differences:

Odometry based on the paper “KinectFusion: Real-Time Dense Surface Mapping and Tracking”, Richard A. Newcombe, Andrew Fitzgibbon, et al., SIGGRAPH, 2011.

Projects camera space vector onto screen

Camera intrinsics

Reprojects a screen point to camera space given its z coordinate.
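The projector and reprojector above implement the standard pinhole camera model. Below is a minimal, stdlib-only Rust sketch of that geometry; the struct and method names are illustrative, not the crate's actual `kinfu` intrinsics API:

```rust
// Illustrative pinhole-camera intrinsics; names are hypothetical,
// not the opencv crate's real types.
struct Intr {
    fx: f32, // focal lengths in pixels
    fy: f32,
    cx: f32, // principal point
    cy: f32,
}

impl Intr {
    /// Project a camera-space point (x, y, z) onto the screen as (u, v).
    fn project(&self, p: [f32; 3]) -> [f32; 2] {
        [self.fx * p[0] / p[2] + self.cx,
         self.fy * p[1] / p[2] + self.cy]
    }

    /// Reproject a screen point (u, v) to camera space given its z coordinate.
    fn reproject(&self, u: f32, v: f32, z: f32) -> [f32; 3] {
        [(u - self.cx) * z / self.fx,
         (v - self.cy) * z / self.fy,
         z]
    }
}

fn main() {
    let k = Intr { fx: 525.0, fy: 525.0, cx: 319.5, cy: 239.5 };
    let p = k.reproject(100.0, 50.0, 2.0);
    let uv = k.project(p); // ≈ (100.0, 50.0): reprojection inverts projection
    println!("{:?} -> {:?}", p, uv);
}
```

Projecting a reprojected point round-trips to the original pixel, which is the invariant the two objects are built around.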

Modality that computes quantized gradient orientations from a color image.

Modality that computes quantized surface normals from a dense depth map.

Object detector using the LINE template matching algorithm with any set of modalities.

Discriminant feature described by its location and label.

Represents a successful template match.

Object that contains frame data possibly needed for the Odometry. It is used for efficiency (to pass precomputed/cached data of a frame that participates in Odometry processing several times).

Object that contains frame data.

Odometry that merges RgbdOdometry and ICPOdometry by minimizing the sum of their energy functions.

Object that can compute the normals in an image. It is an object so that it can cache data for efficiency. The implemented methods are either:

Odometry based on the paper “Real-Time Visual Odometry from Dense RGB-D Images”, F. Steinbrücker, J. Sturm, D. Cremers, ICCV, 2011.

Object that can compute planes in an image

Enums

The NIL method is from “Modeling Kinect Sensor Noise for Improved 3D Reconstruction and Tracking” by C. Nguyen, S. Izadi, D. Lovell.

Constants

Traits

KinectFusion implementation

Object that can clean a noisy depth image

A faster version of ICPOdometry, used in the KinectFusion implementation. Partial list of differences:

Odometry based on the paper “KinectFusion: Real-Time Dense Surface Mapping and Tracking”, Richard A. Newcombe, Andrew Fitzgibbon, et al., SIGGRAPH, 2011.

KinectFusion implementation

Large Scale Dense Depth Fusion implementation

Modality that computes quantized gradient orientations from a color image.

Modality that computes quantized surface normals from a dense depth map.

Object detector using the LINE template matching algorithm with any set of modalities.

Represents a successful template match.

Interface for modalities that plug into the LINE template matching representation.

Represents a modality operating over an image pyramid.

Base class for computation of odometry.

Object that contains frame data possibly needed for the Odometry. It is used for efficiency (to pass precomputed/cached data of a frame that participates in Odometry processing several times).

Object that contains frame data.

Odometry that merges RgbdOdometry and ICPOdometry by minimizing the sum of their energy functions.

Object that can compute the normals in an image. It is an object so that it can cache data for efficiency. The implemented methods are either:

Odometry based on the paper “Real-Time Visual Odometry from Dense RGB-D Images”, F. Steinbrücker, J. Sturm, D. Cremers, ICCV, 2011.

Object that can compute planes in an image

Functions

Debug function to colormap a quantized image for viewing.

Converts a depth image to an organized set of 3D points. The coordinate system is x pointing left, y down and z away from the camera.
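The conversion is a per-pixel pinhole back-projection. Below is a stdlib-only Rust sketch of the behaviour described, operating on a flat slice rather than a `Mat`; the function name and types are hypothetical, not the crate's actual `depth_to3d` signature:

```rust
// Illustrative back-projection of an organized depth map to 3D points.
// `depth` holds one depth value (in meters) per pixel, row-major.
// fx, fy, cx, cy are the pinhole intrinsics.
fn depth_to_3d(
    depth: &[f32],
    width: usize,
    fx: f32, fy: f32, cx: f32, cy: f32,
) -> Vec<[f32; 3]> {
    depth.iter().enumerate().map(|(i, &z)| {
        let u = (i % width) as f32; // pixel column
        let v = (i / width) as f32; // pixel row
        [(u - cx) * z / fx, (v - cy) * z / fy, z]
    }).collect()
}

fn main() {
    // A 2x2 depth map with every pixel 1 m from the camera.
    let pts = depth_to_3d(&[1.0; 4], 2, 525.0, 525.0, 0.5, 0.5);
    // The output stays organized: one 3D point per input pixel.
    println!("{:?}", pts);
}
```

The output stays "organized": point *i* corresponds to pixel *i*, so neighborhood structure from the image is preserved in the cloud.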

Debug function to draw linemod features.

Factory function for a detector using the LINE algorithm with color gradients.

Factory function for a detector using the LINE-MOD algorithm with color gradients and depth normals.

Checks if the value is a valid depth. For CV_16U or CV_16S, the convention is that a value is invalid if it equals a type limit. For a float/double, we just check whether it is NaN.
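The convention above is easy to restate in plain Rust. These are illustrative re-implementations of the described logic, not the crate's actual `is_valid_depth` overloads:

```rust
// Illustrative validity checks following the convention described:
// integer depths are invalid at the type limits, float depths when NaN.
fn is_valid_depth_u16(d: u16) -> bool {
    // For CV_16U-style data, 0 and 65535 mark invalid measurements.
    d != u16::MIN && d != u16::MAX
}

fn is_valid_depth_f32(d: f32) -> bool {
    // For float data, NaN marks an invalid measurement.
    !d.is_nan()
}

fn main() {
    println!("{}", is_valid_depth_u16(0));        // false: at the limit
    println!("{}", is_valid_depth_u16(1200));     // true
    println!("{}", is_valid_depth_f32(f32::NAN)); // false
    println!("{}", is_valid_depth_f32(1.5));      // true
}
```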

Registers depth data to an external camera. Registration is performed by creating a depth cloud, transforming the cloud by the rigid-body transformation between the cameras, and then projecting the transformed points into the RGB camera.

If the input image is of type CV_16UC1 (like the Kinect one), the image is converted to floats, divided by depth_factor to get a depth in meters, and zero values are converted to std::numeric_limits::quiet_NaN(). Otherwise, the image is simply converted to floats.
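The CV_16UC1 branch of that rescaling can be sketched in a few lines of plain Rust. The real function operates on `Mat`s, so the name and types below are hypothetical, but the arithmetic matches the description:

```rust
// Illustrative rescaling of a CV_16UC1-style raw depth buffer to meters,
// mapping the invalid value 0 to NaN, as described above.
fn rescale_depth_u16(depth: &[u16], depth_factor: f32) -> Vec<f32> {
    depth.iter().map(|&d| {
        if d == 0 {
            f32::NAN // 0 means "no measurement" on Kinect-style sensors
        } else {
            d as f32 / depth_factor
        }
    }).collect()
}

fn main() {
    // Kinect-style raw depth in millimeters, so depth_factor = 1000.
    let meters = rescale_depth_u16(&[0, 1000, 2500], 1000.0);
    println!("{:?}", meters); // [NaN, 1.0, 2.5]
}
```

Using NaN rather than 0 for missing data means downstream arithmetic naturally propagates "unknown" instead of treating holes as points at the camera origin.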

Warps the image: computes 3D points from the depth, transforms them using the given transformation, then projects the colored point cloud onto an image plane. This function can be used to visualize results of the Odometry algorithm.
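The core of that warp is a rigid-body transform composed with a pinhole projection. The sketch below shows only that geometry for a single point; the real function also resamples the color image, which is omitted, and all names here are illustrative:

```rust
// Apply a rigid-body transform (rotation r, translation t) to a
// camera-space point: q = R * p + t.
fn transform(r: &[[f32; 3]; 3], t: &[f32; 3], p: &[f32; 3]) -> [f32; 3] {
    let mut q = *t;
    for i in 0..3 {
        for j in 0..3 {
            q[i] += r[i][j] * p[j];
        }
    }
    q
}

// Project a camera-space point onto the image plane with pinhole intrinsics.
fn project(fx: f32, fy: f32, cx: f32, cy: f32, p: &[f32; 3]) -> [f32; 2] {
    [fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy]
}

fn main() {
    let identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]];
    // Translate the cloud 0.1 m along x: pixels shift right on screen.
    let q = transform(&identity, &[0.1, 0.0, 0.0], &[0.0, 0.0, 1.0]);
    let uv = project(525.0, 525.0, 319.5, 239.5, &q);
    println!("{:?}", uv); // u moves from 319.5 to 372.0
}
```

Running this per pixel, with the transform set to an estimated camera motion, is what lets the warped image be compared against the real next frame to judge odometry quality.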

Type Definitions

Backwards compatibility for old versions