Module opencv::sfm

Structure From Motion

The opencv_sfm module contains algorithms to perform 3d reconstruction from 2d images.

The core of the module is based on a light version of Libmv originally developed by Sameer Agarwal and Keir Mierle.

What is libmv?

libmv, also known as the Library for Multiview Reconstruction (or LMV), is the computer vision backend for Blender’s motion tracking abilities. Unlike other vision libraries with general ambitions, libmv is focused on algorithms for match moving, specifically targeting Blender as the primary customer. Dense reconstruction, reconstruction from unorganized photo collections, image recognition, and other tasks are not a focus of libmv.

Development

libmv is officially under the Blender umbrella, and so is developed on developer.blender.org. The source repository can be checked out independently of Blender.

This module was originally developed as a Google Summer of Code project (2012-2015).

Note:

  • This module is compiled only when Eigen, GLog and GFlags are correctly installed.

    Check the installation instructions in the following tutorial: @ref tutorial_sfm_installation

Conditioning

Fundamental

Input/Output

Numeric

Projection

Robust Estimation

Triangulation

Reconstruction

Note: - This functionality is compiled only when Ceres Solver is correctly installed.

        Check the installation instructions in the following tutorial: @ref tutorial_sfm_installation

Simple Pipeline

Note: - This functionality is compiled only when Ceres Solver is correctly installed.

        Check the installation instructions in the following tutorial: @ref tutorial_sfm_installation

Modules

Structs

Data structure describing the camera model and its parameters.

Data structure describing the reconstruction options.

Constants

Traits

Base class BaseSFM declares a common API that would be used in a typical scene reconstruction scenario.

SFMLibmvEuclideanReconstruction class provides an interface with the Libmv Structure From Motion pipeline.

Functions

Apply Transformation to points.

Computes Absolute or Exterior Orientation (Pose Estimation) between two sets of 3D points.

Returns the depth of a point transformed by a rigid transform.
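As a hedged sketch, assuming the standard convention (not stated in this listing): for a rotation R, translation t and 3D point X, the depth is the third coordinate of the transformed point,

    \operatorname{depth}(R, t, X) = (R X + t)_3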

Get Essential matrix from Fundamental and Camera matrices.
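A hedged sketch of the relation this description points to (Hartley & Zisserman, formula 9.12); which camera matrix multiplies on which side is an assumption here:

    E = K_2^{\top} F K_1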

Get Essential matrix from Motion (R’s and t’s ).
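A hedged sketch, assuming the usual convention: with the relative motion (R, t) between the two views, the essential matrix is

    E = [t]_{\times} R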

Converts points from Euclidean to homogeneous space. E.g., ((x,y)->(x,y,1))

Estimate robustly the fundamental matrix between two datasets of 2D points (image coordinate space).

Estimate robustly the fundamental matrix between two datasets of 2D points (image coordinate space).

Get Fundamental matrix from Essential and Camera matrices.

Get Fundamental matrix from Projection matrices.

Converts point coordinates from homogeneous to Euclidean pixel coordinates. E.g., ((x,y,z)->(x/z, y/z))

Import a reconstruction file.

Point conditioning (isotropic).

Get K, R and t from projection matrix P, decompose using the RQ decomposition.
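A hedged sketch of the standard decomposition, assuming the usual convention: with P = K [R | t], an RQ decomposition of the left 3x3 block of P yields an upper-triangular K and a rotation R, and the translation follows from the fourth column p_4 of P (notation introduced here):

    P = K\,[R \mid t], \qquad t = K^{-1} p_4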

Computes the mean and variance of a given matrix along its rows.

Get Motion (R’s and t’s ) from Essential matrix.

Choose one of the four possible motion solutions from an essential matrix.

Normalizes the Fundamental matrix.

This function normalizes points (isotropic).

This function normalizes points (non-isotropic).
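A hedged sketch of the usual (Hartley-style) isotropic normalization; the exact scale convention used here is an assumption: translate the points so their centroid is at the origin, then scale them so the mean distance to the origin is sqrt(2),

    \hat{x}_i = s\,(x_i - \bar{x}), \qquad s = \frac{\sqrt{2}}{\frac{1}{n}\sum_i \lVert x_i - \bar{x}\rVert}

The non-isotropic variant scales each coordinate axis independently.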

Estimate the fundamental matrix between two datasets of 2D points (image coordinate space).

Point conditioning (non-isotropic).

Get projection matrix P from K, R and t.

Get projection matrices from Fundamental matrix.

Reconstruct 3d points from 2d correspondences while performing autocalibration.

Reconstruct 3d points from 2d correspondences while performing autocalibration.

Reconstruct 3d points from 2d images while performing autocalibration.

Reconstruct 3d points from 2d images while performing autocalibration.

Computes the relative camera motion between two cameras.
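A hedged sketch of the standard relation (taking camera 1 as the reference is an assumption here): given absolute motions (R_1, t_1) and (R_2, t_2), the relative motion is

    R = R_2 R_1^{\top}, \qquad t = t_2 - R\,t_1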

Returns the 3x3 skew-symmetric matrix of a vector.
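For reference, the usual convention for the skew-symmetric (cross-product) matrix of a vector x = (x_1, x_2, x_3)^T:

    [x]_{\times} = \begin{pmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{pmatrix}

so that [x]_{\times} y = x \times y for any vector y.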

Reconstructs a bunch of points by triangulation.