Struct cv_core::EssentialMatrix

pub struct EssentialMatrix(pub Matrix3<f32>);

This stores an essential matrix, which is satisfied by the following constraint:

transpose(x') * E * x = 0

Where x' and x are homogeneous normalized image coordinates. You can get a homogeneous normalized image coordinate by appending 1.0 to a NormalizedKeyPoint.
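For illustration, here is a minimal sketch using plain nalgebra (assuming the normalized keypoint is available as a Point2<f32>; the exact representation of NormalizedKeyPoint is not shown here):

use nalgebra::{Point2, Vector3};

// Sketch: appending 1.0 to a normalized image coordinate (x, y) yields the
// homogeneous normalized image coordinate used in the constraint above.
fn homogeneous(normalized: Point2<f32>) -> Vector3<f32> {
    Vector3::new(normalized.x, normalized.y, 1.0)
}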

The essential matrix embodies the epipolar constraint between two images. Given that light travels in a perfectly straight line (it will not, but for short distances it mostly does) and assuming a pinhole camera model, for any point on the camera sensor, the light source for that point exists somewhere along a line extending out from the bearing (direction of travel) of that point. For a normalized image coordinate, that bearing is (x, y, 1.0). That is because normalized image coordinates exist on a virtual plane (the sensor) a distance z = 1.0 from the optical center (the location of the focal point) where the unit of distance is the focal length. In epipolar geometry, the point on the virtual plane is called an epipole. The line through 3d space created by the bearing that travels from the optical center through the epipole is called an epipolar line.

If you look at every point along an epipolar line, each one of those points would show up as a different point on the camera sensor of another image (if they are in view). If you traced every point along this epipolar line to where it would appear on the sensor of the other camera (projection of the 3d points into normalized image coordinates), the points would form a straight line. This means that you can draw, on one image, epipolar lines that do not pass through that image's optical center.
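As a rough sketch of that claim (plain nalgebra with an arbitrary, made-up relative rotation and translation; this is not a cv_core API), sampling a bearing at several depths, transforming the samples into the second camera's frame, and projecting them yields collinear normalized image coordinates:

use nalgebra::{Rotation3, Vector3};

fn main() {
    // Hypothetical relative pose of camera 2 with respect to camera 1.
    let rotation = Rotation3::from_euler_angles(0.05, -0.1, 0.02);
    let translation = Vector3::new(0.3, -0.1, 0.05);

    // Bearing of a point observed at normalized coordinate (0.2, -0.4) in image 1.
    let bearing = Vector3::new(0.2, -0.4, 1.0);

    // Sample the epipolar line at several depths, move each sample into
    // camera 2's frame, and project it to a normalized image coordinate.
    let projected: Vec<(f32, f32)> = [1.0f32, 2.0, 4.0, 8.0]
        .iter()
        .map(|&depth| {
            let p = rotation * (bearing * depth) + translation;
            (p.x / p.z, p.y / p.z)
        })
        .collect();

    // All of these projections lie on a single straight line in image 2.
    println!("{:?}", projected);
}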

The essential matrix makes it possible to create a vector that is perpendicular to all bearings that are formed from the epipolar line on the second image's sensor. This is done by computing E * x, where x is a homogeneous normalized image coordinate from the first image. The dot product of the resulting vector with the homogeneous normalized image coordinate x' from the second image is then equal to 0.0. This can be written as:

dot(E * x, x') = 0

This can be re-written into the form given above:

transpose(x') * E * x = 0

Where the first operation (E * x) creates a vector perpendicular to the bearings formed from the epipolar line on the second image, and the second operation takes the dot product with x', which should result in 0.
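The following sketch (plain nalgebra, illustrative names) evaluates both forms and shows that they are the same scalar, which should be near zero for a true correspondence:

use nalgebra::{Matrix3, Vector3};

// Sketch: x and x_prime are homogeneous normalized image coordinates from the
// first and second image respectively, and e is the essential matrix.
fn epipolar_residual(e: &Matrix3<f32>, x: Vector3<f32>, x_prime: Vector3<f32>) -> f32 {
    // E * x is perpendicular to the bearings along the epipolar line in the
    // second image, so its dot product with x' is (ideally) zero.
    let dot_form = (e * x).dot(&x_prime);
    // The same quantity written as transpose(x') * E * x.
    let matrix_form = (x_prime.transpose() * e * x)[(0, 0)];
    debug_assert!((dot_form - matrix_form).abs() < 1e-6);
    // In practice, compare this residual against a small epsilon, not exactly 0.0.
    dot_form
}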

With an EssentialMatrix, you can retrieve the rotation and translation given one normalized image coordinate and one bearing that is scaled to the depth of the point relative to the current reconstruction. This kind of point can be computed using [WorldPose::project_camera] to convert a WorldPoint to a CameraPoint.

Methods

impl EssentialMatrix[src]

pub fn possible_poses(
    &self,
    max_iterations: usize
) -> Option<(Rotation3<f32>, Rotation3<f32>, Vector3<f32>)>
[src]

Returns two possible rotations for the essential matrix along with a translation direction of unit length. The translation's length is unknown and must be solved for by using a prior.

max_iterations is the maximum number of iterations that singular value decomposition will run on this matrix. Use this in soft realtime systems to cap the execution time. A max_iterations of 0 may execute indefinitely and is not recommended.
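A hedged usage sketch (essential here is an EssentialMatrix obtained elsewhere; the iteration cap of 100 is an arbitrary illustrative value):

use cv_core::EssentialMatrix;

// Sketch: enumerate the two rotation candidates and the unit-length
// translation direction produced by the decomposition.
fn candidate_poses(essential: &EssentialMatrix) {
    if let Some((rot_a, rot_b, translation)) = essential.possible_poses(100) {
        println!("rotation A: {:?}", rot_a);
        println!("rotation B: {:?}", rot_b);
        // The translation's length still has to be solved for with a prior.
        println!("translation direction (unit length): {:?}", translation);
    }
}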

pub fn solve_pose(
    &self,
    consensus_ratio: f32,
    max_iterations: usize,
    correspondences: impl Iterator<Item = (f32, NormalizedKeyPoint, NormalizedKeyPoint)>
) -> Option<RelativeCameraPose>
[src]

Returns the RelativeCameraPose that transforms a CameraPoint of image A (source of a) to the corresponding CameraPoint of image B (source of b). This determines the average expected translation from the points themselves and whether the points agree with the rotation (points must be in front of the camera). The function takes an iterator containing tuples in the form (depth, a, b):

  • depth - The actual depth (z axis, not distance) of normalized keypoint a
  • a - A keypoint from image A
  • b - A keypoint from image B

self must satisfy the constraint:

transpose(homogeneous(a)) * E * homogeneous(b) = 0

Also, a and b must be a correspondence.

This will take the average translation over the entire iterator. This is done to smooth out noise and outliers (if present).

consensus_ratio is the ratio of points which must be in front of the camera for the model to be accepted and return Some. Otherwise, None is returned.

max_iterations is the maximum number of iterations that singular value decomposition will run on this matrix. Use this in soft realtime systems to cap the execution time. A max_iterations of 0 may execute indefinitely and is not recommended.

This does not communicate which points were outliers. To determine the outlier points, compute the CameraPoint for each correspondence using the returned pose and check whether it lies in front of the camera.
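A hedged usage sketch (the correspondences are assumed to already be collected as (depth, a, b) tuples; the consensus ratio of 0.75 and the iteration cap of 50 are arbitrary illustrative values):

use cv_core::{EssentialMatrix, NormalizedKeyPoint, RelativeCameraPose};

// Sketch: solve for the relative pose from depth-annotated correspondences.
fn relative_pose(
    essential: &EssentialMatrix,
    correspondences: Vec<(f32, NormalizedKeyPoint, NormalizedKeyPoint)>,
) -> Option<RelativeCameraPose> {
    // Returns None if fewer than 75% of the points end up in front of the camera.
    essential.solve_pose(0.75, 50, correspondences.into_iter())
}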

Trait Implementations

impl AsMut<Matrix<f32, U3, U3, <DefaultAllocator as Allocator<f32, U3, U3>>::Buffer>> for EssentialMatrix[src]

impl AsRef<Matrix<f32, U3, U3, <DefaultAllocator as Allocator<f32, U3, U3>>::Buffer>> for EssentialMatrix[src]

impl Clone for EssentialMatrix[src]

impl Copy for EssentialMatrix[src]

impl Debug for EssentialMatrix[src]

impl Deref for EssentialMatrix[src]

type Target = Matrix3<f32>

The resulting type after dereferencing.

impl DerefMut for EssentialMatrix[src]

impl From<EssentialMatrix> for Matrix3<f32>[src]

impl From<Matrix<f32, U3, U3, <DefaultAllocator as Allocator<f32, U3, U3>>::Buffer>> for EssentialMatrix[src]

impl Model<KeyPointsMatch> for EssentialMatrix[src]

impl PartialEq<EssentialMatrix> for EssentialMatrix[src]

impl PartialOrd<EssentialMatrix> for EssentialMatrix[src]

impl StructuralPartialEq for EssentialMatrix[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> Scalar for T where
    T: PartialEq<T> + Copy + Any + Debug
[src]

impl<SS, SP> SupersetOf<SS> for SP where
    SS: SubsetOf<SP>, 

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,