# Struct cv_core::EssentialMatrix

`pub struct EssentialMatrix(pub Matrix3<f64>);`

This stores an essential matrix `E`, which satisfies the following constraint:

```
transpose(x') * E * x = 0
```

Where `x'` and `x` are homogeneous normalized image coordinates. You can get a homogeneous normalized image coordinate by appending `1.0` to a `NormalizedKeyPoint`.

The essential matrix embodies the epipolar constraint between two images. Assuming a pinhole camera model, and given that light travels in a perfectly straight line (it does not, but over short distances it mostly does), the light source for any point on the camera sensor lies somewhere along a line extending out from the bearing (direction of travel) of that point. For a normalized image coordinate, that bearing is `(x, y, 1.0)`. That is because normalized image coordinates exist on a virtual plane (the sensor) at a distance `z = 1.0` from the optical center (the location of the focal point), where the unit of distance is the focal length. The line through 3d space created by the bearing, which travels from the optical center through the point on the virtual plane, is called an epipolar line. (In epipolar geometry, the epipole of an image is the projection of the other camera's optical center onto that image; every epipolar line drawn on an image passes through its epipole.)

If you look at every point along an epipolar line, each of those points would show up at a different location on the sensor of another camera (when in view). If you traced every point along the epipolar line to where it appears on that camera's sensor (projecting the 3d points into normalized image coordinates), the projections would form a straight line. This means that an epipolar line from one image can be drawn on another image, even though it does not pass through that image's optical center.
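This collinearity can be checked numerically. The sketch below is self-contained (plain arrays, not the crate's API) and uses an arbitrary rotation and translation of my own choosing: it samples three depths along a single epipolar line in camera A and verifies that their projections into camera B's normalized image coordinates are collinear.

```rust
// Multiply a 3x3 matrix by a 3-vector.
fn mat_vec(m: [[f64; 3]; 3], v: [f64; 3]) -> [f64; 3] {
    [
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2],
    ]
}

// Project three depths along one epipolar line of camera A into camera B
// and return the 2d cross product that measures their collinearity.
fn projected_cross() -> f64 {
    // Arbitrary relative pose: 0.3 rad rotation about the y axis plus a translation.
    let (s, c) = (0.3f64.sin(), 0.3f64.cos());
    let r = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]];
    let t = [0.4, -0.2, 0.1];
    // Bearing of the normalized image coordinate (0.25, -0.1) in camera A.
    let bearing = [0.25, -0.1, 1.0];
    let mut proj = Vec::new();
    for &depth in [1.0, 2.0, 5.0].iter() {
        let p_a = [bearing[0] * depth, bearing[1] * depth, bearing[2] * depth];
        let rp = mat_vec(r, p_a);
        let p_b = [rp[0] + t[0], rp[1] + t[1], rp[2] + t[2]];
        // Normalized image coordinate in camera B.
        proj.push([p_b[0] / p_b[2], p_b[1] / p_b[2]]);
    }
    // A zero cross product of the difference vectors means the three
    // projections are collinear.
    let d1 = [proj[1][0] - proj[0][0], proj[1][1] - proj[0][1]];
    let d2 = [proj[2][0] - proj[0][0], proj[2][1] - proj[0][1]];
    d1[0] * d2[1] - d1[1] * d2[0]
}

fn main() {
    assert!(projected_cross().abs() < 1e-12);
    println!("projections of the epipolar line are collinear");
}
```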

The essential matrix makes it possible to compute a vector that is perpendicular to all bearings formed from the epipolar line on the second image's sensor. This is done by computing `E * x`, where `x` is a homogeneous normalized image coordinate from the first image. The dot product of the resulting vector with the second image's homogeneous coordinate `x'` is then equal to `0.0`. This can be written as:

```
dot(transpose(E * x), x') = 0
```

This can be re-written into the form given above:

```
transpose(x') * E * x = 0
```

Here the first operation creates a vector perpendicular to the bearings of the epipolar line on the second image, and the second operation takes the dot product, which should result in 0.
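As a concrete check, the constraint can be verified numerically with a hand-built essential matrix. The sketch below is self-contained (plain arrays, not the crate's API) and uses the textbook construction `E = [t]x * R`, where `[t]x` is the skew-symmetric cross-product matrix of the translation and `R` rotates camera A coordinates into camera B; the crate's internal conventions may differ.

```rust
// Skew-symmetric cross-product matrix of a 3-vector.
fn skew(t: [f64; 3]) -> [[f64; 3]; 3] {
    [
        [0.0, -t[2], t[1]],
        [t[2], 0.0, -t[0]],
        [-t[1], t[0], 0.0],
    ]
}

// Multiply two 3x3 matrices.
fn mat_mul(a: [[f64; 3]; 3], b: [[f64; 3]; 3]) -> [[f64; 3]; 3] {
    let mut out = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            for k in 0..3 {
                out[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    out
}

// Multiply a 3x3 matrix by a 3-vector.
fn mat_vec(m: [[f64; 3]; 3], v: [f64; 3]) -> [f64; 3] {
    [
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2],
    ]
}

// Build E = [t]x * R and evaluate transpose(x') * E * x for one point.
fn epipolar_residual() -> f64 {
    // Arbitrary relative pose: 0.3 rad rotation about the y axis plus a translation.
    let (s, c) = (0.3f64.sin(), 0.3f64.cos());
    let r = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]];
    let t = [0.4, -0.2, 0.1];
    let e = mat_mul(skew(t), r);
    // A 3d point in camera A coordinates and the same point in camera B coordinates.
    let p_a = [-1.0, 2.0, 3.0];
    let rp = mat_vec(r, p_a);
    let p_b = [rp[0] + t[0], rp[1] + t[1], rp[2] + t[2]];
    // Homogeneous normalized image coordinates x (image A) and x' (image B).
    let x = [p_a[0] / p_a[2], p_a[1] / p_a[2], 1.0];
    let xp = [p_b[0] / p_b[2], p_b[1] / p_b[2], 1.0];
    // transpose(x') * E * x
    let ex = mat_vec(e, x);
    xp[0] * ex[0] + xp[1] * ex[1] + xp[2] * ex[2]
}

fn main() {
    assert!(epipolar_residual().abs() < 1e-12);
    println!("epipolar constraint holds");
}
```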

With an `EssentialMatrix`, you can retrieve the rotation and translation given one normalized image coordinate and one bearing scaled to the depth of the point relative to the current reconstruction. Such a point can be computed using [`WorldPose::project_camera`] to convert a `WorldPoint` to a `CameraPoint`.

## Methods

### `impl EssentialMatrix`

#### `pub fn possible_poses_unfixed_bearing(&self, epsilon: f64, max_iterations: usize) -> Option<(Rotation3<f64>, Rotation3<f64>, Vector3<f64>)>`

Returns two possible rotations for the essential matrix along with a translation bearing of arbitrary length. The translation bearing is not yet in the correct space; the inverse rotation (transpose) must be multiplied by the translation bearing to make it post-rotation. The translation's magnitude and sign are unknown and must be solved for using a prior.

`epsilon` is the threshold by which the singular value decomposition is considered complete. Making this smaller may improve the precision. It is recommended to set this to no higher than `1e-6`.

`max_iterations` is the maximum number of iterations that singular value decomposition will run on this matrix. Use this in soft realtime systems to cap the execution time. A `max_iterations` of `0` may execute indefinitely and is not recommended.

```
use cv_core::RelativeCameraPose;
use nalgebra::{Isometry3, UnitQuaternion, Vector3};

let pose = RelativeCameraPose(Isometry3::from_parts(
    Vector3::new(-0.8, 0.4, 0.5).into(),
    UnitQuaternion::from_euler_angles(0.2, 0.3, 0.4),
));
// Get the possible poses for the essential matrix created from `pose`.
let (rot_a, rot_b, t) = pose
    .essential_matrix()
    .possible_poses_unfixed_bearing(1e-6, 50)
    .unwrap();
// The translation must be processed through the reverse (inverse) rotation.
let t_a = rot_a.inverse() * t;
let t_b = rot_b.inverse() * t;
// Convert the rotations into quaternion form.
let quat_a = UnitQuaternion::from(rot_a);
let quat_b = UnitQuaternion::from(rot_b);
// Compute the residual angle between each recovered rotation and the true rotation.
let a_res = quat_a.rotation_to(&pose.rotation).angle();
let b_res = quat_b.rotation_to(&pose.rotation).angle();
let a_close = a_res < 1e-4;
let b_close = b_res < 1e-4;
// At least one rotation is correct.
assert!(a_close || b_close);
// The translation points in the same (or reverse) direction.
let a_res = 1.0 - t_a.normalize().dot(&pose.translation.vector.normalize()).abs();
let b_res = 1.0 - t_b.normalize().dot(&pose.translation.vector.normalize()).abs();
let a_close = a_res < 1e-4;
let b_close = b_res < 1e-4;
assert!(a_close || b_close);
```

#### `pub fn possible_rotations(&self, epsilon: f64, max_iterations: usize) -> Option<[Rotation3<f64>; 2]>`

This returns only the two rotations that are possible.

```
use cv_core::RelativeCameraPose;
use nalgebra::{Isometry3, UnitQuaternion, Vector3};

let pose = RelativeCameraPose(Isometry3::from_parts(
    Vector3::new(-0.8, 0.4, 0.5).into(),
    UnitQuaternion::from_euler_angles(0.2, 0.3, 0.4),
));
// Get the possible rotations for the essential matrix created from `pose`.
let rbs = pose.essential_matrix().possible_rotations(1e-6, 50).unwrap();
let one_correct = rbs.iter().any(|&rot| {
    let angle_residual = UnitQuaternion::from(rot).rotation_to(&pose.rotation).angle();
    angle_residual < 1e-4
});
assert!(one_correct);
```

#### `pub fn possible_rotations_and_bearings(&self, epsilon: f64, max_iterations: usize) -> Option<[(Rotation3<f64>, Vector3<f64>); 2]>`

This returns the rotations and their corresponding post-rotation translation bearing.

```
use cv_core::RelativeCameraPose;
use nalgebra::{Isometry3, UnitQuaternion, Vector3};

let pose = RelativeCameraPose(Isometry3::from_parts(
    Vector3::new(-0.8, 0.4, 0.5).into(),
    UnitQuaternion::from_euler_angles(0.2, 0.3, 0.4),
));
// Get the possible rotations and bearings for the essential matrix created from `pose`.
let rbs = pose.essential_matrix().possible_rotations_and_bearings(1e-6, 50).unwrap();
let one_correct = rbs.iter().any(|&(rot, bearing)| {
    let angle_residual = UnitQuaternion::from(rot).rotation_to(&pose.rotation).angle();
    let translation_residual =
        1.0 - bearing.normalize().dot(&pose.translation.vector.normalize()).abs();
    angle_residual < 1e-4 && translation_residual < 1e-4
});
assert!(one_correct);
```

#### `pub fn solve_pose(&self, consensus_ratio: f64, epsilon: f64, max_iterations: usize, bearing_scale: impl Fn(Vector3<f64>, CameraPoint, NormalizedKeyPoint) -> f64, correspondences: impl Iterator<Item = (CameraPoint, NormalizedKeyPoint)>) -> Option<RelativeCameraPose>`

Return the `RelativeCameraPose` that transforms a `CameraPoint` of image `A` (source of `a`) to the corresponding `CameraPoint` of image `B` (source of `b`). This determines the average expected translation from the points themselves and whether the points agree with the rotation (points must be in front of the camera). The function takes an iterator of correspondences in the form `(a, b)`:

• `a` - A `CameraPoint` from image `A`; its `z` component supplies the depth
• `b` - The corresponding `NormalizedKeyPoint` from image `B`

`self` must satisfy the constraint:

```
transpose(homogeneous(a)) * E * homogeneous(b) = 0
```

Also, `a` and `b` must be a correspondence.

This will take the average translation over the entire iterator. This is done to smooth out noise and outliers (if present).

`consensus_ratio` is the ratio of points which must be in front of the camera for the model to be accepted and `Some` returned; otherwise `None` is returned.

`max_iterations` is the maximum number of iterations that singular value decomposition will run on this matrix. Use this in soft realtime systems to cap the execution time. A `max_iterations` of `0` may execute indefinitely and is not recommended.

`bearing_scale` is a function that is passed a translation bearing vector, an untranslated (but rotated) camera point, and a normalized key point where the actual point exists. It must return the scalar by which the translation bearing vector must be multiplied to get the actual translation.

`correspondences` must provide an iterator of `(a, b)` pairs, where `a` is a `CameraPoint` from camera A (carrying its depth along the positive `z` axis) and `b` is the corresponding `NormalizedKeyPoint` from camera B.

This does not communicate which points were outliers.
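To illustrate the role of `bearing_scale`, here is a hypothetical, self-contained stand-in (not the crate's `triangulate_bearing_reproject`): given the translation bearing, the rotated but untranslated point `p = R * a`, and the observed keypoint `b` in image B, it solves for the scalar `s` such that `p + s * t` reprojects onto `b`, using the `x` coordinate of the reprojection equation.

```rust
// Toy bearing_scale: solve (p.x + s*t.x) / (p.z + s*t.z) = b.x for s.
// `t` is the translation bearing, `p` the rotated (untranslated) camera
// point, and `b` the observed normalized keypoint in image B.
fn bearing_scale(t: [f64; 3], p: [f64; 3], b: [f64; 2]) -> f64 {
    (b[0] * p[2] - p[0]) / (t[0] - b[0] * t[2])
}

fn main() {
    // Ground truth: a rotated point and a translation of magnitude 1.25
    // along the bearing `t_hat`.
    let p = [0.5, -0.3, 4.0];
    let t_hat = [0.8, 0.0, 0.6];
    let s_true = 1.25;
    // The point as it appears in camera B, and its normalized keypoint.
    let q = [
        p[0] + s_true * t_hat[0],
        p[1] + s_true * t_hat[1],
        p[2] + s_true * t_hat[2],
    ];
    let b = [q[0] / q[2], q[1] / q[2]];
    // Recover the scale from the bearing, rotated point, and keypoint.
    let s = bearing_scale(t_hat, p, b);
    assert!((s - s_true).abs() < 1e-12);
    println!("recovered scale = {}", s);
}
```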

```
use cv_core::{CameraPoint, RelativeCameraPose};
use nalgebra::{Isometry3, Point3, UnitQuaternion, Vector3};

let pose = RelativeCameraPose(Isometry3::from_parts(
    Vector3::new(-0.8, 0.4, 0.5).into(),
    UnitQuaternion::from_euler_angles(0.2, 0.3, 0.4),
));
let a_point = CameraPoint(Point3::new(-1.0, 2.0, 3.0));
let b_point = pose.transform(a_point);
// Solve the pose from the essential matrix created from `pose`.
let estimate_pose = pose.essential_matrix().solve_pose(
    0.5,
    1e-6,
    50,
    cv_core::geom::triangulate_bearing_reproject,
    std::iter::once((a_point, b_point.into())),
).unwrap();

let angle_residual = estimate_pose.rotation.rotation_to(&pose.rotation).angle();
let translation_residual = (pose.translation.vector - estimate_pose.translation.vector).norm();
assert!(
    angle_residual < 1e-4 && translation_residual < 1e-4,
    "{}, {}, {:?}, {:?}",
    angle_residual,
    translation_residual,
    pose.translation.vector,
    estimate_pose.translation.vector
);
```

## Trait Implementations

### `impl Deref for EssentialMatrix`

#### `type Target = Matrix3<f64>`

The resulting type after dereferencing.

## Blanket Implementations

### `impl<T> Same<T> for T`

#### `type Output = T`

Should always be `Self`

### `impl<T, U> TryFrom<U> for T where U: Into<T>`

#### `type Error = Infallible`

The type returned in the event of a conversion error.

### `impl<T, U> TryInto<U> for T where U: TryFrom<T>`

#### `type Error = <U as TryFrom<T>>::Error`

The type returned in the event of a conversion error.