Trait opencv::hub_prelude::LargeKinfu

pub trait LargeKinfu {
    pub fn as_raw_LargeKinfu(&self) -> *const c_void;
    pub fn as_raw_mut_LargeKinfu(&mut self) -> *mut c_void;

    pub fn get_params(&self) -> Result<Params> { ... }
    pub fn render(
        &self,
        image: &mut dyn ToOutputArray,
        camera_pose: Matx44f
    ) -> Result<()> { ... }
    pub fn get_cloud(
        &self,
        points: &mut dyn ToOutputArray,
        normals: &mut dyn ToOutputArray
    ) -> Result<()> { ... }
    pub fn get_points(&self, points: &mut dyn ToOutputArray) -> Result<()> { ... }
    pub fn get_normals(
        &self,
        points: &dyn ToInputArray,
        normals: &mut dyn ToOutputArray
    ) -> Result<()> { ... }
    pub fn reset(&mut self) -> Result<()> { ... }
    pub fn get_pose(&self) -> Result<Affine3f> { ... }
    pub fn update(&mut self, depth: &dyn ToInputArray) -> Result<bool> { ... }
}

Large Scale Dense Depth Fusion implementation

This class implements a 3D reconstruction algorithm for larger environments using spatially hashed TSDF volumes ("submaps"). It also runs periodic pose-graph optimization to minimize tracking drift over long sequences. The algorithm does not currently implement a relocalization or loop-closure module; a bag-of-words approach or the RGB-D relocalization described in Glocker et al., ISMAR 2013 may be implemented in the future.

It takes a sequence of depth images from a depth sensor (or any other source of depth images, such as a stereo-matching algorithm or even a raymarching renderer). The output can be obtained as a vector of points and their normals, or it can be Phong-rendered from a given camera pose.
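The frame loop implied above can be sketched as follows, assuming the `opencv` crate is built with the contrib modules and `OPENCV_ENABLE_NONFREE`. The `opencv::large_kinfu` module path, the `Params::default_params()` constructor, and the Rust-side `Matx44f::eye()` call are assumptions based on the crate's usual binding conventions and are not confirmed by this page; the trait methods themselves are the ones listed below.

```rust
use opencv::{core::Mat, prelude::*};

fn reconstruct(depth_frames: &[Mat]) -> opencv::Result<()> {
    // Assumed module path and default-parameter constructor (hypothetical).
    let params = opencv::large_kinfu::Params::default_params()?;
    let mut kf = <dyn LargeKinfu>::create(&params)?;

    for depth in depth_frames {
        // update() integrates one depth frame; Ok(false) means tracking
        // failed, after which reset() restarts the reconstruction.
        if !kf.update(depth)? {
            kf.reset()?;
        }
    }

    // Export the reconstructed surface as points plus normals...
    let (mut points, mut normals) = (Mat::default(), Mat::default());
    kf.get_cloud(&mut points, &mut normals)?;

    // ...or Phong-render it from a camera pose; Matx44f::eye() mirrors
    // the C++ default parameter for render().
    let mut image = Mat::default();
    kf.render(&mut image, opencv::core::Matx44f::eye())?;
    Ok(())
}
```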

The internal representation of the model is a spatially hashed voxel cube storing TSDF values, which represent the distance to the closest surface (for details, see the KinectFusion article about TSDF). There is no interface to that representation yet.
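The two ingredients of that representation can be illustrated with a self-contained sketch: a truncation function that clamps signed distances into [-1, 1], and a hash map that allocates fixed-size voxel blocks only where the sensor has observed something. This is a minimal illustration of the idea, not OpenCV's internal layout.

```rust
use std::collections::HashMap;

/// Truncated signed distance: positive in front of the surface, negative
/// behind it, clamped to [-1, 1] by the truncation distance `trunc_dist`.
fn tsdf(sdf: f32, trunc_dist: f32) -> f32 {
    (sdf / trunc_dist).clamp(-1.0, 1.0)
}

/// A minimal spatially hashed voxel grid: only blocks that have been
/// touched by an observation are allocated, keyed by their 3D block index.
struct HashedTsdfVolume {
    block_size: i32, // voxels per block edge
    blocks: HashMap<(i32, i32, i32), Vec<f32>>,
}

impl HashedTsdfVolume {
    fn new(block_size: i32) -> Self {
        Self { block_size, blocks: HashMap::new() }
    }

    /// Which block a global voxel coordinate falls into.
    fn block_index(&self, v: (i32, i32, i32)) -> (i32, i32, i32) {
        let b = self.block_size;
        (v.0.div_euclid(b), v.1.div_euclid(b), v.2.div_euclid(b))
    }

    /// Allocate the containing block on demand and store a TSDF value.
    fn set(&mut self, v: (i32, i32, i32), value: f32) {
        let b = self.block_size;
        let block = self
            .blocks
            .entry(self.block_index(v))
            .or_insert_with(|| vec![1.0; (b * b * b) as usize]); // 1.0 = empty
        let (lx, ly, lz) = (v.0.rem_euclid(b), v.1.rem_euclid(b), v.2.rem_euclid(b));
        block[((lz * b + ly) * b + lx) as usize] = value;
    }

    fn get(&self, v: (i32, i32, i32)) -> f32 {
        let b = self.block_size;
        match self.blocks.get(&self.block_index(v)) {
            Some(block) => {
                let (lx, ly, lz) = (v.0.rem_euclid(b), v.1.rem_euclid(b), v.2.rem_euclid(b));
                block[((lz * b + ly) * b + lx) as usize]
            }
            None => 1.0, // unallocated space is treated as empty
        }
    }
}
```

The hashing is what makes the volume scale to large environments: memory grows with the observed surface, not with the bounding box of the scene.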

For pose-graph optimization, a submap abstraction over the Volume class is created. New submaps are added to the model when there is low visibility overlap between the current viewing frustum and the existing volume/model. Multiple submaps are tracked simultaneously, and a pose graph is created and optimized periodically.
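The "low visibility overlap" trigger described above can be sketched as a simple ratio test. The threshold value and the block-count inputs here are illustrative assumptions; OpenCV's actual criterion differs in detail.

```rust
/// Decide whether to start a new submap: if too small a fraction of the
/// voxel blocks visible in the current camera frustum is already covered
/// by the active submap, tracking against it would degrade, so a new
/// submap is begun at the current pose.
fn needs_new_submap(
    visible_blocks: usize,     // blocks inside the current frustum
    overlapping_blocks: usize, // of those, blocks present in the active submap
    min_overlap_ratio: f64,    // e.g. 0.4 (illustrative, not OpenCV's value)
) -> bool {
    if visible_blocks == 0 {
        return false; // nothing visible: no basis for a decision
    }
    (overlapping_blocks as f64) / (visible_blocks as f64) < min_overlap_ratio
}
```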

LargeKinfu does not use any OpenCL acceleration yet. To enable or disable it explicitly, use cv::setUseOptimized() or cv::ocl::setUseOpenCL().

This implementation is inspired by Kintinuous, InfiniTAM, and other state-of-the-art algorithms.

You need to set the OPENCV_ENABLE_NONFREE option in CMake to use KinectFusion.

Required methods

pub fn as_raw_LargeKinfu(&self) -> *const c_void

pub fn as_raw_mut_LargeKinfu(&mut self) -> *mut c_void

Provided methods

pub fn get_params(&self) -> Result<Params>

pub fn render(
    &self,
    image: &mut dyn ToOutputArray,
    camera_pose: Matx44f
) -> Result<()>

C++ default parameters

  • camera_pose: Matx44f::eye()

pub fn get_cloud(
    &self,
    points: &mut dyn ToOutputArray,
    normals: &mut dyn ToOutputArray
) -> Result<()>

pub fn get_points(&self, points: &mut dyn ToOutputArray) -> Result<()>

pub fn get_normals(
    &self,
    points: &dyn ToInputArray,
    normals: &mut dyn ToOutputArray
) -> Result<()>

pub fn reset(&mut self) -> Result<()>

pub fn get_pose(&self) -> Result<Affine3f>

pub fn update(&mut self, depth: &dyn ToInputArray) -> Result<bool>

Implementations

impl<'_> dyn LargeKinfu + '_

pub fn create(_params: &Ptr<Params>) -> Result<Ptr<dyn LargeKinfu>>

Implementors
