pub trait ColoredKinfu_ColoredKinFuConst {
fn as_raw_ColoredKinfu_ColoredKinFu(&self) -> *const c_void;
fn get_params(&self) -> Result<ColoredKinfu_Params> { ... }
fn render(&self, image: &mut dyn ToOutputArray) -> Result<()> { ... }
fn render_1(
&self,
image: &mut dyn ToOutputArray,
camera_pose: Matx44f
) -> Result<()> { ... }
fn get_cloud(
&self,
points: &mut dyn ToOutputArray,
normals: &mut dyn ToOutputArray,
colors: &mut dyn ToOutputArray
) -> Result<()> { ... }
fn get_points(&self, points: &mut dyn ToOutputArray) -> Result<()> { ... }
fn get_normals(
&self,
points: &dyn ToInputArray,
normals: &mut dyn ToOutputArray
) -> Result<()> { ... }
fn get_pose(&self) -> Result<Affine3f> { ... }
}
KinectFusion implementation
This class implements the 3d reconstruction algorithm described in the kinectfusion paper.
It takes a sequence of depth images taken from a depth sensor (or any other source of depth images, such as a stereo camera matching algorithm or even a raymarching renderer). The output can be obtained as a vector of points and their normals, or it can be Phong-rendered from a given camera pose.
The internal representation of the model is a voxel cuboid that holds TSDF values, which are a kind of distance to the surface (for details, read the kinectfusion article about TSDF). There is no interface to that representation yet.
KinFu uses OpenCL acceleration automatically if it is available. To enable or disable it explicitly, use cv::setUseOptimized() or cv::ocl::setUseOpenCL().
This implementation is based on kinfu-remake.
Note that the KinectFusion algorithm was patented and its use may be restricted by the list of patents mentioned in the README.md file in this module's directory.
That's why you need to set the OPENCV_ENABLE_NONFREE option in CMake to use KinectFusion.
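As a quick orientation, here is a minimal sketch of reading results back through this trait from an already-constructed reconstruction. It assumes the trait is in scope (for example via opencv::prelude::*) and that kf was created and fed depth frames elsewhere, since construction and frame updates are not part of this trait; the helper name is hypothetical.

use opencv::core::Mat;
use opencv::prelude::*;

// Hypothetical helper: read results out of an already-updated reconstruction.
// Everything here goes through the const trait only.
fn snapshot(kf: &impl ColoredKinfu_ColoredKinFuConst) -> opencv::Result<Mat> {
    // Pose of the camera for the last processed frame.
    let _pose = kf.get_pose()?;
    // Phong-rendered preview of the current surface (CV_8UC4).
    let mut preview = Mat::default();
    kf.render(&mut preview)?;
    Ok(preview)
}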
Required Methods
fn as_raw_ColoredKinfu_ColoredKinFu(&self) -> *const c_void
Provided Methods
fn get_params(&self) -> Result<ColoredKinfu_Params>
Get current parameters
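A minimal sketch, assuming the trait is in scope via opencv::prelude::*; it only checks that the call succeeds, and retrieving and inspecting the returned ColoredKinfu_Params value works the same way.

use opencv::prelude::*;

// Hypothetical helper: fetch the parameters the reconstruction currently uses.
fn check_params(kf: &impl ColoredKinfu_ColoredKinFuConst) -> opencv::Result<()> {
    let _params = kf.get_params()?;
    Ok(())
}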
fn render(&self, image: &mut dyn ToOutputArray) -> Result<()>
Renders a volume into an image
Renders a 0-surface of TSDF using Phong shading into a CV_8UC4 Mat. Light pose is fixed in KinFu params.
Parameters
- image: resulting image
fn render_1(
&self,
image: &mut dyn ToOutputArray,
camera_pose: Matx44f
) -> Result<()>
Renders a volume into an image
Renders a 0-surface of TSDF using Phong shading into a CV_8UC4 Mat. Light pose is fixed in KinFu params.
Parameters
- image: resulting image
- cameraPose: pose of camera to render from. If empty then render from the current pose, which is the last frame camera pose.
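A minimal sketch of rendering from an explicit viewpoint, assuming the trait is in scope via opencv::prelude::* and that the 4x4 camera pose matrix is obtained elsewhere (for example from an external tracker); the helper name is hypothetical.

use opencv::core::{Mat, Matx44f};
use opencv::prelude::*;

// Hypothetical helper: render from an explicitly supplied camera pose
// instead of the last tracked one.
fn render_from(kf: &impl ColoredKinfu_ColoredKinFuConst, pose: Matx44f) -> opencv::Result<Mat> {
    let mut image = Mat::default();
    kf.render_1(&mut image, pose)?; // same Phong rendering as render(), custom viewpoint
    Ok(image)
}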
fn get_cloud(
&self,
points: &mut dyn ToOutputArray,
normals: &mut dyn ToOutputArray,
colors: &mut dyn ToOutputArray
) -> Result<()>
Gets the points, normals and colors of the current 3d mesh
The order of normals corresponds to order of points. The order of points is undefined.
Parameters
- points: vector of points which are 4-float vectors
- normals: vector of normals which are 4-float vectors
- colors: vector of colors which are 4-float vectors
C++ default parameters
- colors: noArray()
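A minimal sketch of exporting the colored cloud, assuming the trait is in scope via opencv::prelude::*; the helper name is hypothetical.

use opencv::core::Mat;
use opencv::prelude::*;

// Hypothetical helper: pull the whole colored model out as three parallel
// arrays of 4-float vectors (points, normals and colors share the same ordering).
fn export_colored_cloud(kf: &impl ColoredKinfu_ColoredKinFuConst) -> opencv::Result<(Mat, Mat, Mat)> {
    let mut points = Mat::default();
    let mut normals = Mat::default();
    let mut colors = Mat::default();
    kf.get_cloud(&mut points, &mut normals, &mut colors)?;
    Ok((points, normals, colors))
}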
fn get_points(&self, points: &mut dyn ToOutputArray) -> Result<()>
Gets the points of the current 3d mesh
The order of points is undefined.
Parameters
- points: vector of points which are 4-float vectors
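A minimal sketch of fetching only the point positions, assuming the trait is in scope via opencv::prelude::*; the helper name is hypothetical.

use opencv::core::Mat;
use opencv::prelude::*;

// Hypothetical helper: export the surface points only (4-float vectors per point,
// ordering unspecified).
fn export_points(kf: &impl ColoredKinfu_ColoredKinFuConst) -> opencv::Result<Mat> {
    let mut points = Mat::default();
    kf.get_points(&mut points)?;
    Ok(points)
}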
fn get_normals(
&self,
points: &dyn ToInputArray,
normals: &mut dyn ToOutputArray
) -> Result<()>
Calculates normals for the given points
Parameters
- points: input vector of points which are 4-float vectors
- normals: output vector of corresponding normals which are 4-float vectors
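A minimal sketch of pairing this with get_points, assuming the trait is in scope via opencv::prelude::*; the helper name is hypothetical.

use opencv::core::Mat;
use opencv::prelude::*;

// Hypothetical helper: compute normals for points previously taken from this model.
// The i-th normal corresponds to the i-th input point.
fn normals_for(kf: &impl ColoredKinfu_ColoredKinFuConst, points: &Mat) -> opencv::Result<Mat> {
    let mut normals = Mat::default();
    kf.get_normals(points, &mut normals)?;
    Ok(normals)
}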