Trait opencv::prelude::PPF3DDetectorTrait
Class allowing the loading and matching of 3D models. Typical use:
// Train a model
ppf_match_3d::PPF3DDetector detector(0.05, 0.05);
detector.trainModel(pc);

// Search the model in a given scene
vector<Pose3DPtr> results;
detector.match(pcTest, results, 1.0/5.0, 0.05);
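The same flow through these Rust bindings might look like the sketch below. The constructor arguments mirror the C++ `PPF3DDetector(relativeSamplingStep, relativeDistanceStep, numAngles)` signature, and the `load_ply_simple` loader and file names are assumptions for illustration, not taken from this page:

```rust
use opencv::core::Vector;
use opencv::ppf_match_3d::{self, PPF3DDetector, Pose3DPtr};
use opencv::prelude::*;

fn main() -> opencv::Result<()> {
    // Load model and scene point clouds with normals (Nx6);
    // file names are placeholders, `1` requests normals.
    let pc = ppf_match_3d::load_ply_simple("model.ply", 1)?;
    let pc_test = ppf_match_3d::load_ply_simple("scene.ply", 1)?;

    // Train a model (sampling step 0.05, distance step 0.05, 30 angle bins).
    let mut detector = PPF3DDetector::new(0.05, 0.05, 30.0)?;
    detector.train_model(&pc)?;

    // Search the trained model in the given scene.
    let mut results = Vector::<Pose3DPtr>::new();
    detector.match_(&pc_test, &mut results, 1.0 / 5.0, 0.05)?;
    println!("{} pose candidates", results.len());
    Ok(())
}
```

Note that the Rust method is named `match_` (with a trailing underscore) because `match` is a Rust keyword.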
Required methods
pub fn as_raw_PPF3DDetector(&self) -> *const c_void
pub fn as_raw_mut_PPF3DDetector(&mut self) -> *mut c_void
Provided methods
pub fn set_search_params(
&mut self,
position_threshold: f64,
rotation_threshold: f64,
use_weighted_clustering: bool
) -> Result<()>
Sets the parameters for the search.
Parameters
- positionThreshold: Position threshold controlling the similarity of translations. Depends on the units of calibration/model.
- rotationThreshold: Rotation threshold controlling the similarity of rotations. This parameter can be perceived as a threshold over the difference of angles.
- useWeightedClustering: The algorithm by default clusters the poses without weighting. A non-zero value would indicate that the pose clustering should take into account the number of votes as the weights and perform a weighted averaging instead of a simple one.
C++ default parameters
- position_threshold: -1
- rotation_threshold: -1
- use_weighted_clustering: false
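As a sketch, the clustering behaviour could be tuned before matching like this (the constructor call mirrors the C++ signature and is an assumption; the values shown are the documented defaults plus weighted clustering, not recommendations):

```rust
use opencv::ppf_match_3d::PPF3DDetector;
use opencv::prelude::*;

fn main() -> opencv::Result<()> {
    let mut detector = PPF3DDetector::new(0.05, 0.05, 30.0)?;
    // Negative thresholds (the C++ defaults of -1) let the detector derive
    // them internally; `true` enables vote-weighted pose clustering, so
    // poses backed by more votes contribute more to the averaged pose.
    detector.set_search_params(-1.0, -1.0, true)?;
    Ok(())
}
```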
pub fn train_model(&mut self, model: &Mat) -> Result<()>
Trains a new model.
Parameters
- model: The input point cloud with normals (Nx6)
Uses the parameters set in the constructor to downsample and learn a new model. Once the model is learnt, the instance is ready for calling "match".
pub fn match_(
&mut self,
scene: &Mat,
results: &mut Vector<Pose3DPtr>,
relative_scene_sample_step: f64,
relative_scene_distance: f64
) -> Result<()>
Matches a trained model across a provided scene.
Parameters
- scene: Point cloud for the scene
- results: [out] List of output poses
- relativeSceneSampleStep: The ratio of scene points to be used for the matching after sampling with relativeSceneDistance. For example, if this value is set to 1.0/5.0, every 5th point from the scene is used for pose estimation. This parameter allows an easy trade-off between speed and accuracy of the matching. Increasing the value leads to fewer points being used and in turn to a faster but less accurate pose computation. Decreasing the value has the inverse effect.
- relativeSceneDistance: Set the distance threshold relative to the diameter of the model. This parameter is equivalent to relativeSamplingStep in the training stage. This parameter acts like a prior sampling with the relativeSceneSampleStep parameter.
C++ default parameters
- relative_scene_sample_step: 1.0/5.0
- relative_scene_distance: 0.03
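The speed/accuracy trade-off described for relativeSceneSampleStep can be exercised by calling `match_` with different ratios. A minimal sketch, assuming `detector` has already been trained and `scene` is an Nx6 point cloud with normals:

```rust
use opencv::core::{Mat, Vector};
use opencv::ppf_match_3d::{PPF3DDetector, Pose3DPtr};
use opencv::prelude::*;

fn match_coarse_then_fine(detector: &mut PPF3DDetector, scene: &Mat) -> opencv::Result<()> {
    // Coarser sampling: every 10th sampled scene point -> faster, less accurate.
    let mut coarse = Vector::<Pose3DPtr>::new();
    detector.match_(scene, &mut coarse, 1.0 / 10.0, 0.03)?;

    // Default sampling: every 5th point -> slower, more accurate.
    let mut fine = Vector::<Pose3DPtr>::new();
    detector.match_(scene, &mut fine, 1.0 / 5.0, 0.03)?;
    Ok(())
}
```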