Struct arrsac::Arrsac
pub struct Arrsac<R> { /* fields omitted */ }
The ARRSAC algorithm for sample consensus.
Don’t forget to shuffle your input data points to avoid bias before using this consensus process. It will not shuffle your data for you. If you do not shuffle, the output will be biased towards data at the beginning of the inputs.
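A dependency-free sketch of the pre-shuffle step, using a minimal xorshift RNG so the example stays self-contained; in a real program you would typically use the `rand` crate's `SliceRandom::shuffle` instead:

```rust
// Sketch: shuffle data before handing it to the consensus process so that
// the result is not biased towards points at the start of the input.
// The xorshift RNG here is a toy stand-in for a real RNG.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

/// Fisher-Yates shuffle over a mutable slice.
fn shuffle<T>(data: &mut [T], state: &mut u64) {
    for i in (1..data.len()).rev() {
        let j = (xorshift(state) % (i as u64 + 1)) as usize;
        data.swap(i, j);
    }
}

fn main() {
    let mut points: Vec<(f64, f64)> = (0..10).map(|i| (i as f64, 2.0 * i as f64)).collect();
    let original = points.clone();
    let mut state = 0x1234_5678_9abc_def0;
    shuffle(&mut points, &mut state);
    // Same points, different order: the consensus process sees no positional bias.
    let mut a = points.clone();
    let mut b = original.clone();
    a.sort_by(|x, y| x.partial_cmp(y).unwrap());
    b.sort_by(|x, y| x.partial_cmp(y).unwrap());
    assert_eq!(a, b);
    println!("shuffled prefix: {:?}", &points[..3]);
}
```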
Implementations
The rng should have the same properties you would want for a Monte Carlo simulation: it should generate random numbers quickly without any discernible patterns.

The inlier_threshold is the one parameter that is always specific to your dataset. It must be set to the threshold at which a data point’s residual is considered an inlier. Some of the other parameters may need to be configured based on the amount of data, such as block_size and likelihood_ratio_threshold. However, inlier_threshold has to be set based on the residual function used with the model.
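For instance, if the model is a 2-D line and the residual is perpendicular distance, then inlier_threshold is in the same units as that distance. The `Line` type below is a hypothetical example model, not a type from this crate:

```rust
// Sketch: `inlier_threshold` lives in the units of your residual function.
// Hypothetical model: a 2-D line ax + by + c = 0, with perpendicular
// distance as the residual.
struct Line { a: f64, b: f64, c: f64 }

impl Line {
    /// Perpendicular distance from a point to the line.
    fn residual(&self, p: (f64, f64)) -> f64 {
        (self.a * p.0 + self.b * p.1 + self.c).abs()
            / (self.a * self.a + self.b * self.b).sqrt()
    }
}

fn main() {
    let line = Line { a: 0.0, b: 1.0, c: 0.0 }; // the x-axis
    let inlier_threshold = 0.5; // same units as the residual (distance here)
    let near = (3.0, 0.2);
    let far = (3.0, 2.0);
    assert!(line.residual(near) < inlier_threshold);
    assert!(line.residual(far) >= inlier_threshold);
}
```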
initial_epsilon must be higher than initial_delta. If you modify these values, you need to make sure that within one block_size the likelihood_ratio_threshold can be reached and a model can be rejected. Basically, make sure that

((1.0 - delta) / (1.0 - epsilon))^block_size >>> likelihood_ratio_threshold

This must be done to ensure outlier models are rejected during the initial generation phase, which only processes block_size data points.
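The condition above can be checked numerically. The delta, epsilon, and block_size values below are illustrative numbers, not the crate's defaults:

```rust
// Sketch: verify that within one block an all-outlier model can be rejected.
// delta and epsilon here are example values chosen for illustration.
fn main() {
    let delta: f64 = 0.05;
    let epsilon: f64 = 0.3;
    let block_size: i32 = 64;
    let likelihood_ratio_threshold: f64 = 1e3;

    // Growth of the likelihood ratio over one block of data points.
    let ratio = ((1.0 - delta) / (1.0 - epsilon)).powi(block_size);

    // With these values the ratio vastly exceeds the threshold, so a
    // model supported only by outliers is rejected within one block.
    assert!(ratio > likelihood_ratio_threshold);
    println!("ratio over one block: {:.3e}", ratio);
}
```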
initial_epsilon should also be set as large as you can while it remains relatively pessimistic, so that a bad model can be rejected early in the process and an updated value for delta can be computed during the adaptive process. This may not be possible and will depend on your data.
Number of models generated in the initial step when epsilon and delta are being estimated.
Default: 256
Number of data blocks used to compute the initial estimate of delta and epsilon before proceeding with regular block processing. This is used instead of an initial epsilon and delta, which were suggested by the paper.
Default: 4
Maximum number of best hypotheses to retain during block processing. This number is halved on each block, such that on block n the number of hypotheses retained is max_candidate_hypotheses >> n.
Default: 64
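The halving schedule can be sketched directly, using the documented default of 64:

```rust
// Sketch of the halving schedule: on block n, the number of retained
// hypotheses is max_candidate_hypotheses >> n.
fn main() {
    let max_candidate_hypotheses: usize = 64; // the documented default
    let retained: Vec<usize> = (0..7).map(|n| max_candidate_hypotheses >> n).collect();
    assert_eq!(retained, vec![64, 32, 16, 8, 4, 2, 1]);
    println!("{:?}", retained);
}
```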
Number of estimations (each of which may generate multiple hypotheses) that will be run for each block of data evaluated
Default: 64
Number of data points evaluated before more hypotheses are generated
Default: 64
Once a model reaches this level of unlikelihood, it is rejected. Set this higher to make it less restrictive, usually at the cost of more execution time.
Increasing this will make it more likely to find a good result.
Decreasing this will speed up execution.
This ratio is not exposed as a parameter in the original paper, but is instead computed recursively for a few iterations. It is roughly equivalent to the reciprocal of the probability of rejecting a good model. You can use that to control the probability that a good model is rejected.
Default: 1e3
Residual threshold for determining if a data point is an inlier or an outlier of a model
Trait Implementations
Takes a slice over the data and an estimator instance. Returns None if no valid model could be found for the data and Some if a model was found.
Takes a slice over the data and an estimator instance. Returns None if no valid model could be found for the data and Some if a model was found. It includes the inliers consistent with the model.
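A toy sketch of the two return shapes described above. This is not the actual consensus trait implemented by this struct; it is a hypothetical stand-in showing the difference between returning `Option<Model>` and `Option<(Model, inliers)>`, with a trivial mean "model" over `f64` data:

```rust
// Hypothetical stand-in for the two methods described above, NOT the
// crate's actual trait. The model here is simply the mean of the data.
struct MeanModel { mean: f64 }

/// Returns only the model, or None if no valid model could be found.
fn model(data: &[f64], inlier_threshold: f64) -> Option<MeanModel> {
    model_inliers(data, inlier_threshold).map(|(m, _)| m)
}

/// Returns the model together with the indices of inliers consistent with it.
fn model_inliers(data: &[f64], inlier_threshold: f64) -> Option<(MeanModel, Vec<usize>)> {
    if data.is_empty() {
        return None; // no valid model could be found
    }
    let mean = data.iter().sum::<f64>() / data.len() as f64;
    let inliers = data
        .iter()
        .enumerate()
        .filter(|(_, &x)| (x - mean).abs() < inlier_threshold)
        .map(|(i, _)| i)
        .collect();
    Some((MeanModel { mean }, inliers))
}

fn main() {
    assert!(model(&[], 1.0).is_none());
    let (m, inliers) = model_inliers(&[1.0, 1.1, 0.9, 10.0], 3.0).unwrap();
    println!("mean = {:.2}, inliers = {:?}", m.mean, inliers);
}
```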