pub struct RankingMetrics {
pub average_rank: Option<f64>,
pub mean_average_precision: Option<f64>,
pub mean_squared_error: Option<f64>,
pub normalized_discounted_cumulative_gain: Option<f64>,
}
Evaluation metrics used by weighted-ALS (alternating least squares) models with feedback_type=implicit.
This type is not used in any activity; it is only used as part of another schema.
Fields
average_rank: Option<f64>
Determines the goodness of a ranking by computing the percentile rank from the predicted confidence and dividing it by the original rank.

mean_average_precision: Option<f64>
Calculates a precision per user for all the items by ranking them and then averages all the precisions across all the users.

mean_squared_error: Option<f64>
Similar to the mean squared error computed in regression and explicit recommendation models, except that instead of computing the rating directly, the output from evaluate is compared against a preference, which is 1 or 0 depending on whether the rating exists.

normalized_discounted_cumulative_gain: Option<f64>
A metric that determines the goodness of a ranking calculated from the predicted confidence by comparing it to an ideal rank measured by the original ratings.
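The NDCG field above describes ranking items by predicted confidence and comparing the result against an ideal ordering from the original ratings. A minimal sketch of that computation, assuming a standard NDCG definition with a log2 discount (the `dcg` and `ndcg` helper names are illustrative, not part of this crate):

```rust
// Discounted cumulative gain of a relevance list in ranked order:
// relevance at position i is discounted by log2(i + 2).
fn dcg(rels: &[f64]) -> f64 {
    rels.iter()
        .enumerate()
        .map(|(i, r)| r / (i as f64 + 2.0).log2())
        .sum()
}

// NDCG: rank items by predicted confidence, take the DCG of the
// original ratings in that order, and normalize by the DCG of the
// ideal ordering (ratings sorted descending).
fn ndcg(predicted_confidence: &[f64], original_rating: &[f64]) -> f64 {
    let mut idx: Vec<usize> = (0..predicted_confidence.len()).collect();
    idx.sort_by(|&a, &b| {
        predicted_confidence[b]
            .partial_cmp(&predicted_confidence[a])
            .unwrap()
    });
    let ranked: Vec<f64> = idx.iter().map(|&i| original_rating[i]).collect();

    let mut ideal = original_rating.to_vec();
    ideal.sort_by(|a, b| b.partial_cmp(a).unwrap());

    dcg(&ranked) / dcg(&ideal)
}

fn main() {
    // A ranking whose confidence order matches the ratings scores 1.0.
    let conf = [0.9, 0.2, 0.7];
    let rating = [1.0, 0.0, 1.0];
    println!("NDCG = {:.3}", ndcg(&conf, &rating));
}
```

A perfect ordering yields an NDCG of 1.0; misranking a relevant item below an irrelevant one pulls the value below 1.0.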
Trait Implementations
impl Clone for RankingMetrics

fn clone(&self) -> RankingMetrics

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
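As a usage sketch of the Clone impl: since every field is an Option<f64>, cloning is a cheap field-by-field copy. The struct is redefined locally here (with Debug and PartialEq added for the demonstration) so the snippet stands alone:

```rust
// Local redefinition for a self-contained example; in the crate the
// type already derives Clone.
#[derive(Clone, Debug, PartialEq)]
pub struct RankingMetrics {
    pub average_rank: Option<f64>,
    pub mean_average_precision: Option<f64>,
    pub mean_squared_error: Option<f64>,
    pub normalized_discounted_cumulative_gain: Option<f64>,
}

fn main() {
    let m = RankingMetrics {
        average_rank: Some(1.4),
        mean_average_precision: Some(0.82),
        mean_squared_error: None,
        normalized_discounted_cumulative_gain: Some(0.91),
    };

    // `clone` produces an independent copy.
    let mut copy = m.clone();

    // `clone_from` performs copy-assignment into an existing value.
    copy.clone_from(&m);
    assert_eq!(copy, m);
    println!("{:?}", copy.mean_average_precision);
}
```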