Struct tantivy::aggregation::metric::PercentilesAggregationReq
pub struct PercentilesAggregationReq {
    pub field: String,
    pub percents: Option<Vec<f64>>,
    pub keyed: bool,
    pub missing: Option<f64>,
}
§Percentiles
The percentiles aggregation is a useful tool for understanding the distribution of a data set. It calculates the values below which a given percentage of the data falls. For instance, the 95th percentile indicates the value below which 95% of the data points can be found.
This aggregation can be particularly interesting for analyzing website or service response times. For example, if the 95th percentile website load time is significantly higher than the median, this indicates that a small percentage of users are experiencing much slower load times than the majority.
To use the percentiles aggregation, you’ll need to provide a field to aggregate on. In the case of website load times, this would typically be a field containing the duration of time it takes for the site to load.
The following example demonstrates a request for the percentiles of the “load_time” field:
{
  "percentiles": {
    "field": "load_time"
  }
}
This request will return an object containing the default percentiles (1, 5, 25, 50 (median), 75, 95, and 99). You can also customize the percentiles you want to calculate by providing an array of values in the “percents” parameter:
{
  "percentiles": {
    "field": "load_time",
    "percents": [10, 20, 30, 40, 50, 60, 70, 80, 90]
  }
}
In this example, the aggregation will return the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, and 90th percentiles of the “load_time” field.
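The keyed field (documented below) changes the shape of the response: when set to true, the results are returned as a map keyed by percentile rather than as an array of entries. A request using it might look like this (a sketch following the examples above):

{
  "percentiles": {
    "field": "load_time",
    "keyed": true
  }
}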
Analyzing the percentiles of website load times can help you understand the user experience and identify areas for optimization.
§Estimating Percentiles
While percentiles provide valuable insights into the distribution of data, it’s important to understand that they are often estimates. This is because calculating exact percentiles for large data sets can be computationally expensive and time-consuming. As a result, many percentile aggregation algorithms use approximation techniques to provide faster results.
Fields§
field: String
The field name to compute the percentiles on.
percents: Option<Vec<f64>>
The percentiles to compute. Defaults to [1.0, 5.0, 25.0, 50.0, 75.0, 95.0, 99.0].
keyed: bool
Whether to return the percentiles as a hash map
missing: Option<f64>
The missing parameter defines how documents that are missing a value should be treated. By default they will be ignored, but it is also possible to treat them as if they had a value. Example in JSON format:
{
  "field": "my_numbers",
  "missing": "10.0"
}
Implementations§
impl PercentilesAggregationReq
pub fn from_field_name(field_name: String) -> Self
Creates a new PercentilesAggregationReq instance from a field name.
pub fn field_name(&self) -> &str
Returns the field name the aggregation is computed on.
Trait Implementations§
impl Clone for PercentilesAggregationReq
fn clone(&self) -> PercentilesAggregationReq
fn clone_from(&mut self, source: &Self)
impl Debug for PercentilesAggregationReq
impl<'de> Deserialize<'de> for PercentilesAggregationReq
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
impl PartialEq for PercentilesAggregationReq
fn eq(&self, other: &PercentilesAggregationReq) -> bool
Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for PercentilesAggregationReq
Auto Trait Implementations§
impl Freeze for PercentilesAggregationReq
impl RefUnwindSafe for PercentilesAggregationReq
impl Send for PercentilesAggregationReq
impl Sync for PercentilesAggregationReq
impl Unpin for PercentilesAggregationReq
impl UnwindSafe for PercentilesAggregationReq
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Converts Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Converts Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Converts &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Converts &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.