//! # NLP Evaluation Metrics and Online LDA
//!
//! This module provides standard NLP evaluation metrics for machine translation
//! and text summarization, along with online topic modeling.
//!
//! ## Metrics
//!
//! - **BLEU** (Bilingual Evaluation Understudy): Measures n-gram precision of
//! generated text against reference translations (Papineni et al. 2002).
//! - **ROUGE** (Recall-Oriented Understudy for Gisting Evaluation): Measures
//! n-gram recall for summarization evaluation.
//! - **METEOR** (Metric for Evaluation of Translation with Explicit ORdering):
//! Alignment-based metric with stemming and synonym matching.
//! - **STS** (Semantic Textual Similarity): Cosine-similarity evaluation against
//! human similarity ratings (Pearson/Spearman/MSE).
//! - **Perplexity**: Language-model perplexity computation via the
//! [`LanguageModelLike`] trait.
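The core of BLEU is *modified* (clipped) n-gram precision: each candidate n-gram counts at most as many times as it appears in the reference. A minimal sketch for unigrams, using the classic example from Papineni et al. (2002); the helper name is illustrative, not this module's API:

```rust
use std::collections::HashMap;

/// Clipped (modified) unigram precision, the core of BLEU.
/// Illustrative helper, not part of this module's API.
fn modified_unigram_precision(candidate: &str, reference: &str) -> f64 {
    // Count each token's occurrences in the reference.
    let mut ref_counts: HashMap<&str, usize> = HashMap::new();
    for tok in reference.split_whitespace() {
        *ref_counts.entry(tok).or_insert(0) += 1;
    }
    // Count candidate tokens, then clip each count by the reference count.
    let cand: Vec<&str> = candidate.split_whitespace().collect();
    let mut cand_counts: HashMap<&str, usize> = HashMap::new();
    for tok in &cand {
        *cand_counts.entry(tok).or_insert(0) += 1;
    }
    let clipped: usize = cand_counts
        .iter()
        .map(|(tok, &c)| c.min(*ref_counts.get(tok).unwrap_or(&0)))
        .sum();
    clipped as f64 / cand.len() as f64
}

fn main() {
    // Degenerate candidate from Papineni et al. (2002): plain precision
    // would be 7/7, but clipping caps "the" at its reference count of 2.
    let p = modified_unigram_precision(
        "the the the the the the the",
        "the cat is on the mat",
    );
    println!("{:.4}", p); // 2/7 ≈ 0.2857
}
```

Full BLEU combines these clipped precisions geometrically over n = 1..4 and multiplies by a brevity penalty; the clipping step above is what distinguishes it from naive precision.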
//!
//! ## Topic Modeling
//!
//! - **Online LDA**: Streaming Latent Dirichlet Allocation using stochastic
//! variational inference (Hoffman et al. 2010).
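The stochastic variational update in Hoffman et al. (2010) blends the global topic-word parameters λ toward a noisy minibatch estimate λ̂ with a decaying step size ρ_t = (τ₀ + t)^(−κ), where κ ∈ (0.5, 1] guarantees convergence. A minimal sketch of that update rule (flat slices and function names are illustrative, not this module's API):

```rust
/// Learning-rate schedule from Hoffman et al. (2010):
/// rho_t = (tau0 + t)^(-kappa), with kappa in (0.5, 1] for convergence.
fn learning_rate(tau0: f64, kappa: f64, t: usize) -> f64 {
    (tau0 + t as f64).powf(-kappa)
}

/// One stochastic update of the global variational parameters lambda,
/// blending in a noisy estimate lambda_hat computed from a minibatch:
/// lambda <- (1 - rho) * lambda + rho * lambda_hat.
fn update_lambda(lambda: &mut [f64], lambda_hat: &[f64], rho: f64) {
    for (l, &lh) in lambda.iter_mut().zip(lambda_hat) {
        *l = (1.0 - rho) * *l + rho * lh;
    }
}

fn main() {
    // Toy "topic-word" parameters flattened to a slice.
    let mut lambda = vec![1.0; 4];
    let lambda_hat = vec![3.0; 4];
    // At t = 0 with tau0 = 1.0: rho = (1 + 0)^(-0.7) = 1.0,
    // so the first minibatch estimate replaces lambda entirely.
    let rho = learning_rate(1.0, 0.7, 0);
    update_lambda(&mut lambda, &lambda_hat, rho);
    println!("rho = {rho}, lambda = {lambda:?}");
}
```

As t grows, ρ_t shrinks, so later minibatches perturb λ less and the estimate stabilizes; this is what makes LDA trainable one minibatch at a time instead of via full-corpus batch inference.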
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;
pub use ;