pub struct CountVectorizer {
pub feature_names: Vec<String>,
}
Struct for converting a collection of text documents into a matrix of token counts. This implementation stores the counts as nested Vecs, a dense representation in which zero counts are stored explicitly.
§Fields
feature_names: A vector storing the unique words found across all documents.
These are the ‘features’ that the model has learned.
§Examples
use ducky_learn::feature_extraction::CountVectorizer;
let mut count_vector = CountVectorizer::new();
let document = vec![
    "hello this is a test".to_string(),
    "this is another test".to_string(),
];
count_vector.fit_transform(&document);
assert_eq!(count_vector.feature_names, vec!["hello", "this", "is", "a", "test", "another"]);
§Implementations
impl CountVectorizer
pub fn fit_transform(&mut self, input_document: &Vec<String>) -> Vec<Vec<f64>>
Fits the model according to the given training data and then transforms the data into a matrix of token counts.
This process involves learning the ‘vocabulary’ from the input data (i.e., all unique words across all documents) and then representing each document as a vector of counts of the words in the learned vocabulary.
§Arguments
input_document - A vector of strings where each string represents a document.
§Returns
A vector of vectors, where each inner vector represents a document and contains the token counts for each word in the learned vocabulary.
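The two passes described above, first learning the vocabulary and then counting, can be sketched as follows. This is a hypothetical re-implementation for illustration only, not the crate's actual code; whitespace tokenization and first-seen vocabulary ordering are assumptions inferred from the examples.

```rust
// Illustrative sketch of fit_transform: learn the vocabulary in first-seen
// order, then produce one count row per document, aligned with that vocabulary.
// (Hypothetical re-implementation for explanation; not the crate's code.)
fn fit_transform_sketch(docs: &[String]) -> (Vec<String>, Vec<Vec<f64>>) {
    // First pass: collect unique whitespace-separated tokens in order of
    // first appearance across all documents.
    let mut vocab: Vec<String> = Vec::new();
    for doc in docs {
        for token in doc.split_whitespace() {
            if !vocab.iter().any(|w| w.as_str() == token) {
                vocab.push(token.to_string());
            }
        }
    }
    // Second pass: one count vector per document, indexed by vocabulary position.
    let counts = docs
        .iter()
        .map(|doc| {
            let mut row = vec![0.0; vocab.len()];
            for token in doc.split_whitespace() {
                if let Some(i) = vocab.iter().position(|w| w.as_str() == token) {
                    row[i] += 1.0;
                }
            }
            row
        })
        .collect();
    (vocab, counts)
}

fn main() {
    let docs = vec![
        "hello this is a test".to_string(),
        "this is another test".to_string(),
    ];
    let (vocab, counts) = fit_transform_sketch(&docs);
    assert_eq!(vocab, vec!["hello", "this", "is", "a", "test", "another"]);
    assert_eq!(counts[0], vec![1.0, 1.0, 1.0, 1.0, 1.0, 0.0]);
    assert_eq!(counts[1], vec![0.0, 1.0, 1.0, 0.0, 1.0, 1.0]);
    println!("vocab: {:?}", vocab);
}
```

The sketch reproduces the counts shown in the example below; the real implementation may differ in tokenization or data structures.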
§Examples
use ducky_learn::feature_extraction::CountVectorizer;
let mut count_vector = CountVectorizer::new();
let document = vec![
    "hello this is a test".to_string(),
    "this is another test".to_string(),
];
let transformed_document = count_vector.fit_transform(&document);
assert_eq!(transformed_document, vec![
    vec![1.0, 1.0, 1.0, 1.0, 1.0, 0.0],
    vec![0.0, 1.0, 1.0, 0.0, 1.0, 1.0],
]);
pub fn transform(&self, input_document: &Vec<String>) -> Vec<Vec<f64>>
Transforms the data into a matrix of token counts using the learned vocabulary.
This process involves representing each document as a vector of counts of the
words in the learned vocabulary. Note that this method does not learn the vocabulary
and assumes that fit_transform has already been called.
§Arguments
input_document - A vector of strings where each string represents a document.
§Returns
A vector of vectors, where each inner vector represents a document and contains the token counts for each word in the learned vocabulary.
§Examples
use ducky_learn::feature_extraction::CountVectorizer;
let mut count_vector = CountVectorizer::new();
let document = vec![
    "hello this is a test".to_string(),
    "this is another test".to_string(),
];
count_vector.fit_transform(&document);
let new_document = vec![
    "this another test".to_string(),
];
let transformed_new_document = count_vector.transform(&new_document);
assert_eq!(transformed_new_document, vec![
    vec![0.0, 1.0, 0.0, 0.0, 1.0, 1.0],
]);
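One behavior worth noting when transforming new data: a token that never appeared during fitting has no position in the learned vocabulary, so it presumably contributes nothing to the count vector. The sketch below illustrates that lookup logic; it is a hypothetical re-implementation, not the crate's actual code, and the assumption that out-of-vocabulary words are silently ignored is not confirmed by the documentation above.

```rust
// Illustrative sketch of transform: count tokens against an already-learned
// vocabulary, skipping any token not present in it.
// (Hypothetical re-implementation for explanation; not the crate's code.)
fn transform_sketch(vocab: &[String], docs: &[String]) -> Vec<Vec<f64>> {
    docs.iter()
        .map(|doc| {
            let mut row = vec![0.0; vocab.len()];
            for token in doc.split_whitespace() {
                // Tokens absent from `vocab` contribute nothing to the row.
                if let Some(i) = vocab.iter().position(|w| w.as_str() == token) {
                    row[i] += 1.0;
                }
            }
            row
        })
        .collect()
}

fn main() {
    let vocab: Vec<String> = ["hello", "this", "is", "a", "test", "another"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    // "unseen" is not in the vocabulary, so it is ignored entirely.
    let docs = vec!["this unseen test".to_string()];
    assert_eq!(transform_sketch(&vocab, &docs), vec![vec![0.0, 1.0, 0.0, 0.0, 1.0, 0.0]]);
    println!("ok");
}
```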