Struct linfa_preprocessing::tf_idf_vectorization::TfIdfVectorizer
pub struct TfIdfVectorizer { /* fields omitted */ }

Similar to CountVectorizer, but instead of just counting the term frequency of each vocabulary entry in each given document, it computes the term frequency times the inverse document frequency, giving more weight to entries that appear many times but only in a few documents. The weighting function can be adjusted by setting the appropriate method. This struct provides the same string processing customizations described in CountVectorizer.
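To make the weighting concrete, here is a minimal self-contained sketch of one common tf-idf variant (raw term count times a smoothed log inverse document frequency). It is illustrative only: the exact smoothing and normalization used internally by TfIdfVectorizer may differ, and whitespace splitting stands in for the configurable regex tokenizer.

```rust
use std::collections::HashMap;

// Illustrative tf-idf: weight(t, d) = count(t in d) * (ln(n_docs / df(t)) + 1).
// This is one common variant, not necessarily linfa's exact formula.
fn tf_idf(docs: &[&str]) -> Vec<HashMap<String, f64>> {
    let n_docs = docs.len() as f64;
    let tokenized: Vec<Vec<String>> = docs
        .iter()
        .map(|d| d.split_whitespace().map(|t| t.to_lowercase()).collect())
        .collect();
    // Document frequency: in how many documents each term appears.
    let mut df: HashMap<String, usize> = HashMap::new();
    for tokens in &tokenized {
        let mut seen: Vec<&String> = tokens.iter().collect();
        seen.sort();
        seen.dedup();
        for t in seen {
            *df.entry(t.clone()).or_insert(0) += 1;
        }
    }
    tokenized
        .iter()
        .map(|tokens| {
            let mut tf: HashMap<String, f64> = HashMap::new();
            for t in tokens {
                *tf.entry(t.clone()).or_insert(0.0) += 1.0;
            }
            tf.into_iter()
                .map(|(t, count)| {
                    let idf = (n_docs / df[&t] as f64).ln() + 1.0;
                    (t, count * idf)
                })
                .collect()
        })
        .collect()
}

fn main() {
    let weights = tf_idf(&["the cat sat", "the dog sat", "the cat ran"]);
    // "the" appears in all 3 documents, so its idf factor is ln(3/3) + 1 = 1.0,
    // while "cat" (2 of 3 documents) gets a strictly larger weight.
    println!("{:?}", weights[0]);
}
```

Terms appearing in every document contribute only their raw count, while rarer terms are boosted, which is exactly the "more weight to entries that appear many times but only in a few documents" behavior described above.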
Implementations
impl TfIdfVectorizer[src]
pub fn convert_to_lowercase(self, convert_to_lowercase: bool) -> Self[src]
If true, all documents used for fitting will be converted to lowercase.
pub fn split_regex(self, regex_str: &str) -> Self[src]
Sets the regular expression used to split documents into tokens.
pub fn n_gram_range(self, min_n: usize, max_n: usize) -> Self[src]
If set to (1,1), single tokens will be candidate vocabulary entries; if (2,2), then adjacent token pairs will be considered; if (1,2), then both single tokens and adjacent token pairs will be considered, and so on. The definition of a token depends on the regex used for splitting the documents.
min_n should not be greater than max_n.
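The n-gram range can be sketched with a small helper that, for each n in (min_n, max_n), emits every run of n adjacent tokens. This is a hypothetical standalone illustration of the concept; it assumes the documents have already been split into tokens (linfa does this with the configured regex).

```rust
// Illustrative n-gram extraction for a (min_n, max_n) range.
fn ngrams(tokens: &[&str], min_n: usize, max_n: usize) -> Vec<String> {
    assert!(min_n >= 1 && min_n <= max_n, "min_n must be in 1..=max_n");
    let mut out = Vec::new();
    for n in min_n..=max_n {
        // Each window of n adjacent tokens becomes one candidate entry.
        for window in tokens.windows(n) {
            out.push(window.join(" "));
        }
    }
    out
}

fn main() {
    // With range (1,2): single tokens plus adjacent pairs.
    println!("{:?}", ngrams(&["a", "b", "c"], 1, 2));
}
```

With range (1,1) only the single tokens survive; with (2,2) only the pairs; with (1,2) both, matching the description above.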
pub fn normalize(self, normalize: bool) -> Self[src]
If true, all characters in the documents used for fitting will be normalized according to Unicode's NFKD normalization.
pub fn document_frequency(self, min_freq: f32, max_freq: f32) -> Self[src]
Specifies the minimum and maximum (relative) document frequencies that each vocabulary entry must satisfy.
min_freq and max_freq must lie in [0, 1] and min_freq should not be greater than max_freq.
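The effect of this filter can be sketched as follows: compute each term's relative document frequency (fraction of documents containing it) and keep only the terms inside [min_freq, max_freq]. This is a hypothetical standalone illustration, not linfa's internal implementation.

```rust
use std::collections::HashMap;

// Illustrative relative document-frequency filter, analogous to
// document_frequency(min_freq, max_freq). Documents are pre-tokenized.
fn filter_by_doc_freq(docs: &[Vec<&str>], min_freq: f32, max_freq: f32) -> Vec<String> {
    let n = docs.len() as f32;
    // Count, for each term, how many documents contain it at least once.
    let mut df: HashMap<&str, usize> = HashMap::new();
    for doc in docs {
        let mut uniq: Vec<&&str> = doc.iter().collect();
        uniq.sort();
        uniq.dedup();
        for t in uniq {
            *df.entry(*t).or_insert(0) += 1;
        }
    }
    let mut kept: Vec<String> = df
        .into_iter()
        .filter(|(_, c)| {
            let rel = *c as f32 / n;
            rel >= min_freq && rel <= max_freq
        })
        .map(|(t, _)| t.to_string())
        .collect();
    kept.sort();
    kept
}

fn main() {
    // "a" appears in 3/3 docs, "b" in 2/3, "c" in 1/3;
    // with bounds (0.5, 0.9) only "b" survives.
    let docs = vec![vec!["a", "b"], vec!["a", "c"], vec!["a", "b"]];
    println!("{:?}", filter_by_doc_freq(&docs, 0.5, 0.9));
}
```

Terms that are nearly ubiquitous (above max_freq) or very rare (below min_freq) are dropped from the candidate vocabulary.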
pub fn stopwords<T: ToString>(self, stopwords: &[T]) -> Self[src]
List of entries to be excluded from the generated vocabulary.
pub fn fit<T: ToString + Clone, D: Data<Elem = T>>(
&self,
x: &ArrayBase<D, Ix1>
) -> Result<FittedTfIdfVectorizer>[src]
Learns a vocabulary from the texts in x, according to the specified attributes and maps each
vocabulary entry to an integer value, producing a FittedTfIdfVectorizer.
Returns an error if:
- one of the n_gram boundaries is set to zero, or the minimum value is greater than the maximum value
- the minimum document frequency is greater than one or greater than the maximum frequency, or the maximum frequency is smaller than zero
pub fn fit_vocabulary<T: ToString>(
&self,
words: &[T]
) -> Result<FittedTfIdfVectorizer>[src]
Produces a FittedTfIdfVectorizer with the input vocabulary.
All struct attributes are ignored during fitting but will be used by the FittedTfIdfVectorizer to transform any text to be examined. As such, this will return an error in the same cases as the fit method.
pub fn fit_files<P: AsRef<Path>>(
&self,
input: &[P],
encoding: EncodingRef,
trap: DecoderTrap
) -> Result<FittedTfIdfVectorizer>[src]
Like fit, but reads the documents from the given files, decoding them with the specified encoding and error-handling trap.
Trait Implementations
impl Default for TfIdfVectorizer[src]
Auto Trait Implementations
impl RefUnwindSafe for TfIdfVectorizer
impl Send for TfIdfVectorizer
impl Sync for TfIdfVectorizer
impl Unpin for TfIdfVectorizer
impl UnwindSafe for TfIdfVectorizer