Struct linfa_preprocessing::tf_idf_vectorization::TfIdfVectorizer[][src]

pub struct TfIdfVectorizer { /* fields omitted */ }

Similar to CountVectorizer, but instead of just counting the term frequency of each vocabulary entry in each given document, it computes the term frequency times the inverse document frequency, thus giving more importance to entries that appear many times but only in some documents. The weighting function can be adjusted by setting the appropriate method. This struct provides the same string
processing customizations described in CountVectorizer.
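To make the weighting concrete, here is an illustrative tf-idf computation in plain Rust. This is a sketch of the general idea, not the crate's implementation; the idf variant shown (ln(n_docs / df) + 1) is one common choice and may differ from the weight function this struct uses by default.

```rust
use std::collections::HashMap;

/// Illustrative tf-idf weighting: weight(t, d) = tf(t, d) * (ln(n_docs / df(t)) + 1).
/// This is a sketch of the concept, not linfa_preprocessing's implementation.
fn tf_idf(docs: &[&str]) -> Vec<HashMap<String, f64>> {
    let n_docs = docs.len() as f64;
    let tokenized: Vec<Vec<String>> = docs
        .iter()
        .map(|d| d.split_whitespace().map(|t| t.to_lowercase()).collect())
        .collect();
    // document frequency: in how many documents each term appears
    let mut df: HashMap<String, usize> = HashMap::new();
    for tokens in &tokenized {
        let mut seen: Vec<&String> = tokens.iter().collect();
        seen.sort();
        seen.dedup();
        for t in seen {
            *df.entry(t.clone()).or_insert(0) += 1;
        }
    }
    // term frequency per document, scaled by idf
    tokenized
        .iter()
        .map(|tokens| {
            let mut tf: HashMap<String, f64> = HashMap::new();
            for t in tokens {
                *tf.entry(t.clone()).or_insert(0.0) += 1.0;
            }
            for (t, w) in tf.iter_mut() {
                let idf = (n_docs / df[t] as f64).ln() + 1.0;
                *w *= idf;
            }
            tf
        })
        .collect()
}

fn main() {
    let docs = ["the cat sat", "the dog sat", "the cat ran"];
    let weights = tf_idf(&docs);
    // "the" appears in every document, so its idf is ln(3/3) + 1 = 1,
    // while "dog" appears in only one, so its weight is boosted.
    println!("{:?}", weights[1]);
}
```

Note how the ubiquitous term "the" ends up with a lower weight than the rare term "dog", even though both occur once in the second document.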

Implementations

impl TfIdfVectorizer[src]

pub fn convert_to_lowercase(self, convert_to_lowercase: bool) -> Self[src]

If true, all documents used for fitting will be converted to lowercase.

pub fn split_regex(self, regex_str: &str) -> Self[src]

Sets the regex expression used to split documents into tokens

pub fn n_gram_range(self, min_n: usize, max_n: usize) -> Self[src]

If set to (1,1), single tokens will be candidate vocabulary entries; if (2,2), adjacent token pairs will be considered; if (1,2), both single tokens and adjacent token pairs will be considered; and so on. The definition of token depends on the regex used for splitting the documents.

min_n should not be greater than max_n
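The n-gram expansion described above can be sketched with a sliding window over the token list. This is an illustrative example of the (min_n, max_n) semantics, not the crate's internal code:

```rust
/// Illustrative n-gram expansion: for each n in min_n..=max_n, emit every
/// run of n adjacent tokens joined by a space.
fn ngrams(tokens: &[&str], min_n: usize, max_n: usize) -> Vec<String> {
    // mirrors the documented constraints: boundaries nonzero, min_n <= max_n
    assert!(min_n >= 1 && min_n <= max_n, "invalid n-gram range");
    let mut out = Vec::new();
    for n in min_n..=max_n {
        for window in tokens.windows(n) {
            out.push(window.join(" "));
        }
    }
    out
}

fn main() {
    let tokens = ["fast", "brown", "fox"];
    // (1, 2): single tokens plus adjacent pairs
    println!("{:?}", ngrams(&tokens, 1, 2));
    // → ["fast", "brown", "fox", "fast brown", "brown fox"]
}
```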

pub fn normalize(self, normalize: bool) -> Self[src]

If true, all characters in the documents used for fitting will be normalized according to Unicode’s NFKD normalization.

pub fn document_frequency(self, min_freq: f32, max_freq: f32) -> Self[src]

Specifies the minimum and maximum (relative) document frequencies that each vocabulary entry must satisfy. min_freq and max_freq must lie in [0, 1] and min_freq should not be greater than max_freq
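The effect of these bounds can be sketched as a pruning pass over the candidate vocabulary. This illustrates the relative-frequency semantics only; it is not how the crate implements the filter:

```rust
use std::collections::HashMap;

/// Illustrative pruning: keep only terms whose relative document frequency
/// (fraction of documents containing the term) lies within [min_freq, max_freq].
fn prune_vocabulary(docs: &[&str], min_freq: f32, max_freq: f32) -> Vec<String> {
    assert!((0.0..=1.0).contains(&min_freq) && min_freq <= max_freq);
    let n_docs = docs.len() as f32;
    let mut df: HashMap<&str, usize> = HashMap::new();
    for doc in docs {
        let mut tokens: Vec<&str> = doc.split_whitespace().collect();
        tokens.sort();
        tokens.dedup();
        for t in tokens {
            *df.entry(t).or_insert(0) += 1;
        }
    }
    let mut kept: Vec<String> = df
        .into_iter()
        .filter(|(_, count)| {
            let rel = *count as f32 / n_docs;
            rel >= min_freq && rel <= max_freq
        })
        .map(|(t, _)| t.to_string())
        .collect();
    kept.sort();
    kept
}

fn main() {
    let docs = ["the cat", "the dog", "the bird", "a fish"];
    // "the" appears in 75% of documents and is dropped as too frequent;
    // every other term appears in exactly 25% and survives.
    println!("{:?}", prune_vocabulary(&docs, 0.25, 0.6));
}
```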

pub fn stopwords<T: ToString>(self, stopwords: &[T]) -> Self[src]

List of entries to be excluded from the generated vocabulary.

pub fn fit<T: ToString + Clone, D: Data<Elem = T>>(
    &self,
    x: &ArrayBase<D, Ix1>
) -> Result<FittedTfIdfVectorizer>
[src]

Learns a vocabulary from the texts in x, according to the specified attributes and maps each vocabulary entry to an integer value, producing a FittedTfIdfVectorizer.

Returns an error if:

  • one of the n_gram boundaries is set to zero, or the minimum boundary is greater than the maximum
  • the minimum document frequency is greater than one or greater than the maximum frequency, or the maximum frequency is smaller than zero
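Conceptually, fitting assigns each surviving vocabulary entry a stable integer index (the column it will occupy in the transformed output). A minimal sketch of that mapping step, leaving out the n-gram and frequency machinery above:

```rust
use std::collections::BTreeMap;

/// Illustrative vocabulary indexing: map each distinct term to an integer,
/// as a fitted vectorizer does before computing tf-idf columns.
fn index_vocabulary(terms: &[&str]) -> BTreeMap<String, usize> {
    let mut vocab = BTreeMap::new();
    for t in terms {
        // assign the next free index only on first sight of the term
        let next = vocab.len();
        vocab.entry(t.to_string()).or_insert(next);
    }
    vocab
}

fn main() {
    let vocab = index_vocabulary(&["cat", "dog", "cat", "fish"]);
    // each distinct term gets one index; duplicates are ignored
    for (term, idx) in &vocab {
        println!("{term} -> {idx}");
    }
}
```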

pub fn fit_vocabulary<T: ToString>(
    &self,
    words: &[T]
) -> Result<FittedTfIdfVectorizer>
[src]

Produces a FittedTfIdfVectorizer with the given vocabulary. All struct attributes are ignored during fitting but will be used by the FittedTfIdfVectorizer to transform any text to be examined. As such, this returns an error in the same cases as the fit method.

pub fn fit_files<P: AsRef<Path>>(
    &self,
    input: &[P],
    encoding: EncodingRef,
    trap: DecoderTrap
) -> Result<FittedTfIdfVectorizer>
[src]

Like fit, but learns the vocabulary from the contents of the given files, decoded with the specified encoding and decoder trap.

Trait Implementations

impl Default for TfIdfVectorizer[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,