Struct linfa_preprocessing::count_vectorization::CountVectorizer
pub struct CountVectorizer { /* fields omitted */ }

Count vectorizer: learns a vocabulary from a sequence of documents (or file paths) and maps each vocabulary entry to an integer value, producing a FittedCountVectorizer that can be used to count the occurrences of each vocabulary entry in any sequence of documents. Alternatively, a user-specified vocabulary can be used for fitting.
Attributes
If a user-defined vocabulary is used for fitting, the following attributes are not considered during the fitting phase, but they are still used by the FittedCountVectorizer to transform any text to be examined.
- split_regex: the regex expression used to split documents into tokens. Defaults to r"\b\w\w+\b", which selects "words", using whitespace and punctuation symbols as separators.
- convert_to_lowercase: if true, all documents used for fitting will be converted to lowercase. Defaults to true.
- n_gram_range: if set to (1,1), single tokens will be candidate vocabulary entries; if (2,2), adjacent token pairs will be considered; if (1,2), both single tokens and adjacent token pairs will be considered; and so on. The definition of a token depends on the regex used for splitting the documents. The default value is (1,1).
- normalize: if true, all characters in the documents used for fitting will be normalized according to Unicode's NFKD normalization. Defaults to true.
- document_frequency: specifies the minimum and maximum (relative) document frequencies that each vocabulary entry must satisfy. Defaults to (0., 1.) (i.e. 0% minimum and 100% maximum).
- stopwords: optional list of entries to be excluded from the generated vocabulary. Defaults to None.
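The token and n-gram semantics described above can be sketched in plain Rust. This is an illustrative, std-only sketch, not the crate's implementation: the simple alphanumeric splitter below only approximates the default r"\b\w\w+\b" regex (runs of at least two alphanumeric characters, lowercased), and the helper names are made up for this example.

```rust
// Approximation of the default split: lowercase, then keep
// alphanumeric runs of length >= 2 (mirroring r"\b\w\w+\b").
fn tokenize(doc: &str) -> Vec<String> {
    doc.to_lowercase()
        .split(|c: char| !c.is_alphanumeric())
        .filter(|t| t.len() >= 2)
        .map(str::to_string)
        .collect()
}

// Emit every n-gram with n in [min_n, max_n] as a space-joined string,
// illustrating what n_gram_range considers as candidate entries.
fn ngrams(tokens: &[String], min_n: usize, max_n: usize) -> Vec<String> {
    let mut out = Vec::new();
    for n in min_n..=max_n {
        for window in tokens.windows(n) {
            out.push(window.join(" "));
        }
    }
    out
}

fn main() {
    let tokens = tokenize("The quick brown fox");
    // (1,1): single tokens only.
    assert_eq!(ngrams(&tokens, 1, 1), vec!["the", "quick", "brown", "fox"]);
    // (1,2): single tokens followed by adjacent pairs.
    assert_eq!(
        ngrams(&tokens, 1, 2),
        vec!["the", "quick", "brown", "fox", "the quick", "quick brown", "brown fox"]
    );
    println!("ok");
}
```

Note how a pair such as "quick brown" only becomes a candidate once max_n reaches 2, which is why the definition of a token (and hence of the split regex) directly shapes the vocabulary.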
Implementations
impl CountVectorizer

pub fn convert_to_lowercase(self, convert_to_lowercase: bool) -> Self
If true, all documents used for fitting will be converted to lowercase.
pub fn split_regex(self, regex_str: &str) -> Self
Sets the regex expression used to split documents into tokens.
pub fn n_gram_range(self, min_n: usize, max_n: usize) -> Self
If set to (1,1) single tokens will be candidate vocabulary entries, if (2,2) then adjacent token pairs will be considered,
if (1,2) then both single tokens and adjacent token pairs will be considered, and so on. The definition of token depends on the
regex used for splitting the documents.
min_n should not be greater than max_n.
pub fn normalize(self, normalize: bool) -> Self
If true, all characters in the documents used for fitting will be normalized according to Unicode's NFKD normalization.
pub fn document_frequency(self, min_freq: f32, max_freq: f32) -> Self
Specifies the minimum and maximum (relative) document frequencies that each vocabulary entry must satisfy.
min_freq and max_freq must lie in [0, 1], and min_freq should not be greater than max_freq.
pub fn stopwords<T: ToString>(self, stopwords: &[T]) -> Self
List of entries to be excluded from the generated vocabulary.
pub fn fit<T: ToString + Clone, D: Data<Elem = T>>(
    &self,
    x: &ArrayBase<D, Ix1>
) -> Result<FittedCountVectorizer>
Learns a vocabulary from the documents in x, according to the specified attributes and maps each
vocabulary entry to an integer value, producing a FittedCountVectorizer.
Returns an error if:
- one of the n_gram boundaries is set to zero, or the minimum value is greater than the maximum value
- the minimum document frequency is greater than one or greater than the maximum frequency, or the maximum frequency is smaller than zero
- the regex expression used for splitting is invalid
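The relative document-frequency filter that fit applies can be illustrated with a small std-only sketch. The helper below is hypothetical (not linfa's code): it keeps an entry only when the fraction of documents containing it lies within [min_freq, max_freq], which is what the document_frequency bounds express.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the document-frequency filter: an entry survives
// only if (documents containing it) / (total documents) is in
// [min_freq, max_freq].
fn filter_by_document_frequency(
    docs: &[Vec<&str>],
    min_freq: f32,
    max_freq: f32,
) -> Vec<String> {
    let n_docs = docs.len() as f32;
    let mut doc_counts: HashMap<&str, usize> = HashMap::new();
    for doc in docs {
        // Count each token at most once per document.
        let mut seen = doc.clone();
        seen.sort();
        seen.dedup();
        for token in seen {
            *doc_counts.entry(token).or_insert(0) += 1;
        }
    }
    let mut kept: Vec<String> = doc_counts
        .into_iter()
        .filter(|(_, c)| {
            let f = *c as f32 / n_docs;
            f >= min_freq && f <= max_freq
        })
        .map(|(t, _)| t.to_string())
        .collect();
    kept.sort();
    kept
}

fn main() {
    let docs = vec![
        vec!["one", "shared"],
        vec!["two", "shared"],
        vec!["three", "shared"],
    ];
    // "shared" appears in 3/3 documents; capping max_freq at 0.9 drops it,
    // which is how overly common words can be pruned from the vocabulary.
    assert_eq!(
        filter_by_document_frequency(&docs, 0.0, 0.9),
        vec!["one", "three", "two"]
    );
    println!("ok");
}
```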
pub fn fit_files<P: AsRef<Path>>(
    &self,
    input: &[P],
    encoding: EncodingRef,
    trap: DecoderTrap
) -> Result<FittedCountVectorizer>
Learns a vocabulary from the documents contained in the files in input, according to the specified attributes and maps each
vocabulary entry to an integer value, producing a FittedCountVectorizer.
The files will be read using the specified encoding, and any sequence unrecognized by the encoding will be handled
according to trap.
Returns an error if:
- one of the n_gram boundaries is set to zero, or the minimum value is greater than the maximum value
- the minimum document frequency is greater than one or greater than the maximum frequency, or the maximum frequency is smaller than zero
- the regex expression used for splitting is invalid
- one of the files could not be opened
- the trap is strict and an unrecognized sequence is encountered in one of the files
pub fn fit_vocabulary<T: ToString>(
    &self,
    words: &[T]
) -> Result<FittedCountVectorizer>
Produces a FittedCountVectorizer with the input vocabulary.
All struct attributes are ignored during fitting, but they will be used by the FittedCountVectorizer
to transform any text to be examined. As such, this method returns an error in the same cases as fit.
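Conceptually, the FittedCountVectorizer produced here maps each user-supplied vocabulary entry to an integer index and counts that entry's occurrences in each document. The sketch below illustrates this mapping with std-only Rust; the function name, the naive whitespace split, and the plain lowercasing are assumptions of this example, not the crate's API.

```rust
use std::collections::HashMap;

// Hypothetical illustration of a fitted vectorizer's mapping: each
// vocabulary entry gets an integer index, and transforming a document
// yields per-entry occurrence counts at those indices.
fn count_with_vocabulary(vocabulary: &[&str], doc: &str) -> Vec<usize> {
    let index: HashMap<&str, usize> = vocabulary
        .iter()
        .enumerate()
        .map(|(i, w)| (*w, i))
        .collect();
    let mut counts = vec![0usize; vocabulary.len()];
    for token in doc.to_lowercase().split_whitespace() {
        if let Some(&i) = index.get(token) {
            counts[i] += 1;
        }
    }
    counts
}

fn main() {
    let vocab = ["apple", "banana"];
    // "apple" occurs twice, "banana" once; anything outside the
    // user-specified vocabulary is simply ignored.
    let counts = count_with_vocabulary(&vocab, "Apple tart and apple pie with banana");
    assert_eq!(counts, vec![2, 1]);
    println!("ok");
}
```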
Trait Implementations
impl Default for CountVectorizer

Auto Trait Implementations

impl RefUnwindSafe for CountVectorizer
impl Send for CountVectorizer
impl Sync for CountVectorizer
impl Unpin for CountVectorizer
impl UnwindSafe for CountVectorizer