Enum elasticsearch_dsl::analyze::TokenType
pub enum TokenType {
Alphanum,
Synonym,
Word,
Hangul,
Num,
Email,
Apostrophe,
Double,
Katakana,
Acronym,
Gram,
Fingerprint,
Shingle,
Other(String),
}
Type of token
Variants
Alphanum
Alphanumeric token
Synonym
Synonym token
Word
Word token
Hangul
Hangul (Korean alphabet) token
Num
Numeric token
Email
Email token
Apostrophe
Words with apostrophe token
Double
CJK (Chinese, Japanese, and Korean) tokens
Katakana
Normalized CJK (Chinese, Japanese, and Korean) token. Normalizes width differences in CJK characters as follows: folds full-width ASCII character variants into the equivalent basic Latin characters, and folds half-width Katakana character variants into the equivalent Kana characters.
Acronym
Acronym token
Gram
Gram token
Fingerprint
Fingerprint token
Shingle
Shingle token
Other(String)
Other token
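
Because every built-in token type is a distinct unit variant and anything unrecognized falls back to Other(String), a match over TokenType usually wants a catch-all arm. A minimal sketch; the describe helper below is hypothetical and not part of this crate:

use elasticsearch_dsl::analyze::TokenType;

// Hypothetical helper: maps a token type to a short human-readable label.
fn describe(token_type: &TokenType) -> &str {
    match token_type {
        TokenType::Alphanum => "alphanumeric",
        TokenType::Num => "numeric",
        TokenType::Synonym => "synonym",
        TokenType::Hangul => "Hangul",
        // Token types without a dedicated variant are carried through
        // verbatim in `Other`.
        TokenType::Other(raw) => raw.as_str(),
        _ => "other built-in token type",
    }
}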
Trait Implementations
impl<'de> Deserialize<'de> for TokenType
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
    D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
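
Since TokenType implements Deserialize, it can be decoded straight from the JSON returned by Elasticsearch's _analyze API. A minimal sketch assuming a serde_json dependency; the wire string used here ("<ALPHANUM>") and whether the crate maps it to Alphanum or lets it fall through to Other are assumptions, not documented on this page:

use elasticsearch_dsl::analyze::TokenType;

fn main() -> serde_json::Result<()> {
    // "<ALPHANUM>" is the string the _analyze API typically reports for
    // alphanumeric tokens; the exact mapping to a variant is assumed here.
    let token_type: TokenType = serde_json::from_value(serde_json::json!("<ALPHANUM>"))?;

    match token_type {
        TokenType::Alphanum => println!("mapped to the Alphanum variant"),
        TokenType::Other(raw) => println!("kept verbatim as Other: {raw}"),
        _ => println!("mapped to another built-in variant"),
    }
    Ok(())
}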