Struct bert_tokenizer::BasicTokenizer
pub struct BasicTokenizer { /* private fields */ }
A basic tokenizer that performs punctuation splitting, lower casing, and similar pre-processing. By default, it does not lower case its input.
Example
use bert_tokenizer::{BasicTokenizer, Tokenizer};
let tokenizer = BasicTokenizer::default();
let tokens = tokenizer.tokenize("Hello, World!");
assert_eq!(tokens, vec!["Hello", ",", "World", "!"]);
To enable lower casing, use the do_lower_case builder method:
use bert_tokenizer::{BasicTokenizer, Tokenizer};
let tokenizer = BasicTokenizer::do_lower_case(true).build();
let tokens = tokenizer.tokenize("Hello, World!");
assert_eq!(tokens, vec!["hello", ",", "world", "!"]);
Implementations§
impl BasicTokenizer
pub fn do_lower_case(do_lower_case: bool) -> BasicTokenizerBuilder
Trait Implementations§
impl Default for BasicTokenizer
fn default() -> BasicTokenizer
Returns the “default value” for a type.
impl Tokenizer for BasicTokenizer
fn tokenize(&self, text: &str) -> Vec<String>
Apply basic tokenization (punctuation splitting, lower casing, etc.) to a piece of text.
Arguments
text - Text to tokenize
Returns
Vec<String> - Vector of tokens
Example
use bert_tokenizer::{BasicTokenizer, Tokenizer};
let tokenizer = BasicTokenizer::default();
let tokens = tokenizer.tokenize("Hello, World!");
assert_eq!(tokens, vec!["Hello", ",", "World", "!"]);