Documentation
## Tokenizers

This crate provides multiple tokenizers built on top of `Scanner`.

- `EbnfTokenizer`: tokenizes an EBNF grammar.
```rust
let grammar = r#"
expr := expr ('+'|'-') term | term ;
term := term ('*'|'/') factor | factor ;
factor := '-' factor | power ;
power := ufact '^' factor | ufact ;
ufact := ufact '!' | group ;
group := num | '(' expr ')' ;
"#;
// The constructor takes the grammar and the input to tokenize;
// "3+4*2" here is just a sample input.
let mut tok = EbnfTokenizer::new(grammar, "3+4*2");
```
- `LispTokenizer`: tokenizes Lisp-like input.

```rust
LispTokenizer::new("(+ 3 4 5)"); // sample input
```
- `MathTokenizer`: emits `MathToken` tokens.

```rust
MathTokenizer::new("3.4e-2 * sin(x)/(7! % -4)"); // sample input
```
- `DelimTokenizer`: emits tokens split by some delimiter.
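The delimiter-splitting behavior can be sketched with the standard library alone (this is only an illustration of the kind of token stream a delimiter tokenizer emits, not the `DelimTokenizer` API itself):

```rust
fn main() {
    // Splitting on ',' yields one token per field, mirroring
    // what a delimiter-based tokenizer produces for its input.
    let tokens: Vec<&str> = "3+4*2,x,y".split(',').collect();
    assert_eq!(tokens, vec!["3+4*2", "x", "y"]);
}
```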
## Scanner

`Scanner` is the building block for implementing tokenizers. You can build one from an `Iterator` and use it to extract tokens. Check the above-mentioned tokenizers for examples.

### Example
```rust
// Define a Tokenizer: an Iterator over tokens built on lexers::Scanner
struct Tokenizer<I: Iterator<Item=char>>(lexers::Scanner<I>);

impl<I: Iterator<Item=char>> Iterator for Tokenizer<I> {
    type Item = String;
    fn next(&mut self) -> Option<Self::Item> {
        self.0.scan_whitespace();
        self.0.scan_math_op()
            .or_else(|| self.0.scan_number())
            .or_else(|| self.0.scan_identifier())
    }
}

fn tokenizer<I: Iterator<Item=char>>(input: I) -> Tokenizer<I> {
    Tokenizer(lexers::Scanner::new(input))
}

// Use it to tokenize a math expression
let mut lx = tokenizer("3+4*2".chars());
let token = lx.next();
```
### Tips
- `scan_X` functions try to consume some text-object out of the scanner, for example numbers, identifiers, quoted strings, etc.
- `buffer_pos` and `set_buffer_pos` are used for back-tracking as long as the `Scanner`'s buffer still has the data you need, meaning you haven't consumed or discarded it.
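The back-tracking tip can be illustrated with a minimal, self-contained sketch. This mimics the `buffer_pos`/`set_buffer_pos` pattern over an in-memory buffer; it is not the crate's actual `Scanner`, which buffers an arbitrary `Iterator`:

```rust
// Minimal scanner with save/restore of its buffer position,
// mimicking buffer_pos / set_buffer_pos style back-tracking.
struct MiniScanner {
    buf: Vec<char>,
    pos: usize,
}

impl MiniScanner {
    fn new(src: &str) -> Self {
        MiniScanner { buf: src.chars().collect(), pos: 0 }
    }
    fn buffer_pos(&self) -> usize { self.pos }
    fn set_buffer_pos(&mut self, pos: usize) { self.pos = pos; }
    // Try to consume a run of ASCII digits; on failure rewind.
    fn scan_number(&mut self) -> Option<String> {
        let start = self.buffer_pos();
        let mut out = String::new();
        while let Some(&c) = self.buf.get(self.pos) {
            if c.is_ascii_digit() { out.push(c); self.pos += 1; } else { break; }
        }
        if out.is_empty() {
            self.set_buffer_pos(start); // back-track: nothing matched
            None
        } else {
            Some(out)
        }
    }
}

fn main() {
    let mut s = MiniScanner::new("42+x");
    assert_eq!(s.scan_number(), Some("42".to_string()));
    assert_eq!(s.scan_number(), None); // '+' is not a digit; position restored
    assert_eq!(s.buffer_pos(), 2);     // still pointing at '+'
}
```

The key point is that a failed `scan_X` attempt restores the saved position, so another `scan_X` can retry from the same spot, which only works while the needed data is still in the buffer.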