Create ridiculously fast Lexers.
Logos works by:
- Resolving all logical branching of token definitions into a state machine.
- Optimizing complex patterns into Lookup Tables.
- Avoiding backtracking, unwinding loops, and batching reads to minimize bounds checking.
In practice, this means that for most grammars lexing performance is virtually unaffected by the number of tokens defined in the grammar. Or, in other words, it is really fast.
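To illustrate the lookup-table idea mentioned above, here is a toy sketch of classifying input bytes with a 256-entry table. This is illustrative only, not the code Logos actually generates:

```rust
// A toy sketch of byte classification via a 256-entry lookup table.
// Illustrative only; this is not the code Logos generates.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Class {
    Letter,
    Digit,
    Other,
}

fn build_table() -> [Class; 256] {
    let mut table = [Class::Other; 256];

    for b in 0..=255u8 {
        if b.is_ascii_alphabetic() {
            table[b as usize] = Class::Letter;
        } else if b.is_ascii_digit() {
            table[b as usize] = Class::Digit;
        }
    }

    table
}

fn main() {
    let table = build_table();

    // One indexed load per byte instead of a cascade of range checks.
    assert_eq!(table[b'f' as usize], Class::Letter);
    assert_eq!(table[b'7' as usize], Class::Digit);
    assert_eq!(table[b'.' as usize], Class::Other);
}
```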
## Example
```rust
use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[end]
    End,

    #[error]
    Error,

    #[token = "fast"]
    Fast,

    #[token = "."]
    Period,

    #[regex = "[a-zA-Z]+"]
    Text,
}

fn main() {
    let mut lexer = Token::lexer("Create ridiculously fast Lexers.");

    assert_eq!(lexer.token, Token::Text);
    assert_eq!(lexer.slice(), "Create");
    assert_eq!(lexer.range(), 0..6);

    lexer.advance();

    assert_eq!(lexer.token, Token::Text);
    assert_eq!(lexer.slice(), "ridiculously");
    assert_eq!(lexer.range(), 7..19);

    lexer.advance();

    assert_eq!(lexer.token, Token::Fast);
    assert_eq!(lexer.slice(), "fast");
    assert_eq!(lexer.range(), 20..24);

    lexer.advance();

    assert_eq!(lexer.token, Token::Text);
    assert_eq!(lexer.slice(), "Lexers");
    assert_eq!(lexer.range(), 25..31);

    lexer.advance();

    assert_eq!(lexer.token, Token::Period);
    assert_eq!(lexer.slice(), ".");
    assert_eq!(lexer.range(), 31..32);

    lexer.advance();

    assert_eq!(lexer.token, Token::End);
}
```
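Since the lexer exposes `token`, `slice()`, and `advance()`, a whole input can be consumed with a plain loop. A minimal sketch, reusing the `Token` enum defined above:

```rust
fn main() {
    let mut lexer = Token::lexer("Create ridiculously fast Lexers.");

    // Advance until the lexer reports the `End` token.
    while lexer.token != Token::End {
        if lexer.token == Token::Error {
            panic!("unexpected input at {:?}", lexer.range());
        }

        println!("{:?} => {:?}", lexer.token, lexer.slice());
        lexer.advance();
    }
}
```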
## Callbacks
On top of using the enum variants, Logos can also call arbitrary functions whenever a pattern is matched:
```rust
use logos::{Logos, Lexer, Extras};

#[derive(Default)]
struct TokenExtras {
    denomination: u32,
}

impl Extras for TokenExtras {}

fn one<S>(lexer: &mut Lexer<Token, S>) {
    lexer.extras.denomination = 1;
}

fn kilo<S>(lexer: &mut Lexer<Token, S>) {
    lexer.extras.denomination = 1_000;
}

fn mega<S>(lexer: &mut Lexer<Token, S>) {
    lexer.extras.denomination = 1_000_000;
}

#[derive(Logos, Debug, PartialEq)]
#[extras = "TokenExtras"]
enum Token {
    #[end]
    End,

    #[error]
    Error,

    #[regex("[0-9]+", callback = "one")]
    #[regex("[0-9]+k", callback = "kilo")]
    #[regex("[0-9]+m", callback = "mega")]
    Number,
}

fn main() {
    let mut lexer = Token::lexer("5 42k 75m");

    assert_eq!(lexer.token, Token::Number);
    assert_eq!(lexer.slice(), "5");
    assert_eq!(lexer.extras.denomination, 1);

    lexer.advance();

    assert_eq!(lexer.token, Token::Number);
    assert_eq!(lexer.slice(), "42k");
    assert_eq!(lexer.extras.denomination, 1_000);

    lexer.advance();

    assert_eq!(lexer.token, Token::Number);
    assert_eq!(lexer.slice(), "75m");
    assert_eq!(lexer.extras.denomination, 1_000_000);

    lexer.advance();

    assert_eq!(lexer.token, Token::End);
}
```
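The stored denomination can then be combined with the matched slice to recover the numeric value. The helper below is hypothetical (not part of Logos) and assumes tokens shaped like the `Number` patterns above:

```rust
// Hypothetical helper, not part of Logos: compute the value of a
// matched token like "42k" from its slice and the denomination
// recorded by the callback.
fn value_of(slice: &str, denomination: u32) -> u64 {
    let digits = slice.trim_end_matches(|c: char| c.is_ascii_alphabetic());

    digits.parse::<u64>().expect("Number tokens start with digits") * u64::from(denomination)
}

fn main() {
    assert_eq!(value_of("5", 1), 5);
    assert_eq!(value_of("42k", 1_000), 42_000);
    assert_eq!(value_of("75m", 1_000_000), 75_000_000);
}
```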
## Token disambiguation
Rule of thumb is:
- Longer beats shorter.
- Specific beats generic.
If any two definitions could match the same input, like `fast` and `[a-zA-Z]+` in the example above, it's the longer and more specific definition of `Token::Fast` that will be the result.
This is done by comparing the numeric priority attached to each definition. Every consecutive, non-repeating single byte adds 2 to the priority, while every range or regex class adds 1. Loops or optional blocks are ignored, while alternations count the shortest alternative:
- `[a-zA-Z]+` has a priority of 1 (lowest possible), because at minimum it can match a single byte to a class.
- `foobar` has a priority of 12.
- `(foo|hello)(bar)?` has a priority of 6, `foo` being its shortest possible match.
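These rules are mechanical enough to express in a few lines. The sketch below is not Logos's internals; it is a toy pattern AST with a `priority` function that reproduces the three numbers above under the stated rules:

```rust
// A toy sketch, not Logos's internals: compute pattern priority
// following the rules described above.
#[derive(Clone)]
enum Pattern {
    Byte,                 // a single literal byte, e.g. `f`
    Class,                // a range or class, e.g. `[a-zA-Z]`
    Seq(Vec<Pattern>),    // concatenation, e.g. `foobar`
    Alt(Vec<Pattern>),    // alternation, e.g. `foo|hello`
    Plus(Box<Pattern>),   // `+`: one occurrence is mandatory, the loop is ignored
    Opt(Box<Pattern>),    // `?` or `*`: may match nothing at all
}

fn priority(p: &Pattern) -> usize {
    match p {
        Pattern::Byte => 2,  // consecutive, non-repeating single byte adds 2
        Pattern::Class => 1, // range or regex class adds 1
        Pattern::Seq(items) => items.iter().map(priority).sum(),
        // Alternations count only the shortest alternative.
        Pattern::Alt(items) => items.iter().map(priority).min().unwrap_or(0),
        // `x+` counts one mandatory match; the repetition itself is ignored.
        Pattern::Plus(inner) => priority(inner),
        // Optional blocks are ignored entirely.
        Pattern::Opt(_) => 0,
    }
}

fn main() {
    use Pattern::*;

    // [a-zA-Z]+ : one mandatory class match
    assert_eq!(priority(&Plus(Box::new(Class))), 1);

    // foobar : six consecutive single bytes
    assert_eq!(priority(&Seq(vec![Byte; 6])), 12);

    // (foo|hello)(bar)? : shortest alternative `foo`, optional `bar` ignored
    let pattern = Seq(vec![
        Alt(vec![Seq(vec![Byte; 3]), Seq(vec![Byte; 5])]),
        Opt(Box::new(Seq(vec![Byte; 3]))),
    ]);
    assert_eq!(priority(&pattern), 6);
}
```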