pub struct Lexer<'a> { /* private fields */ }
A standalone lexer that produces tokens on demand without heap allocation.
The lexer processes input bytes and yields Token values one at a time.
It handles whitespace/comment skipping, number parsing, string escape
sequences, character literals, and symbol case-folding.
§No-std Compatible
The lexer uses fixed-size stack buffers for string and symbol content, avoiding any heap allocation. String literals are limited to 1024 characters and symbols to 64 characters.
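As an illustration of the fixed-size-buffer approach described above, here is a minimal self-contained sketch (not `grift_parser`'s actual internals; `FixedBuf`, `push`, and `as_slice` are hypothetical names) of a stack buffer that reports capacity overflow as an error instead of reallocating:

```rust
// Illustrative sketch: a fixed-capacity stack buffer of the kind a no-std
// lexer can use for string content. No heap allocation occurs; exceeding
// the capacity is an error rather than a reallocation.
const STRING_CAP: usize = 1024; // the documented limit for string literals

struct FixedBuf {
    data: [char; STRING_CAP],
    len: usize,
}

impl FixedBuf {
    fn new() -> Self {
        FixedBuf { data: ['\0'; STRING_CAP], len: 0 }
    }

    // Append one char; Err(()) once the fixed capacity is exhausted.
    fn push(&mut self, c: char) -> Result<(), ()> {
        if self.len == STRING_CAP {
            return Err(());
        }
        self.data[self.len] = c;
        self.len += 1;
        Ok(())
    }

    // Only the first `len` slots are valid content.
    fn as_slice(&self) -> &[char] {
        &self.data[..self.len]
    }
}

fn main() {
    let mut buf = FixedBuf::new();
    for c in "hello".chars() {
        buf.push(c).unwrap();
    }
    println!("{}", buf.as_slice().iter().collect::<String>());
}
```

This is why the documented limits (1024 characters for strings, 64 for symbols) exist: the buffers are sized at compile time.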
§Example
```rust
use grift_parser::Lexer;

let mut lexer = Lexer::new("(+ 1 2)");
while let Some(result) = lexer.next_token() {
    let spanned = result.unwrap();
    // Process token...
}
```
§Implementations
impl<'a> Lexer<'a>
pub fn from_bytes(input: &'a [u8]) -> Lexer<'a>
Create a new lexer from a byte slice.
pub fn symbol_bytes(&self, len: usize) -> &[u8]
Get the lowercased symbol bytes from the last Token::Symbol produced.
The len field from Token::Symbol { len } indicates how many
bytes are valid.
pub fn string_chars(&self, len: usize) -> &[char]
Get the string characters from the last Token::String produced.
The len field from Token::String { len } indicates how many
characters are valid.
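Both accessors follow the same pattern: the token carries only a length, and the caller feeds that length back to the lexer, which owns the buffer. A self-contained sketch of the idea (`MiniLexer` and `Tok` are hypothetical names, not part of `grift_parser`):

```rust
// Sketch of the "token carries a length, lexer owns the buffer" pattern
// behind symbol_bytes/string_chars. Not the crate's real types.
struct MiniLexer {
    symbol_buf: [u8; 64], // symbols are limited to 64 bytes
}

enum Tok {
    Symbol { len: usize },
}

impl MiniLexer {
    // Mirror of symbol_bytes: slice the internal buffer by the token's len.
    fn symbol_bytes(&self, len: usize) -> &[u8] {
        &self.symbol_buf[..len]
    }
}

fn main() {
    let mut lx = MiniLexer { symbol_buf: [0; 64] };
    // Pretend the lexer just lowercased "foo" into its buffer and
    // produced Tok::Symbol { len: 3 }.
    lx.symbol_buf[..3].copy_from_slice(b"foo");
    let tok = Tok::Symbol { len: 3 };
    let len = match tok {
        Tok::Symbol { len } => len,
    };
    assert_eq!(lx.symbol_bytes(len), b"foo");
}
```

Keeping only a `len` in the token is what lets `Token` stay small and `Copy`-friendly while the content lives in the lexer's fixed buffers.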
pub fn has_more(&mut self) -> bool
Check if there’s more input (after skipping whitespace/comments)
pub fn next_token(&mut self) -> Option<Result<SpannedToken, LexError>>
Produce the next token.
Returns None when there is no more input (after skipping whitespace and comments).
Returns Some(Err(...)) for lexer errors.
Returns Some(Ok(...)) for successfully lexed tokens.
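A sketch of driving this `Option<Result<..>>` protocol, using a stand-in lexer (`MockLexer` here is hypothetical, and plain `&str` values stand in for the real `SpannedToken`/`LexError` types):

```rust
// Stand-in for Lexer: yields Some(Ok(..)), Some(Err(..)), then None.
struct MockLexer {
    toks: Vec<Result<&'static str, &'static str>>,
}

impl MockLexer {
    // Same shape as Lexer::next_token: None means input is exhausted.
    fn next_token(&mut self) -> Option<Result<&'static str, &'static str>> {
        if self.toks.is_empty() {
            None
        } else {
            Some(self.toks.remove(0))
        }
    }
}

fn main() {
    let mut lx = MockLexer {
        toks: vec![Ok("lparen"), Ok("num:1"), Err("bad byte")],
    };
    // None ends the loop; Err is a diagnostic; Ok is a token to process.
    while let Some(result) = lx.next_token() {
        match result {
            Ok(tok) => println!("token: {tok}"),
            Err(e) => {
                eprintln!("lex error: {e}");
                break;
            }
        }
    }
}
```

The three-way contract (None / Some(Ok) / Some(Err)) maps cleanly onto a `while let` loop with an inner `match`, as in the example at the top of this page.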