This module takes care of lexing Python source text.
This means the source code is scanned and translated into separate tokens. The rules governing what is and is not a valid token are defined in the Lexical analysis section of the Python reference guide.
The primary function in this module is lex, which takes a string slice and returns an iterator over the tokens in the source code. The tokens are currently returned as a Result<Spanned, LexicalError>, where Spanned is a tuple containing the start and end TextSize and a Tok denoting the token.
§Example
use rustpython_parser::{lexer::lex, Mode};

let source = "x = 'RustPython'";
let tokens = lex(source, Mode::Module)
    .map(|tok| tok.expect("Failed to lex"))
    .collect::<Vec<_>>();

for (token, range) in tokens {
    println!("{token:?}@{range:?}");
}
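If you want to report lexical errors instead of panicking on them, match on each Result item directly. A minimal sketch, assuming the iterator yields Result<Spanned, LexicalError> as described above and that LexicalError implements Debug:

use rustpython_parser::{lexer::lex, Mode};

let source = "x = 'unterminated";
for result in lex(source, Mode::Module) {
    match result {
        // A successfully lexed token and its range in the source.
        Ok((token, range)) => println!("{token:?}@{range:?}"),
        // A lexical error, e.g. the unterminated string above.
        Err(err) => eprintln!("lexical error: {err:?}"),
    }
}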
Structs§
- Lexer - A lexer for Python source code.
- LexicalError - Represents an error that occurs during lexing and is returned by the parse_* functions in the iterator in the lexer implementation.
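A short sketch of inspecting a LexicalError once one is found. The error and location field names are an assumption based on the error carrying a LexicalErrorType and a position; check the struct definition for the exact shape:

use rustpython_parser::{lexer::lex, Mode};

// An unterminated string literal is one way to trigger a lexical error.
if let Some(Err(err)) = lex("x = 'abc", Mode::Module).find(|r| r.is_err()) {
    // `err.error` (a LexicalErrorType) and `err.location` (a TextSize)
    // are assumed field names.
    eprintln!("error {:?} at offset {:?}", err.error, err.location);
}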
Enums§
- LexicalErrorType - Represents the different types of errors that can occur during lexing.
Statics§
- KEYWORDS - A map of keywords to their tokens.
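A quick sketch of a keyword lookup. This assumes KEYWORDS supports a get-style lookup from a keyword string to its Tok, as both phf::Map and the standard map types do:

use rustpython_parser::lexer::KEYWORDS;

// Look up the token a keyword lexes to; `get` is an assumed method here.
if let Some(tok) = KEYWORDS.get("def") {
    println!("`def` lexes to {tok:?}");
}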
Functions§
- lex - Create a new lexer from a source string.
- lex_starts_at - Create a new lexer from a source string, starting at a given location. You probably want to use lex instead.
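A minimal sketch of lex_starts_at, useful when lexing a fragment of a larger file so that reported token ranges line up with the original source. The (source, mode, offset) parameter order and the TextSize import path are assumptions; check the function signature:

use rustpython_parser::{lexer::lex_starts_at, Mode};
use rustpython_parser::text_size::TextSize; // import path is an assumption

// Lex a snippet as if it began at byte offset 10 of the enclosing file.
for result in lex_starts_at("y = 2", Mode::Module, TextSize::from(10)) {
    let (token, range) = result.expect("Failed to lex");
    println!("{token:?}@{range:?}");
}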