Auto-generated module
🤖 Generated with SplitRS
Modules
- `ansi`: ANSI color codes for syntax highlighting.
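The contents of the `ansi` module are not listed on this page; as a general illustration of how ANSI color codes drive terminal syntax highlighting, here is a minimal sketch. The escape sequences are the standard SGR codes, but the constant names and the `colorize` helper are assumptions, not the module's actual API:

```rust
// Standard ANSI SGR escape sequences; the constant names here are
// assumptions for illustration, not taken from the `ansi` module.
const RESET: &str = "\x1b[0m";
const YELLOW: &str = "\x1b[33m"; // e.g. for keywords
const CYAN: &str = "\x1b[36m"; // e.g. for identifiers

/// Wrap a lexeme in a color code and a trailing reset,
/// as a terminal syntax highlighter would.
fn colorize(s: &str, color: &str) -> String {
    format!("{color}{s}{RESET}")
}

fn main() {
    // Prints "def main" with "def" in yellow and "main" in cyan
    // on an ANSI-capable terminal.
    println!("{} {}", colorize("def", YELLOW), colorize("main", CYAN));
}
```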
Functions
- `adjacent_pairs`: Find all pairs of adjacent tokens.
- `annotate_tokens`: Annotate a token slice with depth and category information.
- `annotate_with_meta`: Annotate a token slice with metadata from source text.
- `are_adjacent`: Check if two token spans are adjacent (no gap between them in source).
- `can_start_decl`: Return `true` if `kind` can begin a declaration.
- `can_start_expr`: Return `true` if `kind` can begin a term / expression.
- `categorise`: Assign a `TokenCategory` to a `TokenKind`.
- `check_bracket_balance`: Verify that all brackets in `tokens` are balanced.
- `closing_bracket`: Return the closing bracket kind for a `TokenKind`, if any.
- `closing_for`: Given an opening bracket kind, return the matching closing `TokenKind`.
- `collect_idents`: Collect all identifier names from a token slice.
- `colorize_token`: Apply ANSI coloring to a token based on its category.
- `compute_depths`: Count the nesting depth at each token position.
- `contains_ident`: Return `true` if the token list contains any identifier with the given name.
- `count_bigrams`: Count bigrams (pairs of consecutive tokens) in a slice. Returns counts keyed by (debug_repr_a, debug_repr_b) string pairs.
- `count_kind`: Count tokens with the given kind.
- `covering_span`: Return the span covering the entire slice of tokens, or a dummy span if empty.
- `describe_token`: Deserialize a debug token description (best-effort, for tests only).
- `enrich_tokens`: Convert a slice of tokens to a `Vec<RichToken>`.
- `extract_bracketed`: Extract the content between matched brackets (exclusive).
- `filter_tokens`: Return all tokens matching a predicate.
- `find_by_category`: Find the first token with the given category.
- `find_matching_close`: Find the index of the matching closing bracket for an opening bracket at `open_idx`.
- `has_operator`: Check if the token slice contains any operator.
- `ident_of`: Extract the identifier string from a token, if it is one.
- `infix_precedence`: Return the precedence of an infix operator, or `None` if not an operator.
- `is_assign`: Check whether a token is a `:=` (assign) token.
- `is_colon`: Check whether a token is a `:` (colon) token.
- `is_ident_token`: Check whether a token is an identifier.
- `is_infix_op`: Return `true` if `kind` represents an infix binary operator.
- `is_keyword_token`: Return `true` if the token kind string represents a keyword.
- `is_right_assoc`: Return `true` if `kind` is a right-associative operator.
- `longest_run`: Return the longest run of consecutive tokens with the same kind.
- `max_bracket_depth`: Return the maximum depth of bracket nesting in a token slice.
- `nat_lit_of`: Extract a natural-number literal value from a token, if it is one.
- `opening_bracket`: Return the opening bracket kind for a `TokenKind`, if any.
- `opening_for`: Given a closing bracket kind, return the matching opening `TokenKind`.
- `operator_arity`: Determine the arity of an operator token.
- `operator_priority`: Look up the binding priority of an operator token.
- `reconstruct_source`: Reconstruct source text from tokens and original source.
- `reformat`: Reformat a token stream to a normalized string.
- `render_colored`: Render a token slice as a colorized string.
- `serialize_tokens`: Serialize a token slice to a compact string representation (for debugging).
- `span_char_count`: Count the total number of characters spanned by a token slice.
- `split_at_kind`: Split a token slice at every occurrence of `sep`, returning the groups.
- `starts_with_valid_decl_head`: Validate that a token slice represents a well-formed declaration head.
- `starts_with_valid_expr_head`: Validate that a token slice represents a well-formed expression head.
- `strip_comments`: Strip all comment tokens from a token list.
- `strip_eof`: Strip leading and trailing EOF tokens.
- `structurally_equal`: Compare two token slices for structural equality (same kinds in same order).
- `token_edit_distance`: Compute the edit distance between two token sequences (by kind).
- `token_frequencies`: Compute token frequencies: return (kind_string, count) pairs sorted by count.
- `token_hash`: Compute a simple hash of a token sequence (by kind only).
- `token_kind_display`: Format a token kind for use in error messages.
- `token_kind_display_name`: Return the display name for a token kind.
- `token_lcs_length`: Find the length of the longest common subsequence (by kind) of two token slices.
- `token_ngrams`: Produce n-grams of size `n` from a token slice.
- `type_token_ratio`: Compute the type-token ratio (distinct kinds / total tokens).
- `vocabulary`: Compute the vocabulary (set of distinct token kinds) in a slice.
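The crate's `Token` and `TokenKind` types are not shown on this page, so as a rough illustration of what two of the helpers above compute, here is a minimal sketch over a simplified `TokenKind` enum. The function names match the list (`find_matching_close`, `token_edit_distance`), but the signatures and the enum are assumptions:

```rust
// A simplified TokenKind for illustration only; the crate's real type
// is richer (spans, lexemes, categories, etc.).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum TokenKind {
    LParen,
    RParen,
    Ident,
    Assign, // `:=`
    Nat,
}

// Stack-depth bracket matching: return the index of the closing bracket
// matching the opener at `open_idx`, or None if brackets are unbalanced.
fn find_matching_close(kinds: &[TokenKind], open_idx: usize) -> Option<usize> {
    let mut depth = 0usize;
    for (i, k) in kinds.iter().enumerate().skip(open_idx) {
        match k {
            TokenKind::LParen => depth += 1,
            TokenKind::RParen => {
                if depth == 0 {
                    return None; // close with no matching open
                }
                depth -= 1;
                if depth == 0 {
                    return Some(i);
                }
            }
            _ => {}
        }
    }
    None
}

// Classic Levenshtein distance over token kinds (insert/delete/substitute,
// all cost 1), matching the description of `token_edit_distance` above.
fn token_edit_distance(a: &[TokenKind], b: &[TokenKind]) -> usize {
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ka) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, kb) in b.iter().enumerate() {
            let sub = prev[j] + usize::from(ka != kb);
            cur.push(sub.min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    prev[b.len()]
}

fn main() {
    use TokenKind::*;
    // `( Ident := Nat )` — the opener at index 0 closes at index 4.
    let toks = [LParen, Ident, Assign, Nat, RParen];
    assert_eq!(find_matching_close(&toks, 0), Some(4));
    // One substitution turns `Ident := Nat` into `Ident := Ident`.
    assert_eq!(
        token_edit_distance(&[Ident, Assign, Nat], &[Ident, Assign, Ident]),
        1
    );
    println!("ok");
}
```

The same stack-depth idea underlies `check_bracket_balance`, `max_bracket_depth`, and `extract_bracketed`; the dynamic-programming table in `token_edit_distance` is the standard counterpart of the LCS computation named by `token_lcs_length`.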