tuker 0.1.0

A small tokenizer/parser library with an emphasis on usability
Owner: mondobe

Tuker

Tuker is a small tokenizer/parser library with an emphasis on usability. Tokenize text in two lines of code, then parse it in two more. Navigate parse trees using simple functions that aren't brittle.

Tuker is the latest in a long series of lexer/parser libraries I've written in a number of languages. It is an evolution of tuckey, built on many of the same principles while being much easier to work with.

```toml
# example.toml

[tokenizer]
l_paren = "[(]"
r_paren = "[)]"
word = "[a-zA-Z_]+"
number = "[0-9]+"

[parser]
main = "expr"
expr = "[word number list]"
list = "(l_paren expr* r_paren)"
```
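The four tokenizer rules above simply classify spans of characters by regex. As a rough std-only illustration of what those rules describe (this is not tuker's implementation, which is driven by the TOML regexes), a hand-rolled tokenizer for the same four token kinds might look like:

```rust
// Toy tokenizer mirroring the [tokenizer] rules in example.toml:
// parens become l_paren/r_paren, letter runs become words,
// digit runs become numbers, everything else is skipped.
fn toy_tokenize(input: &str) -> Vec<(&'static str, String)> {
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&c) = chars.peek() {
        match c {
            '(' => { chars.next(); tokens.push(("l_paren", "(".to_string())); }
            ')' => { chars.next(); tokens.push(("r_paren", ")".to_string())); }
            c if c.is_ascii_alphabetic() || c == '_' => {
                let mut s = String::new();
                while let Some(&c) = chars.peek() {
                    if c.is_ascii_alphabetic() || c == '_' { s.push(c); chars.next(); }
                    else { break; }
                }
                tokens.push(("word", s));
            }
            c if c.is_ascii_digit() => {
                let mut s = String::new();
                while let Some(&c) = chars.peek() {
                    if c.is_ascii_digit() { s.push(c); chars.next(); }
                    else { break; }
                }
                tokens.push(("number", s));
            }
            _ => { chars.next(); } // skip whitespace and unmatched input
        }
    }
    tokens
}

fn main() {
    for (kind, text) in toy_tokenize("(add 1 2)") {
        println!("{kind}: {text}");
    }
}
```

Running this over `(add 1 2)` yields five tokens: `l_paren`, `word "add"`, `number "1"`, `number "2"`, `r_paren`.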
```rust
// main.rs
use std::fs;
use tuker::{Parser, Tokenizer};

fn main() {
    let table_input = fs::read_to_string("example.toml")
        .expect("Error reading table");
    let input_text = "(add 1 2)";

    let tokenizer = Tokenizer::from_toml(&table_input)
        .expect("Error constructing tokenizer");
    let tokens = tokenizer.tokenize_str(input_text);

    let parser = Parser::from_toml(&table_input, &tokenizer)
        .expect("Error constructing parser");
    let parse_tree = parser.parse_tokens("main", &tokens, &tokenizer)
        .expect("Error parsing tokens");
}
```
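Reading the grammar above as brackets for a choice between alternatives and parentheses for a sequence (my interpretation of the notation, not tuker's documented semantics), `expr` matches a single `word`, `number`, or `list`, and `list` matches `l_paren`, zero or more `expr`s, then `r_paren`. A hypothetical recursive-descent check of that shape over token kinds, for illustration only, might be:

```rust
// Toy recognizer for the [parser] grammar in example.toml, working on
// token kinds alone. It returns the position after a match, or None.
// tuker builds a real parse tree instead of a yes/no answer.
fn parse_expr(kinds: &[&str], pos: usize) -> Option<usize> {
    // expr = "[word number list]" -- one of three alternatives
    match kinds.get(pos)? {
        &"word" | &"number" => Some(pos + 1),
        &"l_paren" => parse_list(kinds, pos),
        _ => None,
    }
}

fn parse_list(kinds: &[&str], pos: usize) -> Option<usize> {
    // list = "(l_paren expr* r_paren)" -- a sequence with repetition
    if kinds.get(pos)? != &"l_paren" { return None; }
    let mut pos = pos + 1;
    while let Some(next) = parse_expr(kinds, pos) { pos = next; } // expr*
    if kinds.get(pos)? == &"r_paren" { Some(pos + 1) } else { None }
}

fn main() {
    // Token kinds produced for "(add 1 2)".
    let kinds = ["l_paren", "word", "number", "number", "r_paren"];
    // main = "expr": the input parses iff expr consumes every token.
    assert_eq!(parse_expr(&kinds, 0), Some(kinds.len()));
}
```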

See the examples folder for more comprehensive examples.