Crate lrpar

lrpar provides a Yacc-compatible parser (where grammars can be generated at compile-time or run-time). It can take in traditional .y files and convert them into an idiomatic Rust parser. More details can be found in the grmtools book; the quickstart guide is a good place to start.

Example

Let's assume we want to statically generate a parser for a simple calculator language (and let's also assume we are able to use lrlex for the lexer). We need to add a build.rs file to our project which statically compiles both the lexer (via lrlex) and the parser (via lrpar):

use cfgrammar::yacc::YaccKind;
use lrlex::LexerBuilder;
use lrpar::CTParserBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Compile `calc.y` into a Rust parser module. `process_file_in_src` returns
    // a map from the grammar's token names to the IDs it has assigned them.
    let lex_rule_ids_map = CTParserBuilder::new()
        .yacckind(YaccKind::Grmtools)
        .process_file_in_src("calc.y")?;
    // Compile `calc.l` into a Rust lexer module, passing in the ID map so that
    // the lexer attaches the IDs the parser expects to each token.
    LexerBuilder::new()
        .rule_ids_map(lex_rule_ids_map)
        .process_file_in_src("calc.l")?;
    Ok(())
}

where src/calc.l is as follows:

%%
[0-9]+ "INT"
\+ "+"
\* "*"
\( "("
\) ")"
[\t ]+ ;

and src/calc.y is as follows:

%start Expr
%avoid_insert "INT"
%%
Expr -> Result<u64, ()>:
      Term '+' Expr { Ok($1? + $3?) }
    | Term { $1 }
    ;

Term -> Result<u64, ()>:
      Factor '*' Term { Ok($1? * $3?) }
    | Factor { $1 }
    ;

Factor -> Result<u64, ()>:
      '(' Expr ')' { $2 }
    | 'INT'
      {
          let v = $1.map_err(|_| ())?;
          parse_int($lexer.lexeme_str(&v))
      }
    ;
%%
// Any functions here are in scope for all the grammar actions above.

fn parse_int(s: &str) -> Result<u64, ()> {
    match s.parse::<u64>() {
        Ok(val) => Ok(val),
        Err(_) => {
            eprintln!("{} cannot be represented as a u64", s);
            Err(())
        }
    }
}

Because we specified that our Yacc file is in Grmtools format, each rule has a separate Rust type to which all its functions conform (in this case, all the rules have the same type, but that's not a requirement).
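For illustration, a hypothetical rule (the Stmts rule below is not part of the calculator example) could return a Vec<u64> while reusing the Expr rule above, which returns Result<u64, ()>:

Stmts -> Vec<u64>:
      Stmts Expr { let mut v = $1; v.extend($2.ok()); v }
    | Expr { $1.ok().into_iter().collect() }
    ;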

A simple src/main.rs is as follows:

use std::io::{self, BufRead, Write};

use lrlex::lrlex_mod;
use lrpar::lrpar_mod;

// Using `lrlex_mod!` brings the lexer for `calc.l` into scope. By default the module name
// will be `calc_l` (i.e. the file name, minus any extensions, with a suffix of `_l`).
lrlex_mod!("calc.l");
// Using `lrpar_mod!` brings the parser for `calc.y` into scope. By default the module name
// will be `calc_y` (i.e. the file name, minus any extensions, with a suffix of `_y`).
lrpar_mod!("calc.y");

fn main() {
    // Get the `LexerDef` for the `calc` language.
    let lexerdef = calc_l::lexerdef();
    let stdin = io::stdin();
    loop {
        print!(">>> ");
        io::stdout().flush().ok();
        match stdin.lock().lines().next() {
            Some(Ok(ref l)) => {
                if l.trim().is_empty() {
                    continue;
                }
                // Now we create a lexer with the `lexer` method with which we can lex an input.
                let lexer = lexerdef.lexer(l);
                // Pass the lexer to the parser and lex and parse the input.
                let (res, errs) = calc_y::parse(&lexer);
                for e in errs {
                    println!("{}", e.pp(&lexer, &calc_y::token_epp));
                }
                match res {
                    Some(Ok(r)) => println!("Result: {}", r),
                    _ => eprintln!("Unable to evaluate expression.")
                }
            }
            _ => break
        }
    }
}

We can now cargo run our project and evaluate simple expressions:

>>> 2 + 3
Result: 5
>>> 2 + 3 * 4
Result: 14
>>> (2 + 3) * 4
Result: 20

lrpar also comes with advanced error recovery built-in:

>>> 2 + + 3
Parsing error at line 1 column 5. Repair sequences found:
   1: Delete +
   2: Insert INT
Result: 5
>>> 2 + 3 3
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *
   2: Insert +
   3: Delete 3
Result: 11
>>> 2 + 3 4 5
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *, Delete 4
   2: Insert +, Delete 4
   3: Delete 4, Delete 5
   4: Insert +, Shift 4, Delete 5
   5: Insert +, Shift 4, Insert +
   6: Insert *, Shift 4, Delete 5
   7: Insert *, Shift 4, Insert *
   8: Insert *, Shift 4, Insert +
   9: Insert +, Shift 4, Insert *
Result: 17

Macros

lrpar_mod

A convenience macro for including statically compiled .y files. A file src/a/b/c.y which is statically compiled by lrpar can then be used in a crate with lrpar_mod!("a/b/c.y").

Structs

CTParserBuilder

A CTParserBuilder allows one to specify the criteria for building a statically generated parser.

LexError

A lexing error.

Lexeme

A Lexeme represents a segment of the user's input that conforms to a known type. Note that even if the type of a lexeme seemingly requires it to have len() > 0 (e.g. integers might match the regular expression [0-9]+), error recovery might cause a lexeme to have a length of 0. Users can distinguish a lexeme that intentionally has zero length from one whose zero length is the result of error recovery via the inserted method.
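A minimal sketch of telling the two cases apart, relying only on the len and inserted methods mentioned above (the function name and the concrete u32 storage type are assumptions, not part of the crate):

fn describe(lexeme: &lrpar::Lexeme<u32>) -> &'static str {
    if lexeme.inserted() {
        // Conjured up by error recovery: there is no corresponding user input.
        "inserted by error recovery"
    } else if lexeme.len() == 0 {
        // A genuine, intentionally zero-length match of the user's input.
        "intentionally zero-length"
    } else {
        "ordinary lexeme"
    }
}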

ParseError

Records a single parse error.

RTParserBuilder

A run-time parser builder.

Enums

LexParseError

A lexing or parsing error. Although the two are quite distinct in terms of what can be reported to users, both can (at least conceptually) occur at any point of the intertwined lexing/parsing process.
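A hedged sketch of distinguishing the two cases (the variant names LexError and ParseError and the u32 storage parameter are assumptions; adapt them to the version in use):

fn kind(e: &lrpar::LexParseError<u32>) -> &'static str {
    match e {
        // Part of the input could not be turned into tokens at all.
        lrpar::LexParseError::LexError(_) => "lexing error",
        // The token stream did not conform to the grammar; repair sequences,
        // if any were found, are attached to the underlying ParseError.
        lrpar::LexParseError::ParseError(_) => "parsing error",
    }
}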

Node

A generic parse tree.

ParseRepair

After a parse error is encountered, the parser attempts to find a way of recovering. Each entry in the sequence of repairs is represented by a ParseRepair.

RecoveryKind

What recovery algorithm should be used when a syntax error is encountered?
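For example, the recovery algorithm can be chosen when the parser is built; a sketch assuming CTParserBuilder's recoverer method and the CPCTPlus variant (check which variants your version provides):

use cfgrammar::yacc::YaccKind;
use lrpar::{CTParserBuilder, RecoveryKind};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // As in the build.rs above, but explicitly selecting the error recovery
    // algorithm the generated parser will use.
    CTParserBuilder::new()
        .yacckind(YaccKind::Grmtools)
        .recoverer(RecoveryKind::CPCTPlus)
        .process_file_in_src("calc.y")?;
    Ok(())
}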

Traits

Lexer

The trait which all lexers that want to interact with lrpar must implement.