Struct antlr_rust::parser_atn_simulator::ParserATNSimulator
The embodiment of the adaptive LL(*), ALL(*), parsing strategy.
The basic complexity of the adaptive strategy makes it harder to understand. We begin with ATN simulation to build paths in a DFA. Subsequent prediction requests go through the DFA first. If they reach a state without an edge for the current symbol, the algorithm fails over to the ATN simulation to complete the DFA path for the current input (until it finds a conflict state or uniquely predicting state).
All of that is done without using the outer context because we want to create a DFA that is not dependent upon the rule invocation stack when we do a prediction. One DFA works in all contexts. We avoid using context not necessarily because it's slower, although it can be, but because of the DFA caching problem. The closure routine only considers the rule invocation stack created during prediction beginning in the decision rule. For example, if prediction occurs without invoking another rule's ATN, there are no context stacks in the configurations. When lack of context leads to a conflict, we don't know if it's an ambiguity or a weakness in the strong LL(*) parsing strategy (versus full LL(*)).
When SLL yields a configuration set with conflict, we rewind the input and retry the ATN simulation, this time using full outer context without adding to the DFA. Configuration context stacks will be the full invocation stacks from the start rule. If we get a conflict using full context, then we can definitively say we have a true ambiguity for that input sequence. If we don't get a conflict, it implies that the decision is sensitive to the outer context. (It is not context-sensitive in the sense of context-sensitive grammars.)
The next time we reach this DFA state with an SLL conflict, through DFA simulation, we will again retry the ATN simulation using full context mode. This is slow because we can't save the results and have to "interpret" the ATN each time we get that input.
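The two-stage control flow described above can be sketched as follows. Everything in this sketch is hypothetical scaffolding invented for illustration; none of these names are part of this crate's API, and the stubs only mirror the SLL-then-full-LL decision logic.

// Self-contained sketch of two-stage prediction: SLL first, full LL on
// conflict. All types and helpers are invented stand-ins.
enum Outcome {
    Predicted(usize), // a uniquely predicting state was reached
    Conflict,         // the configuration set contains a conflict
}

// Hypothetical SLL simulation: no outer context, results cached in the DFA.
fn sll_simulate(_decision: usize, _input: &[u32]) -> Outcome {
    Outcome::Conflict // pretend SLL could not disambiguate
}

// Hypothetical full-LL simulation: full invocation stacks, nothing cached.
fn full_ll_simulate(_decision: usize, _input: &[u32]) -> Outcome {
    Outcome::Predicted(1)
}

fn adaptive_predict_sketch(decision: usize, input: &[u32]) -> Result<usize, String> {
    match sll_simulate(decision, input) {
        // SLL reached a uniquely predicting state: done, and the DFA path
        // built along the way answers future prediction requests directly.
        Outcome::Predicted(alt) => Ok(alt),
        // SLL conflict: rewind the input and retry with full outer context.
        Outcome::Conflict => match full_ll_simulate(decision, input) {
            Outcome::Predicted(alt) => Ok(alt), // context-sensitive, not ambiguous
            Outcome::Conflict => Err("true ambiguity for this input".to_string()),
        },
    }
}

fn main() {
    // With these stubs, SLL conflicts and full LL resolves to alternative 1.
    assert_eq!(adaptive_predict_sketch(0, &[3, 7, 9]), Ok(1));
}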
For more information, see the documentation of the Java version of this class.
Implementations
impl ParserATNSimulator
pub fn new(
atn: Arc<ATN>,
decision_to_dfa: Arc<Vec<RwLock<DFA>>>,
shared_context_cache: Arc<PredictionContextCache>
) -> ParserATNSimulator
Creates a new ParserATNSimulator.
pub fn get_prediction_mode(&self) -> PredictionMode
Returns the current prediction mode.
pub fn set_prediction_mode(&self, v: PredictionMode)
Sets the current prediction mode.
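For example, to force full-context prediction up front. The PredictionMode import path and variant name below are assumptions modeled on the Java runtime, not verified against this crate version:

use antlr_rust::parser_atn_simulator::ParserATNSimulator;
use antlr_rust::prediction_mode::PredictionMode; // import path assumed

// set_prediction_mode takes &self, so interior mutability is implied and
// no mutable borrow of the simulator is needed.
fn force_full_ll(sim: &ParserATNSimulator) {
    sim.set_prediction_mode(PredictionMode::LL); // variant name assumed
    let _current = sim.get_prediction_mode(); // reads the mode back
}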
pub fn adaptive_predict<'a, T: Parser<'a>>(
&self,
decision: isize,
parser: &mut T
) -> Result<isize, ANTLRError>
Called by the generated parser to choose an alternative when LL(1) parsing is not sufficient.
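A generated rule body might use it roughly like this; the decision number, the rule shape, and the import paths are illustrative assumptions, not generated output:

use antlr_rust::errors::ANTLRError; // import paths assumed
use antlr_rust::parser::Parser;
use antlr_rust::parser_atn_simulator::ParserATNSimulator;

fn choose_alternative<'input, T: Parser<'input>>(
    sim: &ParserATNSimulator,
    parser: &mut T,
) -> Result<(), ANTLRError> {
    // Decision 0 is a placeholder; real decision numbers are assigned by
    // the ANTLR tool when the parser is generated.
    match sim.adaptive_predict(0, parser)? {
        1 => { /* expand alternative 1 */ }
        2 => { /* expand alternative 2 */ }
        _ => { /* remaining alternatives */ }
    }
    Ok(())
}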
Trait Implementations
impl Debug for ParserATNSimulator
Auto Trait Implementations
impl !RefUnwindSafe for ParserATNSimulator
impl Send for ParserATNSimulator
impl !Sync for ParserATNSimulator
impl Unpin for ParserATNSimulator
impl !UnwindSafe for ParserATNSimulator
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized,
impl<T> Borrow<T> for T where
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized,
impl<T> From<T> for T
impl<T, U> Into<U> for T where
U: From<T>,
impl<T> NodeText for T
impl<T, U> TryFrom<U> for T where
U: Into<T>,
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,