pub struct ForwardPartialPathStitcher<H> { /* private fields */ }
Implements a phased forward partial path stitching algorithm.
Our overall goal is to start with a set of seed partial paths, and to repeatedly extend each partial path by concatenating another, compatible partial path onto the end of it. (If there are multiple compatible partial paths, we concatenate each of them separately, resulting in more than one extension for the current path.)
We perform this processing in phases. At the start of each phase, we have a current set of
partial paths that need to be processed. As we extend those partial paths, we add the
extensions to the set of partial paths to process in the next phase. Phases are processed
one at a time, each time you invoke the process_next_phase method.
After each phase has completed, you can use the previous_phase_partial_paths method to
retrieve all of the partial paths that were discovered during that phase. That gives you a
chance to add to the Database all of the other partial paths that we might need to extend
those partial paths with before invoking the next phase.
If you don’t care about this phasing nonsense, you can instead preload your Database with all
possible partial paths, and run the forward partial path stitching algorithm all the way to
completion, using the find_all_complete_partial_paths method.
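To make the phasing concrete, here is a minimal sketch of a phase-driving loop. The `run_to_completion` function and its `load` callback are hypothetical, standing in for whatever application-specific logic copies the needed partial paths into the database between phases:

```rust
use stack_graphs::graph::StackGraph;
use stack_graphs::partial::{PartialPath, PartialPaths};
use stack_graphs::stitching::{Appendable, Candidates, ForwardPartialPathStitcher, ToAppendable};

/// Hypothetical driver: runs a stitcher to completion, invoking a
/// caller-supplied `load` callback between phases so that the database
/// always contains the candidates the next phase might append.
fn run_to_completion<H, A, Db>(
    graph: &StackGraph,
    partials: &mut PartialPaths,
    db: &mut Db,
    stitcher: &mut ForwardPartialPathStitcher<H>,
    mut load: impl FnMut(&StackGraph, &mut PartialPaths, &mut Db, &PartialPath),
) where
    H: Clone,
    A: Appendable,
    Db: Candidates<H> + ToAppendable<H, A>,
{
    while !stitcher.is_complete() {
        // Collect the paths discovered in the previous phase, then load any
        // partial paths they might be extended with into the database.
        let pending: Vec<PartialPath> =
            stitcher.previous_phase_partial_paths().cloned().collect();
        for path in &pending {
            load(graph, partials, db, path);
        }
        // Extend every pending path; this closure keeps extending unconditionally.
        stitcher.process_next_phase(graph, partials, db, |_, _, _| true);
    }
}
```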
Implementations
impl<H> ForwardPartialPathStitcher<H>
pub fn from_partial_paths<I>(
    _graph: &StackGraph,
    _partials: &mut PartialPaths,
    initial_partial_paths: I,
) -> Self
where
    I: IntoIterator<Item = PartialPath>,
Creates a new forward partial path stitcher that is “seeded” with a set of initial partial
paths. If the stitcher is used to find complete paths, it is the responsibility of the caller
to ensure precondition variables are eliminated by calling PartialPath::eliminate_precondition_stack_variables.
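For example, seeding a stitcher for a complete-path search might look like this sketch; `collect_seed_paths` is a hypothetical helper standing in for whatever produces your initial partial paths:

```rust
// Hypothetical seeding sketch. `collect_seed_paths` is assumed, not part of
// this crate; it stands in for code that builds the initial partial paths.
let mut seeds: Vec<PartialPath> = collect_seed_paths(&graph, &mut partials);
for seed in &mut seeds {
    // Required when the stitcher will be used to find complete paths.
    seed.eliminate_precondition_stack_variables(&mut partials);
}
let stitcher = ForwardPartialPathStitcher::from_partial_paths(&graph, &mut partials, seeds);
```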
impl<H: Clone> ForwardPartialPathStitcher<H>
pub fn previous_phase_partial_paths(&self) -> impl Iterator<Item = &PartialPath> + '_
Returns an iterator of all of the (possibly incomplete) partial paths that were encountered during the most recent phase of the algorithm.
pub fn previous_phase_partial_paths_slice(&mut self) -> &[PartialPath]
Returns a slice of all of the (possibly incomplete) partial paths that were encountered during the most recent phase of the algorithm.
pub fn previous_phase_partial_paths_slice_mut(&mut self) -> &mut [PartialPath]
Returns a mutable slice of all of the (possibly incomplete) partial paths that were encountered during the most recent phase of the algorithm.
pub fn set_similar_path_detection(&mut self, detect_similar_paths: bool)
Sets whether similar path detection should be enabled during path stitching. Two paths are considered similar if they have the same start and end nodes and the same pre- and postconditions. The presence of similar paths can lead to exponential blowup during path stitching. Similar path detection is disabled by default because of the associated performance cost.
pub fn set_max_work_per_phase(&mut self, max_work_per_phase: usize)
Sets the maximum amount of work that can be performed during each phase of the algorithm. By bounding our work this way, you can ensure that it’s not possible for our CPU-bound algorithm to starve any worker threads or processes that you might be using. If you don’t call this method, then we allow ourselves to process all of the extensions of all of the paths found in the previous phase, with no additional bound.
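A typical configuration, as a sketch (the work bound is illustrative, not a recommended value):

```rust
// Enable similar-path detection to guard against exponential blowup,
// and bound per-phase work so other threads get a chance to be scheduled.
stitcher.set_similar_path_detection(true);
stitcher.set_max_work_per_phase(1000); // illustrative value
```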
pub fn is_complete(&self) -> bool
Returns whether the algorithm has completed.
pub fn process_next_phase<A, Db, E>(
    &mut self,
    graph: &StackGraph,
    partials: &mut PartialPaths,
    db: &mut Db,
    extend_while: E,
)
where
    A: Appendable,
    Db: Candidates<H> + ToAppendable<H, A>,
    E: Fn(&StackGraph, &mut PartialPaths, &PartialPath) -> bool,
Runs the next phase of the algorithm. We will have built up a set of incomplete partial
paths during the previous phase. Before calling this function, you must ensure that db
contains all of the possible appendables that we might want to extend any of those
candidate partial paths with.
After this method returns, you can use previous_phase_partial_paths to retrieve a
list of the (possibly incomplete) partial paths that were encountered during this phase.
The extend_while closure is used to control whether the extended paths are further extended
or not. It is not called on the initial paths.
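For instance, a caller might stop extending paths once they are already complete. A sketch using PartialPath::is_complete as the stopping condition, assuming `graph`, `partials`, and `db` are in scope:

```rust
// Extend only paths that are not yet complete.
stitcher.process_next_phase(&graph, &mut partials, &mut db, |graph, _partials, path| {
    !path.is_complete(graph)
});
```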
impl ForwardPartialPathStitcher<Edge>
pub fn find_minimal_partial_path_set_in_file<F>(
    graph: &StackGraph,
    partials: &mut PartialPaths,
    file: Handle<File>,
    cancellation_flag: &dyn CancellationFlag,
    visit: F,
) -> Result<(), CancellationError>
where
    F: FnMut(&StackGraph, &mut PartialPaths, &PartialPath),
Finds a minimal set of partial paths in a file, calling the visit closure for each one.
This function ensures that the set of visited partial paths (a) is minimal, meaning that no path in the set can be constructed by stitching together other paths in the set, and (b) covers all complete paths from references to definitions when used for path stitching.
This function will not return until all reachable partial paths have been processed, so
your database must already contain all partial paths that might be needed. If you have a
very large stack graph stored in some other storage system, and want more control over
lazily loading only the necessary pieces, then you should code up your own loop that calls
process_next_phase manually.
Caveat: Edges between nodes of different files are not used. Hence the returned set of partial paths will not cover paths going through those edges.
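A sketch of computing the minimal set for a file and storing it in a Database, written as it would appear inside a function that propagates CancellationError; `graph`, `partials`, and `file` are assumed to be in scope:

```rust
use stack_graphs::stitching::Database;
use stack_graphs::NoCancellation;

let mut db = Database::new();
ForwardPartialPathStitcher::find_minimal_partial_path_set_in_file(
    &graph,
    &mut partials,
    file, // Handle<File>
    &NoCancellation,
    |graph, partials, path| {
        // Store each minimal partial path for later stitching.
        db.add_partial_path(graph, partials, path.clone());
    },
)?;
```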
impl<H: Clone> ForwardPartialPathStitcher<H>
pub fn find_all_complete_partial_paths<I, F, A, Db>(
    graph: &StackGraph,
    partials: &mut PartialPaths,
    db: &mut Db,
    starting_nodes: I,
    cancellation_flag: &dyn CancellationFlag,
    visit: F,
) -> Result<(), CancellationError>
where
    I: IntoIterator<Item = Handle<Node>>,
    A: Appendable,
    Db: Candidates<H> + ToAppendable<H, A>,
    F: FnMut(&StackGraph, &mut PartialPaths, &PartialPath),
Finds all complete partial paths that are reachable from a set of starting nodes,
building them up by stitching together partial paths from this database, and calling
the visit closure on each one.
This function will not return until all reachable partial paths have been processed, so
your database must already contain all partial paths that might be needed. If you have a
very large stack graph stored in some other storage system, and want more control over
lazily loading only the necessary pieces, then you should code up your own loop that calls
process_next_phase manually.
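Putting it together, a sketch that searches from every reference node, written as it would appear inside a function that propagates CancellationError; `db` is assumed to be a value implementing Candidates and ToAppendable that is already preloaded with all needed partial paths:

```rust
use stack_graphs::arena::Handle;
use stack_graphs::graph::Node;
use stack_graphs::NoCancellation;

// Start from every reference node in the graph.
let references: Vec<Handle<Node>> = graph
    .iter_nodes()
    .filter(|&node| graph[node].is_reference())
    .collect();
ForwardPartialPathStitcher::find_all_complete_partial_paths(
    &graph,
    &mut partials,
    &mut db,
    references,
    &NoCancellation,
    |graph, partials, path| {
        println!("complete path: {}", path.display(graph, partials));
    },
)?;
```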
Auto Trait Implementations
impl<H> !RefUnwindSafe for ForwardPartialPathStitcher<H>
impl<H> Send for ForwardPartialPathStitcher<H> where H: Send
impl<H> Sync for ForwardPartialPathStitcher<H> where H: Sync
impl<H> Unpin for ForwardPartialPathStitcher<H> where H: Unpin
impl<H> !UnwindSafe for ForwardPartialPathStitcher<H>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> Conv for T
impl<T> FmtForward for T
impl<T> Pipe for T where T: ?Sized
impl<T> Tap for T