pub struct FlowRunner { /* private fields */ }
Executes a DagGraph using registered Node implementations.
For lifecycle control (pause / resume / terminate), use FlowEngine
instead of constructing a FlowRunner directly.
§Example

use a3s_flow::{DagGraph, FlowRunner, NodeRegistry};
use serde_json::json;

#[tokio::main]
async fn main() {
    let def = json!({
        "nodes": [
            { "id": "start", "type": "noop" },
            { "id": "end", "type": "noop" }
        ],
        "edges": [{ "source": "start", "target": "end" }]
    });
    let dag = DagGraph::from_json(&def).unwrap();
    let registry = NodeRegistry::with_defaults();
    let runner = FlowRunner::new(dag, registry);
    let result = runner.run(Default::default()).await.unwrap();
    println!("{:?}", result.outputs);
}

Implementations§
impl FlowRunner
pub fn new(dag: DagGraph, registry: NodeRegistry) -> Self
Create a new runner from a validated DAG and a node registry.
Uses NoopEventEmitter by default. Call .with_event_emitter to register a custom listener before executing.
pub fn with_arc_registry(dag: DagGraph, registry: Arc<NodeRegistry>) -> Self
Create a new runner that shares an existing Arc<NodeRegistry>.
Used by the "iteration" and "sub-flow" nodes so that nested sub-flow runners reuse the same registry without wrapping it in another Arc.
pub fn with_event_emitter(self, emitter: Arc<dyn EventEmitter>) -> Self
Attach a custom event emitter to this runner.
The emitter receives node and flow lifecycle events during execution.
Returns self for method chaining.
pub fn with_flow_store(self, store: Arc<dyn FlowStore>) -> Self
Attach a flow definition store to this runner.
When set, the store is passed to every ExecContext so that nodes
like "sub-flow" can load named flow definitions at execution time.
Returns self for method chaining.
pub fn with_max_concurrency(self, n: usize) -> Self
Limit the number of nodes that may execute concurrently within a single wave.
By default all ready nodes in a wave run in parallel. Setting
max_concurrency to n caps this using a Tokio Semaphore so that
at most n nodes are active at the same time. Useful when downstream
services impose rate limits.
Returns self for method chaining.
pub async fn run(&self, variables: HashMap<String, Value>) -> Result<FlowResult>
Execute the flow to completion with no external control signals.
pub async fn resume_from(
    &self,
    prior: &FlowResult,
    variables: HashMap<String, Value>,
) -> Result<FlowResult>
Resume a flow from a prior (partial or complete) result, skipping nodes
that already have outputs in prior.
A new execution ID is assigned to the resumed run. Nodes listed in
prior.completed_nodes are not re-executed; their outputs from prior
are used directly as inputs for any downstream nodes that still need to run.
§Example
let def = json!({ "nodes": [{ "id": "a", "type": "noop" }], "edges": [] });
let dag = DagGraph::from_json(&def).unwrap();
let runner = FlowRunner::new(dag, NodeRegistry::with_defaults());
let partial = runner.run(HashMap::new()).await.unwrap();
// Resume with the partial result; completed nodes are skipped.
let full = runner.resume_from(&partial, HashMap::new()).await.unwrap();