FlowRunner – convenience wrapper that loads a session, executes exactly one graph step, and persists the updated session back to storage.
§When should you use FlowRunner?
- Interactive workflows / web services: you usually want to run one step per HTTP request, send the assistant’s reply back to the client, and have the session automatically saved for the next roundtrip. FlowRunner makes that a one-liner.
- CLI demos & examples: keeps example code tiny; no need to repeat the load-execute-save boilerplate.
§When should you use Graph::execute_session directly?
- Batch processing where you intentionally want to run many steps in a tight loop and save once at the end to reduce I/O (see the sketch below).
- Custom persistence logic (e.g. optimistic locking, distributed transactions).
- Advanced diagnostics where you want to inspect the intermediate Session before saving.
Both APIs are 100% compatible; FlowRunner merely builds on top of the low-level function.
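For instance, the batch-processing case from the list above might look like the following sketch. The wrapper function and the fixed MAX_STEPS bound are illustrative assumptions; only Graph::execute_session and the SessionStorage get/save calls come from the crate, and a real loop would stop on whatever completion signal your workflow exposes.
use graph_flow::{Graph, InMemorySessionStorage, SessionStorage};
use std::sync::Arc;

// Sketch: run several steps in a tight loop, then save once to cut storage I/O.
async fn run_batch(
    graph: Arc<Graph>,
    storage: Arc<InMemorySessionStorage>,
    session_id: String,
) -> Result<(), Box<dyn std::error::Error>> {
    const MAX_STEPS: usize = 10; // illustrative bound, not a crate constant

    let mut session = storage.get(&session_id).await?.unwrap();
    for _ in 0..MAX_STEPS {
        graph.execute_session(&mut session).await?;
    }
    // Persist only once, after the whole batch.
    storage.save(session).await?;
    Ok(())
}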
§Patterns for Stateless HTTP Services
§Pattern 1: Shared FlowRunner (RECOMMENDED)
Create FlowRunner once at startup, share across all requests:
use graph_flow::FlowRunner;
use std::sync::Arc;

// At startup: build the FlowRunner once and keep it in shared application state.
struct AppState {
    flow_runner: FlowRunner,
}

// In a request handler (async context): one call loads the session, runs one step, and saves it.
async fn handle(state: Arc<AppState>, session_id: String) -> Result<String, Box<dyn std::error::Error>> {
    let result = state.flow_runner.run(&session_id).await?;
    Ok(result.response.unwrap_or_default())
}
Pros: Most efficient, zero allocation per request
Cons: Requires the same graph for all requests
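The startup side of this pattern can be wired up as in the following sketch. The tokio runtime and the simulated request task are assumptions of the example, not requirements of graph_flow; only Graph::new, InMemorySessionStorage::new, FlowRunner::new, and FlowRunner::run come from the crate.
use graph_flow::{FlowRunner, Graph, InMemorySessionStorage};
use std::sync::Arc;

// Build graph, storage, and the runner exactly once at process startup.
#[tokio::main]
async fn main() {
    let graph = Arc::new(Graph::new("my_workflow"));
    let storage = Arc::new(InMemorySessionStorage::new());
    let runner = Arc::new(FlowRunner::new(graph, storage));

    // Every request handler gets a cheap Arc clone of the same runner.
    let shared = Arc::clone(&runner);
    let handler = tokio::spawn(async move {
        let _ = shared.run("session_id").await;
    });
    let _ = handler.await;
}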
§Pattern 2: Per-Request FlowRunner
Create FlowRunner fresh for each request:
use graph_flow::{FlowRunner, Graph, InMemorySessionStorage};
use std::sync::Arc;

// In the request handler: graph and storage are Arc handles shared with the rest of the app.
async fn handle(graph: Arc<Graph>, storage: Arc<InMemorySessionStorage>, session_id: String) -> Result<String, Box<dyn std::error::Error>> {
    let runner = FlowRunner::new(graph.clone(), storage.clone());
    let result = runner.run(&session_id).await?;
    Ok(result.response.unwrap_or_default())
}
Pros: Flexible, can use different graphs per request
Cons: Tiny allocation cost per request (still very cheap)
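When different requests genuinely need different graphs, the per-request runner can be paired with a simple lookup, as in this sketch. The Registry type and the workflow parameter are illustrative assumptions, not part of graph_flow.
use graph_flow::{FlowRunner, Graph, InMemorySessionStorage};
use std::collections::HashMap;
use std::sync::Arc;

// Hypothetical registry of named workflows, built at startup.
struct Registry {
    graphs: HashMap<String, Arc<Graph>>,
    storage: Arc<InMemorySessionStorage>,
}

// Pick the graph by workflow name, then build a per-request FlowRunner.
async fn handle(reg: &Registry, workflow: &str, session_id: &str) -> Result<String, Box<dyn std::error::Error>> {
    let graph = reg.graphs.get(workflow).ok_or("unknown workflow")?;
    let runner = FlowRunner::new(graph.clone(), reg.storage.clone());
    let result = runner.run(session_id).await?;
    Ok(result.response.unwrap_or_default())
}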
§Pattern 3: Manual (Original)
Use Graph::execute_session directly:
use graph_flow::{Graph, SessionStorage, InMemorySessionStorage};
use std::sync::Arc;

// Load the session, execute one step, then persist it explicitly.
async fn step(graph: Arc<Graph>, storage: Arc<InMemorySessionStorage>, session_id: String) -> Result<(), Box<dyn std::error::Error>> {
    let mut session = storage.get(&session_id).await?.unwrap();
    let result = graph.execute_session(&mut session).await?;
    storage.save(session).await?;
    Ok(())
}
Pros: Maximum control
Cons: More boilerplate, easy to forget the storage.save(session) call
§Performance Characteristics
- FlowRunner creation cost: ~2 pointer copies (negligible)
- Memory overhead: 16 bytes (2 × Arc<T>)
- Runtime cost: Identical to manual approach
For high-throughput services, Pattern 1 is recommended. For services with different graphs per request or complex routing, Pattern 2 is perfectly fine.
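The memory-overhead figure can be sanity-checked with a one-line size probe, sketched below. It assumes FlowRunner really is just the two Arc handles described above and that the target is 64-bit.
use graph_flow::FlowRunner;

fn main() {
    // Expected to print 16 on a 64-bit target if FlowRunner holds exactly two Arc pointers.
    println!("size_of::<FlowRunner>() = {}", std::mem::size_of::<FlowRunner>());
}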
§Examples
§Basic Usage
use graph_flow::{FlowRunner, Graph, InMemorySessionStorage};
use std::sync::Arc;

async fn example() -> Result<(), Box<dyn std::error::Error>> {
    let graph = Arc::new(Graph::new("my_workflow"));
    let storage = Arc::new(InMemorySessionStorage::new());
    let runner = FlowRunner::new(graph, storage);

    // Execute one workflow step (note: this will fail if the session doesn't exist yet)
    let result = runner.run("session_id").await?;
    println!("Response: {:?}", result.response);
    Ok(())
}
§Shared Runner Pattern (Recommended for Web Services)
use graph_flow::FlowRunner;
use std::sync::Arc;

// Application state
struct AppState {
    flow_runner: Arc<FlowRunner>,
}

impl AppState {
    fn new(runner: FlowRunner) -> Self {
        Self {
            flow_runner: Arc::new(runner),
        }
    }
}

// Request handler
async fn handle_request(
    state: Arc<AppState>,
    session_id: String,
) -> Result<String, Box<dyn std::error::Error>> {
    let result = state.flow_runner.run(&session_id).await?;
    Ok(result.response.unwrap_or_default())
}
Structs§
- FlowRunner - High-level helper that orchestrates the common load → execute → save pattern.