§Cano: Type-Safe Async Workflow Engine
Cano is an async workflow orchestration engine for Rust built on Finite State Machines (FSM). States are user-defined enums; the engine guarantees type-safe transitions between them.
Well-suited for:
- Data pipelines: ETL jobs with parallel processing (Split/Join) and aggregation
- AI agents: Multi-step inference chains with shared context
- Background systems: Scheduled maintenance, periodic reporting, cron jobs
§Quick Start
```rust
use cano::prelude::*;

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Step { Fetch, Process, Done }

struct FetchTask;
struct ProcessTask;

#[async_trait]
impl Task<Step> for FetchTask {
    async fn run(&self, store: &MemoryStore) -> Result<TaskResult<Step>, CanoError> {
        store.put("data", vec![1u32, 2, 3])?;
        Ok(TaskResult::Single(Step::Process))
    }
}

#[async_trait]
impl Task<Step> for ProcessTask {
    async fn run(&self, store: &MemoryStore) -> Result<TaskResult<Step>, CanoError> {
        let data: Vec<u32> = store.get("data")?;
        store.put("sum", data.iter().sum::<u32>())?;
        Ok(TaskResult::Single(Step::Done))
    }
}

// Drive the workflow from an async context (a tokio runtime is assumed here).
#[tokio::main]
async fn main() -> Result<(), CanoError> {
    let store = MemoryStore::new();
    let workflow = Workflow::new(store.clone())
        .register(Step::Fetch, FetchTask)
        .register(Step::Process, ProcessTask)
        .add_exit_state(Step::Done);

    let final_state = workflow.orchestrate(Step::Fetch).await?;
    assert_eq!(final_state, Step::Done);
    Ok(())
}
```
§Core Concepts
§Finite State Machines (FSM)
Workflows in Cano are state machines. You define your states as an enum, register
handlers (Task or Node) for each state, and the engine manages transitions.
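The driving loop can be pictured as a plain state-machine dispatcher. The sketch below is not cano's implementation — the enum, handler map, and `orchestrate` function are illustrative stand-ins for the crate's registered tasks:

```rust
use std::collections::HashMap;

// Hypothetical states for a three-step pipeline.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Step { Fetch, Process, Done }

// Look up the handler for the current state, run it to get the next
// state, and stop once the exit state is reached.
fn orchestrate(
    handlers: &HashMap<Step, fn(Step) -> Step>,
    start: Step,
    exit: Step,
) -> Step {
    let mut state = start;
    while state != exit {
        let handler = handlers.get(&state).expect("unregistered state");
        state = handler(state);
    }
    state
}

fn main() {
    let mut handlers: HashMap<Step, fn(Step) -> Step> = HashMap::new();
    handlers.insert(Step::Fetch, |_| Step::Process);
    handlers.insert(Step::Process, |_| Step::Done);
    let final_state = orchestrate(&handlers, Step::Fetch, Step::Done);
    assert_eq!(final_state, Step::Done);
    println!("{final_state:?}");
}
```

Because every transition is a value of the state enum, an unregistered or misspelled state is a compile-time or lookup error rather than a silent misroute.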
§Tasks and Nodes
Two interfaces for processing logic:
- Task trait: a single run() method — straightforward operations and prototyping
- Node trait: a three-phase lifecycle (prep → exec → post) with built-in retry
Every Node automatically implements Task via a blanket impl, so both can be
registered with the same Workflow::register method.
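A simplified, synchronous sketch of how a blanket impl can make every Node usable as a Task. The real cano traits are async and store-aware; the names mirror the crate but these signatures are illustrative only:

```rust
trait Task {
    fn run(&self) -> String;
}

trait Node {
    type Prep;
    fn prep(&self) -> Self::Prep;
    fn exec(&self, input: Self::Prep) -> String;
    fn post(&self, output: String) -> String;
}

// Blanket impl: every Node is automatically a Task, so both kinds of
// handler can be registered through the same workflow entry point.
impl<N: Node> Task for N {
    fn run(&self) -> String {
        let prepped = self.prep();
        let result = self.exec(prepped);
        self.post(result)
    }
}

// A toy Node whose three phases double a number and label the result.
struct Doubler;

impl Node for Doubler {
    type Prep = u32;
    fn prep(&self) -> u32 { 21 }
    fn exec(&self, input: u32) -> String { (input * 2).to_string() }
    fn post(&self, output: String) -> String { format!("sum={output}") }
}

fn main() {
    // Doubler only implements Node, yet it can be called as a Task.
    let out = Doubler.run();
    assert_eq!(out, "sum=42");
    println!("{out}");
}
```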
§Parallel Execution (Split/Join)
Run tasks concurrently with Workflow::register_split and join results using
strategies: All, Any, Quorum(n), Percentage(f64), PartialResults(min),
or PartialTimeout.
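The semantics of a few of these strategies can be sketched with a small decision function. The `Join` enum and `join_satisfied` below are hypothetical stand-ins, not cano's `JoinStrategy`, which the engine evaluates internally:

```rust
// Illustrative subset of the join strategies listed above.
enum Join {
    All,
    Any,
    Quorum(usize),
    Percentage(f64),
}

// Decide whether a join succeeds, given how many parallel branches
// succeeded out of the total that were split off.
fn join_satisfied(strategy: &Join, succeeded: usize, total: usize) -> bool {
    match strategy {
        Join::All => succeeded == total,
        Join::Any => succeeded >= 1,
        Join::Quorum(n) => succeeded >= *n,
        Join::Percentage(p) => (succeeded as f64) / (total as f64) >= *p,
    }
}

fn main() {
    // 3 of 4 parallel branches succeeded.
    assert!(!join_satisfied(&Join::All, 3, 4));
    assert!(join_satisfied(&Join::Any, 3, 4));
    assert!(join_satisfied(&Join::Quorum(3), 3, 4));
    assert!(join_satisfied(&Join::Percentage(0.75), 3, 4));
    println!("join checks passed");
}
```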
§Store
MemoryStore provides a thread-safe Arc<RwLock<HashMap>> for sharing typed data
between states. Implement KeyValueStore to plug in a custom backend.
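A minimal sketch of such a shared store, assuming type-erased values and clone-to-share semantics like MemoryStore's (the real API may differ in method names and error handling):

```rust
use std::any::Any;
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Thread-safe map of type-erased values; clones share the same data.
#[derive(Clone, Default)]
struct Store {
    inner: Arc<RwLock<HashMap<String, Arc<dyn Any + Send + Sync>>>>,
}

impl Store {
    fn put<T: Any + Send + Sync>(&self, key: &str, value: T) {
        self.inner.write().unwrap().insert(key.to_string(), Arc::new(value));
    }

    // Returns None if the key is missing or holds a different type.
    fn get<T: Any + Clone + Send + Sync>(&self, key: &str) -> Option<T> {
        self.inner
            .read()
            .unwrap()
            .get(key)
            .and_then(|v| v.downcast_ref::<T>().cloned())
    }
}

fn main() {
    let store = Store::default();
    store.put("data", vec![1u32, 2, 3]);
    // Clones share the same underlying map, as with cloning MemoryStore.
    let alias = store.clone();
    let data: Vec<u32> = alias.get("data").unwrap();
    assert_eq!(data.iter().sum::<u32>(), 6);
    println!("{data:?}");
}
```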
§Processing Lifecycle
Task: Single run() method — full control over execution flow.
Node: Three-phase lifecycle (retried as a unit on prep or post failure):
- prep — load data, validate inputs, allocate resources
- exec — core logic (infallible — the signature returns the result directly)
- post — write results, determine the next state
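The retry rule can be sketched as a loop that re-runs the whole cycle on failure (`run_with_retries` and its retry limit are illustrative, not the TaskConfig API):

```rust
// If prep or post fails, the entire prep → exec → post cycle is
// attempted again, up to a retry limit.
fn run_with_retries(
    max_retries: u32,
    mut attempt: impl FnMut() -> Result<&'static str, &'static str>,
) -> Result<&'static str, &'static str> {
    let mut last_err = "no attempts";
    for _ in 0..=max_retries {
        match attempt() {
            Ok(next_state) => return Ok(next_state),
            Err(e) => last_err = e,
        }
    }
    Err(last_err)
}

fn main() {
    // A cycle that fails in `prep` twice before succeeding.
    let mut calls = 0;
    let result = run_with_retries(3, || {
        calls += 1;
        if calls < 3 {
            Err("prep failed") // whole cycle re-runs
        } else {
            Ok("Done") // post picked the next state
        }
    });
    assert_eq!(result, Ok("Done"));
    assert_eq!(calls, 3);
    println!("{result:?} after {calls} attempts");
}
```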
§Module Overview
- task: The Task trait — single run() method
- node: The Node trait — three-phase lifecycle with retry via TaskConfig
- workflow: Workflow — FSM orchestration with Split/Join support
- scheduler (requires the scheduler feature): Scheduler — cron and interval scheduling
- store: MemoryStore and the KeyValueStore trait
- error: CanoError variants and the CanoResult alias
§Getting Started
- Run an example: cargo run --example workflow_simple
- Read the module docs — each module has detailed documentation and examples
- Run benchmarks: cargo bench --bench node_performance
Re-exports§
pub use error::CanoError;
pub use error::CanoResult;
pub use node::DefaultNodeResult;
pub use node::DefaultParams;
pub use node::DynNode;
pub use node::Node;
pub use node::NodeObject;
pub use store::KeyValueStore;
pub use store::MemoryStore;
pub use task::DefaultTaskParams;
pub use task::DynTask;
pub use task::RetryMode;
pub use task::Task;
pub use task::TaskConfig;
pub use task::TaskObject;
pub use task::TaskResult;
pub use workflow::JoinConfig;
pub use workflow::JoinStrategy;
pub use workflow::SplitResult;
pub use workflow::SplitTaskResult;
pub use workflow::StateEntry;
pub use workflow::Workflow;
Modules§
- error — Error Handling: Clear, Actionable Error Messages
- node — Node API: Structured Workflow Processing
- prelude — Simplified imports for common usage patterns
- store — Key-Value Store Helpers for Processing Pipelines
- task — Task API: Simplified Workflow Interface
- workflow — Workflow API: Build Workflows with Split/Join Support