§Cano: Type-Safe Async Workflow Engine
Cano is a high-performance orchestration engine designed for building resilient, self-healing systems in Rust. Unlike simple task queues, Cano uses Finite State Machines (FSM) to define strict, type-safe transitions between processing steps.
It excels at managing complex lifecycles where state transitions matter:
- Data Pipelines: ETL jobs with parallel processing (Split/Join) and aggregation.
- AI Agents: Multi-step inference chains with shared context and memory.
- Background Systems: Scheduled maintenance, periodic reporting, and distributed cron jobs.
§Quick Start
Choose between Task (simple) or Node (structured) for your processing logic,
create a MemoryStore for sharing data, then run your workflow. Every Node automatically
works as a Task for maximum flexibility.
§Core Concepts
§Finite State Machines (FSM)
Workflows in Cano are state machines. You define your states as an enum, and register
handlers (Task or Node) for each state. The engine ensures type safety and
manages transitions between states.
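The enum-as-state-machine idea can be sketched in plain Rust, with no Cano dependency. All names below (State, step, run) are illustrative, not Cano's actual API: each handler returns the next state, and a driver loops until a terminal state is reached.

```rust
// Minimal sketch of the FSM pattern: states are an enum, each handler
// returns the next state, and the driver loops until Done.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Load,
    Process,
    Done,
}

// One transition: handle the current state, return the next one.
fn step(state: State, data: &mut Vec<i32>) -> State {
    match state {
        State::Load => {
            data.extend([1, 2, 3]); // load inputs
            State::Process
        }
        State::Process => {
            for v in data.iter_mut() {
                *v *= 10; // core processing logic
            }
            State::Done
        }
        State::Done => State::Done,
    }
}

// Drive the machine from an initial state to completion.
fn run(mut state: State) -> Vec<i32> {
    let mut data = Vec::new();
    while state != State::Done {
        state = step(state, &mut data);
    }
    data
}

fn main() {
    assert_eq!(run(State::Load), vec![10, 20, 30]);
}
```

Because states are an enum, the compiler rejects transitions to states that do not exist, which is the type-safety property the engine builds on.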
§Tasks & Nodes - Your Processing Units
Two approaches for implementing processing logic:
- Task trait: Simple interface with a single run() method - perfect for prototypes and simple operations
- Node trait: Structured three-phase lifecycle with built-in retry strategies - ideal for production workloads
Every Node automatically implements Task, providing seamless interoperability and upgrade paths.
§Parallel Execution (Split/Join)
Run tasks concurrently and join results with strategies like All, Any, Quorum, or PartialResults.
This allows for powerful patterns like scatter-gather, redundant execution, and latency optimization.
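The join strategies can be illustrated as predicates over a batch of task results. This is a simplified sketch, not Cano's JoinStrategy API: All succeeds only if every task did, Any needs at least one success, and Quorum needs at least n successes.

```rust
// Illustrative join strategies over a batch of task results.
fn join_all(results: &[Result<i32, String>]) -> bool {
    results.iter().all(|r| r.is_ok())
}

fn join_any(results: &[Result<i32, String>]) -> bool {
    results.iter().any(|r| r.is_ok())
}

fn join_quorum(results: &[Result<i32, String>], n: usize) -> bool {
    results.iter().filter(|r| r.is_ok()).count() >= n
}

fn main() {
    // Scatter-gather outcome: two tasks succeeded, one timed out.
    let results = vec![Ok(1), Err("timeout".to_string()), Ok(3)];
    assert!(!join_all(&results));      // All fails: one task errored
    assert!(join_any(&results));      // Any passes: at least one success
    assert!(join_quorum(&results, 2)); // Quorum of 2 is met
}
```

Quorum is what makes redundant execution useful: launch three replicas of the same task, accept the answer once two agree, and cancel the straggler.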
§Store - Share Data Between Processing Units
Use MemoryStore to pass data around your workflow. Store different types of data
using key-value pairs, and retrieve them later with type safety. All values are
wrapped in std::borrow::Cow for memory efficiency.
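The type-safe retrieval idea behind MemoryStore can be sketched with std::any. This simplified model (no Cow wrapping, no thread safety, names hypothetical) shows the key property: a get only succeeds when the requested type matches the type that was stored.

```rust
use std::any::Any;
use std::collections::HashMap;

// Simplified model of a type-safe key-value store: values are type-erased
// on insert and checked on retrieval via downcasting.
struct Store {
    map: HashMap<String, Box<dyn Any>>,
}

impl Store {
    fn new() -> Self {
        Store { map: HashMap::new() }
    }

    // Store any 'static value under a string key.
    fn put<T: Any>(&mut self, key: &str, value: T) {
        self.map.insert(key.to_string(), Box::new(value));
    }

    // Retrieve only succeeds if the stored type matches T.
    fn get<T: Any>(&self, key: &str) -> Option<&T> {
        self.map.get(key)?.downcast_ref::<T>()
    }
}

fn main() {
    let mut store = Store::new();
    store.put("count", 42u32);
    store.put("name", String::from("cano"));
    assert_eq!(store.get::<u32>("count"), Some(&42));
    assert_eq!(store.get::<String>("count"), None); // wrong type -> None
}
```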
§Processing Lifecycle
Task: Single run() method with full control over execution flow
Node: Three-phase lifecycle for structured processing:
- Prep: Load data, validate inputs, setup resources
- Exec: Core processing logic (with automatic retry support)
- Post: Store results, cleanup, determine next action
This structure makes nodes predictable and easy to reason about, while tasks provide maximum flexibility.
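The three-phase lifecycle can be modeled as a trait with a default driver. This is a simplified sketch, not Cano's actual Node trait: prep loads input, exec does the (retryable) core work, and post finalizes the result.

```rust
// Simplified model of the prep/exec/post lifecycle with a default driver.
trait ThreePhase {
    fn prep(&self) -> Vec<i32>;
    fn exec(&self, input: Vec<i32>) -> Result<i32, String>;
    fn post(&self, output: i32) -> i32;

    // The driver fixes the phase order; a retry strategy would wrap exec.
    fn run(&self) -> Result<i32, String> {
        let input = self.prep();
        let output = self.exec(input)?;
        Ok(self.post(output))
    }
}

struct SumNode;

impl ThreePhase for SumNode {
    fn prep(&self) -> Vec<i32> {
        vec![1, 2, 3, 4] // load/validate inputs
    }
    fn exec(&self, input: Vec<i32>) -> Result<i32, String> {
        Ok(input.iter().sum()) // core processing: 1+2+3+4 = 10
    }
    fn post(&self, output: i32) -> i32 {
        output * 2 // finalize/store the result
    }
}

fn main() {
    assert_eq!(SumNode.run(), Ok(20));
}
```

Because only exec is fallible in this model, retries can safely re-run the core logic without repeating setup or teardown, which is what makes the structured lifecycle predictable.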
§Module Overview
- task: The Task trait for simple, flexible processing logic
  - Single run() method for maximum simplicity
  - Perfect for prototypes and straightforward operations
- node: The Node trait for structured processing logic
  - Built-in retry logic and error handling
  - Three-phase lifecycle (prep, exec, post)
  - Fluent configuration API via TaskConfig
- workflow: Core workflow orchestration
  - Workflow for state machine-based workflows with Split/Join support
- scheduler (optional scheduler feature): Advanced workflow scheduling
  - Scheduler for managing multiple flows with cron support
  - Time-based and event-driven scheduling
- store: Thread-safe key-value storage helpers for pipeline data sharing
  - MemoryStore for in-memory data sharing
  - KeyValueStore trait for custom storage backends
- error: Comprehensive error handling system
  - CanoError for categorized error types
  - CanoResult type alias for convenient error handling
  - Rich error context and conversion traits
§Getting Started
- Start with the examples: Run cargo run --example basic_node_usage
- Read the module docs: Each module has detailed documentation and examples
- Check the benchmarks: Run cargo bench --bench node_performance to see performance
- Join the community: Contribute features, fixes, or feedback
§Performance Characteristics
- Low Latency: Minimal per-step overhead with direct execution
- High Throughput: No intermediate queuing between steps
- Memory Efficient: Scales with data size, not concurrency settings
- Async I/O: Efficient async operations with tokio runtime
Re-exports§
pub use error::CanoError;
pub use error::CanoResult;
pub use node::DefaultNodeResult;
pub use node::DefaultParams;
pub use node::DynNode;
pub use node::Node;
pub use store::KeyValueStore;
pub use store::MemoryStore;
pub use task::DefaultTaskParams;
pub use task::DynTask;
pub use task::RetryMode;
pub use task::Task;
pub use task::TaskConfig;
pub use task::TaskObject;
pub use task::TaskResult;
pub use workflow::JoinConfig;
pub use workflow::JoinStrategy;
pub use workflow::SplitResult;
pub use workflow::SplitTaskResult;
pub use workflow::StateEntry;
pub use workflow::Workflow;
Modules§
- error: Error Handling - Clear, Actionable Error Messages
- node: Node API - Structured Workflow Processing
- prelude: Simplified imports for common usage patterns
- store: Key-Value Store Helpers for Processing Pipelines
- task: Task API - Simplified Workflow Interface
- workflow: Workflow API - Build Workflows with Split/Join Support