§Cano: Simple & Fast Async Workflows in Rust
Cano is an async workflow engine that makes complex data processing simple. Whether you need to process one item or millions, Cano provides a clean API with minimal overhead for maximum performance.
§Quick Start
Choose between Task (simple) or Node (structured) for your processing logic,
create a MemoryStore for sharing data, then run your workflow. Every Node automatically
works as a Task for maximum flexibility.
§Core Concepts
§Tasks & Nodes - Your Processing Units
Two approaches for implementing processing logic:
- Task trait: simple interface with a single run() method - perfect for prototypes and simple operations
- Node trait: structured three-phase lifecycle with built-in retry strategies - ideal for production workloads
Every Node automatically implements Task, providing seamless interoperability and upgrade paths.
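The "every Node is automatically a Task" guarantee is the kind of thing a blanket trait implementation provides in Rust. The sketch below is hypothetical and synchronous (the real crate's traits are async and have different signatures); it only illustrates the upgrade path: a type that implements the three-phase trait gets the single-method trait for free.

```rust
// Hypothetical, simplified traits -- NOT Cano's actual signatures.
// This sync sketch only illustrates how a blanket impl lets every
// Node-like type also be used as a Task-like type.

trait Node {
    fn prep(&self) -> String;
    fn exec(&self, input: String) -> String;
    fn post(&self, output: String) -> String;
}

trait Task {
    fn run(&self) -> String;
}

// Blanket impl: any Node is automatically a Task whose run()
// drives the three phases in order.
impl<T: Node> Task for T {
    fn run(&self) -> String {
        let prepped = self.prep();
        let executed = self.exec(prepped);
        self.post(executed)
    }
}

struct Doubler;

impl Node for Doubler {
    fn prep(&self) -> String {
        "21".to_string()
    }
    fn exec(&self, input: String) -> String {
        let n: i64 = input.parse().unwrap();
        (n * 2).to_string()
    }
    fn post(&self, output: String) -> String {
        output
    }
}

fn main() {
    // Doubler only implements Node, yet it can be run as a Task.
    assert_eq!(Doubler.run(), "42");
    println!("{}", Doubler.run());
}
```

The payoff of this pattern is that code written against the simpler trait accepts both kinds of processing units without any adapter.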
§Store - Share Data Between Processing Units
Use MemoryStore to pass data around your workflow. Store different types of data
using key-value pairs, and retrieve them later with type safety. All values are
wrapped in std::borrow::Cow for memory efficiency.
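To make the "typed key-value sharing" idea concrete, here is a minimal, hypothetical sketch of such a store, not the crate's MemoryStore implementation: values are stored as Box<dyn Any> and recovered with a typed get(), so requesting the wrong type yields None instead of corrupt data, and Cow values let borrowed and owned strings share one value type.

```rust
use std::any::Any;
use std::borrow::Cow;
use std::collections::HashMap;

// Hypothetical sketch of a type-safe key-value store -- NOT the
// crate's actual MemoryStore API.
struct SketchStore {
    values: HashMap<String, Box<dyn Any>>,
}

impl SketchStore {
    fn new() -> Self {
        Self { values: HashMap::new() }
    }

    fn put<T: Any>(&mut self, key: &str, value: T) {
        self.values.insert(key.to_string(), Box::new(value));
    }

    // Type-safe retrieval: None on a missing key or a type mismatch.
    fn get<T: Any>(&self, key: &str) -> Option<&T> {
        self.values.get(key)?.downcast_ref::<T>()
    }
}

fn main() {
    let mut store = SketchStore::new();
    // A Cow<'static, str> can hold borrowed or owned string data.
    store.put("greeting", Cow::Borrowed("hello"));
    store.put("count", 3_u64);

    let greeting: Option<&Cow<'static, str>> = store.get("greeting");
    assert_eq!(greeting.map(|c| c.as_ref()), Some("hello"));
    assert_eq!(store.get::<u64>("count"), Some(&3));
    // Asking for the wrong type fails cleanly rather than panicking.
    assert!(store.get::<i32>("count").is_none());
    println!("ok");
}
```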
§Custom Logic - Your Business Implementation
Choose the right approach for your needs:
- Implement the Task trait for simple, single-method processing
- Implement the Node trait for structured processing with three phases: Prep (load data, validate inputs), Exec (core processing), and Post (store results, determine next action)
§Processing Lifecycle
Task: Single run() method with full control over execution flow
Node: Three-phase lifecycle for structured processing:
- Prep: Load data, validate inputs, setup resources
- Exec: Core processing logic (with automatic retry support)
- Post: Store results, cleanup, determine next action
This structure makes nodes predictable and easy to reason about, while tasks provide maximum flexibility.
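The three-phase lifecycle with retries can be sketched as a plain driver loop. Everything below is hypothetical, the real crate is async and configures retries via TaskConfig and RetryMode; the sketch only shows the shape the docs describe: Prep runs once, Exec may be retried, and Post runs on success.

```rust
use std::cell::Cell;

// Hypothetical node -- NOT Cano's actual API. It fails a configurable
// number of times so the retry loop below has something to retry.
struct FlakyNode {
    attempts_needed: u32,
    attempts_made: Cell<u32>,
}

impl FlakyNode {
    fn prep(&self) -> Vec<u32> {
        // Prep: load and validate inputs, set up resources.
        vec![1, 2, 3]
    }

    fn exec(&self, input: &[u32]) -> Result<u32, String> {
        // Exec: core logic; simulate transient failures.
        let made = self.attempts_made.get() + 1;
        self.attempts_made.set(made);
        if made < self.attempts_needed {
            Err(format!("transient failure on attempt {made}"))
        } else {
            Ok(input.iter().sum())
        }
    }

    fn post(&self, output: u32) -> u32 {
        // Post: store results, clean up, decide the next action.
        output
    }
}

// Driver: Prep once, retry only Exec, then Post on success.
fn run_with_retries(node: &FlakyNode, max_retries: u32) -> Result<u32, String> {
    let input = node.prep();
    let mut last_err = String::new();
    for _ in 0..=max_retries {
        match node.exec(&input) {
            Ok(out) => return Ok(node.post(out)),
            Err(e) => last_err = e,
        }
    }
    Err(last_err)
}

fn main() {
    let node = FlakyNode { attempts_needed: 3, attempts_made: Cell::new(0) };
    // Succeeds on the third attempt; 1 + 2 + 3 = 6.
    assert_eq!(run_with_retries(&node, 5), Ok(6));
    println!("ok");
}
```

Retrying only the Exec phase is what makes the structure predictable: inputs are validated exactly once, and side effects in Post happen exactly once, on the successful attempt.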
§Module Overview
- task: The Task trait for simple, flexible processing logic
  - Single run() method for maximum simplicity
  - Perfect for prototypes and straightforward operations
- node: The Node trait for structured processing logic
  - Built-in retry logic and error handling
  - Three-phase lifecycle (prep, exec, post)
  - Fluent configuration API via TaskConfig
- workflow: Core workflow orchestration
  - Workflow for state machine-based workflows
- scheduler (optional scheduler feature): Advanced workflow scheduling
  - Scheduler for managing multiple flows with cron support
  - Time-based and event-driven scheduling
- store: Thread-safe key-value storage helpers for pipeline data sharing
  - MemoryStore for in-memory data sharing
  - KeyValueStore trait for custom storage backends
- error: Comprehensive error handling system
  - CanoError for categorized error types
  - CanoResult type alias for convenient error handling
  - Rich error context and conversion traits
§Getting Started
- Start with the examples: Run cargo run --example basic_node_usage
- Read the module docs: Each module has detailed documentation and examples
- Check the benchmarks: Run cargo bench --bench node_performance to see performance
- Join the community: Contribute features, fixes, or feedback
§Performance Characteristics
- Low Latency: Minimal per-item overhead with direct execution
- High Throughput: No intermediate queuing or serialization between phases
- Memory Efficient: Scales with data size, not concurrency settings
- Async I/O: Efficient async operations with tokio runtime
Re-exports§
pub use error::CanoError;
pub use error::CanoResult;
pub use node::DefaultNodeResult;
pub use node::DefaultParams;
pub use node::DynNode;
pub use node::Node;
pub use store::KeyValueStore;
pub use store::MemoryStore;
pub use task::DefaultTaskParams;
pub use task::DynTask;
pub use task::RetryMode;
pub use task::Task;
pub use task::TaskConfig;
pub use task::TaskObject;
pub use workflow::ConcurrentWorkflow;
pub use workflow::ConcurrentWorkflowBuilder;
pub use workflow::ConcurrentWorkflowStatus;
pub use workflow::WaitStrategy;
pub use workflow::Workflow;
pub use workflow::WorkflowBuilder;
pub use workflow::WorkflowResult;