Soft fork of dagx for Operese
# dagx
A minimal, type-safe, runtime-agnostic async DAG (Directed Acyclic Graph) executor with compile-time cycle prevention and true parallel execution.
## Why dagx?
### Blazing Fast: up to 755x faster than dagrs
| Workload | Tasks | dagx | dagrs | Speedup |
|---|---|---|---|---|
| Sequential chain | 5 | 1.02 µs | 770.42 µs | 755x faster 🚀 |
| Diamond pattern | 4 | 5.16 µs | 770.87 µs | 149x faster |
| Sequential chain | 100 | 25.09 µs | 1.19 ms | 47.4x faster |
| Fan-out (1→100) | 101 | 100.75 µs | 1.02 ms | 10.1x faster |
| Independent tasks | 10,000 | 8.61 ms | 15.37 ms | 1.79x faster |
### Simple API
```rust
// Sketch of the core calls (argument shapes here are illustrative):
let sum = dag.add_task(Sum).depends_on((&a, &b));
dag.run(|fut| { tokio::spawn(fut); }).await?;
```
That's it. No trait boilerplate, no manual channels, no node IDs.
## Quick Start
Add to your `Cargo.toml`:

```toml
[dependencies]
dagx = "0.4"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
Basic example (a sketch; the task and method names beyond `add_task`, `depends_on`, `run`, and `#[task]` are illustrative, see docs.rs/dagx for exact signatures):

```rust
use dagx::{task, DagRunner};

// Define tasks with the #[task] macro
struct Value(i32);

#[task]
impl Value {
    async fn run(&self) -> i32 {
        self.0
    }
}

struct Sum;

#[task]
impl Sum {
    async fn run(a: &i32, b: &i32) -> i32 {
        a + b
    }
}

#[tokio::main]
async fn main() {
    let dag = DagRunner::new();
    let a = dag.add_task(Value(2));
    let b = dag.add_task(Value(3));
    let sum = dag.add_task(Sum).depends_on((&a, &b));
    dag.run(|fut| { tokio::spawn(fut); }).await.unwrap();
    assert_eq!(dag.get(sum).unwrap(), 5);
}
```
## Features
### Compile-Time Safety
- Cycles are impossible — the type system prevents them at compile time, zero runtime overhead
- No runtime type errors — dependencies validated at compile time
- Compiler-verified correctness — no surprise failures in production
See how it works.
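The core idea can be sketched in plain Rust (a toy model, not dagx's actual implementation): if handles are only obtainable by adding a task, and dependencies can only be declared on handles that already exist, then every edge points from an earlier-created task to a later one, and a cycle can never be expressed.

```rust
// Toy model of construction-order cycle prevention (not dagx's real code).
// A Handle can only be obtained from add_task, and add_task only accepts
// existing Handles as dependencies, so every edge goes old -> new.
pub struct Handle(usize);

pub struct Dag {
    // edges[i] lists the dependencies (earlier tasks) of task i
    pub edges: Vec<Vec<usize>>,
}

impl Dag {
    pub fn new() -> Self {
        Dag { edges: Vec::new() }
    }

    // The only way to create a Handle: the new task may depend only on
    // handles that already exist, so every dependency id < the new id.
    pub fn add_task(&mut self, deps: &[&Handle]) -> Handle {
        let id = self.edges.len();
        self.edges.push(deps.iter().map(|d| d.0).collect());
        Handle(id)
    }
}

fn main() {
    let mut dag = Dag::new();
    let a = dag.add_task(&[]);
    let b = dag.add_task(&[&a]);
    let c = dag.add_task(&[&a, &b]);
    // Every dependency id is strictly smaller than the task's own id;
    // this invariant is what makes cycles unrepresentable.
    assert!(dag.edges[c.0].iter().all(|&d| d < c.0));
    assert_eq!(dag.edges[b.0], vec![0]);
}
```

Because the invariant holds by construction, no runtime cycle check is ever needed.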
### Runtime Agnostic
dagx works with any async runtime. Provide a spawner function to `run()` (the closures below sketch the spawner shape; see the API docs for exact signatures):

```rust
// With Tokio
// The join handle result can be unwrapped because operese_dagx catches panics internally
dag.run(|fut| { tokio::spawn(fut); }).await.unwrap();

// With smol
dag.run(|fut| { smol::spawn(fut).detach(); }).await.unwrap();

// Single-threaded concurrency on the invoking runtime
// Can be faster in situations where waiting time dominates
// (`local_spawner` is a placeholder for a non-spawning executor hook)
dag.run(local_spawner).await.unwrap();
```
### Task Patterns
dagx supports three task patterns (the snippets below are illustrative sketches of each shape):

1. Stateless - Pure functions with no state:

   ```rust
   struct Double;

   #[task]
   impl Double {
       async fn run(x: &i32) -> i32 { x * 2 }
   }
   ```

2. Read-only state - Configuration accessed via `&self`:

   ```rust
   struct Scale { factor: i32 }

   #[task]
   impl Scale {
       async fn run(&self, x: &i32) -> i32 { x * self.factor }
   }
   ```

3. Mutable state - State modification via `&mut self`:

   ```rust
   struct Counter { count: i32 }

   #[task]
   impl Counter {
       async fn run(&mut self) -> i32 {
           self.count += 1;
           self.count
       }
   }
   ```
### Tracing
dagx provides optional observability using the tracing crate, controlled by the tracing feature flag.
#### Enabling Tracing
```toml
[dependencies]
dagx = { version = "0.4", features = ["tracing"] }
tracing-subscriber = "0.3"
```
#### Log Levels
- INFO: DAG execution start/completion
- DEBUG: Task additions, dependency wiring, layer computation
- TRACE: Individual task execution (inline vs spawned), detailed execution flow
- ERROR: Task panics, concurrent execution attempts
### Other
- True parallelism: Chosen runtime executes tasks concurrently and/or in parallel
- No boilerplate: The `derive` feature and the `#[task]` macro are enabled by default to simplify task implementation.
## Performance
dagx provides true parallel execution with sub-microsecond overhead per task.
### How is dagx so fast?
- Inline fast-path: Sequential chains execute inline without spawning
- Adaptive execution: Inline for sequential work, executor-agnostic parallelism for concurrent work
- Zero-cost abstractions: Compile-time graph validation eliminates overhead
See design philosophy for details.
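The layering idea behind adaptive execution can be illustrated with a simplified sketch (not dagx's actual scheduler): group tasks by dependency depth, so that each layer only depends on earlier layers; a single-task layer can run inline, while wider layers are candidates for parallel spawning.

```rust
// Simplified sketch of layer computation (not dagx's real scheduler).
// deps[i] lists the dependencies of task i; tasks are indexed in
// creation order, so deps[i] only references indices j < i.
fn layers(deps: &[Vec<usize>]) -> Vec<Vec<usize>> {
    let mut depth = vec![0usize; deps.len()];
    for i in 0..deps.len() {
        for &d in &deps[i] {
            // a task sits one layer below its deepest dependency
            depth[i] = depth[i].max(depth[d] + 1);
        }
    }
    let max = depth.iter().copied().max().unwrap_or(0);
    let mut out = vec![Vec::new(); max + 1];
    for (i, &d) in depth.iter().enumerate() {
        out[d].push(i);
    }
    out
}

fn main() {
    // Diamond: 0 -> {1, 2} -> 3
    let deps = vec![vec![], vec![0], vec![0], vec![1, 2]];
    let l = layers(&deps);
    assert_eq!(l, vec![vec![0], vec![1, 2], vec![3]]);
}
```

In this model, a pure sequential chain produces only single-task layers, which is exactly the case where spawning buys nothing and inline execution wins.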
## Tutorials & Examples
### Tutorials (Start Here)
Step-by-step introduction to dagx:
- `01_basic.rs` - Your first DAG
- `02_fan_out.rs` - One task feeds many (1→N)
- `03_fan_in.rs` - Many tasks feed one (N→1)
- `04_parallel_computation.rs` - Map-reduce with true parallelism
Run a tutorial example: `cargo run --example 01_basic`
### Advanced Examples
Real-world patterns:
- `circuit_breaker.rs` - Circuit breaker pattern for resilient systems
- `data_pipeline.rs` - ETL data processing pipeline
- `error_handling.rs` - Error propagation and recovery
Run any example: `cargo run --example circuit_breaker`
## Documentation
Full API documentation is available at docs.rs/dagx.
Detailed documentation on dagx's internals and advanced features:
- Compile-Time Cycle Prevention - How the type system prevents cycles
- Design Philosophy - Primitives as scheduler, inline fast-path optimization
- Library Comparisons - Detailed comparison with dagrs, async_dag, and others
## When to use operese_dagx
dagx is ideal for:
- Data pipelines with complex dependencies between stages
- Build systems where tasks depend on outputs of other tasks
- Parallel computation where work can be split and aggregated
- Workflow engines with typed data flow between stages
- ETL processes with validation and transformation steps
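Most of these workloads share a fan-out/fan-in shape. As a plain-Rust illustration of that shape (standard library threads only, no dagx involved), splitting work across workers and aggregating their results looks like:

```rust
use std::thread;

// Plain-Rust illustration of the fan-out/fan-in shape (no dagx involved):
// one input is split across workers, and their outputs are aggregated.
fn fan_out_fan_in(input: &[i64], workers: usize) -> i64 {
    // ceil-divide so every element lands in some chunk
    let chunk = ((input.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        // fan-out: one scoped thread per chunk
        let handles: Vec<_> = input
            .chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<i64>()))
            .collect();
        // fan-in: aggregate the partial sums
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<i64> = (1..=100).collect();
    assert_eq!(fan_out_fan_in(&data, 4), 5050);
}
```

A DAG executor generalizes this pattern: instead of one split/join pair, arbitrary typed dependencies between stages are wired up and scheduled for you.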
## Benchmarks
Run the full benchmark suite with `cargo bench`.
View detailed HTML reports (assuming Criterion's default output location):

```shell
# macOS
open target/criterion/report/index.html
# Linux
xdg-open target/criterion/report/index.html
# Windows
start target/criterion/report/index.html
```
Benchmarks run on Intel i9-13950HX @ 5.5GHz.
## Code of Conduct
This project follows the Builder's Code of Conduct.
## Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
For security issues, see SECURITY.md.
## License
Licensed under the MIT License. See LICENSE for details.
Copyright (c) 2025 Stephen Waits steve@waits.net