# IronFlow
A lightning-fast, zero-dependency workflow orchestrator written in Rust.
## ⚡ Benchmarks
We ran a 50-task sequential pipeline against the industry's top orchestrators — same machine, same tasks, same workload. Every competitor ran in its fastest single-process mode.
| Orchestrator | Execution Time | Peak Memory | vs. IronFlow |
|---|---|---|---|
| 🥇 IronFlow | 0.08s | 12 MB | — |
| Dagster | 1.07s | 98 MB | 13× slower |
| Prefect | 4.69s | 155 MB | 58× slower |
| Apache Airflow | 31.38s | ~380 MB | 392× slower |
IronFlow completed the entire 50-task pipeline, including full SQLite state persistence, in 80 milliseconds. Apache Airflow took 31 seconds for the exact same job.
Methodology, test scripts, and raw results are fully open-sourced in `benchmarks/`.
## What Is IronFlow?
IronFlow is a single-binary workflow orchestrator for teams and individuals who need Airflow-grade reliability without Airflow-grade infrastructure.
No Python runtime. No Celery workers. No Redis. No PostgreSQL. One binary. One file.

```shell
# 30-second Quickstart
cargo install ironflow
ironflow start --dags-dir ./dags --db-path ironflow.db --with-api
```
## 🏗 Architecture

```mermaid
graph TD
    subgraph "IronFlow Binary"
        A[CLI / REST API] --> B[Scheduler]
        B --> C[Executor]
        C --> D[Operator Registry]
        D --> E[Bash/Python/HTTP/SQL]
        C --> F[SQLite Store]
        A --> F
        A --> G[Embedded Web UI]
    end
    F --> H[(ironflow.db)]
```
- **Storage:** Single SQLite file with WAL mode. No network, no daemon, no lock files.
- **Concurrency:** All task execution is `tokio::spawn`'d; parallel tasks run concurrently within each DAG run.
- **State Machine:**

```
Queued → Running → Success
              ↘ Retried → Running → …
              ↘ Failed
```
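The retry behavior behind that state machine can be sketched as a small transition function. This is an illustrative sketch only; the type and function names are hypothetical, not IronFlow's actual internals:

```rust
// Illustrative model of the task lifecycle described above.
// Names (`TaskState`, `on_task_finished`) are hypothetical.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TaskState {
    Queued,
    Running,
    Success,
    Retried,
    Failed,
}

// Decide the next state once a running attempt completes.
fn on_task_finished(succeeded: bool, attempts: u32, max_retries: u32) -> TaskState {
    if succeeded {
        TaskState::Success
    } else if attempts <= max_retries {
        TaskState::Retried // re-enters Running on the next attempt
    } else {
        TaskState::Failed
    }
}

fn main() {
    // A task that fails twice but succeeds within max_retries = 3:
    assert_eq!(on_task_finished(false, 1, 3), TaskState::Retried);
    assert_eq!(on_task_finished(false, 2, 3), TaskState::Retried);
    assert_eq!(on_task_finished(true, 3, 3), TaskState::Success);
    // A task that exhausts its retries:
    assert_eq!(on_task_finished(false, 4, 3), TaskState::Failed);
    println!("ok");
}
```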
## Quick Start

### Install
**Option 1: Cargo (Recommended)**

`cargo install ironflow`
**Option 2: Prebuilt Binaries.** Download the latest binary for your OS from the Releases page.
**Option 3: From Source**

`git clone <repo-url> && cd ironflow && cargo build --release`
### Define a DAG

Create `dags/my_pipeline.toml`:
```toml
[dag]
id = "my_pipeline"
description = "My first IronFlow pipeline"
schedule = "0 9 * * *" # daily at 9am

[[tasks]]
id = "extract"
operator = "bash"

[tasks.config]
command = "echo '{\"rows\": 1000, \"source\": \"db\"}'"

[[tasks]]
id = "transform"
operator = "bash"
depends_on = ["extract"]
inputs = ["extract"] # receive upstream JSON output
retries = 3
timeout = 300

[tasks.config]
command = "echo 'Transforming data...'"

[[tasks]]
id = "load"
operator = "bash"
depends_on = ["transform"]

[tasks.config]
command = "echo 'Loading complete'"
```
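The dependency edges between tasks form a DAG, so any valid execution order is a topological sort of the tasks. A minimal, dependency-free sketch of that ordering for the pipeline above (illustrative only; this is not IronFlow's scheduler code, and it assumes the graph is acyclic):

```rust
use std::collections::HashMap;

// Kahn-style topological ordering over a task -> dependencies map.
// Illustrative sketch; assumes the dependency graph has no cycles.
fn topo_order(deps: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    // in-degree = number of unmet dependencies per task
    let mut in_degree: HashMap<&str, usize> =
        deps.iter().map(|(&t, ds)| (t, ds.len())).collect();
    let mut order = Vec::new();
    while order.len() < deps.len() {
        // Tasks whose dependencies are all satisfied can run now.
        let ready: Vec<&str> = in_degree
            .iter()
            .filter(|&(_, &d)| d == 0)
            .map(|(&t, _)| t)
            .collect();
        for t in ready {
            in_degree.remove(t);
            order.push(t.to_string());
            // Unblock tasks that depended on `t`.
            for (&other, ds) in deps.iter() {
                if ds.contains(&t) {
                    if let Some(d) = in_degree.get_mut(other) {
                        *d -= 1;
                    }
                }
            }
        }
    }
    order
}

fn main() {
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    deps.insert("extract", vec![]);
    deps.insert("transform", vec!["extract"]);
    deps.insert("load", vec!["transform"]);
    assert_eq!(topo_order(&deps), vec!["extract", "transform", "load"]);
}
```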
### Run

```shell
# Start scheduler + API server + Web UI
ironflow start --dags-dir ./dags --db-path ironflow.db --with-api --port 8080

# Trigger manually via CLI
ironflow trigger my_pipeline --db-path ironflow.db

# Open the dashboard
open http://localhost:8080
```
## CLI Reference

```shell
ironflow start   --dags-dir <path> --db-path <path> [--with-api] [--port <n>]
ironflow serve   --port <n> --db-path <path>
ironflow trigger <dag_id> --db-path <path>
ironflow status  <dag_id> [--limit <n>] --db-path <path>
ironflow list    --db-path <path>
ironflow pause   <dag_id> --db-path <path>
ironflow unpause <dag_id> --db-path <path>
```
## REST API

| Endpoint | Description |
|---|---|
| `GET /api/dags` | List all DAGs |
| `GET /api/dags/:id` | DAG definition + metadata |
| `GET /api/dags/:id/runs` | Run history (last 100) |
| `POST /api/dags/:id/trigger` | Trigger a manual run (non-blocking) |
| `GET /api/runs/:run_id` | Run status + all task states + logs |
| `GET /` | Web dashboard |
## Operators

| Operator | Config Keys | Description |
|---|---|---|
| `bash` | `command` | Run a shell command |
| `python` | `script`, `args` | Execute a Python script |
| `http` | `url`, `method`, `body`, `headers` | Make an HTTP request |
| `sql` | `query`, `connection` | Run a SQL query |
| `slack` | `webhook_url`, `message` | Post a Slack message |
### Adding a Custom Operator

```rust
// src/operators/my_op.rs
use async_trait::async_trait;
use crate::Result;
use crate::Operator;

pub struct MyOp;

#[async_trait]
impl Operator for MyOp {
    // Signature shown is illustrative; see the built-in operators
    // for the exact trait definition.
    async fn execute(&self, config: &toml::Value) -> Result<String> {
        // Your operator logic goes here.
        Ok("done".to_string())
    }
}
```
Register it in `src/operators/mod.rs`:

```rust
registry.insert("my_op", Box::new(MyOp));
```
## Tests

```shell
cargo test
# test result: ok. 32 passed; 0 failed; 1 ignored
```
## When to Use IronFlow

✅ **IronFlow is ideal for:**
- Single-node deployments (developer machines, edge servers, IoT)
- Air-gapped environments with no internet connectivity
- CI/CD pipelines that need lightweight orchestration
- Teams that want zero infrastructure overhead
- Workloads with 1–100k tasks per day
❌ **Consider Airflow/Prefect instead for:**
- Multi-tenant enterprise deployments (1000+ teams)
- Python-heavy ML pipelines needing native object passing
- Distributed execution across many worker machines
## Roadmap
- Embedded Web Dashboard
- XCom Data Passing
- Crash Recovery
- WebSocket live log streaming
- Task-level metrics & SLA tracking
- Multi-user authentication & RBAC
- Distributed executor mode (PostgreSQL backend)
- Plugin registry for custom operators
## License
Apache 2.0 — see LICENSE.
## Contributing
Pull requests welcome. Please open an issue first for large changes.