# dagex
dagex is a pure Rust grid/node graph executor and optimizer. The project focuses on representing directed dataflow graphs, computing port mappings by graph inspection, and executing nodes efficiently in-process with parallel CPU execution.
## Core Features
- **Implicit Node Connections**: Nodes automatically connect based on execution order
- **Parallel Branching**: Create fan-out execution paths with `.branch()`
- **Configuration Variants**: Use `.variant()` to create parameter sweeps
- **DAG Analysis**: Automatic inspection and optimization of execution paths
- **Mermaid Visualization**: Generate diagrams with `.to_mermaid()`
- **In-process Execution**: Parallel execution using rayon
## Installation

### Rust
Add to your `Cargo.toml`:

```toml
[dependencies]
dagex = "2026.4"

# Optional: For radar signal processing examples with ndarray and FFT support
[features]
radar_examples = ["dagex/radar_examples"]
```

For radar signal processing with ndarray and complex number support, enable the `radar_examples` feature.
### Python
The library can also be used from Python via PyO3 bindings:
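```bash
pip install dagex
```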
Or build from source:
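```bash
# Assumes maturin and the `python` feature flag from the Development section
pip install maturin
maturin develop --release --features python
```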
## Quick Start

### Rust

#### Basic Sequential Pipeline
```rust
use dagex::{Graph, GraphData};
use std::collections::HashMap;
```
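A minimal end-to-end sketch that produces the diagram below; the node bodies, the `GraphData::float` constructor, and the exact `add` argument types are assumptions extrapolated from the API overview later in this README:

```rust
// Source node: emits the broadcast variable "data"
fn data_source(
    _inputs: &HashMap<String, GraphData>,
    _variant: &HashMap<String, GraphData>,
) -> HashMap<String, GraphData> {
    HashMap::from([("data".to_string(), GraphData::float(21.0))])
}

// Processing node: receives "data" as its "x" input and doubles it
fn multiply(
    inputs: &HashMap<String, GraphData>,
    _variant: &HashMap<String, GraphData>,
) -> HashMap<String, GraphData> {
    let x = inputs["x"].as_float().expect("x should be a float");
    HashMap::from([("result".to_string(), GraphData::float(x * 2.0))])
}

fn main() {
    // Create graph
    let mut graph = Graph::new();

    // Add source node (no input mapping needed)
    graph.add(data_source, Some("DataSource"), None, None);

    // Add processing node, mapping broadcast "data" to the node's "x" input
    graph.add(multiply, Some("Multiply"), Some(vec![("data", "x")]), None);

    // Build and execute
    let dag = graph.build();
    let _ctx = dag.execute();
    println!("{}", dag.to_mermaid());
}
```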
### Python

#### Basic Sequential Pipeline
```python
from dagex import PyGraph

def data_source(inputs, variant_params):
    return {"data": 21.0}

def multiply(inputs, variant_params):
    x = inputs["x"]
    return {"result": x * 2.0}

# Create graph
graph = PyGraph()

# Add source node
graph.add(data_source, label="DataSource")

# Add processing node (map broadcast "data" to this node's "x" input)
graph.add(multiply, label="Multiply", inputs=[("data", "x")])

# Build and execute
dag = graph.build()
results = dag.execute()
```
Mermaid visualization output:
```mermaid
graph TD
    0["DataSource"]
    1["Multiply"]
    0 -->|data → x| 1
```
### Parallel Branching (Fan-Out)
```rust
let mut graph = Graph::new();

// Source node (node functions follow the same signature as in the basic example)
graph.add(source, Some("Source"), None, None);

// Create parallel branches
graph.branch();
graph.add(statistics, Some("Statistics"), Some(vec![("data", "input")]), None);

graph.branch();
graph.add(ml_model, Some("MLModel"), Some(vec![("data", "input")]), None);

graph.branch();
graph.add(visualization, Some("Visualization"), Some(vec![("data", "input")]), None);

let dag = graph.build();
```
Mermaid visualization output:
```mermaid
graph TD
    0["Source"]
    1["Statistics"]
    2["MLModel"]
    3["Visualization"]
    0 -->|data → input| 1
    0 -->|data → input| 2
    0 -->|data → input| 3
    style 1 fill:#e1f5ff
    style 2 fill:#e1f5ff
    style 3 fill:#e1f5ff
```
DAG Statistics:
- Nodes: 4
- Depth: 2 levels
- Max Parallelism: 3 nodes (all branches execute in parallel)
### Parameter Sweep with Variants
```rust
use dagex::Graph;

let mut graph = Graph::new();

// Source node
graph.add(data_source, Some("DataSource"), None, None);

// Create variants for different learning rates
// (values and the parameter name are illustrative)
let learning_rates = vec![0.001, 0.01, 0.1, 1.0];
graph.variant("learning_rate", learning_rates);
graph.add(scale_lr, Some("ScaleLR"), Some(vec![("data", "input")]), None);

let dag = graph.build();
```
Mermaid visualization output:
```mermaid
graph TD
    0["DataSource"]
    1["ScaleLR (v0)"]
    2["ScaleLR (v1)"]
    3["ScaleLR (v2)"]
    4["ScaleLR (v3)"]
    0 -->|data → input| 1
    0 -->|data → input| 2
    0 -->|data → input| 3
    0 -->|data → input| 4
    style 1 fill:#e1f5ff
    style 2 fill:#e1f5ff
    style 3 fill:#e1f5ff
    style 4 fill:#e1f5ff
    style 1 fill:#ffe1e1
    style 2 fill:#e1ffe1
    style 3 fill:#ffe1ff
    style 4 fill:#ffffe1
```
DAG Statistics:
- Nodes: 5
- Depth: 2 levels
- Max Parallelism: 4 nodes
- Variants: 4 (all execute in parallel)
## Radar Signal Processing Example
This example demonstrates a complete radar signal processing pipeline using `GraphData` with ndarray arrays and complex numbers. The pipeline implements:

- **LFM Pulse Generation** - Creates a Linear Frequency Modulation chirp signal
- **Pulse Stacking** - Accumulates multiple pulses with Doppler shifts
- **Range Compression** - FFT-based matched filtering
- **Doppler Compression** - Creates a Range-Doppler map
### Rust Implementation
```rust
use dagex::{Graph, GraphData};
use ndarray::Array1;
use num_complex::Complex;
use std::collections::HashMap;

// LFM pulse generator node
// ...

// Stack pulses node
// ...
```
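As a hedged sketch, the generator node might look like the following (the sample count and chirp parameters match the execution output below; `GraphData::complex_array` accepting an `Array1` is an assumption):

```rust
fn lfm_generator(
    _inputs: &HashMap<String, GraphData>,
    _variant: &HashMap<String, GraphData>,
) -> HashMap<String, GraphData> {
    let num_samples = 256;
    let sample_rate = 100e6; // 100 MHz
    let bandwidth = 100e6;
    let pulse_width = 1e-6; // 1 microsecond
    let chirp_rate = bandwidth / pulse_width;

    // LFM chirp with rectangular envelope: s(t) = exp(j * pi * k * t^2)
    let pulse: Array1<Complex<f64>> = Array1::from_iter((0..num_samples).map(|n| {
        let t = n as f64 / sample_rate;
        Complex::from_polar(1.0, std::f64::consts::PI * chirp_rate * t * t)
    }));

    HashMap::from([("lfm_pulse".to_string(), GraphData::complex_array(pulse))])
}
```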
Run the example:
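```bash
# The example name is a placeholder; see the examples/ directory
cargo run --example <radar_example> --features radar_examples
```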
Mermaid visualization output:
```mermaid
graph TD
    0["LFMGenerator"]
    1["StackPulses"]
    2["RangeCompress"]
    3["DopplerCompress"]
    0 -->|lfm_pulse → pulse| 1
    1 -->|stacked_data → data| 2
    2 -->|compressed_data → data| 3
```
DAG Statistics:
- Nodes: 4
- Depth: 4 levels
- Max Parallelism: 1 node
Execution Output:
```text
LFMGenerator: Generated 256 sample LFM pulse
StackPulses: Stacked 128 pulses with Doppler shifts
RangeCompress: Performed matched filtering on 32768 samples
DopplerCompress: Created Range-Doppler map of shape (128, 256)
Peak at Doppler bin 13, Range bin 255
Magnitude: 11974.31
Peak magnitude: 11974.31
Peak Doppler bin: 13
Peak Range bin: 255
```
### Python Implementation
"""Generate LFM pulse with rectangular envelope."""
= 256
= 100e6 # 100 MHz
= 1e-6 # 1 microsecond
= 100e6
# Generate LFM chirp
= /
=
# ... signal generation code ...
# Return numpy array directly (no conversion needed)
return
"""Stack multiple pulses with Doppler shifts."""
= 128
# Get pulse data directly as complex array (implicit handling)
=
=
# Stack with Doppler shifts
# ... stacking logic ...
# Return numpy array directly (no conversion needed)
return
# Create graph
=
# Add nodes
# Build and execute
=
=
Run the example:
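```bash
# Path is a placeholder for the actual example script
python examples/<radar_example>.py
```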
### Key Features Demonstrated
- **Native Type Support**: Uses `GraphData::complex_array()` for signal data, `GraphData::int()` for metadata
- **No String Conversions**: Numeric data stays in native format (`i64`, `f64`, `Complex`)
- **Implicit Complex Number Handling**: Python complex numbers (`numpy.complex128`, built-in `complex`) are automatically converted to/from `GraphData::Complex` without manual real/imag splitting
- **Direct Numpy Array Support**: Pass numpy ndarrays directly without `.tolist()` conversion - automatic detection and conversion
- **Type Safety**: Accessor methods (`.as_complex_array()`, `.as_int()`, `.as_float()`) provide safe type extraction, as shown in the sketch below
- **Complex Signal Processing**: Full FFT-based radar processing with ndarray integration
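For instance, a small sketch of constructing and reading `GraphData` values; the constructor and accessor names come from the list above, while the exact signatures (owned arguments, `Option` returns) are assumptions:

```rust
use dagex::GraphData;
use ndarray::Array1;
use num_complex::Complex;

fn demo() {
    // Construct typed values without going through strings
    let meta = GraphData::int(128);
    let iq = GraphData::complex_array(Array1::from(vec![
        Complex::new(1.0, 0.0),
        Complex::new(0.0, 1.0),
    ]));

    // Safe extraction: accessors assumed to return Option rather than panic
    assert_eq!(meta.as_int(), Some(128));
    if let Some(samples) = iq.as_complex_array() {
        assert_eq!(samples.len(), 2);
    }
}
```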
### Adding Plotting Nodes
Plotting and visualization functions can be added as terminal nodes that take input but produce no output:
```rust
// Add to graph (`plot_results` is a node function that returns an empty map)
graph.add(plot_results, Some("PlotResults"), Some(vec![("data", "input")]), None);
```
This pattern allows visualization and logging nodes to be integrated into the pipeline without affecting data flow.
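A minimal sketch of such a terminal node function (`plot_results` and its `input` mapping are illustrative names):

```rust
use dagex::GraphData;
use std::collections::HashMap;

// Terminal node: consumes its input for a side effect and broadcasts nothing
fn plot_results(
    inputs: &HashMap<String, GraphData>,
    _variant: &HashMap<String, GraphData>,
) -> HashMap<String, GraphData> {
    if inputs.contains_key("input") {
        // e.g., render a figure or write a log entry here
        println!("PlotResults: received input, rendering plot");
    }
    HashMap::new() // empty map: no outputs flow downstream
}
```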
## API Overview

### Rust API

#### Graph Construction
- `Graph::new()` - Create a new graph
- `graph.add(fn, name, inputs, outputs)` - Add a node
  - `fn`: Node function with signature `fn(&HashMap<String, GraphData>, &HashMap<String, GraphData>) -> HashMap<String, GraphData>`
  - `name`: Optional node name
  - `inputs`: Optional vector of `(broadcast_var, impl_var)` tuples for input mappings
  - `outputs`: Optional vector of `(impl_var, broadcast_var)` tuples for output mappings
- `graph.branch()` - Create a new parallel branch
- `graph.variant(param_name, values)` - Create parameter sweep variants
- `graph.build()` - Build the DAG
#### DAG Operations
- `dag.execute()` - Execute the graph and return the execution context
- `dag.stats()` - Get DAG statistics (nodes, depth, parallelism, branches, variants)
- `dag.to_mermaid()` - Generate Mermaid diagram representation
### Python API

The Python bindings provide a similar API with proper GIL handling.
#### Graph Construction
- `PyGraph()` - Create a new graph
- `graph.add(function, label, inputs, outputs)` - Add a node
  - `function`: Python callable with signature `fn(inputs: dict, variant_params: dict) -> dict`
  - `label`: Optional node name (str)
  - `inputs`: Optional list of `(broadcast_var, impl_var)` tuples or dict
  - `outputs`: Optional list of `(impl_var, broadcast_var)` tuples or dict
- `graph.branch(subgraph)` - Create a new parallel branch with a subgraph
- `graph.build()` - Build the DAG and return a `PyDag`
#### DAG Operations
- `dag.execute()` - Execute the graph and return the execution context (dict)
- `dag.execute_parallel()` - Execute with parallel execution where possible (dict)
- `dag.to_mermaid()` - Generate Mermaid diagram representation (str)
#### GIL Handling
The Python bindings are designed with proper GIL handling:
- **GIL Release**: The Rust executor runs without holding the GIL, allowing true parallelism
- **GIL Acquisition**: Python callables used as node functions acquire the GIL only during their execution
- **Thread Safety**: The bindings use `pyo3::prepare_freethreaded_python()` (via auto-initialize) for multi-threaded safety
This means that while Python functions execute sequentially (due to the GIL), the Rust graph traversal and coordination happens in parallel without GIL contention.
## Development

### Rust Development
Prerequisites:
- Rust (stable toolchain) installed: https://www.rust-lang.org/tools/install
Build and run tests:
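```bash
cargo build
cargo test
```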
Run examples:
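```bash
# Substitute an example name from the examples/ directory
cargo run --example <example_name>
```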
### Python Development
Prerequisites:
- Python 3.8+ installed
- Rust toolchain installed
Build Python bindings:
```bash
# Create virtual environment
python -m venv .venv && source .venv/bin/activate
# Install maturin
pip install maturin
# Build and install in development mode
maturin develop --release --features python
# Run Python example
python examples/<example_name>.py
```
Build wheel for distribution:
```bash
maturin build --release --features python
# Wheel will be in target/wheels/
```
## Publishing
This repository is configured with GitHub Actions workflows to automatically publish to crates.io and PyPI when a release tag is pushed.
### Required Repository Secrets
To enable automatic publishing, the repository owner must configure the following secrets in GitHub Settings → Secrets and variables → Actions:
- `CRATES_IO_TOKEN`: Your crates.io API token (obtain from https://crates.io/me)
- `PYPI_API_TOKEN`: Your PyPI API token (obtain from https://pypi.org/manage/account/token/)
### Publishing Process
The publish workflow (`.github/workflows/publish.yml`) will automatically run when:

- A tag matching `v*` is pushed (e.g., `v0.1.0`, `v1.0.0`)
- The workflow is manually triggered via `workflow_dispatch`
Creating a release:
```bash
# Ensure version numbers in Cargo.toml and pyproject.toml are correct
git tag v0.1.0 && git push origin v0.1.0
```
The workflow will:
- Build Python wheels for Python 3.8-3.11 on Linux, macOS, and Windows
- Upload wheel artifacts to the GitHub Actions run (always, even without secrets)
- Publish to PyPI (only if `PYPI_API_TOKEN` is set) - prebuilt wheels mean end users do not need Rust
- Publish to crates.io (only if `CRATES_IO_TOKEN` is set)
Important notes:
- Installing from PyPI with `pip install dagex` will not require Rust on the target machine because prebuilt platform-specific wheels are published
- Both crates.io and PyPI will reject duplicate version numbers - update versions before tagging
- The workflow will continue even if tokens are not set, allowing you to download artifacts for manual publishing
- For local testing, you can build wheels with `maturin build --release --features python`
### Manual Publishing
If you prefer to publish manually or need to publish from a local machine:
To crates.io:
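```bash
cargo publish
```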
To PyPI:
```bash
# Install maturin
pip install maturin
# Build and publish wheels
maturin publish --release --features python
```