# graph-sp
graph-sp is a pure Rust grid/node graph executor and optimizer. The project focuses on representing directed dataflow graphs, computing port mappings by graph inspection, and executing nodes efficiently in-process with parallel CPU execution.
## Core Features

- Implicit Node Connections: Nodes automatically connect based on execution order
- Parallel Branching: Create fan-out execution paths with `.branch()`
- Configuration Variants: Use `.variant()` to create parameter sweeps
- DAG Analysis: Automatic inspection and optimization of execution paths
- Mermaid Visualization: Generate diagrams with `.to_mermaid()`
- In-process Execution: Parallel execution using rayon
## Installation

### Rust

Add to your `Cargo.toml`:

```toml
[dependencies]
graph-sp = "0.1.0"
```
### Python

The library can also be used from Python via PyO3 bindings:
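Assuming the package name matches the wheel name used elsewhere in this README (`graph_sp`):

```bash
pip install graph_sp
```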
Or build from source:
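A source-build sketch using maturin (the tool this README uses for wheels; the `--features python` flag is taken from the local-testing note in the Publishing section):

```bash
# Install the build tool, then compile and install into the active environment
pip install maturin
maturin develop --release --features python
```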
## Quick Start

### Rust

#### Basic Sequential Pipeline

A minimal two-node sketch; the `add` arguments follow the Rust API overview below, so exact parameter types (e.g. `Option` wrapping) may differ:

```rust
use graph_sp::Graph;
use std::collections::HashMap;

type Vars = HashMap<String, String>;

// Source node: broadcasts "data"
fn data_source(_inputs: &Vars, _params: &Vars) -> Vars {
    let mut out = Vars::new();
    out.insert("data".to_string(), "5".to_string());
    out
}

// Processing node: receives "data" mapped onto its "x" input
fn multiply(inputs: &Vars, _params: &Vars) -> Vars {
    let x: i64 = inputs["x"].parse().unwrap();
    let mut out = Vars::new();
    out.insert("result".to_string(), (x * 2).to_string());
    out
}

fn main() {
    let mut graph = Graph::new();
    graph.add(data_source, Some("DataSource"), None, None);
    graph.add(
        multiply,
        Some("Multiply"),
        Some(vec![("data".to_string(), "x".to_string())]),
        None,
    );

    let dag = graph.build();
    let ctx = dag.execute();
    println!("{}", dag.to_mermaid());
}
```
### Python

#### Basic Sequential Pipeline

A sketch following the Python API overview below; positional argument order is illustrative:

```python
from graph_sp import PyGraph

def data_source(inputs, variant_params):
    return {"data": 5}

def multiply(inputs, variant_params):
    x = inputs["x"]
    return {"result": x * 2}

# Create graph
graph = PyGraph()

# Add source node
graph.add(data_source, "DataSource")

# Add processing node
graph.add(multiply, "Multiply", [("data", "x")])

# Build and execute
dag = graph.build()
result = dag.execute()
```
Mermaid visualization output:

```mermaid
graph TD
    0["DataSource"]
    1["Multiply"]
    0 -->|data → x| 1
```
### Parallel Branching (Fan-Out)

A sketch (node functions such as `source_fn` are placeholders; labels and port mappings match the Mermaid output below):

```rust
let mut graph = Graph::new();

// Source node
graph.add(source_fn, Some("Source"), None, None);

// Create parallel branches
graph.branch();
graph.add(stats_fn, Some("Statistics"), Some(vec![("data".into(), "input".into())]), None);

graph.branch();
graph.add(model_fn, Some("MLModel"), Some(vec![("data".into(), "input".into())]), None);

graph.branch();
graph.add(viz_fn, Some("Visualization"), Some(vec![("data".into(), "input".into())]), None);

let dag = graph.build();
```
Mermaid visualization output:

```mermaid
graph TD
    0["Source"]
    1["Statistics"]
    2["MLModel"]
    3["Visualization"]
    0 -->|data → input| 1
    0 -->|data → input| 2
    0 -->|data → input| 3
    style 1 fill:#e1f5ff
    style 2 fill:#e1f5ff
    style 3 fill:#e1f5ff
```
DAG Statistics:
- Nodes: 4
- Depth: 2 levels
- Max Parallelism: 3 nodes (all branches execute in parallel)
### Parameter Sweep with Variants

A sketch (the learning-rate values and the `"lr"` parameter name are illustrative; the four variants match the statistics below):

```rust
use graph_sp::Graph;

let mut graph = Graph::new();

// Source node
graph.add(source_fn, Some("DataSource"), None, None);

// Create variants for different learning rates
let learning_rates = vec!["0.001".to_string(), "0.01".to_string(), "0.1".to_string(), "1.0".to_string()];
graph.variant("lr", learning_rates);
graph.add(scale_fn, Some("ScaleLR"), Some(vec![("data".into(), "input".into())]), None);

let dag = graph.build();
```
Mermaid visualization output:

```mermaid
graph TD
    0["DataSource"]
    1["ScaleLR (v0)"]
    2["ScaleLR (v1)"]
    3["ScaleLR (v2)"]
    4["ScaleLR (v3)"]
    0 -->|data → input| 1
    0 -->|data → input| 2
    0 -->|data → input| 3
    0 -->|data → input| 4
    style 1 fill:#e1f5ff
    style 2 fill:#e1f5ff
    style 3 fill:#e1f5ff
    style 4 fill:#e1f5ff
    style 1 fill:#ffe1e1
    style 2 fill:#e1ffe1
    style 3 fill:#ffe1ff
    style 4 fill:#ffffe1
```
DAG Statistics:
- Nodes: 5
- Depth: 2 levels
- Max Parallelism: 4 nodes
- Variants: 4 (all execute in parallel)
## API Overview

### Rust API

#### Graph Construction
- `Graph::new()` - Create a new graph
- `graph.add(fn, name, inputs, outputs)` - Add a node
  - `fn`: Node function with signature `fn(&HashMap<String, String>, &HashMap<String, String>) -> HashMap<String, String>`
  - `name`: Optional node name
  - `inputs`: Optional vector of `(broadcast_var, impl_var)` tuples for input mappings
  - `outputs`: Optional vector of `(impl_var, broadcast_var)` tuples for output mappings
- `graph.branch()` - Create a new parallel branch
- `graph.variant(param_name, values)` - Create parameter sweep variants
- `graph.build()` - Build the DAG
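Under that signature, a node function is an ordinary function over string maps. A standalone illustration (no graph-sp types involved; the `factor` parameter name is invented for the example):

```rust
use std::collections::HashMap;

// A node function matching the documented signature:
// fn(&HashMap<String, String>, &HashMap<String, String>) -> HashMap<String, String>
fn multiply(
    inputs: &HashMap<String, String>,
    variant_params: &HashMap<String, String>,
) -> HashMap<String, String> {
    // Parse the mapped input and an optional variant parameter
    let x: f64 = inputs["x"].parse().unwrap_or(0.0);
    let factor: f64 = variant_params
        .get("factor")
        .and_then(|v| v.parse().ok())
        .unwrap_or(1.0);

    let mut out = HashMap::new();
    out.insert("result".to_string(), (x * factor).to_string());
    out
}

fn main() {
    let mut inputs = HashMap::new();
    inputs.insert("x".to_string(), "21".to_string());
    let mut params = HashMap::new();
    params.insert("factor".to_string(), "2".to_string());
    assert_eq!(multiply(&inputs, &params)["result"], "42");
}
```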
#### DAG Operations

- `dag.execute()` - Execute the graph and return the execution context
- `dag.stats()` - Get DAG statistics (nodes, depth, parallelism, branches, variants)
- `dag.to_mermaid()` - Generate a Mermaid diagram representation
### Python API

The Python bindings provide a similar API with proper GIL handling:

#### Graph Construction

- `PyGraph()` - Create a new graph
- `graph.add(function, label, inputs, outputs)` - Add a node
  - `function`: Python callable with signature `fn(inputs: dict, variant_params: dict) -> dict`
  - `label`: Optional node name (str)
  - `inputs`: Optional list of `(broadcast_var, impl_var)` tuples or dict
  - `outputs`: Optional list of `(impl_var, broadcast_var)` tuples or dict
- `graph.branch(subgraph)` - Create a new parallel branch with a subgraph
- `graph.build()` - Build the DAG and return a `PyDag`
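A node function is just a plain callable with that signature. This standalone sketch (the `factor` parameter name is invented for the example) shows the contract the executor expects:

```python
def scale(inputs: dict, variant_params: dict) -> dict:
    """Node function: multiply the mapped input by an optional variant parameter."""
    x = inputs["x"]
    factor = float(variant_params.get("factor", 1.0))
    return {"result": x * factor}

# The executor calls it with the resolved inputs and the active variant's parameters:
print(scale({"x": 10}, {"factor": "0.5"}))  # {'result': 5.0}
```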
#### DAG Operations

- `dag.execute()` - Execute the graph and return the execution context (dict)
- `dag.execute_parallel()` - Execute with parallel execution where possible (dict)
- `dag.to_mermaid()` - Generate a Mermaid diagram representation (str)
### GIL Handling

The Python bindings are designed with proper GIL handling:

- GIL Release: The Rust executor runs without holding the GIL, allowing true parallelism
- GIL Acquisition: Python callables used as node functions acquire the GIL only for the duration of their execution
- Thread Safety: The bindings use `pyo3::prepare_freethreaded_python()` (via auto-initialize) for multi-threaded safety
This means that while Python functions execute sequentially (due to the GIL), the Rust graph traversal and coordination happens in parallel without GIL contention.
## Development

### Rust Development

Prerequisites:

- Rust (stable toolchain) installed: https://www.rust-lang.org/tools/install
Build and run tests:
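The standard Cargo commands apply:

```bash
cargo build
cargo test
```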
Run examples:
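Examples run via Cargo's example runner (`basic_pipeline` is a hypothetical name; substitute one from the repository's `examples/` directory):

```bash
cargo run --example basic_pipeline
```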
### Python Development

Prerequisites:

- Python 3.8+ installed
- Rust toolchain installed
Build Python bindings:

```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install maturin
pip install maturin

# Build and install in development mode
maturin develop --release --features python

# Run Python example (adjust the path to an actual example script)
python examples/basic.py
```
Build wheel for distribution:

```bash
maturin build --release --features python
# Wheel will be in target/wheels/
```
## Publishing

This repository is configured with GitHub Actions workflows to automatically publish to crates.io and PyPI when a release tag is pushed.

### Required Repository Secrets

To enable automatic publishing, the repository owner must configure the following secrets in GitHub Settings → Secrets and variables → Actions:

- `CRATES_IO_TOKEN`: Your crates.io API token (obtain from https://crates.io/me)
- `PYPI_API_TOKEN`: Your PyPI API token (obtain from https://pypi.org/manage/account/token/)
### Publishing Process

The publish workflow (`.github/workflows/publish.yml`) will automatically run when:

- A tag matching `v*` is pushed (e.g., `v0.1.0`, `v1.0.0`)
- The workflow is manually triggered via `workflow_dispatch`
Creating a release:

```bash
# Ensure version numbers in Cargo.toml and pyproject.toml are correct
git tag v0.1.0
git push origin v0.1.0
```
The workflow will:

- Build Python wheels for Python 3.8-3.11 on Linux, macOS, and Windows
- Upload wheel artifacts to the GitHub Actions run (always, even without secrets)
- Publish to PyPI (only if `PYPI_API_TOKEN` is set) - prebuilt wheels mean end users do not need Rust
- Publish to crates.io (only if `CRATES_IO_TOKEN` is set)
Important notes:

- Installing from PyPI with `pip install graph_sp` will not require Rust on the target machine because prebuilt platform-specific wheels are published
- Both crates.io and PyPI reject duplicate version numbers - update versions before tagging
- The workflow continues even if tokens are not set, allowing you to download artifacts for manual publishing
- For local testing, you can build wheels with `maturin build --release --features python`
### Manual Publishing
If you prefer to publish manually or need to publish from a local machine:
To crates.io:
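The standard Cargo publish command applies:

```bash
cargo publish
```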
To PyPI:

```bash
# Install maturin
pip install maturin

# Build and publish wheels
maturin publish --features python
```