pipeflow - A lightweight, configuration-driven data pipeline framework
§Overview
pipeflow follows the classic ETL pattern:
Source → Transform → Sink

Pipelines are defined in YAML configuration files, requiring no code to set up common data processing workflows.
§Quick Start
```yaml
# pipeline.yaml
name: "api-to-db"

sources:
  - id: api_poller
    type: http_client
    config:
      url: "https://api.example.com/data"
      interval_secs: 60

transforms:
  - id: mapper
    type: remap
    input: api_poller
    config:
      mappings:
        price: "$.data.price"

sinks:
  - id: db_writer
    type: database
    input: mapper
    config:
      connection: "${DATABASE_URL}"
      table: metrics
```
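Running a pipeline from Rust is then a matter of loading the YAML and handing it to the engine. The entry points below (`PipelineConfig::from_path`, `Engine::build`, `Engine::run`) and the Tokio runtime are assumptions for illustration only; check the `config` and `engine` modules for the actual names in your version of the crate.

```rust
use pipeflow::Result;

#[tokio::main]
async fn main() -> Result<()> {
    // Hypothetical API: load the YAML definition shown above...
    let config = pipeflow::config::PipelineConfig::from_path("pipeline.yaml")?;

    // ...build the pipeline DAG and run it. The real constructor and
    // run method live in the `engine` module and may be named differently.
    let engine = pipeflow::engine::Engine::build(config)?;
    engine.run().await
}
```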
§Features

- Configuration-driven: Define pipelines in YAML
- Built-in DLQ: Dead Letter Queue for error handling
- Backpressure: Bounded channels with configurable strategies (illustrated in the sketch after this list)
- Disk Buffer: JSONL-based persistence for reliable delivery
- Fan-out: One source/transform can feed multiple downstream nodes
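Backpressure is what keeps a bounded pipeline safe: when a downstream stage falls behind, a bounded channel makes the producer wait instead of buffering without limit. pipeflow ships its own channel type in the `channel` module; the snippet below is not that API but illustrates the mechanism with tokio's bounded mpsc channel.

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // A bounded channel with capacity 2: at most two in-flight messages.
    let (tx, mut rx) = mpsc::channel::<u64>(2);

    tokio::spawn(async move {
        for i in 0..10 {
            // `send` waits whenever the buffer is full, slowing the
            // producer to the consumer's pace instead of growing memory.
            tx.send(i).await.unwrap();
        }
    });

    while let Some(i) = rx.recv().await {
        println!("got {i}");
    }
}
```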
§Re-exports
pub use error::Error;
pub use error::Result;
pub use message::DeadLetter;
pub use message::Message;
pub use message::MessageMeta;
§Modules

- buffer: Disk buffer for reliable message delivery
- channel: Bounded channel with backpressure support
- config: Pipeline configuration parsing
- engine: Pipeline engine for DAG construction and execution
- error: Error types for the pipeline
- message: Message types for the pipeline
- prelude: Prelude module for convenient imports
- sink: Sink trait and implementations
- source: Source trait and implementations
- transform: Transform trait and implementations (see the sketch after this list)
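The `source`, `transform`, and `sink` modules each expose a trait for writing custom stages. The sketch below only guesses at the shape of such a trait; the real `Transform` trait's name, method signature, and async-ness may differ, so the local `Transform` trait here is defined inside the snippet rather than imported from the crate.

```rust
use pipeflow::{Message, Result};

// Hypothetical stand-in for the trait in pipeflow's `transform` module,
// defined locally so the sketch is self-contained.
trait Transform {
    fn apply(&self, msg: Message) -> Result<Message>;
}

// A no-op stage; a real transform would remap or enrich the payload.
struct PassThrough;

impl Transform for PassThrough {
    fn apply(&self, msg: Message) -> Result<Message> {
        Ok(msg)
    }
}
```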