# Prodigy
Transform ad-hoc Claude sessions into reproducible development pipelines with parallel execution, automatic retry, and full state management.
## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Usage](#usage)
- [Examples](#examples)
- [Documentation](#documentation)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)
- [License](#license)
- [Acknowledgments](#acknowledgments)
## Features

- **Workflow Orchestration** - Define complex development workflows in simple YAML
- **Parallel Execution** - Run multiple Claude agents simultaneously with MapReduce
- **Automatic Retry** - Smart retry strategies with exponential backoff and circuit breakers
- **Full State Management** - Checkpoint and resume interrupted workflows exactly where they left off
- **Goal-Seeking** - Iterative refinement until specifications are met
- **Git Integration** - Automatic worktree isolation for every workflow execution, with commit tracking
- **Error Recovery** - Comprehensive failure handling with on-failure handlers
- **Analytics** - Cost tracking, performance metrics, and optimization recommendations
- **Extensible** - Custom validators, handlers, and workflow composition
- **Documentation** - Comprehensive man pages and a built-in help system
## Installation

### Using Cargo (Recommended)

```bash
cargo install prodigy
```

### From Source

```bash
# Clone the repository
git clone https://github.com/iepathos/prodigy.git
cd prodigy

# Build and install
cargo install --path .

# Optional: Install man pages
```
## Quick Start

Get up and running in under five minutes with these simple examples.

### Your First Workflow

- Initialize Prodigy in your project:

  ```bash
  prodigy init
  ```

- Create a simple workflow (`fix-tests.yml`):

  ```yaml
  name: fix-failing-tests
  steps:
    - shell: "cargo test"
      on_failure:
        claude: "/fix-test-failures"
        max_attempts: 3
  ```

- Run the workflow:

  ```bash
  prodigy run fix-tests.yml
  ```
### Parallel Execution Example

Process multiple files simultaneously with MapReduce:

```yaml
name: add-documentation
mode: mapreduce

setup:
  - shell: "find src -name '*.rs' -type f > files.json"

map:
  input: files.json
  agent_template:
    - claude: "/add-rust-docs ${item}"
  max_parallel: 10

reduce:
  - claude: "/summarize Documentation added to ${map.successful} files"
```

Run it with `prodigy run <workflow-file>`.
### Goal-Seeking Example

Iteratively improve code until all tests pass:

```yaml
name: achieve-full-coverage
steps:
  - goal_seek:
      goal: "Achieve 100% test coverage"
      command: "claude: /improve-test-coverage"
      validate: "cargo tarpaulin --print-summary | grep '100.00%'"
      max_attempts: 5
```
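A `goal_seek` step re-runs its `command` until `validate` succeeds or `max_attempts` is exhausted. The optimization example later in this README has `validate` emit a `score: N` line that is compared against a `threshold`; a minimal sketch of that convention (the `/fix-lints` Claude command is hypothetical):

```yaml
- goal_seek:
    goal: "Lint-clean codebase"
    command: "claude: /fix-lints"  # hypothetical Claude command
    # Emit a perfect score when clippy passes, zero otherwise
    validate: "cargo clippy -- -D warnings && echo 'score: 100' || echo 'score: 0'"
    threshold: 100
    max_attempts: 3
```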
## Usage

### Basic Commands

```bash
# Run a workflow
prodigy run workflow.yml

# Execute a single command with retries
prodigy exec "cargo test"

# Process files in parallel
prodigy batch "*.rs"

# Resume an interrupted workflow
prodigy resume <id>

# Goal-seeking operation
prodigy goal-seek

# View analytics and costs
prodigy analytics

# Manage worktrees (all workflow executions use isolated git worktrees by default)
prodigy worktree
```
### Advanced Workflows

#### Retry Configuration

```yaml
retry_defaults:
  attempts: 3
  backoff: exponential
  initial_delay: 2s
  max_delay: 30s
  jitter: true

steps:
  - shell: "deploy.sh"
    retry:
      attempts: 5
      backoff:
        fibonacci:
          initial: 1s
      retry_on:
      retry_budget: 5m
```
#### Environment Management

```yaml
env:
  NODE_ENV: production
  WORKERS:
    command: "nproc"
    cache: true

secrets:
  API_KEY: ${vault:api/keys/production}

steps:
  - shell: "npm run build"
    env:
      BUILD_TARGET: production
    working_dir: ./frontend
```
#### Workflow Composition

```yaml
imports:
  - path: ./common/base.yml
    alias: base

templates:
  test-suite:
    parameters:
      - name: language
        type: string
    steps:
      - shell: "${language} test"

workflows:
  main:
    extends: base.default
    steps:
      - use: test-suite
        with:
          language: cargo
```
### Git Context Variables

Prodigy automatically tracks git changes during workflow execution and provides context variables for accessing file changes, commits, and statistics.

#### Step-level Variables (Current Step)

- `${step.files_added}` - Files added in the current step
- `${step.files_modified}` - Files modified in the current step
- `${step.files_deleted}` - Files deleted in the current step
- `${step.files_changed}` - All files changed (added + modified + deleted)
- `${step.commits}` - Commit hashes created in the current step
- `${step.commit_count}` - Number of commits in the current step
- `${step.insertions}` - Lines inserted in the current step
- `${step.deletions}` - Lines deleted in the current step
#### Workflow-level Variables (Cumulative)

- `${workflow.files_added}` - All files added across the workflow
- `${workflow.files_modified}` - All files modified across the workflow
- `${workflow.files_deleted}` - All files deleted across the workflow
- `${workflow.files_changed}` - All files changed across the workflow
- `${workflow.commits}` - All commit hashes across the workflow
- `${workflow.commit_count}` - Total commits across the workflow
- `${workflow.insertions}` - Total lines inserted across the workflow
- `${workflow.deletions}` - Total lines deleted across the workflow
#### Pattern Filtering

Variables support pattern filtering using glob patterns:

```yaml
# Get only markdown files added
- shell: "echo '${step.files_added:*.md}'"

# Get only Rust source files modified
- claude: "/review ${step.files_modified:*.rs}"

# Get specific directory changes
- shell: "echo '${workflow.files_changed:src/*}'"
```
#### Format Modifiers

Control output format with modifiers:

```yaml
# JSON array format
- shell: "echo '${step.files_added:json}'"   # ["file1.rs", "file2.rs"]

# Newline-separated (for scripts)
- shell: "echo '${step.files_added:lines}'"  # file1.rs\nfile2.rs

# Comma-separated
- shell: "echo '${step.files_added:csv}'"    # file1.rs,file2.rs

# Space-separated (default)
- shell: "echo '${step.files_added}'"        # file1.rs file2.rs
```
#### Example Usage

```yaml
name: code-review-workflow
steps:
  # Make changes
  - claude: "/implement feature X"
    commit_required: true

  # Review only the changed Rust files
  - claude: "/review-code ${step.files_modified:*.rs}"

  # Generate changelog for markdown files
  - shell: "echo 'Changed docs:' && echo '${step.files_added:*.md:lines}'"

  # Conditional execution based on changes
  - shell: "cargo test"
    when: "${step.files_modified:*.rs}"  # Only run if Rust files changed

  # Summary at the end
  - claude: |
      /summarize-changes
      Total files changed: ${workflow.files_changed:json}
      Commits created: ${workflow.commit_count}
      Lines added: ${workflow.insertions}
      Lines removed: ${workflow.deletions}
```
### Workflow Syntax

#### Write File Command

The `write_file` command allows workflows to create files with content, supporting multiple formats with validation and automatic formatting.

**Basic syntax:**

```yaml
- write_file:
    path: "output/results.txt"
    content: "Processing complete!"
    format: text        # text, json, or yaml
    mode: "0644"        # Unix permissions (default: 0644)
    create_dirs: false  # Create parent directories (default: false)
```
**Supported formats:**

- **Text** (default) - Plain text with no processing:

  ```yaml
  - write_file:
      path: "logs/build.log"
      content: "Build started at ${timestamp}"
      format: text
  ```

- **JSON** - Validates and pretty-prints JSON:

  ```yaml
  - write_file:
      path: "output/results.json"
      content: '{"status": "success", "items_processed": ${map.total}}'
      format: json
      create_dirs: true
  ```

- **YAML** - Validates and formats YAML:

  ```yaml
  - write_file:
      path: "config/settings.yml"
      content: |
        environment: production
        server:
          port: 8080
          host: localhost
      format: yaml
  ```
**Variable interpolation:**

All fields support variable interpolation:

```yaml
# In the MapReduce map phase
- write_file:
    path: "output/${item.name}.json"
    content: '{"id": "${item.id}", "processed": true}'
    format: json
    create_dirs: true

# In the reduce phase
- write_file:
    path: "summary.txt"
    content: "Processed ${map.total} items, ${map.successful} successful"
    format: text
```
**Security features:**

- Path traversal protection (rejects paths containing `..`)
- JSON/YAML validation before writing
- Configurable file permissions (Unix systems only)
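For instance, a step like the following sketch would be rejected by the path check before anything is written (path and content are illustrative):

```yaml
# Rejected: the path contains '..' and would escape the project root
- write_file:
    path: "../outside/escape.txt"
    content: "never written"
    format: text
```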
**Common use cases:**

- Aggregating MapReduce results:

  ```yaml
  reduce:
    - write_file:
        path: "results/summary.json"
        content: '{"total": ${map.total}, "successful": ${map.successful}, "failed": ${map.failed}}'
        format: json
  ```

- Generating configuration files:

  ```yaml
  - write_file:
      path: ".config/app.yml"
      content: |
        name: ${PROJECT_NAME}
        version: ${VERSION}
        features:
          - authentication
          - caching
      format: yaml
  ```

- Creating executable scripts:

  ```yaml
  - write_file:
      path: "scripts/deploy.sh"
      content: |
        #!/bin/bash
        echo "Deploying ${APP_NAME}"
        ./deploy.sh --env production
      mode: "0755"
      create_dirs: true
  ```
#### Validation and Error Recovery

Prodigy supports multi-step validation and error recovery with two formats.

**Array format** (for simple command sequences):

```yaml
validate:
  - shell: "prep-command-1"
  - shell: "prep-command-2"
  - claude: "/validate-result"
```

**Object format** (when you need metadata like `threshold`, `max_attempts`, etc.):

```yaml
validate:
  commands:
    - shell: "prep-command-1"
    - shell: "prep-command-2"
    - claude: "/validate-result"
  result_file: "validation-results.json"
  threshold: 75  # Validation must score at least 75/100
  on_incomplete:
    commands:
      - claude: "/fix-gaps --gaps ${validation.gaps}"
      - shell: "rebuild-and-revalidate.sh"
    max_attempts: 3
    fail_workflow: false
```

**Key points:**

- Use the array format when you only need to run commands
- Use the object format when you need to set `threshold`, `result_file`, `max_attempts`, or `fail_workflow`
- Fields like `threshold` and `max_attempts` belong at the config level, not on individual commands
- `on_incomplete` supports the same two formats (array, or object with `commands:`)
**Example: multi-step validation workflow**

```yaml
- claude: "/implement-feature spec.md"
  commit_required: true
  validate:
    commands:
      - shell: "cargo test"
      - shell: "cargo clippy"
      - claude: "/validate-implementation spec.md"
    result_file: ".prodigy/validation.json"
    threshold: 90
    on_incomplete:
      commands:
        - claude: "/fix-issues --gaps ${validation.gaps}"
        - shell: "cargo test"
      max_attempts: 5
      fail_workflow: true
```
### Configuration

Prodigy looks for configuration in these locations (in order):

1. `.prodigy/config.yml` - Project-specific configuration
2. `~/.config/prodigy/config.yml` - User configuration
3. `/etc/prodigy/config.yml` - System-wide configuration

Example configuration:

```yaml
# .prodigy/config.yml
claude:
  model: claude-3-opus
  max_tokens: 4096

worktree:
  max_parallel: 20
  cleanup_policy:
    idle_timeout: 300
    max_age: 3600

retry:
  default_attempts: 3
  default_backoff: exponential

storage:
  events_dir: ~/.prodigy/events
  state_dir: ~/.prodigy/state
```
## Examples

### Example 1: Automated Testing Pipeline

Fix all test failures automatically with intelligent retry:

```yaml
name: test-pipeline
steps:
  - shell: "cargo test"
    on_failure:
      - claude: "/analyze-test-failure ${shell.output}"
      - claude: "/fix-test-failure"
      - shell: "cargo test"
    retry:
      attempts: 3
      backoff: exponential

  - shell: "cargo fmt -- --check"
    on_failure: "cargo fmt"

  - shell: "cargo clippy -- -D warnings"
    on_failure:
      claude: "/fix-clippy-warnings"
```
### Example 2: Parallel Code Analysis

Analyze and improve multiple files concurrently:

```yaml
name: parallel-analysis
mode: mapreduce

setup:
  - shell: |
      find . -name "*.rs" -exec wc -l {} + |
        sort -rn |
        head -20 |
        awk '{print $2}' > complex-files.json

map:
  input: complex-files.json
  agent_template:
    - claude: "/analyze-complexity ${item}"
    - claude: "/suggest-refactoring ${item}"
    - shell: "cargo test --lib $(basename ${item} .rs)"
  max_parallel: 10

reduce:
  - claude: "/generate-refactoring-report ${map.results}"
  - shell: "echo 'Analyzed ${map.total} files, ${map.successful} successful'"
```
### Example 3: Goal-Seeking Optimization

Iteratively improve performance until benchmarks pass:

```yaml
name: performance-optimization
steps:
  - goal_seek:
      goal: "Reduce benchmark time below 100ms"
      command: "claude: /optimize-performance benches/main.rs"
      validate: |
        cargo bench --bench main |
          grep "time:" |
          awk '{print ($2 < 100) ? "score: 100" : "score: " int(100 - $2)}'
      threshold: 100
      max_attempts: 10
      timeout: 1800

  - shell: "cargo bench --bench main > benchmark-results.txt"
  - claude: "/document-optimization benchmark-results.txt"
```
## Documentation

Full documentation is available at https://iepathos.github.io/prodigy.

### Building Documentation Locally

```bash
# Install mdBook
cargo install mdbook

# Serve with live reload
mdbook serve
```

### Additional Resources

- Workflow Syntax (Single Page) - Complete syntax reference in one file
- Architecture - System design and internals
- Contributing Guide - How to contribute to Prodigy
- Man Pages - Unix-style manual pages for all commands
### Quick Reference

| Command | Description |
|---|---|
| `prodigy run <workflow>` | Execute a workflow |
| `prodigy exec <command>` | Run a single command |
| `prodigy batch <pattern>` | Process files in parallel |
| `prodigy resume <id>` | Resume an interrupted workflow |
| `prodigy goal-seek` | Run a goal-seeking operation |
| `prodigy analytics` | View session analytics |
| `prodigy worktree` | Manage git worktrees |
| `prodigy init` | Initialize Prodigy in a project |
## Troubleshooting

### Common Issues and Solutions

**Slow or resource-bound runs:**

- Check parallel execution limits
- Enable verbose mode to identify bottlenecks (the `-v` flag also enables Claude streaming JSON output for debugging Claude interactions)
- Review analytics for optimization opportunities with `prodigy analytics`
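If runs are resource-bound, lowering `max_parallel` in the map phase is usually the first lever to try; a sketch, reusing the MapReduce shape shown earlier (the `/process` command is illustrative):

```yaml
map:
  input: files.json
  agent_template:
    - claude: "/process ${item}"  # illustrative command
  max_parallel: 4                 # reduced to ease CPU/memory pressure
```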
**Interrupted workflows:** Prodigy automatically creates checkpoints. List the available checkpoints, then resume from the latest (or resume a specific workflow) with `prodigy resume <id>`.

**Failed MapReduce items:** view the failed items, then reprocess them.

**Configuration problems:** check configuration precedence: the effective configuration merges `.prodigy/config.yml`, `~/.config/prodigy/config.yml`, and `/etc/prodigy/config.yml`, in that order. Validate the configuration after changes.

**Missing man pages:** install the man pages manually, either system-wide or to a user directory.

**Debug logging:** set the log level (for example `RUST_LOG=debug`) and review the detailed events.
### Claude Output Visibility

Prodigy provides fine-grained control over Claude interaction visibility:

- **Default behavior (no flags)** - Shows progress and results, but no Claude JSON streaming output
- **Verbose mode (`-v`)** - Shows Claude streaming JSON output for debugging interactions
- **Debug mode (`-vv`) and trace mode (`-vvv`)** - Also show Claude streaming output, plus additional internal logs
- **Force Claude output (environment override)** - Setting `PRODIGY_CLAUDE_CONSOLE_OUTPUT=true` shows Claude streaming output regardless of verbosity level

This lets you keep normal runs clean while enabling detailed debugging when needed.
### Getting Help

- Report Issues
- Discussions
- Email Support
## Contributing

We welcome contributions! Please see our Contributing Guide for details.

### Quick Start for Contributors

```bash
# Fork and clone the repository
git clone https://github.com/<your-username>/prodigy.git
cd prodigy

# Set up development environment
cargo build

# Run with verbose output
RUST_LOG=debug cargo run

# Before submitting PR
cargo test && cargo fmt && cargo clippy
```
### Areas We Need Help

- Package manager distributions (brew, apt, yum)
- Internationalization and translations
- Documentation and examples
- Testing and bug reports
- Performance optimizations
- UI/UX improvements
## License

Prodigy is licensed under the MIT License. See LICENSE for details.

## Acknowledgments

Prodigy builds on the shoulders of giants:

- Claude Code CLI - The AI pair programmer that powers Prodigy
- Tokio - Async runtime for Rust
- Clap - Command-line argument parsing
- Serde - Serialization framework

Special thanks to all contributors who have helped make Prodigy better!