IronQ
On-chain job queue & task scheduler for Solana.
IronQ reimagines Web2 distributed task queues (Celery, BullMQ, Amazon SQS) as an on-chain state machine with cryptoeconomic guarantees. Instead of relying on trusted infrastructure, jobs are posted, claimed, executed, and verified entirely through Solana program instructions — with staking, slashing, and permissionless crank rewards replacing centralized monitoring.
Program ID: J6GjTDeKugyMhFEuTYnEbsBCVSHsPimmbHcURbJ3wtrQ (Devnet)
Quickstart (Devnet)
```shell
# 1. Build
anchor build && cargo build --release

# 2. Configure
solana config set --url devnet && solana airdrop 2

# 3. Initialize queue (one-time)
ironq init

# 4. Register as a worker
ironq worker register

# 5. Full job lifecycle
ironq job create && ironq job claim && ironq job submit && ironq job approve
```
Table of Contents
- Web2 vs IronQ
- Architecture
- State Machine
- Account Structure
- PDA Derivation
- Instruction Set
- Economic Model
- Tradeoffs & Constraints
- CLI Usage
- Setup & Development
- Testing
- Devnet Deployment
- Composability (CPI)
- Upgrade & Migration Strategy
- FAQ
Web2 vs IronQ
How a Job Queue Works in Web2
Traditional task queue systems follow a broker-worker architecture:
Web2 Task Queue (Celery / BullMQ / SQS)
┌──────────┐ ┌──────────────────┐ ┌───────────┐
│ Producer │──push──▶│ Message Broker │──pull──▶│ Worker │
│ (App) │ │ (Redis/RabbitMQ) │ │ (Process) │
└──────────┘ └──────────────────┘ └───────────┘
│ │
┌──────┘ ┌─────┘
▼ ▼
┌───────────┐ ┌───────────┐
│ Result │ │ Dead │
│ Backend │ │ Letter Q │
└───────────┘ └───────────┘
- A producer pushes a job (serialized function call + args) into a broker.
- A worker pulls the job, executes it, and writes the result to a backend (Redis, Postgres).
- Failure handling relies on DLQs, exponential backoff, and human monitoring.
- Trust assumptions: The broker, workers, and result backend are all trusted infrastructure controlled by a single operator.
How IronQ Reimagines This On-Chain
IronQ replaces every centralized component with on-chain equivalents:
IronQ On-Chain Architecture
┌──────────┐ ┌──────────────────┐ ┌───────────┐
│ Creator │──tx────▶│ Solana Program │◀──tx───│ Worker │
│ (Wallet) │ │ (State Machine) │ │ (Wallet) │
└──────────┘ └──────────────────┘ └───────────┘
│ │ │
┌──────────┘ │ └──────────┐
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Queue │ │ Job │ │ Vault │
│ Config │ │ Accounts │ │ (SPL) │
│ (PDA) │ │ (PDAs) │ │ (PDA) │
└───────────┘ └───────────┘ └───────────┘
| Web2 Concept | IronQ Equivalent |
|---|---|
| Message Broker (Redis/RabbitMQ) | QueueConfig PDA account |
| Job Payload | Job PDA with data_hash (off-chain data, on-chain commitment) |
| Worker Process | Wallet with Worker PDA (staked, tracked) |
| Result Backend | JobResult PDA with result_hash |
| ACK/NACK | claim_job / approve_result / dispute_result instructions |
| Dead Letter Queue | Expired status + retry counter |
| Monitoring/Alerting | Permissionless crank reclaim_expired (anyone can call) |
| Admin Dashboard | On-chain account reads + CLI |
| Trust Model | Trustless — cryptoeconomic incentives replace trust |
| Redis BRPOP (blocking pop) | claim_job instruction |
| celery.task.retry() | reclaim_expired with max_retries > 0 |
| Database row lock | PDA ownership + JobAlreadyClaimed error |
| Webhook callback | Event emission (CPI logs) |
The key insight: Instead of trusting operators, IronQ aligns incentives with tokens. Workers stake to prove commitment, lose stake if they fail, and earn rewards when they deliver. Anyone can crank expired jobs for a reward, replacing the need for centralized monitoring infrastructure.
Solana 101 (for Web2 developers)
If you're coming from Web2, a few Solana concepts are essential for understanding IronQ:
- Accounts are Solana's storage primitive — think of each account as a small database row identified by a public key. Programs (smart contracts) read and write to accounts; they don't have internal storage.
- Program Derived Addresses (PDAs) are deterministic account addresses computed from a program ID and a set of "seeds" (e.g., ["job", queue_key, job_id]). This replaces database primary keys — given the seeds, anyone can derive the address without a lookup table.
- Transactions contain one or more instructions, each targeting a specific program. Every account an instruction touches must be declared upfront, which enables Solana's parallel execution engine.
- SPL Tokens are Solana's token standard (like ERC-20). Token balances live in "token accounts" owned by the SPL Token program, not in the token contract itself.
- Rent is a deposit required to keep an account alive. IronQ's close_job instruction reclaims this deposit when a job is finished.
Architecture
System Overview
┌─────────────────────────────────────────────────────────────────┐
│ IronQ Program │
│ │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────────────────┐ │
│ │ Queue │ │ Worker │ │ Job Lifecycle │ │
│ │ Management │ │ Operations │ │ │ │
│ │ │ │ │ │ create ──▶ claim ──▶ │ │
│ │ init │ │ register │ │ submit ──▶ approve │ │
│ │ update │ │ stake │ │ │ │ │
│ │ pause │ │ deregister │ │ dispute ──▶ resolve │ │
│ └─────────────┘ └──────────────┘ └────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Permissionless Cranks │ │
│ │ │ │
│ │ reclaim_expired (slash + reward cranker) │ │
│ │ close_job (reclaim rent from terminal) │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ SPL Token Vault (PDA) │ │
│ │ │ │
│ │ Holds: worker stakes + job rewards │ │
│ │ Authority: vault PDA itself (self-custodial) │ │
│ └──────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
Data Flow
Creator Program Worker
│ │ │
│ create_job(reward) │ │
│──────────────────────────▶ SPL transfer to vault │
│ │ │
│ │ claim_job() │
│ │◀──────────────────────────│
│ │ set status=Claimed │
│ │ set deadline │
│ │ │
│ │ submit_result(hash) │
│ │◀──────────────────────────│
│ │ create JobResult PDA │
│ │ │
│ approve_result() │ │
│──────────────────────────▶ vault → worker token │
│ │ status=Completed │
│ │ │
│ close_job() │ │
│──────────────────────────▶ reclaim rent │
│ │ │
Happy Path Sequence Diagram
Creator Program/Vault Worker Cranker
│ │ │ │
│ create_job ────▶│ │ │
│ (reward $$$) │ │ │
│ │◀── claim_job ────│ │
│ │ (set deadline) │ │
│ │ │ │
│ │◀── submit_result │ │
│ │ (result_hash) │ │
│ │ │ │
│ approve ───────▶│ │ │
│ │── reward ───────▶│ │
│ │ │ │
│ close_job ─────▶│ │ │
│ (reclaim rent) │ │ │
State Machine
Jobs transition through 7 states:
┌──────────┐
cancel_job │ │ create_job
┌──────────────│ CANCELLED│◀─────────────┐
│ │ │ │
│ └──────────┘ │
│ │
│ ┌──────────┐ │
└──────────────│ │───────────────┘
│ OPEN │
┌─────│ │◀──── retry (reclaim_expired
│ └──────────┘ when retry_count < max_retries)
│ │
│ claim_job
│ │
│ ▼
│ ┌──────────┐
│ │ │──── deadline passes ──▶ reclaim_expired
│ │ CLAIMED │ │
│ │ │ ▼
│ └──────────┘ ┌──────────────┐
│ │ │ EXPIRED │
│ submit_result │ (terminal) │
│ │ └──────────────┘
│ ▼ ▲
│ ┌──────────┐ │
│ │ │ resolve_dispute
│ │SUBMITTED │ (worker loses)
│ │ │ │
│ └──────────┘ ┌──────────────┐
│ │ │ │ DISPUTED │
│ │ └── dispute_result ──▶│ │
│ │ └──────────────┘
│ approve_result │
│ │ resolve_dispute
│ ▼ (worker wins)
│ ┌───────────┐ │
│ │ │◀────────────────────────────┘
└──│ COMPLETED │
│ (terminal)│
└───────────┘
Terminal states: Completed, Expired, Cancelled — only these can be closed to reclaim rent.
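The diagram above condenses into a transition table. Here is a minimal TypeScript sketch (status and instruction names are from this section; the branching for reclaim_expired and resolve_dispute follows the Economic Model section — this is an off-chain model of the rules, not the program's code):

```typescript
// Job states and the statically-known legal transitions from the diagram.
type JobStatus = "Open" | "Claimed" | "Submitted" | "Disputed" | "Completed" | "Expired" | "Cancelled";

const TRANSITIONS: Record<string, [JobStatus, JobStatus]> = {
  // instruction: [from, to]
  cancel_job: ["Open", "Cancelled"],
  claim_job: ["Open", "Claimed"],
  submit_result: ["Claimed", "Submitted"],
  approve_result: ["Submitted", "Completed"],
  dispute_result: ["Submitted", "Disputed"],
};

// Only these states can be closed to reclaim rent.
const TERMINAL: JobStatus[] = ["Completed", "Expired", "Cancelled"];

// reclaim_expired branches on the retry counter: re-open or terminal expiry.
function reclaimExpired(retryCount: number, maxRetries: number): JobStatus {
  return retryCount < maxRetries ? "Open" : "Expired";
}

// resolve_dispute branches on the arbiter's ruling.
function resolveDispute(workerWins: boolean): JobStatus {
  return workerWins ? "Completed" : "Expired";
}

function canTransition(instr: string, current: JobStatus): boolean {
  const t = TRANSITIONS[instr];
  return t !== undefined && t[0] === current;
}
```

A rejected move in this model (e.g., claiming a Claimed job) corresponds on-chain to an error such as JobAlreadyClaimed.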
Account Structure
QueueConfig
The central configuration account for a job queue.
| Field | Type | Description |
|---|---|---|
| authority | Pubkey | Queue owner, can update config and pause |
| reward_mint | Pubkey | SPL token used for rewards and staking |
| arbiter | Pubkey | Designated dispute resolver |
| vault | Pubkey | PDA token account holding stakes + rewards |
| min_worker_stake | u64 | Minimum tokens a worker must stake |
| job_timeout | i64 | Seconds after claim before a job can be expired |
| slash_rate_bps | u16 | Basis points of stake slashed on failure |
| crank_reward_bps | u16 | Basis points of slash amount paid to cranker |
| total_jobs_created | u64 | Monotonic job counter (used in PDA seeds) |
| total_active_workers | u32 | Count of registered workers |
| max_concurrent_jobs | u8 | Max jobs a worker can hold simultaneously |
| is_paused | bool | Pause flag — blocks new jobs and claims |
Job
Represents a single unit of work.
| Field | Type | Description |
|---|---|---|
| queue | Pubkey | Parent queue |
| job_id | u64 | Unique ID (from queue counter) |
| creator | Pubkey | Who posted the job |
| worker | Pubkey | Who claimed it (default if unclaimed) |
| status | JobStatus | Current state (enum, 7 variants) |
| reward_amount | u64 | Tokens escrowed in vault for this job |
| data_hash | [u8; 32] | Blake3 hash of off-chain job payload |
| created_at | i64 | Unix timestamp |
| claimed_at | i64 | Unix timestamp (0 if unclaimed) |
| deadline | i64 | claimed_at + job_timeout |
| priority | u8 | 0=Low, 1=Medium, 2=High (see note below) |
| max_retries | u8 | How many times to re-open after expiry |
| retry_count | u8 | Current retry count |
Note on Priority: The priority field (0=Low, 1=Medium, 2=High) is stored as metadata only. There is no on-chain priority queue or ordering mechanism — workers freely choose which open jobs to claim. Priority serves as a signal to off-chain clients and UIs that can sort/filter jobs by priority level.
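Since ordering happens off-chain, a client can sort fetched jobs itself. A small sketch (the OpenJob shape is abbreviated to just the fields used here):

```typescript
// Off-chain priority ordering: highest priority first,
// oldest first within the same tier.
interface OpenJob { jobId: number; priority: number; createdAt: number } // priority: 0=Low, 1=Medium, 2=High

function sortByPriority(jobs: OpenJob[]): OpenJob[] {
  return [...jobs].sort((a, b) => b.priority - a.priority || a.createdAt - b.createdAt);
}
```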
Worker
Tracks a registered worker's state.
| Field | Type | Description |
|---|---|---|
| queue | Pubkey | Parent queue |
| wallet | Pubkey | Worker's signing wallet |
| stake_amount | u64 | Tokens currently staked |
| jobs_completed | u32 | Lifetime successes |
| jobs_failed | u32 | Lifetime failures |
| total_earned | u64 | Lifetime reward earnings |
| active_jobs | u8 | Currently held jobs |
| max_concurrent_jobs | u8 | Hard cap (3) |
| is_active | bool | Active flag |
| registered_at | i64 | Unix timestamp |
JobResult
Created when a worker submits results.
| Field | Type | Description |
|---|---|---|
| job | Pubkey | Parent job |
| worker | Pubkey | Who submitted |
| result_hash | [u8; 32] | Hash of the off-chain result data |
| submitted_at | i64 | Unix timestamp |
PDA Derivation
All accounts are Program Derived Addresses (PDAs), ensuring deterministic addresses and removing the need for account registries:
| Account | Seeds | Rationale |
|---|---|---|
| Queue | ["queue", authority] | One queue per authority |
| Vault | ["vault", queue] | One vault per queue |
| Job | ["job", queue, job_id_le_bytes] | Unique per queue + monotonic ID |
| Worker | ["worker", queue, wallet] | One registration per worker per queue |
| Result | ["result", job] | One result per job |
job_id_le_bytes is the 8-byte little-endian representation of the u64 job ID.
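As a concrete example, the Job seeds can be assembled client-side. This sketch builds only the seed byte arrays; deriving the actual address additionally requires the bump search performed by PublicKey.findProgramAddressSync in @solana/web3.js (omitted here to keep the example dependency-free):

```typescript
// Build the seed array for a Job PDA: ["job", queue, job_id_le_bytes].
// `queue` is the 32-byte QueueConfig address.
function jobSeeds(queue: Uint8Array, jobId: bigint): Uint8Array[] {
  const idLe = new Uint8Array(8);
  new DataView(idLe.buffer).setBigUint64(0, jobId, true); // u64, little-endian
  return [new TextEncoder().encode("job"), queue, idLe];
}
```

With @solana/web3.js this becomes PublicKey.findProgramAddressSync(jobSeeds(queue.toBytes(), jobId), programId).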
Instruction Set
Queue Management (Authority Only)
| Instruction | Description |
|---|---|
| initialize_queue | Create queue + vault, set config params (including max_concurrent_jobs) |
| update_queue_config | Modify arbiter, stake, timeout, slash/crank rates |
| toggle_queue_pause | Flip is_paused flag |
Worker Operations
| Instruction | Signer | Description |
|---|---|---|
| register_worker | Worker | Stake tokens, create Worker PDA |
| increase_stake | Worker | Add more tokens to existing stake |
| deregister_worker | Worker | Reclaim full stake (requires 0 active jobs) |
Job Lifecycle
| Instruction | Signer | Description |
|---|---|---|
| create_job | Creator | Escrow reward, create Job PDA |
| cancel_job | Creator | Cancel unclaimed job, reclaim reward |
| claim_job | Worker | Claim open job, set deadline |
| submit_result | Worker | Submit result hash, create Result PDA |
| approve_result | Creator | Accept result, release reward to worker |
| dispute_result | Creator | Dispute result, require arbiter resolution |
| resolve_dispute | Arbiter | Rule for/against worker |
Permissionless Cranks
| Instruction | Signer | Description |
|---|---|---|
| reclaim_expired | Anyone | Expire overdue job, slash worker, earn crank reward |
| close_job | Creator | Close terminal job + result accounts, reclaim rent |
Economic Model
Token Flow
Creator Worker
│ │
│ create_job ──▶ reward ──▶ ┌───────┐
│ │ │
│ │ VAULT │ ◀── register_worker (stake)
│ │ │
│ approve ◀── reward ──────▶│ │──▶ reward to worker
│ └───────┘
│ │
│ If expired: │
│ ◀── reward + slash share │ slash deducted from stake
│ │
Cranker ◀── crank reward (% of slash) │
Slashing
When a worker fails to deliver before the deadline:
- Slash amount = worker_stake * slash_rate_bps / 10,000
- Crank reward = slash_amount * crank_reward_bps / 10,000
- Creator share = slash_amount - crank_reward
Example with slash_rate_bps = 1000 (10%), crank_reward_bps = 2000 (20% of slash):
- Worker staked 100 tokens → slashed 10 tokens
- Cranker gets 2 tokens (20% of 10)
- Creator gets 8 tokens (remaining slash)
- Creator also gets their original reward back (if terminal expiry)
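The worked example above can be expressed directly in code. A sketch of the split (bigint integer math; the on-chain program uses checked arithmetic in utils.rs, which this sketch does not reproduce):

```typescript
// Slash split on expiry: worker loses `slash`, the cranker takes a cut,
// and the creator receives the remainder. All rates are in basis points.
const BPS_DENOMINATOR = 10_000n;

function splitSlash(stake: bigint, slashRateBps: bigint, crankRewardBps: bigint) {
  const slash = (stake * slashRateBps) / BPS_DENOMINATOR;         // taken from worker stake
  const crankReward = (slash * crankRewardBps) / BPS_DENOMINATOR; // paid to the cranker
  const creatorShare = slash - crankReward;                       // remainder to the creator
  return { slash, crankReward, creatorShare };
}
```

splitSlash(100n, 1000n, 2000n) reproduces the example: slash 10, crank reward 2, creator share 8.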
Retry Behavior
If retry_count < max_retries on expiry:
- Job re-opens as Open (reward stays in vault for the next worker)
- Worker still gets slashed
- Cranker still gets paid

If retries are exhausted:
- Job goes to Expired (terminal)
- Full reward returned to creator
Dispute Resolution
The arbiter (designated at queue creation) can resolve disputes:
- Worker wins: reward released to worker, status → Completed
- Worker loses: worker slashed, reward + full slash returned to creator, status → Expired
Tradeoffs & Constraints
Solana-Specific Constraints
| Constraint | Impact | Mitigation |
|---|---|---|
| 10KB account size limit | Job payloads stored off-chain | Only a 32-byte data_hash stored on-chain |
| 1,232-byte transaction size limit | Cannot batch many operations | Each instruction is a single focused operation |
| Clock granularity | Clock::unix_timestamp has ~1s precision | Timeout values should be minutes, not seconds |
| Rent exemption | Every PDA costs SOL to create | close_job instruction reclaims rent from terminal jobs |
| No cron/scheduler | Expired jobs won't auto-expire | Permissionless reclaim_expired crank with economic incentive |
Design Decisions
| Decision | Rationale |
|---|---|
| One queue per authority | Simplifies PDA derivation; create multiple with different wallets |
| Max 3 concurrent jobs per worker | Prevents overcommitment, bounds compute |
| Vault PDA as its own authority | Self-custodial — no admin key can drain funds |
| data_hash instead of data | Solana accounts are expensive; store payloads on Arweave/IPFS |
| Separate Result account | Keeps Job account fixed-size; result can be closed independently |
| Basis points for rates | Integer math, no floating point, max precision of 0.01% |
| reward_mint immutability | The token mint cannot be changed after queue creation. Changing it would orphan existing rewards and stakes in the vault. Create a new queue to use a different token. |
| max_concurrent_jobs snapshot | A worker's job limit is copied from queue config at registration time and not updated retroactively. This prevents the admin from restricting active workers mid-operation. New workers get the updated value. |
Compute Unit Usage
All IronQ instructions fit comfortably within Solana's default 200,000 CU limit per instruction — no ComputeBudgetProgram.setComputeUnitLimit is required.
- initialize_queue is the most expensive at ~24,000 CU due to PDA creation and vault initialization.
- claim_job is the lightest at ~6,500 CU (a single account status update).
- Most other instructions (create_job, submit_result, cancel_job, etc.) fall in the 10,000-25,000 CU range.
- Token transfers via CPI (used in approve_result, reclaim_expired, resolve_dispute) add ~4,500 CU per SPL Token call.
- For transactions that batch multiple IronQ instructions, the 1.4M CU per-transaction limit provides ample headroom; in practice the 1,232-byte transaction size cap binds before compute does.
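To see how much headroom the per-transaction budget leaves, a back-of-the-envelope sketch using the CU estimates above (these figures are this section's estimates, not fresh measurements):

```typescript
// Compute-side ceiling on batching IronQ calls into one transaction.
const TX_CU_LIMIT = 1_400_000;  // per-transaction compute cap
const CLAIM_JOB_CU = 6_500;     // lightest instruction (estimate above)
const SPL_TRANSFER_CU = 4_500;  // extra cost per token-transfer CPI (estimate above)

function maxCalls(perCallCu: number): number {
  return Math.floor(TX_CU_LIMIT / perCallCu);
}
```

This gives only the compute-side ceiling; the transaction's byte-size limit can cap batching first.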
Latency Comparison
| Operation | Web2 (Celery + Redis) | IronQ (Solana) |
|---|---|---|
| Job dispatch | < 1ms | ~400ms (1 slot) |
| Job claiming | Instant (push model) | ~400ms (worker pulls) |
| Result confirmation | Instant (in-process) | ~400ms + finality (~6s to optimistic confirmation, ~13s to finalized) |
| End-to-end cycle | < 10ms | ~2-15 seconds |
Key insight: IronQ trades latency for trustlessness. In Web2, the broker pushes tasks to workers over a private network in sub-millisecond time. In IronQ, workers pull tasks from a global ledger with ~400ms slot times. This is acceptable for task markets (bounties, freelance work, compute jobs) where tasks take minutes/hours to complete and the latency of assignment is negligible. It is NOT suitable for real-time event processing or microservice communication.
Cost Comparison
| Operation | AWS SQS | IronQ (Solana) |
|---|---|---|
| Send message | $0.40 / million | ~$0.000005 (1 tx fee) + rent deposit |
| Receive message | $0.40 / million | ~$0.000005 (claim tx) |
| Storage | Free (4-day retention) | ~0.002 SOL rent per job account (~181 bytes) |
| Monthly cost (10k jobs) | ~$0.01 | ~$0.10 (tx fees) + ~20 SOL (rent, reclaimable) |
Key insight: Solana transactions are extremely cheap, but rent-exempt account creation adds a fixed cost per job. Unlike SQS where messages are ephemeral, IronQ jobs are persistent on-chain accounts. Rent is recoverable via close_job after completion, making the true cost just transaction fees. For high-volume ephemeral tasks, SQS wins on cost. For tasks requiring auditability, escrow, and trustless execution, IronQ's costs are justified.
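The per-job rent figure can be reproduced from Solana's default rent parameters (3,480 lamports per byte-year, 128 bytes of per-account storage overhead, and a 2-year exemption threshold; these are the current defaults and could change):

```typescript
// Rent-exempt minimum balance for an account, per Solana's default rent schedule.
const LAMPORTS_PER_BYTE_YEAR = 3_480; // default rent rate
const ACCOUNT_STORAGE_OVERHEAD = 128; // bytes charged on top of the data length
const EXEMPTION_YEARS = 2;            // rent-exemption threshold

function rentExemptLamports(dataLen: number): number {
  return (dataLen + ACCOUNT_STORAGE_OVERHEAD) * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_YEARS;
}
```

rentExemptLamports(181) returns 2,150,640 lamports, about 0.00215 SOL, matching the ~0.002 SOL per job account above.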
Throughput
| Metric | Redis (Celery broker) | Solana (IronQ) |
|---|---|---|
| Operations/sec | 100,000+ | ~400 TPS per account (single-writer lock) |
| Concurrent queues | Unlimited (namespace-based) | 1 queue per authority (each independent) |
| Horizontal scaling | Add Redis nodes | Separate queues (different PDAs = parallel execution) |
Key insight: Solana write-locks every account a transaction modifies, so transactions touching the same account execute serially rather than in parallel. A single IronQ queue therefore sustains roughly 400 state transitions per second. For higher throughput, deploy multiple queues across different authorities. Each queue's accounts are independent, so Solana's runtime can process them in parallel. This is analogous to sharding in Web2, but enforced at the protocol level.
Privacy Considerations
| Aspect | Web2 Queue | IronQ |
|---|---|---|
| Task data | Private (internal network) | Public (on-chain metadata visible) |
| Worker identity | Private (internal auth) | Public (wallet addresses on-chain) |
| Payment amounts | Private (internal billing) | Public (token transfers on-chain) |
| Queue configuration | Private (config files) | Public (account data readable) |
Mitigation: IronQ uses a data_hash pattern — the actual job specification and result data are stored off-chain (IPFS, Arweave, S3) and only their content hashes are stored on-chain. This provides data integrity verification without exposing sensitive payloads. However, metadata (who created a job, who claimed it, reward amounts, timestamps) is inherently public on Solana. For use cases requiring metadata privacy, consider encrypting off-chain payloads and using zero-knowledge proofs for on-chain verification (future enhancement).
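The commit/verify flow looks like this on the client. Note that the program commits to a Blake3 hash (see the Job table); this sketch substitutes Node's built-in SHA-256 so it runs without extra dependencies, and a real client would swap in a Blake3 implementation to match the on-chain commitment:

```typescript
import { createHash } from "node:crypto";

// Commit: hash the off-chain payload before posting the job on-chain.
// (SHA-256 stands in for Blake3 here, purely for illustration.)
function commitPayload(payload: Uint8Array): Uint8Array {
  return new Uint8Array(createHash("sha256").update(payload).digest());
}

// Verify: a worker (or anyone) re-hashes the fetched payload and
// compares it to the on-chain commitment.
function verifyPayload(payload: Uint8Array, onChainHash: Uint8Array): boolean {
  const h = commitPayload(payload);
  return h.length === onChainHash.length && h.every((b, i) => b === onChainHash[i]);
}
```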
What IronQ Does NOT Do
- Execute jobs — IronQ is a coordination layer. Workers execute off-chain and submit proof.
- Verify results — The creator (or arbiter) verifies. Future versions could add on-chain verification.
- Priority scheduling — Priority is a metadata field. Workers choose which jobs to claim.
- Guarantee availability — If no workers are registered, jobs sit until someone claims them.
When to Use IronQ vs Traditional Queues
Use IronQ when:
- Workers are untrusted third parties (freelancers, compute providers, external services)
- Tasks require escrow — payment should only release on verified completion
- You need a public, auditable record of task assignment and completion
- Dispute resolution between task creators and workers is needed
- Tasks take minutes/hours to complete (latency of assignment is negligible)
- You want permissionless participation — anyone can become a worker by staking
Use Celery/SQS/BullMQ when:
- Sub-millisecond dispatch latency is required
- Tasks are internal microservice communication (trusted environment)
- High throughput (>1000 tasks/sec on a single queue)
- Task data must be completely private
- Cost per task must be near-zero (no rent overhead)
- Complex task routing (topic-based, headers exchange, fan-out)
CLI Usage
The ironq CLI interacts with the on-chain program directly from the terminal.
Global Flags
--json Output as JSON for programmatic consumption
--cluster <URL> Override Solana RPC URL (defaults to solana config)
--keypair <PATH> Override wallet keypair path
--queue-authority <PUBKEY> Use another authority's queue
Commands
The subcommand groups map onto the instruction set (exact flags omitted here; each command documents its arguments via --help):

```shell
# Queue Management
ironq init | config | status | pause | unpause

# Worker Operations
ironq worker register | info | deregister

# Job Lifecycle
ironq job create | list | info | claim | submit | approve | dispute | cancel

# Dispute Resolution (Arbiter)
ironq dispute resolve

# Permissionless Cranks
ironq crank expired | close
```
JSON Output
Every command supports --json for scripting:
Command Dependencies
init ──▶ worker register ──▶ (worker can now claim jobs)
│ │
│ ├── job claim ──▶ job submit ──▶ job approve ──▶ crank close
│ │ │
├── job create ◀──────────────┘ ├── job dispute ──▶ dispute resolve
│ │
├── job cancel (if unclaimed) └── (deadline passes) ──▶ crank expired
│
├── config / status / pause / unpause (admin)
│
└── worker deregister (when done)
Setup & Development
Prerequisites
- Rust (stable toolchain)
- Solana CLI (v2.x)
- Anchor (v0.31.1)
- Node.js (v18+)
- Yarn
Build
```shell
# Build the Solana program
anchor build

# Build the CLI
cargo build --release
# The binary will be at target/release/ironq
```
Configure Solana
```shell
# For devnet
solana config set --url devnet

# For localnet (testing)
solana config set --url localhost
```
Testing
IronQ has a comprehensive test suite — 118 TypeScript integration tests across 13 groups, plus 15 Rust unit tests (133 total) — using solana-bankrun for fast, deterministic testing.
Run Tests

```shell
anchor test
```
Test Groups
| Group | Tests | Coverage |
|---|---|---|
| Queue Initialization | 3 | Init, config values, duplicate prevention |
| Queue Admin | 3 | Update config, toggle pause, authority check |
| Worker Registration | 4 | Register, min stake enforcement, increase stake, deregister |
| Job Creation | 4 | Create, pause check, zero reward, priority validation |
| Job Cancellation | 3 | Cancel open, refund check, post-claim prevention |
| Claim Job | 4 | Claim, duplicate prevention, pause check, max concurrent |
| Submit Result | 3 | Submit, unauthorized worker, result account creation |
| Approve Result | 4 | Approve, reward transfer, worker stats, unauthorized |
| Dispute Flow | 5 | Dispute, arbiter resolve (win/lose), slash, unauthorized |
| Expiry & Cranks | 4 | Expire, crank reward, retry re-open, close terminal |
| Edge Cases | 3 | Deregister with active jobs, claim expired, overflow |
| Event Emission | 18 | Verifies CPI event logs for all 18 lifecycle events |
| Zero Economics | 1 | Verifies correct behavior with slash_rate=0 and crank_reward=0 |
Devnet Deployment
IronQ is deployed and verified on Solana devnet.
Program ID: J6GjTDeKugyMhFEuTYnEbsBCVSHsPimmbHcURbJ3wtrQ
Devnet Transaction Walkthrough
A full lifecycle was executed on devnet — create queue, register worker, post job, claim, submit, approve, and close:
| Step | Description | Transaction |
|---|---|---|
| Deploy Program | Deploys the IronQ BPF binary to devnet | Explorer |
| 1. Initialize Queue | Creates QueueConfig PDA and token vault. Sets min stake, timeout, slash/crank rates | Explorer |
| 2. Register Worker | Worker stakes 10 tokens into vault. Worker PDA created with max_concurrent_jobs from queue | Explorer |
| 3. Create Job | Creator deposits 5 tokens as reward. Job PDA created with status=Open, priority=HIGH | Explorer |
| 4. Claim Job | Worker claims job. Status transitions to Claimed. Deadline set. active_jobs incremented | Explorer |
| 5. Submit Result | Worker submits result hash. JobResult PDA created. Status transitions to Submitted | Explorer |
| 6. Approve Result | Creator approves. 5 tokens transferred from vault to worker. Status=Completed | Explorer |
| 7. Close Job | Terminal job+result accounts closed. Rent returned to creator and worker respectively | Explorer |
Project Structure
ironq/
├── Anchor.toml # Anchor project config
├── Cargo.toml # Workspace definition
├── programs/
│ └── ironq/
│ └── src/
│ ├── lib.rs # Program entrypoint (18 instructions)
│ ├── errors.rs # IronQError enum (24 variants)
│ ├── utils.rs # Slash/reward math (checked arithmetic)
│ ├── state/
│ │ ├── mod.rs
│ │ ├── queue.rs # QueueConfig account
│ │ ├── job.rs # Job account + JobStatus enum
│ │ ├── worker.rs # Worker account
│ │ └── result.rs # JobResult account
│ └── instructions/
│ ├── mod.rs
│ ├── initialize_queue.rs
│ ├── update_queue.rs
│ ├── toggle_pause.rs
│ ├── register_worker.rs
│ ├── increase_stake.rs
│ ├── deregister_worker.rs
│ ├── create_job.rs
│ ├── cancel_job.rs
│ ├── claim_job.rs
│ ├── submit_result.rs
│ ├── approve_result.rs
│ ├── dispute_result.rs
│ ├── resolve_dispute.rs
│ ├── reclaim_expired.rs
│ └── close_job.rs
├── cli/
│ └── src/
│ ├── main.rs # Clap CLI entrypoint
│ ├── display.rs # Colored output + table formatting
│ └── commands/
│ ├── mod.rs # Shared types, PDA derivation, discriminators
│ ├── queue.rs # init, config, status, pause/unpause
│ ├── worker.rs # register, info, deregister
│ ├── job.rs # create, list, info, claim, submit, approve, dispute, cancel
│ ├── dispute.rs # resolve
│ └── crank.rs # expired, close
├── tests/
│ └── *.ts # 118 integration tests across 13 groups
├── scripts/
│ └── devnet-happy-path.ts # Devnet integration script
└── target/
├── idl/ironq.json # Generated IDL
└── types/ironq.ts # Generated TypeScript types
Composability (CPI)
IronQ is designed as a standalone program, but other Solana programs can integrate with it via Cross-Program Invocation (CPI):
Example use cases:
- Oracle network: An oracle program creates jobs on IronQ when new data feeds are requested, and auto-approves when the oracle result matches
- DAO governance: A governance program creates jobs for funded proposals, with the DAO treasury as the creator
- Compute marketplace: A scheduler program dispatches compute tasks as IronQ jobs, with automated result verification via ZK proofs
CPI integration:
```rust
// In your program's Cargo.toml:
// ironq = { version = "0.1.0", features = ["cpi"] }

// In your instruction handler (account wiring shown for create_job;
// field names follow the createJob accounts in the IDL section below):
let cpi_ctx = CpiContext::new(
    ctx.accounts.ironq_program.to_account_info(),
    ironq::cpi::accounts::CreateJob {
        creator: ctx.accounts.creator.to_account_info(),
        queue: ctx.accounts.queue.to_account_info(),
        job: ctx.accounts.job.to_account_info(),
        creator_token_account: ctx.accounts.creator_token_account.to_account_info(),
        vault: ctx.accounts.vault.to_account_info(),
        token_program: ctx.accounts.token_program.to_account_info(),
        system_program: ctx.accounts.system_program.to_account_info(),
    },
);
ironq::cpi::create_job(cpi_ctx, reward_amount, data_hash, priority, max_retries)?;
```
Upgrade & Migration Strategy
IronQ v1 does not include an upgradeable program pattern. The program is deployed as immutable — users can verify the deployed bytecode matches the source.
For future versions:
- Deploy a new program with a new Program ID
- Existing queues on v1 continue to function (no data migration needed)
- New queues are created on v2
- The CLI can support --program-id to interact with either version
- Account versioning: future account structs can include a version: u8 field as the first byte after the discriminator, enabling the program to deserialize multiple schema versions
Account schema changes:
- Adding new fields: Append to the end of the struct, increase SIZE, use a migration instruction to resize existing accounts
- Removing fields: Not recommended — mark as deprecated and ignore
- Changing field types: Not possible without migration — deploy a new program
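A client-side sketch of the version-aware decoding described above (the layout, a version byte immediately after Anchor's 8-byte discriminator, is this section's proposal for future versions, not shipped v1 behavior):

```typescript
// Decode an account whose first byte after the 8-byte Anchor account
// discriminator is a schema version tag; dispatch on `version` to pick
// the matching deserializer for `body`.
function decodeVersioned(data: Uint8Array): { version: number; body: Uint8Array } {
  const DISCRIMINATOR_LEN = 8; // Anchor account discriminator
  const version = data[DISCRIMINATOR_LEN];
  const body = data.subarray(DISCRIMINATOR_LEN + 1);
  return { version, body };
}
```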
Using the IDL & TypeScript Types
After anchor build, the program generates:
- IDL: target/idl/ironq.json — the full interface definition (accounts, instructions, events, errors)
- TypeScript types: target/types/ironq.ts — typed bindings for use with @coral-xyz/anchor
```typescript
import { Program } from "@coral-xyz/anchor";
import { Ironq } from "../target/types/ironq";
import idl from "../target/idl/ironq.json";

// Initialize the program client
const program = new Program<Ironq>(idl, provider);

// All instructions are typed
await program.methods
  .createJob(rewardAmount, dataHash, priority, maxRetries)
  .accountsPartial({ creator, queue, job, creatorTokenAccount, vault, tokenProgram, systemProgram })
  .rpc();

// Account fetches are typed
const job = await program.account.job.fetch(jobPDA);
console.log(job.status); // { open: {} } | { claimed: {} } | ...
```
The IDL can also be used with other Solana clients (Python, Rust, Go) that support the Anchor IDL format.
FAQ
Q: Can the queue admin steal funds from the vault?
A: No. The vault's token authority is its own PDA (seeds = ["vault", queue]), not the admin's wallet. No instruction allows the admin to transfer tokens out of the vault except through the defined lifecycle (approve, cancel, reclaim_expired, resolve_dispute). The admin can pause the queue but cannot drain it.
Q: What happens if a worker claims a job and disappears?
A: After job_timeout seconds, anyone can call reclaim_expired to slash the worker's stake and return the reward to the creator (or re-open the job if retries remain). The cranker who calls this earns a percentage of the slashed amount.
Q: Can a malicious creator always dispute and steal work? A: No. Disputes are resolved by the arbiter (a neutral third party, ideally a multisig), not the creator. The creator can flag a dispute, but only the arbiter decides the outcome. This prevents reward theft.
Q: Why not use Clockwork or similar automation for expiry?
A: IronQ uses permissionless cranking instead of automation services to minimize external dependencies. Any wallet can call reclaim_expired and earn the crank reward, making cleanup self-sustaining without relying on a third-party service's uptime or pricing.
Q: How does IronQ handle concurrent claims? A: Solana's runtime serializes transactions that write to the same account. When two workers try to claim the same job, the first claim succeeds and the second fails with JobAlreadyClaimed, because the job's status has already changed from Open to Claimed. No mutex or lock is needed.
Q: What if the arbiter is malicious? A: The arbiter is a trust assumption in v1. To mitigate this, use a multisig (e.g., Squads Protocol) as the arbiter so no single party can rule on disputes unilaterally. Future versions could implement a jury/DAO mechanism for fully trustless dispute resolution.
Q: Can I run multiple queues?
A: Yes. Each queue is derived from ["queue", authority], so one wallet creates one queue. To run multiple queues, use different wallets. Each queue has independent configuration, vault, workers, and jobs.
Real-World Use Cases
Content Moderation -- A social platform posts uploaded images as jobs, with the data_hash pointing to an IPFS-pinned image. Staked workers run NSFW classifiers off-chain and submit pass/fail results. The arbiter (a moderation multisig) resolves disputes on edge cases where automated classification is uncertain.
Off-Chain Compute -- A DeFi protocol needs expensive option pricing calculations that exceed on-chain compute limits. It posts the pricing parameters as a job hash, and workers compute Black-Scholes or Monte Carlo simulations off-chain, submitting result hashes. Multiple workers claiming similar jobs across separate queues, combined with the dispute mechanism, ensure computational accuracy.
Bug Bounties / Code Review -- A project posts code review tasks as IronQ jobs with the repository commit hash as data_hash. Verified auditors (workers who have staked tokens) claim tasks, review the code off-chain, and submit their findings as result hashes. The project owner approves valid findings and releases rewards, or disputes low-effort submissions for arbiter review.
Data Labeling -- ML teams post batches of data labeling tasks, each referencing an off-chain dataset via data_hash. Workers label the data off-chain and submit result hashes pointing to their annotations. Quality control is enforced through the dispute mechanism -- the team can dispute poor labels, and the arbiter (a senior annotator or QA multisig) resolves disagreements.
License
ISC