FORGE
Stop Assembling. Start Building.
You didn't sign up to be a distributed systems engineer. You signed up to build products.
Yet here you are, wiring up Redis for caching, Kafka for events, BullMQ for jobs, a separate cron daemon, and praying they all stay in sync. Your docker-compose.yml has more services than your app has features.
FORGE compiles your entire backend into one binary: API, jobs, crons, workflows, real-time subscriptions. The only dependency? PostgreSQL. That's it.
The Problem
Modern backend development is infrastructure theater:
| Your Typical Stack | What You Actually Need |
|---|---|
| API Server (Express/FastAPI) | Handle HTTP requests |
| Redis | Remember things temporarily |
| Kafka/RabbitMQ | Process things later |
| BullMQ/Celery | Run background jobs |
| Cron daemon | Do things on schedule |
| WebSocket server | Push updates to clients |
| Prometheus + Grafana | Know what's happening |
Seven systems. Seven failure points. Seven things to deploy, monitor, and debug at 3 AM.
PostgreSQL already does all of this. SKIP LOCKED for job queues. LISTEN/NOTIFY for pub/sub. Advisory locks for coordination. You just need a runtime that actually uses them.
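The job-claim trick at the heart of this can be sketched in plain SQL (the `jobs` table and its columns here are illustrative, not FORGE's actual schema):

```sql
-- Atomically claim one due job. FOR UPDATE SKIP LOCKED makes concurrent
-- workers skip rows another worker already holds, so no job is claimed
-- twice and no worker ever blocks waiting on a row lock.
UPDATE jobs
SET status = 'running', worker_id = $1
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending' AND scheduled_at <= now()
    ORDER BY scheduled_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, job_type;
```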
How It Works
Queries and Mutations
Queries and mutations become typed RPC endpoints automatically, and a TypeScript client is generated. No routing files, no fetch wrappers, no manual type definitions.
Mutations run inside a database transaction. The dispatch_job call gets buffered and inserted atomically when the transaction commits. If the mutation fails, the job never exists.
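A sketch of what that shape might look like (the `#[query]` and `#[mutation]` attributes come from the forge-macros crate, but `Ctx`, `ctx.db()`, and `ctx.dispatch_job()` are assumed names, not the documented API):

```rust
#[query]
pub async fn get_user(ctx: &Ctx, id: Uuid) -> Result<User> {
    ctx.db().fetch_one("SELECT * FROM users WHERE id = $1", id).await
}

#[mutation]
pub async fn create_user(ctx: &Ctx, email: String) -> Result<User> {
    let user = ctx.db()
        .fetch_one("INSERT INTO users (email) VALUES ($1) RETURNING *", email)
        .await?;
    // Buffered: the job row is inserted atomically when this transaction
    // commits, so a failed mutation never leaves an orphaned job behind.
    ctx.dispatch_job(SendWelcomeEmail { user_id: user.id }).await?;
    Ok(user)
}
```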
Background Jobs
Jobs are persisted in PostgreSQL. They survive restarts, retry with backoff, and report progress in real time. No Redis. No separate worker process.
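A job handler might be declared like this (a sketch: `JobCtx`, `report_progress`, and the retry attribute are assumptions, not the documented API):

```rust
#[job(max_attempts = 5)]
pub async fn send_welcome_email(ctx: &JobCtx, input: SendWelcomeEmail) -> Result<()> {
    ctx.report_progress(10, "rendering template").await?;
    let body = render_welcome(&input.user_id).await?;
    send_mail(&input.user_id, &body).await?;
    ctx.report_progress(100, "sent").await?;
    Ok(()) // on Err, the runtime re-schedules the job with backoff
}
```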
Cron
Crons get timezone support, catch-up for missed runs, and leader election, so each run fires exactly once across all instances.
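A cron entry could look like this (sketch: the attribute syntax and `CronCtx` are assumed):

```rust
#[cron(schedule = "0 3 * * *", timezone = "America/New_York")]
pub async fn nightly_cleanup(ctx: &CronCtx) -> Result<()> {
    // Runs on whichever instance currently holds leadership.
    ctx.db().execute("DELETE FROM sessions WHERE expires_at < now()").await?;
    Ok(())
}
```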
Durable Workflows
Sleep for 45 days, deploy new code, restart servers, scale up. The workflow picks up exactly where it left off. Compensation runs automatically if later steps fail. No separate orchestration cluster.
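A workflow of that kind might be written as (sketch: `WorkflowCtx`, `step`, `sleep`, and `compensate` are assumed names illustrating the durable-step pattern, not the documented API):

```rust
#[workflow]
pub async fn trial_lifecycle(wf: &WorkflowCtx, user_id: Uuid) -> Result<()> {
    wf.step("start_trial", || start_trial(user_id)).await?;

    // Each step's result is persisted, so this sleep survives deploys,
    // restarts, and scaling events.
    wf.sleep(Duration::from_secs(45 * 24 * 60 * 60)).await?;

    wf.step("downgrade", || downgrade(user_id))
        .compensate(|| restore_trial(user_id)) // runs if a later step fails
        .await?;
    Ok(())
}
```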
Real-Time Subscriptions
```svelte
<script lang="ts">
  import { subscribe } from '$lib/forge';

  const users = subscribe('list_users', {});
</script>

{#each $users.data ?? [] as user}
  <div>{user.email}</div>
{/each}
```
Compile-time SQL parsing extracts table dependencies (including JOINs and subqueries). PostgreSQL triggers fire NOTIFY on changes. FORGE re-runs affected queries. SSE pushes diffs to clients. No manual cache invalidation. No pub/sub wiring.
Webhooks
Signature validation, idempotency tracking, and job dispatch. One handler.
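That handler might look something like this (sketch only: a `#[webhook]` attribute is not listed among the forge-macros, so the shape here is a guess, and `WebhookCtx` is an assumed name):

```rust
#[webhook(provider = "stripe")]
pub async fn stripe_events(ctx: &WebhookCtx, event: serde_json::Value) -> Result<()> {
    // By the time this runs, the signature has been validated and the
    // event deduplicated against forge_webhook_events.
    ctx.dispatch_job(HandleStripeEvent { event }).await?;
    Ok(())
}
```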
MCP Tools
Expose any function as an MCP tool. Same auth, rate limiting, and validation as your API. AI agents get first-class access without a separate integration layer.
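Exposing a function as a tool might be as simple as (sketch: `#[mcp_tool]` is an assumed attribute name, as are `Ctx` and `ctx.db()`):

```rust
#[mcp_tool(description = "Look up a user by email address")]
pub async fn find_user(ctx: &Ctx, email: String) -> Result<Option<User>> {
    // Same Ctx as the HTTP API, so auth and rate limits apply unchanged.
    ctx.db().fetch_optional("SELECT * FROM users WHERE email = $1", email).await
}
```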
Type Safety, End to End
```typescript
// Generated automatically
export interface User {
  id: string;
  email: string;
  role: UserRole;
  created_at: string;
}

export type UserRole = 'Admin' | 'Member' | 'Guest';
```

```typescript
import { api } from '$lib/forge';

const user = await api.get_user({ id: '...' }); // Fully typed
```
If your Rust code compiles, your frontend types are correct.
Architecture
```
┌─────────────────────────────────────────┐
│                forge run                │
├─────────────┬─────────────┬─────────────┤
│   Gateway   │   Workers   │  Scheduler  │
│  (HTTP/SSE) │   (Jobs)    │   (Cron)    │
└──────┬──────┴──────┬──────┴──────┬──────┘
       │             │             │
       └─────────────┼─────────────┘
                     │
              ┌──────▼──────┐
              │ PostgreSQL  │
              └─────────────┘
```
One process, multiple subsystems:
- Gateway: HTTP/SSE server built on Axum
- Workers: Pull jobs from PostgreSQL using `SKIP LOCKED`
- Scheduler: Leader-elected cron runner via advisory locks
- Daemons: Long-running singleton processes with leader election
Scale horizontally by running more instances. They coordinate through PostgreSQL. No service mesh, no gossip protocol, no Redis cluster.
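The leader-election primitive underneath is PostgreSQL's advisory locks; the core of it fits in one statement (the lock key `42` is arbitrary, as long as every instance uses the same one):

```sql
-- Exactly one session gets `true`; the lock is released automatically
-- when that session ends, so a crashed leader is replaced on reconnect.
SELECT pg_try_advisory_lock(42) AS is_leader;
```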
```
forge             → Public API, Forge::builder(), prelude, CLI
├── forge-runtime → Gateway, function router, job worker, workflow executor, cron scheduler
│   ├── forge-core   → Types, traits, error types, contexts, schema definitions
│   └── forge-macros → #[query], #[mutation], #[job], #[workflow], #[cron]
└── forge-codegen → TypeScript/Svelte client generator
```
Why Not Just Use...
| | FORGE | Supabase | Firebase | PocketBase |
|---|---|---|---|---|
| Background Jobs | Built-in | External | Cloud Functions | - |
| Durable Workflows | Built-in | - | - | - |
| Cron Scheduling | Built-in | External | Cloud Scheduler | - |
| Query Caching | Built-in | - | - | - |
| Rate Limiting | Built-in | - | - | - |
| Real-time | Built-in | Built-in | Built-in | - |
| Webhooks | Built-in | - | Cloud Functions | - |
| MCP Tools | Built-in | - | - | - |
| Full Type Safety | Rust to TS | Partial | - | - |
| Self-Hosted | One binary | Complex | - | One binary |
| Vendor Lock-in | None | Low | High | None |
| Database | PostgreSQL | PostgreSQL | Firestore | SQLite |
vs. Temporal/Inngest: FORGE workflows run in-process with no separate orchestration service. If you need child workflows, signals, or advanced versioning, use Temporal. If you need durable multi-step processes without the ops overhead, FORGE handles it.
vs. Node.js + BullMQ + the rest: FORGE trades ecosystem breadth for operational simplicity. Fewer npm packages, fewer 3 AM pages about Redis running out of memory.
CLI
`forge dev` starts PostgreSQL, a cargo-watch backend, and a Vite frontend. All three come up together and stop with Ctrl+C. `--demo` scaffolds a working app with queries, mutations, jobs, crons, and workflows; `--minimal` gives you a clean slate.
Deploy
One binary. Embeds the frontend build and the entire runtime. Point it at PostgreSQL and it runs. Read the docs for more.
Debugging
Everything runs through PostgreSQL. That means everything is queryable.
Health Endpoints
```
GET /health → { "status": "healthy", "version": "0.4.1" }
GET /ready  → { "ready": true, "database": true, "reactor": true }
```
Inspect Jobs
```sql
-- pending jobs
SELECT id, job_type, status, attempts, max_attempts, scheduled_at
FROM forge_jobs WHERE status = 'pending' ORDER BY scheduled_at;

-- failed jobs with error messages
SELECT id, job_type, last_error, attempts, failed_at
FROM forge_jobs WHERE status IN ('failed', 'dead_letter') ORDER BY failed_at DESC;

-- running jobs with progress
SELECT id, job_type, progress_percent, progress_message, worker_id
FROM forge_jobs WHERE status = 'running';
```
Inspect Workflows
```sql
-- active workflows
SELECT id, workflow_name, status, current_step, started_at
FROM forge_workflow_runs WHERE status IN ('created', 'running');

-- step-by-step details for a specific run
SELECT step_name, status, error, started_at, completed_at
FROM forge_workflow_steps WHERE workflow_run_id = $1 ORDER BY started_at;
```
Inspect Cron Runs
```sql
SELECT cron_name, scheduled_time, status, error
FROM forge_cron_runs ORDER BY scheduled_time DESC LIMIT 20;
```
Logging
Configure the log level in `forge.toml` (`debug`, `info`, `warn`, or `error`), or override it with environment variables:

```
RUST_LOG=debug
RUST_LOG=warn,my_app=debug
```
Queries slower than 500ms are logged as warnings automatically. Distributed tracing is built in via OpenTelemetry (OTLP over HTTP).
Realtime Subscriptions
If subscriptions aren't updating after mutations:
- Make sure the SSE connection is established before mutating (check the network tab for `/events`)
- Verify reactivity is enabled for the table: `SELECT forge_enable_reactivity('table_name');`
- Don't manually call `refetch()` after mutations. The SSE pipeline handles invalidation automatically.
System Tables
All FORGE state lives in PostgreSQL. The full set of system tables:
| Table | What it tracks |
|---|---|
| `forge_jobs` | Job queue, status, errors, progress |
| `forge_cron_runs` | Cron execution history |
| `forge_workflow_runs` | Workflow instances and state |
| `forge_workflow_steps` | Individual step results |
| `forge_nodes` | Cluster node registry |
| `forge_leaders` | Leader election state |
| `forge_daemons` | Long-running process status |
| `forge_sessions` | Active SSE connections |
| `forge_subscriptions` | Live query subscriptions |
| `forge_rate_limits` | Token bucket state |
| `forge_webhook_events` | Webhook idempotency tracking |
Who's This For
FORGE is opinionated. It's for:
- Solo developers and small teams building SaaS products who don't want to manage infrastructure
- Teams who value correctness: errors caught at compile time, not at 3 AM
- Anyone tired of gluing together seven services for basic backend functionality
Not the right fit if:
- You have a dedicated platform team that wants fine-grained control over each component
- You're building for millions of concurrent users (FORGE targets ~100k MAU comfortably)
- You need deep integration with cloud-native services (Lambda, DynamoDB, Pub/Sub)
AI Agents
If you're using an AI coding agent to build with FORGE, install the forge-idiomatic-engineer skill for Forge-aware code generation. It's installed automatically when you run `forge new`.
Project Maturity
FORGE is pre-1.0. Breaking changes happen between releases. Good for side projects, internal tools, and kicking the tires. Not production yet.
Breaking changes are documented in CHANGELOG.md. Pin your version if you need stability. Once the core API settles, we cut 1.0 and commit to semver.
License
MIT. Do whatever you want.