FORGE
The full-stack Rust framework that compiles your backend into one binary, powered by PostgreSQL.
Queries, mutations, background jobs, cron, durable workflows, real-time subscriptions, webhooks, and MCP tools — all written as plain Rust functions, all served from a single process, all backed by the database you already know.

One mutation. Both clients update instantly. No manual cache busting, no fetch wrappers, no pub/sub to configure.
What You Get
- One binary, one database. Gateway, workers, scheduler, and daemons run in the same process. PostgreSQL is the only moving part.
- Type safety from SQL to UI. sqlx checks your queries at compile time. #[forge::model] generates the matching TypeScript or Rust types for your frontend.
- Real-time by default. Compile-time SQL parsing extracts table dependencies. PostgreSQL LISTEN/NOTIFY invalidates affected subscriptions. SSE pushes diffs to clients.
- Durable by design. Jobs and workflow state live in PostgreSQL. They survive restarts, deployments, and crashes.
- Frontends as first-class targets. SvelteKit and Dioxus today, more to come. Same Rust source of truth generates bindings for whichever you pick.
Write a Function, Get an API
Queries and Mutations
These become typed RPC endpoints automatically. The same Rust source generates frontend bindings — TypeScript for SvelteKit, Rust plus hooks for Dioxus — so your client is always in sync. Transactional mutations buffer dispatch_job calls and insert them atomically when the transaction commits. If the mutation fails, the job never exists.
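A sketch of what such a pair might look like. Only the #[query]/#[mutation] attribute names come from forge-macros; the Ctx and Error types, the db() accessor, and the exact dispatch_job signature are illustrative assumptions, not Forge's documented API.

```rust
// Sketch only: Ctx, Error, db(), and dispatch_job's shape are assumed names.
#[forge::query]
pub async fn get_user(ctx: &Ctx, id: Uuid) -> Result<User, Error> {
    // sqlx verifies this statement against the schema at compile time.
    sqlx::query_as!(User, "SELECT id, email, role FROM users WHERE id = $1", id)
        .fetch_one(ctx.db())
        .await
        .map_err(Into::into)
}

#[forge::mutation]
pub async fn invite_user(ctx: &Ctx, email: String) -> Result<User, Error> {
    let user = sqlx::query_as!(
        User,
        "INSERT INTO users (email) VALUES ($1) RETURNING id, email, role",
        email
    )
    .fetch_one(ctx.db())
    .await?;
    // Buffered and inserted atomically when the transaction commits;
    // if the mutation fails, the job never exists.
    ctx.dispatch_job(SendInviteEmail { user_id: user.id }).await?;
    Ok(user)
}
```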
Background Jobs
Persisted in PostgreSQL, claimed with SKIP LOCKED, bounded by a worker semaphore. Survive restarts. Retry with backoff. Report progress in real-time to any client that wants to watch.
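A sketch of the shape such a job might take. The #[job] attribute name comes from forge-macros, but the retry option, JobCtx type, and progress API are illustrative assumptions:

```rust
// Sketch only: `retries`, JobCtx, and report_progress are assumed names.
#[forge::job(retries = 5)]
pub async fn generate_report(ctx: &JobCtx, input: ReportRequest) -> Result<(), Error> {
    for (i, chunk) in input.chunks.iter().enumerate() {
        process_chunk(chunk).await?;
        // Streamed in real time to any client watching this job.
        ctx.report_progress(i + 1, input.chunks.len()).await?;
    }
    Ok(())
}
```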
Cron
Cron expressions validated at compile time. Timezone-aware. Leader-elected so it runs exactly once across all instances, with catch-up for missed runs.
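A sketch of a cron handler. The #[cron] attribute name comes from forge-macros; the timezone option and CronCtx type are assumed spellings:

```rust
// Sketch: `tz` and CronCtx are assumed; the cron expression itself is the
// part validated at compile time per the text above.
#[forge::cron("0 3 * * *", tz = "UTC")]
pub async fn nightly_cleanup(ctx: &CronCtx) -> Result<(), Error> {
    sqlx::query!("DELETE FROM sessions WHERE expires_at < now()")
        .execute(ctx.db())
        .await?;
    Ok(())
}
```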
Durable Workflows
Workflows are versioned and signature-guarded. New runs pin to the active version; in-flight runs resume only on exact version and signature match. Sleep for 45 days, deploy new code, restart servers, scale up — the workflow picks up exactly where it left off. Compensation runs automatically in reverse order if a later step fails.
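A sketch of how these guarantees might look in code. The #[workflow] attribute name comes from forge-macros; the WorkflowCtx surface (step, compensate, sleep) is an assumption about the API, not its documented interface:

```rust
// Illustrative sketch — WorkflowCtx and its methods are assumed names.
#[forge::workflow(version = 3)]
pub async fn provision_tenant(wf: &WorkflowCtx, input: NewTenant) -> Result<(), Error> {
    // Each step's result is persisted; on resume, completed steps are
    // replayed from PostgreSQL rather than re-executed.
    let account = wf.step("create_account", || create_account(&input)).await?;

    // Registered compensations run in reverse order if a later step fails.
    wf.compensate("delete_account", move || delete_account(account.id));

    // Durable sleep: survives restarts, deploys, and crashes.
    wf.sleep(std::time::Duration::from_secs(45 * 24 * 60 * 60)).await?;

    wf.step("send_followup", || send_followup(account.id)).await?;
    Ok(())
}
```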
Real-Time Subscriptions
<script lang="ts">
import { listUsersStore$ } from '$lib/forge';
const users = listUsersStore$();
</script>
{#each $users.data ?? [] as user}
<div>{user.email}</div>
{/each}
Compile-time SQL parsing extracts table dependencies (including JOINs and subqueries). PostgreSQL triggers fire NOTIFY on changes. Forge re-runs affected queries, hashes the results, and pushes diffs to subscribed clients over SSE. No cache to invalidate, no channels to wire up.
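The hash step is what keeps re-execution cheap: if a re-run query produces the same rows, nothing is pushed. A minimal, self-contained illustration of that idea (plain std Rust, not Forge's actual implementation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a query's result set so unchanged results can be skipped.
fn result_hash(rows: &[(u32, &str)]) -> u64 {
    let mut h = DefaultHasher::new();
    rows.hash(&mut h);
    h.finish()
}

fn main() {
    let before = vec![(1, "a@example.com"), (2, "b@example.com")];
    let unchanged = vec![(1, "a@example.com"), (2, "b@example.com")];
    let changed = vec![(1, "a@example.com"), (2, "new@example.com")];

    let h0 = result_hash(&before);
    // Same rows -> same hash -> no push to subscribers.
    assert_eq!(h0, result_hash(&unchanged));
    // Different rows -> different hash -> push a diff over SSE.
    assert_ne!(h0, result_hash(&changed));
}
```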
Webhooks
Signature validation, idempotency tracking, and job dispatch in a single handler.
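A sketch of what such a handler might look like. The #[webhook] attribute and its options are assumed spellings, not Forge's documented API; forge_webhook_events is the idempotency table listed below under System Tables:

```rust
// Sketch: the attribute name and options here are assumptions.
#[forge::webhook(path = "/hooks/payments", provider = "stripe")]
pub async fn payment_webhook(ctx: &WebhookCtx, event: PaymentEvent) -> Result<(), Error> {
    // By the time this runs, the signature has been validated and the
    // event deduplicated via forge_webhook_events.
    ctx.dispatch_job(ApplyPayment { event_id: event.id }).await?;
    Ok(())
}
```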
MCP Tools
Expose any function as an MCP tool with the same auth, rate limiting, and validation as your API. AI agents get first-class access alongside your human users.
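A sketch of what exposing a tool might look like. The #[mcp_tool] attribute and its description option are assumed spellings:

```rust
// Sketch: #[mcp_tool] and `description` are assumed names.
#[forge::mcp_tool(description = "Look up a user by email address")]
pub async fn find_user(ctx: &Ctx, email: String) -> Result<Option<User>, Error> {
    // Same auth, rate limiting, and validation as the regular API.
    sqlx::query_as!(User, "SELECT id, email, role FROM users WHERE email = $1", email)
        .fetch_optional(ctx.db())
        .await
        .map_err(Into::into)
}
```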
Type Safety, End to End
// Generated automatically
export interface User {
id: string;
email: string;
role: UserRole;
created_at: string;
}
export type UserRole = "Admin" | "Member" | "Guest";
import { api } from "$lib/forge";
const user = await api.get_user({ id: "..." }); // Fully typed
If your Rust code compiles, your frontend types are correct and your SQL is valid.
forge migrate prepare runs pending migrations and then refreshes the .sqlx/ offline cache so CI can build without a live database. forge check verifies that the cache is up to date.
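In practice this splits into a local step and a CI step. A sketch of that workflow (only the two forge subcommands come from the text above; the surrounding commands are illustrative):

```sh
# Locally, after editing migrations or queries:
forge migrate prepare    # run pending migrations, refresh the .sqlx/ cache
git add .sqlx/

# In CI, with no live database:
forge check              # fails if the committed cache is stale
cargo build --release    # sqlx compiles against the offline cache
```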
Built for Real Workloads
Forge ships an adaptive capacity benchmark that ramps concurrent users until the system breaks. Every user holds a live SSE subscription while continuously making RPC calls; 30% of traffic is writes that trigger the full reactivity pipeline.
On a 12-core laptop with PostgreSQL 18 in Docker and two Forge instances:
- 12,535 req/s peak throughput with p90 under 50ms
- 2,250 concurrent SSE users with zero errors, each maintaining a live subscription plus 10 req/s
- 30% writes, each propagated through NOTIFY → invalidation → re-execution → SSE fan-out
Scaling to ~10,000 concurrent SSE users on dedicated infrastructure (4× Forge + primary + 2 replicas) projects to roughly $1,200/month on AWS on-demand pricing. Full methodology, tuning knobs, and a reproducible benchmark are in benchmarks/app/ and the performance docs.
Architecture
┌──────────────────────────────────────────┐
│ forge run │
├─────────────┬─────────────┬──────────────┤
│ Gateway │ Workers │ Scheduler │
│ (HTTP/SSE) │ (Jobs) │ (Cron) │
└──────┬──────┴──────┬──────┴──────┬───────┘
│ │ │
└─────────────┼─────────────┘
│
┌──────▼──────┐
│ PostgreSQL │
└─────────────┘
One process, multiple subsystems:
- Gateway — HTTP and SSE server built on Axum
- Workers — Pull jobs from PostgreSQL using FOR UPDATE SKIP LOCKED
- Scheduler — Leader-elected cron runner via advisory locks
- Daemons — Long-running singleton processes with leader election
Scale horizontally by running more instances. They coordinate through PostgreSQL: SKIP LOCKED for queues, LISTEN/NOTIFY for fan-out, advisory locks for leadership. No service mesh, no gossip protocol, no extra cluster to operate.
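The three coordination primitives named above, in plain SQL. The forge_jobs table is from the System Tables list below; the channel name and lock key are illustrative:

```sql
-- Queue claim: each worker takes a different job, without blocking.
SELECT id FROM forge_jobs
WHERE status = 'pending'
ORDER BY scheduled_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- Fan-out: tell other instances that a table changed (channel name assumed).
NOTIFY forge_changes, 'users';

-- Leadership: exactly one instance acquires the lock (key assumed).
SELECT pg_try_advisory_lock(42);
```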
forge → Public API, Forge::builder(), prelude, CLI
├── forge-runtime → Gateway, function router, job worker, workflow executor, cron scheduler
│ ├── forge-core → Types, traits, errors, contexts, schema definitions
│ └── forge-macros → #[query], #[mutation], #[job], #[workflow], #[cron], ...
└── forge-codegen → Framework binding generators (SvelteKit, Dioxus)
CLI
Development runs through docker compose up --build, which starts PostgreSQL, a cargo-watch backend, and the selected frontend. forge new takes an explicit template id such as with-svelte/minimal, with-svelte/demo, or with-dioxus/realtime-todo-list.
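Putting those two commands together, a new project might start like this (the project-name argument and its position are illustrative):

```sh
forge new with-svelte/minimal my-app
cd my-app
docker compose up --build   # PostgreSQL + cargo-watch backend + frontend
```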
Deploy
One binary, embedding the frontend build and the entire runtime. Point it at PostgreSQL and it runs. See the deployment guide for Docker, Kubernetes, graceful shutdown, and rolling updates.
Debugging
Everything runs through PostgreSQL, which means everything is queryable.
Health Endpoints
GET /_api/health → { "status": "healthy", "version": "0.4.1" }
GET /_api/ready → { "ready": true, "database": true, "reactor": true, "workflows": true }
Inspect Jobs and Workflows
-- pending jobs
SELECT id, job_type, status, attempts, scheduled_at
FROM forge_jobs WHERE status = 'pending' ORDER BY scheduled_at;
-- in-flight workflows
SELECT id, workflow_name, workflow_version, status, current_step, started_at
FROM forge_workflow_runs WHERE status IN ('created', 'running');
-- blocked workflows (version/signature mismatches after a deploy)
SELECT id, workflow_name, blocking_reason
FROM forge_workflow_runs WHERE status LIKE 'blocked_%';
System Tables
| Table | What it tracks |
|---|---|
| forge_jobs | Job queue, status, errors, progress |
| forge_cron_runs | Cron execution history |
| forge_workflow_definitions | Registered workflow versions |
| forge_workflow_runs | Workflow instances and state |
| forge_workflow_steps | Individual step results |
| forge_nodes | Cluster node registry |
| forge_leaders | Leader election state |
| forge_daemons | Long-running process status |
| forge_sessions | Active SSE connections |
| forge_subscriptions | Live query subscriptions |
| forge_rate_limits | Token bucket state |
| forge_webhook_events | Webhook idempotency tracking |
Distributed tracing is built in via OpenTelemetry (OTLP over HTTP). Queries slower than 500ms are logged as warnings automatically. Signals — built-in product analytics — correlate every frontend event to the backend RPC call that caused it via a shared x-correlation-id.
Who It's For
Forge is opinionated. It's a great fit if you're:
- A solo developer or small team shipping a SaaS product and want to spend your time on the product
- A team that values correctness and wants errors at compile time rather than 3 AM
- Someone who prefers boring, well-understood infrastructure (a database, a binary) over a distributed system you have to operate
Less of a fit if you:
- Need to integrate deeply with cloud-native primitives like Lambda, DynamoDB, or Pub/Sub
- Are building for millions of concurrent connections out of the gate (Forge targets tens of thousands of concurrent SSE users per cluster)
- Have a platform team that wants fine-grained control over each component in isolation
AI Agents
Building with an AI coding agent? Install the forge-idiomatic-engineer skill for Forge-aware code generation. It's installed automatically when you run forge new.
Project Maturity
Forge is pre-1.0. Breaking changes happen between releases and are documented in CHANGELOG.md — pin your version if you need stability. Great for side projects, internal tools, and early-stage products. Once the core API settles, we'll cut 1.0 and commit to semver.
License
MIT. Do whatever you want.