forgex 0.4.1

CLI and runtime for the Forge full-stack framework

FORGE

Stop Assembling. Start Building.

You didn't sign up to be a distributed systems engineer. You signed up to build products.

Yet here you are, wiring up Redis for caching, Kafka for events, BullMQ for jobs, a separate cron daemon, and praying they all stay in sync. Your docker-compose.yml has more services than your app has features.

FORGE compiles your entire backend into one binary: API, jobs, crons, workflows, real-time subscriptions. The only dependency? PostgreSQL. That's it.

curl -fsSL https://tryforge.dev/install.sh | sh
forge new my-app --demo && cd my-app
forge dev

forge dev runs Docker Compose with PostgreSQL, backend, and frontend. Use forge dev down --clear to reset everything.



The Problem

Modern backend development has become infrastructure theater:

Your Typical Stack                    What You Actually Need
───────────────────                   ────────────────────────
API Server (Express/FastAPI)          Handle HTTP requests
Redis                                 Remember things temporarily
Kafka/RabbitMQ                        Process things later
BullMQ/Celery                         Run background jobs
Cron daemon                           Do things on schedule
WebSocket server                      Push updates to clients
Prometheus + Grafana                  Know what's happening

Seven systems. Seven failure points. Seven things to deploy, monitor, and debug at 3 AM.

PostgreSQL already does all of this. SKIP LOCKED for job queues. LISTEN/NOTIFY for pub/sub. Advisory locks for coordination. You just need a framework that uses them properly.
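To make the SKIP LOCKED claim concrete, here is the general shape of a Postgres-backed job dequeue as used by queue libraries built on this feature. This is an illustrative sketch, not FORGE's actual internal query; the function name is made up for the example.

```rust
/// Builds a one-job claim query. Concurrent workers skip rows another
/// worker has already locked, so no job is ever claimed twice.
pub fn claim_job_sql(table: &str) -> String {
    format!(
        "UPDATE {t} SET status = 'running', started_at = now() \
         WHERE id = (SELECT id FROM {t} WHERE status = 'pending' \
                     ORDER BY created_at \
                     FOR UPDATE SKIP LOCKED LIMIT 1) \
         RETURNING id, payload",
        t = table
    )
}
```

The `FOR UPDATE SKIP LOCKED` clause is what turns a plain table into a safe multi-consumer queue: locked rows are skipped rather than waited on.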


What FORGE Actually Does

1. Queries and Mutations (Your API)

#[forge::query(cache = "30s")]
pub async fn get_user(ctx: &QueryContext, id: Uuid) -> Result<User> {
    sqlx::query_as("SELECT * FROM users WHERE id = $1")
        .bind(id)
        .fetch_one(ctx.db())
        .await
        .map_err(Into::into)
}

#[forge::mutation]
pub async fn create_user(ctx: &MutationContext, input: CreateUser) -> Result<User> {
    let user = sqlx::query_as("INSERT INTO users (email) VALUES ($1) RETURNING *")
        .bind(&input.email)
        .fetch_one(ctx.db())
        .await?;

    // Dispatch a background job
    ctx.dispatch_job("send_welcome_email", json!({ "user_id": user.id })).await?;

    Ok(user)
}

These become /_api/rpc/get_user and /_api/rpc/create_user automatically. A fully typed TypeScript client is generated. No routing. No fetch wrappers. No manual type definitions.

Mutations run inside a database transaction. The dispatch_job call above doesn't fire immediately. It gets buffered and inserted atomically when the transaction commits. If the mutation fails, the job never gets created. No orphaned jobs, no missing side effects.
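The buffering behavior can be sketched as a transactional outbox. The names below (`TxBuffer`, `PendingJob`) are illustrative, not FORGE's API; they show the pattern, not the implementation.

```rust
/// A job dispatched mid-transaction. Nothing hits the database yet.
pub struct PendingJob {
    pub name: String,
    pub payload: String,
}

/// Buffers dispatched jobs for the lifetime of one mutation transaction.
#[derive(Default)]
pub struct TxBuffer {
    jobs: Vec<PendingJob>,
}

impl TxBuffer {
    /// What `dispatch_job` does conceptually: record intent, defer the insert.
    pub fn dispatch(&mut self, name: &str, payload: &str) {
        self.jobs.push(PendingJob {
            name: name.into(),
            payload: payload.into(),
        });
    }

    /// On commit, the buffered jobs are inserted in the same transaction as
    /// the mutation's writes; on rollback the buffer is simply dropped.
    pub fn take_on_commit(self) -> Vec<PendingJob> {
        self.jobs
    }
}
```

Because the job rows commit (or roll back) atomically with the mutation's own writes, the "no orphaned jobs" guarantee falls out of ordinary transaction semantics.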

2. Background Jobs (Things That Take Time)

#[forge::job]
#[retry(max_attempts = 3, backoff = "exponential")]
pub async fn send_welcome_email(ctx: &JobContext, input: EmailInput) -> Result<()> {
    ctx.progress(0, "Starting...")?;

    let user = fetch_user(ctx.db(), input.user_id).await?;
    send_email(&user.email, "Welcome!").await?;

    ctx.progress(100, "Sent")?;
    Ok(())
}

Jobs are persisted in PostgreSQL, survive restarts, retry with backoff, and report progress in real-time. No Redis. No separate worker process.
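A minimal sketch of what `backoff = "exponential"` computes, assuming a 2-second base and a five-minute cap (both values are assumptions for illustration, not FORGE's documented defaults):

```rust
/// Delay before retry `attempt` (1-based): 2s, 4s, 8s, ... capped at 300s.
pub fn backoff_secs(attempt: u32) -> u64 {
    let base: u64 = 2;
    let delay = base.saturating_pow(attempt); // doubles each attempt
    delay.min(300) // cap so late retries don't wait forever
}
```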

3. Scheduled Tasks (Cron Without the Daemon)

#[forge::cron("0 9 * * *")]  // 9 AM daily
#[timezone = "America/New_York"]
pub async fn daily_digest(ctx: &CronContext) -> Result<()> {
    if ctx.is_late() {
        ctx.log.warn("Running late", json!({ "delay": ctx.delay() }));
    }

    generate_and_send_digest(ctx.db()).await
}

Cron scheduling with timezone support, catch-up for missed runs, and structured logging. Runs in the same process.
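Catch-up for missed runs reduces to simple arithmetic once the schedule is resolved. Here is a sketch for a fixed-interval schedule (a simplification of real cron expression parsing; the function name is illustrative):

```rust
/// How many scheduled runs fell between the last recorded run and now,
/// for an every-`interval`-seconds schedule (Unix timestamps).
pub fn missed_runs(last_run: u64, now: u64, interval: u64) -> u64 {
    if now <= last_run {
        return 0;
    }
    (now - last_run) / interval // completed intervals since the last run
}
```

A catch-up scheduler would then decide whether to run each missed occurrence or only the most recent one.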

4. Durable Workflows (Multi-Step Processes That Don't Break)

#[forge::workflow]
#[version = 1]  // Bump when changing step order. In-flight workflows keep their original version.
#[timeout = "60d"]
pub async fn free_trial_flow(ctx: &WorkflowContext, user: User) -> Result<()> {
    // Each step can define compensation (rollback) logic
    ctx.step("start_trial")
        .run(|| activate_trial(&user))
        .compensate(|_| deactivate_trial(&user))
        .await?;

    ctx.step("send_welcome").run(|| send_email(&user, "Welcome!")).await?;

    ctx.sleep(Duration::from_secs(45 * 24 * 3600)).await;  // 45 days. Survives deployments.

    ctx.step("trial_ending").run(|| send_email(&user, "3 days left!")).await?;

    ctx.sleep(Duration::from_secs(3 * 24 * 3600)).await;  // 3 days

    ctx.step("convert_or_expire").run(|| end_trial(&user)).await?;
    Ok(())
    // If any step fails, previous steps compensate in reverse order
}

Deploy new code, restart servers, scale up or down. The workflow picks up right where it left off. Sleep for 45 days, and it just works. Compensation (rollback) runs automatically if later steps fail. This is durable execution without running a separate orchestration cluster.
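The trick that makes "picks up right where it left off" possible is step memoization: each step's result is persisted under its name, so replaying the workflow function after a restart skips already-completed steps instead of running them twice. A minimal in-memory sketch (names are illustrative, not FORGE's real executor, which persists to PostgreSQL):

```rust
use std::collections::HashMap;

/// Records completed steps so a replay returns cached results
/// instead of re-executing side effects.
#[derive(Default)]
pub struct StepLog {
    completed: HashMap<String, String>,
}

impl StepLog {
    pub fn run_step<F>(&mut self, name: &str, f: F) -> String
    where
        F: FnOnce() -> String,
    {
        if let Some(cached) = self.completed.get(name) {
            return cached.clone(); // replay: skip the side effect entirely
        }
        let result = f();
        self.completed.insert(name.to_string(), result.clone());
        result
    }
}
```

With the log in PostgreSQL instead of a `HashMap`, the same replay works across process restarts and deployments, which is what makes a 45-day `ctx.sleep` survivable.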

5. Real-Time Subscriptions (Live Data, No Extra Work)

<script lang="ts">
  import { subscribe } from '$lib/forge';

  // This auto-updates when data changes. Any client, anywhere.
  const users = subscribe('list_users', {});
</script>

{#each $users.data ?? [] as user}
  <div>{user.email}</div>
{/each}

Under the hood: Compile-time SQL parsing extracts all table dependencies (including JOINs and subqueries) → PostgreSQL triggers fire NOTIFY on changes → FORGE re-runs affected queries → SSE pushes diffs to clients.

No manual cache invalidation. No pub/sub wiring. Just reactive queries.
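To illustrate the first stage of that pipeline, here is a deliberately naive dependency extractor that pulls table names following FROM/JOIN keywords. FORGE does this with real SQL parsing at compile time; this sketch only conveys the idea.

```rust
/// Returns table names referenced after FROM or JOIN, in order,
/// deduplicated. Naive: no handling of subqueries, quoting, or CTEs.
pub fn table_deps(sql: &str) -> Vec<String> {
    let tokens: Vec<&str> = sql.split_whitespace().collect();
    let mut deps = Vec::new();
    for pair in tokens.windows(2) {
        let kw = pair[0].to_ascii_uppercase();
        if kw == "FROM" || kw == "JOIN" {
            // Strip trailing punctuation like commas or parentheses.
            let name = pair[1].trim_matches(|c: char| !c.is_alphanumeric() && c != '_');
            if !name.is_empty() && !deps.contains(&name.to_string()) {
                deps.push(name.to_string());
            }
        }
    }
    deps
}
```

Each extracted table then gets a trigger that fires `NOTIFY`, and the runtime knows which subscribed queries to re-run when that table changes.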

6. Webhooks (Receive External Events)

#[forge::webhook(
    path = "/hooks/stripe",
    signature = WebhookSignature::hmac_sha256("Stripe-Signature", "STRIPE_WEBHOOK_SECRET"),
    idempotency = "header:Idempotency-Key",
)]
pub async fn stripe(ctx: &WebhookContext, payload: Value) -> Result<WebhookResult> {
    ctx.dispatch_job("process_payment", payload.clone()).await?;
    Ok(WebhookResult::Accepted)
}

Signature validation, idempotency tracking, and job dispatch in one handler. Supports HMAC and RSA signatures.
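One detail worth spelling out: signature checks must compare the computed HMAC with the header value in constant time, or timing differences leak how many leading bytes matched. A standard sketch (the real check also computes the HMAC itself, which this example omits):

```rust
/// Compares two byte slices without short-circuiting on the first
/// mismatch, so comparison time doesn't depend on where bytes differ.
pub fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences instead of returning early
    }
    diff == 0
}
```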

7. MCP Tools (Give AI Agents Access)

#[forge::mcp_tool(
    name = "tickets.list",
    title = "List Support Tickets",
    read_only,
)]
pub async fn list_tickets(ctx: &McpToolContext) -> Result<Vec<Ticket>> {
    sqlx::query_as("SELECT * FROM tickets")
        .fetch_all(ctx.db())
        .await
        .map_err(Into::into)
}

Expose any function as an MCP tool. LLM agents can call your backend with the same auth, rate limiting, and validation as regular API calls. One macro, same business logic.


The Architecture

┌─────────────────────────────────────────┐
│                forge run                │
├─────────────┬─────────────┬─────────────┤
│   Gateway   │   Workers   │  Scheduler  │
│ (HTTP/SSE)  │   (Jobs)    │   (Cron)    │
└──────┬──────┴──────┬──────┴──────┬──────┘
       │             │             │
       └─────────────┴──────┬──────┘
                            │
                     ┌──────▼──────┐
                     │ PostgreSQL  │
                     └─────────────┘

One process. Multiple subsystems handle different concerns:

  • Gateway: HTTP/SSE server (built on Axum)
  • Workers: Pull jobs from PostgreSQL using SKIP LOCKED
  • Scheduler: Leader-elected cron runner (advisory locks prevent duplicate runs)
  • Daemons: Long-running singleton processes with leader election

Scale horizontally by running multiple instances. They coordinate through PostgreSQL. No service mesh, no gossip protocol, no Redis cluster.
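Leader election through advisory locks needs a 64-bit lock key per task. One common approach, sketched here with FNV-1a hashing (illustrative; FORGE's actual key derivation may differ), is to hash the task name and pass the result to `SELECT pg_try_advisory_lock($1)` — whichever instance acquires the lock runs the task:

```rust
/// Derives a stable 64-bit advisory-lock key from a task name (FNV-1a).
pub fn advisory_lock_key(name: &str) -> i64 {
    let mut hash: u64 = 0xcbf29ce484222325; // FNV offset basis
    for b in name.bytes() {
        hash ^= b as u64;
        hash = hash.wrapping_mul(0x100000001b3); // FNV prime
    }
    hash as i64 // Postgres advisory locks take a signed 64-bit key
}
```

Because the hash is deterministic, every instance computes the same key for the same task, so exactly one of them wins the lock.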

Crate Layout

forge              → Public API, Forge::builder(), prelude, CLI
├── forge-runtime  → Gateway, function router, job worker, workflow executor, cron scheduler
│   ├── forge-core → Types, traits, error types, contexts, schema definitions
│   └── forge-macros → #[query], #[mutation], #[job], #[workflow], #[cron]
└── forge-codegen  → TypeScript/Svelte client generator

Type Safety, End to End

FORGE generates TypeScript types from your Rust models:

// Rust: your source of truth
#[forge::model]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub role: UserRole,
    pub created_at: DateTime<Utc>,
}

#[forge::model]
pub enum UserRole {
    Admin,
    Member,
    Guest,
}
// TypeScript: generated automatically
export interface User {
  id: string;
  email: string;
  role: UserRole;
  created_at: string;
}

export type UserRole = 'Admin' | 'Member' | 'Guest';

// API client is also generated
import { api } from '$lib/forge';
const user = await api.get_user({ id: '...' });  // Fully typed

If your Rust code compiles, your frontend types are correct. This eliminates an entire class of "worked in dev, broke in prod" bugs.
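The scalar mapping behind that generation can be sketched as a lookup table. This is an illustrative simplification; `forge-codegen` handles full structs, enums, generics, and `Option`/`Vec` wrappers:

```rust
/// Maps a Rust scalar type name to its TypeScript counterpart,
/// matching the User example above (Uuid and DateTime serialize as strings).
pub fn ts_type(rust_type: &str) -> &'static str {
    match rust_type {
        "Uuid" | "String" | "DateTime<Utc>" => "string",
        "i32" | "i64" | "f64" => "number",
        "bool" => "boolean",
        _ => "unknown", // compound types are handled structurally
    }
}
```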


Why Not Just Use...

                    FORGE        Supabase    Firebase          PocketBase
Background Jobs     Built-in     External    Cloud Functions   —
Durable Workflows   Built-in     —           —                 —
Cron Scheduling     Built-in     External    Cloud Scheduler   —
Query Caching       Built-in     —           —                 —
Rate Limiting       Built-in     —           —                 —
Real-time           Built-in     Built-in    Built-in          —
Webhooks            Built-in     —           Cloud Functions   —
MCP Tools           Built-in     —           —                 —
Full Type Safety    Rust → TS    Partial     —                 —
Self-Hosted         One binary   Complex     —                 One binary
Vendor Lock-in      None         Low         High              None
Database            PostgreSQL   PostgreSQL  Firestore         SQLite

vs. Temporal/Inngest: FORGE workflows run in-process with no separate orchestration service, but you lose some features. If you need child workflows, signals, or advanced versioning, use Temporal. If you need durable multi-step processes without the ops overhead, FORGE handles that.

vs. Node.js + BullMQ + etc.: FORGE trades ecosystem breadth for operational simplicity. You get fewer npm packages but also fewer 3 AM pages about Redis running out of memory.


Getting Started

# Install
curl -fsSL https://tryforge.dev/install.sh | sh
# Or: cargo install forgex

# Create and run
forge new my-app --demo
cd my-app
forge dev
# → Frontend at http://localhost:5173
# → Backend at http://localhost:8080
# → PostgreSQL at localhost:5432

forge dev runs Docker Compose with PostgreSQL, a cargo-watch backend, and a Vite frontend. All three services start together and stop with Ctrl+C.

The --demo flag scaffolds a working app with examples of queries, mutations, jobs, crons, and workflows. Or use --minimal for a clean slate.

Scaffold new components without writing boilerplate:

forge add query list_orders      # new query function
forge add mutation create_order  # new mutation
forge add job send_receipt       # background job
forge add workflow onboarding    # durable workflow
forge add cron daily_report      # scheduled task
forge check                      # validate project health

forge generate syncs TypeScript types from your Rust models. forge check validates config, migrations, function signatures, and frontend setup.

Deployment

cargo build --release
./target/release/my-app

The release binary embeds the frontend build and the Forge runtime. One file to deploy. Point it at a PostgreSQL instance and it runs.

Check out the examples for working apps you can run with docker compose up.

Read the docs →


Who's This For

FORGE is opinionated. It's designed for:

  • Solo developers and small teams building SaaS products who don't want to manage infrastructure
  • Teams who value reliability: no null pointer exceptions, no "undefined is not a function", errors caught at compile time
  • Anyone tired of gluing together 7 different services for basic backend functionality

Probably not the right fit if:

  • You have a dedicated platform team and need fine-grained control over each component
  • You're building for millions of concurrent users (FORGE targets ~100k MAU comfortably)
  • You need deep integration with cloud-native services (Lambda, DynamoDB, Pub/Sub)

Project Maturity

FORGE is pre-1.0. Expect breaking changes between releases. Not ready for production yet. Good for side projects, internal tools, and kicking the tires.

Breaking changes are documented in CHANGELOG.md. Pin your version if you need stability. Once the core API settles, we will cut 1.0 and commit to semver.

Contributions welcome.


License

MIT. Do whatever you want.