dioxus-cloudflare 0.7.14

The missing bridge between Dioxus server functions and Cloudflare Workers.
Write a #[server] function once. It runs on Cloudflare Workers. The client calls it like a normal async function. No manual routing, no manual serialization, no duplicated endpoints.

// shared crate — server function
use dioxus::prelude::*;
use dioxus_cloudflare::prelude::*;

#[server]
pub async fn get_user(id: String) -> Result<User, ServerFnError> {
    let db = cf::d1("DB")?;
    db.prepare("SELECT * FROM users WHERE id = ?")
        .bind(&[id.into()])?
        .first::<User>(None)
        .await
        .cf()?
        .ok_or_else(|| ServerFnError::new("Not found"))
}
// client component — just call it
let user = get_user("abc".into()).await;
Client WASM                    Cloudflare Worker
┌──────────┐    fetch()    ┌─────────────────────┐
│ #[server] │ ───────────▶ │ handle(req, env)     │
│ generates │              │   ↓ set_context()    │
│ POST to   │              │   ↓ worker→http req  │
│ /api/...  │              │   ↓ Axum dispatch    │
│           │ ◀─ stream ─  │   ↓ http→worker resp │
└──────────┘               └─────────────────────┘
  1. Client calls get_user(id) — Dioxus serializes args, sends POST to /api/get_user
  2. Worker #[event(fetch)] receives the request
  3. dioxus_cloudflare::handle(req, env) is called:
    • Stores Env in thread-local (cf::env() becomes available)
    • Stores raw Request in thread-local (cf::req() becomes available)
    • Converts worker::Request → http::Request
    • Dispatches through the Dioxus Axum router (axum_core feature)
    • Converts http::Response → worker::Response (streaming via ReadableStream)
  4. Worker returns the response

Cloudflare Workers run one request per isolate at a time (single-threaded WASM). There is no concurrent access to thread-locals within a single Worker invocation.
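Because of that single-request guarantee, the context can live in a plain thread-local. A minimal sketch of the pattern in std-only Rust — `Env`, `set_context`, and `current_env` here are illustrative stand-ins, not the crate's actual internals:

```rust
use std::cell::RefCell;

// Stand-in for worker::Env — illustrative only.
#[derive(Clone)]
struct Env {
    name: String,
}

thread_local! {
    // One request per isolate at a time, so a thread-local needs no locking.
    static CONTEXT: RefCell<Option<Env>> = RefCell::new(None);
}

// Called once at the start of each invocation, before dispatch.
fn set_context(env: Env) {
    CONTEXT.with(|c| *c.borrow_mut() = Some(env));
}

// Any code running later in the same invocation can read the context back.
fn current_env() -> Option<Env> {
    CONTEXT.with(|c| c.borrow().clone())
}

fn main() {
    set_context(Env { name: "production".into() });
    assert_eq!(current_env().unwrap().name, "production");
}
```

This is why `cf::env()` and friends can be free functions with no parameters: they read the thread-local that `handle()` populated.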

| Export | What It Does |
|---|---|
| `cf::d1(name)` | D1 database — env + binding + error conversion in one call |
| `cf::kv(name)` | Workers KV namespace |
| `cf::r2(name)` | R2 bucket |
| `cf::durable_object(name)` | Durable Object namespace |
| `cf::queue(name)` | Queue producer (requires `queue` feature) |
| `cf::secret(name)` | Encrypted secret (`wrangler secret put`) |
| `cf::var(name)` | Plaintext environment variable (`[vars]` in wrangler.toml) |
| `cf::ai(name)` | Workers AI inference |
| `cf::service(name)` | Service binding (call other Workers) |
| `cf::env()` | Full Worker `Env` — for bindings without a shorthand |
| `cf::req()` | Raw `worker::Request` — headers, IP |
| `cf::cookie(name)` | Read a named cookie from the request |
| `cf::cookies()` | Read all cookies from the request |
| `cf::set_cookie()` | Set an HttpOnly/Secure auth cookie (secure defaults) |
| `cf::set_cookie_with()` | Set a cookie with custom options (builder pattern) |
| `cf::clear_cookie()` | Clear a cookie (logout) |
| `cf::session()` | Load session data (async); returns `Session` handle for sync get/set/remove |
| `SessionConfig` | Session backend configuration (KV or D1) — pass to `Handler::session()` |
| `handle(req, env)` | Main entry point — wire this into `#[event(fetch)]` |
| `Handler` | Builder with before/after middleware hooks + `.session()` + `.websocket()` routing |
| `cf::websocket_upgrade()` | Create a `WebSocketPair` + 101 response in one call (for Durable Objects) |
| `cf::websocket_pair()` | Create a raw `WebSocketPair` for custom handling |
| `CfError` | Newtype for `worker::Error` → `ServerFnError` conversion |
| `CfResultExt` | `.cf()` method on `Result<T, worker::Error>` and `Result<T, KvError>` |
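The `.cf()` extension-trait pattern can be sketched in a few lines of plain Rust — the error types below are simplified stand-ins for `worker::Error` and the real `ServerFnError`, not the crate's definitions:

```rust
use std::fmt;

// Simplified stand-ins for worker::Error and ServerFnError.
#[derive(Debug)]
struct WorkerError(String);

#[derive(Debug, PartialEq)]
struct ServerFnError(String);

impl fmt::Display for WorkerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "worker error: {}", self.0)
    }
}

// The extension trait: adds a .cf() method to any Result with a
// convertible error, collapsing map_err boilerplate at call sites.
trait CfResultExt<T> {
    fn cf(self) -> Result<T, ServerFnError>;
}

impl<T> CfResultExt<T> for Result<T, WorkerError> {
    fn cf(self) -> Result<T, ServerFnError> {
        self.map_err(|e| ServerFnError(e.to_string()))
    }
}

fn main() {
    let ok: Result<i32, WorkerError> = Ok(7);
    assert_eq!(ok.cf().unwrap(), 7);

    let err: Result<i32, WorkerError> = Err(WorkerError("D1 timeout".into()));
    assert!(err.cf().is_err());
}
```

The same shape — one trait, one impl per foreign error type — is how a single `.cf()` call can cover both `worker::Error` and `KvError`.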

This crate requires a patched version of dioxus-server that adds wasm32 target support. Add the following to your workspace Cargo.toml:

[patch.crates-io]
dioxus-server = { git = "https://github.com/JaffeSystems/dioxus-server-cf.git" }

This is necessary because upstream dioxus-server 0.7.3 does not compile for wasm32-unknown-unknown. The patch applies minimal cfg-gating to make it compatible with Cloudflare Workers.

Install dioxus-cloudflare-build to automate the entire build pipeline — cargo build, wasm-bindgen, and JavaScript shim generation — in a single command:

cargo install dioxus-cloudflare-build

dioxus-cf-build [OPTIONS] -p <CRATE>

Options:
  -p, --package <CRATE>    Crate to build (cargo -p flag)
      --release            Build in release mode
      --out-dir <DIR>      Output directory [default: build/worker]

What it does:

  1. Windows MSVC PATH fix — auto-detects the Git Bash link.exe conflict and prepends the real MSVC linker directory (no-op on other platforms)
  2. cargo build — runs cargo build --target wasm32-unknown-unknown -p <crate> [--release]
  3. wasm-bindgen — runs wasm-bindgen --out-dir <dir> --target web on the output .wasm
  4. Shim generation — parses the wasm-bindgen .d.ts to auto-detect Durable Object classes and generates shim.mjs
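Step 4's class detection can be approximated by scanning the generated `.d.ts` for exported classes. This is a hypothetical simplification of whatever parsing the real tool does, sketched in std-only Rust:

```rust
// Naive scan for `export class Foo` declarations in a wasm-bindgen .d.ts.
// The real tool's parsing may be more involved; this just shows the idea.
fn detect_do_classes(dts: &str) -> Vec<String> {
    dts.lines()
        .filter_map(|line| {
            let rest = line.trim().strip_prefix("export class ")?;
            // The class name ends at the first non-identifier character.
            let name: String = rest
                .chars()
                .take_while(|c| c.is_alphanumeric() || *c == '_')
                .collect();
            (!name.is_empty()).then_some(name)
        })
        .collect()
}

fn main() {
    let dts = "export function fetch(): void;\nexport class EchoDo {\n  free(): void;\n}\n";
    assert_eq!(detect_do_classes(dts), vec!["EchoDo".to_string()]);
}
```

Each detected class then needs a matching re-export in `shim.mjs` so wrangler can instantiate the Durable Object.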

Use it as your wrangler build command:

# wrangler.toml
[build]
command = "dioxus-cf-build --release -p my-worker"

Now npx wrangler deploy handles everything — no manual steps, no hand-written shim.

use worker::*;
use dioxus_cloudflare::prelude::*;

// Import server functions so they register with inventory
use shared::server_fns::*;

extern "C" { fn __wasm_call_ctors(); }

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    // Required: initialize inventory for #[server] function registration
    // SAFETY: Called once per cold start. inventory crate needs this in WASM.
    unsafe { __wasm_call_ctors(); }

    dioxus_cloudflare::handle(req, env).await
}

Use Handler for before/after middleware without touching bridge internals.

CORS headers on all responses:

use worker::*;
use dioxus_cloudflare::Handler;

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    unsafe { __wasm_call_ctors(); }

    Handler::new()
        .after(|resp| {
            resp.headers_mut().set("Access-Control-Allow-Origin", "*")?;
            Ok(())
        })
        .handle(req, env)
        .await
}

Auth check (short-circuit unauthorized requests):

Handler::new()
    .before(|req| {
        if req.headers().get("Authorization")?.is_none() {
            return Ok(Some(Response::error("Unauthorized", 401)?));
        }
        Ok(None) // continue to server functions
    })
    .handle(req, env)
    .await

Before hooks run after context is set (cf::env(), cf::d1(), etc. work). Return Ok(None) to continue, Ok(Some(resp)) to short-circuit. After hooks run on all responses (including short-circuited ones) and can modify headers.
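The dispatch semantics above can be sketched with plain closures — a deliberate simplification using string stand-ins for `worker::Request`/`Response`, not the real Handler's signatures:

```rust
// Simplified request/response stand-ins.
type Req = String;
type Resp = String;

// Run before hooks (the first Some short-circuits), then the inner handler,
// then after hooks on whichever response we ended up with.
fn dispatch(
    req: Req,
    before: &[fn(&Req) -> Option<Resp>],
    inner: fn(&Req) -> Resp,
    after: &[fn(&mut Resp)],
) -> Resp {
    let mut resp = before
        .iter()
        .find_map(|hook| hook(&req))
        .unwrap_or_else(|| inner(&req));
    // After hooks run on all responses, including short-circuited ones.
    for hook in after {
        hook(&mut resp);
    }
    resp
}

fn main() {
    let reject_anon: fn(&Req) -> Option<Resp> =
        |req| (!req.contains("Authorization")).then(|| "401".to_string());
    let add_cors: fn(&mut Resp) = |resp| resp.push_str(" +cors");
    let handler: fn(&Req) -> Resp = |_| "200".to_string();

    // Short-circuited response still passes through the after hook.
    assert_eq!(
        dispatch("GET /".into(), &[reject_anon], handler, &[add_cors]),
        "401 +cors"
    );
    assert_eq!(
        dispatch("GET / Authorization: x".into(), &[reject_anon], handler, &[add_cors]),
        "200 +cors"
    );
}
```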

Built-in session management backed by Workers KV or D1. Configure it on the Handler builder — cf::session() becomes available in all server functions.

KV-backed sessions (automatic expiry via KV TTL):

Handler::new()
    .session(SessionConfig::kv("SESSIONS"))
    .handle(req, env)
    .await

D1-backed sessions:

Handler::new()
    .session(SessionConfig::d1("DB", "sessions"))
    .handle(req, env)
    .await

D1 requires a table with this schema:

CREATE TABLE sessions (
    id TEXT PRIMARY KEY,
    data TEXT NOT NULL,
    expires_at INTEGER NOT NULL
);

Reading and writing session data:

#[server]
pub async fn login(user: String) -> Result<(), ServerFnError> {
    let session = cf::session().await?;
    session.set("user_id", &user)?;
    Ok(())
}

#[server]
pub async fn profile() -> Result<String, ServerFnError> {
    let session = cf::session().await?;
    let user: Option<String> = session.get("user_id")?;
    Ok(user.unwrap_or_else(|| "not logged in".into()))
}

#[server]
pub async fn logout() -> Result<(), ServerFnError> {
    let session = cf::session().await?;
    session.destroy();
    Ok(())
}

cf::session() is async (loads from KV/D1 on first call, cached after). Session methods (get, set, remove, destroy) are sync — they operate on the in-memory cache. Dirty data is flushed to the backend automatically before the response is sent.
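That load-once / sync-mutate / flush-if-dirty lifecycle can be sketched with an in-memory map. Everything here is a stand-in (string values, sync load, a `needs_flush` probe); the real Session serializes values and talks to KV or D1 asynchronously:

```rust
use std::collections::HashMap;

// Simplified session: string values only; the real one serializes via serde.
struct Session {
    data: HashMap<String, String>,
    dirty: bool,
}

impl Session {
    // In the real crate this load is async (a KV/D1 fetch); sketched as sync.
    fn load(initial: HashMap<String, String>) -> Self {
        Session { data: initial, dirty: false }
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.data.get(key)
    }

    fn set(&mut self, key: &str, value: &str) {
        self.data.insert(key.to_string(), value.to_string());
        self.dirty = true; // marks the session for flush before the response
    }

    fn remove(&mut self, key: &str) {
        if self.data.remove(key).is_some() {
            self.dirty = true;
        }
    }

    // Checked by the bridge before the response goes out.
    fn needs_flush(&self) -> bool {
        self.dirty
    }
}

fn main() {
    let mut s = Session::load(HashMap::new());
    assert!(!s.needs_flush());
    s.set("user_id", "abc");
    assert_eq!(s.get("user_id").map(String::as_str), Some("abc"));
    assert!(s.needs_flush());
}
```

Because mutations only touch the cache, `get`/`set`/`remove` stay sync and cheap; the single backend write happens once per request, and only when something actually changed.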

Custom configuration:

SessionConfig::kv("SESSIONS")
    .cookie_name("my_session")   // default: "__session"
    .max_age(60 * 60 * 24 * 7)  // 7 days (default: 86400 = 24h)
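Those options end up in a Set-Cookie header. A sketch of how the attributes combine — the exact attribute set here is illustrative; the crate's actual defaults come from `SessionConfig`, not this snippet:

```rust
// Build a Set-Cookie header line from name/value plus hardened defaults.
// Attribute list is illustrative; the crate's real defaults may differ.
fn session_set_cookie(name: &str, value: &str, max_age_secs: u64) -> String {
    format!(
        "{name}={value}; Max-Age={max_age_secs}; Path=/; HttpOnly; Secure; SameSite=Lax"
    )
}

fn main() {
    let header = session_set_cookie("__session", "abc123", 60 * 60 * 24 * 7);
    assert!(header.starts_with("__session=abc123; Max-Age=604800"));
    assert!(header.contains("HttpOnly"));
}
```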

wrangler.toml — add the KV namespace:

[[kv_namespaces]]
binding = "SESSIONS"
id = "your-kv-namespace-id"

Access encrypted secrets and plaintext variables from inside server functions.

Secrets are set via wrangler secret put or the Cloudflare dashboard — encrypted at rest, never in wrangler.toml:

#[server]
pub async fn verify_token(token: String) -> Result<bool, ServerFnError> {
    let expected = cf::secret("API_TOKEN")?.to_string();
    Ok(token == expected)
}

Variables are set in the [vars] section of wrangler.toml — plaintext, visible in source:

#[server]
pub async fn get_environment() -> Result<String, ServerFnError> {
    Ok(cf::var("ENVIRONMENT")?.to_string())
}
# wrangler.toml
[vars]
ENVIRONMENT = "production"

Run AI inference from server functions using Cloudflare's built-in Workers AI models.

use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct AiInput { messages: Vec<AiMessage> }
#[derive(Serialize)]
struct AiMessage { role: String, content: String }
#[derive(Deserialize)]
struct AiOutput { response: Option<String> }

#[server]
pub async fn generate(prompt: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;

    let ai = cf::ai("AI")?;
    let resp: AiOutput = ai.run("@cf/meta/llama-3.1-8b-instruct", AiInput {
        messages: vec![AiMessage { role: "user".into(), content: prompt }],
    }).await.cf()?;

    Ok(resp.response.unwrap_or_default())
}
# wrangler.toml
[ai]
binding = "AI"

Any model listed in the Workers AI catalog can be used — text generation, embeddings, image generation, etc. Define typed input/output structs matching the model's API — serde_json::Value does not work correctly through serde_wasm_bindgen.

Call other Workers from server functions. The target Worker must be deployed separately and bound in wrangler.toml.

#[server]
pub async fn call_auth(token: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;

    let auth = cf::service("AUTH")?;
    let resp = auth.fetch("https://fake-host/verify", None).await.cf()?;
    Ok(resp.text().await.cf()?)
}
# wrangler.toml
[[services]]
binding = "AUTH"
service = "auth-worker"

The URL host is ignored — the request goes directly to the bound Worker. Use any placeholder host.

Render Dioxus components to HTML at the edge. Requires the ssr feature.

When the Axum router returns 404 and the request accepts text/html, the handler renders your app component and returns the HTML. Non-HTML requests (JS, CSS, WASM, JSON) pass through normally.

Minimal SSR (default HTML shell, no client JS):

Handler::new()
    .with_ssr(App)
    .handle(req, env)
    .await

SSR with custom index.html (SPA takeover after first paint):

Handler::new()
    .with_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await

The custom index.html must contain an element with id="main" — rendered component output is inserted at that point.
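The insertion step can be pictured as a string splice at the `id="main"` element. This naive marker-based version is an illustration, not the crate's actual implementation:

```rust
// Insert rendered component HTML inside the element carrying id="main".
// Naive marker-based splice — illustrative, not the crate's real parser.
fn splice_into_main(index_html: &str, rendered: &str) -> Option<String> {
    let marker = "id=\"main\"";
    let attr_pos = index_html.find(marker)?;
    // Find the end of the opening tag that carries the id attribute.
    let tag_end = attr_pos + index_html[attr_pos..].find('>')?;
    let mut out = String::with_capacity(index_html.len() + rendered.len());
    out.push_str(&index_html[..=tag_end]); // everything up to and including '>'
    out.push_str(rendered);               // server-rendered component output
    out.push_str(&index_html[tag_end + 1..]);
    Some(out)
}

fn main() {
    let index = "<body><div id=\"main\"></div></body>";
    let html = splice_into_main(index, "<p>hi</p>").unwrap();
    assert_eq!(html, "<body><div id=\"main\"><p>hi</p></div></body>");
}
```

If the marker is missing, there is nowhere to splice — which is why a custom index.html without `id="main"` cannot work.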

Suspense is supported: wait_for_suspense() resolves server futures during SSR, so components that call #[server] functions via use_server_future will have their data ready in the initial HTML.

SSR always renders with hydration markers (data-node-hydration attributes) and injects serialized hydration data. When the client WASM is built with hydrate(true), it reuses the server-rendered DOM instead of re-rendering — providing instant first paint with no flash.

Worker (server):

Handler::new()
    .with_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await

Client WASM (must render the same component):

The client must enable the fullstack feature on dioxus (which activates dioxus-web/hydrate):

# Cargo.toml
[dependencies]
dioxus = { version = "=0.7.3", features = ["web", "fullstack"] }

fn main() {
    dioxus::launch(App);
}

Important: Do not use ? on use_server_future in hydrated components. The ? operator suspends the component if the resource isn't immediately ready, which creates a VirtualDom/DOM tree mismatch and crashes the hydration walker. Instead, match on the Result:

#[component]
fn App() -> Element {
    let data_text = match use_server_future(get_data) {
        Ok(resource) => match &*resource.read() {
            Some(Ok(s)) => s.clone(),
            Some(Err(e)) => format!("Error: {e}"),
            None => "Loading...".into(),
        },
        Err(_) => "Loading...".into(),
    };
    rsx! { p { "{data_text}" } }
}

Build order:

  1. dx build --release — builds client WASM + index.html
  2. dioxus-cf-build --release -p your-worker — builds worker WASM, runs wasm-bindgen, generates shim (see Build Tool)

Send the initial HTML immediately with suspense fallbacks as placeholders, then stream resolved content out-of-order via ReadableStream as each suspense boundary completes. Fast data renders instantly; slow data streams in later. Requires the ssr feature.

Handler::new()
    .with_streaming_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await

If no suspense boundaries are pending after the initial render, streaming SSR automatically falls back to a single-shot response with no overhead — you can always use with_streaming_ssr without penalty.

The client-side JavaScript (window.dx_hydrate) swaps suspense placeholders with resolved content as chunks arrive. This is the same mechanism used by upstream Dioxus streaming SSR.

Real-time WebSocket connections via Durable Objects. The worker upgrades the request and forwards it to a DO, which creates the WebSocketPair and handles messages.

Worker entry point — route WebSocket upgrades to a Durable Object:

Handler::new()
    .websocket("/ws", |req| async move {
        let ns = cf::durable_object("WS_DO")?;
        let room = req.path().strip_prefix("/ws/").unwrap_or("default");
        let id = ns.id_from_name(room).cf()?;
        let stub = id.get_stub().cf()?;
        Ok(stub.fetch_with_request(req).await.cf()?)
    })
    .handle(req, env)
    .await

Durable Object — accept the socket and handle messages:

use worker::*;
use dioxus_cloudflare::prelude::*;

#[durable_object]
pub struct EchoDo {
    state: State,
    env: Env,
}

impl DurableObject for EchoDo {
    fn new(state: State, env: Env) -> Self { Self { state, env } }

    async fn fetch(&self, _req: Request) -> Result<Response> {
        let (server, resp) = cf::websocket_upgrade()?;
        self.state.accept_web_socket(&server);
        Ok(resp)
    }

    async fn websocket_message(&self, ws: WebSocket, message: WebSocketIncomingMessage) -> Result<()> {
        match message {
            WebSocketIncomingMessage::String(text) => ws.send_with_str(&format!("echo: {text}"))?,
            WebSocketIncomingMessage::Binary(bytes) => ws.send_with_bytes(&bytes)?,
        }
        Ok(())
    }

    async fn websocket_close(&self, _ws: WebSocket, _code: usize, _reason: String, _was_clean: bool) -> Result<()> {
        Ok(())
    }
}

wrangler.toml — bind the DO and route WebSocket paths:

[durable_objects]
bindings = [
  { name = "WS_DO", class_name = "EchoDo" }
]

[[migrations]]
tag = "v1"
new_sqlite_classes = ["EchoDo"]

[assets]
run_worker_first = ["/api/*", "/ws/*"]

use dioxus::prelude::*;
use dioxus_cloudflare::prelude::*;

#[server]
pub async fn create_order(items: Vec<Item>) -> Result<Order, ServerFnError> {
    let db = cf::d1("DB")?;
    // Derive the total server-side (assumes Item exposes a price field).
    let total: f64 = items.iter().map(|i| i.price).sum();

    db.prepare("INSERT INTO orders (items, total) VALUES (?, ?)")
        .bind(&[serde_json::to_string(&items)?.into(), total.into()])?
        .run()
        .await
        .cf()?;

    Ok(Order { items, total, status: "confirmed".into() })
}
use dioxus::prelude::*;
use shared::server_fns::create_order;

#[component]
fn OrderButton(items: Vec<Item>) -> Element {
    let order = use_resource(move || {
        let items = items.clone();
        async move { create_order(items).await }
    });

    match &*order.read() {
        Some(Ok(o)) => rsx! { p { "Order confirmed: {o.status}" } },
        Some(Err(e)) => rsx! { p { "Error: {e}" } },
        None => rsx! { p { "Placing order..." } },
    }
}
src/
├── lib.rs          # Public API: cf module, handle(), re-exports
├── bindings.rs     # Typed binding shorthands: d1(), kv(), r2(), durable_object(), queue(), ai(), service()
├── context.rs      # Thread-local Env + Request storage
├── handler.rs      # handle() — Worker↔Axum bridge, Handler builder, SSR fallback
├── cookie.rs       # Cookie read/write helpers + CookieBuilder
├── error.rs        # CfError newtype + CfResultExt trait
├── prelude.rs      # Convenience re-exports
├── session.rs      # Session middleware: KV/D1 backend, cookie-based session IDs
├── ssr.rs          # SSR rendering: single-shot + streaming, hydration data extraction
├── streaming.rs    # Out-of-order streaming data structures: MountPath, Mount, PendingSuspenseBoundary
└── websocket.rs    # WebSocket helpers: websocket_upgrade(), websocket_pair()
  • SSR inside Workers — render Dioxus components to HTML at the edge (requires ssr feature)
  • Hydration — client reuses server-rendered DOM instead of re-rendering (SSR always emits hydration markers)
  • Streaming SSR — send initial HTML with suspense fallbacks immediately, stream resolved content out-of-order via ReadableStream
  • WebSocket support — Handler::websocket() routes upgrade requests to Durable Objects; cf::websocket_upgrade() creates the WebSocketPair + 101 response
  • Session middleware — Handler::session(SessionConfig::kv("SESSIONS")) enables cf::session() in server functions; KV or D1 backend with automatic cookie management

Response bodies are streamed via ReadableStream — handler.rs wraps axum::body::Body as a TryStream and pipes it through ResponseBuilder::from_stream(). Server functions returning TextStream, ByteStream, JsonStream, etc. stream chunks directly to the client without buffering the full response into memory.
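The chunk-by-chunk piping can be sketched with a plain iterator standing in for the body stream — a simplification of the async TryStream plumbing, with a Vec standing in for the Worker's stream writer:

```rust
// Iterator stands in for the TryStream of body chunks; the sink stands in
// for the Worker's ReadableStream writer. Nothing is buffered whole.
fn pipe<I: Iterator<Item = Result<Vec<u8>, String>>>(
    chunks: I,
    sink: &mut Vec<Vec<u8>>,
) -> Result<usize, String> {
    let mut total = 0;
    for chunk in chunks {
        let chunk = chunk?; // a failed chunk aborts the stream mid-flight
        total += chunk.len();
        sink.push(chunk); // forwarded as soon as it arrives
    }
    Ok(total)
}

fn main() {
    let body = vec![Ok(b"hel".to_vec()), Ok(b"lo".to_vec())];
    let mut sink = Vec::new();
    assert_eq!(pipe(body.into_iter(), &mut sink), Ok(5));
    assert_eq!(sink.concat(), b"hello".to_vec());
}
```

Note the consequence visible even in the sketch: chunks emitted before an error have already been forwarded, which is inherent to streaming responses.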

| Feature | Enables |
|---|---|
| `queue` | `cf::queue()` shorthand (activates `worker/queue`) |
| `ssr` | Server-side rendering via `Handler::with_ssr()` / `with_streaming_ssr()` (adds `dioxus-ssr`, `dioxus-history`, `futures-channel`, `wasm-bindgen-futures`) |
dioxus = { version = "=0.7.3", features = ["fullstack"] }
worker = { version = "0.7", features = ["http"] }
axum = { version = "0.8", default-features = false }
http = "1"

Copyright (C) 2026-2027 Jaffe Systems

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

If you use this software in a network service (SaaS, web application, API, etc.), you must make the complete source code of your application available to its users under the AGPL-3.0. This includes any modifications and derivative works.

Commercial License: If you need to use this software in a proprietary or closed-source application without the AGPL-3.0 obligations, a commercial license is available. See COMMERCIAL-LICENSE.md for details.

| Use Case | License | Source Disclosure Required? |
|---|---|---|
| Open-source project | AGPL-3.0 (free) | Yes |
| Internal tools (not served to users) | AGPL-3.0 (free) | No |
| Proprietary SaaS / closed-source | Commercial (paid) | No |