<div align="center">
# dioxus-cloudflare
**The missing bridge between Dioxus server functions and Cloudflare Workers.**
[![crates.io](https://img.shields.io/crates/v/dioxus-cloudflare.svg)](https://crates.io/crates/dioxus-cloudflare)
[![docs.rs](https://img.shields.io/docsrs/dioxus-cloudflare)](https://docs.rs/dioxus-cloudflare)
[![license](https://img.shields.io/badge/license-AGPL--3.0-blue.svg)](LICENSE)
</div>
---
<h2 align="center">What It Does</h2>
Write a `#[server]` function once. It runs on Cloudflare Workers. The client calls it like a normal async function. No manual routing, no manual serialization, no duplicated endpoints.
```rust
// shared crate — server function
use dioxus::prelude::*;
use dioxus_cloudflare::prelude::*;
#[server]
pub async fn get_user(id: String) -> Result<User, ServerFnError> {
    let db = cf::d1("DB")?;
    db.prepare("SELECT * FROM users WHERE id = ?")
        .bind(&[id.into()])?
        .first::<User>(None)
        .await
        .cf()?
        .ok_or_else(|| ServerFnError::new("Not found"))
}
```
```rust
// client component — just call it
let user = get_user("abc".into()).await;
```
<h2 align="center">Architecture</h2>
```
Client WASM                  Cloudflare Worker
┌────────────┐    fetch()    ┌─────────────────────┐
│ #[server]  │ ────────────▶ │ handle(req, env)    │
│ generates  │               │  ↓ set_context()    │
│ POST to    │               │  ↓ worker→http req  │
│ /api/...   │               │  ↓ Axum dispatch    │
│            │ ◀── stream ── │  ↓ http→worker resp │
└────────────┘               └─────────────────────┘
```
<h3 align="center">Request Flow</h3>
1. Client calls `get_user(id)` — Dioxus serializes args, sends POST to `/api/get_user`
2. Worker `#[event(fetch)]` receives the request
3. `dioxus_cloudflare::handle(req, env)` is called:
- Stores `Env` in thread-local (`cf::env()` becomes available)
- Stores raw `Request` in thread-local (`cf::req()` becomes available)
- Converts `worker::Request` → `http::Request`
- Dispatches through the Dioxus Axum router (`axum_core` feature)
- Converts `http::Response` → `worker::Response` (streaming via `ReadableStream`)
4. Worker returns the response
<h3 align="center">Why Thread-Local Works</h3>
Cloudflare Workers run one request per isolate at a time (single-threaded WASM). There is no concurrent access to thread-locals within a single Worker invocation.
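The same pattern can be sketched with plain stdlib types (illustrative only — the crate's actual internals store `Env` and the raw `Request`, not a `String`):

```rust
use std::cell::RefCell;

// One slot per thread; a Worker isolate has exactly one thread,
// so this behaves as per-request context.
thread_local! {
    static CONTEXT: RefCell<Option<String>> = RefCell::new(None);
}

/// Called at the start of each request (analogous to `set_context()`).
fn set_context(value: String) {
    CONTEXT.with(|c| *c.borrow_mut() = Some(value));
}

/// Callable from anywhere during the request (analogous to `cf::env()`).
fn context() -> Option<String> {
    CONTEXT.with(|c| c.borrow().clone())
}
```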
<h2 align="center">The Crate Provides</h2>
| Helper | Purpose |
| --- | --- |
| `cf::d1(name)` | D1 database — env + binding + error conversion in one call |
| `cf::kv(name)` | Workers KV namespace |
| `cf::r2(name)` | R2 bucket |
| `cf::durable_object(name)` | Durable Object namespace |
| `cf::queue(name)` | Queue producer (requires `queue` feature) |
| `cf::secret(name)` | Encrypted secret (`wrangler secret put`) |
| `cf::var(name)` | Plaintext environment variable (`[vars]` in wrangler.toml) |
| `cf::ai(name)` | Workers AI inference |
| `cf::service(name)` | Service binding (call other Workers) |
| `cf::env()` | Full Worker `Env` — for bindings without a shorthand |
| `cf::req()` | Raw `worker::Request` — headers, IP |
| `cf::cookie(name)` | Read a named cookie from the request |
| `cf::cookies()` | Read all cookies from the request |
| `cf::set_cookie()` | Set an HttpOnly/Secure auth cookie (secure defaults) |
| `cf::set_cookie_with()` | Set a cookie with custom options (builder pattern) |
| `cf::clear_cookie()` | Clear a cookie (logout) |
| `cf::session()` | Load session data (async); returns `Session` handle for sync get/set/remove |
| `SessionConfig` | Session backend configuration (KV or D1) — pass to `Handler::session()` |
| `handle(req, env)` | Main entry point — wire this into `#[event(fetch)]` |
| `Handler` | Builder with before/after middleware hooks + `.session()` + `.websocket()` routing |
| `cf::websocket_upgrade()` | Create a `WebSocketPair` + 101 response in one call (for Durable Objects) |
| `cf::websocket_pair()` | Create a raw `WebSocketPair` for custom handling |
| `CfError` | Newtype for `worker::Error` → `ServerFnError` conversion |
| `CfResultExt` | `.cf()` method on `Result<T, worker::Error>` and `Result<T, KvError>` |
<h2 align="center">Prerequisites</h2>
This crate requires a patched version of `dioxus-server` that adds `wasm32` target support. Add the following to your **workspace** `Cargo.toml`:
```toml
[patch.crates-io]
dioxus-server = { git = "https://github.com/JaffeSystems/dioxus-server-cf.git" }
```
This is necessary because upstream `dioxus-server` 0.7.3 does not compile for `wasm32-unknown-unknown`. The patch applies minimal `cfg`-gating to make it compatible with Cloudflare Workers.
<h2 align="center">Build Tool</h2>
Install [`dioxus-cloudflare-build`](https://crates.io/crates/dioxus-cloudflare-build) to automate the entire build pipeline — cargo build, wasm-bindgen, and JavaScript shim generation — in a single command:
```sh
cargo install dioxus-cloudflare-build
```
```
dioxus-cf-build [OPTIONS] -p <CRATE>

Options:
  -p, --package <CRATE>  Crate to build (cargo -p flag)
      --release          Build in release mode
      --out-dir <DIR>    Output directory [default: build/worker]
```
**What it does:**
1. **Windows MSVC PATH fix** — auto-detects the Git Bash `link.exe` conflict and prepends the real MSVC linker directory (no-op on other platforms)
2. **`cargo build`** — runs `cargo build --target wasm32-unknown-unknown -p <crate> [--release]`
3. **`wasm-bindgen`** — runs `wasm-bindgen --out-dir <dir> --target web` on the output `.wasm`
4. **Shim generation** — parses the wasm-bindgen `.d.ts` to auto-detect Durable Object classes and generates `shim.mjs`
**Use it as your wrangler build command:**
```toml
# wrangler.toml
[build]
command = "dioxus-cf-build --release -p my-worker"
```
Now `npx wrangler deploy` handles everything — no manual steps, no hand-written shim.
<h2 align="center">Usage</h2>
<h3 align="center">Worker Entry Point</h3>
```rust
use worker::*;
use dioxus_cloudflare::prelude::*;

// Import server functions so they register with inventory
use shared::server_fns::*;

extern "C" { fn __wasm_call_ctors(); }
#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    // Required: initialize inventory for #[server] function registration
    // SAFETY: Called once per cold start. inventory crate needs this in WASM.
    unsafe { __wasm_call_ctors(); }
    dioxus_cloudflare::handle(req, env).await
}
```
<h3 align="center">Middleware Hooks</h3>
Use [`Handler`] for before/after middleware without touching bridge internals.
**CORS headers on all responses:**
```rust
use worker::*;
use dioxus_cloudflare::Handler;

extern "C" { fn __wasm_call_ctors(); }

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    unsafe { __wasm_call_ctors(); }
    Handler::new()
        .after(|resp| {
            resp.headers_mut().set("Access-Control-Allow-Origin", "*")?;
            Ok(())
        })
        .handle(req, env)
        .await
}
```
**Auth check (short-circuit unauthorized requests):**
```rust
Handler::new()
    .before(|req| {
        if req.headers().get("Authorization")?.is_none() {
            return Ok(Some(Response::error("Unauthorized", 401)?));
        }
        Ok(None) // continue to server functions
    })
    .handle(req, env)
    .await
```
**Before hooks** run after context is set (`cf::env()`, `cf::d1()`, etc. work). Return `Ok(None)` to continue, `Ok(Some(resp))` to short-circuit. **After hooks** run on all responses (including short-circuited ones) and can modify headers.
<h3 align="center">Session Middleware</h3>
Built-in session management backed by Workers KV or D1. Configure it on the `Handler` builder — `cf::session()` becomes available in all server functions.
**KV-backed sessions (automatic expiry via KV TTL):**
```rust
Handler::new()
    .session(SessionConfig::kv("SESSIONS"))
    .handle(req, env)
    .await
```
**D1-backed sessions:**
```rust
Handler::new()
    .session(SessionConfig::d1("DB", "sessions"))
    .handle(req, env)
    .await
```
D1 requires a table with this schema:
```sql
CREATE TABLE sessions (
    id TEXT PRIMARY KEY,
    data TEXT NOT NULL,
    expires_at INTEGER NOT NULL
);
```
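The D1 database itself also needs a binding in `wrangler.toml`. A minimal sketch (the `binding` name must match the first argument to `SessionConfig::d1`; the database name and id below are placeholders):

```toml
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "your-d1-database-id"
```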
**Reading and writing session data:**
```rust
#[server]
pub async fn login(user: String) -> Result<(), ServerFnError> {
    let session = cf::session().await?;
    session.set("user_id", &user)?;
    Ok(())
}

#[server]
pub async fn profile() -> Result<String, ServerFnError> {
    let session = cf::session().await?;
    let user: Option<String> = session.get("user_id")?;
    Ok(user.unwrap_or_else(|| "not logged in".into()))
}

#[server]
pub async fn logout() -> Result<(), ServerFnError> {
    let session = cf::session().await?;
    session.destroy();
    Ok(())
}
```
`cf::session()` is async (loads from KV/D1 on first call, cached after). `Session` methods (`get`, `set`, `remove`, `destroy`) are sync — they operate on the in-memory cache. Dirty data is flushed to the backend automatically before the response is sent.
**Custom configuration:**
```rust
SessionConfig::kv("SESSIONS")
    .cookie_name("my_session")  // default: "__session"
    .max_age(60 * 60 * 24 * 7)  // 7 days (default: 86400 = 24h)
```
**wrangler.toml — add the KV namespace:**
```toml
[[kv_namespaces]]
binding = "SESSIONS"
id = "your-kv-namespace-id"
```
<h3 align="center">Secrets and Environment Variables</h3>
Access encrypted secrets and plaintext variables from inside server functions.
**Secrets** are set via `wrangler secret put` or the Cloudflare dashboard — encrypted at rest, never in `wrangler.toml`:
```rust
#[server]
pub async fn verify_token(token: String) -> Result<bool, ServerFnError> {
    let expected = cf::secret("API_TOKEN")?.to_string();
    Ok(token == expected)
}
```
**Variables** are set in the `[vars]` section of `wrangler.toml` — plaintext, visible in source:
```rust
#[server]
pub async fn get_environment() -> Result<String, ServerFnError> {
    Ok(cf::var("ENVIRONMENT")?.to_string())
}
```
```toml
# wrangler.toml
[vars]
ENVIRONMENT = "production"
```
<h3 align="center">Workers AI</h3>
Run AI inference from server functions using Cloudflare's built-in Workers AI models.
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct AiInput { messages: Vec<AiMessage> }

#[derive(Serialize)]
struct AiMessage { role: String, content: String }

#[derive(Deserialize)]
struct AiOutput { response: Option<String> }

#[server]
pub async fn generate(prompt: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    let ai = cf::ai("AI")?;
    let resp: AiOutput = ai.run("@cf/meta/llama-3.1-8b-instruct", AiInput {
        messages: vec![AiMessage { role: "user".into(), content: prompt }],
    }).await.cf()?;
    Ok(resp.response.unwrap_or_default())
}
```
```toml
# wrangler.toml
[ai]
binding = "AI"
```
Any model listed in the [Workers AI catalog](https://developers.cloudflare.com/workers-ai/models/) can be used — text generation, embeddings, image generation, etc. Define typed input/output structs matching the model's API — `serde_json::Value` does **not** work correctly through `serde_wasm_bindgen`.
<h3 align="center">Service Bindings</h3>
Call other Workers from server functions. The target Worker must be deployed separately and bound in `wrangler.toml`.
```rust
#[server]
pub async fn call_auth(token: String) -> Result<String, ServerFnError> {
    use dioxus_cloudflare::prelude::*;
    let auth = cf::service("AUTH")?;
    let resp = auth.fetch("https://fake-host/verify", None).await.cf()?;
    Ok(resp.text().await.cf()?)
}
```
```toml
# wrangler.toml
[[services]]
binding = "AUTH"
service = "auth-worker"
```
The URL host is ignored — the request goes directly to the bound Worker. Use any placeholder host.
<h3 align="center">SSR (Server-Side Rendering)</h3>
Render Dioxus components to HTML at the edge. Requires the `ssr` feature.
When the Axum router returns 404 and the request accepts `text/html`, the handler renders your app component and returns the HTML. Non-HTML requests (JS, CSS, WASM, JSON) pass through normally.
**Minimal SSR (default HTML shell, no client JS):**
```rust
Handler::new()
    .with_ssr(App)
    .handle(req, env)
    .await
```
**SSR with custom index.html (SPA takeover after first paint):**
```rust
Handler::new()
    .with_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await
```
The custom `index.html` must contain an element with `id="main"` — rendered component output is inserted at that point.
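A minimal custom shell might look like this — a hypothetical file, not one shipped by the crate. Only the `id="main"` mount point is required; the asset paths are placeholders for whatever `dx build` emits for your app:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>My App</title>
    <link rel="stylesheet" href="/assets/main.css" />
  </head>
  <body>
    <!-- rendered component output is inserted here -->
    <div id="main"></div>
    <script type="module" src="/assets/app.js"></script>
  </body>
</html>
```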
Suspense is supported: `wait_for_suspense()` resolves server futures during SSR, so components that call `#[server]` functions via `use_server_future` will have their data ready in the initial HTML.
<h3 align="center">SSR with Hydration</h3>
SSR always renders with hydration markers (`data-node-hydration` attributes) and injects serialized hydration data. When the client WASM is built with `hydrate(true)`, it reuses the server-rendered DOM instead of re-rendering — providing instant first paint with no flash.
**Worker (server):**
```rust
Handler::new()
    .with_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await
```
**Client WASM (must render the same component):**
The client must enable the `fullstack` feature on `dioxus` (which activates `dioxus-web/hydrate`):
```toml
# Cargo.toml
[dependencies]
dioxus = { version = "=0.7.3", features = ["web", "fullstack"] }
```
```rust
fn main() {
    dioxus::launch(App);
}
```
**Important:** Do not use `?` on `use_server_future` in hydrated components. The `?` operator suspends the component if the resource isn't immediately ready, which creates a VirtualDom/DOM tree mismatch and crashes the hydration walker. Instead, match on the `Result`:
```rust
#[component]
fn App() -> Element {
    let data_text = match use_server_future(get_data) {
        Ok(resource) => match &*resource.read() {
            Some(Ok(s)) => s.clone(),
            Some(Err(e)) => format!("Error: {e}"),
            None => "Loading...".into(),
        },
        Err(_) => "Loading...".into(),
    };
    rsx! { p { "{data_text}" } }
}
```
**Build order:**
1. `dx build --release` — builds client WASM + `index.html`
2. `dioxus-cf-build --release -p your-worker` — builds worker WASM, runs wasm-bindgen, generates shim (see [Build Tool](#build-tool))
<h3 align="center">Streaming SSR</h3>
Send the initial HTML immediately with suspense fallbacks as placeholders, then stream resolved content out-of-order via `ReadableStream` as each suspense boundary completes. Fast data renders instantly; slow data streams in later. Requires the `ssr` feature.
```rust
Handler::new()
    .with_streaming_ssr(App)
    .with_index_html(include_str!("path/to/index.html"))?
    .handle(req, env)
    .await
```
If no suspense boundaries are pending after the initial render, streaming SSR automatically falls back to a single-shot response with no overhead — you can always use `with_streaming_ssr` without penalty.
The client-side JavaScript (`window.dx_hydrate`) swaps suspense placeholders with resolved content as chunks arrive. This is the same mechanism used by upstream Dioxus streaming SSR.
<h3 align="center">WebSocket Support</h3>
Real-time WebSocket connections via Durable Objects. The worker upgrades the request and forwards it to a DO, which creates the `WebSocketPair` and handles messages.
**Worker entry point — route WebSocket upgrades to a Durable Object:**
```rust
Handler::new()
    .websocket("/ws", |req| async move {
        let ns = cf::durable_object("WS_DO")?;
        let path = req.path();
        let room = path.strip_prefix("/ws/").unwrap_or("default");
        let id = ns.id_from_name(room).cf()?;
        let stub = id.get_stub().cf()?;
        Ok(stub.fetch_with_request(req).await.cf()?)
    })
    .handle(req, env)
    .await
```
**Durable Object — accept the socket and handle messages:**
```rust
use worker::*;
use dioxus_cloudflare::prelude::*;

#[durable_object]
pub struct EchoDo {
    state: State,
    env: Env,
}

impl DurableObject for EchoDo {
    fn new(state: State, env: Env) -> Self {
        Self { state, env }
    }

    async fn fetch(&self, _req: Request) -> Result<Response> {
        let (server, resp) = cf::websocket_upgrade()?;
        self.state.accept_web_socket(&server);
        Ok(resp)
    }

    async fn websocket_message(&self, ws: WebSocket, message: WebSocketIncomingMessage) -> Result<()> {
        match message {
            WebSocketIncomingMessage::String(text) => ws.send_with_str(&format!("echo: {text}"))?,
            WebSocketIncomingMessage::Binary(bytes) => ws.send_with_bytes(&bytes)?,
        }
        Ok(())
    }

    async fn websocket_close(&self, _ws: WebSocket, _code: usize, _reason: String, _was_clean: bool) -> Result<()> {
        Ok(())
    }
}
```
**wrangler.toml — bind the DO and route WebSocket paths:**
```toml
[durable_objects]
bindings = [
    { name = "WS_DO", class_name = "EchoDo" }
]

[[migrations]]
tag = "v1"
new_sqlite_classes = ["EchoDo"]

[assets]
run_worker_first = ["/api/*", "/ws/*"]
```
<h3 align="center">Server Function (Shared Crate)</h3>
```rust
use dioxus::prelude::*;
use dioxus_cloudflare::prelude::*;
#[server]
pub async fn create_order(items: Vec<Item>) -> Result<Order, ServerFnError> {
    // Compute the order total (assumes `Item` exposes a numeric `price` field)
    let total: f64 = items.iter().map(|i| i.price).sum();
    let db = cf::d1("DB")?;
    db.prepare("INSERT INTO orders (items, total) VALUES (?, ?)")
        .bind(&[serde_json::to_string(&items)?.into(), total.into()])?
        .run()
        .await
        .cf()?;
    Ok(Order { items, total, status: "confirmed".into() })
}
```
<h3 align="center">Client Component</h3>
```rust
use dioxus::prelude::*;
use shared::server_fns::create_order;

#[component]
fn OrderButton(items: Vec<Item>) -> Element {
    let order = use_resource(move || {
        let items = items.clone();
        async move { create_order(items).await }
    });
    match &*order.read() {
        Some(Ok(o)) => rsx! { p { "Order confirmed: {o.status}" } },
        Some(Err(e)) => rsx! { p { "Error: {e}" } },
        None => rsx! { p { "Placing order..." } },
    }
}
```
<h2 align="center">Crate Structure</h2>
| File | Responsibility | Key exports |
| --- | --- | --- |
| `lib.rs` | Public API surface | `cf` module, `handle()`, `Handler` |
| `bindings.rs` | Typed CF binding shorthands | `cf::d1()`, `cf::kv()`, `cf::r2()`, `cf::durable_object()`, `cf::queue()`, `cf::ai()`, `cf::service()`, `cf::secret()`, `cf::var()` |
| `context.rs` | Thread-local `Env` + `Request` storage | `cf::env()`, `cf::req()`, `set_context()` |
| `handler.rs` | Worker↔Axum bridge + `Handler` builder | `handle()`, `Handler::new()`, `.before()`, `.after()`, `.websocket()` |
| `cookie.rs` | Cookie read/write helpers | `cf::cookie()`, `cf::cookies()`, `cf::set_cookie()`, `cf::set_cookie_with()`, `cf::clear_cookie()` |
| `session.rs` | Session middleware (KV or D1 backend) | `cf::session()`, `SessionConfig`, `Session` |
| `error.rs` | Error bridge to `ServerFnError` | `CfError`, `CfResultExt` (`.cf()` method) |
| `ssr.rs` | SSR rendering + hydration data extraction | `with_ssr()`, `with_streaming_ssr()`, `with_index_html()` |
| `streaming.rs` | Out-of-order streaming SSR internals | `MountPath`, `Mount`, `PendingSuspenseBoundary` |
| `websocket.rs` | WebSocket helpers for Durable Objects | `cf::websocket_upgrade()`, `cf::websocket_pair()` |
| `prelude.rs` | Convenience re-exports | `use dioxus_cloudflare::prelude::*` |
<h2 align="center">Feature Matrix</h2>
<h3 align="center">Cloudflare Bindings</h3>
| Binding | Shorthand | Example |
| --- | --- | --- |
| D1 | `cf::d1(name)` | `cf::d1("DB")?.prepare("SELECT ...").first::<T>(None).await.cf()?` |
| Workers KV | `cf::kv(name)` | `cf::kv("KV")?.get("key").text().await.cf()?` |
| R2 | `cf::r2(name)` | `cf::r2("BUCKET")?.put("key", data).execute().await.cf()?` |
| Durable Objects | `cf::durable_object(name)` | `cf::durable_object("DO")?.id_from_name("room").cf()?` |
| Queues | `cf::queue(name)` | `cf::queue("Q")?.send(msg).await.cf()?` (requires `queue` feature) |
| Workers AI | `cf::ai(name)` | `cf::ai("AI")?.run("@cf/meta/llama-3.1-8b-instruct", input).await.cf()?` |
| Service Bindings | `cf::service(name)` | `cf::service("AUTH")?.fetch(url, None).await.cf()?` |
| Secrets | `cf::secret(name)` | `cf::secret("API_KEY")?.to_string()` |
| Variables | `cf::var(name)` | `cf::var("ENVIRONMENT")?.to_string()` |
<h3 align="center">Request & Response</h3>
| Capability | API | Notes |
| --- | --- | --- |
| Raw request | `cf::req()` | Access headers, IP, method from inside server functions |
| Read cookie | `cf::cookie(name)` | Parse a named cookie from the `Cookie` header |
| Read all cookies | `cf::cookies()` | Parse all cookies into a `HashMap` |
| Set cookie | `cf::set_cookie(name, value, max_age)` | Queue a `Set-Cookie` header (HttpOnly, Secure, SameSite=Strict) |
| Custom cookie | `cf::set_cookie_with(name, value)` | Builder pattern for custom `Domain`, `Path`, `SameSite`, etc. |
| Clear cookie | `cf::clear_cookie(name)` | Queue a cookie-clear header |
| Streaming | Return `TextStream` / `ByteStream` / `JsonStream` | Streamed via `ReadableStream` — no buffering |
<h3 align="center">Server Features</h3>
| Feature | API | Notes |
| --- | --- | --- |
| Middleware | `Handler::new().before(f).after(f)` | Before hooks can short-circuit; after hooks modify all responses |
| Sessions | `Handler::new().session(SessionConfig::kv("KV"))` | KV or D1 backend, cookie-based session IDs, auto-flush |
| WebSockets | `Handler::new().websocket("/ws", handler)` | Routes upgrades to Durable Objects |
| SSR | `Handler::new().with_ssr(App)` | Single-shot server-side rendering (requires `ssr` feature) |
| Streaming SSR | `Handler::new().with_streaming_ssr(App)` | Out-of-order streaming with suspense (requires `ssr` feature) |
| Custom HTML | `.with_index_html(include_str!("index.html"))` | Use your own HTML shell (must contain `id="main"`) |
| Error bridge | `.cf()` on any `Result<T, worker::Error>` | Converts to `ServerFnError` automatically |
<h2 align="center">Optional Features</h2>
| Feature | Unlocks | Additional dependencies |
| --- | --- | --- |
| `queue` | `cf::queue()` binding (activates `worker/queue`) | None |
| `ssr` | `with_ssr()`, `with_streaming_ssr()`, `with_index_html()` | `dioxus-ssr`, `dioxus-history`, `futures-channel`, `wasm-bindgen-futures` |
<h2 align="center">Companion Crates</h2>
| Crate | Purpose | Install |
| --- | --- | --- |
| [`dioxus-cloudflare-build`](https://crates.io/crates/dioxus-cloudflare-build) | Build pipeline CLI: cargo build + wasm-bindgen + shim generation | `cargo install dioxus-cloudflare-build` |
| [`dioxus-server-cf`](https://github.com/JaffeSystems/dioxus-server-cf) | Patched `dioxus-server` 0.7.3 for wasm32 compatibility | `[patch.crates-io]` (see [Prerequisites](#prerequisites)) |
<h2 align="center">Dependencies</h2>
```toml
[dependencies]
dioxus = { version = "=0.7.3", features = ["fullstack"] }
dioxus-cloudflare = { version = "0.7", features = ["queue", "ssr"] }
worker = { version = "0.7", features = ["http", "d1"] }
wasm-bindgen = "0.2"
[patch.crates-io]
dioxus-server = { git = "https://github.com/JaffeSystems/dioxus-server-cf.git" }
```
<h2 align="center">Roadmap</h2>
- **Template project / `cargo generate`** — scaffolding for new projects with wrangler config, shared/web/worker crates, and build scripts
- **Remove `dioxus-server` patch requirement** — upstream the wasm32 `cfg`-gating to Dioxus core
<h2 align="center">License</h2>
Copyright (C) 2026-2027 Jaffe Systems
This project is licensed under the **GNU Affero General Public License v3.0 (AGPL-3.0)**.
If you use this software in a network service (SaaS, web application, API, etc.), you must make the complete source code of your application available to its users under the AGPL-3.0. This includes any modifications and derivative works.
**Commercial License**: If you need to use this software in a proprietary or closed-source application without the AGPL-3.0 obligations, a commercial license is available. See [COMMERCIAL-LICENSE.md](COMMERCIAL-LICENSE.md) for details.
| Use case | License | Must share source? |
| --- | --- | --- |
| Open-source project | AGPL-3.0 (free) | Yes |
| Internal tools (not served to users) | AGPL-3.0 (free) | No |
| Proprietary SaaS / closed-source | Commercial (paid) | No |