# containerflare

containerflare lets you run Axum inside Cloudflare Containers without re-implementing the platform glue. It exposes a tiny runtime that:
- boots an Axum router on the container’s loopback listener
- forwards Cloudflare request metadata into your handlers
- keeps a command channel open so you can reach host-managed capabilities (KV, D1, Queues, etc.)
The result feels like developing any other Axum app—only now it runs next to your Worker.
## Highlights

- Axum-first runtime – bring your own router, tower layers, extractors, etc.
- Cloudflare metadata bridge – request ID, colo/region/country, client IP, worker name, and URLs are injected via `ContainerContext`.
- Command channel client – talk JSON-over-STDIO (default), TCP, or Unix sockets to the host; the IPC layer now ships as the standalone `containerflare-command` crate for direct use.
- Production-ready example – `examples/basic` demonstrates a full Worker + Durable Object + container deployment using Wrangler v4.
## Installation

The crate targets Rust 1.90+ (edition 2024).
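Add the crate to your `Cargo.toml` (the version below is a placeholder; check crates.io for the current release):

```toml
[dependencies]
containerflare = "0.1" # placeholder version
```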
## Quick start
A minimal router looks like any other Axum app (a sketch: module paths and the `serve` entry point are assumptions; see the crate docs for the real API):

```rust
use axum::{routing::get, Router};
use containerflare::ContainerContext;

// Sketch only – the handler echoes the Cloudflare request metadata.
async fn index(ctx: ContainerContext) -> String {
    format!("{:?}", ctx.metadata())
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(index));
    // Assumed entry point; binds 0.0.0.0:8787 by default.
    containerflare::serve(app).await.unwrap();
}
```
- `ContainerContext` is injected via Axum’s extractor system.
- `RequestMetadata` contains everything Cloudflare knows about the request (worker name, colo, region, `cf-ray`, client IP, method/path/url, etc.).
- `ContainerContext::command_client()` provides the low-level JSON command channel; call `invoke` whenever Cloudflare documents a capability.
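For example, a hedged sketch of a capability call (the command name, payload shape, and `invoke` signature here are assumptions, not documented API):

```rust
use containerflare::ContainerContext;
use serde_json::json;

// Hypothetical command: "kv.get" and its payload are placeholders until
// Cloudflare documents the real capability names.
async fn read_flag(ctx: ContainerContext) -> String {
    let response = ctx
        .command_client()
        .invoke("kv.get", json!({ "namespace": "settings", "key": "flag" }))
        .await
        .expect("command channel unavailable");
    format!("{response:?}")
}
```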
Run the binary inside your container image. Cloudflare will proxy HTTP traffic from the Worker/Durable Object to the listener bound by containerflare (defaults to `0.0.0.0:8787`). Override `CF_CONTAINER_ADDR`/`CF_CONTAINER_PORT` if you need something else locally. Use `CF_CMD_ENDPOINT` when pointing the command client at a TCP or Unix socket shim.
## Standalone command crate

If you only need access to the host-managed command bus (KV, R2, Queues, etc.), depend on `containerflare-command` directly (the version shown below is a placeholder):
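```toml
[dependencies]
containerflare-command = "0.1" # placeholder version
```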
It exposes `CommandClient`, `CommandRequest`, `CommandResponse`, and the `CommandEndpoint` parsers without pulling in the runtime/router pieces.
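A sketch of direct use (the type names come from the list above, but `connect`, `invoke`, and the `"stdio"` endpoint string are assumptions about the crate's surface):

```rust
use containerflare_command::{CommandClient, CommandEndpoint};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // CF_CMD_ENDPOINT points at a TCP or Unix socket shim; assume STDIO otherwise.
    let endpoint: CommandEndpoint = std::env::var("CF_CMD_ENDPOINT")
        .unwrap_or_else(|_| "stdio".into())
        .parse()?;
    // connect() and invoke() are assumed names, for illustration only.
    let client = CommandClient::connect(endpoint).await?;
    let response = client
        .invoke("queues.send", serde_json::json!({ "body": "ping" }))
        .await?;
    println!("{response:?}");
    Ok(())
}
```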
## Running locally

```bash
# build and run the example container (amd64)
# commands below are illustrative – see examples/basic for the scripted version
docker build --platform linux/amd64 -f examples/basic/Dockerfile -t containerflare-basic .
docker run --rm -p 8787:8787 containerflare-basic

# curl echoes the RequestMetadata JSON – easy proof the bridge works
curl http://localhost:8787/
```
## Deploying to Cloudflare Containers

The example’s `wrangler.toml` sets `image_build_context = "../.."`, so the Docker build sees the entire workspace (the example crate depends on this repo via `path = "../.."`). After deploying, Wrangler prints a workers.dev URL that proxies into your container.
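In practice that is a single Wrangler command from the example directory (a sketch; the example ships its own deployment scripts and docs):

```bash
cd examples/basic
npx wrangler deploy   # prints the workers.dev URL on success
```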
## Metadata bridge

The Worker shim (see `examples/basic/worker/index.js`) adds an `x-containerflare-metadata` header before proxying every request into the container. That JSON payload includes:
- request identifier (`cf-ray`)
- colo / region / country codes
- client IP
- worker name (derived from the `CONTAINERFLARE_WORKER` Wrangler variable)
- HTTP method, path, and full URL
On the Rust side you can read all of those fields via `ContainerContext::metadata()` (see `RequestMetadata` in `src/context.rs`). If you customize the Worker, keep writing this header so your Axum handlers continue to receive Cloudflare context.
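For instance, a handler can report where the request landed (the field names below are illustrative; check `RequestMetadata` for the real layout):

```rust
use containerflare::ContainerContext;

// Field names are placeholders, not the crate's actual struct layout.
async fn whoami(ctx: ContainerContext) -> String {
    let meta = ctx.metadata();
    format!("colo={} country={} ray={}", meta.colo, meta.country, meta.request_id)
}
```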
## Example project

`examples/basic` is a real Cargo crate that depends on containerflare via `path = "../.."`. It ships with:
- a Dockerfile that builds for `x86_64-unknown-linux-musl`
- a Worker/Durable Object that forwards metadata and proxies requests
- deployment scripts and docs for Wrangler v4
Use it as a template for your own containerized Workers.
## Platform expectations

- Cloudflare currently expects Containers to be built for the `linux/amd64` architecture, so we target `x86_64-unknown-linux-musl` by default. You could just as easily use a Debian/Ubuntu-based image; Alpine/musl simply keeps the image small.
- The runtime binds to `0.0.0.0:8787` so the Cloudflare sidecar (which connects from `10.0.0.1`) can reach your Axum listener. Override `CF_CONTAINER_ADDR`/`CF_CONTAINER_PORT` for custom setups.
- The `CommandClient` speaks JSON-over-STDIO for now. When Cloudflare documents additional transports we can add typed helpers on top of it.
Contributions are welcome—file issues or PRs with ideas!