# connectrpc-workers
[crates.io](https://crates.io/crates/connectrpc-workers)
[CI](https://github.com/connyay/connectrpc-workers/actions)
[docs.rs](https://docs.rs/connectrpc-workers)
[ConnectRPC] `ClientTransport` implementations backed by the Cloudflare
Workers fetch APIs.
[ConnectRPC]: https://crates.io/crates/connectrpc
## What's in here
Two transports. Both implement `connectrpc::client::ClientTransport`, so
generated `FooServiceClient<T>` structs work without any extra glue:
- `FetcherTransport` wraps a [`worker::Fetcher`] (a `[[services]]`
binding). Use this for inter-service calls within the same Cloudflare
zone. The runtime short-circuits the request, so there's no DNS
lookup, no TLS handshake, and no trip out to the public internet.
- `FetchTransport` wraps the global [`worker::Fetch`] for arbitrary
`http://` / `https://` URLs.
[`worker::Fetcher`]: https://docs.rs/worker/latest/worker/struct.Fetcher.html
[`worker::Fetch`]: https://docs.rs/worker/latest/worker/enum.Fetch.html
## Usage
```rust
use connectrpc::client::ClientConfig;
use connectrpc::Protocol;
use connectrpc_workers::FetcherTransport;
use worker::*;

// `EchoServiceClient` and `EchoRequest` are generated by `connectrpc-build`
// from your `.proto`.
use my_proto::echo::v1::{EchoRequest, EchoServiceClient};

#[event(fetch, respond_with_errors)]
async fn fetch(_req: HttpRequest, env: Env, _ctx: Context) -> worker::Result<HttpResponse> {
    let transport = FetcherTransport::new(env.service("ECHO")?);
    let config = ClientConfig::new("http://echo/".parse().unwrap())
        .protocol(Protocol::Connect);
    let echo = EchoServiceClient::new(transport, config);
    let resp = echo
        .echo(EchoRequest { message: "hi".into(), ..Default::default() })
        .await?;
    // ... turn `resp` into an `HttpResponse`
}
```
The base URI's authority doesn't matter for service-binding fetches,
since the runtime routes via the binding name rather than DNS. ConnectRPC
still wants something that parses as a URI for path construction, so use
a sentinel like `http://<binding-name>/` to keep your logs readable.
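To make that concrete, a unary call just appends `/{package.Service}/{Method}` to the base URI; the authority is never resolved for a service-binding fetch. The helper below is purely illustrative — it is not the crate's internals:

```rust
// Illustrative sketch: how a Connect unary URL is assembled from the
// sentinel base URI. The authority ("echo") is routed on the binding name,
// not DNS, so only the path portion is meaningful.
fn rpc_url(base: &str, service: &str, method: &str) -> String {
    format!("{}/{}/{}", base.trim_end_matches('/'), service, method)
}

fn main() {
    let url = rpc_url("http://echo/", "echo.v1.EchoService", "Echo");
    println!("{url}"); // prints http://echo/echo.v1.EchoService/Echo
}
```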
## Why `Send + Sync + 'static`?
`ClientTransport` requires `Send + Sync + 'static` on the type and a
`Send + 'static` future. Workers' fetch is `!Send` (everything in
JS-land is `!Send`). The crate uses `worker::send::SendFuture` and
`worker::send::SendWrapper` to satisfy the bound. workers-rs ships
these specifically because the Workers isolate is single-threaded, so
nothing is ever actually moved across threads.
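The idea behind those wrappers can be reduced to a few lines of plain Rust. This is a minimal sketch of the concept, not the workers-rs implementation — `AssertSend` and `requires_send_sync` are hypothetical names:

```rust
use std::rc::Rc; // Rc is !Send, like everything that touches JS in a Worker

// Minimal sketch of the SendWrapper idea: unsafely assert Send + Sync for a
// value that, in a single-threaded isolate, never actually crosses threads.
struct AssertSend<T>(T);
unsafe impl<T> Send for AssertSend<T> {}
unsafe impl<T> Sync for AssertSend<T> {}

// A bound like ClientTransport's, which a bare Rc could never satisfy.
fn requires_send_sync<T: Send + Sync>(_: &T) {}

fn main() {
    let wrapped = AssertSend(Rc::new("never leaves this thread"));
    requires_send_sync(&wrapped); // compiles thanks to the unsafe impls
    println!("{}", wrapped.0);
}
```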
## Caveats
- Stick to `Protocol::Connect` (or `Protocol::GrpcWeb`). Workers fetch
subrequests don't expose raw HTTP/2, so native gRPC's HTTP/2 trailers
get dropped. Connect over HTTP/1.1 and gRPC-Web (trailers encoded in
the body) both work fine.
- Each call counts as one Workers subrequest against the per-request
limit (50 on the free plan, 1,000 on paid).
- Server-streaming and client-streaming should work. Bidi over a single
fetch isn't exercised in the example repo, so verify it yourself
before depending on it.
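For context on why gRPC-Web survives without HTTP/2: the gRPC-Web protocol carries trailers as a final length-prefixed body frame whose flag byte has the most-significant bit (0x80) set. A minimal sketch of that framing — illustrative only, the real encoding lives upstream in `connectrpc`:

```rust
// Sketch of gRPC-Web trailer framing: a 1-byte flag with the MSB set marks
// a trailers frame, followed by a 4-byte big-endian length and the trailer
// block itself. Because trailers ride in the body, no HTTP/2 is required.
fn grpc_web_trailer_frame(trailers: &str) -> Vec<u8> {
    let body = trailers.as_bytes();
    let mut frame = Vec::with_capacity(5 + body.len());
    frame.push(0x80); // trailer flag (uncompressed)
    frame.extend_from_slice(&(body.len() as u32).to_be_bytes());
    frame.extend_from_slice(body);
    frame
}

fn main() {
    let frame = grpc_web_trailer_frame("grpc-status: 0\r\n");
    println!("flag={:#04x} len={}", frame[0], frame.len()); // flag=0x80 len=21
}
```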
## End-to-end example
[`examples/multi/`](examples/multi/) ships two Rust workers (a gateway
and an echo backend) wired together over a service binding using
`FetcherTransport`. It includes a vitest + miniflare integration suite
that boots both compiled wasm bundles and asserts the inter-service
hop actually happened. The example crates depend on this library by
path, so changes to `src/lib.rs` are exercised end-to-end by running
`npm test` in `examples/multi/integration-tests/`.
## Versioning
This crate tracks two upstream 0.x deps (`connectrpc` and `worker`).
Expect a minor bump here whenever either bumps in a way that breaks
the transport surface, and patch releases for everything else. Pre-1.0
the API itself may evolve with feedback. See `ChangeLog.md`.
## License
MIT, see `LICENSE`.