# Noxy

*The darkness your packets pass through.*

A TLS man-in-the-middle proxy with a pluggable HTTP middleware pipeline. Built on tower, noxy gives you full access to the decoded HTTP requests and responses flowing through the proxy using standard tower `Service` and `Layer` abstractions -- including all existing tower-http middleware out of the box.
## Features
- Tower middleware pipeline -- plug in any tower `Layer` or `Service` to inspect and modify HTTP traffic. Works with tower-http layers (compression, tracing, CORS, etc.) and your own custom services.
- Built-in middleware -- traffic logging, latency injection, bandwidth throttling, fault injection, mock responses, and TypeScript scripting
- Conditional rules -- apply middleware only to requests matching a path or path prefix
- TOML config file -- configure the proxy and middleware rules declaratively
- Per-host certificate generation on the fly, signed by a user-provided CA
- HTTP/1.1 and HTTP/2 support (auto-negotiated via ALPN)
- Streaming bodies -- middleware can process data as it arrives without buffering
- Async I/O with Tokio and Hyper
## Library Usage
```rust
use noxy::Proxy;
// tower-http's SetResponseHeaderLayer adds a header to every response;
// the layer, CA file names, and exact builder signatures here are illustrative
use tower_http::set_header::SetResponseHeaderLayer;

let proxy = Proxy::builder()
    .ca_pem_files("ca-cert.pem", "ca-key.pem")?
    .http_layer(SetResponseHeaderLayer::overriding(
        http::header::HeaderName::from_static("x-intercepted"),
        http::HeaderValue::from_static("true"),
    ))
    .build();

proxy.listen().await?;
```
Any tower `Layer<HttpService>` works. The innermost service forwards requests to the upstream server; your layers wrap around it in an onion model and can inspect or modify requests before forwarding and responses after.
## Installation

### Pre-built binaries
Download a pre-built binary from the latest release:
| Platform | Architecture | Download |
|---|---|---|
| Linux (glibc) | x86_64 | noxy-x86_64-unknown-linux-gnu.tar.gz |
| Linux (glibc) | aarch64 | noxy-aarch64-unknown-linux-gnu.tar.gz |
| Linux (musl) | x86_64 | noxy-x86_64-unknown-linux-musl.tar.gz |
| Linux (musl) | aarch64 | noxy-aarch64-unknown-linux-musl.tar.gz |
| macOS | Apple Silicon | noxy-aarch64-apple-darwin.tar.gz |
```sh
# Example: install on Linux x86_64 (replace <release-url> with the actual release URL)
curl -LO <release-url>/noxy-x86_64-unknown-linux-gnu.tar.gz
tar -xzf noxy-x86_64-unknown-linux-gnu.tar.gz
sudo install noxy /usr/local/bin/
```
### Cargo
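Assuming the crate is published on crates.io under the same name (an assumption -- check the repository's own instructions), the binary can be installed with Cargo:

```shell
cargo install noxy
```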
## Quick Start

### 1. Generate a CA certificate
Or with OpenSSL:
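As a sketch, a throwaway development CA can be created with a single `openssl req` invocation (the file names match the rest of this README; the subject is arbitrary):

```shell
# Create a self-signed CA certificate and private key, valid for one year
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -subj "/CN=noxy dev CA" \
  -keyout ca-key.pem -out ca-cert.pem
```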
### 2. Run the proxy
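The flag names below are illustrative (see the CLI section for the proxy's options); the proxy is pointed at the CA generated in step 1:

```shell
noxy --ca-cert ca-cert.pem --ca-key ca-key.pem --listen 127.0.0.1:8080
```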
### 3. Make a request through the proxy
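With curl, traffic can be routed through the proxy using its standard `--proxy` and `--cacert` flags, so curl trusts the certificates noxy generates:

```shell
curl --proxy http://127.0.0.1:8080 --cacert ca-cert.pem https://example.com/
```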
### Trusting the CA system-wide
Instead of passing --cacert every time, you can install ca-cert.pem into your OS or browser trust store. This lets any application use the proxy transparently.
Important: Only do this in development/testing environments. Remove the CA when you're done.
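As a sketch, on Debian-based Linux the CA can be installed into (and later removed from) the system trust store with `update-ca-certificates`; on macOS, with the `security` tool:

```shell
# Debian/Ubuntu: install the CA into the system trust store
sudo cp ca-cert.pem /usr/local/share/ca-certificates/noxy-ca.crt
sudo update-ca-certificates
# Remove later with:
#   sudo rm /usr/local/share/ca-certificates/noxy-ca.crt && sudo update-ca-certificates

# macOS: add the CA to the system keychain as a trusted root
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain ca-cert.pem
```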
## CLI
The CLI provides flags for common middleware without needing a config file. The exact flag spellings below are illustrative; run the binary with `--help` for the authoritative list.

```sh
# Log all traffic
noxy --log

# Log traffic including request/response bodies
noxy --log-bodies

# Add 200ms latency to every request
noxy --latency 200ms

# Add random latency between 100ms and 500ms
noxy --latency 100ms..500ms

# Limit bandwidth to 10 KB/s
noxy --bandwidth 10240

# Combine multiple flags
noxy --log --latency 200ms --bandwidth 10240

# Accept invalid upstream certificates (e.g. self-signed)
noxy --accept-invalid-upstream-certs

# Custom listen address and CA paths
noxy --listen 127.0.0.1:9090 --ca-cert ca-cert.pem --ca-key ca-key.pem
```
## Config File
For conditional rules and more complex setups, use a TOML config file.
CLI flags override config file settings for global options (listen address, CA paths, etc.) and append additional unconditional rules.
### Example config

The key names in the `[ca]` section are reconstructed; the rule fields match the table below.

```toml
listen = "127.0.0.1:8080"

[ca]
cert = "ca-cert.pem"
key = "ca-key.pem"
# accept_invalid_upstream_certs = true

# Log all traffic
[[rules]]
log = true

# Log with request/response bodies
# [[rules]]
# log = { bodies = true }

# Add 200ms latency to API requests
[[rules]]
match = { path_prefix = "/api" }
latency = "200ms"

# Simulate slow downloads with random latency and bandwidth limit
[[rules]]
match = { path_prefix = "/downloads" }
latency = "50ms..200ms"
bandwidth = 10240

# Inject faults on a specific endpoint
[[rules]]
match = { path = "/flaky" }
fault = { error_rate = 0.5, abort_rate = 0.02 }

# Mock a health check endpoint
[[rules]]
match = { path = "/health" }
respond = { body = "ok" }

# Return 503 for all paths under /fail
[[rules]]
match = { path_prefix = "/fail" }
respond = { status = 503, body = "service unavailable" }
```
### Rules

Each rule has an optional match condition and one or more middleware configs. Rules without a match apply to all requests.

| Field | Description |
|---|---|
| `match` | `{ path = "/exact" }` or `{ path_prefix = "/prefix" }` |
| `log` | `true` or `{ bodies = true }` |
| `latency` | `"200ms"`, `"1s"`, or `"100ms..500ms"` for a random range |
| `bandwidth` | Bytes per second throughput limit |
| `fault` | `{ error_rate = 0.5, abort_rate = 0.02, error_status = 503 }` |
| `respond` | `{ body = "ok", status = 200 }` -- returns a fixed response without forwarding upstream |
## Scripting Middleware

Write request/response manipulation logic in TypeScript or JavaScript. Scripts run in an embedded V8 engine via deno_core. Requires the `scripting` feature.
```rust
use noxy::Proxy;
use noxy::ScriptLayer; // import path assumed

let proxy = Proxy::builder()
    .ca_pem_files("ca-cert.pem", "ca-key.pem")? // CA paths illustrative
    .http_layer(ScriptLayer::from_file("middleware.ts")?)
    .build();
```
The script exports a default async function that receives the request and a `respond` function to forward it upstream:
```typescript
// middleware.ts
export default async function(req: Request, respond: Function) {
  // Add a header before forwarding
  req.headers.set("x-proxy", "noxy");

  // Forward to upstream
  const res = await respond(req);

  // Modify the response
  res.headers.set("x-intercepted", "true");
  return res;
}
```
Short-circuit responses without forwarding upstream:
```typescript
export default async function(req: Request, respond: Function) {
  if (req.url === "/health") {
    return new Response("ok", { status: 200 });
  }
  return await respond(req);
}
```
Read request or response bodies (lazy -- only buffered if you call `body()`):

```typescript
export default async function(req: Request, respond: Function) {
  const body = await req.body(); // Uint8Array
  console.log("Request size:", body.length);

  const res = await respond(req);
  const resBody = await res.body(); // Uint8Array

  return new Response(resBody, {
    status: res.status,
    headers: res.headers,
  });
}
```
By default, each connection gets its own V8 isolate, so global state in the script (like variables declared outside the handler) is scoped per connection. Use `.shared()` to reuse a single isolate across all connections:
```rust
// Per-connection (default) -- each connection gets a fresh isolate
ScriptLayer::from_file("middleware.ts")?

// Shared -- one isolate for all connections, global state is shared
ScriptLayer::from_file("middleware.ts")?.shared()
```
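For example, a script that counts requests in a module-level variable behaves differently in the two modes: with the default it counts per connection, while with `.shared()` the count spans the whole proxy. The handler shape matches the examples above; the header name is arbitrary.

```typescript
// counter.ts -- module-level state; per-connection by default, global with .shared()
let count = 0;

export default async function handler(req: Request, respond: Function) {
  count += 1;
  // Tag the outgoing request with how many requests this isolate has seen
  req.headers.set("x-request-count", String(count));
  return await respond(req);
}
```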
## How It Works
Normal HTTPS creates an encrypted tunnel between client and server -- nobody in the middle can read the traffic. Noxy breaks that tunnel into two separate TLS sessions and sits in between, with your middleware pipeline processing decoded HTTP traffic.
### The flow
```text
Client (curl)              Noxy                    Server (example.com)
    |                        |                               |
    |--- CONNECT host ------>|                               |
    |<-- 200 OK -------------|                               |
    |                        |--- TLS handshake ------------>|
    |                        |    (real cert verified)       |
    |                        |                               |
    |    Noxy generates a    |                               |
    |    fake cert for       |                               |
    |    "example.com"       |                               |
    |    signed by our CA    |                               |
    |                        |                               |
    |<-- TLS handshake ----->|                               |
    |    (fake cert)         |                               |
    |                        |                               |
    |== TLS session 1 =======|======== TLS session 2 ========|
    |                        |                               |
    |  "GET / HTTP/1.1"      |  tower middleware pipeline    |
    |--- encrypted --------->|  [Layer] -> [Layer] -> upstream
    |                        |                               |
    |                        |  response + layers            |
    |<--- re-encrypted ------|  upstream -> [Layer] -> [Layer]
```
### Step by step

1. HTTP CONNECT -- the client sends an unencrypted `CONNECT example.com:443` request to the proxy. The proxy learns the target hostname from this plaintext request.
2. Upstream TLS -- Noxy opens a real TLS connection to `example.com`, verifying the server's authentic certificate against Mozilla's root CAs.
3. Fake certificate generation -- Noxy generates a TLS certificate for `example.com` signed by the user-provided CA, created on the fly per host.
4. Client TLS -- Noxy performs a TLS handshake with the client using the fake certificate. The client accepts it because it trusts the CA.
5. HTTP relay with middleware -- with both TLS sessions established, Hyper handles the HTTP connection on both sides. Each request from the client passes through your tower middleware pipeline before being forwarded upstream, and each response passes back through the pipeline before being sent to the client.
## License
MIT