
Crate http_handle



HTTP Handle

A fast and lightweight Rust library for handling HTTP requests and responses.



§Install

cargo add http-handle

Or add to Cargo.toml:

[dependencies]
http-handle = "0.0.5"

You need Rust 1.88.0 or later. Works on macOS, Linux, and Windows.


§Overview

HTTP Handle is a lightweight static-file HTTP server library. Start with Server::new for quick prototyping, graduate to ServerBuilder for policy configuration, and switch to the async perf_server modes (high-perf / high-perf-multi-thread) for production throughput. Single binary, no runtime dependencies beyond libc.

v0.0.5 highlights:

  • Static-file fast path: pre-serialised response cache + sendfile (Linux) hits 180 k req/s on Linux/arm64 (high-perf-mt mode) — see docs/PERFORMANCE.md.
  • HTTP/1.1 keep-alive on both the sync and async server paths.
  • Five start modes: start(), start_with_thread_pool(n), start_with_pooling(workers, max), start_with_graceful_shutdown(timeout), plus the async start_high_perf / start_high_perf_multi_thread.
  • HTTP/2 (h2c) with feature = "http2".
  • HTTP/3 ALPN routing + fallback chain with feature = "http3-profile" (QUIC termination is on the v0.2 roadmap; see docs/HTTP3_DESIGN.md).
  • Enterprise primitives: TLS / mTLS policy, API-key + JWT verifiers, RBAC + ABAC, hot-reload TOML config, multi-tenant isolation, distributed rate limiter, OTLP-shaped telemetry.

§Features

| Capability | Default? | Feature flag |
|---|---|---|
| Static file serving with auto MIME detection | yes | |
| ServerBuilder (CORS, custom headers, timeouts, rate limit, cache TTL) | yes | |
| HTTP/1.1 keep-alive (RFC 7230 §6.3) | yes | |
| ThreadPool + ConnectionPool for bounded resources | yes | |
| Graceful shutdown with configurable drain timeout | yes | |
| Directory traversal protection + path sanitisation | yes | |
| 64 MiB cap on buffered response bodies (OOM guard) | yes | |
| Sharded rate limiter (16-way) + ETag LRU cache | yes | |
| Precompressed asset negotiation (br / zstd / gzip) | yes | |
| LanguageDetector (built-in + custom regex patterns) | yes | |
| Async runtime helpers + async server (start_async) | opt-in | async |
| High-perf async server (semaphore-bounded, sendfile) | opt-in | high-perf |
| Multi-thread Tokio runtime entry point | opt-in | high-perf-multi-thread |
| Pre-serialised response cache | with high-perf | high-perf |
| Concurrent batch file reads | opt-in | batch |
| Pull-based chunked streaming (ChunkStream) | opt-in | streaming |
| Const MIME table + bitset language detection | opt-in | optimized |
| Structured tracing (tracing + subscriber) | opt-in | observability |
| HTTP/2 server (h2c) | opt-in | http2 |
| HTTP/3 ALPN profile + fallback chain | opt-in | http3-profile |
| Distributed rate limiter (pluggable backend) | opt-in | distributed-rate-limit |
| Multi-tenant config + scoped secrets | opt-in | multi-tenant |
| Host-profile-derived auto-tuning | opt-in | autotune |
| TLS / mTLS policy primitives | opt-in | enterprise (umbrella for tls, auth, config, observability) |
| API-key + JWT + RBAC/ABAC enforcement | opt-in | enterprise |
| TOML config + hot-reload watcher | opt-in | enterprise |

#![deny(unsafe_code)] at the crate root with three documented exceptions (libc::sendfile, two test-module env-var mutations).
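The directory traversal protection listed above is typically implemented by resolving the request path against the document root and rejecting any component that could escape it. The crate's actual implementation is not shown here; the following is a minimal stdlib-only sketch of the idea, with a hypothetical `sanitize` helper:

```rust
use std::path::{Component, Path, PathBuf};

/// Illustrative sketch (not the crate's internal code): resolve a
/// request path against a document root, rejecting `..`, absolute,
/// and prefix components so the result can never escape the root.
fn sanitize(root: &Path, request_path: &str) -> Option<PathBuf> {
    let mut out = PathBuf::from(root);
    for component in Path::new(request_path.trim_start_matches('/')).components() {
        match component {
            Component::Normal(part) => out.push(part),
            // Traversal or absolute components are refused outright.
            Component::ParentDir | Component::RootDir | Component::Prefix(_) => return None,
            Component::CurDir => {}
        }
    }
    Some(out)
}

fn main() {
    let root = Path::new("./public");
    assert_eq!(
        sanitize(root, "/css/app.css"),
        Some(PathBuf::from("./public/css/app.css"))
    );
    // `..` would escape the root, so the request is rejected.
    assert_eq!(sanitize(root, "/../etc/passwd"), None);
}
```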


§Usage

use http_handle::Server;

fn main() -> std::io::Result<()> {
    let server = Server::new("127.0.0.1:8080", "./public");
    server.start()
}

For a complete capability-to-example matrix (basic server, builder pattern, graceful shutdown, feature-gated modules like async / HTTP/2 / streaming / enterprise auth), see docs/EXAMPLES.md.


§Examples

30 one-word examples cover every public API and feature flag. Each is registered as a [[example]] target in Cargo.toml and follows a shared layout (animated spinner + checkmark output via examples/support.rs).

The friction-free way: scripts/example.sh knows the feature mapping, so you don’t need to remember which Cargo flag each demo needs.

./scripts/example.sh hello              # core demos: no features
./scripts/example.sh enterprise         # auto-attaches --features enterprise
./scripts/example.sh dhat --release     # extra cargo flags pass through
./scripts/example.sh --list             # print every name

cargo run --example all builds and drives every demo in sequence.

Core (no features required):

| Example | Command | What it shows |
|---|---|---|
| hello | cargo run --example hello | Minimal Server::new |
| builder | cargo run --example builder | ServerBuilder fluent API (CORS, headers, timeouts, validation) |
| request | cargo run --example request | Request::from_stream parse over a real TCP roundtrip |
| response | cargo run --example response | Response::send + set_connection_header |
| errors | cargo run --example errors | ServerError constructors |
| policies | cargo run --example policies | CORS / security headers / timeouts / rate-limit / cache TTL |
| pool | cargo run --example pool | ThreadPool and ConnectionPool bounded-resource semantics |
| shutdown | cargo run --example shutdown | ShutdownSignal lifecycle and graceful drain |
| keepalive | cargo run --example keepalive | HTTP/1.1 keep-alive over one TCP connection (5 GETs) |
| language | cargo run --example language | LanguageDetector built-in + custom regex patterns |

Per Cargo feature flag (one example per flag — copy-paste ready):

| Example | Command | What it shows |
|---|---|---|
| async | cargo run --features async --example async | run_blocking + start_async |
| batch | cargo run --features batch --example batch | Concurrent file reads with parallelism cap |
| streaming | cargo run --features streaming --example streaming | ChunkStream chunked file iteration |
| optimized | cargo run --features optimized --example optimized | Const MIME table + bitset language detection |
| observability | cargo run --features observability --example observability | Structured tracing via tracing-subscriber |
| http2 | cargo run --features http2 --example http2 | h2c server + framed body roundtrip |
| http3 | cargo run --features http3-profile --example http3 | ALPN routing + fallback chain |
| perf | cargo run --features high-perf --example perf | start_high_perf with PerfLimits |
| multi | cargo run --features high-perf-multi-thread --example multi | start_high_perf_multi_thread |
| autotune | cargo run --features autotune --example autotune | Host-profile-derived PerfLimits |
| ratelimit | cargo run --features distributed-rate-limit --example ratelimit | Distributed limiter + in-memory backend |
| tenant | cargo run --features multi-tenant --example tenant | Per-tenant config + scoped secrets |
| tls | cargo run --features enterprise --example tls | TLS / mTLS policy primitives |
| auth | cargo run --features enterprise --example auth | API-key + JWT verifiers |
| config | cargo run --features enterprise --example config | TOML config + hot-reload watcher |
| enterprise | cargo run --features enterprise --example enterprise | RBAC adapter + per-request enforcement |

Tooling / runners:

| Example | Command | What it does |
|---|---|---|
| full | cargo run --example full (or --all-features for full coverage) | Unified runner across every enabled feature |
| all | cargo run --example all | Spawns every other example via cargo run --example |
| bench | scripts/load_test.sh <mode> | Bombardier target — sync / async / high-perf / high-perf-mt / http2 |
| dhat | cargo run --release --features high-perf --example dhat | Heap-profile harness writing dhat-heap.json |

§FAQ

Short answers to the questions that come up most often. The long-form matrix lives in docs/FAQ.md.

Which start* should I use? Default to Server::start() for prototyping. For production: start_with_pooling(workers, max_conns) on the sync path, or start_high_perf_multi_thread(server, limits, None) on the async path. The async multi-thread mode is the throughput leader on every Linux row in docs/PERFORMANCE.md.

When does sync win, and when does async win? Sync’s thread-per-connection model is competitive on memory-rich hosts with zero per-request I/O wait. Async (high-perf / high-perf-mt) wins on memory-constrained hosts, mixed-I/O workloads (DB / upstream awaits), and any deployment where the OS thread cap matters.

Are the perf numbers realistic for my workload? They’re upper bounds for a small static body over loopback. The Linux row (180 k req/s on high-perf-mt) is more representative for production than the macOS row because it tests the real epoll networking path. Reproduce with scripts/load_test.sh (host) or scripts/linux_bench.sh (container).

Can I run this over HTTPS? Not in-process today — enterprise::TlsPolicy is configuration-only. Terminate TLS at a reverse proxy (nginx, Caddy, ALB) and proxy plaintext HTTP/1.1 or h2c to http-handle over loopback. In-process rustls termination is on the v0.1 roadmap.

What about HTTP/3? ALPN routing + fallback chain ship as feature = "http3-profile". QUIC termination + h3 frames are deferred to v0.2 (design proposal in docs/HTTP3_DESIGN.md).

Can I plug in my own request handler? Not yet — the crate is a static-file server with configurable policies. A request-handler trait is on the v0.1 roadmap. Today, run http-handle for /static/* and a dynamic framework (axum, actix-web) for the rest.

error: target 'enterprise' requires the features: 'enterprise' — how do I run feature-gated examples? Use the wrapper: ./scripts/example.sh <name>. It auto-resolves the right --features flag. Or copy the literal command from the §Examples tables above.

How does the response cache work? v0.0.5 added a pre-serialised (head_prefix, body) cache on the high-perf static-file fast path, keyed by (canonical_path, mtime, file_len), capped at 256 entries ~16 MiB total. Hits skip the syscall + format work; the per-request cost on a hit is one Connection: header format and one write_all. Above-threshold files (>= sendfile_threshold_bytes) skip the cache entirely so the OOM guard remains intact.
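As a mental model (hypothetical names and types, not the crate's API), the cache described above behaves like a bounded map from file identity to a pre-serialised response, where any change to the file produces a new key:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the pre-serialised response cache described
/// above: keyed by (canonical path, mtime, length) so a modified file
/// never serves stale bytes, and capped at a fixed entry count.
#[derive(Clone, PartialEq, Eq, Hash)]
struct CacheKey {
    canonical_path: String,
    mtime_secs: u64,
    file_len: u64,
}

struct ResponseCache {
    entries: HashMap<CacheKey, (Vec<u8>, Vec<u8>)>, // (head_prefix, body)
    cap: usize,
}

impl ResponseCache {
    fn new(cap: usize) -> Self {
        Self { entries: HashMap::new(), cap }
    }

    fn get(&self, key: &CacheKey) -> Option<&(Vec<u8>, Vec<u8>)> {
        self.entries.get(key)
    }

    /// Insert unless full; the real cache would also evict and enforce
    /// the ~16 MiB total-bytes budget described in the FAQ answer.
    fn insert(&mut self, key: CacheKey, head: Vec<u8>, body: Vec<u8>) {
        if self.entries.len() < self.cap {
            self.entries.insert(key, (head, body));
        }
    }
}

fn main() {
    let mut cache = ResponseCache::new(256);
    let key = CacheKey {
        canonical_path: "/srv/index.html".into(),
        mtime_secs: 1_700_000_000,
        file_len: 5,
    };
    cache.insert(key.clone(), b"HTTP/1.1 200 OK\r\n".to_vec(), b"hello".to_vec());
    // A hit skips the read + format work; only the Connection header
    // and the final write remain per request.
    assert!(cache.get(&key).is_some());
}
```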

For deployment patterns (nginx, Kubernetes), security model (rate limiter, traversal protection, body cap), tuning knobs, and 20+ more answers, see docs/FAQ.md.
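The sharded rate limiter mentioned in the feature table and the security model can be pictured as 16 independent locks, each guarding its own slice of the client-count map. This is an illustrative stdlib-only sketch, not the crate's implementation; a real limiter also expires counts over a time window:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

/// Illustrative 16-way sharded counter, in the spirit of the
/// "sharded rate limiter (16-way)" feature row: hashing the client
/// key to a shard spreads lock contention across independent mutexes.
struct ShardedLimiter {
    shards: Vec<Mutex<HashMap<String, u32>>>,
    limit: u32,
}

impl ShardedLimiter {
    fn new(limit: u32) -> Self {
        Self {
            shards: (0..16).map(|_| Mutex::new(HashMap::new())).collect(),
            limit,
        }
    }

    fn allow(&self, client: &str) -> bool {
        let mut hasher = DefaultHasher::new();
        client.hash(&mut hasher);
        let shard = &self.shards[(hasher.finish() % 16) as usize];
        let mut counts = shard.lock().unwrap();
        let count = counts.entry(client.to_string()).or_insert(0);
        *count += 1;
        *count <= self.limit
    }
}

fn main() {
    let limiter = ShardedLimiter::new(3);
    assert!(limiter.allow("10.0.0.1"));
    assert!(limiter.allow("10.0.0.1"));
    assert!(limiter.allow("10.0.0.1"));
    // The fourth request from the same client exceeds the limit.
    assert!(!limiter.allow("10.0.0.1"));
}
```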


§Migrating from 0.0.4 → 0.0.5

Request::headers changed from HashMap<String, String> to Vec<(String, String)> for lower per-request allocation pressure and faster lookup at typical header counts. The Request::header(name) accessor is unchanged; only direct construction and iteration are affected.

// before (0.0.4)
let mut headers = std::collections::HashMap::new();
headers.insert("content-type".to_string(), "text/plain".to_string());
let request = Request {
    method: "GET".into(),
    path: "/".into(),
    version: "HTTP/1.1".into(),
    headers,
};

// after (0.0.5)
let request = Request {
    method: "GET".into(),
    path: "/".into(),
    version: "HTTP/1.1".into(),
    headers: vec![("content-type".into(), "text/plain".into())],
};
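The unchanged Request::header(name) accessor over the new Vec representation amounts to a case-insensitive linear scan. The standalone function below is a sketch of the lookup shape (not the crate's source): at the handful of headers a typical request carries, scanning a Vec is cheaper than hashing every key up front.

```rust
/// Sketch (not the crate's source) of a case-insensitive header
/// lookup over the 0.0.5 `Vec<(String, String)>` representation.
fn header<'a>(headers: &'a [(String, String)], name: &str) -> Option<&'a str> {
    headers
        .iter()
        .find(|(k, _)| k.eq_ignore_ascii_case(name))
        .map(|(_, v)| v.as_str())
}

fn main() {
    let headers = vec![
        ("Content-Type".to_string(), "text/plain".to_string()),
        ("Connection".to_string(), "keep-alive".to_string()),
    ];
    // Lookup is case-insensitive, matching the HTTP field-name rules.
    assert_eq!(header(&headers, "content-type"), Some("text/plain"));
    assert_eq!(header(&headers, "x-missing"), None);
}
```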

See CHANGELOG.md for the full breaking-change list.


§Development

cargo build        # Build the project
cargo test         # Run all tests
cargo clippy       # Lint with Clippy
cargo fmt          # Format with rustfmt

See CONTRIBUTING.md for setup, signed commits, and PR guidelines.


THE ARCHITECT: Sebastien Rousseau · THE ENGINE: EUXIS ᛫ Enterprise Unified Execution Intelligence System


§License

Dual-licensed under Apache 2.0 or MIT, at your option.


Re-exports§

pub use error::ServerError;
pub use language::Language;
pub use language::LanguageDetector;
pub use server::ConnectionPool;
pub use server::Server;
pub use server::ServerBuilder;
pub use server::ShutdownSignal;
pub use server::ThreadPool;

Modules§

async_runtime
Async runtime helpers for panic-safe blocking execution.
async_server (feature: async)
Async Tokio server entrypoints.
batch (feature: batch)
Batch processing utilities for concurrent file reads.
distributed_rate_limit (feature: distributed-rate-limit)
Distributed rate-limiting adapters and backend contracts.
enterprise (feature: enterprise)
Enterprise policy primitives for transport security, auth, telemetry, and runtime profiles.
error
Error model for runtime, parsing, and policy operations.
http2_server (feature: http2)
HTTP/2 server entrypoints (feature-gated).
http3_profile (feature: http3-profile)
HTTP/3 production profile primitives.
language
Lightweight language detection with runtime-customizable patterns.
observability
Observability initialization helpers.
optimized (feature: optimized)
Zero-allocation lookup helpers for hot-path operations.
perf_server (feature: high-perf)
High-performance async-first HTTP/1 server primitives.
protocol_state
Protocol byte-classification helpers for fuzzing and conformance tests.
request
HTTP/1.x request parsing and validation.
response
HTTP response construction and serialization.
runtime_autotune (feature: autotune)
Runtime auto-tuning helpers based on detected host resource profile.
server
Core HTTP server runtime.
streaming (feature: streaming)
Pull-based chunked streaming utilities for large files.
tenant_isolation (feature: multi-tenant)
Multi-tenant configuration isolation and secret-provider helpers.