# Witnet Wire Protocol
[crates.io](https://crates.io/crates/witnet)
**Witnet** is a lightweight, high-performance, HTTP-like TCP server framework built specifically for ultra-fast transfer of "Execution Witnesses" between Execution Clients and Provers.
It strips away unnecessary application-layer overhead (such as HTTP/2, gRPC, or QUIC framing) in favor of a zero-buffering socket connection framed by a fixed 9-byte header. On local loopback it achieves **throughput above 13.4 GB/s**, beating typical Unix Domain Socket (IPC) latency limits.
## Core Features
1. **HTTP-Like Developer Experience:** You structure your applications using `Router`, `Request`, and `Response` paradigms common in frameworks like `Actix` or `Axum`.
2. **Zero-Buffering Pipelines:** `Request::into_body_stream()` returns a `Body` mapped directly onto the active `tokio::net::TcpStream`. No intermediate `BytesMut` buffering takes place behind the scenes.
3. **Protection Limits:** A hard-coded `MAX_PAYLOAD_SIZE` drops the connection immediately if the declared payload length exceeds 5 GB, preventing node denial-of-service via oversized allocations.
4. **Massive Throughput:** Enables `TCP_NODELAY` and reads through `tokio_util::io::ReaderStream` with 64 KB chunks under the hood, minimizing allocation and syscall overhead and outperforming hand-rolled read loops.
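The protection limit in item 3 amounts to checking the declared length prefix *before* any allocation happens. A minimal sketch of that guard (the constant value comes from the text above; the helper name is hypothetical, not part of the crate's API):

```rust
/// Hypothetical sketch of the payload-size guard described above.
/// 5 GB expressed in bytes; the real crate may use a different name or value.
const MAX_PAYLOAD_SIZE: u64 = 5 * 1024 * 1024 * 1024;

/// Returns true if the length declared in the frame header is acceptable.
/// Rejecting here, before allocating, is what closes the DoS vector:
/// a malicious peer can claim any length in the 8-byte prefix for free.
fn payload_len_ok(declared_len: u64) -> bool {
    declared_len <= MAX_PAYLOAD_SIZE
}

fn main() {
    assert!(payload_len_ok(10 * 1024 * 1024)); // 10 MB: fine
    assert!(!payload_len_ok(MAX_PAYLOAD_SIZE + 1)); // over 5 GB: drop the connection
    println!("guard ok");
}
```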
## Installation
Add `witnet` to your `Cargo.toml`:
```toml
[dependencies]
witnet = "0.1.0"
```
## The Protocol Specification
A single message frame consists of:
* **1-byte Message Type identifier**
* **8-byte Length Prefix** (Big-Endian `u64`)
* **Raw Payload bytes**
Once the 9-byte header has been parsed, the socket pauses and control is yielded to your `Router` handler logic before any payload bytes are consumed.
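The frame layout above can be reproduced in a few lines. A minimal sketch of encoding and decoding the 9-byte header (these helper names are illustrative, not part of the crate's public API):

```rust
/// Illustrative helpers; not part of witnet's public API.
/// A frame header is 1 type byte followed by a big-endian u64 length.
fn encode_header(msg_type: u8, payload_len: u64) -> [u8; 9] {
    let mut header = [0u8; 9];
    header[0] = msg_type;
    header[1..9].copy_from_slice(&payload_len.to_be_bytes());
    header
}

fn decode_header(header: &[u8; 9]) -> (u8, u64) {
    let msg_type = header[0];
    let len = u64::from_be_bytes(header[1..9].try_into().unwrap());
    (msg_type, len)
}

fn main() {
    let header = encode_header(0x01, 1024);
    assert_eq!(header[0], 0x01);
    assert_eq!(decode_header(&header), (0x01, 1024));
    println!("frame ok");
}
```

The raw payload bytes follow the header on the wire; with the length known up front, the receiver can stream them without any sentinel scanning.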
## Standard Usage
### Server Configuration
```rust
use witnet::{Server, Router, Request, Response, Body};
use futures::StreamExt;
use std::time::Duration;
const WITNESS_BY_NUMBER: u8 = 0x01;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let app = Router::new()
        .route(WITNESS_BY_NUMBER, handle_by_number);

    Server::bind("127.0.0.1:8080")
        .with_handshake_timeout(Duration::from_secs(5)) // Optional: defaults to 10s
        .with_tcp_nodelay(true) // Optional: defaults to false
        .serve(app)
        .await?;

    Ok(())
}

async fn handle_by_number(req: Request) -> Response {
    println!("Incoming Request with payload length: {}", req.len());

    // Stream the payload directly out of the socket half
    if let Body::Stream(_, mut stream) = req.into_body_stream() {
        while let Some(Ok(chunk)) = stream.next().await {
            // Process `chunk` without loading gigabytes into RAM
        }
    }

    // Always respond
    Response::new(WITNESS_BY_NUMBER, "Processed Successfully")
}
```
### State Extraction (Axum-style)
Witnet provides a `State` extractor wrapper heavily inspired by Axum, allowing you to pass `Arc`/shared state objects directly into your handler functions!
```rust
use witnet::{Server, Router, Request, Response, State};
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
    db_pool: Arc<String>,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let state = AppState {
        db_pool: Arc::new("postgres://user:pass@localhost".to_string()),
    };

    let app = Router::new()
        .route(0x01, handle_with_state)
        .with_state(state); // Bakes the state into the handlers

    Server::bind("127.0.0.1:8081").serve(app).await?;
    Ok(())
}

// Extract the state via the `State(state)` destructuring pattern
async fn handle_with_state(req: Request, State(state): State<AppState>) -> Response {
    println!("Type: {} | Config: {}", req.msg_type, state.db_pool);
    Response::new(0x01, "Success")
}
```
### Client Configuration
```rust
use witnet::{Client, Request, Body};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = Client::connect("127.0.0.1:8080").await?;

    // 10 MB of mock witness data
    let massive_mock_data = vec![0x11; 10 * 1024 * 1024];
    let req = Request::builder(0x01, massive_mock_data);

    let resp = client.send(req).await?;
    println!("Response Type: {}", resp.msg_type);
    Ok(())
}
}
```
## Benchmarks
Benchmarked on an Apple Silicon M3 Max over the macOS `127.0.0.1` loopback interface. Thanks to the 64 KB `ReaderStream` chunking, `witnet` sustains speeds well above manual socket-polling scripts.
| Payload Size | Throughput |
| --- | --- |
| **8 MB** | **2538 MB/s** |
| **20 MB** | **3481 MB/s** |
| **100 MB** | **4195 MB/s** |
| **300 MB** | **8774 MB/s** |
| **500 MB** | **13.4 GB/s** |
*Note: You can replicate the testing parameters yourself by running `cargo run --release --example dummy_witness`.*
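The figures above are just bytes moved divided by wall-clock time. A small sketch of that arithmetic, checking that the headline number is self-consistent (the helper is illustrative and not part of the example binary):

```rust
/// Throughput in MB/s given bytes moved and elapsed seconds (illustrative helper).
fn throughput_mb_s(bytes: u64, secs: f64) -> f64 {
    (bytes as f64 / (1024.0 * 1024.0)) / secs
}

fn main() {
    // 13.4 GB/s ≈ 13721 MB/s, so moving 500 MB takes roughly
    // 500 / 13721 ≈ 0.0364 s; plugging that back in recovers the rate.
    let rate = throughput_mb_s(500 * 1024 * 1024, 0.0364);
    assert!((rate / 1024.0 - 13.4).abs() < 0.1); // ~13.4 GB/s
    println!("{rate:.0} MB/s");
}
```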