# Vapor HTTP
A dynamically scaling HTTP server written in Rust that auto-tunes its thread count to keep per-thread CPU utilization in a target band (scaling up above 70%, down below 30%).
## Features
- **Auto-scaling**: Starts at 1 thread, grows as needed up to 256 threads
- **Target utilization**: Scales up at >70% CPU usage, scales down at <30%
- **Backpressure**: Limits concurrent connections to available threads, preventing FD exhaustion
- **Zero config**: Works out of the box with sensible defaults
- **Minimal**: ~2MB memory baseline, bare metal performance
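The backpressure bullet can be sketched in plain Rust: a counter that admits a new connection only while a slot is free, so the accept loop stops reading sockets once every thread is busy. `ConnLimiter` is a hypothetical stand-in for illustration, not part of vapor-http's public API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical sketch: admit at most `max` concurrent connections.
struct ConnLimiter {
    active: AtomicUsize,
    max: usize,
}

impl ConnLimiter {
    fn new(max: usize) -> Self {
        Self { active: AtomicUsize::new(0), max }
    }

    /// Try to claim a slot; returns false when all slots are busy,
    /// signaling the accept loop to back off (backpressure).
    fn try_acquire(&self) -> bool {
        let mut cur = self.active.load(Ordering::Relaxed);
        loop {
            if cur >= self.max {
                return false;
            }
            match self.active.compare_exchange_weak(
                cur, cur + 1, Ordering::AcqRel, Ordering::Relaxed,
            ) {
                Ok(_) => return true,
                Err(now) => cur = now,
            }
        }
    }

    /// Free a slot when the connection closes.
    fn release(&self) {
        self.active.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let limiter = ConnLimiter::new(2);
    assert!(limiter.try_acquire());
    assert!(limiter.try_acquire());
    assert!(!limiter.try_acquire()); // full: third connection is refused
    limiter.release();
    assert!(limiter.try_acquire()); // slot freed, admitted again
}
```

Refusing to accept (rather than queueing unboundedly) is what keeps file-descriptor usage bounded by the thread count.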
## Quick Start
```rust
use vapor_http::{Vapor, Request, Response};

#[tokio::main]
async fn main() {
    // Reply 200 OK to every request (handler shape assumed; see Configuration).
    Vapor::new(|_req: Request| Response::ok("Hello, world!")).run().await;
}
```
## Installation
```toml
[dependencies]
vapor-http = "0.1"
tokio = { version = "1", features = ["full"] }
tracing-subscriber = "0.3"
```
## Configuration
### Custom Port
```rust
Vapor::new(handler).port(3000).run().await;
```
### Custom Thread Pool
```rust
use vapor_http::DynamicThreadPool;
let pool = DynamicThreadPool::with_limits(4, 32);
Vapor::new(handler).pool(pool).run().await;
```
## Response Types
```rust
Response::ok("text") // 200 OK, text/plain
Response::json("{\"key\": \"value\"}") // 200 OK, application/json
Response::not_found() // 404 Not Found
Response::internal_error("msg") // 500 Internal Server Error
Response::status(418).body("I'm a teapot") // Custom status code
Response::ok("data").with_header("X-Custom", "value")
```
## Request Object
```rust
pub struct Request {
    pub method: String,                 // "GET", "POST", etc.
    pub path: String,                   // "/api/users"
    pub headers: Vec<(String, String)>,
    pub body: Option<String>,
}
```
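A handler typically branches on the request's method and path. A minimal, self-contained sketch using the `Request` shape above, with a `(status, body)` tuple as a hypothetical stand-in for vapor-http's `Response`:

```rust
/// The Request shape from above, reproduced locally so the sketch compiles alone.
struct Request {
    method: String,
    path: String,
    headers: Vec<(String, String)>,
    body: Option<String>,
}

/// Hypothetical routing function: (status code, body) stands in for Response.
fn route(req: &Request) -> (u16, String) {
    match (req.method.as_str(), req.path.as_str()) {
        ("GET", "/") => (200, "hello".to_string()),
        ("POST", "/echo") => (200, req.body.clone().unwrap_or_default()),
        _ => (404, "not found".to_string()),
    }
}

fn main() {
    let req = Request {
        method: "POST".into(),
        path: "/echo".into(),
        headers: vec![("Content-Type".into(), "text/plain".into())],
        body: Some("ping".into()),
    };
    let (status, body) = route(&req);
    assert_eq!(status, 200);
    assert_eq!(body, "ping");
}
```

In a real handler the arms would return `Response::ok`, `Response::json`, `Response::not_found`, and so on from the table above.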
## Examples
```bash
cargo run --example hello-world
cargo run --example json-api
```
## How Scaling Works
| Load | Threads | Per-thread CPU |
|------|---------|----------------|
| Idle | 1 | ~0% |
| Low | 1 | ~30-50% |
| Moderate | 1 | ~70% (triggers scale up) |
| Heavy | 2-4 | ~70-85% each |
| Very Heavy | 4+ | Scales as needed |
- **Scale up**: +1 thread when utilization > 70%
- **Scale down**: -1 thread when utilization < 30%
- **Cooldown**: 100ms between scaling decisions
- **Max threads**: 256
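The rules above can be sketched as a pure decision function (cooldown timing omitted for brevity; `next_thread_count` is an illustrative name, not the library's internal API):

```rust
/// Hypothetical sketch of the scaling rules: +1 thread above 70% utilization,
/// -1 thread below 30%, clamped to [1, 256].
fn next_thread_count(utilization: f64, current: usize) -> usize {
    const MIN_THREADS: usize = 1;
    const MAX_THREADS: usize = 256;
    if utilization > 0.70 && current < MAX_THREADS {
        current + 1 // overloaded: add a worker
    } else if utilization < 0.30 && current > MIN_THREADS {
        current - 1 // underutilized: retire a worker
    } else {
        current // inside the 30-70% band: hold steady
    }
}

fn main() {
    assert_eq!(next_thread_count(0.85, 1), 2); // heavy load: scale up
    assert_eq!(next_thread_count(0.50, 2), 2); // in band: no change
    assert_eq!(next_thread_count(0.10, 2), 1); // idle: scale down
    assert_eq!(next_thread_count(0.10, 1), 1); // never below 1 thread
}
```

The gap between the 70% and 30% thresholds provides hysteresis, and the 100ms cooldown prevents the controller from oscillating on short utilization spikes.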
## License
MIT