# valve
OpenAI-compatible proxy with adaptive token budgeting, per-session dashboards, and optional metrics exporters.
## Quick start
```bash
cargo install --path .
```
Create `config.toml` in `~/.config/valve/` (Linux/macOS) or `%APPDATA%\valve\` (Windows):
```toml
[server]
host = "127.0.0.1"
port = 7532

[proxy]
default_provider = "openai"

[proxy.providers.openai]
kind = "open_ai"
base_url = "https://api.openai.com/v1"
api_key = "sk-your-key"

[adapters]
observers = ["telemetry"]
```
Run the proxy:
```bash
valve
```
The terminal UI starts automatically. Key bindings:

- `Tab`, arrow keys, or `h`/`l`: move between tabs
- `1`-`4`: pick a history window; `+`/`-`: cycle windows
- `Up`/`Down` or `j`/`k`: navigate sessions
- `r`: reset history
- `q`: quit

Append `--headless` for log-only mode.
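With the proxy running, any OpenAI-compatible client can talk to it. A minimal sketch using only Python's standard library, assuming the proxy exposes the usual `/v1/chat/completions` path on the host and port from the config above (the model name is illustrative):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request aimed at the proxy.
# The /v1/chat/completions path and the model name are assumptions;
# host and port come from the [server] section of config.toml.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello through valve"}],
}
req = urllib.request.Request(
    "http://127.0.0.1:7532/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With valve running locally, send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because valve holds the upstream `api_key`, the client itself does not need a real key.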
## Telemetry exporters
Add an optional Prometheus scrape endpoint or OTLP exporter in `config.toml`:
```toml
[telemetry.prometheus]
enabled = true
listen = "127.0.0.1:9898"

[telemetry.otlp]
enabled = true
endpoint = "http://localhost:4317"
protocol = "grpc"
interval_secs = 60
```
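With the Prometheus endpoint enabled, point a scrape job at it from `prometheus.yml`. A minimal sketch, assuming the `listen` address above and that valve serves metrics on the default `/metrics` path:

```yaml
scrape_configs:
  - job_name: "valve"
    static_configs:
      - targets: ["127.0.0.1:9898"]
```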
## Examples
```bash
cargo run --example agent_dialogue
cargo run --example agent_dialogue -- --headless
```
## Benchmarks
```bash
cargo bench --bench throughput
```
## License
Dual-licensed under MIT or Apache-2.0. See `LICENSE-MIT` and `LICENSE-APACHE` for details.