# throttlecrab-server
A high-performance rate limiting server with multiple protocol support, built on throttlecrab.
## Features
- Multiple protocols: HTTP (JSON), gRPC, and Redis/RESP
- High performance: Lock-free shared state with Tokio async runtime
- Production ready: Health checks, metrics endpoint, configurable logging, systemd support
- Flexible deployment: Docker, binary, or source installation
- Shared rate limiter: All protocols share the same store for consistent limits
- Observability: Prometheus-compatible metrics for monitoring and alerting
## Installation
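A minimal sketch of a source install, assuming the binary crate is published on crates.io under the same name (Docker and prebuilt binaries are the other deployment options listed above):

```bash
# Assumed crate name; alternatively build from a checkout with `cargo build --release`
cargo install throttlecrab-server
```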
## Quick Start

Start the server and make rate-limited requests:
```bash
# Start the server with HTTP transport (flag names here are assumptions; see --help)
throttlecrab-server --http --http-port 7070

# In another terminal, make requests with curl
# First request - allowed
curl -s -X POST http://localhost:7070/throttle -H 'Content-Type: application/json' \
  -d '{"key": "user:123", "max_burst": 3, "count_per_period": 10, "period": 60}'
# Response:
# {"allowed":true,"limit":3,"remaining":2,"reset_after":60,"retry_after":0}

# Make more requests to see rate limiting in action
# Response when rate limited:
# {"allowed":false,"limit":3,"remaining":0,"reset_after":58,"retry_after":6}
```
## Environment Variables

All CLI arguments can be configured via environment variables with the `THROTTLECRAB_` prefix:
```bash
# Transport configuration
THROTTLECRAB_HTTP_PORT=8080
# Store configuration (variable name assumed; maps to the store-type argument)
THROTTLECRAB_STORE=adaptive
# General configuration (variable name assumed; maps to the log-level argument)
THROTTLECRAB_LOG_LEVEL=info

# CLI arguments override environment variables
THROTTLECRAB_HTTP_PORT=8080 throttlecrab-server --http --http-port 7070
# Server will use port 7070 (CLI takes precedence)
```
## Transport Performance Comparison
| Transport | Protocol | Throughput | Latency (P99) | Latency (P50) |
|---|---|---|---|---|
| HTTP | JSON | 175K req/s | 327 μs | 176 μs |
| gRPC | Protobuf | 163K req/s | 377 μs | 188 μs |
| Redis | RESP | 184K req/s | 275 μs | 170 μs |
You can run the tests on your own hardware with `cd integration-tests && ./run-transport-test.sh -t all -T 32 -r 10000`.
## Protocol Documentation

### HTTP REST API

Endpoint: `POST /throttle`
Request Body (JSON):
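A representative request body, assuming the JSON fields use the same names as the Redis `THROTTLE` parameters documented below (values are illustrative):

```json
{
  "key": "user:123",
  "max_burst": 10,
  "count_per_period": 100,
  "period": 60,
  "quantity": 1
}
```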
Note: `quantity` is optional (defaults to 1).
Response (JSON):
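The response carries the same fields shown in the Quick Start example:

```json
{
  "allowed": true,
  "limit": 10,
  "remaining": 9,
  "reset_after": 60,
  "retry_after": 0
}
```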
### gRPC Protocol

See `proto/throttlecrab.proto` for the service definition. Use any gRPC client library to connect.
### Redis Protocol

The server implements the Redis Serialization Protocol (RESP), making it compatible with any Redis client.

Port: Default `6379` (configurable with `--redis-port`)

Commands:

- `THROTTLE key max_burst count_per_period period [quantity]` - Check rate limit
- `PING` - Health check
- `QUIT` - Close connection
Example using `redis-cli`:

```
127.0.0.1:6379> THROTTLE user:123 10 100 60
1) (integer) 1
2) (integer) 10
3) (integer) 9
4) (integer) 60
5) (integer) 0
```
Example using a Redis client library (Python `redis-py` shown):

```python
import redis

r = redis.Redis(host="localhost", port=6379)
result = r.execute_command("THROTTLE", "user:123", 10, 100, 60)
# result: [1, 10, 9, 60, 0]
```
## Client Integration

Use any HTTP client, gRPC client library, or Redis client to connect to throttlecrab-server. See the `examples/` directory for implementation examples.
## Monitoring

- Health endpoint: `GET /health` (available on HTTP port)
- Metrics endpoint: `GET /metrics` (Prometheus format, available on HTTP port)
- Logs: Structured logging with configurable levels
- Performance metrics: Available via the `/metrics` endpoint
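For example, both endpoints can be checked with curl, assuming the HTTP transport is listening on port 7070 as in the Quick Start:

```bash
# Liveness check
curl http://localhost:7070/health

# Prometheus scrape target
curl http://localhost:7070/metrics
```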
## Available Metrics

### Core Metrics

- `throttlecrab_uptime_seconds`: Server uptime in seconds
- `throttlecrab_requests_total`: Total requests processed across all transports
- `throttlecrab_requests_by_transport{transport="http|grpc|redis"}`: Requests per transport
- `throttlecrab_requests_allowed`: Total allowed requests
- `throttlecrab_requests_denied`: Total denied requests
- `throttlecrab_requests_errors`: Total internal errors
- `throttlecrab_top_denied_keys{key="...",rank="1-100"}`: Top denied keys by count
### Example Prometheus Queries

```promql
# Monitor denial rate
rate(throttlecrab_requests_denied[5m]) / rate(throttlecrab_requests_total[5m])

# Alert on high error rate
rate(throttlecrab_requests_errors[5m]) > 0.01
```
## Store Types

| Store Type | Use Case | Cleanup Strategy |
|---|---|---|
| `periodic` | Predictable load | Fixed intervals |
| `probabilistic` | High throughput | Random sampling |
| `adaptive` | Variable load | Self-tuning |
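A hedged sketch of selecting a store at startup; `--store` and `--http` are assumed flag names (check `--help` for the real ones), while `--http-port` follows the environment-variable mapping shown earlier:

```bash
# Flag names assumed; the adaptive store self-tunes its cleanup under variable load
throttlecrab-server --http --http-port 7070 --store adaptive
```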