# chopin-pg

High-fidelity engineering for the modern virtuoso.
chopin-pg provides the high‑performance PostgreSQL driver used by the Chopin suite. It offers zero‑allocation query handling and per‑core connection pools.
## 🛠️ Usage Example

A minimal query example (a sketch; the import path and method names are illustrative, so check the crate docs for the exact API):

```rust
use chopin_pg::Pool; // illustrative import path

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect and run a simple query.
    let pool = Pool::connect("postgres://localhost/chopin").await?;
    let row = pool.query_one("SELECT 1").await?;
    println!("{:?}", row);
    Ok(())
}
```
## 🔌 Connection Pool Sizing for High Concurrency
When handling high concurrency (e.g., 512+ concurrent connections), proper connection pool sizing is critical: an undersized pool causes connection starvation and timeouts, while an oversized one wastes database resources.
### Why Pool Size Matters
A common mistake is assuming a 1:1 ratio between concurrent HTTP connections and database pool size. This fails because:
- Not all incoming requests hit the database simultaneously. At any moment, only 30-40% of HTTP connections are actively waiting on DB queries. The rest are parsing requests, serializing responses, or executing in middleware.
- Database connections are expensive. Each connection consumes memory and resources on both the client and server. Creating a connection for every possible concurrent request wastes resources.
- Connection starvation causes cascading failures. If all pool connections are busy and a new request arrives, it must wait. If many requests queue, timeouts increase exponentially.
### The Right Formula

Pool Size per Worker = (Total Concurrent Connections / Number of Workers) × Connection Ratio

Typical Connection Ratio: 0.2 to 0.5 (i.e., a 2:1 to 5:1 HTTP:DB ratio)
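The formula can be expressed as a small helper (a sketch; the function name is illustrative):

```rust
/// Per-worker pool size from total concurrency, worker count, and the
/// connection ratio (0.2–0.5 typical). Floors to stay conservative.
fn pool_size_per_worker(total_concurrent: u32, workers: u32, ratio: f64) -> u32 {
    let per_worker = total_concurrent as f64 / workers as f64;
    (per_worker * ratio).floor() as u32
}

fn main() {
    // 512 concurrent / 8 workers at a 0.4 ratio (2.5:1) → 25 per worker
    println!("{}", pool_size_per_worker(512, 8, 0.4));
}
```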
### Example: 512 Concurrent Connections
Assuming an 8-core system with 8 workers:
512 connections ÷ 8 workers = 64 connections per worker
❌ Pool size 64 per worker: 64:64 = 1:1 ratio (FAILS - oversubscribes the database server)
✅ Pool size 25 per worker: 64:25 = 2.5:1 ratio (RECOMMENDED)
✅ Pool size 20 per worker: 64:20 = 3.2:1 ratio (CONSERVATIVE)
✅ Pool size 32 per worker: 64:32 = 2:1 ratio (IF READ-HEAVY)
Why 64 fails: at 1:1, the 8 workers open 8 × 64 = 512 server-side connections. Each PostgreSQL connection is a separate backend process, and beyond a small multiple of the server's core count, extra connections reduce throughput through context switching and contention rather than adding capacity. And since only 30-40% of HTTP connections are waiting on the database at any moment, most of those 512 connections would sit idle while still consuming memory.
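The server-side arithmetic is worth checking explicitly: PostgreSQL ships with `max_connections = 100` by default, so even the recommended 8 × 25 layout assumes the server limit has been raised. A quick sketch (helper names are illustrative):

```rust
/// Total server-side connections implied by a per-worker pool size.
fn server_connections(workers: u32, pool_per_worker: u32) -> u32 {
    workers * pool_per_worker
}

/// Whether the layout fits under the server's max_connections limit.
fn fits(workers: u32, pool_per_worker: u32, max_connections: u32) -> bool {
    server_connections(workers, pool_per_worker) <= max_connections
}

fn main() {
    println!("{}", server_connections(8, 64)); // 1:1 sizing: 512 backend processes
    println!("{}", server_connections(8, 25)); // recommended sizing: 200 backend processes
    println!("{}", fits(8, 25, 300));          // fits if max_connections is raised to e.g. 300
}
```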
### Configuration

Set the pool size when initializing the connection pool (a sketch; `PoolConfig`, `max_connections`, and the URL are illustrative):

```rust
use chopin_pg::{Pool, PoolConfig}; // illustrative import path

// For 512 concurrent with 8 workers, use 25 per worker
let config = PoolConfig::from_url("postgres://localhost/chopin")?
    .max_connections(25);
let pool = Pool::new(config); // ← Recommended starting point
```
### Load Testing Recommendations
After configuring pool size, validate under realistic load:
```bash
# Load test with 512 concurrent clients, 8 threads, 30 seconds
# (wrk shown as an example tool; substitute your own endpoint)
wrk -t8 -c512 -d30s http://localhost:8080/api/health

# Monitor for:
# - Connection pool timeouts
# - Response latency increases
# - "All connections busy" errors in logs
```

```sql
-- Database connection stats (in psql):
SELECT count(*) FROM pg_stat_activity;
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;
```
### Tuning Guidelines
| Load Pattern | Suggested Pool Size | Ratio | Notes |
|---|---|---|---|
| Read-heavy (80%+ reads) | 30-35 per worker | 2:1 | Queries are fast; can support higher concurrency |
| Balanced (50/50) | 20-25 per worker | 2.5-3.2:1 | Starting point for most workloads |
| Write-heavy (80%+ writes) | 15-20 per worker | 3.2-4.3:1 | Writes are slower; queue requests at the application instead of adding connections |
| Microservices + API calls | 25-40 per worker | 2-3:1 | External latency means more waiting connections |
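The table rows can be encoded as a lookup when deriving pool configuration from a deployment profile (the enum and ranges simply mirror the table above; illustrative only):

```rust
use std::ops::RangeInclusive;

/// Workload profiles from the tuning table.
#[allow(dead_code)]
enum LoadPattern {
    ReadHeavy,
    Balanced,
    WriteHeavy,
    Microservices,
}

/// Suggested per-worker pool size range for each profile.
fn suggested_pool_size(pattern: LoadPattern) -> RangeInclusive<u32> {
    match pattern {
        LoadPattern::ReadHeavy => 30..=35,
        LoadPattern::Balanced => 20..=25,
        LoadPattern::WriteHeavy => 15..=20,
        LoadPattern::Microservices => 25..=40,
    }
}

fn main() {
    let range = suggested_pool_size(LoadPattern::Balanced);
    println!("balanced: {}-{} per worker", range.start(), range.end());
}
```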
### Monitoring & Alerts
Set up monitoring for pool exhaustion:
Log when pool utilization exceeds 80% (with `pool_size = 25`, investigate whenever active connections exceed 20). Symptoms of an undersized pool:

- Increasing average response time under sustained load
- Queries queued in `pg_stat_activity`
- Application logs: "Pool connection timeout"
- The database slow query log filling up
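A minimal utilization check along these lines (the 80% threshold matches the guidance above; function names are illustrative):

```rust
/// Pool utilization as a fraction of configured size.
fn utilization(active: u32, pool_size: u32) -> f64 {
    active as f64 / pool_size as f64
}

/// True when utilization crosses the 80% alert threshold.
fn should_alert(active: u32, pool_size: u32) -> bool {
    utilization(active, pool_size) > 0.8
}

fn main() {
    // With pool_size = 25: 21 active (84%) alerts, 20 active (exactly 80%) does not.
    println!("{} {}", should_alert(21, 25), should_alert(20, 25));
}
```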
### Summary
- Never use 1:1 ratio of HTTP connections to DB pool size
- Start with 2.5:1 ratio (20-25 pool size for 512 concurrent / 8 workers)
- Load test under realistic conditions before production deployment
- Monitor pool utilization and adjust based on actual behavior
For 512 concurrent connections, a well-tuned pool of 25 connections per worker will handle typical API workloads efficiently while preventing resource exhaustion.