//! Connection pooler feature module.
//!
//! # Architecture
//!
//! Owns the Postgres connection pool abstraction, split across **three domain
//! types** (plus a Prometheus serialization layer), each with its own file so
//! responsibilities never blur:
//!
//! | Layer | Type | File | Concern |
//! | --- | --- | --- | --- |
//! | Factory | [`ConnectionPoolManager`] | `manager.rs` | Builds [`PgPoolOptions`](sqlx::postgres::PgPoolOptions) from defaults and opens pools. Carries no runtime state; cloning is cheap and intentional. |
//! | Lifecycle | [`ConnectionPool`] | `pool.rs` | Owns a live [`PgPool`] plus the manager that created it. Handles `open`/`close`/`reconfigure`. |
//! | Telemetry | [`ConnectionPoolSnapshot`] | `snapshot.rs` | Pure, read-only point-in-time view for logging/metrics. No I/O, no locks. |
//! | Prometheus | `impl ConnectionPoolSnapshot` | `prometheus.rs` | Serializes snapshots into `/metrics` exposition format (`athena_pg_pool_*`). |
//!
//! Struct **declarations** live in this file (`mod.rs`) so the shape of the
//! public API is visible at a glance. **Implementations** are in per-domain
//! sibling files so each file reads as one cohesive concern.
//!
//! # Typical lifecycle
//!
//! ```no_run
//! # use athena_rs::features::connection_pooler::ConnectionPoolManager;
//! # async fn demo(uri: &str) -> Result<(), sqlx::Error> {
//! // 1. Build a factory once, share its defaults everywhere.
//! let manager = ConnectionPoolManager::default()
//!     .with_test_before_acquire(true);
//!
//! // 2. Open a live pool for a named client.
//! let mut pool = manager.open("logging".to_string(), uri).await?;
//!
//! // 3. Capture telemetry without stopping the world.
//! let snapshot = pool.snapshot();
//! tracing::info!(?snapshot, "logging pool occupancy");
//!
//! // 4. Reconfigure (atomic replace) when defaults change.
//! pool.reconfigure(uri, manager.clone()).await?;
//!
//! // 5. Drain on shutdown.
//! pool.close().await;
//! # Ok(()) }
//! ```
//!
//! # Why three types instead of one
//!
//! Separating the factory from the live pool lets multiple pools share
//! identical limits without reaching into a global. Separating the snapshot
//! from the pool keeps telemetry free of async/locking concerns: the snapshot
//! is a plain `Clone` data struct you can ship across threads, serialize,
//! persist, or hand to a Prometheus exposition writer without touching the
//! live pool again.
use crate::PoolConfig;
use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;
use std::time::Duration;

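// Per-domain implementation files named in the module docs above. (These
// declarations are a reconstruction; this chunk does not show the originals,
// so visibility and any re-exports are assumptions.)
mod manager;
mod pool;
mod prometheus;
mod snapshot;
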
/// Factory for [`ConnectionPool`]s with shared defaults.
///
/// Centralizes pool-option construction so connection limits, lifetime caps,
/// and acquisition behaviour can change in exactly one place and apply to every
/// client pool produced afterwards.
///
/// The manager is pure configuration — it holds no runtime handles — so it is
/// cheap to [`Clone`] and safe to pass across tasks. Every [`ConnectionPool`]
/// keeps a clone of the manager that produced it so it can rebuild itself on
/// [`ConnectionPool::reconfigure`] without extra plumbing.
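// NOTE: the declaration below is a sketch reconstructed from the docs above;
// the original is missing from this chunk. The field names and types are
// illustrative assumptions (connection limits, lifetime caps, acquisition
// behaviour), not the crate's actual definition.
#[derive(Clone)]
pub struct ConnectionPoolManager {
    /// Upper bound on connections each produced pool may hold. (assumed)
    max_connections: u32,
    /// Hard cap on any single connection's lifetime. (assumed)
    max_lifetime: Duration,
    /// Whether to ping a connection before handing it out. (assumed)
    test_before_acquire: bool,
}
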
/// Live, owning wrapper around a Postgres [`PgPool`].
///
/// Carries its `client_name` label (used in logs/metrics) and the
/// [`ConnectionPoolManager`] that produced it, so the pool can be
/// reconfigured or snapshotted without any external registry lookup.
///
/// `Clone` is implemented for convenience — sqlx's [`PgPool`] is `Arc`-backed,
/// so cloning shares the same underlying pool. The manager clone is also
/// cheap. Use [`ConnectionPool::close`] exactly once (on the last shared
/// instance) to drain cleanly.
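// NOTE: sketch only; the original declaration is missing from this chunk.
// The docs above name the three pieces this type carries (client label, live
// pool, originating manager), but the field names here are assumptions.
#[derive(Clone)]
pub struct ConnectionPool {
    /// Label used in logs/metrics. (assumed name)
    client_name: String,
    /// The live, `Arc`-backed sqlx pool; cloning shares it. (assumed name)
    pool: PgPool,
    /// Factory that produced this pool; reused by `reconfigure`. (assumed name)
    manager: ConnectionPoolManager,
}
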
/// Point-in-time, read-only view of a [`ConnectionPool`]'s occupancy.
///
/// Captured by [`ConnectionPool::snapshot`]. Completely decoupled from the
/// live pool: no locks are held, no async is required, and the snapshot never
/// references back into the pool. Safe to [`Clone`], [`serde::Serialize`],
/// ship across threads, and persist to Postgres (`client_connections` table)
/// or emit as Prometheus samples (via
/// [`write_prometheus`](crate::features::connection_pooler::prometheus)).
///
/// `recorded_at` is the single source of truth for staleness detection —
/// consumers that care about freshness should compare it against `Utc::now()`
/// rather than relying on wall clock at read time.
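// NOTE: sketch only; the original declaration is missing from this chunk.
// `recorded_at` is named by the docs above; the occupancy fields and their
// types are illustrative assumptions.
#[derive(Clone, serde::Serialize)]
pub struct ConnectionPoolSnapshot {
    /// Client label copied from the pool. (assumed)
    pub client_name: String,
    /// Total connections currently open, idle plus in use. (assumed)
    pub size: u32,
    /// Connections currently sitting idle in the pool. (assumed)
    pub idle: u32,
    /// Capture time; the single source of truth for staleness checks.
    pub recorded_at: chrono::DateTime<chrono::Utc>,
}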