//! SQL placeholder generation, caching, and batch-size derivation.
//!
//! ## SQLite variable limit
//!
//! `SQLITE_MAX_VARIABLE_NUMBER` was 999 before v3.32 (2020) and is 32766
//! in current SQLite. cqs requires SQLite 3.35+ (for RETURNING) so the
//! limit is always 32766. The v1.22.0 audit (SHL-31/32/33) found 15 call
//! sites still using the old 999-derived batch sizes, producing 10-30×
//! more SQL statements than necessary. The [`max_rows_per_statement`]
//! helper centralizes the derivation so call sites don't need to
//! re-derive the constant.
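//!
//! A quick sanity check of the derivation (plain arithmetic, mirroring the
//! constants defined below):
//!
//! ```
//! // 3-column INSERT: old 999-derived cap vs. the current 32766-derived one.
//! assert_eq!(999 / 3, 333);
//! assert_eq!((32766 - 300) / 3, 10822); // ~32x fewer statements per batch
//! ```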
use std::borrow::Cow;
use std::sync::LazyLock;

/// Read `CQS_BUSY_TIMEOUT_MS` env var, falling back to `default_ms`. Single
/// source of truth so every SQLite pool (store, embedding cache, query
/// cache) honours the same tuning knob.
pub fn busy_timeout_ms(default_ms: u64) -> u64 {
    std::env::var("CQS_BUSY_TIMEOUT_MS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(default_ms)
}
/// SQLite's `SQLITE_MAX_VARIABLE_NUMBER` since v3.32 (2020).
/// Single source of truth — all batch-size derivations reference this.
pub const SQLITE_MAX_VARIABLES: usize = 32766;
/// Generic headroom so a future caller adding one more bind variable
/// doesn't instantly trip the limit. NOT sized to absorb a full extra
/// column; adding a new column requires updating `vars_per_row` at the
/// call site (SHL-41 audit rationale correction).
pub const SAFETY_MARGIN_VARS: usize = 300;
/// Derive the maximum rows per INSERT/DELETE statement given the number
/// of bind variables per row. Centralizes the `(LIMIT - MARGIN) / N`
/// derivation that was previously inlined (and wrong) at 15+ sites.
///
/// For single-bind queries (e.g. `WHERE id IN (?, ?, ...)`), pass
/// `vars_per_row = 1`. For multi-column INSERTs, pass the column count.
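///
/// A doc-test sketch of the arithmetic (literal numbers stand in for the
/// constants above):
/// ```
/// // Single-bind `IN` list: (32766 - 300) / 1 = 32466 rows per statement.
/// assert_eq!((32766 - 300) / 1, 32466);
/// // 4-column INSERT: integer division floors to 8116 rows.
/// assert_eq!((32766 - 300) / 4, 8116);
/// ```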
pub const fn max_rows_per_statement(vars_per_row: usize) -> usize {
    (SQLITE_MAX_VARIABLES - SAFETY_MARGIN_VARS) / vars_per_row
}
/// Maximum batch size that is pre-built and cached at startup.
///
/// SHL-V1.25-14: sized exactly to cover the caller-facing max
/// (`max_rows_per_statement(1) = SQLITE_MAX_VARIABLES - SAFETY_MARGIN_VARS
/// = 32466`). With the previous 10_000 cap, single-bind batches beyond
/// 10k fell off the cache and re-built the ~120KB placeholder string
/// every call, negating the cache's purpose. The extra ~22k strings
/// cost ~1-2MB at startup in exchange for zero-alloc on the hot path.
const PLACEHOLDER_CACHE_MAX: usize = SQLITE_MAX_VARIABLES - SAFETY_MARGIN_VARS;
/// Pre-built placeholder strings for n = 1..=PLACEHOLDER_CACHE_MAX.
/// Index 0 is unused; index n holds the string for n placeholders.
static PLACEHOLDER_CACHE: LazyLock<Vec<String>> =
    LazyLock::new(|| (0..=PLACEHOLDER_CACHE_MAX).map(build_placeholders).collect());
/// Build a placeholder string without caching (used by both cache init and large n).
fn build_placeholders(n: usize) -> String {
    // "?NNNNN," is at most 7 bytes per placeholder for n <= 99_999.
    let mut s = String::with_capacity(n * 7);
    for i in 1..=n {
        if i > 1 {
            s.push(',');
        }
        s.push('?');
        s.push_str(&i.to_string());
    }
    s
}
/// Build a comma-separated list of numbered SQL placeholders: "?1,?2,...,?N".
///
/// Batch sizes up to [`PLACEHOLDER_CACHE_MAX`] are served from a static
/// cache as `Cow::Borrowed(&'static str)`; larger values build a fresh
/// `String` on demand and return `Cow::Owned`. The cache covers the full
/// caller-facing range — no production call site should fall off it.
///
/// PF-V1.25-7: previously returned `String` via `PLACEHOLDER_CACHE[n].clone()`,
/// which re-allocated the full placeholder string on every cache hit. A 500-id
/// batch cost ~4KB memcpy per call; on a hot reindex or batch-search loop
/// this adds up to measurable allocator pressure. The cache hit now returns a
/// `&'static str` borrow via `Cow::Borrowed`.
///
/// SHL-V1.25-14: `PLACEHOLDER_CACHE_MAX` is bound to `SQLITE_MAX_VARIABLES -
/// SAFETY_MARGIN_VARS` so large batches don't miss the cache.
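///
/// A minimal sketch of the output shape (plain string building, no cache or
/// `Cow` involved):
/// ```
/// let s: String = (1..=3).map(|i| format!("?{i}")).collect::<Vec<_>>().join(",");
/// assert_eq!(s, "?1,?2,?3");
/// ```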
pub fn placeholders(n: usize) -> Cow<'static, str> {
    if n <= PLACEHOLDER_CACHE_MAX {
        Cow::Borrowed(PLACEHOLDER_CACHE[n].as_str())
    } else {
        Cow::Owned(build_placeholders(n))
    }
}