Formal cost models for caches, scheduling, and batching (bd-lff4p.5.6).
This module provides principled mathematical models with explicit objective functions for three subsystems:
- Cache cost model — loss function `L(hit_rate, memory_bytes)` for glyph atlas / cell caches, with optimal budget derivation under LRU.
- Pipeline scheduling model — M/G/1 queue model for the `input → update → view → diff → present` pipeline, including optimal batch size and latency decomposition.
- Patch batching model — cost of immediate-flush vs deferred-coalesce for the `BufferDiff → CellPatch → GPU upload` path.
§Mathematical Framework
All models follow the same pattern:
- Objective function `J(θ)` parameterized by policy knobs `θ`.
- Constraint set capturing hard limits (memory, latency budget, etc.).
- Optimal policy `θ* = argmin J(θ)` subject to constraints.
- Evidence comparing the chosen policy to alternatives.
§Cache Cost Model
§Objective
`J(B) = c_miss × miss_rate(B) + c_mem × B`
where B is the cache budget in bytes, c_miss is the cost of a cache miss (rasterization + upload latency in µs), and c_mem is the opportunity cost per byte of atlas memory.
§Miss Rate Model (LRU on IRM workload)
Under the Independent Reference Model with Zipf(α) popularity:
`miss_rate(B) ≈ 1 − min(1, (B / item_bytes) / N)^(1/α)`
where N is the working set size and α is the Zipf exponent.
The optimal budget minimizes J(B) subject to B ≤ B_max:
`B* = item_bytes × N × (c_miss / (α × c_mem × N))^(α/(1+α))`
clamped to [item_bytes, B_max].
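The two formulas above can be combined into a short sketch. The parameter values below are hypothetical (a 4 KiB glyph, a 10 000-item working set, a 16 MiB cap); the real module drives this through `CacheCostParams` / `CacheCostResult`:

```rust
/// miss_rate(B) ≈ 1 − min(1, (B / item_bytes) / N)^(1/α)
fn miss_rate(budget: f64, item_bytes: f64, n: f64, alpha: f64) -> f64 {
    let cached_fraction = ((budget / item_bytes) / n).min(1.0);
    1.0 - cached_fraction.powf(1.0 / alpha)
}

/// B* = item_bytes × N × (c_miss / (α × c_mem × N))^(α/(1+α)),
/// clamped to [item_bytes, B_max].
fn optimal_budget(
    item_bytes: f64, n: f64, alpha: f64,
    c_miss: f64, c_mem: f64, b_max: f64,
) -> f64 {
    let b_star = item_bytes * n
        * (c_miss / (alpha * c_mem * n)).powf(alpha / (1.0 + alpha));
    b_star.clamp(item_bytes, b_max)
}

fn main() {
    // Hypothetical glyph atlas: 4 KiB glyphs, 10 000-glyph working set,
    // 250 µs per miss, tiny per-byte memory cost, 16 MiB hard cap.
    let (item_bytes, n, alpha) = (4096.0, 10_000.0, 1.0);
    let (c_miss, c_mem, b_max) = (250.0, 1e-6, 16.0 * 1024.0 * 1024.0);
    let b = optimal_budget(item_bytes, n, alpha, c_miss, c_mem, b_max);
    println!("B* = {b:.0} bytes, miss_rate(B*) = {:.4}",
             miss_rate(b, item_bytes, n, alpha));
}
```

With these numbers the unconstrained optimum exceeds the cap, so B* clamps to B_max and the residual miss rate is what the memory constraint forces you to accept.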
§Pipeline Scheduling Model
§M/G/1 Queue
Model the render pipeline as a single server with:
- Arrival rate `λ` (frames/ms)
- Service time distribution `S` with mean `E[S]` and variance `Var[S]`
Pollaczek-Khinchine formula for mean sojourn time:
`E[T] = E[S] + (λ × E[S²]) / (2 × (1 − ρ))`
where ρ = λ × E[S] is the utilization.
§Stage Decomposition
`S = S_input + S_update + S_view + S_diff + S_present`
Each stage i has independent service time `S_i` with measured mean and variance. Total variance: `Var[S] = Σ Var[S_i]`.
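A minimal sketch of how the stage decomposition feeds the Pollaczek-Khinchine formula; the stage means and variances below are illustrative, not measured (the real module carries them in `StageStats` / `PipelineCostParams`):

```rust
/// Illustrative per-stage service-time statistics (ms, ms²).
struct Stage { mean_ms: f64, var_ms2: f64 }

/// E[T] = E[S] + λ E[S²] / (2 (1 − ρ)); returns None if ρ ≥ 1 (unstable).
fn sojourn_time(stages: &[Stage], lambda: f64) -> Option<f64> {
    // S = Σ S_i with independent stages, so moments add:
    let mean: f64 = stages.iter().map(|s| s.mean_ms).sum();
    let var: f64 = stages.iter().map(|s| s.var_ms2).sum();
    let e_s2 = var + mean * mean;   // E[S²] = Var[S] + E[S]²
    let rho = lambda * mean;        // utilization ρ = λ E[S]
    if rho >= 1.0 {
        return None;                // queue grows without bound
    }
    Some(mean + lambda * e_s2 / (2.0 * (1.0 - rho)))
}

fn main() {
    let stages = [
        Stage { mean_ms: 0.1, var_ms2: 0.01 }, // input
        Stage { mean_ms: 0.5, var_ms2: 0.05 }, // update
        Stage { mean_ms: 1.0, var_ms2: 0.20 }, // view
        Stage { mean_ms: 0.8, var_ms2: 0.10 }, // diff
        Stage { mean_ms: 0.6, var_ms2: 0.04 }, // present
    ];
    let lambda = 0.06; // frames/ms, ≈ 60 fps
    if let Some(t) = sojourn_time(&stages, lambda) {
        println!("E[T] = {t:.3} ms");
    }
}
```

Note that the queueing delay term grows like 1/(1 − ρ): halving headroom near saturation roughly doubles the wait, which is why the model reports utilization alongside latency.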
§Patch Batching Model
§Batch vs Immediate
`J_immediate = n × (c_overhead + c_per_patch)`
`J_batch(k) = ceil(n/k) × (c_overhead + k × c_per_patch) + (k−1) × c_latency`
Optimal batch size:
`k* = sqrt(n × c_overhead / c_latency)`
clamped to [1, n].
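A sketch comparing the two policies; the cost constants are hypothetical, and the real module wraps this in `BatchCostParams` / `BatchCostResult`:

```rust
/// J_immediate = n × (c_overhead + c_per_patch)
fn j_immediate(n: f64, c_overhead: f64, c_per_patch: f64) -> f64 {
    n * (c_overhead + c_per_patch)
}

/// J_batch(k) = ceil(n/k) × (c_overhead + k × c_per_patch) + (k−1) × c_latency
fn j_batch(n: f64, k: f64, c_overhead: f64, c_per_patch: f64, c_latency: f64) -> f64 {
    (n / k).ceil() * (c_overhead + k * c_per_patch) + (k - 1.0) * c_latency
}

/// k* = sqrt(n × c_overhead / c_latency), clamped to [1, n].
fn optimal_k(n: f64, c_overhead: f64, c_latency: f64) -> f64 {
    (n * c_overhead / c_latency).sqrt().clamp(1.0, n)
}

fn main() {
    // Hypothetical costs in µs: 50 per flush, 1 per patch, 2 per patch
    // of added coalescing latency, for n = 1000 patches per frame.
    let (n, c_overhead, c_per_patch, c_latency) = (1000.0, 50.0, 1.0, 2.0);
    let k = optimal_k(n, c_overhead, c_latency).round();
    println!("k* ≈ {k}");
    println!("immediate: {} µs", j_immediate(n, c_overhead, c_per_patch));
    println!("batched:   {} µs", j_batch(n, k, c_overhead, c_per_patch, c_latency));
}
```

The square-root form falls out of balancing the two k-dependent terms: per-flush overhead shrinks as n/k while coalescing latency grows linearly in k, so batching wins whenever c_overhead dominates c_latency.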
Structs§
- `BatchCostParams` — Parameters for the patch batching cost model.
- `BatchCostPoint` — A single evaluation point on the batch cost surface.
- `BatchCostResult` — Result of batch cost model optimization.
- `CacheCostParams` — Parameters for the glyph/cell cache cost model.
- `CacheCostPoint` — A single evaluation point on the cache cost surface.
- `CacheCostResult` — Result of cache cost model optimization.
- `PipelineCostParams` — Parameters for the M/G/1 pipeline scheduling model.
- `PipelineCostResult` — Result of pipeline cost model analysis.
- `SensitivityPoint` — Sensitivity analysis: how does the optimal policy change as a parameter varies?
- `StageBreakdown` — Per-stage contribution to the pipeline.
- `StageStats` — Service time statistics for a single pipeline stage.
Functions§
- `batch_sensitivity_patches` — Run sensitivity analysis on batch size vs total patches.
- `cache_sensitivity_zipf` — Run sensitivity analysis on cache budget vs Zipf alpha.