Provider caching awareness — helps LLM providers cache repeated context.
Many LLM providers (Anthropic, OpenAI, Google) implement prefix caching: if the beginning of a prompt matches a previous request, the provider can skip re-processing those tokens. This module helps lean-ctx structure output to maximize prefix cache hits.
Strategies:
- Stable prefix ordering: Static context (project structure, types) placed BEFORE dynamic context (current file, recent changes)
- Hash-based change detection: Only re-emit context sections that changed
- Cacheable block markers: Mark stable blocks so the LLM host knows they can be cached aggressively
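The first two strategies can be sketched roughly as follows. All names here (`Section`, `content_hash`, `changed_sections`) are hypothetical illustrations, not this module's actual API: each section's content is hashed, only sections whose hash changed since the last request are re-emitted, and stable sections are sorted to the front so the prompt prefix stays byte-identical across requests.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical section type: a label, its rendered content, and a
// stability flag (true for project structure, types, system prompt).
struct Section {
    name: &'static str,
    content: String,
    stable: bool,
}

fn content_hash(s: &str) -> u64 {
    let mut h = DefaultHasher::new();
    s.hash(&mut h);
    h.finish()
}

// Return the names of sections whose content changed since the last
// request, updating the stored hashes in place.
fn changed_sections(
    sections: &[Section],
    last_hashes: &mut HashMap<&'static str, u64>,
) -> Vec<&'static str> {
    let mut changed = Vec::new();
    for s in sections {
        let h = content_hash(&s.content);
        // `insert` returns the previous hash; a differing (or absent)
        // previous hash means the section must be re-emitted.
        if last_hashes.insert(s.name, h) != Some(h) {
            changed.push(s.name);
        }
    }
    changed
}

fn main() {
    let mut sections = vec![
        Section { name: "types", content: "struct Foo;".into(), stable: true },
        Section { name: "current_file", content: "fn main() {}".into(), stable: false },
    ];
    // Stable prefix ordering: stable sections sort before dynamic ones
    // (false < true, and the sort is stable within each group).
    sections.sort_by_key(|s| !s.stable);

    let mut hashes = HashMap::new();
    // First request: everything is new.
    assert_eq!(changed_sections(&sections, &mut hashes), vec!["types", "current_file"]);
    // Second request: only the edited dynamic section is re-emitted.
    sections[1].content = "fn main() { run(); }".into();
    assert_eq!(changed_sections(&sections, &mut hashes), vec!["current_file"]);
}
```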
Structs§
- CacheableSection - A section of context with caching metadata.
- ProviderCacheState - Tracks which sections have been sent to the provider.
Functions§
- order_for_caching - Order sections for optimal prefix caching. Stable sections first (system, project structure, types), dynamic sections last (recent changes, current task).
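The ordering rule this function describes amounts to a stable partition: stable kinds first, dynamic kinds last, with relative order preserved inside each group. The sketch below is a guess at the logic, not the function's real signature; the `Kind` enum is invented for illustration.

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Kind {
    System,
    ProjectStructure,
    Types,
    RecentChanges,
    CurrentTask,
}

// Hypothetical sketch of order_for_caching's rule: dynamic kinds sort
// after stable kinds; the stable sort keeps intra-group order intact.
fn order_for_caching(mut kinds: Vec<Kind>) -> Vec<Kind> {
    let is_dynamic = |k: &Kind| matches!(k, Kind::RecentChanges | Kind::CurrentTask);
    kinds.sort_by_key(is_dynamic);
    kinds
}

fn main() {
    let ordered = order_for_caching(vec![
        Kind::CurrentTask,
        Kind::Types,
        Kind::RecentChanges,
        Kind::System,
    ]);
    // Stable sections (Types, System) lead; dynamic ones trail in order.
    assert_eq!(
        ordered,
        vec![Kind::Types, Kind::System, Kind::CurrentTask, Kind::RecentChanges]
    );
}
```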
- render_with_cache_hints - Render sections with cache boundary markers.
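A cache boundary marker is just a delimiter the host can key on when deciding how far the cacheable prefix extends. The sketch below invents a marker syntax and a minimal signature purely for illustration; the module's real output format may differ.

```rust
// Hypothetical sketch of render_with_cache_hints: emit a marker before
// each stable section so the host knows that prefix can be cached
// aggressively. The `<!-- cache:stable -->` syntax is invented here.
fn render_with_cache_hints(sections: &[(bool, &str)]) -> String {
    let mut out = String::new();
    for (stable, body) in sections {
        if *stable {
            out.push_str("<!-- cache:stable -->\n");
        }
        out.push_str(body);
        out.push('\n');
    }
    out
}

fn main() {
    let rendered = render_with_cache_hints(&[
        (true, "project structure..."),
        (false, "current task..."),
    ]);
    // The stable section is preceded by a marker; the dynamic one is not.
    assert!(rendered.starts_with("<!-- cache:stable -->"));
    assert_eq!(rendered.matches("cache:stable").count(), 1);
}
```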