
Module provider_cache

Provider caching awareness — helps LLM providers cache repeated context.

Many LLM providers (Anthropic, OpenAI, Google) implement prefix caching: if the beginning of a prompt matches a previous request, the provider can skip re-processing those tokens. This module helps lean-ctx structure output to maximize prefix cache hits.
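To illustrate why prefix matching matters: the cacheable portion of a request is the part that byte-for-byte matches an earlier one. A minimal sketch, not part of this module's API (real providers match on tokens, not bytes):

```rust
/// Byte length of the shared prefix between two prompts. A provider with
/// prefix caching can skip re-processing roughly this much of the input
/// (illustrative only; real providers operate on tokens).
fn shared_prefix_len(a: &str, b: &str) -> usize {
    a.bytes().zip(b.bytes()).take_while(|(x, y)| x == y).count()
}

fn main() {
    let old = "SYSTEM: project structure...\nTASK: fix bug A";
    let new = "SYSTEM: project structure...\nTASK: fix bug B";
    // Everything up to the differing task suffix is a potential cache hit.
    assert_eq!(shared_prefix_len(old, new), old.len() - 1);
    println!("shared prefix: {} bytes", shared_prefix_len(old, new));
}
```

Putting the volatile suffix last, as this module does, keeps that shared prefix as long as possible across requests.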

Strategies:

  1. Stable prefix ordering: Static context (project structure, types) placed BEFORE dynamic context (current file, recent changes)
  2. Hash-based change detection: Only re-emit context sections that changed
  3. Cacheable block markers: Mark stable blocks so the LLM host knows they can be cached aggressively
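Strategy 2 can be sketched with the standard library's hasher. The `ChangeTracker` type and its method names below are hypothetical stand-ins, not this module's actual API:

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical change tracker: remembers the last hash emitted per
/// section name and reports whether a section needs to be re-sent.
struct ChangeTracker {
    seen: HashMap<String, u64>,
}

impl ChangeTracker {
    fn new() -> Self {
        ChangeTracker { seen: HashMap::new() }
    }

    /// Returns true if `content` differs from what was last emitted under
    /// `name` (or was never emitted), updating the stored hash either way.
    fn needs_emit(&mut self, name: &str, content: &str) -> bool {
        let mut h = DefaultHasher::new();
        content.hash(&mut h);
        let digest = h.finish();
        match self.seen.insert(name.to_string(), digest) {
            Some(prev) => prev != digest,
            None => true,
        }
    }
}

fn main() {
    let mut tracker = ChangeTracker::new();
    assert!(tracker.needs_emit("types", "struct Foo;"));       // first time: emit
    assert!(!tracker.needs_emit("types", "struct Foo;"));      // unchanged: skip
    assert!(tracker.needs_emit("types", "struct Foo { x: u32 }")); // changed: re-emit
    println!("change detection ok");
}
```

Note that `DefaultHasher` is not stable across processes; a persistent cache would want a fixed hash function instead.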

Structs§

CacheableSection
A section of context with caching metadata.
ProviderCacheState
Tracks which sections have been sent to the provider.

Enums§

SectionPriority
Priority class used to order sections for prefix caching: stable sections sort before dynamic ones.

Functions§

order_for_caching
Order sections for optimal prefix caching. Stable sections first (system, project structure, types), dynamic sections last (recent changes, current task).
render_with_cache_hints
Render sections with cache boundary markers.
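A rough sketch of how the two functions above might fit together, using hypothetical signatures (the real `CacheableSection` and `SectionPriority` definitions may differ):

```rust
/// Hypothetical stand-ins for this module's types.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum SectionPriority {
    Stable,  // system prompt, project structure, types
    Dynamic, // recent changes, current task
}

struct CacheableSection {
    priority: SectionPriority,
    body: String,
}

/// Stable sections first. `sort_by_key` is a stable sort, so relative
/// order within each group is preserved and the prefix stays deterministic.
fn order_for_caching(mut sections: Vec<CacheableSection>) -> Vec<CacheableSection> {
    sections.sort_by_key(|s| s.priority);
    sections
}

/// Emit a boundary marker where the stable prefix ends, so the host knows
/// everything before it is safe to cache aggressively.
fn render_with_cache_hints(sections: &[CacheableSection]) -> String {
    let mut out = String::new();
    let mut boundary_emitted = false;
    for s in sections {
        if s.priority == SectionPriority::Dynamic && !boundary_emitted {
            out.push_str("<!-- cache-boundary -->\n");
            boundary_emitted = true;
        }
        out.push_str(&s.body);
        out.push('\n');
    }
    out
}

fn main() {
    let ordered = order_for_caching(vec![
        CacheableSection { priority: SectionPriority::Dynamic, body: "current task".into() },
        CacheableSection { priority: SectionPriority::Stable, body: "project structure".into() },
        CacheableSection { priority: SectionPriority::Stable, body: "types".into() },
    ]);
    let rendered = render_with_cache_hints(&ordered);
    assert_eq!(rendered, "project structure\ntypes\n<!-- cache-boundary -->\ncurrent task\n");
}
```

The key design point is the stable sort: reordering stable sections between requests would itself invalidate the provider's prefix cache.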