Structs
- CachedChatModel - A ChatModel wrapper that caches responses using an LlmCache.
- InMemoryCache - In-memory LLM response cache with optional TTL expiration (see the sketch after this list).
- SemanticCache - Cache that uses embedding similarity to match semantically equivalent queries.
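The TTL behaviour described for InMemoryCache can be pictured with a small sketch. Everything below (the InMemoryCacheSketch type, its get/put methods, and the synchronous signatures) is an illustrative assumption rather than this crate's actual API: an entry counts as a hit only while it is younger than the optional TTL.

```rust
// Sketch only: names and signatures are assumptions, not the crate's real API.
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Caches one response string per prompt, optionally expiring entries after a TTL.
struct InMemoryCacheSketch {
    ttl: Option<Duration>,
    entries: HashMap<String, (String, Instant)>,
}

impl InMemoryCacheSketch {
    fn new(ttl: Option<Duration>) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn put(&mut self, prompt: &str, response: String) {
        self.entries.insert(prompt.to_owned(), (response, Instant::now()));
    }

    fn get(&self, prompt: &str) -> Option<&str> {
        let (response, stored_at) = self.entries.get(prompt)?;
        // An entry is a hit only if it has not outlived the configured TTL.
        match self.ttl {
            Some(ttl) if stored_at.elapsed() > ttl => None,
            _ => Some(response.as_str()),
        }
    }
}

fn main() {
    let mut cache = InMemoryCacheSketch::new(Some(Duration::from_secs(60)));
    cache.put("What is Rust?", "A systems programming language.".to_owned());
    assert_eq!(cache.get("What is Rust?"), Some("A systems programming language."));
    assert_eq!(cache.get("Unseen prompt"), None);
}
```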
Traits
- LlmCache - Trait for caching LLM responses (see the sketch below).
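To show how these pieces are meant to fit together, here is a minimal sketch of the caching-wrapper pattern. The trait and struct names (LlmCacheSketch, ChatModelSketch, CachedChatModelSketch, MapCache, EchoModel) and their blocking signatures are assumptions for illustration only; the crate's real LlmCache and CachedChatModel interfaces differ (and are likely async).

```rust
// Illustrative sketch: a wrapper that consults a cache before calling the model.
use std::collections::HashMap;

/// Assumed caching contract: look up a stored response, or store a new one.
trait LlmCacheSketch {
    fn get(&self, prompt: &str) -> Option<String>;
    fn put(&mut self, prompt: &str, response: &str);
}

/// Stand-in for a chat model that turns a prompt into a response.
trait ChatModelSketch {
    fn generate(&self, prompt: &str) -> String;
}

/// Wrapper that checks the cache first and only calls the model on a miss.
struct CachedChatModelSketch<M, C> {
    model: M,
    cache: C,
}

impl<M: ChatModelSketch, C: LlmCacheSketch> CachedChatModelSketch<M, C> {
    fn generate(&mut self, prompt: &str) -> String {
        if let Some(hit) = self.cache.get(prompt) {
            return hit; // cache hit: skip the model call entirely
        }
        let response = self.model.generate(prompt);
        self.cache.put(prompt, &response);
        response
    }
}

/// Trivial cache and model so the example compiles and runs on its own.
#[derive(Default)]
struct MapCache(HashMap<String, String>);

impl LlmCacheSketch for MapCache {
    fn get(&self, prompt: &str) -> Option<String> {
        self.0.get(prompt).cloned()
    }
    fn put(&mut self, prompt: &str, response: &str) {
        self.0.insert(prompt.to_owned(), response.to_owned());
    }
}

struct EchoModel;

impl ChatModelSketch for EchoModel {
    fn generate(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

fn main() {
    let mut cached = CachedChatModelSketch { model: EchoModel, cache: MapCache::default() };
    assert_eq!(cached.generate("hello"), "echo: hello"); // miss: calls the model
    assert_eq!(cached.generate("hello"), "echo: hello"); // hit: served from the cache
}
```

Because the wrapper only touches the cache through the trait, an exact-match map cache could be swapped for a semantic, embedding-similarity cache without changing the wrapper or the model code.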