Contextual retrieval module for improving chunk embeddings.
Based on Anthropic’s contextual retrieval technique: before embedding each chunk, we prepend a context summary that places the chunk within the larger document. This helps semantic search find chunks that would otherwise be missed due to lack of context.
For example, a chunk saying “I’ve been using basil and mint in my cooking lately” might get a context prefix like: “This is a conversation where the user discusses their cooking preferences and mentions growing herbs in their garden.”
This allows semantic queries like “dinner with homegrown ingredients” to find the chunk even though it doesn’t explicitly mention “dinner” or “homegrown”.
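The prefixing step described above can be sketched as follows. This is a minimal illustration, not the module's actual implementation: `apply_contextual_prefix` is a hypothetical helper, and in practice the context summary would come from an LLM call that situates the chunk within the full document.

```rust
/// Hypothetical sketch of contextual prefixing: joins a document-level
/// context summary and a chunk into a single text for embedding.
/// The real module's types and signatures may differ.
fn apply_contextual_prefix(context_summary: &str, chunk: &str) -> String {
    // The summary and the chunk are embedded together as one text,
    // so the chunk's vector reflects its surrounding context.
    format!("{context_summary}\n\n{chunk}")
}

fn main() {
    // Summary assumed to be produced by a separate summarization step.
    let summary = "This is a conversation where the user discusses their \
                   cooking preferences and mentions growing herbs in their garden.";
    let chunk = "I've been using basil and mint in my cooking lately";
    let contextualized = apply_contextual_prefix(summary, chunk);
    println!("{contextualized}");
}
```

The contextualized text, not the raw chunk, is what gets passed to the embedding model; the original chunk text is still what is stored and returned to the user at retrieval time.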
Enums§
- ContextualEngine - Contextual retrieval engine that can use either OpenAI or local models.
Functions§
- apply_contextual_prefixes - Apply contextual prefixes to chunks for embedding. Returns new chunk texts with context prepended.