codemem-embeddings: Pluggable embedding providers for Codemem.
Supports multiple backends (a usage sketch follows this list):
- Candle (default): Local BAAI/bge-base-en-v1.5 inference in pure Rust
- Ollama: Local Ollama server with any embedding model
- OpenAI: OpenAI API or any compatible endpoint (Together, Azure, etc.)
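A minimal sketch of the intended flow with the default Candle backend, constructing a service and embedding one string. The constructor name `new`, the method name `embed`, and the `Vec<f32>` return type are assumptions for illustration; only the type names `EmbeddingService` and `EmbeddingProvider` come from this page.

```rust
// Sketch only: `new()`, `embed()`, and the return types are assumed,
// not confirmed by this documentation page.
use codemem_embeddings::{EmbeddingProvider, EmbeddingService};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Default backend: Candle running BAAI/bge-base-en-v1.5 locally.
    let service = EmbeddingService::new()?;

    // Embed one string; bge-base-en-v1.5 is a BERT-base model,
    // so vectors are 768-dimensional.
    let vector: Vec<f32> = service.embed("What is a pluggable embedding provider?")?;
    println!("got {} dimensions", vector.len());
    Ok(())
}
```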
Structs§
- CachedProvider: Wraps any EmbeddingProvider with an LRU cache (see the sketch after this list).
- EmbeddingService: Embedding service with Candle inference (no internal cache; use the CachedProvider wrapper).
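Since `EmbeddingService` deliberately has no internal cache, repeated inputs would re-run inference; the docs point to `CachedProvider` for that. A hedged sketch, assuming a `CachedProvider::new(inner, capacity)` constructor shape (not confirmed here) and using the documented `CACHE_CAPACITY` default:

```rust
// Sketch: the `CachedProvider::new(inner, capacity)` signature is an
// assumption; only the names CachedProvider, EmbeddingService, and
// CACHE_CAPACITY appear on this page.
use codemem_embeddings::{CachedProvider, EmbeddingProvider, EmbeddingService, CACHE_CAPACITY};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let inner = EmbeddingService::new()?;

    // Wrap the service in an LRU cache so repeated inputs skip inference.
    let provider = CachedProvider::new(inner, CACHE_CAPACITY);

    // First call runs the model; second call should hit the cache.
    let a = provider.embed("same text")?;
    let b = provider.embed("same text")?;
    assert_eq!(a, b);
    Ok(())
}
```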
Constants§
- CACHE_CAPACITY: Default LRU cache capacity.
- DEFAULT_BATCH_SIZE: Default batch size for batched embedding forward passes. Configurable via EmbeddingConfig.batch_size (see the sketch after this list).
- DIMENSIONS: Default embedding dimensions.
- MODEL_NAME: Default model name.
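The docs say the batch size is configurable via `EmbeddingConfig.batch_size`. A sketch of overriding it, assuming `EmbeddingConfig` implements `Default` and that a config-taking constructor exists (the name `with_config` is hypothetical; only the field `batch_size` and the constant `DEFAULT_BATCH_SIZE` are documented above):

```rust
// Sketch: `EmbeddingConfig::default()` and `EmbeddingService::with_config`
// are assumptions, not confirmed by this page.
use codemem_embeddings::{EmbeddingConfig, EmbeddingService, DEFAULT_BATCH_SIZE};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = EmbeddingConfig {
        // Double the default batch size for larger forward passes.
        batch_size: DEFAULT_BATCH_SIZE * 2,
        ..Default::default()
    };
    let _service = EmbeddingService::with_config(config)?;
    Ok(())
}
```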
Traits§
- EmbeddingProvider: Trait for pluggable embedding providers (a sketch of a custom implementation follows).
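Because this page does not show the trait's actual method set, the self-contained sketch below defines a stand-in trait with the same name to illustrate the plugin shape: any backend that can turn text into a fixed-size vector can slot in. The `embed` method and error type are invented for this illustration.

```rust
// Self-contained illustration: this trait is a stand-in with an assumed
// shape, not the crate's real definition, which this page does not show.
trait EmbeddingProvider {
    fn embed(&self, text: &str) -> Result<Vec<f32>, Box<dyn std::error::Error>>;
}

// A trivial backend: folds bytes into a fixed-size vector.
// Real backends (Candle, Ollama, OpenAI) would run or call a model here.
struct ToyProvider {
    dimensions: usize,
}

impl EmbeddingProvider for ToyProvider {
    fn embed(&self, text: &str) -> Result<Vec<f32>, Box<dyn std::error::Error>> {
        let mut v = vec![0.0f32; self.dimensions];
        for (i, b) in text.bytes().enumerate() {
            v[i % self.dimensions] += b as f32 / 255.0;
        }
        Ok(v)
    }
}

fn main() {
    let p = ToyProvider { dimensions: 8 };
    let v = p.embed("plug in any backend").unwrap();
    println!("{v:?}");
}
```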
Functions§
- from_env: Create an embedding provider from environment variables (usage sketch below).
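A sketch of backend selection via `from_env`. The environment variable name and the boxed-trait return type are guesses; the docs confirm only that `from_env` builds a provider from environment variables.

```rust
// Sketch: the env var name and the return type are assumptions; the docs
// say only that from_env reads environment variables to pick a provider.
use codemem_embeddings::{from_env, EmbeddingProvider};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // e.g. run with: CODEMEM_EMBEDDING_PROVIDER=ollama cargo run
    // (the variable name is hypothetical; check the crate for the real ones)
    let provider: Box<dyn EmbeddingProvider> = from_env()?;
    let v = provider.embed("routed through whichever backend was selected")?;
    println!("{} dims", v.len());
    Ok(())
}
```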