Crate codemem_embeddings

codemem-embeddings: Pluggable embedding providers for Codemem.

Supports multiple backends:

  • Candle (default): local BAAI/bge-base-en-v1.5 inference in pure Rust
  • Ollama: a local Ollama server with any embedding model
  • OpenAI: the OpenAI API or any compatible endpoint (Together, Azure, etc.)

Modules§

ollama
Ollama embedding provider for Codemem.
openai
OpenAI-compatible embedding provider for Codemem.

Structs§

CachedProvider
Wraps any EmbeddingProvider with an LRU cache.
EmbeddingService
Embedding service backed by Candle inference (no internal cache; wrap with CachedProvider for caching).
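The wrapper pattern above can be sketched as follows. This is a minimal stand-in, not the crate's actual API: the trait signature (synchronous, infallible) and the plain-HashMap memoization are assumptions, and a real CachedProvider would also evict entries once CACHE_CAPACITY is reached.

```rust
use std::collections::HashMap;

// Sketch of a pluggable provider trait; the real `EmbeddingProvider`
// signatures (async, error handling) are assumptions, not the crate's API.
trait EmbeddingProvider {
    fn embed(&mut self, text: &str) -> Vec<f32>;
}

// Minimal stand-in provider that counts how often it is actually invoked.
struct MockProvider {
    calls: usize,
}

impl EmbeddingProvider for MockProvider {
    fn embed(&mut self, text: &str) -> Vec<f32> {
        self.calls += 1;
        // Deterministic toy "embedding": byte sum and length.
        vec![text.bytes().map(|b| b as f32).sum(), text.len() as f32]
    }
}

// Sketch of what `CachedProvider` might do: memoize results by input text.
// A real LRU cache would evict at capacity; elided here for brevity.
struct CachedProvider<P: EmbeddingProvider> {
    inner: P,
    cache: HashMap<String, Vec<f32>>,
}

impl<P: EmbeddingProvider> CachedProvider<P> {
    fn new(inner: P) -> Self {
        Self { inner, cache: HashMap::new() }
    }
}

impl<P: EmbeddingProvider> EmbeddingProvider for CachedProvider<P> {
    fn embed(&mut self, text: &str) -> Vec<f32> {
        if let Some(v) = self.cache.get(text) {
            return v.clone();
        }
        let v = self.inner.embed(text);
        self.cache.insert(text.to_string(), v.clone());
        v
    }
}

fn main() {
    let mut p = CachedProvider::new(MockProvider { calls: 0 });
    let a = p.embed("hello");
    let b = p.embed("hello"); // second call is served from the cache
    assert_eq!(a, b);
    assert_eq!(p.inner.calls, 1); // inner provider ran only once
}
```

Because the wrapper implements the same trait it wraps, it composes with any backend (Candle, Ollama, OpenAI) without those backends knowing about caching.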

Constants§

CACHE_CAPACITY
Default LRU cache capacity.
DEFAULT_BATCH_SIZE
Default batch size for batched embedding forward passes. Configurable via EmbeddingConfig.batch_size.
DIMENSIONS
Default embedding dimensions.
MODEL_NAME
Default model name.
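How a batch-size constant like DEFAULT_BATCH_SIZE is typically used can be sketched as below: inputs are split into chunks and one forward pass runs per chunk. The value 32 and the helper names are illustrative assumptions, not the crate's actual constants or functions.

```rust
// Assumed value for illustration; the crate's real default may differ.
const DEFAULT_BATCH_SIZE: usize = 32;

// Stand-in for one batched model forward pass over a chunk of texts.
fn embed_batch(texts: &[String]) -> Vec<Vec<f32>> {
    texts.iter().map(|t| vec![t.len() as f32]).collect()
}

// Chunk the inputs so each forward pass sees at most DEFAULT_BATCH_SIZE
// texts, then flatten the per-chunk results back into one list.
fn embed_all(texts: &[String]) -> Vec<Vec<f32>> {
    texts
        .chunks(DEFAULT_BATCH_SIZE)
        .flat_map(|chunk| embed_batch(chunk))
        .collect()
}

fn main() {
    let texts: Vec<String> = (0..70).map(|i| format!("doc {i}")).collect();
    let embs = embed_all(&texts);
    // 70 inputs => chunks of 32 + 32 + 6, one embedding per input.
    assert_eq!(embs.len(), 70);
}
```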

Traits§

EmbeddingProvider
Trait for pluggable embedding providers.

Functions§

from_env
Create an embedding provider from environment variables.
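Backend selection from the environment might look like the sketch below. The enum, function, and the way the value would be read are all assumptions for illustration; the crate's actual environment variable names are not documented here, so none are invented.

```rust
#[derive(Debug, PartialEq)]
enum Backend {
    Candle,
    Ollama,
    OpenAi,
}

// In a real `from_env`, `name` would come from something like
// `std::env::var(...).ok()`; the recognized values here are assumptions.
fn backend_from(name: Option<&str>) -> Backend {
    match name {
        Some("ollama") => Backend::Ollama,
        Some("openai") => Backend::OpenAi,
        // Candle is the default backend per the crate description.
        _ => Backend::Candle,
    }
}

fn main() {
    assert_eq!(backend_from(None), Backend::Candle);
    assert_eq!(backend_from(Some("openai")), Backend::OpenAi);
    assert_eq!(backend_from(Some("ollama")), Backend::Ollama);
}
```

Falling back to Candle when the variable is unset matches the description above, which names Candle as the default backend.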