Crate mnem_llm_providers

§mnem-llm-providers

Text-generation adapters for mnem. Ships OpenAI chat-completions and Ollama chat adapters out of the box; each is gated behind its own cargo feature, enabled by default.

§Scope

mnem-core defines the TextGenerator trait (see mnem_core::llm); this crate provides the production adapters. It is used today by mnem retrieve --hyde. The multi-query / RAG-Fusion variant is planned and will share the same trait, and future LLM-in-the-loop features (query rewriting, answer synthesis, retrieval grading) will build on this surface as well.
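As a rough illustration of the trait-plus-adapters split described above, here is a minimal sketch. The method name, signature, and error type are assumptions for illustration only; the real surface lives in mnem_core::llm.

```rust
// Hypothetical shape of the TextGenerator trait (names and signature
// are illustrative assumptions, not copied from mnem-core).
pub trait TextGenerator {
    /// Generate a completion for the given prompt.
    fn generate(&self, prompt: &str) -> Result<String, String>;
}

// A trivial in-process adapter, standing in for the real OpenAI/Ollama ones.
pub struct EchoGenerator;

impl TextGenerator for EchoGenerator {
    fn generate(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

fn main() {
    let generator = EchoGenerator;
    println!("{}", generator.generate("hello").unwrap());
}
```

Because callers depend only on the trait, features like HyDE or a future RAG-Fusion path can swap providers without code changes.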

§Invariants

  • No tokio / no async. All adapters are synchronous, built on ureq (rustls-backed), matching mnem-embed-providers and mnem-rerank-providers.
  • No API keys in config or on disk. The config stores only the name of the env var holding the key (api_key_env); the key itself is read from the environment at adapter-construction time.
  • mnem-core stays free of HTTP clients. mnem-core still has zero network / HTTP / tokio dependencies in its tree, preserving its WASM-embeddability promise.
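The api_key_env pattern from the second invariant can be sketched as follows. The struct fields and constructor shape are assumptions for illustration; only the api_key_env field name and the read-at-construction behavior come from the docs above.

```rust
use std::env;

// Illustrative config: stores the *name* of the env var, never the key.
// Field layout is assumed; `api_key_env` is the documented field name.
struct OpenAiLlmConfig {
    api_key_env: String, // e.g. "OPENAI_API_KEY"
    model: String,
}

// Illustrative adapter holding the resolved key in memory only.
struct OpenAiAdapter {
    api_key: String,
    model: String,
}

impl OpenAiAdapter {
    // The key is resolved from the environment at construction time;
    // a missing variable is a construction error, not a runtime surprise.
    fn new(cfg: &OpenAiLlmConfig) -> Result<Self, String> {
        let api_key = env::var(&cfg.api_key_env)
            .map_err(|_| format!("env var {} not set", cfg.api_key_env))?;
        Ok(Self { api_key, model: cfg.model.clone() })
    }
}

fn main() {
    env::set_var("EXAMPLE_LLM_KEY", "sk-test");
    let cfg = OpenAiLlmConfig {
        api_key_env: "EXAMPLE_LLM_KEY".into(),
        model: "gpt-4o-mini".into(),
    };
    let adapter = OpenAiAdapter::new(&cfg).expect("key should be set");
    assert_eq!(adapter.api_key, "sk-test");
    println!("constructed adapter for {}", adapter.model);
}
```

The payoff of this design is that config files remain safe to commit and to ship: they name a variable, never a secret.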

Re-exports§

pub use config::OllamaLlmConfig;
pub use config::OpenAiLlmConfig;
pub use config::ProviderConfig;
pub use config::open;
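The re-exported config::open suggests a factory that turns a ProviderConfig into a boxed generator. A minimal sketch of that enum-dispatch pattern follows; the variant names, trait method, and return type are all assumptions, not the crate's actual signatures.

```rust
// Sketch of an enum-dispatch factory in the style of `config::open`
// (all names here are illustrative assumptions).
enum ProviderConfig {
    Ollama { model: String },
    OpenAi { model: String },
}

trait TextGenerator {
    fn describe(&self) -> String;
}

struct OllamaChat { model: String }
struct OpenAiChat { model: String }

impl TextGenerator for OllamaChat {
    fn describe(&self) -> String { format!("ollama/{}", self.model) }
}

impl TextGenerator for OpenAiChat {
    fn describe(&self) -> String { format!("openai/{}", self.model) }
}

// Pick the concrete adapter from the config and return it behind the trait.
fn open(cfg: &ProviderConfig) -> Box<dyn TextGenerator> {
    match cfg {
        ProviderConfig::Ollama { model } => Box::new(OllamaChat { model: model.clone() }),
        ProviderConfig::OpenAi { model } => Box::new(OpenAiChat { model: model.clone() }),
    }
}

fn main() {
    let gen = open(&ProviderConfig::Ollama { model: "llama3".into() });
    println!("{}", gen.describe());
}
```

Callers hold a Box&lt;dyn TextGenerator&gt; and never branch on the provider themselves, which keeps provider selection a pure configuration concern.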

Modules§

config
ProviderConfig and the open factory for LLM text-generation adapters. Mirrors mnem-embed-providers::config.
ollama
Ollama chat adapter.
openai
OpenAI chat-completions adapter.