§mnem-llm-providers
Text-generation adapters for mnem. Ships OpenAI chat-completions and Ollama chat adapters out of the box; both sit behind cargo features that are enabled by default but can be opted out of.
§Scope
In `mnem_core::llm`, mnem-core defines a `TextGenerator` trait. This crate provides the production adapters. Used today by `mnem retrieve --hyde`. The multi-query / RAG-Fusion variant is planned and will share the same trait. Future LLM-in-the-loop features (query rewriting, answer synthesis, retrieval grading) will build on this surface too.
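For orientation, here is a minimal sketch of the surface this crate implements. The `generate` signature and error type below are assumptions; the real definition lives in `mnem_core::llm`.

```rust
// Hypothetical trait shape; the real definition lives in mnem_core::llm.
pub trait TextGenerator {
    fn generate(&self, prompt: &str) -> Result<String, Box<dyn std::error::Error>>;
}

// HyDE-style use (`mnem retrieve --hyde`): ask the model for a hypothetical
// answer passage, which the retriever can then embed instead of the raw query.
fn hyde_passage(
    generator: &dyn TextGenerator,
    query: &str,
) -> Result<String, Box<dyn std::error::Error>> {
    generator.generate(&format!("Write a short passage that answers: {query}"))
}
```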
§Invariants
- No tokio / no async. All adapters are sync, built on top of `ureq` (rustls-backed). Matches `mnem-embed-providers` and `mnem-rerank-providers`.
- No API keys in config / on disk. The config stores the name of the env var holding the key (`api_key_env`). The key itself is read from the environment at adapter-construction time (see the sketch after this list).
- `mnem-core` stays free of HTTP clients. `mnem-core` still has zero network / HTTP / tokio in its dependency tree, preserving the WASM-embeddability promise.
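To make the key-handling invariant concrete, here is a hypothetical sketch of the env-var indirection. Only `api_key_env` is named above; the `model` field and the resolver function are illustrative.

```rust
use std::env;

// Hypothetical config shape; only `api_key_env` is documented above.
pub struct OpenAiLlmConfig {
    pub model: String,
    pub api_key_env: String, // name of the env var holding the key, never the key itself
}

// Key resolution happens at adapter-construction time; nothing secret
// is ever written to config or disk.
pub fn resolve_api_key(cfg: &OpenAiLlmConfig) -> Result<String, env::VarError> {
    env::var(&cfg.api_key_env)
}
```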
Re-exports§
pub use config::OllamaLlmConfig;
pub use config::OpenAiLlmConfig;
pub use config::ProviderConfig;
pub use config::open;
Modules§
- config
  `ProviderConfig` and the `open` factory for LLM text-generation adapters. Mirrors `mnem-embed-providers::config`.
- ollama
  Ollama chat adapter.
- openai
  OpenAI chat-completions adapter.
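Putting the pieces together, a hypothetical end-to-end call through the re-exported `open` factory; `open`'s exact signature and return type are assumptions, not the crate's confirmed API.

```rust
use mnem_llm_providers::{open, ProviderConfig};

// Hypothetical wiring; assumes `open` returns a boxed TextGenerator and
// that ProviderConfig selects between the OpenAI and Ollama adapters.
fn expand_query(
    cfg: &ProviderConfig,
    query: &str,
) -> Result<String, Box<dyn std::error::Error>> {
    // Reads the API key from the env var named in the config at construction time.
    let generator = open(cfg)?;
    generator.generate(&format!("Write a short passage that answers: {query}"))
}
```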