pub struct ProviderFactory;
Provider factory for creating LLM and embedding providers.
Provides environment-based auto-detection and explicit provider selection.
Implementations§
impl ProviderFactory

pub fn from_env() -> Result<(Arc<dyn LLMProvider>, Arc<dyn EmbeddingProvider>)>
Auto-detect and create providers from environment.
§Priority
1. EDGEQUAKE_LLM_PROVIDER environment variable (explicit selection)
2. Auto-detect: OLLAMA_HOST → LMSTUDIO_HOST → OPENAI_API_KEY → Mock
§Returns
Returns a tuple of (LLMProvider, EmbeddingProvider). In most cases, the same provider implementation is used for both.
§Errors
Returns error if required configuration for selected provider is missing.
§Examples
std::env::set_var("OLLAMA_HOST", "http://localhost:11434");
let (llm, embedding) = ProviderFactory::from_env()?;
assert_eq!(llm.name(), "ollama");

pub fn create(
    provider_type: ProviderType,
) -> Result<(Arc<dyn LLMProvider>, Arc<dyn EmbeddingProvider>)>

Create providers of an explicitly selected type, using the provider's default model.
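A minimal sketch in the style of the example above; the `ProviderType::Ollama` variant name is an assumption, not confirmed by this page — check the `ProviderType` enum for the actual variants:

```rust
use edgequake_llm::{ProviderFactory, ProviderType};

// Explicit selection instead of env-based auto-detection.
// `ProviderType::Ollama` is an assumed variant name.
let (llm, embedding) = ProviderFactory::create(ProviderType::Ollama)?;
assert_eq!(llm.name(), "ollama");
```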
pub fn create_with_model(
    provider_type: ProviderType,
    model: Option<&str>,
) -> Result<(Arc<dyn LLMProvider>, Arc<dyn EmbeddingProvider>)>
OODA-04: Create specific provider type with a model override.
Like create() but allows specifying a model name instead of using the provider default.
Useful for the CLI, where the user specifies --provider openrouter --model mistral/model.
§Arguments
provider_type - The type of provider to create
model - Optional model name to use instead of the provider default
§Returns
Returns a tuple of (LLMProvider, EmbeddingProvider).
§Errors
Returns error if required configuration for the provider is missing.
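A usage sketch matching the CLI scenario described above; the `ProviderType::OpenRouter` variant name is an assumption:

```rust
use edgequake_llm::{ProviderFactory, ProviderType};

// Override the provider's default model, as the CLI does for
// `--provider openrouter --model mistral/model`.
// `ProviderType::OpenRouter` is an assumed variant name.
let (llm, _embedding) =
    ProviderFactory::create_with_model(ProviderType::OpenRouter, Some("mistral/model"))?;
```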
pub fn from_config(
    config: &ProviderConfig,
) -> Result<(Arc<dyn LLMProvider>, Arc<dyn EmbeddingProvider>)>
OODA-200: Create provider from TOML configuration.
This is the primary entry point for custom providers defined in models.toml. Supports both native providers (OpenAI, Ollama) and OpenAI-compatible APIs.
§Arguments
config - Provider configuration from models.toml
§Returns
Returns a tuple of (LLMProvider, EmbeddingProvider).
§Errors
Returns error if:
- Required API key environment variable is not set
- Base URL is not configured for OpenAI-compatible providers
§Examples
use edgequake_llm::{ModelsConfig, ProviderFactory};
let config = ModelsConfig::load()?;
let provider_config = config.get_provider("zai").unwrap();
let (llm, embedding) = ProviderFactory::from_config(provider_config)?;

pub fn from_config_with_model(
    config: &ProviderConfig,
    model_name: Option<&str>,
) -> Result<(Arc<dyn LLMProvider>, Arc<dyn EmbeddingProvider>)>
Create provider from configuration with a specific model override.
This is used when the user selects a specific model via /model command
that differs from the provider’s default model.
§Arguments
config - Provider configuration from models.toml
model_name - Optional model name to use instead of the default
§Returns
Tuple of (LLM provider, Embedding provider)
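A sketch following the from_config example; the "zai" provider key comes from that example, while the model name passed here is an assumed placeholder for whatever the user picked via /model:

```rust
use edgequake_llm::{ModelsConfig, ProviderFactory};

let config = ModelsConfig::load()?;
let provider_config = config.get_provider("zai").unwrap();
// Model selected via the /model command; "glm-4" is an assumed example name.
let (llm, embedding) =
    ProviderFactory::from_config_with_model(provider_config, Some("glm-4"))?;
```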
pub fn create_azure_openai() -> Result<(Arc<dyn LLMProvider>, Arc<dyn EmbeddingProvider>)>
Create Azure OpenAI provider from environment variables.
Tries AZURE_OPENAI_CONTENTGEN_* first (common enterprise naming),
then falls back to AZURE_OPENAI_* standard variables.
Environment variables (CONTENTGEN variant):
AZURE_OPENAI_CONTENTGEN_API_ENDPOINT → endpoint URL
AZURE_OPENAI_CONTENTGEN_API_KEY → API key
AZURE_OPENAI_CONTENTGEN_MODEL_DEPLOYMENT → deployment name
AZURE_OPENAI_CONTENTGEN_API_VERSION → (optional)
Environment variables (standard variant):
AZURE_OPENAI_ENDPOINT → endpoint URL
AZURE_OPENAI_API_KEY → API key
AZURE_OPENAI_DEPLOYMENT_NAME → deployment name
AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME → (optional)
AZURE_OPENAI_API_VERSION → (optional)
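A sketch using the standard-variant variables listed above; the endpoint, key, and deployment values are placeholders, not real resources:

```rust
use edgequake_llm::ProviderFactory;

// Standard-variant configuration; all three values below are placeholders.
std::env::set_var("AZURE_OPENAI_ENDPOINT", "https://my-resource.openai.azure.com");
std::env::set_var("AZURE_OPENAI_API_KEY", "my-api-key");
std::env::set_var("AZURE_OPENAI_DEPLOYMENT_NAME", "my-deployment");
let (llm, embedding) = ProviderFactory::create_azure_openai()?;
```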
pub fn embedding_dimension() -> Result<usize>
pub fn create_embedding_provider(
    provider_name: &str,
    model: &str,
    _dimension: usize,
) -> Result<Arc<dyn EmbeddingProvider>>
Create an embedding provider from workspace configuration.
This is used to create workspace-specific embedding providers for query execution. The provider is configured with the workspace’s embedding model and dimension.
@implements SPEC-032: Workspace-specific embedding in query process
§Arguments
provider_name - Provider type (e.g., "openai", "ollama", "lmstudio", "vscode-copilot", "mock")
model - Embedding model name (e.g., "text-embedding-3-small", "embeddinggemma:latest")
dimension - Embedding dimension (e.g., 1536, 768)
§Returns
Returns an Arc<dyn EmbeddingProvider> configured for the workspace.
§Errors
Returns error if the provider type is unknown or required configuration is missing.
§Examples
let provider = ProviderFactory::create_embedding_provider(
    "ollama",
    "embeddinggemma:latest",
    768,
)?;
assert_eq!(provider.dimension(), 768);

pub fn create_llm_provider(
    provider_name: &str,
    model: &str,
) -> Result<Arc<dyn LLMProvider>>
Create an LLM provider from workspace configuration.
This is used to create workspace-specific LLM providers for ingestion/extraction. The provider is configured with the workspace’s LLM model.
@implements SPEC-032: Workspace-specific LLM in ingestion process
§Arguments
provider_name - Provider type (e.g., "openai", "ollama", "lmstudio", "mock")
model - LLM model name (e.g., "gpt-4o-mini", "gemma3:12b")
§Returns
Returns an Arc<dyn LLMProvider> configured for the workspace.
§Errors
Returns error if the provider type is unknown or required configuration is missing.
§Examples
let provider = ProviderFactory::create_llm_provider(
"ollama",
"gemma3:12b",
)?;
assert_eq!(provider.model(), "gemma3:12b");