pub struct AIConfig {
pub provider: String,
pub api_key: Option<String>,
pub model: String,
pub base_url: String,
pub max_sample_length: usize,
pub temperature: f32,
pub max_tokens: u32,
pub retry_attempts: u32,
pub retry_delay_ms: u64,
pub request_timeout_seconds: u64,
pub api_version: Option<String>,
}
AI service configuration parameters.
This structure defines all configuration options for AI providers, including authentication, model parameters, retry behavior, and timeouts.
§Examples
Creating a default configuration:
use subx_cli::config::AIConfig;
let ai_config = AIConfig::default();
assert_eq!(ai_config.provider, "openai");
assert_eq!(ai_config.model, "gpt-4.1-mini");
assert_eq!(ai_config.temperature, 0.3);

§Fields
§provider: String
AI provider name.
Supported canonical values:
- "openai" — hosted OpenAI API
- "openrouter" — hosted OpenRouter API
- "azure-openai" — hosted Azure OpenAI deployments
- "local" — any OpenAI-compatible local, LAN, or VPN endpoint (Ollama, LM Studio, llama.cpp llama-server, vLLM, etc.)
The string "ollama" is accepted as an input alias and is
normalized to "local" by
crate::config::field_validator::normalize_ai_provider; the
persisted on-disk value is always the canonical form.
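The alias handling described above can be illustrated with a standalone sketch; the real logic lives in crate::config::field_validator::normalize_ai_provider, and this function only mirrors the behavior the docs describe:

```rust
/// Standalone sketch of the documented alias normalization; the real
/// implementation is `crate::config::field_validator::normalize_ai_provider`.
fn normalize_ai_provider(provider: &str) -> String {
    match provider {
        // "ollama" is accepted as an input alias for the canonical "local".
        "ollama" => "local".to_string(),
        // All other values pass through unchanged.
        other => other.to_string(),
    }
}

fn main() {
    assert_eq!(normalize_ai_provider("ollama"), "local");
    assert_eq!(normalize_ai_provider("openai"), "openai");
}
```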
§api_key: Option<String>
API key for authentication.
§model: String
AI model name to use.
§base_url: String
API base URL.
§max_sample_length: usize
Maximum sample length per request.
§temperature: f32
AI generation creativity parameter (0.0-1.0).
§max_tokens: u32
Maximum tokens in response.
§retry_attempts: u32
Number of retries on request failure.
§retry_delay_ms: u64
Retry interval in milliseconds.
§request_timeout_seconds: u64
HTTP request timeout in seconds. This controls how long to wait for a response from the AI service. For slow networks or complex requests, you may need to increase this value.
§api_version: Option<String>
Azure OpenAI API version (optional, defaults to latest).
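Putting the fields together, the sketch below builds a configuration for an OpenAI-compatible local endpoint. It re-declares the struct so the example compiles without the subx_cli crate, and the base URL, model name, and numeric values are illustrative assumptions, not the crate's defaults:

```rust
// Standalone mirror of the fields documented above; in real code you
// would `use subx_cli::config::AIConfig;` instead.
#[derive(Debug, Clone)]
pub struct AIConfig {
    pub provider: String,
    pub api_key: Option<String>,
    pub model: String,
    pub base_url: String,
    pub max_sample_length: usize,
    pub temperature: f32,
    pub max_tokens: u32,
    pub retry_attempts: u32,
    pub retry_delay_ms: u64,
    pub request_timeout_seconds: u64,
    pub api_version: Option<String>,
}

/// Build a config pointing at a local OpenAI-compatible server.
/// URL and model name are illustrative assumptions.
fn local_config() -> AIConfig {
    AIConfig {
        provider: "local".to_string(),
        api_key: None, // local endpoints typically need no key
        model: "llama3".to_string(),
        base_url: "http://localhost:11434/v1".to_string(),
        max_sample_length: 4096,
        temperature: 0.3,
        max_tokens: 1024,
        retry_attempts: 3,
        retry_delay_ms: 500,
        request_timeout_seconds: 120, // raised for slower local inference
        api_version: None, // only meaningful for Azure OpenAI
    }
}

fn main() {
    let config = local_config();
    assert_eq!(config.provider, "local");
    assert!(config.api_key.is_none());
}
```

Setting request_timeout_seconds higher than the default is a common adjustment for local inference, where generation latency is dominated by hardware rather than network round trips.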