pub fn llm_request_impl(
lua: &Lua,
args: (String, Option<Table>),
) -> Result<Table>
Executes an LLM chat request. Called from a capability-gated context.
§Arguments (from Lua)
- `prompt` - User message text
- `opts` - Optional table:
  - `provider` - "ollama" (default), "openai", "anthropic"
  - `base_url` - Provider base URL (default per provider)
  - `model` - Model name (default per provider)
  - `api_key` - API key (falls back to env var)
  - `system_prompt` - System prompt text
  - `session_id` - Session ID for multi-turn (nil = new session)
  - `temperature` - Sampling temperature
  - `max_tokens` - Max completion tokens
  - `timeout` - Per-request timeout in seconds (default: 120)
  - `overall_timeout` - Wall-clock timeout in seconds for the entire resolve loop (default: nil = no limit). When set, the loop aborts with `error_kind = "overall_timeout"` if the deadline is exceeded.
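A minimal sketch of a call from the Lua side. The binding name `llm_request` and the model name are assumptions for illustration; only the argument shapes come from the list above:

```lua
-- Hypothetical Lua-side call; `llm_request` is an assumed binding name.
local result = llm_request("Summarize this log file in one sentence.", {
  provider = "ollama",           -- default provider
  model = "llama3",              -- assumed model name, for illustration
  system_prompt = "You are a concise assistant.",
  temperature = 0.2,
  timeout = 60,                  -- per-request timeout in seconds
  overall_timeout = 300,         -- abort the whole resolve loop after 5 minutes
})
```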
§Returns (Lua table)
- `ok` - boolean
- `content` - Response text (when ok=true)
- `model` - Model name from response
- `session_id` - Session ID (new or existing)
- `error` - Error message (when ok=false)
- `error_kind` - Error classification
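A sketch of handling the returned table, again assuming the hypothetical `llm_request` binding; the field names match the list above:

```lua
if result.ok then
  print(result.content)
  -- Reuse the session for a follow-up turn (multi-turn conversation).
  local followup = llm_request("Now list the three key events.", {
    session_id = result.session_id,
  })
else
  -- error_kind distinguishes failure classes, e.g. "overall_timeout".
  print(("LLM request failed (%s): %s"):format(result.error_kind, result.error))
end
```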