
Function llm_request_impl 

pub fn llm_request_impl(
    lua: &Lua,
    args: (String, Option<Table>),
) -> Result<Table>

Executes an LLM chat request. Called from a capability-gated context.

§Arguments (from Lua)

  • prompt - User message text
  • opts - Optional table:
    • provider - `"ollama"` (default), `"openai"`, or `"anthropic"`
    • base_url - Provider base URL (default per provider)
    • model - Model name (default per provider)
    • api_key - API key (falls back to env var)
    • system_prompt - System prompt text
    • session_id - Session ID for multi-turn (nil = new session)
    • temperature - Sampling temperature
    • max_tokens - Max completion tokens
    • timeout - Request timeout in seconds (default: 120)
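A Lua-side sketch of a call using these options. The Lua binding name is not stated on this page; `llm.request` is assumed here for illustration, as are the model name and prompt text:

```lua
-- Hypothetical binding name; the actual Lua-side function may differ.
local result = llm.request("Summarize the attached log output", {
  provider = "ollama",          -- default provider
  model = "llama3",             -- assumed model name, for illustration only
  system_prompt = "You are a concise assistant.",
  temperature = 0.2,
  max_tokens = 512,
  timeout = 60,                 -- seconds; the default is 120
})
```

Omitted fields fall back to their documented defaults (e.g. `api_key` falls back to the provider's environment variable, and a `nil` `session_id` starts a new session).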

§Returns (Lua table)

  • ok - boolean
  • content - Response text (when ok=true)
  • model - Model name from response
  • session_id - Session ID (new or existing)
  • error - Error message (when ok=false)
  • error_kind - Error classification
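Checking `ok` before reading `content` might look like the following sketch (same assumed `llm.request` binding; field names follow the list above):

```lua
local result = llm.request("Hello", {})   -- no session_id: starts a new session
if result.ok then
  print(result.content)
  -- Pass result.session_id in the next call's opts for multi-turn context.
  local session = result.session_id
else
  -- error and error_kind are only populated when ok is false.
  print(("LLM error (%s): %s"):format(result.error_kind or "?", result.error or "unknown"))
end
```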