
Module llm

Enums

LlmBackend
Which LLM backend is available for semantic grouping.
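A minimal sketch of what `LlmBackend` and its detection might look like, assuming the module shells out to CLI tools. Only the name `LlmBackend` comes from this page; the variants, the `detect` helper, and the probed binary names are illustrative assumptions.

```rust
use std::process::Command;

/// Which LLM backend is available for semantic grouping (variants are guesses).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum LlmBackend {
    /// The `claude` CLI (Claude Code).
    Claude,
    /// A generic OpenAI-compatible CLI, as an illustrative placeholder.
    OpenAi,
}

impl LlmBackend {
    /// Detect which backend binary is on PATH by trying to run it;
    /// returns None if neither candidate can be spawned.
    pub fn detect() -> Option<Self> {
        for (bin, backend) in [("claude", LlmBackend::Claude), ("openai", LlmBackend::OpenAi)] {
            if Command::new(bin).arg("--version").output().is_ok() {
                return Some(backend);
            }
        }
        None
    }
}

fn main() {
    // In environments without either CLI installed, detection yields None.
    println!("detected backend: {:?}", LlmBackend::detect());
}
```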

Functions

invoke_llm_text
Invoke the LLM backend with text output format (for free-form markdown responses). For Claude, this uses --output-format text instead of JSON, avoiding the JSON envelope around the response.
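A sketch of how the text-output invocation might be assembled for the Claude backend. The `claude` CLI does accept `--output-format`, but the `-p` prompt flag usage and the helper name here are assumptions, not this crate's actual code.

```rust
use std::process::Command;

/// Build a non-interactive `claude` invocation that returns plain text.
/// (Hypothetical helper; the real crate's construction may differ.)
fn build_claude_text_cmd(prompt: &str) -> Command {
    let mut cmd = Command::new("claude");
    // `--output-format text` yields the raw response body, avoiding the
    // JSON wrapper that `--output-format json` would add around it.
    cmd.arg("-p").arg(prompt).arg("--output-format").arg("text");
    cmd
}

fn main() {
    let cmd = build_claude_text_cmd("Summarize these hunks");
    let args: Vec<String> = cmd
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    assert_eq!(args, ["-p", "Summarize these hunks", "--output-format", "text"]);
    println!("{args:?}");
}
```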
request_grouping
Invoke the LLM backend to group hunks by semantic intent.
request_grouping_with_timeout
Request semantic grouping from the detected LLM backend with a 30-second timeout.
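The 30-second timeout can be sketched with a worker thread and a channel; the real implementation may use a different mechanism (e.g. killing the subprocess directly). `fake_llm_call` is a stand-in for the actual backend invocation.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Placeholder for the real blocking subprocess call to the LLM backend.
fn fake_llm_call(prompt: &str) -> String {
    format!("grouped: {prompt}")
}

/// Run the blocking call on a worker thread; give up after `timeout`.
fn request_with_timeout(prompt: String, timeout: Duration) -> Option<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Sending fails if the receiver already timed out and was dropped;
        // that is fine, so the result is deliberately ignored.
        let _ = tx.send(fake_llm_call(&prompt));
    });
    rx.recv_timeout(timeout).ok()
}

fn main() {
    let out = request_with_timeout("hunks".into(), Duration::from_secs(30));
    assert_eq!(out.as_deref(), Some("grouped: hunks"));
    println!("{out:?}");
}
```

Note that the worker thread is not killed on timeout; it finishes in the background, which is acceptable for a short-lived CLI subprocess but worth keeping in mind for long-running processes.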
request_incremental_grouping
Request incremental grouping: assign new/modified hunks to existing groups or create new ones.
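The incremental bookkeeping can be illustrated as: each new or modified hunk is either assigned to an existing group or starts a new one. The matching predicate here (shared file path) is a deliberately simple stand-in for the LLM's semantic judgment, and all names are hypothetical.

```rust
use std::collections::BTreeMap;

/// Assign new hunks to existing groups (keyed by file path here, as a
/// stand-in for semantic intent) or create new groups as needed.
fn assign_incremental(
    groups: &mut BTreeMap<String, Vec<String>>,
    new_hunks: Vec<(String, String)>, // (file path, hunk id)
) {
    for (path, hunk) in new_hunks {
        // Reuse the matching group if one exists, else create it.
        groups.entry(path).or_default().push(hunk);
    }
}

fn main() {
    let mut groups = BTreeMap::new();
    groups.insert("src/llm.rs".to_string(), vec!["h1".to_string()]);
    assign_incremental(&mut groups, vec![
        ("src/llm.rs".into(), "h2".into()),  // joins the existing group
        ("src/main.rs".into(), "h3".into()), // creates a new group
    ]);
    assert_eq!(groups["src/llm.rs"], vec!["h1", "h2"]);
    assert_eq!(groups["src/main.rs"], vec!["h3"]);
    println!("{groups:?}");
}
```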