Function maybe_process_user_input 

pub async fn maybe_process_user_input(
    client: &LlmClient,
    model: &str,
    input: &str,
) -> String

Guard user input against context overflow.

  • If input.len() <= SKILLLITE_USER_INPUT_MAX_CHARS → pass through unchanged.
  • Otherwise → chunked LLM summarization via the same pipeline as tool results.

Unlike tool results, which have a cheap truncation tier, user inputs are always summarized via the LLM when they exceed the limit, since truncation would silently drop user intent. A short notice is prepended so the model knows the input was compressed.
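The guard logic described above can be sketched roughly as follows. This is a minimal illustration, not the crate's actual implementation: the constant value and the `summarize` closure (standing in for the async chunked LLM summarization pipeline) are assumptions, and the real function is `async` and takes an `LlmClient` and model name.

```rust
// Hypothetical value; the real constant lives in the crate's config.
const SKILLLITE_USER_INPUT_MAX_CHARS: usize = 64;

// Sketch of the guard: `summarize` is a stand-in for the chunked
// LLM summarization pipeline shared with tool results.
fn guard_user_input<F>(input: &str, summarize: F) -> String
where
    F: Fn(&str) -> String,
{
    if input.len() <= SKILLLITE_USER_INPUT_MAX_CHARS {
        // Under the limit: pass through unchanged.
        return input.to_string();
    }
    // Over the limit: always summarize (no truncation tier for user
    // input), and prepend a notice so the model knows the input was
    // compressed rather than authored at this length.
    let summary = summarize(input);
    format!("[note: user input was summarized due to length]\n{summary}")
}
```

Note the asymmetry with tool results: there is deliberately no fast truncation branch here, so the only two outcomes are "unchanged" or "summarized with a notice".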