pub async fn summarize_long_content(client: &LlmClient, model: &str, content: &str) -> String

Summarize long content using an LLM, with a configurable chunk-selection strategy.
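A minimal sketch of how such a function might work. The `LlmClient` type, its API, and the `ChunkStrategy` variants are assumptions (the real client is async and its interface is not shown here); a synchronous stub stands in so the chunk-selection logic itself is runnable.

```rust
/// Hypothetical: which chunks of an over-long input to keep before summarizing.
enum ChunkStrategy {
    /// Keep only the first chunks.
    Head,
    /// Keep the first and last chunks (a "lead + tail" heuristic).
    HeadTail,
}

/// Split `content` into chunks of at most `chunk_size` characters,
/// then keep up to `max_chunks` of them according to `strategy`.
fn select_chunks(
    content: &str,
    chunk_size: usize,
    max_chunks: usize,
    strategy: ChunkStrategy,
) -> Vec<String> {
    let chars: Vec<char> = content.chars().collect();
    let chunks: Vec<String> = chars
        .chunks(chunk_size)
        .map(|c| c.iter().collect())
        .collect();
    if chunks.len() <= max_chunks {
        return chunks;
    }
    match strategy {
        ChunkStrategy::Head => chunks[..max_chunks].to_vec(),
        ChunkStrategy::HeadTail => {
            // Split the budget between the start and the end of the document.
            let head = max_chunks / 2 + max_chunks % 2;
            let tail = max_chunks - head;
            let mut kept = chunks[..head].to_vec();
            kept.extend_from_slice(&chunks[chunks.len() - tail..]);
            kept
        }
    }
}

// Stub standing in for the real (async) client; it echoes prompt length
// instead of calling an LLM API.
struct LlmClient;
impl LlmClient {
    fn complete(&self, _model: &str, prompt: &str) -> String {
        format!("[summary of {} prompt chars]", prompt.len())
    }
}

// Synchronous sketch of the documented function (the real one is async).
fn summarize_long_content(client: &LlmClient, model: &str, content: &str) -> String {
    let selected = select_chunks(content, 2000, 4, ChunkStrategy::HeadTail);
    let prompt = format!("Summarize the following:\n{}", selected.join("\n...\n"));
    client.complete(model, &prompt)
}

fn main() {
    let client = LlmClient;
    let long_text = "lorem ipsum ".repeat(1000);
    println!("{}", summarize_long_content(&client, "example-model", &long_text));
}
```

The chunk size, chunk count, and default strategy are illustrative; a real implementation would likely take the strategy as a parameter or configuration field rather than hard-coding it.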