pub async fn generate(
patch: String,
remaining_tokens: usize,
model: Model,
settings: Option<&AppConfig>,
) -> Result<Response>Expand description
Generates a commit message using the AI model. By default it uses the multi-step approach, falling back to the single-step approach if that fails.
§Arguments
- `patch` - The git diff to generate a commit message for
- `remaining_tokens` - Maximum number of tokens allowed for the response
- `model` - The AI model to use for generation
- `settings` - Optional application settings to customize the request
§Returns
`Result<openai::Response>` - The generated commit message, or an error
§Errors
Returns an error if:
- `remaining_tokens` is 0
- the OpenAI API call fails
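A minimal usage sketch. The `Model::GPT4` variant, the `response.response` field, and the token budget are assumptions for illustration; the actual names come from the crate's `Model` and `Response` types, which (along with `generate` and the `Result` alias) are assumed to be in scope here.

```rust
// Hedged sketch of calling `generate` from an async context.
// `Model::GPT4` and `response.response` are placeholders for whatever
// variant and field the crate actually exposes.
async fn commit_message_for(diff: String) -> Result<String> {
    // 512 is an arbitrary token budget; `None` uses the default settings.
    let response = generate(diff, 512, Model::GPT4, None).await?;
    Ok(response.response)
}
```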