```rust
pub async fn call_openai(
    prompt: &str,
    model: &str,
    max_token: u32,
) -> Result<String, Box<dyn Error>>
```
Sends a prompt to the OpenAI chat API and returns the generated response as a string.

This function uses the `async-openai` crate to interact with a chat completion endpoint (e.g., GPT-4, GPT-4o, GPT-3.5-turbo). The base URL can be overridden via the `OPENAI_BASE_URL` environment variable.
§Arguments
- `prompt` - The text prompt to send to the model.
- `model` - The model ID to use (e.g., `"gpt-4o"`, `"gpt-3.5-turbo"`).
- `max_token` - The maximum number of tokens allowed in the response.
§Returns
A `Result` containing the generated string response on success, or an error on failure.
§Errors
This function will return an error if the request fails, the `OPENAI_BASE_URL` environment variable is misconfigured, or the response cannot be parsed correctly.
§Example
```rust
use git_commit_helper::call_openai;

#[tokio::main]
async fn main() {
    let prompt = "Summarize the following diff...";
    let model = "gpt-4o";
    let max_token = 2048;

    match call_openai(prompt, model, max_token).await {
        Ok(response) => println!("LLM response: {}", response),
        Err(e) => eprintln!("Error calling OpenAI: {}", e),
    }
}
```