pub struct OpenAI<C: OpenAIConfig> {
pub client: Client,
pub api_key: String,
pub disable_live_stream: bool,
pub config: C,
}
The OpenAI struct is the main entry point for interacting with the OpenAI API.
It contains the API key, the HTTP client, and an endpoint-specific configuration for the API call,
such as the chat completion endpoint. It also contains a boolean flag that disables
live streaming of the chat endpoint.
Fields
client: Client
The HTTP client used to make requests to the OpenAI API.
api_key: String
The API key used to authenticate with the OpenAI API.
disable_live_stream: bool
A boolean flag that disables live streaming of the chat endpoint.
config: C
An endpoint-specific configuration struct that holds all necessary parameters for the API call.
Implementations
impl<C: OpenAIConfig + Serialize + Debug> OpenAI<C>
pub fn new() -> Self
pub fn with_config(self, config: C) -> Self
pub fn disable_stdout(self) -> Self
Disables standard output for the instance of OpenAI; output is enabled by default.
This is only relevant for chat completion, which would otherwise print the
AI assistant's messages to the terminal.
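A minimal sketch of silencing terminal output in a builder chain, in the style of the other examples on this page (the prompt is illustrative):

```rust
use aionic::openai::{Chat, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Build a chat client that will not echo the assistant's reply to stdout;
    // the response is still returned as a String from `ask`.
    let mut client = OpenAI::<Chat>::new().disable_stdout();
    let answer = client.ask("What is the meaning of life?", false).await?;
    println!("collected answer: {}", answer);
    Ok(())
}
```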
pub fn is_valid_temperature(&mut self, temperature: f64, limit: f64) -> bool
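A hypothetical sketch of validating a temperature before applying it (the 2.0 limit follows the temperature range documented for set_temperature below):

```rust
use aionic::openai::{Chat, OpenAI};

fn main() {
    let mut client = OpenAI::<Chat>::new();
    // Check a candidate temperature against the API's upper limit of 2.0
    // before committing it with `set_temperature`.
    let temperature = 0.7;
    if client.is_valid_temperature(temperature, 2.0) {
        let _client = client.set_temperature(temperature);
    }
}
```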
pub async fn models(
    &mut self,
) -> Result<Vec<String>, Box<dyn Error + Send + Sync>>
Fetches a list of available models from the OpenAI API.
This method sends a GET request to the OpenAI API and returns a vector of identifiers of
all available models.
Returns
A Result which is:
- Ok if the request was successful, carrying a Vec<String> of model identifiers.
- Err if the request or the parsing failed, carrying an error of type Box<dyn std::error::Error + Send + Sync>.
Errors
This method will return an error if the GET request fails, or if the response from the
OpenAI API cannot be parsed into a ModelsResponse.
Example
use aionic::openai::{OpenAI, Chat};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Chat>::new();
    match client.models().await {
        Ok(models) => println!("Models: {:?}", models),
        Err(e) => println!("Error: {}", e),
    }
    Ok(())
}
Note
This method is async and needs to be awaited.
pub async fn check_model(
    &mut self,
    model: &str,
) -> Result<Model, Box<dyn Error + Send + Sync>>
Fetches a specific model by identifier from the OpenAI API.
This method sends a GET request to the OpenAI API for a specific model and returns the Model.
Parameters
- model: A &str that represents the name of the model to fetch.
Returns
A Result which is:
- Ok if the request was successful, carrying the Model.
- Err if the request or the parsing failed, carrying an error of type Box<dyn std::error::Error + Send + Sync>.
Errors
This method will return an error if the GET request fails, or if the response from the
OpenAI API cannot be parsed into a Model.
Example
use aionic::openai::{OpenAI, Chat};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Chat>::new();
    match client.check_model("gpt-3.5-turbo").await {
        Ok(model) => println!("Model: {:?}", model),
        Err(e) => println!("Error: {}", e),
    }
    Ok(())
}
Note
This method is async and needs to be awaited.
pub async fn create_file_upload_part<P: AsRef<Path> + Send>(
    &mut self,
    path: P,
) -> Result<Part, Box<dyn Error + Send + Sync>>
Creates a file upload part for a multi-part upload operation.
This method reads the file at the given path, prepares it for uploading, and
returns a Part that represents this file in the multi-part upload operation.
Type Parameters
- P: The type of the file path. Must implement the AsRef<Path> trait.
Parameters
- path: The path of the file to upload. This can be any type that implements AsRef<Path>.
Returns
A Result which is:
- Ok if the file was read successfully and the Part was created, carrying the Part.
- Err if there was an error reading the file or creating the Part, carrying an error of type Box<dyn Error + Send + Sync>.
Errors
This method will return an error if there was an error reading the file at the given path,
or if there was an error creating the Part (for example, if the MIME type was not recognized).
Example
use aionic::openai::{OpenAI, Chat};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Chat>::new();
    match client.create_file_upload_part("path/to/file.txt").await {
        Ok(_part) => println!("Part created successfully."),
        Err(e) => println!("Error: {}", e),
    }
    Ok(())
}
Note
This method is async and needs to be awaited.
pub async fn handle_api_errors(
    &mut self,
    res: Response,
) -> Result<Response, Box<dyn Error + Send + Sync>>
A helper function to handle potential errors from OpenAI API responses.
Arguments
- res: A Response object from the OpenAI API call.
Returns
Result<Response, Box<dyn std::error::Error + Send + Sync>>:
Returns the original Response object if the status code indicates success.
If the status code indicates an error, it attempts to deserialize the response
into an OpenAIError and returns a std::io::Error constructed from the error message.
impl OpenAI<Chat>
pub fn set_model<S: Into<String>>(self, model: S) -> Self
Sets the model of the AI assistant.
Arguments
- model: A string that specifies the model name to be used by the AI assistant.
Returns
This function returns the instance of the AI assistant with the specified model.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let _res = OpenAI::<Chat>::new()
        .set_model(Chat::get_default_model())
        .set_temperature(Chat::get_default_temperature())
        .set_max_tokens(Chat::get_default_max_tokens())
        .set_stream_responses(Chat::get_default_stream())
        .set_primer(context_primer)
        .ask(message, false)
        .await?;
    Ok(())
}
pub fn set_max_tokens(self, max_tokens: u64) -> Self
Sets the maximum number of tokens that the AI model can generate in a single response.
Arguments
- max_tokens: An unsigned 64-bit integer that specifies the maximum number of tokens the AI model can generate in a single response.
Returns
This function returns the instance of the AI assistant with the specified maximum number of tokens.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let _res = OpenAI::<Chat>::new()
        .set_model(Chat::get_default_model())
        .set_temperature(Chat::get_default_temperature())
        .set_max_tokens(Chat::get_default_max_tokens())
        .set_stream_responses(Chat::get_default_stream())
        .set_primer(context_primer)
        .ask(message, false)
        .await?;
    Ok(())
}
pub fn set_messages(self, messages: Vec<Message>) -> Self
pub fn set_temperature(self, temperature: f64) -> Self
Sets the temperature of the AI model's responses.
The temperature setting adjusts the randomness of the AI's responses. Higher values produce more random responses, while lower values produce more deterministic responses. The allowed range of values is 0.0 to 2.0, with 0.0 being the most deterministic and 2.0 the most random.
Arguments
- temperature: A float that specifies the temperature.
Returns
This function returns the instance of the AI assistant with the specified temperature.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let _res = OpenAI::<Chat>::new()
        .set_model(Chat::get_default_model())
        .set_temperature(Chat::get_default_temperature())
        .set_max_tokens(Chat::get_default_max_tokens())
        .set_stream_responses(Chat::get_default_stream())
        .set_primer(context_primer)
        .ask(message, false)
        .await?;
    Ok(())
}
pub fn set_stream_responses(self, streamed: bool) -> Self
Sets the streaming configuration of the AI assistant.
If streaming is enabled, the AI assistant fetches and processes the AI's responses as they arrive. If it is disabled, the assistant collects all of the AI's responses at once and returns them as a single response.
Arguments
- streamed: A boolean that specifies whether streaming should be enabled.
Returns
This function returns the instance of the AI assistant with the specified streaming setting.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let _res = OpenAI::<Chat>::new()
        .set_model(Chat::get_default_model())
        .set_temperature(Chat::get_default_temperature())
        .set_max_tokens(Chat::get_default_max_tokens())
        .set_stream_responses(Chat::get_default_stream())
        .set_primer(context_primer)
        .ask(message, false)
        .await?;
    Ok(())
}
pub fn set_primer<S: Into<String>>(self, primer_msg: S) -> Self
Sets a primer message for the AI assistant.
The primer message is inserted at the beginning of the messages vector in the config struct.
This can be used to prime the AI model with a certain context or instruction.
Arguments
- primer_msg: A string that specifies the primer message.
Returns
This function returns the instance of the AI assistant with the specified primer message.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let mut client = OpenAI::<Chat>::new().set_primer(context_primer);

    client
        .ask(message, true) // <-- notice the change here
        .await?;

    client.ask("What did I just ask you earlier?", true).await?;
    Ok(())
}

More examples:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let _res = OpenAI::<Chat>::new()
        .set_model(Chat::get_default_model())
        .set_temperature(Chat::get_default_temperature())
        .set_max_tokens(Chat::get_default_max_tokens())
        .set_stream_responses(Chat::get_default_stream())
        .set_primer(context_primer)
        .ask(message, false)
        .await?;
    Ok(())
}
pub fn get_last_message(&self) -> Option<&Message>
Returns the last message in the AI assistant's configuration.
Returns
This function returns an Option that contains a reference to the last Message in the config struct if it exists, or None if it doesn't.
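A minimal sketch of inspecting the persisted conversation after a call to ask (that Message implements Debug is an assumption):

```rust
use aionic::openai::{Chat, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Chat>::new();
    // Persist the exchange so it remains in the config's messages vector.
    client.ask("Hello!", true).await?;
    // Inspect the most recent message kept in the configuration, if any.
    if let Some(last) = client.get_last_message() {
        println!("last message: {:?}", last);
    }
    Ok(())
}
```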
pub fn clear_state(self) -> Self
Clears the messages in the AI assistant's configuration to start from a clean state. This is only necessary in very specific cases.
Returns
This function returns the instance of the AI assistant with no messages in its configuration.
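A minimal sketch of resetting the conversation between two unrelated prompts (the prompts are illustrative):

```rust
use aionic::openai::{Chat, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Chat>::new();
    client.ask("Summarize the plot of Hamlet.", true).await?;
    // Drop the persisted messages so the next prompt starts fresh,
    // without any influence from the earlier exchange.
    let mut client = client.clear_state();
    client.ask("What is 2 + 2?", true).await?;
    Ok(())
}
```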
pub async fn ask<P: Into<Message> + Send>(
    &mut self,
    prompt: P,
    persist_state: bool,
) -> Result<String, Box<dyn Error + Send + Sync>>
Makes a request to OpenAI's GPT model and retrieves a response based on the provided prompt.
This function accepts a prompt, converts it into a message, and sends a request to the OpenAI API.
Depending on the streaming configuration (is_streamed), the function either collects all of the AI's responses
at once, or fetches and processes them as they arrive.
Arguments
- prompt: A value that implements Into<Message>. This will be converted into a message and sent to the API as the prompt for the AI model.
- persist_state: If true, the function will push the AI's response to the messages vector in the config struct. If false, it will remove the last message from the messages vector.
Returns
- Ok(String): A success value containing the AI's response as a string.
- Err(Box<dyn std::error::Error + Send + Sync>): An error value. This is a dynamic error, meaning it could represent various kinds of failures. The function will return an error if any step in the process fails, such as making the HTTP request, parsing the JSON response, or if there's an issue with the streaming process.
Errors
This function will return an error if the HTTP request fails, the JSON response from the API cannot be parsed, or if an error occurs during streaming.
Examples
use aionic::openai::chat::Chat;
use aionic::openai::OpenAI;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let prompt = "Hello, world!";
    let mut client = OpenAI::<Chat>::new();
    let result = client.ask(prompt, true).await;
    match result {
        Ok(response) => println!("{}", response),
        Err(e) => println!("Error: {}", e),
    }
    Ok(())
}
Note
This function is async and must be awaited when called.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let mut client = OpenAI::<Chat>::new().set_primer(context_primer);

    client
        .ask(message, true) // <-- notice the change here
        .await?;

    client.ask("What did I just ask you earlier?", true).await?;
    Ok(())
}
More examples:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let context_primer = "Answer as if you were Yoda";
    let message = "What is the meaning of life?";

    let _res = OpenAI::<Chat>::new()
        .set_model(Chat::get_default_model())
        .set_temperature(Chat::get_default_temperature())
        .set_max_tokens(Chat::get_default_max_tokens())
        .set_stream_responses(Chat::get_default_stream())
        .set_primer(context_primer)
        .ask(message, false)
        .await?;
    Ok(())
}
pub async fn chat(&mut self) -> Result<(), Box<dyn Error + Send + Sync>>
Starts a chat session with the AI assistant.
This function uses a Readline-style interface for input and output. The user types a message at the >>> prompt,
and the message is sent to the AI assistant using the ask function. The AI's response is then printed to the console.
If the user enters CTRL-C, the function prints "CTRL-C" and exits the chat session.
If the user enters CTRL-D, the function prints "CTRL-D" and exits the chat session.
If there's an error during readline, the function prints the error message and exits the chat session.
Returns
- Ok(()): A success value indicating that the chat session ended normally.
- Err(Box<dyn std::error::Error + Send + Sync>): An error value. This is a dynamic error, meaning it could represent various kinds of failures. The function will return an error if any step in the process fails, such as reading a line from the console, or if there's an error in the ask function.
Errors
This function will return an error if the readline fails or if there's an error in the ask function.
Examples
use aionic::openai::chat::Chat;
use aionic::openai::OpenAI;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Chat>::new();
    let result = client.chat().await;
    match result {
        Ok(()) => println!("Chat session ended."),
        Err(e) => println!("Error during chat session: {}", e),
    }
    Ok(())
}
Note
This function is async and must be awaited when called.
impl OpenAI<Image>
pub fn set_response_format(self, response_format: &ResponseDataType) -> Self
Allows setting the return format of the response. ResponseDataType is an enum with the following variants:
- Url: The response will be a vector of URLs to the generated images.
- Base64Json: The response will be a vector of base64-encoded images.
pub fn set_max_images(self, number_of_images: u64) -> Self
Allows setting the number of images to be generated.
pub fn set_size(self, size: &Size) -> Self
Allows setting the dimensions of the generated images.
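A hypothetical sketch combining these setters with the create method documented below; the ResponseDataType variant is named on this page, but the import paths and prompt are assumptions:

```rust
use aionic::openai::{Image, OpenAI};
use aionic::openai::image::ResponseDataType; // assumed module path

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Configure the image client before issuing the request:
    // two images back, delivered as URLs rather than base64 payloads.
    let mut client = OpenAI::<Image>::new()
        .set_max_images(2)
        .set_response_format(&ResponseDataType::Url);
    let urls = client.create("a watercolor painting of a lighthouse").await?;
    for url in urls {
        println!("{}", url);
    }
    Ok(())
}
```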
pub async fn create<S: Into<String> + Send>(
    &mut self,
    prompt: S,
) -> Result<Vec<String>, Box<dyn Error + Send + Sync>>
Generates an image based on a textual description.
This function sets the prompt to the given string and sends a request to the OpenAI API to create an image.
The function then parses the response and returns a vector of image URLs.
Arguments
- prompt: A string that describes the image to be generated.
Returns
This function returns a Result with a vector of strings on success, each string being a URL to an image.
If there's an error, it returns a dynamic error.
pub async fn edit<S: Into<String> + Send>(
    &mut self,
    prompt: S,
    image_file_path: S,
    mask: Option<S>,
) -> Result<Vec<String>, Box<dyn Error + Send + Sync>>
Modifies an existing image based on a textual description.
This function sets the image and optionally the mask, then sets the prompt to the given string and sends a request to the OpenAI API to modify the image.
The function then parses the response and returns a vector of image URLs.
Arguments
- prompt: A string that describes the modifications to be made to the image.
- image_file_path: A string that specifies the path to the image file to be modified.
- mask: An optional string that specifies the path to a mask file. If the mask is not provided, it is set to None.
Returns
This function returns a Result with a vector of strings on success, each string being a URL to an image.
If there's an error, it returns a dynamic error.
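A minimal sketch of an edit call without a mask (the prompt and file path are illustrative):

```rust
use aionic::openai::{Image, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Image>::new();
    // Edit a local image; pass None to edit without a mask file.
    let urls = client
        .edit("add a red balloon to the sky", "images/input.png", None)
        .await?;
    println!("edited images: {:?}", urls);
    Ok(())
}
```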
pub async fn variation<S: Into<String> + Send>(
    &mut self,
    image_file_path: S,
) -> Result<Vec<String>, Box<dyn Error + Send + Sync>>
Generates variations of an existing image.
This function sets the image and sends a request to the OpenAI API to create variations of the image.
The function then parses the response and returns a vector of image URLs.
Arguments
- image_file_path: A string that specifies the path to the image file.
Returns
This function returns a Result with a vector of strings on success, each string being a URL to a new variation of the image.
If there's an error, it returns a dynamic error.
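A minimal sketch of requesting variations of a local image (the file path is illustrative):

```rust
use aionic::openai::{Image, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Image>::new();
    // Request variations of an existing local image file.
    let urls = client.variation("images/input.png").await?;
    for url in urls {
        println!("{}", url);
    }
    Ok(())
}
```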
impl OpenAI<Embedding>
pub async fn embed<S: Into<InputType> + Send>(
    &mut self,
    prompt: S,
) -> Result<EmbeddingResponse, Box<dyn Error + Send + Sync>>
Sends a POST request to the OpenAI API to get embeddings for the given prompt.
This method accepts a prompt of type S which can be converted into InputType
(an enum that encapsulates the different types of possible inputs). The method converts
the provided prompt into InputType and assigns it to the input field of the config
instance variable. It then sends a POST request to the OpenAI API and attempts to parse
the response as EmbeddingResponse.
Type Parameters
- S: The type of the prompt. Must implement the Into<InputType> trait.
Parameters
- prompt: The prompt for which to get embeddings. Can be a String, a Vec<String>, a Vec<u64>, or a &str that is converted into an InputType.
Returns
A Result which is:
- Ok if the request was successful, carrying the EmbeddingResponse which contains the embeddings.
- Err if the request or the parsing failed, carrying an error of type Box<dyn std::error::Error + Send + Sync>.
Errors
This method will return an error if the POST request fails, or if the response from the
OpenAI API cannot be parsed into an EmbeddingResponse.
Example
use aionic::openai::embeddings::Embedding;
use aionic::openai::OpenAI;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Embedding>::new();
    let prompt = "Hello, world!";
    match client.embed(prompt).await {
        Ok(response) => println!("Embeddings: {:?}", response),
        Err(e) => println!("Error: {}", e),
    }
    Ok(())
}
Note
This method is async and needs to be awaited.
impl OpenAI<Audio>
pub fn set_prompt<S: Into<String>>(self, prompt: S) -> Self
pub fn set_response_format(&mut self, format: AudioResponseFormat) -> &mut Self
pub async fn transcribe<P: AsRef<Path> + Sync + Send>(
    &mut self,
    audio_file: P,
) -> Result<AudioResponse, Box<dyn Error + Send + Sync>>
pub async fn translate<P: AsRef<Path> + Send + Sync>(
    &mut self,
    audio_file: P,
) -> Result<AudioResponse, Box<dyn Error + Send + Sync>>
Translate an audio file. Currently only supports translating to English.
Arguments
- audio_file: The path to the audio file to translate.
Returns
Result<AudioResponse, Box<dyn std::error::Error + Send + Sync>>:
An AudioResponse object representing the translation of the audio file,
or an error if the request fails.
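A minimal sketch of transcribing and translating the same local audio file (the path is illustrative; that AudioResponse implements Debug is an assumption):

```rust
use aionic::openai::{Audio, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Audio>::new();
    // Transcribe in the source language...
    let transcript = client.transcribe("samples/speech.mp3").await?;
    println!("transcript: {:?}", transcript);
    // ...or translate the same file to English.
    let translation = client.translate("samples/speech.mp3").await?;
    println!("translation: {:?}", translation);
    Ok(())
}
```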
impl OpenAI<Files>
pub async fn list(
    &mut self,
) -> Result<FileResponse, Box<dyn Error + Send + Sync>>
List all files that have been uploaded.
Returns
Result<FileResponse, Box<dyn std::error::Error + Send + Sync>>:
A FileResponse object representing all uploaded files,
or an error if the request fails.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Files>::new();
    let res = client.list().await?;
    println!("current uploads: {:#?}", res);

    let res = client.upload("examples/samples/test.jsonl").await?;
    println!("uploaded: {:#?}", res);

    println!("waiting for file to be processed...");
    // sleep for 3 seconds to allow the file to be processed
    tokio::time::sleep(tokio::time::Duration::from_secs(3)).await;

    let res = client.list().await?;
    println!("current uploads: {:#?}", res);

    let res = client.delete(res.data[0].id.clone()).await?;
    println!("deleted: {:#?}", res);
    Ok(())
}
pub async fn retrieve<S: Into<String> + Display + Sync + Send>(
    &mut self,
    file_id: S,
) -> Result<FileData, Box<dyn Error + Send + Sync>>
pub async fn retrieve_content<S: Into<String> + Display + Send + Sync>(
    &mut self,
    file_id: S,
) -> Result<Vec<PromptCompletion>, Box<dyn Error + Send + Sync>>
pub async fn upload<P: AsRef<Path> + Send + Sync>(
    &mut self,
    file: P,
) -> Result<FileData, Box<dyn Error + Send + Sync>>
Upload a file to the OpenAI API.
Arguments
- file: The path to the file to upload.
Returns
Result<FileData, Box<dyn std::error::Error + Send + Sync>>:
A FileData object representing the uploaded file's details,
or an error if the request fails.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Files>::new();
    let res = client.list().await?;
    println!("current uploads: {:#?}", res);

    let res = client.upload("examples/samples/test.jsonl").await?;
    println!("uploaded: {:#?}", res);

    println!("waiting for file to be processed...");
    // sleep for 3 seconds to allow the file to be processed
    tokio::time::sleep(tokio::time::Duration::from_secs(3)).await;

    let res = client.list().await?;
    println!("current uploads: {:#?}", res);

    let res = client.delete(res.data[0].id.clone()).await?;
    println!("deleted: {:#?}", res);
    Ok(())
}
pub async fn delete<S: Into<String> + Display + Send + Sync>(
    &mut self,
    file_id: S,
) -> Result<DeleteResponse, Box<dyn Error + Send + Sync>>
Delete a specific file.
Arguments
- file_id: A string that holds the unique id of the file.
Returns
Result<DeleteResponse, Box<dyn std::error::Error + Send + Sync>>:
A DeleteResponse object representing the response from the delete request,
or an error if the request fails.
Examples found in repository:
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut client = OpenAI::<Files>::new();
    let res = client.list().await?;
    println!("current uploads: {:#?}", res);

    let res = client.upload("examples/samples/test.jsonl").await?;
    println!("uploaded: {:#?}", res);

    println!("waiting for file to be processed...");
    // sleep for 3 seconds to allow the file to be processed
    tokio::time::sleep(tokio::time::Duration::from_secs(3)).await;

    let res = client.list().await?;
    println!("current uploads: {:#?}", res);

    let res = client.delete(res.data[0].id.clone()).await?;
    println!("deleted: {:#?}", res);
    Ok(())
}
impl OpenAI<FineTune>
pub async fn create<S: Into<String> + Send + Sync>(
    &mut self,
    training_file: S,
) -> Result<FineTuneResponse, Box<dyn Error + Send + Sync>>
Create a fine-tune from an uploaded training_file.
Arguments
- training_file: A string that holds the unique id of the file.
Returns
Result<FineTuneResponse, Box<dyn std::error::Error + Send + Sync>>:
A FineTuneResponse object representing the result of the fine-tune request,
or an error if the request fails.
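A hypothetical sketch of chaining a file upload with a fine-tune request; the import paths and the id field on FileData mirror the Files examples above and are assumptions:

```rust
use aionic::openai::{Files, FineTune, OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Upload the training data first, then start a fine-tune with its file id.
    let mut files = OpenAI::<Files>::new();
    let uploaded = files.upload("examples/samples/test.jsonl").await?;

    let mut tuner = OpenAI::<FineTune>::new();
    let job = tuner.create(uploaded.id).await?;
    println!("fine-tune started: {:?}", job);
    Ok(())
}
```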
pub async fn list(
    &mut self,
) -> Result<FineTuneListResponse, Box<dyn Error + Send + Sync>>
List all fine-tunes.
Returns
Result<FineTuneListResponse, Box<dyn std::error::Error + Send + Sync>>:
A FineTuneListResponse object representing the result of the list fine-tunes request,
or an error if the request fails.
pub async fn retrieve<S: Into<String> + Send + Sync + Display>(
    &mut self,
    fine_tune_id: S,
) -> Result<FineTuneResponse, Box<dyn Error + Send + Sync>>
pub async fn cancel<S: Into<String> + Send + Sync + Display>(
    &mut self,
    fine_tune_id: S,
) -> Result<FineTuneResponse, Box<dyn Error + Send + Sync>>
Immediately cancel a fine-tune job.
Arguments
- fine_tune_id: A string that holds the unique id of the fine-tune job.
Returns
Result<FineTuneResponse, Box<dyn std::error::Error + Send + Sync>>:
A FineTuneResponse object representing the result of the cancel fine-tune request,
or an error if the request fails.
pub async fn list_events<S: Into<String> + Send + Sync + Display>(
    &mut self,
    fine_tune_id: S,
) -> Result<FineTuneEventResponse, Box<dyn Error + Send + Sync>>
Get fine-grained status updates for a fine-tune job.
Arguments
- fine_tune_id: A string that holds the unique id of the fine-tune job.
Returns
Result<FineTuneEventResponse, Box<dyn std::error::Error + Send + Sync>>:
A FineTuneEventResponse object representing the result of the list-events request,
or an error if the request fails.
impl OpenAI<Moderation>
pub async fn moderate<S: Into<String> + Send + Sync>(
    &mut self,
    input: S,
) -> Result<ModerationResponse, Box<dyn Error + Send + Sync>>
Trait Implementations
Auto Trait Implementations
impl<C> Freeze for OpenAI<C> where C: Freeze
impl<C> !RefUnwindSafe for OpenAI<C>
impl<C> Send for OpenAI<C>
impl<C> Sync for OpenAI<C>
impl<C> Unpin for OpenAI<C> where C: Unpin
impl<C> !UnwindSafe for OpenAI<C>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant. Read more
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant. Read more