Struct openai_openapi::OpenAiClient

pub struct OpenAiClient { /* private fields */ }
Implementations
impl OpenAiClient
pub fn new(url: &str) -> Self
pub fn with_authentication(self, authentication: OpenAiAuthentication) -> Self
pub fn with_middleware<M: Middleware + 'static>(self, middleware: M) -> Self
pub async fn list_engines(&self) -> Result<ListEnginesResponse>
Lists the currently available engines, and provides basic information about each one such as the owner and availability.
See endpoint docs at https://beta.openai.com/docs/api-reference/engines/list.
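A minimal usage sketch (not taken from the crate docs): it assumes a Tokio async runtime and the public OpenAI base URL; constructing an OpenAiAuthentication value is left to the crate's documentation.

```rust
use openai_openapi::OpenAiClient;

#[tokio::main]
async fn main() {
    // Point the client at the API base URL. Authentication and middleware
    // can be layered on with the builder-style methods shown above,
    // e.g. .with_authentication(auth) once an OpenAiAuthentication is built.
    let client = OpenAiClient::new("https://api.openai.com/v1");

    // List the available engines; each entry carries owner and availability info.
    match client.list_engines().await {
        Ok(engines) => println!("{engines:?}"),
        Err(e) => eprintln!("request failed: {e:?}"),
    }
}
```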
pub async fn retrieve_engine(&self, engine_id: String) -> Result<Engine>
Retrieves an engine instance, providing basic information about the engine such as the owner and availability.
pub async fn create_completion(
    &self,
    engine_id: String,
    prompt: Option<Value>,
    suffix: Option<Value>,
    max_tokens: Option<i64>,
    temperature: Option<f64>,
    top_p: Option<f64>,
    n: Option<i64>,
    stream: Option<bool>,
    logprobs: Option<i64>,
    echo: Option<bool>,
    stop: Option<Value>,
    presence_penalty: Option<f64>,
    frequency_penalty: Option<f64>,
    best_of: Option<i64>,
    logit_bias: Option<Value>,
    user: String
) -> Result<CreateCompletionResponse>
Creates a new completion for the provided prompt and parameters.
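A hedged call sketch, assuming `Value` is `serde_json::Value` and that `client` was built with `OpenAiClient::new` as above; the engine id and user label are illustrative, and unused optional parameters are passed as `None`:

```rust
use openai_openapi::OpenAiClient;
use serde_json::json;

async fn hello_completion(client: &OpenAiClient) {
    // `prompt` and `stop` take JSON values because the API accepts
    // either a string or an array for them.
    let resp = client
        .create_completion(
            "text-davinci-002".to_string(), // engine_id (illustrative)
            Some(json!("Say hello")),       // prompt
            None,                           // suffix
            Some(16),                       // max_tokens
            Some(0.7),                      // temperature
            None,                           // top_p
            None,                           // n
            None,                           // stream
            None,                           // logprobs
            None,                           // echo
            None,                           // stop
            None,                           // presence_penalty
            None,                           // frequency_penalty
            None,                           // best_of
            None,                           // logit_bias
            "example-user".to_string(),     // user
        )
        .await;
    println!("{resp:?}");
}
```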
pub async fn create_completion_from_model(
    &self
) -> Result<CreateCompletionResponse>
Creates a completion using a fine-tuned model.
pub async fn create_edit(
    &self,
    engine_id: String,
    input: Option<String>,
    instruction: String,
    temperature: Option<f64>,
    top_p: Option<f64>
) -> Result<CreateEditResponse>
Creates a new edit for the provided input, instruction, and parameters.
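A short usage sketch following the signature above; the edit-engine id and input text are illustrative, and `client` is assumed to be an `OpenAiClient` in an async context:

```rust
use openai_openapi::OpenAiClient;

async fn fix_spelling(client: &OpenAiClient) {
    let resp = client
        .create_edit(
            "text-davinci-edit-001".to_string(),            // engine_id (illustrative)
            Some("What day of the wek is it?".to_string()), // input
            "Fix the spelling mistakes".to_string(),        // instruction
            None,                                           // temperature
            None,                                           // top_p
        )
        .await;
    println!("{resp:?}");
}
```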
pub async fn create_search(
    &self,
    engine_id: String,
    query: String,
    documents: Option<Vec<String>>,
    file: Option<String>,
    max_rerank: Option<i64>,
    return_metadata: Option<bool>,
    user: String
) -> Result<CreateSearchResponse>
The search endpoint computes similarity scores between the provided query and documents. Documents can be passed directly to the API if there are no more than 200 of them.
To go beyond the 200-document limit, documents can be processed offline and then used for efficient retrieval at query time. When file is set, the search endpoint searches over all the documents in the given file and returns up to the max_rerank number of documents, along with their search scores.
The similarity score is a positive score that usually ranges from 0 to 300 (but can sometimes go higher); a score above 200 usually means the document is semantically similar to the query.
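A sketch of an inline-document search, per the signature above; engine id, query, and documents are illustrative, and `client` is an assumed `OpenAiClient` value:

```rust
use openai_openapi::OpenAiClient;

async fn rank_documents(client: &OpenAiClient) {
    // Pass documents inline (up to 200 of them); to search a larger
    // corpus, upload a file first and pass Some(file_id) as `file` instead.
    let resp = client
        .create_search(
            "ada".to_string(),               // engine_id (illustrative)
            "the weather today".to_string(), // query
            Some(vec![
                "White House".to_string(),
                "hospital".to_string(),
                "school".to_string(),
            ]),                              // documents
            None,                            // file
            None,                            // max_rerank
            Some(false),                     // return_metadata
            "example-user".to_string(),      // user
        )
        .await;
    println!("{resp:?}");
}
```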
pub async fn list_files(&self) -> Result<ListFilesResponse>
Returns a list of files that belong to the user’s organization.
pub async fn create_file(&self) -> Result<OpenAIFile>
Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.
pub async fn retrieve_file(&self, file_id: String) -> Result<OpenAIFile>
Returns information about a specific file.
pub async fn delete_file(&self, file_id: String) -> Result<DeleteFileResponse>
Delete a file.
pub async fn download_file(&self, file_id: String) -> Result<String>
Returns the contents of the specified file.
pub async fn create_answer(
    &self,
    model: String,
    question: String,
    examples: Vec<Vec<String>>,
    examples_context: String,
    documents: Option<Vec<String>>,
    file: Option<String>,
    search_model: Option<String>,
    max_rerank: Option<i64>,
    temperature: Option<f64>,
    logprobs: Option<i64>,
    max_tokens: Option<i64>,
    stop: Option<Value>,
    n: Option<i64>,
    logit_bias: Option<Value>,
    return_metadata: Option<bool>,
    return_prompt: Option<bool>,
    expand: Option<Vec<Value>>,
    user: String
) -> Result<CreateAnswerResponse>
Answers the specified question using the provided documents and examples.
The endpoint first searches over provided documents or files to find relevant context. The relevant context is combined with the provided examples and question to create the prompt for completion.
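A hedged sketch of the call, following the positional order of the signature above; the model, question, examples, and documents are illustrative, and all remaining optional parameters are passed as `None`:

```rust
use openai_openapi::OpenAiClient;

async fn ask(client: &OpenAiClient) {
    let resp = client
        .create_answer(
            "curie".to_string(),                         // model (illustrative)
            "Where is the Valley of Kings?".to_string(), // question
            vec![vec![
                "What is human life expectancy in the United States?".to_string(),
                "78 years.".to_string(),
            ]],                                          // examples: [question, answer] pairs
            "In 2017, U.S. life expectancy was 78.6 years.".to_string(), // examples_context
            Some(vec!["The Valley of Kings is in Egypt.".to_string()]),  // documents
            None, None, None, None, None, None,          // file .. max_tokens
            None, None, None, None, None, None,          // stop .. expand
            "example-user".to_string(),                   // user
        )
        .await;
    println!("{resp:?}");
}
```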
pub async fn create_classification(
    &self,
    model: String,
    query: String,
    examples: Option<Vec<Vec<String>>>,
    file: Option<String>,
    labels: Option<Vec<String>>,
    search_model: Option<String>,
    temperature: Option<f64>,
    logprobs: Option<i64>,
    max_examples: Option<i64>,
    logit_bias: Option<Value>,
    return_prompt: Option<bool>,
    return_metadata: Option<bool>,
    expand: Option<Vec<Value>>,
    user: String
) -> Result<CreateClassificationResponse>
Classifies the specified query using the provided examples.
The endpoint first searches over the labeled examples to select the ones most relevant for the particular query. Then, the relevant examples are combined with the query to construct a prompt to produce the final label via the completions endpoint.
Labeled examples can be provided via an uploaded file, or explicitly listed in the request using the examples parameter for quick tests and small-scale use cases.
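A sketch of the inline-examples path, following the signature above; the model, query, examples, and labels are illustrative:

```rust
use openai_openapi::OpenAiClient;

async fn classify(client: &OpenAiClient) {
    let resp = client
        .create_classification(
            "curie".to_string(),                  // model (illustrative)
            "It is a raining day :(".to_string(), // query
            Some(vec![
                vec!["A happy moment".to_string(), "Positive".to_string()],
                vec!["I am sad.".to_string(), "Negative".to_string()],
            ]),                                   // examples: [text, label] pairs
            None,                                 // file
            Some(vec![
                "Positive".to_string(),
                "Negative".to_string(),
                "Neutral".to_string(),
            ]),                                   // labels
            None, None, None, None, None,         // search_model .. logit_bias
            None, None, None,                     // return_prompt, return_metadata, expand
            "example-user".to_string(),           // user
        )
        .await;
    println!("{resp:?}");
}
```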
pub async fn list_fine_tunes(&self) -> Result<ListFineTunesResponse>
List your organization’s fine-tuning jobs
pub async fn create_fine_tune(
    &self,
    training_file: String,
    validation_file: Option<String>,
    model: Option<String>,
    n_epochs: Option<i64>,
    batch_size: Option<i64>,
    learning_rate_multiplier: Option<f64>,
    prompt_loss_weight: Option<f64>,
    compute_classification_metrics: Option<bool>,
    classification_n_classes: Option<i64>,
    classification_positive_class: Option<String>,
    classification_betas: Option<Vec<f64>>,
    suffix: Option<String>
) -> Result<FineTune>
Creates a job that fine-tunes a specified model from a given dataset.
The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once complete.
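A sketch of starting a job with default hyperparameters, per the signature above; the training-file id and base model are illustrative:

```rust
use openai_openapi::OpenAiClient;

async fn start_fine_tune(client: &OpenAiClient) {
    // `training_file` must be the id of an already-uploaded JSONL file
    // (see create_file); every None falls back to the API default.
    let resp = client
        .create_fine_tune(
            "file-abc123".to_string(),  // training_file (illustrative id)
            None,                       // validation_file
            Some("curie".to_string()),  // model
            None,                       // n_epochs
            None,                       // batch_size
            None,                       // learning_rate_multiplier
            None,                       // prompt_loss_weight
            None,                       // compute_classification_metrics
            None,                       // classification_n_classes
            None,                       // classification_positive_class
            None,                       // classification_betas
            None,                       // suffix
        )
        .await;
    println!("{resp:?}");
}
```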
pub async fn retrieve_fine_tune(&self, fine_tune_id: String) -> Result<FineTune>
Gets info about the fine-tune job.
pub async fn cancel_fine_tune(&self, fine_tune_id: String) -> Result<FineTune>
Immediately cancel a fine-tune job.
pub async fn list_fine_tune_events(
    &self,
    fine_tune_id: String,
    stream: bool
) -> Result<ListFineTuneEventsResponse>
Get fine-grained status updates for a fine-tune job.
pub async fn delete_model(&self, model: String) -> Result<DeleteModelResponse>
Delete a fine-tuned model. You must have the Owner role in your organization.
pub async fn create_embedding(
    &self,
    engine_id: String,
    input: Value,
    user: String
) -> Result<CreateEmbeddingResponse>
Creates an embedding vector representing the input text.
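A short sketch, assuming `Value` is `serde_json::Value`; the embedding engine id and input text are illustrative:

```rust
use openai_openapi::OpenAiClient;
use serde_json::json;

async fn embed(client: &OpenAiClient) {
    // `input` is a JSON value because the API accepts a single string
    // or an array of strings to embed in one request.
    let resp = client
        .create_embedding(
            "text-similarity-babbage-001".to_string(), // engine_id (illustrative)
            json!("The food was delicious and the service was excellent."),
            "example-user".to_string(),                // user
        )
        .await;
    println!("{resp:?}");
}
```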
Auto Trait Implementations
impl !RefUnwindSafe for OpenAiClient
impl Send for OpenAiClient
impl Sync for OpenAiClient
impl Unpin for OpenAiClient
impl !UnwindSafe for OpenAiClient
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.