pub struct Evals<'c, C: Config> { /* private fields */ }
Create, manage, and run evals in the OpenAI platform. Related guide: Evals
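As a minimal usage sketch: the page confirms `Evals::new(client: &Client<C>)` and the async `list` method, but the import paths and the `data` field on `EvalList` are assumptions based on common async-openai conventions, not verified here.

```rust
// Assumed import paths; only the signatures shown on this page are confirmed.
use async_openai::{config::OpenAIConfig, Client, Evals};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Client::new() conventionally reads OPENAI_API_KEY from the environment.
    let client: Client<OpenAIConfig> = Client::new();

    // Construct the Evals API group from a client reference (confirmed signature).
    let evals = Evals::new(&client);

    // List evaluations for the project; requires the `evals` crate feature.
    let list = evals.list().await?;
    println!("fetched {} evals", list.data.len()); // `data` field is assumed

    Ok(())
}
```

Note that every method in this group is feature-gated, so `evals` must be enabled in `Cargo.toml` for this to compile.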
Implementations

impl<'c, C: Config> Evals<'c, C>
pub fn new(client: &'c Client<C>) -> Self
pub fn runs(&self, eval_id: &str) -> EvalRuns<'_, C>
Available on crate feature evals only.
EvalRuns API group.
pub async fn list(&self) -> Result<EvalList, OpenAIError>
Available on crate feature evals only.
List evaluations for a project.
pub async fn list_byot<R: DeserializeOwned>(&self) -> Result<R, OpenAIError>
Available on crate feature evals only.
List evaluations for a project.
pub async fn create(
    &self,
    request: CreateEvalRequest,
) -> Result<Eval, OpenAIError>
Available on crate feature evals only.
Create the structure of an evaluation that can be used to test a model’s performance. An evaluation is a set of testing criteria and the config for a data source, which dictates the schema of the data used in the evaluation. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and data sources. For more information, see the Evals guide.
pub async fn create_byot<T0: Serialize, R: DeserializeOwned>(
    &self,
    request: T0,
) -> Result<R, OpenAIError>
Available on crate feature evals only.
Create the structure of an evaluation that can be used to test a model’s performance. An evaluation is a set of testing criteria and the config for a data source, which dictates the schema of the data used in the evaluation. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and data sources. For more information, see the Evals guide.
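Per the signature, the `_byot` ("bring your own types") variant accepts any `Serialize` request and deserializes the response into any caller-chosen `DeserializeOwned` type, so an untyped `serde_json::Value` works on both sides. A sketch, assuming `evals` is an `Evals` handle in scope and with an illustrative request body (the field names are not taken from this page):

```rust
use serde_json::{json, Value};

// Inside an async context, with `evals: Evals<'_, OpenAIConfig>` in scope.
// The request body below is illustrative only; consult the OpenAI Evals
// API reference for the actual schema.
let request = json!({
    "name": "example-eval",
    "data_source_config": { "type": "custom" }
});

// Send the raw JSON and keep the response untyped as well.
let created: Value = evals.create_byot(request).await?;
```

This is useful when the API accepts fields the crate's typed `CreateEvalRequest` does not yet model.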
pub async fn retrieve(&self, eval_id: &str) -> Result<Eval, OpenAIError>
Available on crate feature evals only.
Get an evaluation by ID.
pub async fn retrieve_byot<T0: Display, R: DeserializeOwned>(
    &self,
    eval_id: T0,
) -> Result<R, OpenAIError>
Available on crate feature evals only.
Get an evaluation by ID.
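To contrast the two retrieval paths: the typed method returns the crate's `Eval` struct, while the `_byot` variant accepts any `Display` id and deserializes into a type you choose. A sketch, assuming `evals` is an `Evals` handle in scope and using a hypothetical eval id:

```rust
use serde_json::Value;

// Inside an async context; "eval_abc123" is a made-up id for illustration.
// Typed: the response is parsed into the crate's Eval struct.
let eval = evals.retrieve("eval_abc123").await?;

// BYOT: same endpoint, but the raw response is deserialized into any
// DeserializeOwned target, here an untyped JSON value.
let raw: Value = evals.retrieve_byot("eval_abc123").await?;
```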
pub async fn update(
    &self,
    eval_id: &str,
    request: UpdateEvalRequest,
) -> Result<Eval, OpenAIError>
Available on crate feature evals only.
Update certain properties of an evaluation.
pub async fn update_byot<T0: Display, T1: Serialize, R: DeserializeOwned>(
    &self,
    eval_id: T0,
    request: T1,
) -> Result<R, OpenAIError>
Available on crate feature evals only.
Update certain properties of an evaluation.
pub async fn delete(
    &self,
    eval_id: &str,
) -> Result<DeleteEvalResponse, OpenAIError>
Available on crate feature evals only.
Delete an evaluation.
pub async fn delete_byot<T0: Display, R: DeserializeOwned>(
    &self,
    eval_id: T0,
) -> Result<R, OpenAIError>
Available on crate feature evals only.
Delete an evaluation.
Trait Implementations

impl<'c, C: Config> RequestOptionsBuilder for Evals<'c, C>
Available on crate feature _api only.