Struct aleph_alpha_client::Client
pub struct Client { /* private fields */ }
Execute Jobs against the Aleph Alpha API
Implementations
impl Client
pub fn new(api_token: &str) -> Result<Self, Error>
Creates a new instance of an Aleph Alpha client for interacting with the Aleph Alpha API.
pub fn with_base_url(host: String, api_token: &str) -> Result<Self, Error>
In production you would typically set this to https://api.aleph-alpha.com, but you may want to use a different instance for testing.
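For example (a short sketch; both the host and the token below are placeholder values):

use aleph_alpha_client::{Client, Error};

fn make_client() -> Result<Client, Error> {
    // Point the client at a self-hosted or test instance instead of the
    // production host. Host and token are placeholders.
    Client::with_base_url("https://test.api.example.com".to_owned(), "AA_API_TOKEN")
}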
pub async fn execute<T: Task>(
    &self,
    model: &str,
    task: &T,
    how: &How
) -> Result<T::Output, Error>
👎Deprecated: Please use output_of instead.
Execute a task with the Aleph Alpha API and fetch its result.
use aleph_alpha_client::{Client, How, TaskCompletion, Error};

async fn print_completion() -> Result<(), Error> {
    // Authenticate against the API with your token.
    let client = Client::new("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day", 10);
    // Retrieve the answer from the API
    let response = client.execute(model, &task, &How::default()).await?;
    // Print the entire sentence with the completion
    println!("An apple a day{}", response.completion);
    Ok(())
}
pub async fn output_of<T: Job>(
    &self,
    task: &T,
    how: &How
) -> Result<T::Output, Error>
Execute any task with the Aleph Alpha API and fetch its result. This is most useful in generic code when you want to execute arbitrary task types. Otherwise prefer methods taking concrete tasks like Self::completion for improved readability.
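A minimal sketch of the generic path, assuming a Task::with_model adapter that binds a task to a model name to form a Job (suggested by the deprecation note on execute):

use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error};

async fn print_completion_generic() -> Result<(), Error> {
    let client = Client::new("AA_API_TOKEN")?;
    let model = "luminous-base";
    let task = TaskCompletion::from_text("An apple a day", 10);
    // with_model (assumed) binds the model-agnostic task to a concrete
    // model, yielding a Job that output_of can execute.
    let response = client.output_of(&task.with_model(model), &How::default()).await?;
    println!("An apple a day{}", response.completion);
    Ok(())
}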
pub async fn semantic_embedding(
    &self,
    task: &TaskSemanticEmbedding<'_>,
    how: &How
) -> Result<SemanticEmbeddingOutput, Error>
An embedding trying to capture the semantic meaning of a text. Cosine similarity can be used to find out how well two texts (or multimodal prompts) match. Useful for search use cases. See the example for cosine_similarity.
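A minimal sketch, assuming TaskSemanticEmbedding exposes prompt, representation, and compress_to_size fields and that the output carries an embedding vector:

use aleph_alpha_client::{Client, Error, How, Prompt, SemanticRepresentation, TaskSemanticEmbedding};

async fn print_embedding() -> Result<(), Error> {
    let client = Client::new("AA_API_TOKEN")?;
    // Embed a text as a document for a search use case. The field names and
    // the SemanticRepresentation::Document variant are assumptions.
    let task = TaskSemanticEmbedding {
        prompt: Prompt::from_text("An apple a day keeps the doctor away."),
        representation: SemanticRepresentation::Document,
        compress_to_size: Some(128),
    };
    let response = client.semantic_embedding(&task, &How::default()).await?;
    println!("Embedding has {} dimensions.", response.embedding.len());
    Ok(())
}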
pub async fn completion(
    &self,
    task: &TaskCompletion<'_>,
    model: &str,
    how: &How
) -> Result<CompletionOutput, Error>
Instruct a model served by the Aleph Alpha API to continue writing a piece of text (or multimodal document).
use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error};

async fn print_completion() -> Result<(), Error> {
    // Authenticate against the API with your token.
    let client = Client::new("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day", 10);
    // Retrieve the answer from the API
    let response = client.completion(&task, model, &How::default()).await?;
    // Print the entire sentence with the completion
    println!("An apple a day{}", response.completion);
    Ok(())
}
pub async fn explanation(
    &self,
    task: &TaskExplanation<'_>,
    model: &str,
    how: &How
) -> Result<ExplanationOutput, Error>
Returns an explanation given a prompt and a target (typically generated by a previous completion request). The explanation describes how individual parts of the prompt influenced the target.
use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error, Granularity, TaskExplanation, Stopping, Prompt, Sampling};

async fn print_explanation() -> Result<(), Error> {
    let client = Client::new("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // Input for the completion
    let prompt = Prompt::from_text("An apple a day");
    let task = TaskCompletion {
        prompt: prompt.clone(),
        stopping: Stopping::from_maximum_tokens(10),
        sampling: Sampling::MOST_LIKELY,
    };
    let response = client.completion(&task, model, &How::default()).await?;
    let task = TaskExplanation {
        prompt,                       // same input as for the completion
        target: &response.completion, // output of the completion
        granularity: Granularity::default(),
    };
    let response = client.explanation(&task, model, &How::default()).await?;
    dbg!(&response);
    Ok(())
}