pub struct Client { /* private fields */ }
Execute Jobs against the Aleph Alpha API
Implementations

impl Client
pub fn new(host: String, api_token: Option<String>) -> Result<Self, Error>
Creates a new instance of an Aleph Alpha client for interacting with the Aleph Alpha API. For "normal" client applications you will most likely want Self::with_authentication or Self::with_base_url instead.

You may want to use only request-based authentication and skip default authentication. This is useful when writing an application which invokes the client on behalf of many different users. Having neither request nor default authentication is considered a bug and will cause a panic.
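A minimal sketch of that request-based setup (not part of the original docs): it assumes How exposes a public api_token field for per-request credentials; check the How documentation for the exact mechanism.

use aleph_alpha_client::{Client, How, TaskCompletion, Error};

async fn completion_for_user(user_token: String) -> Result<(), Error> {
    // No default token: every request must carry its own credentials.
    let client = Client::new("https://api.aleph-alpha.com".to_owned(), None)?;
    let task = TaskCompletion::from_text("An apple a day");
    // Attach this user's token to this request only (assumed `api_token` field on How).
    let how = How {
        api_token: Some(user_token),
        ..How::default()
    };
    let response = client.completion(&task, "luminous-base", &how).await?;
    println!("An apple a day{}", response.completion);
    Ok(())
}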
pub fn with_authentication(api_token: impl Into<String>) -> Result<Self, Error>
Use the Aleph Alpha SaaS offering with your API token for all requests.
pub fn with_base_url(
    host: String,
    api_token: impl Into<String>,
) -> Result<Self, Error>
Use your on-premise inference deployment with your API token for all requests.

In production you typically would want to set this to https://api.aleph-alpha.com. Yet you may want to use a different instance for testing.
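A short usage sketch based on the signature above:

use aleph_alpha_client::{Client, Error};

fn client_for_instance() -> Result<Client, Error> {
    // Point the client at a specific inference instance. Swap in the URL of
    // your on-premise deployment when testing against it.
    Client::with_base_url("https://api.aleph-alpha.com".to_owned(), "AA_API_TOKEN")
}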
pub async fn execute<T: Task>(
    &self,
    model: &str,
    task: &T,
    how: &How,
) -> Result<T::Output, Error>

Deprecated: Please use output_of instead.
Execute a task with the Aleph Alpha API and fetch its result.
use aleph_alpha_client::{Client, How, TaskCompletion, Error};

async fn print_completion() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day");
    // Retrieve answer from API
    let response = client.execute(model, &task, &How::default()).await?;
    // Print entire sentence with completion
    println!("An apple a day{}", response.completion);
    Ok(())
}
pub async fn output_of<T: Job>(
    &self,
    task: &T,
    how: &How,
) -> Result<T::Output, Error>
Execute any task with the Aleph Alpha API and fetch its result. This is most useful in generic code when you want to execute arbitrary task types. Otherwise prefer methods taking concrete tasks, like Self::completion, for improved readability.
pub async fn semantic_embedding(
    &self,
    task: &TaskSemanticEmbedding<'_>,
    how: &How,
) -> Result<SemanticEmbeddingOutput, Error>
An embedding trying to capture the semantic meaning of a text. Cosine similarity can be used to find out how well two texts (or multimodal prompts) match. Useful for search use cases.

See the example for cosine_similarity.
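A condensed sketch along the lines of that cosine_similarity example (field names follow that example; verify against TaskSemanticEmbedding):

use aleph_alpha_client::{
    cosine_similarity, Client, Error, How, Prompt, SemanticRepresentation, TaskSemanticEmbedding,
};

async fn print_similarity() -> Result<(), Error> {
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Embed a document and a query with the matching asymmetric representations.
    let document = TaskSemanticEmbedding {
        prompt: Prompt::from_text("The climate is controlled for the comfort of the robots."),
        representation: SemanticRepresentation::Document,
        compress_to_size: Some(128),
    };
    let query = TaskSemanticEmbedding {
        prompt: Prompt::from_text("What is the climate like?"),
        representation: SemanticRepresentation::Query,
        compress_to_size: Some(128),
    };
    let document = client.semantic_embedding(&document, &How::default()).await?;
    let query = client.semantic_embedding(&query, &How::default()).await?;
    // Higher cosine similarity means a closer semantic match.
    let similarity = cosine_similarity(&query.embedding, &document.embedding);
    println!("similarity: {similarity}");
    Ok(())
}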
pub async fn batch_semantic_embedding(
    &self,
    task: &TaskBatchSemanticEmbedding<'_>,
    how: &How,
) -> Result<BatchSemanticEmbeddingOutput, Error>
A batch of embeddings, each trying to capture the semantic meaning of a text.
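A hedged sketch of batch embedding; the prompts field name and the shape of BatchSemanticEmbeddingOutput are assumptions, so verify against TaskBatchSemanticEmbedding:

use aleph_alpha_client::{
    Client, Error, How, Prompt, SemanticRepresentation, TaskBatchSemanticEmbedding,
};

async fn print_batch_embeddings() -> Result<(), Error> {
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Embed several texts in one request (assumed `prompts` field).
    let task = TaskBatchSemanticEmbedding {
        prompts: vec![
            Prompt::from_text("An apple a day"),
            Prompt::from_text("keeps the doctor away"),
        ],
        representation: SemanticRepresentation::Symmetric,
        compress_to_size: None,
    };
    let response = client.batch_semantic_embedding(&task, &How::default()).await?;
    // One embedding per input prompt (assumed `embeddings` field).
    println!("{} embeddings", response.embeddings.len());
    Ok(())
}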
pub async fn completion(
    &self,
    task: &TaskCompletion<'_>,
    model: &str,
    how: &How,
) -> Result<CompletionOutput, Error>
Instruct a model served by the Aleph Alpha API to continue writing a piece of text (or multimodal document).
use aleph_alpha_client::{Client, How, TaskCompletion, Error};

async fn print_completion() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day");
    // Retrieve answer from API
    let response = client.completion(&task, model, &How::default()).await?;
    // Print entire sentence with completion
    println!("An apple a day{}", response.completion);
    Ok(())
}
pub async fn stream_completion(
    &self,
    task: &TaskCompletion<'_>,
    model: &str,
    how: &How,
) -> Result<Pin<Box<dyn Stream<Item = Result<CompletionEvent, Error>> + Send>>, Error>
Instruct a model served by the Aleph Alpha API to continue writing a piece of text. Stream the response as a series of events.
use aleph_alpha_client::{Client, How, TaskCompletion, Error, CompletionEvent};
use futures_util::StreamExt;

async fn print_stream_completion() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day");
    // Retrieve stream from API
    let mut stream = client.stream_completion(&task, model, &How::default()).await?;
    while let Some(Ok(event)) = stream.next().await {
        if let CompletionEvent::StreamChunk(chunk) = event {
            println!("{}", chunk.completion);
        }
    }
    Ok(())
}
pub async fn chat(
    &self,
    task: &TaskChat<'_>,
    model: &str,
    how: &How,
) -> Result<ChatOutput, Error>
Send a chat message to a model.
use aleph_alpha_client::{Client, How, TaskChat, Error, Message};

async fn print_chat() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of a model that supports chat.
    let model = "pharia-1-llm-7b-control";
    // Create a chat task with a user message.
    let message = Message::user("Hello, how are you?");
    let task = TaskChat::with_message(message);
    // Send the message to the model.
    let response = client.chat(&task, model, &How::default()).await?;
    // Print the model response
    println!("{}", response.message.content);
    Ok(())
}
pub async fn stream_chat(
    &self,
    task: &TaskChat<'_>,
    model: &str,
    how: &How,
) -> Result<Pin<Box<dyn Stream<Item = Result<ChatStreamChunk, Error>> + Send>>, Error>
Send a chat message to a model. Stream the response as a series of events.
use aleph_alpha_client::{Client, How, TaskChat, Error, Message};
use futures_util::StreamExt;

async fn print_stream_chat() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of a model that supports chat.
    let model = "pharia-1-llm-7b-control";
    // Create a chat task with a user message.
    let message = Message::user("Hello, how are you?");
    let task = TaskChat::with_message(message);
    // Send the message to the model.
    let mut stream = client.stream_chat(&task, model, &How::default()).await?;
    while let Some(Ok(event)) = stream.next().await {
        println!("{}", event.delta.content);
    }
    Ok(())
}
pub async fn explanation(
    &self,
    task: &TaskExplanation<'_>,
    model: &str,
    how: &How,
) -> Result<ExplanationOutput, Error>
Returns an explanation given a prompt and a target (typically generated by a previous completion request). The explanation describes how individual parts of the prompt influenced the target.
use aleph_alpha_client::{Client, How, TaskCompletion, Error, Granularity, TaskExplanation, Stopping, Prompt, Sampling};

async fn print_explanation() -> Result<(), Error> {
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";
    // Input for the completion
    let prompt = Prompt::from_text("An apple a day");
    let task = TaskCompletion {
        prompt: prompt.clone(),
        stopping: Stopping::from_maximum_tokens(10),
        sampling: Sampling::MOST_LIKELY,
    };
    let response = client.completion(&task, model, &How::default()).await?;
    let task = TaskExplanation {
        prompt,                       // same input as for completion
        target: &response.completion, // output of completion
        granularity: Granularity::default(),
    };
    let response = client.explanation(&task, model, &How::default()).await?;
    dbg!(&response);
    Ok(())
}
pub async fn tokenize(
    &self,
    task: &TaskTokenization<'_>,
    model: &str,
    how: &How,
) -> Result<TokenizationOutput, Error>
Tokenize a prompt for a specific model.
use aleph_alpha_client::{Client, Error, How, TaskTokenization};

async fn tokenize() -> Result<(), Error> {
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Name of the model for which we want to tokenize text.
    let model = "luminous-base";
    // Text prompt to be tokenized.
    let prompt = "An apple a day";
    let task = TaskTokenization {
        prompt,
        tokens: true,    // return text-tokens
        token_ids: true, // return numeric token-ids
    };
    let response = client.tokenize(&task, model, &How::default()).await?;
    dbg!(&response);
    Ok(())
}
pub async fn detokenize(
    &self,
    task: &TaskDetokenization<'_>,
    model: &str,
    how: &How,
) -> Result<DetokenizationOutput, Error>
Detokenize a list of token ids into a string.
use aleph_alpha_client::{Client, Error, How, TaskDetokenization};

async fn detokenize() -> Result<(), Error> {
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Specify the name of the model whose tokenizer was used to generate the input token ids.
    let model = "luminous-base";
    // Token ids to convert into text.
    let token_ids: Vec<u32> = vec![556, 48741, 247, 2983];
    let task = TaskDetokenization {
        token_ids: &token_ids,
    };
    let response = client.detokenize(&task, model, &How::default()).await?;
    dbg!(&response);
    Ok(())
}
pub async fn tokenizer_by_model(
    &self,
    model: &str,
    api_token: Option<String>,
) -> Result<Tokenizer, Error>

Fetch the Tokenizer used by the given model.
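A usage sketch, assuming the returned Tokenizer is the tokenizers crate's type, so encoding happens locally after the one fetch:

use aleph_alpha_client::{Client, Error};

async fn print_token_ids() -> Result<(), Error> {
    let client = Client::with_authentication("AA_API_TOKEN")?;
    // Fetch the tokenizer for the model. Passing `None` for the token falls
    // back to the client's default authentication.
    let tokenizer = client.tokenizer_by_model("luminous-base", None).await?;
    // Encode locally, without another round trip to the API.
    let encoding = tokenizer
        .encode("An apple a day", false)
        .expect("encoding failed");
    dbg!(encoding.get_ids());
    Ok(())
}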
Auto Trait Implementations
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true; converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; converts self into a Right variant of Either<Self, Self> otherwise.