pub struct Client { /* private fields */ }
Execute Jobs against the Aleph Alpha API
Implementations
impl Client
pub fn new(
    host: impl Into<String>,
    api_token: Option<String>,
) -> Result<Self, Error>
Creates a new instance of an Aleph Alpha client for interacting with the Aleph Alpha API.
Setting the token to None allows specifying it on a per-request basis. You may want to use only request-based authentication and skip default authentication, e.g. when writing an application which invokes the client on behalf of many different users. Having neither request nor default authentication is considered a bug and will cause a panic.
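A minimal sketch of request-based authentication. The host URL and token are placeholders, and it assumes How exposes a public api_token field and implements Default; check the How docs for the exact field name.
use aleph_alpha_client::{Client, How, TaskCompletion, Error};
async fn print_completion_for_user() -> Result<(), Error> {
    // No default token; every request must bring its own. Hypothetical host URL.
    let client = Client::new("https://inference-api.example.com", None)?;
    let task = TaskCompletion::from_text("An apple a day");
    // Assumption: `How` carries the optional per-request token.
    let how = How {
        api_token: Some("token-of-the-current-user".to_owned()),
        ..How::default()
    };
    let response = client.completion(&task, "luminous-base", &how).await?;
    println!("An apple a day{}", response.completion);
    Ok(())
}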
pub fn with_auth(
    host: impl Into<String>,
    api_token: impl Into<String>,
) -> Result<Self, Error>
Creates a client instance that always uses the same token for all requests.
pub fn from_env() -> Result<Self, Error>
Creates a client with its configuration (host and API token) read from environment variables.
pub async fn execute<T: Task>(
    &self,
    model: &str,
    task: &T,
    how: &How,
) -> Result<T::Output, Error>
Deprecated: Please use output_of instead.
Execute a task with the Aleph Alpha API and fetch its result.
use aleph_alpha_client::{Client, How, TaskCompletion, Error};
async fn print_completion() -> Result<(), Error> {
// Authenticate against API. Fetches token.
let client = Client::from_env()?;
// Name of the model we want to use. Larger models usually give better answers, but are
// also slower and more costly.
let model = "luminous-base";
// The task we want to perform. Here we want to continue the sentence: "An apple a day
// ..."
let task = TaskCompletion::from_text("An apple a day");
// Retrieve answer from API
let response = client.execute(model, &task, &How::default()).await?;
// Print entire sentence with completion
println!("An apple a day{}", response.completion);
Ok(())
}
pub async fn output_of<T: Job>(
    &self,
    task: &T,
    how: &How,
) -> Result<T::Output, Error>
Execute any task with the Aleph Alpha API and fetch its result. This is most useful in generic code when you want to execute arbitrary task types. Otherwise, prefer methods taking concrete tasks like Self::completion for improved readability.
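A sketch of running a completion through the generic interface. It assumes a model-bound task can be adapted into a Job via Task::with_model; see the Task trait for the exact adapter.
use aleph_alpha_client::{Client, Error, How, Task, TaskCompletion};
async fn print_output_of() -> Result<(), Error> {
    let client = Client::from_env()?;
    let task = TaskCompletion::from_text("An apple a day");
    // Assumption: `Task::with_model` turns a task plus model name into a `Job`.
    let job = task.with_model("luminous-base");
    let output = client.output_of(&job, &How::default()).await?;
    println!("An apple a day{}", output.completion);
    Ok(())
}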
pub async fn semantic_embedding(
    &self,
    task: &TaskSemanticEmbedding<'_>,
    how: &How,
) -> Result<SemanticEmbeddingOutput, Error>
An embedding trying to capture the semantic meaning of a text. Cosine similarity can be used to find out how well two texts (or multimodal prompts) match. Useful for search use cases.
See the example for cosine_similarity.
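A sketch of embedding two texts and comparing them. It assumes the task is built from a prompt, a SemanticRepresentation, and an optional compressed size (see TaskSemanticEmbedding for the exact fields), that the output exposes an embedding vector, and that cosine_similarity is the helper exported by this crate.
use aleph_alpha_client::{
    Client, Error, How, Prompt, SemanticRepresentation, TaskSemanticEmbedding,
    cosine_similarity,
};
async fn print_similarity() -> Result<(), Error> {
    let client = Client::from_env()?;
    // Embed both texts with the symmetric representation so they live in the
    // same space and can be compared directly.
    let task = TaskSemanticEmbedding {
        prompt: Prompt::from_text("An apple a day keeps the doctor away."),
        representation: SemanticRepresentation::Symmetric,
        compress_to_size: Some(128),
    };
    let apple = client.semantic_embedding(&task, &How::default()).await?;
    let task = TaskSemanticEmbedding {
        prompt: Prompt::from_text("A banana a day keeps the doctor away."),
        representation: SemanticRepresentation::Symmetric,
        compress_to_size: Some(128),
    };
    let banana = client.semantic_embedding(&task, &How::default()).await?;
    // Assumption: the output's embedding is a Vec<f32>.
    let similarity = cosine_similarity(&apple.embedding, &banana.embedding);
    println!("similarity: {similarity}");
    Ok(())
}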
pub async fn batch_semantic_embedding(
    &self,
    task: &TaskBatchSemanticEmbedding<'_>,
    how: &How,
) -> Result<BatchSemanticEmbeddingOutput, Error>
A batch of embeddings, each trying to capture the semantic meaning of one of the supplied texts.
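A sketch of embedding several texts in one request, assuming the batch task takes a vector of prompts plus the same representation and compression options as the single-text task, and that the output collects one embedding per prompt; check TaskBatchSemanticEmbedding for the exact fields.
use aleph_alpha_client::{
    Client, Error, How, Prompt, SemanticRepresentation, TaskBatchSemanticEmbedding,
};
async fn embed_batch() -> Result<(), Error> {
    let client = Client::from_env()?;
    // Assumption: the batch task mirrors TaskSemanticEmbedding, with `prompts`
    // instead of a single `prompt`.
    let task = TaskBatchSemanticEmbedding {
        prompts: vec![
            Prompt::from_text("An apple a day keeps the doctor away."),
            Prompt::from_text("A banana a day keeps the doctor away."),
        ],
        representation: SemanticRepresentation::Document,
        compress_to_size: Some(128),
    };
    let outputs = client.batch_semantic_embedding(&task, &How::default()).await?;
    // Assumption: one embedding per input prompt, in order.
    println!("{} embeddings", outputs.embeddings.len());
    Ok(())
}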
pub async fn semantic_embedding_with_instruction(
    &self,
    task: &TaskSemanticEmbeddingWithInstruction<'_>,
    how: &How,
) -> Result<SemanticEmbeddingOutput, Error>
An embedding trying to capture the semantic meaning of a text.
By providing instructions, you can help the model better understand the nuances of your specific data, leading to embeddings that are more useful for your use case.
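A sketch under assumed field names: instruction and prompt are hypothetical here, so consult TaskSemanticEmbeddingWithInstruction for the actual fields before use.
use aleph_alpha_client::{Client, Error, How, Prompt, TaskSemanticEmbeddingWithInstruction};
async fn embed_with_instruction() -> Result<(), Error> {
    let client = Client::from_env()?;
    // Hypothetical field names; check the task's documentation.
    let task = TaskSemanticEmbeddingWithInstruction {
        instruction: "Represent the recipe for retrieval by ingredient",
        prompt: Prompt::from_text("Apple pie: apples, flour, butter, sugar."),
    };
    let output = client
        .semantic_embedding_with_instruction(&task, &How::default())
        .await?;
    println!("{} dimensions", output.embedding.len());
    Ok(())
}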
pub async fn completion(
    &self,
    task: &TaskCompletion<'_>,
    model: &str,
    how: &How,
) -> Result<CompletionOutput, Error>
Instruct a model served by the Aleph Alpha API to continue writing a piece of text (or multimodal document).
use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error};
async fn print_completion() -> Result<(), Error> {
// Authenticate against API. Fetches token.
let client = Client::from_env()?;
// Name of the model we want to use. Larger models usually give better answers, but are
// also slower and more costly.
let model = "luminous-base";
// The task we want to perform. Here we want to continue the sentence: "An apple a day
// ..."
let task = TaskCompletion::from_text("An apple a day");
// Retrieve answer from API
let response = client.completion(&task, model, &How::default()).await?;
// Print entire sentence with completion
println!("An apple a day{}", response.completion);
Ok(())
}
pub async fn stream_completion<'task>(
    &self,
    task: &'task TaskCompletion<'task>,
    model: &'task str,
    how: &How,
) -> Result<Pin<Box<dyn Stream<Item = Result<CompletionEvent, Error>> + Send + 'task>>, Error>
Instruct a model served by the Aleph Alpha API to continue writing a piece of text. Stream the response as a series of events.
use aleph_alpha_client::{Client, How, TaskCompletion, Error, CompletionEvent};
use futures_util::StreamExt;
async fn print_stream_completion() -> Result<(), Error> {
// Authenticate against API. Fetches token.
let client = Client::from_env()?;
// Name of the model we want to use. Larger models usually give better answers, but are
// also slower and more costly.
let model = "luminous-base";
// The task we want to perform. Here we want to continue the sentence: "An apple a day
// ..."
let task = TaskCompletion::from_text("An apple a day");
// Retrieve stream from API
let mut stream = client.stream_completion(&task, model, &How::default()).await?;
while let Some(Ok(event)) = stream.next().await {
if let CompletionEvent::Delta { completion, logprobs: _ } = event {
println!("{}", completion);
}
}
Ok(())
}
pub async fn chat(
    &self,
    task: &TaskChat<'_>,
    model: &str,
    how: &How,
) -> Result<ChatOutput, Error>
Send a chat message to a model.
use aleph_alpha_client::{Client, How, TaskChat, Error, Message};
async fn print_chat() -> Result<(), Error> {
// Authenticate against API. Fetches token.
let client = Client::from_env()?;
// Name of a model that supports chat.
let model = "pharia-1-llm-7b-control";
// Create a chat task with a user message.
let message = Message::user("Hello, how are you?");
let task = TaskChat::with_message(message);
// Send the message to the model.
let response = client.chat(&task, model, &How::default()).await?;
// Print the model response
println!("{}", response.message.content);
Ok(())
}
pub async fn stream_chat<'task>(
    &self,
    task: &'task TaskChat<'_>,
    model: &'task str,
    how: &How,
) -> Result<Pin<Box<dyn Stream<Item = Result<ChatEvent, Error>> + Send + 'task>>, Error>
Send a chat message to a model. Stream the response as a series of events.
use aleph_alpha_client::{Client, How, TaskChat, Error, Message, ChatEvent};
use futures_util::StreamExt;
async fn print_stream_chat() -> Result<(), Error> {
// Authenticate against API. Fetches token.
let client = Client::from_env()?;
// Name of a model that supports chat.
let model = "pharia-1-llm-7b-control";
// Create a chat task with a user message.
let message = Message::user("Hello, how are you?");
let task = TaskChat::with_message(message);
// Send the message to the model.
let mut stream = client.stream_chat(&task, model, &How::default()).await?;
while let Some(Ok(event)) = stream.next().await {
if let ChatEvent::MessageDelta { content, logprobs: _ } = event {
println!("{}", content);
}
}
Ok(())
}
pub async fn explanation(
    &self,
    task: &TaskExplanation<'_>,
    model: &str,
    how: &How,
) -> Result<ExplanationOutput, Error>
Returns an explanation given a prompt and a target (typically generated by a previous completion request). The explanation describes how individual parts of the prompt influenced the target.
use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error, Granularity,
TaskExplanation, Stopping, Prompt, Sampling, Logprobs};
async fn print_explanation() -> Result<(), Error> {
let client = Client::from_env()?;
// Name of the model we want to use. Larger models usually give better answers, but are
// also slower and more costly.
let model = "luminous-base";
// input for the completion
let prompt = Prompt::from_text("An apple a day");
let task = TaskCompletion {
prompt: prompt.clone(),
stopping: Stopping::from_maximum_tokens(10),
sampling: Sampling::MOST_LIKELY,
special_tokens: false,
logprobs: Logprobs::No,
echo: false,
};
let response = client.completion(&task, model, &How::default()).await?;
let task = TaskExplanation {
prompt, // same input as for completion
target: &response.completion, // output of completion
granularity: Granularity::default(),
};
let response = client.explanation(&task, model, &How::default()).await?;
dbg!(&response);
Ok(())
}
pub async fn tokenize(
    &self,
    task: &TaskTokenization<'_>,
    model: &str,
    how: &How,
) -> Result<TokenizationOutput, Error>
Tokenize a prompt for a specific model.
use aleph_alpha_client::{Client, Error, How, TaskTokenization};
async fn tokenize() -> Result<(), Error> {
let client = Client::from_env()?;
// Name of the model for which we want to tokenize text.
let model = "luminous-base";
// Text prompt to be tokenized.
let prompt = "An apple a day";
let task = TaskTokenization {
prompt,
tokens: true, // return text-tokens
token_ids: true, // return numeric token-ids
};
let responses = client.tokenize(&task, model, &How::default()).await?;
dbg!(&responses);
Ok(())
}
pub async fn detokenize(
    &self,
    task: &TaskDetokenization<'_>,
    model: &str,
    how: &How,
) -> Result<DetokenizationOutput, Error>
Detokenize a list of token ids into a string.
use aleph_alpha_client::{Client, Error, How, TaskDetokenization};
async fn detokenize() -> Result<(), Error> {
let client = Client::from_env()?;
// Specify the name of the model whose tokenizer was used to generate the input token ids.
let model = "luminous-base";
// Token ids to convert into text.
let token_ids: Vec<u32> = vec![556, 48741, 247, 2983];
let task = TaskDetokenization {
token_ids: &token_ids,
};
let responses = client.detokenize(&task, model, &How::default()).await?;
dbg!(&responses);
Ok(())
}
pub async fn tokenizer_by_model(
    &self,
    model: &str,
    api_token: Option<String>,
    context: Option<TraceContext>,
) -> Result<Tokenizer, Error>
Fetch the tokenizer used by the given model. The optional api_token overrides the client's default authentication for this request.
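A sketch of fetching a tokenizer and encoding a prompt locally. It assumes the returned Tokenizer is the one from the tokenizers crate, whose encode returns its own error type (hence the expect instead of ?).
use aleph_alpha_client::{Client, Error};
async fn count_tokens() -> Result<(), Error> {
    let client = Client::from_env()?;
    // Use the client's default authentication and no trace context.
    let tokenizer = client.tokenizer_by_model("luminous-base", None, None).await?;
    // `encode` comes from the `tokenizers` crate; `false` skips special tokens.
    let encoding = tokenizer
        .encode("An apple a day", false)
        .expect("prompt can be encoded");
    println!("{} tokens", encoding.get_ids().len());
    Ok(())
}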
Auto Trait Implementations
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, or into a Right variant otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, or into a Right variant otherwise.