Struct Client

pub struct Client { /* private fields */ }

Execute Jobs against the Aleph Alpha API

Implementations

impl Client

pub fn new( host: impl Into<String>, api_token: Option<String>, ) -> Result<Self, Error>

A new instance of an Aleph Alpha client that helps you interact with the Aleph Alpha API.

Setting the token to None allows it to be specified on a per-request basis instead. You may want to rely on request-based authentication alone and skip the default token, e.g. when writing an application that invokes the client on behalf of many different users. Providing neither request-level nor default authentication is considered a bug and will cause a panic.
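
A minimal sketch of both modes (the host URL and the AA_API_TOKEN variable name are illustrative, not prescribed by this documentation):

use aleph_alpha_client::{Client, Error};

fn build_client() -> Result<Client, Error> {
    // Use a default token if one is set in the environment; with None, a
    // token must be supplied per request instead.
    let api_token = std::env::var("AA_API_TOKEN").ok();
    Client::new("https://api.aleph-alpha.com", api_token)
}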

pub fn with_auth( host: impl Into<String>, api_token: impl Into<String>, ) -> Result<Self, Error>

A client instance that always uses the same token for all requests.
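
A short sketch (the host is a placeholder value):

use aleph_alpha_client::{Client, Error};

fn build_client(token: impl Into<String>) -> Result<Client, Error> {
    // Every request made by this client is authenticated with `token`.
    Client::with_auth("https://api.aleph-alpha.com", token)
}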

pub fn from_env() -> Result<Self, Error>
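
A new client configured from environment variables. Presumably the API host and token are read from the environment; all examples below construct their client this way.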

pub async fn execute<T: Task>( &self, model: &str, task: &T, how: &How, ) -> Result<T::Output, Error>

👎Deprecated: Please use output_of instead.

Execute a task with the Aleph Alpha API and fetch its result.

use aleph_alpha_client::{Client, How, TaskCompletion, Error};

async fn print_completion() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::from_env()?;

    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";

    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day");

    // Retrieve answer from API
    let response = client.execute(model, &task, &How::default()).await?;

    // Print entire sentence with completion
    println!("An apple a day{}", response.completion);
    Ok(())
}
pub async fn output_of<T: Job>( &self, task: &T, how: &How, ) -> Result<T::Output, Error>

Execute any task with the Aleph Alpha API and fetch its result. This is most useful in generic code where you want to execute arbitrary task types. Otherwise, prefer methods taking concrete tasks, like Self::completion, for improved readability.
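
A minimal sketch, assuming that, as the deprecation notice for execute suggests, a task is paired with a model name via Task::with_model to obtain a Job:

use aleph_alpha_client::{Client, Error, How, Task, TaskCompletion};

async fn print_output() -> Result<(), Error> {
    let client = Client::from_env()?;

    // Pairing a concrete task with a model name yields a Job which
    // output_of can execute.
    let task = TaskCompletion::from_text("An apple a day");
    let job = task.with_model("luminous-base");

    let response = client.output_of(&job, &How::default()).await?;
    println!("An apple a day{}", response.completion);
    Ok(())
}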

pub async fn semantic_embedding( &self, task: &TaskSemanticEmbedding<'_>, how: &How, ) -> Result<SemanticEmbeddingOutput, Error>

An embedding trying to capture the semantic meaning of a text. Cosine similarity can be used to find out how well two texts (or multimodal prompts) match. Useful for search use cases.

See the example for cosine_similarity.
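
A sketch in that spirit (the prompt text and compress_to_size value are illustrative):

use aleph_alpha_client::{
    Client, Error, How, Prompt, SemanticRepresentation, TaskSemanticEmbedding,
};

async fn embed_document() -> Result<(), Error> {
    let client = Client::from_env()?;

    let task = TaskSemanticEmbedding {
        // Text we want to embed.
        prompt: Prompt::from_text("A robot may not injure a human being."),
        // Document for corpus entries, Query for search queries, Symmetric
        // for comparing texts of the same kind.
        representation: SemanticRepresentation::Document,
        // Optionally shorten the returned embedding vector.
        compress_to_size: Some(128),
    };
    let output = client.semantic_embedding(&task, &How::default()).await?;

    println!("embedding dimensions: {}", output.embedding.len());
    Ok(())
}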

pub async fn batch_semantic_embedding( &self, task: &TaskBatchSemanticEmbedding<'_>, how: &How, ) -> Result<BatchSemanticEmbeddingOutput, Error>

A batch of embeddings, each trying to capture the semantic meaning of one of several texts.
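
A minimal sketch, assuming the batch task mirrors TaskSemanticEmbedding but takes several prompts and returns one embedding per prompt:

use aleph_alpha_client::{
    Client, Error, How, Prompt, SemanticRepresentation, TaskBatchSemanticEmbedding,
};

async fn embed_batch() -> Result<(), Error> {
    let client = Client::from_env()?;

    let task = TaskBatchSemanticEmbedding {
        // One embedding is returned per prompt, in order.
        prompts: vec![
            Prompt::from_text("An apple a day"),
            Prompt::from_text("keeps the doctor away"),
        ],
        representation: SemanticRepresentation::Document,
        compress_to_size: Some(128),
    };
    let output = client.batch_semantic_embedding(&task, &How::default()).await?;

    println!("number of embeddings: {}", output.embeddings.len());
    Ok(())
}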

pub async fn semantic_embedding_with_instruction( &self, task: &TaskSemanticEmbeddingWithInstruction<'_>, how: &How, ) -> Result<SemanticEmbeddingOutput, Error>

An embedding trying to capture the semantic meaning of a text.

By providing instructions, you can help the model better understand the nuances of your specific data, leading to embeddings that are more useful for your use case.

pub async fn completion( &self, task: &TaskCompletion<'_>, model: &str, how: &How, ) -> Result<CompletionOutput, Error>

Instruct a model served by the Aleph Alpha API to continue writing a piece of text (or multimodal document).

use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error};

async fn print_completion() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::from_env()?;

    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";

    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day");

    // Retrieve answer from API
    let response = client.completion(&task, model, &How::default()).await?;

    // Print entire sentence with completion
    println!("An apple a day{}", response.completion);
    Ok(())
}
pub async fn stream_completion<'task>( &self, task: &'task TaskCompletion<'task>, model: &'task str, how: &How, ) -> Result<Pin<Box<dyn Stream<Item = Result<CompletionEvent, Error>> + Send + 'task>>, Error>

Instruct a model served by the Aleph Alpha API to continue writing a piece of text. Stream the response as a series of events.

use aleph_alpha_client::{Client, How, TaskCompletion, Error, CompletionEvent};
use futures_util::StreamExt;

async fn print_stream_completion() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::from_env()?;

    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";

    // The task we want to perform. Here we want to continue the sentence: "An apple a day
    // ..."
    let task = TaskCompletion::from_text("An apple a day");

    // Retrieve stream from API
    let mut stream = client.stream_completion(&task, model, &How::default()).await?;
    while let Some(Ok(event)) = stream.next().await {
        if let CompletionEvent::Delta { completion, logprobs: _ } = event {
            println!("{}", completion);
        }
    }
    Ok(())
}
pub async fn chat( &self, task: &TaskChat<'_>, model: &str, how: &How, ) -> Result<ChatOutput, Error>

Send a chat message to a model.

use aleph_alpha_client::{Client, How, TaskChat, Error, Message};

async fn print_chat() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::from_env()?;

    // Name of a model that supports chat.
    let model = "pharia-1-llm-7b-control";

    // Create a chat task with a user message.
    let message = Message::user("Hello, how are you?");
    let task = TaskChat::with_message(message);

    // Send the message to the model.
    let response = client.chat(&task, model, &How::default()).await?;

    // Print the model response
    println!("{}", response.message.content);
    Ok(())
}
pub async fn stream_chat<'task>( &self, task: &'task TaskChat<'_>, model: &'task str, how: &How, ) -> Result<Pin<Box<dyn Stream<Item = Result<ChatEvent, Error>> + Send + 'task>>, Error>

Send a chat message to a model. Stream the response as a series of events.

use aleph_alpha_client::{Client, How, TaskChat, Error, Message, ChatEvent};
use futures_util::StreamExt;

async fn print_stream_chat() -> Result<(), Error> {
    // Authenticate against API. Fetches token.
    let client = Client::from_env()?;

    // Name of a model that supports chat.
    let model = "pharia-1-llm-7b-control";

    // Create a chat task with a user message.
    let message = Message::user("Hello, how are you?");
    let task = TaskChat::with_message(message);

    // Send the message to the model.
    let mut stream = client.stream_chat(&task, model, &How::default()).await?;
    while let Some(Ok(event)) = stream.next().await {
        if let ChatEvent::MessageDelta { content, logprobs: _ } = event {
            println!("{}", content);
        }
    }
    Ok(())
}
pub async fn explanation( &self, task: &TaskExplanation<'_>, model: &str, how: &How, ) -> Result<ExplanationOutput, Error>

Returns an explanation given a prompt and a target (typically generated by a previous completion request). The explanation describes how individual parts of the prompt influenced the target.

use aleph_alpha_client::{Client, How, TaskCompletion, Task, Error, Granularity,
    TaskExplanation, Stopping, Prompt, Sampling, Logprobs};

async fn print_explanation() -> Result<(), Error> {
    let client = Client::from_env()?;

    // Name of the model we want to use. Large models usually give better answers, but are
    // also slower and more costly.
    let model = "luminous-base";

    // input for the completion
    let prompt = Prompt::from_text("An apple a day");

    let task = TaskCompletion {
        prompt: prompt.clone(),
        stopping: Stopping::from_maximum_tokens(10),
        sampling: Sampling::MOST_LIKELY,
        special_tokens: false,
        logprobs: Logprobs::No,
        echo: false,
    };
    let response = client.completion(&task, model, &How::default()).await?;

    let task = TaskExplanation {
        prompt,                        // same input as for completion
        target: &response.completion,  // output of completion
        granularity: Granularity::default(),
    };
    let response = client.explanation(&task, model, &How::default()).await?;

    dbg!(&response);
    Ok(())
}
pub async fn tokenize( &self, task: &TaskTokenization<'_>, model: &str, how: &How, ) -> Result<TokenizationOutput, Error>

Tokenize a prompt for a specific model.

use aleph_alpha_client::{Client, Error, How, TaskTokenization};

async fn tokenize() -> Result<(), Error> {
    let client = Client::from_env()?;

    // Name of the model for which we want to tokenize text.
    let model = "luminous-base";

    // Text prompt to be tokenized.
    let prompt = "An apple a day";

    let task = TaskTokenization {
        prompt,
        tokens: true,       // return text-tokens
        token_ids: true,    // return numeric token-ids
    };
    let responses = client.tokenize(&task, model, &How::default()).await?;

    dbg!(&responses);
    Ok(())
}
pub async fn detokenize( &self, task: &TaskDetokenization<'_>, model: &str, how: &How, ) -> Result<DetokenizationOutput, Error>

Detokenize a list of token ids into a string.

use aleph_alpha_client::{Client, Error, How, TaskDetokenization};

async fn detokenize() -> Result<(), Error> {
    let client = Client::from_env()?;

    // Specify the name of the model whose tokenizer was used to generate the input token ids.
    let model = "luminous-base";

    // Token ids to convert into text.
    let token_ids: Vec<u32> = vec![556, 48741, 247, 2983];

    let task = TaskDetokenization {
        token_ids: &token_ids,
    };
    let responses = client.detokenize(&task, model, &How::default()).await?;

    dbg!(&responses);
    Ok(())
}
pub async fn tokenizer_by_model( &self, model: &str, api_token: Option<String>, context: Option<TraceContext>, ) -> Result<Tokenizer, Error>
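
Fetch the tokenizer used by the given model, e.g. to tokenize text locally. Presumably, passing None for api_token falls back to the client's default authentication, in line with the other methods. A short sketch, assuming the returned Tokenizer is the tokenizers crate's Tokenizer type:

use aleph_alpha_client::{Client, Error};

async fn count_tokens() -> Result<(), Error> {
    let client = Client::from_env()?;

    // No per-request token and no trace context.
    let tokenizer = client.tokenizer_by_model("luminous-base", None, None).await?;

    // Encode locally, without special tokens.
    let encoding = tokenizer.encode("An apple a day", false).expect("encoding succeeds");
    println!("{} tokens", encoding.get_ids().len());
    Ok(())
}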

Auto Trait Implementations

impl Freeze for Client

impl !RefUnwindSafe for Client

impl Send for Client

impl Sync for Client

impl Unpin for Client

impl !UnwindSafe for Client
