Crate aleph_alpha_client


Usage sample

use aleph_alpha_client::{Client, TaskCompletion, How};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Authenticate against API. Fetches token.
    let client = Client::from_env().unwrap();

    // Name of the model we want to use. Larger models usually give better answers, but are
    // also more costly.
    let model = "luminous-base";

    // The task we want to perform. Here we want to continue the sentence: "An apple a day ..."
    let task = TaskCompletion::from_text("An apple a day");

    // Retrieve the answer from the API
    let response = client.completion(&task, model, &How::default()).await.unwrap();

    // Print entire sentence with completion
    println!("An apple a day{}", response.completion);
}

Structs§

ChatOutput
ChatSampling
Sampling controls how the tokens (“words”) are selected for the completion. This is different from crate::Sampling, because it does not support the top_k parameter.
Client
Execute Jobs against the Aleph Alpha API
CompletionOutput
Completion and metainformation returned by a completion task
DetokenizationOutput
Distribution
Logprob information for a single token
Explanation
The explanation for the target.
ExplanationOutput
The result of an explanation request.
Granularity
Granularity parameters for the TaskExplanation
How
Controls how to execute a task
ImageScore
Score for a part of an image.
Logprob
Message
Prompt
A prompt which is passed to the model for inference. Usually it is one text item, but it could also be a combination of several modalities like images and text.
Sampling
Sampling controls how the tokens (“words”) are selected for the completion.
Stopping
Controls the conditions under which the language model stops generating text.
TaskBatchSemanticEmbedding
Create embeddings for multiple prompts
TaskChat
TaskCompletion
Completes a prompt, e.g. continues a text.
TaskDetokenization
Input for a crate::Client::detokenize request.
TaskExplanation
Input for a crate::Client::explanation request.
TaskSemanticEmbedding
Create embeddings for prompts which can be used for downstream tasks, e.g. search or classifiers.
TaskSemanticEmbeddingWithInstruction
Allows you to choose a semantic representation fitting for your use case.
TaskTokenization
Input for a crate::Client::tokenize request.
TextScore
Score for a part of a text modality.
TokenizationOutput
Usage

Enums§

ChatEvent
CompletionEvent
Error
Errors returned by the Aleph Alpha Client
ItemExplanation
Explanation scores for a crate::prompt::Modality or the target. There is one score for each part of a modality (or of the target, respectively), with the parts chosen according to the PromptGranularity.
Logprobs
Modality
The prompt for models can be a combination of different modalities (Text and Image). The type of modalities which are supported depend on the Model in question.
PromptGranularity
At which granularity should the target be explained in terms of the prompt. If you choose, for example, PromptGranularity::Sentence then we report the importance score of each sentence in the prompt towards generating the target output. The default is PromptGranularity::Auto which means we will try to find the granularity that brings you closest to around 30 explanations. For large prompts, this would likely be sentences. For short prompts this might be individual words or even tokens.
SemanticRepresentation
Allows you to choose a semantic representation fitting for your use case.

Traits§

Job
A job sent to the Aleph Alpha API using the HTTP client. A job wraps all the knowledge required for the Aleph Alpha API to specify its result; notably, it includes the model(s) the job is executed on. This allows the trait to hold in the presence of services which use more than one model and task type to achieve their result. A bare crate::TaskCompletion, on the other hand, cannot implement this trait directly, since its result would depend on which model is chosen to execute it. You can remedy this by turning the completion task into a job, calling Task::with_model.
StreamJob
A job sent to the Aleph Alpha API using the HTTP client. A job wraps all the knowledge required for the Aleph Alpha API to specify its result; notably, it includes the model(s) the job is executed on. This allows the trait to hold in the presence of services which use more than one model and task type to achieve their result. A bare crate::TaskCompletion, on the other hand, cannot implement this trait directly, since its result would depend on which model is chosen to execute it. You can remedy this by turning the completion task into a job, calling [Task::with_model].
StreamTask
A task sent to the Aleph Alpha API using the HTTP client. Requires a model to be specified before it can be executed. Returns a stream of results.
Task
A task sent to the Aleph Alpha API using the HTTP client. Requires a model to be specified before it can be executed.

Functions§

cosine_similarity
Intended to compare embeddings.
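
Cosine similarity measures the angle between two embedding vectors, making it a natural way to compare embeddings regardless of their magnitude. The following is an illustrative standalone sketch of the computation; it does not assume the exact signature of the crate's cosine_similarity function.

```rust
/// Cosine similarity of two equal-length vectors:
/// dot(a, b) / (|a| * |b|). Returns a value in [-1.0, 1.0].
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|y| y * y).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Orthogonal vectors have similarity 0; identical vectors have similarity 1.
    let a = [1.0, 0.0];
    let b = [0.0, 1.0];
    println!("{}", cosine_similarity(&a, &b)); // 0
    println!("{}", cosine_similarity(&a, &a)); // 1
}
```

In a typical workflow you would obtain the vectors from a semantic embedding task (e.g. TaskSemanticEmbedding) and rank documents by their similarity to a query embedding.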