Crate aleph_alpha_client


Usage sample

use aleph_alpha_client::{Client, TaskCompletion, How};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Authenticate against the API using your token.
    let client = Client::with_authentication("AA_API_TOKEN").unwrap();

    // Name of the model we want to use. Large models usually give better answers, but are
    // also more costly.
    let model = "luminous-base";

    // The task we want to perform. Here we want to continue the sentence: "An apple a day ..."
    let task = TaskCompletion::from_text("An apple a day");
     
    // Retrieve the answer from the API
    let response = client.completion(&task, model, &How::default()).await.unwrap();

    // Print entire sentence with completion
    println!("An apple a day{}", response.completion);
}

Structs

Enums

  • Errors returned by the Aleph Alpha Client
  • Explanation scores for a crate::prompt::Modality or the target. There is one score for each part of a modality, or of the target respectively, with the parts being chosen according to the PromptGranularity
  • The prompt for models can be a combination of different modalities (Text and Image). The modalities which are supported depend on the Model in question.
  • At which granularity the target should be explained in terms of the prompt. If you choose, for example, PromptGranularity::Sentence, then we report the importance score of each sentence in the prompt towards generating the target output. The default is PromptGranularity::Auto, which means we will try to find the granularity that brings you closest to around 30 explanations. For large prompts this would likely be sentences; for short prompts it might be individual words or even tokens. (A sketch of an explanation request follows this list.)
  • Allows you to choose a semantic representation fitting your use case.
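
The sketch below shows how a sentence-level explanation request might look. It is illustrative only: PromptGranularity::Sentence is named on this page, while TaskExplanation, Granularity, Prompt::from_text and Client::explanation are assumed names and may differ from the crate's actual items.

use aleph_alpha_client::{Client, Granularity, How, Prompt, PromptGranularity, TaskExplanation};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Authenticate against the API using your token.
    let client = Client::with_authentication("AA_API_TOKEN").unwrap();

    // Ask which sentences of the prompt were most important for producing the target.
    // `TaskExplanation`, `Granularity` and `Client::explanation` are assumed names used
    // for illustration; consult the items listed on this page for the exact API.
    let task = TaskExplanation {
        prompt: Prompt::from_text("An apple a day keeps the doctor away."),
        target: " Eating fruit is healthy.",
        granularity: Granularity::default().with_prompt_granularity(PromptGranularity::Sentence),
    };

    // One importance score is reported per sentence of the prompt.
    let _explanation = client
        .explanation(&task, "luminous-base", &How::default())
        .await
        .unwrap();
}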

Traits

  • A job sent to the Aleph Alpha API using the HTTP client. A job wraps all the knowledge required for the Aleph Alpha API to specify its result. Notably, it includes the model(s) the job is executed on. This allows the trait to hold even for services which use more than one model and task type to achieve their result. A bare crate::TaskCompletion, on the other hand, cannot implement this trait directly, since its result would depend on which model is chosen to execute it. You can remedy this by turning the completion task into a job via Task::with_model. (A sketch follows this list.)
  • A task sent to the Aleph Alpha API using the HTTP client. A model must be specified before it can be executed.
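
Based on the descriptions above, a completion task could be bound to a model and executed as a job roughly like this. This is a minimal sketch: Task::with_model is named on this page, while output_of as the client method that executes a Job is an assumption and may differ from the actual API.

use aleph_alpha_client::{Client, How, Task, TaskCompletion};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Authenticate against the API using your token.
    let client = Client::with_authentication("AA_API_TOKEN").unwrap();

    // A bare `TaskCompletion` does not know which model will execute it, so it cannot
    // implement `Job` on its own.
    let task = TaskCompletion::from_text("An apple a day");

    // Binding the task to a model turns it into a `Job`.
    let job = task.with_model("luminous-base");

    // `output_of` is assumed here to be the client method that executes a `Job` directly;
    // the actual method name may differ.
    let response = client.output_of(&job, &How::default()).await.unwrap();
    println!("An apple a day{}", response.completion);
}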

Functions