Crate aleph_alpha_client
Usage sample
use aleph_alpha_client::{Client, TaskCompletion, How};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Authenticate against the API with your token.
    let client = Client::with_authentication("AA_API_TOKEN").unwrap();
    // Name of the model we want to use. Larger models usually give better answers, but are
    // also more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day ..."
    let task = TaskCompletion::from_text("An apple a day");
    // Retrieve the answer from the API.
    let response = client.completion(&task, model, &How::default()).await.unwrap();
    // Print the entire sentence, including the completion.
    println!("An apple a day{}", response.completion);
}
Structs§
- Execute Jobs against the Aleph Alpha API
- Completion and metainformation returned by a completion task
- The explanation for the target.
- The result of an explanation request.
- Granularity parameters for the TaskExplanation
- Controls how a task is executed
- Score for a part of an image.
- A prompt which is passed to the model for inference. Usually it is one text item, but it could also be a combination of several modalities like images and text.
- Sampling controls how the tokens (“words”) are selected for the completion.
- Controls the conditions under which the language models stops generating text.
- Create embeddings for multiple prompts
- Completes a prompt. E.g. continues a text.
- Input for a crate::Client::detokenize request.
- Input for a crate::Client::explanation request.
- Create embeddings for prompts which can be used for downstream tasks, e.g. search or classification
- Input for a crate::Client::tokenize request.
- Score for the part of a text-modality
Enums§
- Errors returned by the Aleph Alpha Client
- Explanation scores for a crate::prompt::Modality or the target. There is one score for each part of a modality (or of the target, respectively), with the parts chosen according to the PromptGranularity.
- The prompt for models can be a combination of different modalities (Text and Image). The types of modality which are supported depend on the Model in question.
- At which granularity should the target be explained in terms of the prompt. If you choose, for example, PromptGranularity::Sentence then we report the importance score of each sentence in the prompt towards generating the target output. The default is PromptGranularity::Auto which means we will try to find the granularity that brings you closest to around 30 explanations. For large prompts, this would likely be sentences. For short prompts this might be individual words or even tokens.
- Allows you to choose a semantic representation fitting for your use case.
Traits§
- A job sent to the Aleph Alpha API using the HTTP client. A job wraps all the knowledge required for the Aleph Alpha API to specify its result. Notably, it includes the model(s) the job is executed on. This allows the trait to hold even for services which use more than one model and task type to achieve their result. A bare crate::TaskCompletion, on the other hand, cannot implement this trait directly, since its result would depend on which model is chosen to execute it. You can remedy this by turning the completion task into a job, calling Task::with_model.
- A task sent to the Aleph Alpha API using the HTTP client. Requires a model to be specified before it can be executed.
Functions§
- Intended to compare embeddings.
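To illustrate what comparing embeddings means, here is a self-contained sketch of cosine similarity in plain Rust. The function name and signature are illustrative assumptions, not the crate's own API; prefer the function the crate exports.

```rust
/// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
/// Returns a value in [-1.0, 1.0]; 1.0 means the vectors point the same way.
/// NOTE: illustrative sketch only, not the crate's exported function.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Orthogonal vectors have similarity 0; parallel vectors have similarity 1.
    println!("{:.1}", cosine_similarity(&[1.0, 0.0], &[0.0, 1.0])); // 0.0
    println!("{:.1}", cosine_similarity(&[1.0, 0.0], &[2.0, 0.0])); // 1.0
}
```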