Usage sample
```rust
use aleph_alpha_client::{Client, TaskCompletion, How};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Authenticate against the API. Fetches a token.
    let client = Client::from_env().unwrap();
    // Name of the model we want to use. Larger models usually give better answers, but are
    // also more costly.
    let model = "luminous-base";
    // The task we want to perform. Here we want to continue the sentence: "An apple a day ..."
    let task = TaskCompletion::from_text("An apple a day");
    // Retrieve the answer from the API.
    let response = client.completion(&task, model, &How::default()).await.unwrap();
    // Print the entire sentence, including the completion.
    println!("An apple a day{}", response.completion);
}
```
Structs§
- ChatOutput
- ChatSampling - Sampling controls how the tokens (“words”) are selected for the completion. This differs from crate::Sampling in that it does not support the top_k parameter.
- Client - Execute jobs against the Aleph Alpha API
- CompletionOutput - Completion and meta-information returned by a completion task
- DetokenizationOutput
- Distribution - Logprob information for a single token
- Explanation - The explanation for the target.
- ExplanationOutput - The result of an explanation request.
- Granularity - Granularity parameters for the TaskExplanation
- How - Controls how to execute a task
- ImageScore - Score for a part of an image.
- Logprob
- Message
- Prompt - A prompt which is passed to the model for inference. Usually it is one text item, but it could also be a combination of several modalities, such as images and text.
- Sampling - Sampling controls how the tokens (“words”) are selected for the completion.
- Stopping - Controls the conditions under which the language model stops generating text.
- TaskBatchSemanticEmbedding - Create embeddings for multiple prompts
- TaskChat
- TaskCompletion - Completes a prompt, e.g. continues a text.
- TaskDetokenization - Input for a crate::Client::detokenize request.
- TaskExplanation - Input for a crate::Client::explanation request.
- TaskSemanticEmbedding - Create embeddings for prompts which can be used for downstream tasks, e.g. search or classifiers
- TaskSemanticEmbeddingWithInstruction - Allows you to choose a semantic representation fitting for your use case.
- TaskTokenization - Input for a crate::Client::tokenize request.
- TextScore - Score for the part of a text-modality
- TokenizationOutput
- Usage
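The Sampling and ChatSampling entries above both control how tokens are selected for the completion. As an illustration only, the self-contained sketch below shows what a top_k cutoff does conceptually (keep the k most probable tokens and renormalize their probabilities); the function name `top_k_filter` and the toy distribution are made up for this example and are not part of the crate's API or its server-side implementation.

```rust
/// Illustrative top_k filtering: keep the k most probable tokens and
/// renormalize so the remaining probabilities sum to 1.0.
fn top_k_filter(probs: &[(char, f64)], k: usize) -> Vec<(char, f64)> {
    let mut sorted: Vec<(char, f64)> = probs.to_vec();
    // Sort by probability, highest first.
    sorted.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    // Discard everything outside the top k.
    sorted.truncate(k);
    // Renormalize the surviving probabilities.
    let total: f64 = sorted.iter().map(|(_, p)| p).sum();
    sorted.into_iter().map(|(t, p)| (t, p / total)).collect()
}

fn main() {
    // Toy next-token distribution over three candidate tokens.
    let probs = [('a', 0.5), ('b', 0.3), ('c', 0.2)];
    // With k = 2, only 'a' and 'b' survive, renormalized to 0.625 and 0.375.
    for (token, p) in top_k_filter(&probs, 2) {
        println!("{token}: {p}");
    }
}
```

ChatSampling omits this parameter, so with the chat endpoint only the remaining controls (such as temperature) shape token selection.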
Enums§
- ChatEvent
- CompletionEvent
- Error - Errors returned by the Aleph Alpha Client
- ItemExplanation - Explanation scores for a crate::prompt::Modality or the target. There is one score for each part of a modality, respectively the target, with the parts being chosen according to the PromptGranularity
- Logprobs
- Modality - The prompt for models can be a combination of different modalities (Text and Image). The types of modality that are supported depend on the Model in question.
- PromptGranularity - At which granularity should the target be explained in terms of the prompt. If you choose, for example, PromptGranularity::Sentence, then we report the importance score of each sentence in the prompt towards generating the target output. The default is PromptGranularity::Auto, which means we will try to find the granularity that brings you closest to around 30 explanations. For large prompts this would likely be sentences; for short prompts it might be individual words or even tokens.
- SemanticRepresentation - Allows you to choose a semantic representation fitting for your use case.
Traits§
- Job - A job sent to the Aleph Alpha API using the HTTP client. A job wraps all the knowledge required for the Aleph Alpha API to specify its result. Notably, it includes the model(s) the job is executed on. This allows the trait to hold in the presence of services which use more than one model and task type to achieve their result. A bare crate::TaskCompletion, on the other hand, cannot implement this trait directly, since its result would depend on which model is chosen to execute it. You can remedy this by turning the completion task into a job, calling Task::with_model.
- StreamJob - A job sent to the Aleph Alpha API using the HTTP client. A job wraps all the knowledge required for the Aleph Alpha API to specify its result. Notably, it includes the model(s) the job is executed on. This allows the trait to hold in the presence of services which use more than one model and task type to achieve their result. A bare crate::TaskCompletion, on the other hand, cannot implement this trait directly, since its result would depend on which model is chosen to execute it. You can remedy this by turning the completion task into a job, calling Task::with_model.
- StreamTask - A task sent to the Aleph Alpha API using the HTTP client. Requires a model to be specified before it can be executed. Returns a stream of results.
- Task - A task sent to the Aleph Alpha API using the HTTP client. Requires a model to be specified before it can be executed.
Functions§
- cosine_similarity - Intended to compare embeddings.
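To make the comparison concrete, here is a self-contained sketch of cosine similarity (dot product of the two vectors divided by the product of their norms), the standard way to compare embedding vectors. This is an illustrative reimplementation under that standard definition, not the crate's own source; the crate exports its own cosine_similarity function.

```rust
/// Cosine similarity between two vectors: dot(a, b) / (|a| * |b|).
/// Returns 1.0 for vectors pointing the same way, 0.0 for orthogonal ones.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Parallel vectors: similarity is (numerically) 1.0.
    println!("{}", cosine_similarity(&[1.0, 2.0], &[2.0, 4.0]));
    // Orthogonal vectors: similarity is 0.0.
    println!("{}", cosine_similarity(&[1.0, 0.0], &[0.0, 1.0]));
}
```

In an embedding workflow you would pass two embedding vectors returned by a semantic embedding task; a score near 1.0 indicates semantically similar prompts.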