Unofficial Rust client library for the Aleph Alpha API
Example usage:
use aleph_alpha_api::{error::ApiError, Client, CompletionRequest, LUMINOUS_BASE};

const AA_API_TOKEN: &str = "<YOUR_AA_API_TOKEN>";

async fn print_completion() -> Result<(), ApiError> {
    // Create a client authenticated with your Aleph Alpha API token.
    let client = Client::new(AA_API_TOKEN.to_owned())?;

    // Build a completion request for the luminous-base model: prompt text,
    // up to 10 completion tokens, plus sampling parameters via the builder.
    let request =
        CompletionRequest::from_text(LUMINOUS_BASE.to_owned(), "An apple a day".to_owned(), 10)
            .temperature(0.8)
            .top_k(50)
            .top_p(0.95)
            .best_of(2)
            .minimum_tokens(2);

    // Send the completion request and await the response.
    let response = client.completion(&request, Some(true)).await?;

    println!("An apple a day{}", response.best_text());
    Ok(())
}
Modules

Macros

Structs
- BatchSemanticEmbeddingRequest
- BatchSemanticEmbeddingResponse
- BoundingBox - Bounding box in logical coordinates, from 0 to 1, with (0,0) being the upper left corner, relative to the entire image.
- Client
- CompletionOutput
- CompletionRequest
- CompletionResponse
- DetokenizationRequest
- DetokenizationResponse
- EmbeddingRequest
- EmbeddingResponse
- EvaluationRequest
- EvaluationResponse
- EvaluationResult
- ExplanationItem
- ExplanationRequest
- ExplanationResponse - The top-level response data structure returned from an explanation request.
- ImageControl
- Prompt
- PromptGranularity
- ScoredRect
- ScoredSegment
- SemanticEmbeddingRequest - Embeds a prompt using a specific model and semantic embedding method. The resulting vectors can be used for downstream tasks (e.g. semantic similarity) and models (e.g. classifiers). See the sketch after this list.
- SemanticEmbeddingResponse
- TextControl
- TokenControl
- TokenizationRequest
- TokenizationResponse
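As a rough sketch of how the embedding types fit together, the snippet below builds a SemanticEmbeddingRequest and sends it through the Client, mirroring the completion example above. The field names (model, prompt, representation, compress_to_size), the EmbeddingRepresentation::Symmetric variant, the Prompt::from_text constructor, the semantic_embed method, and the embedding response field are assumptions modelled on the Aleph Alpha HTTP API rather than verified signatures from this crate; check the struct documentation for the exact definitions.

use aleph_alpha_api::{
    error::ApiError, Client, EmbeddingRepresentation, Prompt, SemanticEmbeddingRequest,
    LUMINOUS_BASE,
};

async fn print_embedding(client: &Client) -> Result<(), ApiError> {
    // NOTE: field, constructor, and method names below are assumptions based on the
    // HTTP /semantic_embed endpoint; consult SemanticEmbeddingRequest and Client
    // for the actual API.
    let request = SemanticEmbeddingRequest {
        model: LUMINOUS_BASE.to_owned(),
        prompt: Prompt::from_text("An apple a day keeps the doctor away."),
        representation: EmbeddingRepresentation::Symmetric,
        compress_to_size: Some(128),
        ..Default::default()
    };

    let response = client.semantic_embed(&request, Some(true)).await?;

    // The response is assumed to carry the embedding as a Vec<f32>.
    println!("embedding with {} dimensions", response.embedding.len());
    Ok(())
}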
Enums
- ControlTokenOverlap
- EmbeddingRepresentation - Type of embedding representation to embed the prompt with.
- Hosting - Optional parameter that specifies which datacenters may process the request. You can either set the parameter to “aleph-alpha” or omit it (defaulting to null). See the sketch after this list.
- ItemImportance
- Modality - The prompt for a model can be a combination of different modalities (Text and Image). Which modalities are supported depends on the model in question.
- Postprocessing
- PromptGranularityType
- TargetGranularity - How many explanations should be returned in the output.
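To illustrate how per-request options such as Hosting attach to a request, the sketch below pins a completion request to the “aleph-alpha” datacenters. Everything except the hosting call reuses the API shown in the example above; the .hosting(...) builder method and the Hosting::AlephAlpha variant name are assumptions for illustration, so check the Hosting enum and CompletionRequest docs for the exact spelling.

use aleph_alpha_api::{error::ApiError, Client, CompletionRequest, Hosting, LUMINOUS_BASE};

async fn pinned_completion(client: &Client) -> Result<(), ApiError> {
    // NOTE: `.hosting(...)` and `Hosting::AlephAlpha` are assumed names;
    // leaving the hosting option unset corresponds to the default (null).
    let request =
        CompletionRequest::from_text(LUMINOUS_BASE.to_owned(), "An apple a day".to_owned(), 10)
            .hosting(Hosting::AlephAlpha);

    let response = client.completion(&request, Some(true)).await?;

    println!("An apple a day{}", response.best_text());
    Ok(())
}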