pub struct EmbeddingRequest {
pub model: String,
pub hosting: Option<Hosting>,
pub prompt: Prompt,
pub layers: Vec<i32>,
pub tokens: Option<bool>,
pub pooling: Vec<String>,
pub embedding_type: Option<String>,
pub normalize: Option<bool>,
pub contextual_control_threshold: Option<f64>,
pub control_log_additive: Option<bool>,
}
Fields
model: String
Name of the model to use. A model name refers to a model architecture (number of parameters, among other things). The latest version of the model is always used. The model output contains information about the model version.
hosting: Option<Hosting>
Possible values: [aleph-alpha, None]. Optional parameter that specifies which datacenters may process the request. You can either set the parameter to “aleph-alpha” or omit it (defaulting to null). Not setting this value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers; choose this option for maximum availability. Setting it to “aleph-alpha” allows us to process the request only in our own datacenters; choose this option for maximal data privacy.
prompt: Prompt
This field is used to send prompts to the model. A prompt can either be a text prompt or a multimodal prompt. A text prompt is a string of text. A multimodal prompt is an array of prompt items. It can be a combination of text, images, and token ID arrays. In the case of a multimodal prompt, the prompt items are concatenated and a single prompt is used for the model. Tokenization: Token ID arrays are used as-is. Text prompt items are tokenized using the tokenizer specific to the model. Each image is converted into 144 tokens.
layers: Vec<i32>
A list of layer indices from which to return embeddings.
- Index 0 corresponds to the word embeddings used as input to the first transformer layer
- Index 1 corresponds to the hidden state as output by the first transformer layer, index 2 to the output of the second layer, and so on.
- Index -1 corresponds to the last transformer layer (not the language modelling head), index -2 to the second to last, and so on.
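As a sketch of how these indices resolve, the following helper (hypothetical, not part of this crate) maps a signed layer index to the position it selects, assuming a model with `num_layers` transformer layers:

```rust
// Hypothetical helper illustrating the layer-index convention above.
// Position 0 = input word embeddings, positions 1..=num_layers = the
// outputs of transformer layers 1..=num_layers. Negative indices count
// back from the last layer: -1 = last layer, -2 = second to last.
fn resolve_layer_index(index: i32, num_layers: i32) -> Option<i32> {
    if index >= 0 && index <= num_layers {
        Some(index)
    } else if index < 0 && index >= -num_layers {
        Some(num_layers + index + 1)
    } else {
        None // out of range for this model
    }
}
```

For a 12-layer model, index -1 resolves to position 12 (the last layer's output) and index 0 to the input embeddings.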
tokens: Option<bool>
Flag indicating whether the tokenized prompt is to be returned (true) or not (false).
pooling: Vec<String>
Pooling operation to use. Pooling operations include:
- “mean”: Aggregate token embeddings across the sequence dimension using an average.
- “weighted_mean”: Position-weighted mean across the sequence dimension, with later tokens having a higher weight.
- “max”: Aggregate token embeddings across the sequence dimension using a maximum.
- “last_token”: Use the last token.
- “abs_max”: Aggregate token embeddings across the sequence dimension using a maximum of absolute values.
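To make the aggregation concrete, here is a self-contained sketch of the “mean” and “max” operations over per-token embedding vectors (illustrative code, not taken from this crate):

```rust
// `tokens` is a sequence of per-token embedding vectors, all of the
// same dimension. Pooling collapses the sequence dimension, leaving a
// single vector of that dimension.

// "mean": average each embedding dimension across the sequence.
fn mean_pool(tokens: &[Vec<f64>]) -> Vec<f64> {
    let dim = tokens[0].len();
    let mut out = vec![0.0; dim];
    for t in tokens {
        for (o, v) in out.iter_mut().zip(t) {
            *o += v;
        }
    }
    let n = tokens.len() as f64;
    out.iter_mut().for_each(|o| *o /= n);
    out
}

// "max": take the per-dimension maximum across the sequence.
fn max_pool(tokens: &[Vec<f64>]) -> Vec<f64> {
    let dim = tokens[0].len();
    let mut out = vec![f64::NEG_INFINITY; dim];
    for t in tokens {
        for (o, v) in out.iter_mut().zip(t) {
            *o = o.max(*v);
        }
    }
    out
}
```

“abs_max” is the same shape of computation as `max_pool`, but comparing absolute values.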
embedding_type: Option<String>
Explicitly sets the embedding type to be passed to the model. This parameter was created to allow for semantic_embed embeddings and will be deprecated. Please use the semantic_embed endpoint instead.
normalize: Option<bool>
Return normalized embeddings. This can be used to save on additional compute when applying a cosine similarity metric.
contextual_control_threshold: Option<f64>
If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-null value, the control parameters are applied to similar tokens as well. Controls that have been applied to one token will then be applied to all other tokens that have at least the similarity score defined by this parameter. The similarity score is the cosine similarity of token embeddings.
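For reference, the cosine similarity used for this threshold can be computed as follows (an illustrative standalone function, not part of this crate's API):

```rust
// Cosine similarity of two embedding vectors: the dot product divided
// by the product of their Euclidean norms. Ranges from -1.0 to 1.0.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}
```

A token whose embedding has cosine similarity of at least the configured threshold with a controlled token would then receive the same control.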
control_log_additive: Option<bool>
true: apply controls on prompt items by adding log(control_factor) to attention scores.
false: apply controls on prompt items by (attention_scores - -attention_scores.min(-1)) * control_factor.
Implementations
impl EmbeddingRequest
pub fn tokens(self, tokens: bool) -> Self
pub fn embedding_type(self, embedding_type: String) -> Self
pub fn normalize(self, normalize: bool) -> Self
pub fn contextual_control_threshold(self, contextual_control_threshold: f64) -> Self
pub fn control_log_additive(self, control_log_additive: bool) -> Self
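These methods follow a consuming builder pattern: each takes `self` by value, sets the corresponding optional field, and returns the modified request so calls can be chained. A minimal self-contained sketch of the same pattern (a simplified stand-in, not this crate's actual definition):

```rust
// Simplified stand-in for EmbeddingRequest, showing only the
// consuming-builder shape of the setters above.
#[derive(Debug, Default)]
struct Request {
    tokens: Option<bool>,
    normalize: Option<bool>,
    contextual_control_threshold: Option<f64>,
}

impl Request {
    // Each setter consumes `self`, fills one Option, and hands the
    // request back, which is what makes chaining possible.
    fn tokens(mut self, tokens: bool) -> Self {
        self.tokens = Some(tokens);
        self
    }
    fn normalize(mut self, normalize: bool) -> Self {
        self.normalize = Some(normalize);
        self
    }
    fn contextual_control_threshold(mut self, threshold: f64) -> Self {
        self.contextual_control_threshold = Some(threshold);
        self
    }
}

fn main() {
    // Start from Default (all options unset) and chain the setters.
    let req = Request::default().tokens(true).normalize(true);
    assert_eq!(req.tokens, Some(true));
    assert_eq!(req.contextual_control_threshold, None);
}
```

Taking `self` by value rather than `&mut self` avoids partially built intermediate states: the request moves through the chain and only the final value is bound.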
Trait Implementations
impl Debug for EmbeddingRequest
impl Default for EmbeddingRequest
fn default() -> EmbeddingRequest
Auto Trait Implementations
impl Freeze for EmbeddingRequest
impl RefUnwindSafe for EmbeddingRequest
impl Send for EmbeddingRequest
impl Sync for EmbeddingRequest
impl Unpin for EmbeddingRequest
impl UnwindSafe for EmbeddingRequest
Blanket Implementations
impl<T> BorrowMut<T> for T
where T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
impl<T> Pointable for T
impl<R, P> ReadPrimitive<R> for P
fn read_from_little_endian(read: &mut R) -> Result<Self, Error>
Same as ReadEndian::read_from_little_endian().