pub struct SemanticEmbeddingRequest {
    pub model: String,
    pub hosting: Option<Hosting>,
    pub prompt: Prompt,
    pub representation: EmbeddingRepresentation,
    pub compress_to_size: Option<i32>,
    pub normalize: Option<bool>,
    pub contextual_control_threshold: Option<f64>,
    pub control_log_additive: Option<bool>,
}
Embeds a prompt using a specific model and semantic embedding method. The resulting vectors can be used for downstream tasks (e.g. semantic similarity) and models (e.g. classifiers).
Fields
model: String
Name of the model to use. A model name refers to a model’s architecture (among other things, the number of parameters). The most recent version of the model is always used; the model output contains information about the model version. To create semantic embeddings, please use luminous-base.
hosting: Option<Hosting>
Possible values: [aleph-alpha, None]
Optional parameter that specifies which datacenters may process the request. You can either set it to “aleph-alpha” or omit it (defaulting to None). Omitting the value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers; choose this option for maximum availability. Setting it to “aleph-alpha” restricts processing to our own datacenters; choose this option for maximal data privacy.
prompt: Prompt
This field is used to send prompts to the model. A prompt can either be a text prompt or a multimodal prompt. A text prompt is a string of text; a multimodal prompt is an array of prompt items that can combine text, images, and token ID arrays. In the case of a multimodal prompt, the prompt items are concatenated into a single prompt for the model. Tokenization: token ID arrays are used as-is, text prompt items are tokenized using the tokenizer specific to the model, and each image is converted into 144 tokens.
representation: EmbeddingRepresentation
Type of embedding representation to embed the prompt with.
compress_to_size: Option<i32>
The default behavior is to return the full embedding with 5120 dimensions. With this parameter you can compress the returned embedding to 128 dimensions. The compression is expected to result in a small drop in accuracy (4-6%), with the benefit of being much smaller, which makes comparing these embeddings much faster for use cases where speed is critical. The compressed embeddings can also perform better if you are embedding very short texts or documents.
normalize: Option<bool>
Return normalized embeddings. This can be used to save on additional compute when applying a cosine similarity metric.
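Since normalized embeddings have unit length, a cosine similarity between them reduces to a plain dot product. The sketch below illustrates that comparison; it is independent of any client API, and the function name is ours, for illustration only.

// Cosine similarity of two embeddings requested with normalize set to true.
// Because both vectors already have unit length, the dot product alone gives
// the cosine similarity (the usual denominator equals 1).
fn cosine_similarity_normalized(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    // Toy unit-length vectors standing in for returned embeddings.
    let doc = [0.6_f32, 0.8, 0.0];
    let query = [0.8_f32, 0.6, 0.0];
    println!("similarity = {}", cosine_similarity_normalized(&doc, &query));
}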
contextual_control_threshold: Option<f64>
If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-null value, we apply the control parameters to similar tokens as well: controls that have been applied to one token will then be applied to all other tokens that have at least the similarity score defined by this parameter. The similarity score is the cosine similarity of token embeddings.
control_log_additive: Option<bool>
true: apply controls on prompt items by adding the log(control_factor) to attention scores.
false: apply controls on prompt items by (attention_scores - -attention_scores.min(-1)) * control_factor.
Implementations
impl SemanticEmbeddingRequest
pub fn hosting(self, hosting: Hosting) -> Self
pub fn compress_to_size(self, compress_to_size: i32) -> Self
pub fn normalize(self, normalize: bool) -> Self
pub fn contextual_control_threshold(self, contextual_control_threshold: f64) -> Self
pub fn control_log_additive(self, control_log_additive: bool) -> Self
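The builder methods above cover only the optional fields; the required fields (model, prompt, representation) are public and must be set directly. The following is a hedged sketch of assembling a request under stated assumptions: the crate path, Prompt::from_text, EmbeddingRepresentation::Document, and Hosting::AlephAlpha are assumptions for illustration, not verified API.

// Sketch only: the import path and the constructors/variants marked below are assumptions.
use aleph_alpha_api::{EmbeddingRepresentation, Hosting, Prompt, SemanticEmbeddingRequest};

fn example_request() -> SemanticEmbeddingRequest {
    // Required fields have no builder methods, so they are set through a
    // struct literal; the remaining fields are filled in from Default.
    let request = SemanticEmbeddingRequest {
        model: "luminous-base".to_owned(),
        prompt: Prompt::from_text("An example document to embed."), // assumed constructor
        representation: EmbeddingRepresentation::Document,          // assumed variant
        ..Default::default()
    };

    // Optional fields are then layered on with the consuming builder methods above.
    request
        .hosting(Hosting::AlephAlpha) // assumed variant name
        .compress_to_size(128)        // request the 128-dimensional compressed embedding
        .normalize(true)              // unit-length vectors, convenient for cosine similarity
}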
Trait Implementations
impl Debug for SemanticEmbeddingRequest
impl Default for SemanticEmbeddingRequest
fn default() -> SemanticEmbeddingRequest
Auto Trait Implementations
impl Freeze for SemanticEmbeddingRequest
impl RefUnwindSafe for SemanticEmbeddingRequest
impl Send for SemanticEmbeddingRequest
impl Sync for SemanticEmbeddingRequest
impl Unpin for SemanticEmbeddingRequest
impl UnwindSafe for SemanticEmbeddingRequest
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; converts self into a Right variant of Either<Self, Self> otherwise.
impl<T> Pointable for T
impl<R, P> ReadPrimitive<R> for P
fn read_from_little_endian(read: &mut R) -> Result<Self, Error>
Same as ReadEndian::read_from_little_endian().