Struct sif_embedding::sif::Sif
pub struct Sif<'w, 'p, W, P> { /* private fields */ }
An implementation of Smooth Inverse Frequency and Common Component Removal, simple but powerful techniques for sentence embeddings described in the paper: Sanjeev Arora, Yingyu Liang, and Tengyu Ma. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. ICLR 2017.
Brief description of the API
The algorithm consists of two steps:
- Compute sentence embeddings with the SIF weighting.
- Remove the common components from the sentence embeddings.
The common components are computed from input sentences.
Our API is designed to allow reuse of the common components once they are computed, because it is not always possible to obtain enough sentences as queries to compute them.
Sif::fit computes the common components from input sentences and returns a fitted instance of Sif.
Sif::embeddings computes sentence embeddings with the fitted components.
Examples
use std::io::BufReader;
use finalfusion::compat::text::ReadText;
use finalfusion::embeddings::Embeddings;
use wordfreq::WordFreq;
use sif_embedding::{Sif, SentenceEmbedder};
// Loads word embeddings from a pretrained model.
let word_embeddings_text = "las 0.0 1.0 2.0\nvegas -3.0 -4.0 -5.0\n";
let mut reader = BufReader::new(word_embeddings_text.as_bytes());
let word_embeddings = Embeddings::read_text(&mut reader)?;
// Loads word probabilities from a pretrained model.
let word_probs = WordFreq::new([("las", 0.4), ("vegas", 0.6)]);
// Prepares input sentences.
let sentences = ["las vegas", "mega vegas"];
// Fits the model with input sentences.
let model = Sif::new(&word_embeddings, &word_probs);
let model = model.fit(&sentences)?;
// Computes sentence embeddings in shape (n, m),
// where n is the number of sentences and m is the number of dimensions.
let sent_embeddings = model.embeddings(sentences)?;
assert_eq!(sent_embeddings.shape(), &[2, 3]);

Only SIF weighting
If you want to apply only the SIF weighting and avoid computing the common components,
use Sif::with_parameters and set n_components to 0.
In this case, you can skip Sif::fit and call Sif::embeddings directly
because there is no parameter to fit
(although the quality of the embeddings may be worse).
use std::io::BufReader;
use finalfusion::compat::text::ReadText;
use finalfusion::embeddings::Embeddings;
use wordfreq::WordFreq;
use sif_embedding::{Sif, SentenceEmbedder};
// Loads word embeddings from a pretrained model.
let word_embeddings_text = "las 0.0 1.0 2.0\nvegas -3.0 -4.0 -5.0\n";
let mut reader = BufReader::new(word_embeddings_text.as_bytes());
let word_embeddings = Embeddings::read_text(&mut reader)?;
// Loads word probabilities from a pretrained model.
let word_probs = WordFreq::new([("las", 0.4), ("vegas", 0.6)]);
// When setting `n_components` to `0`, no common components are removed, and
// the sentence embeddings can be computed without `fit`.
let model = Sif::with_parameters(&word_embeddings, &word_probs, 1e-3, 0)?;
let sent_embeddings = model.embeddings(["las vegas", "mega vegas"])?;
assert_eq!(sent_embeddings.shape(), &[2, 3]);

Serialization of fitted parameters
If you want to serialize and deserialize the fitted parameters,
use Sif::serialize and Sif::deserialize.
use std::io::BufReader;
use approx::assert_relative_eq;
use finalfusion::compat::text::ReadText;
use finalfusion::embeddings::Embeddings;
use wordfreq::WordFreq;
use sif_embedding::{Sif, SentenceEmbedder};
// Loads word embeddings from a pretrained model.
let word_embeddings_text = "las 0.0 1.0 2.0\nvegas -3.0 -4.0 -5.0\n";
let mut reader = BufReader::new(word_embeddings_text.as_bytes());
let word_embeddings = Embeddings::read_text(&mut reader)?;
// Loads word probabilities from a pretrained model.
let word_probs = WordFreq::new([("las", 0.4), ("vegas", 0.6)]);
// Prepares input sentences.
let sentences = ["las vegas", "mega vegas"];
// Fits the model and computes sentence embeddings.
let model = Sif::new(&word_embeddings, &word_probs);
let model = model.fit(&sentences)?;
let sent_embeddings = model.embeddings(&sentences)?;
// Serializes and deserializes the fitted parameters.
let bytes = model.serialize()?;
let other = Sif::deserialize(&bytes, &word_embeddings, &word_probs)?;
let other_embeddings = other.embeddings(&sentences)?;
assert_relative_eq!(sent_embeddings, other_embeddings);

Implementations
impl<'w, 'p, W, P> Sif<'w, 'p, W, P>
where
    W: WordEmbeddings,
    P: WordProbabilities,
pub const fn new(word_embeddings: &'w W, word_probs: &'p P) -> Self
Creates a new instance with default parameters defined by
DEFAULT_PARAM_A and DEFAULT_N_COMPONENTS.
Arguments
- word_embeddings - Word embeddings.
- word_probs - Word probabilities.
pub fn with_parameters(
    word_embeddings: &'w W,
    word_probs: &'p P,
    param_a: Float,
    n_components: usize
) -> Result<Self>
Creates a new instance with manually specified parameters.
Arguments
- word_embeddings - Word embeddings.
- word_probs - Word probabilities.
- param_a - A parameter a for SIF-weighting that should be positive.
- n_components - The number of principal components to remove. When set to 0, no principal components are removed.
Errors
Returns an error if param_a is not positive.
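For instance, a minimal sketch of the error case, continuing the examples above:

// `param_a` must be positive, so a value of 0.0 is rejected.
assert!(Sif::with_parameters(&word_embeddings, &word_probs, 0.0, 0).is_err());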
pub const fn separator(self, separator: char) -> Self
Sets a separator for sentence segmentation (default: DEFAULT_SEPARATOR).
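A minimal sketch of overriding the separator, continuing the examples above (the character '/' is chosen only for illustration):

// Use '/' as the separator instead of DEFAULT_SEPARATOR.
let model = Sif::new(&word_embeddings, &word_probs).separator('/');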
pub fn n_samples_to_fit(self, n_samples_to_fit: usize) -> Result<Self>
Sets the number of samples to fit the model (default: DEFAULT_N_SAMPLES_TO_FIT).
Errors
Returns an error if n_samples_to_fit is 0.
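A minimal sketch, continuing the examples above (the value 1000 is arbitrary):

// Samples at most 1000 sentences when fitting; passing 0 would return an error.
let model = Sif::new(&word_embeddings, &word_probs).n_samples_to_fit(1000)?;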
pub fn deserialize(
    bytes: &[u8],
    word_embeddings: &'w W,
    word_probs: &'p P
) -> Result<Self>
Deserializes the model.
Arguments
- bytes - Byte sequence exported by Self::serialize.
- word_embeddings - Word embeddings.
- word_probs - Word probabilities.
word_embeddings and word_probs must be the same as those used in serialization.
Trait Implementations
impl<'w, 'p, W, P> SentenceEmbedder for Sif<'w, 'p, W, P>
where
    W: WordEmbeddings,
    P: WordProbabilities,
fn embedding_size(&self) -> usize
Returns the number of dimensions for sentence embeddings, which is the same as the number of dimensions for word embeddings.
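Continuing the earlier examples with 3-dimensional word embeddings, a small sketch:

// The sentence-embedding dimensionality equals the word-embedding dimensionality.
assert_eq!(model.embedding_size(), 3);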
fn fit<S>(self, sentences: &[S]) -> Result<Self>
Fits the model with input sentences.
The sentences used for fitting are randomly sampled from sentences, up to Self::n_samples_to_fit.
If n_components is 0, this method does nothing and returns self.
Errors
Returns an error if sentences is empty.
Complexities
- Time complexity: O(L*D*S + max(D,S)^3)
- Space complexity: O(D*S + max(D,S)^2)

where

- L is the average number of words in a sentence.
- D is the number of dimensions for word embeddings (embedding_size).
- S is the number of sentences used to fit (n_samples_to_fit).
fn embeddings<I, S>(&self, sentences: I) -> Result<Array2<Float>>
Computes embeddings for input sentences using the fitted model.
If n_components is 0, the fitting is not required.
Errors
Returns an error if the model is not fitted.
Complexities
- Time complexity: O(L*D*N + C*D*N)
- Space complexity: O(D*N)

where

- L is the average number of words in a sentence.
- D is the number of dimensions for word embeddings (embedding_size).
- N is the number of sentences (sentences.len()).
- C is the number of components to remove (n_components).