🍕 Features
- Supports synchronous usage. No dependency on Tokio.
- Uses Hugging Face's tokenizers crate for blazing-fast encodings.
- Supports batch embeddings with parallelism using Rayon.
The default embedding supports "query" and "passage" prefixes for the input text. The default model is Flag Embedding, which ranks at the top of the MTEB leaderboard.
🔍 Not looking for Rust?
- Python 🐍: fastembed
- Go 🐳: fastembed-go
- JavaScript 🌐: fastembed-js
🤖 Models
- BAAI/bge-base-en
- BAAI/bge-base-en-v1.5
- BAAI/bge-small-en
- BAAI/bge-small-en-v1.5 - Default
- BAAI/bge-base-zh-v1.5
- sentence-transformers/all-MiniLM-L6-v2
- intfloat/multilingual-e5-large
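
Any of the models above can be selected when constructing the embedding model. The sketch below is illustrative only; the `EmbeddingModel` variant name is an assumption, so check the crate's EmbeddingModel enum for the exact spelling:

use fastembed::{EmbeddingBase, EmbeddingModel, FlagEmbedding, InitOptions};

// Assumed variant name for BAAI/bge-base-en-v1.5; verify against the crate's EmbeddingModel enum
let model: FlagEmbedding = FlagEmbedding::try_new(InitOptions {
    model_name: EmbeddingModel::BGEBaseENV15,
    ..Default::default()
})?;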
🚀 Installation
Run the following Cargo command in your project directory:

cargo add fastembed

Or add the following line to your Cargo.toml:

fastembed = "2"
📖 Usage
use fastembed::{EmbeddingBase, EmbeddingModel, FlagEmbedding, InitOptions};

// With default InitOptions
let model: FlagEmbedding = FlagEmbedding::try_new(Default::default())?;

// With custom InitOptions
let model: FlagEmbedding = FlagEmbedding::try_new(InitOptions {
    model_name: EmbeddingModel::BGEBaseEN, // select a model from the list above
    ..Default::default()
})?;

let documents = vec![
    "passage: Hello, World!",
    "query: Hello, World!",
    "passage: This is an example passage.",
    "fastembed-rs is licensed under Apache 2.0",
];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(documents, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 4
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 768
Supports passage and query embeddings for more accurate results
// Generate embeddings for the passages
// The texts are prefixed with "passage" for better results
let passages = vec![
    "This is the first passage. It provides context for retrieval.",
    "Here is the second passage, which is a bit longer and adds more detail.",
    "And this is the third passage, the longest of the three examples.",
];

let embeddings = model.passage_embed(passages, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 3
println!("Passage embedding dimension: {}", embeddings[0].len()); // -> Passage embedding dimension: 768

// Generate embeddings for the query
// The text is prefixed with "query" for better retrieval
let query = "What is the answer to this generic question?";

let query_embedding = model.query_embed(query)?;

println!("Query embedding dimension: {}", query_embedding.len()); // -> Query embedding dimension: 768
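
Since the returned embeddings are plain vectors of f32, they can be compared directly. The sketch below is illustrative only; the cosine_similarity helper is not part of fastembed. It ranks the passages above against the query embedding:

// Hypothetical helper, not part of fastembed: cosine similarity between two vectors
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

// Score each passage embedding against the query embedding and pick the best match
let best = embeddings
    .iter()
    .enumerate()
    .map(|(i, passage)| (i, cosine_similarity(&query_embedding, passage)))
    .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap());

println!("Best matching passage: {:?}", best); // -> Some((index, score))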
🚒 Under the hood
Why fast?
It's important we justify the "fast" in FastEmbed. FastEmbed is fast because:
- Quantized model weights
- ONNX Runtime which allows for inference on CPU, GPU, and other dedicated runtimes
Why light?
- No dependency on Hugging Face Transformers and its hidden transitive dependencies
Why accurate?
- Better than OpenAI Ada-002
- Top of the embedding leaderboards, e.g. MTEB
📄 LICENSE
Apache 2.0 © 2024