🍕 Features
- Supports synchronous usage. No dependency on Tokio.
- Uses @huggingface/tokenizers for fast encodings.
- Supports batch embedding generation with parallelism using @rayon-rs/rayon.
The default text embedding model supports "query" and "passage" prefixes for the input text. The default model is Flag Embedding, which ranks at the top of the MTEB leaderboard.
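For example, in a retrieval setting each input string can be prefixed so the model knows whether it is a search query or an indexed passage (a minimal illustrative snippet; the full setup is shown under Usage below):

```rust
let documents = vec![
    "query: What is the capital of France?",
    "passage: Paris is the capital of France.",
];
```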
🔍 Not looking for Rust?
- Python 🐍: fastembed
- Go 🐳: fastembed-go
- JavaScript 🌐: fastembed-js
🤖 Models
- BAAI/bge-base-en-v1.5
- BAAI/bge-small-en-v1.5 - Default
- BAAI/bge-large-en-v1.5
- sentence-transformers/all-MiniLM-L6-v2
- sentence-transformers/paraphrase-MiniLM-L12-v2
- nomic-ai/nomic-embed-text-v1
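If you prefer to discover the available models programmatically, the crate exposes a listing helper; a minimal sketch, assuming the v3 `TextEmbedding::list_supported_models()` API (check the crate docs for your version):

```rust
use fastembed::TextEmbedding;

// Print every embedding model this build of fastembed can load.
for model_info in TextEmbedding::list_supported_models() {
    println!("{:?}", model_info);
}
```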
🚀 Installation
Run the following command in your project directory:
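```bash
cargo add fastembed
```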
Or add the following to your Cargo.toml:
```toml
[dependencies]
fastembed = "3"
```
📖 Usage
```rust
use fastembed::{TextEmbedding, InitOptions, EmbeddingModel};

// With default InitOptions
let model = TextEmbedding::try_new(Default::default())?;

// With custom InitOptions
let model = TextEmbedding::try_new(InitOptions {
    model_name: EmbeddingModel::BGESmallENV15,
    show_download_progress: true,
    ..Default::default()
})?;

let documents = vec![
    "passage: Hello, World!",
    "query: Hello, World!",
    "passage: This is an example passage.",
    // You can leave out the prefix, but it's recommended
    "fastembed-rs is licensed under Apache 2.0",
];

// Generate embeddings with the default batch size, 256
let embeddings = model.embed(documents, None)?;

println!("Embeddings length: {}", embeddings.len()); // -> Embeddings length: 4
println!("Embedding dimension: {}", embeddings[0].len()); // -> Embedding dimension: 384
```
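The second argument to `embed` is the batch size; `None` falls back to the default of 256. A minimal sketch of tuning it, which controls how @rayon-rs/rayon splits the work across batches (assuming the same `model` and `documents` as above):

```rust
// Embed with an explicit batch size of 32 instead of the default 256.
let embeddings = model.embed(documents, Some(32))?;
```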
🚒 Under the hood
Why fast?
It's important that we justify the "fast" in FastEmbed. FastEmbed is fast because of:
- Quantized model weights
- ONNX Runtime, which allows for inference on CPU, GPU, and other dedicated runtimes
Why light?
- No hidden dependencies via Hugging Face Transformers
Why accurate?
- Better than OpenAI Ada-002
- Top of the embedding leaderboards, e.g. MTEB
📄 LICENSE
Apache 2.0 © 2024