EmbedAnything is a minimalist, yet highly performant, lightning-fast, lightweight, multisource, multimodal, and local embedding pipeline built in Rust. Whether you're working with text, images, audio, PDFs, websites, or other media, EmbedAnything streamlines the process of generating embeddings from various sources and seamlessly streaming (memory-efficient-indexing) them to a vector database. It supports dense, sparse, ONNX, model2vec and late-interaction embeddings, offering flexibility for a wide range of use cases.
🚀 Key Features
- Candle Backend: Supports BERT, Jina, ColPali, Splade, ModernBERT
- ONNX Backend: Supports BERT, Jina, ColPali, ColBERT, Splade, Reranker, ModernBERT
- Cloud Embedding Models: Supports OpenAI and Cohere.
- Multimodality: Works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV).
- Rust: All file processing is done in Rust for speed and efficiency.
- GPU Support: Hardware acceleration on GPUs is supported as well.
- Python Interface: Packaged as a Python library for seamless integration into your existing projects.
- Vector Streaming: Continuously create and stream embeddings, even on low-resource machines.
- No Dependency on PyTorch: Easy to deploy on the cloud, with a low memory footprint.
💡 What is Vector Streaming
Vector Streaming enables you to process files, generate embeddings, and stream them as they are produced. If you have 10 GB of files, embeddings are generated chunk by chunk (with chunks that can be segmented semantically) and stored in the vector database of your choice, so the full set of embeddings never has to sit in RAM at once.
The embedding process runs separately from the main process, using Rust's MPSC channels to maintain high performance, and there are no memory leaks because embeddings are saved directly to the vector database. Read more in our blog.
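A minimal sketch of the idea, assuming the Python API shown later in this README (model ids, paths, and the `adapter` keyword are illustrative and may differ in your version):

```python
import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

# Local dense embedder on the Candle backend
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L6-v2"
)

# Small batches keep memory usage low while chunks are embedded and streamed out
config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_directory("test_files", embedder=model, config=config)

# To stream straight into a vector database instead of collecting results in memory,
# pass one of the adapters from the Python examples (assumed keyword: adapter=...).
```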
🦀 Why Embed Anything
➡️ Faster execution.
➡️ No PyTorch dependency, thus a low memory footprint and easy cloud deployment.
➡️ Memory management: Rust enforces memory safety, preventing the memory leaks and crashes that can plague other languages.
➡️ True multithreading.
➡️ Runs embedding models locally and efficiently.
➡️ Candle allows inference on CUDA-enabled GPUs right out of the box.
➡️ Lower memory usage.
➡️ Supports a range of models: dense, sparse, late-interaction, reranker, ModernBERT.
🍓 Our Past Collaborations:
We have collaborated with reputed enterprises such as Elastic, Weaviate, SingleStore, and Datahours.
You can get in touch with us for further collaborations.
Benchmarks
This only measures embedding model inference speed, on the ONNX runtime. Code
⭐ Supported Models
We support any Hugging Face model on Candle. We also support the ONNX runtime for BERT and ColPali.
How to add a custom model on Candle: from_pretrained_hf
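A minimal sketch of loading a custom Hugging Face model with from_pretrained_hf (the model id, file path, and config values are illustrative):

```python
import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

# Load any BERT-style checkpoint from the Hugging Face Hub on the Candle backend
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L12-v2"
)

config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_file("test_files/attention.pdf", embedder=model, config=config)
```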
| Model | HF link |
|---|---|
| Jina | Jina Models |
| Bert | All Bert based models |
| CLIP | openai/clip-* |
| Whisper | OpenAI Whisper models |
| ColPali | starlight-ai/colpali-v1.2-merged-onnx |
| Colbert | answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2 and more |
| Splade | Splade Models and other Splade like models |
| Reranker | Jina Reranker Models, Xenova/bge-reranker |
| Model2Vec | model2vec, minishlab/potion-base-8M |
Splade Models:
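A sketch of loading a Splade-style sparse model (the `SparseBert` variant name is an assumption; swap in any Splade-like checkpoint):

```python
from embed_anything import EmbeddingModel, WhichModel

# Sparse (SPLADE) embeddings via the Candle backend
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.SparseBert, model_id="prithivida/Splade_PP_en_v1"
)
```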
ONNX-Runtime: from_pretrained_onnx
BERT
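A sketch of loading a BERT model on the ONNX runtime (the repo and file path are illustrative; the `hf_model_id`/`path_in_repo` keywords follow the ONNX section later in this README):

```python
from embed_anything import EmbeddingModel, WhichModel

# Point at any repo that ships an ONNX export of a BERT-style model
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert,
    hf_model_id="sentence-transformers/all-MiniLM-L6-v2",
    path_in_repo="onnx/model.onnx",
)
```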
ColPali
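A sketch of loading the ONNX ColPali checkpoint listed in the table above (the `ColpaliModel` class and `embed_file` call are assumptions based on the Python examples):

```python
from embed_anything import ColpaliModel

# Late-interaction vision-language retrieval over PDF pages
model = ColpaliModel.from_pretrained_onnx("starlight-ai/colpali-v1.2-merged-onnx", None)
data = model.embed_file("test_files/attention.pdf", batch_size=1)
```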
Colbert
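A sketch of running a ColBERT-style late-interaction model on ONNX (the ONNX file path and the `embed` call are assumptions and may need adjusting to your version):

```python
from embed_anything import EmbeddingModel, WhichModel

sentences = [
    "The quick brown fox jumps over the lazy dog",
    "The cat is sleeping on the mat",
]

# ColBERT produces one vector per token (late interaction) rather than one per text
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.ColBert,
    hf_model_id="answerdotai/answerai-colbert-small-v1",
    path_in_repo="onnx/model.onnx",
)
embeddings = model.embed(sentences, batch_size=2)
```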
ModernBERT
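A sketch of loading ModernBERT on the ONNX runtime (the `ONNXModel` variant name is illustrative):

```python
from embed_anything import EmbeddingModel, ONNXModel, WhichModel

model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.ModernBert, ONNXModel.ModernBERTBase
)
```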
ReRankers
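A sketch of reranking with an ONNX reranker (the `Reranker` class and `rerank` signature are assumptions based on the Python examples):

```python
from embed_anything import Dtype, Reranker

reranker = Reranker.from_pretrained("jinaai/jina-reranker-v1-turbo-en", dtype=Dtype.F16)

# Score each document against the query and return the top 2
results = reranker.rerank(
    ["What is the capital of France?"],
    ["France is a country in Europe.", "Paris is the capital of France."],
    2,
)
```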
Embed 4
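A sketch of using Cohere's Embed 4 through the cloud backend (the `WhichModel` variant and model id are assumptions; a Cohere API key is expected in the environment):

```python
from embed_anything import EmbeddingModel, WhichModel

# Initialize the model once
model = EmbeddingModel.from_pretrained_cloud(
    WhichModel.CohereVision, model_id="embed-v4.0"
)
```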
For Semantic Chunking
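A sketch of semantic chunking, where a second, smaller encoder decides the chunk boundaries (model ids and config values are illustrative):

```python
import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# with semantic encoder: this model groups sentences into semantically coherent chunks
semantic_encoder = EmbeddingModel.from_pretrained_hf(
    WhichModel.Jina, model_id="jinaai/jina-embeddings-v2-small-en"
)
config = TextEmbedConfig(
    chunk_size=1000,
    batch_size=32,
    splitting_strategy="semantic",
    semantic_encoder=semantic_encoder,
)
data = embed_anything.embed_file("test_files/attention.pdf", embedder=model, config=config)
```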
For late-chunking
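A sketch of late chunking, where the document is embedded with full context and split afterwards (the `late_chunking` flag follows the recent Python API and is an assumption for older versions):

```python
import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, WhichModel

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Jina, model_id="jinaai/jina-embeddings-v2-small-en"
)

config = TextEmbedConfig(
    chunk_size=1000,
    batch_size=8,
    splitting_strategy="sentence",
    late_chunking=True,
)

# Embed a single file
data = embed_anything.embed_file("test_files/attention.pdf", embedder=model, config=config)
```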
🧑🚀 Getting Started
💚 Installation
pip install embed-anything
For GPUs and special models like ColPali:
pip install embed-anything-gpu
Usage
➡️ Usage for version 0.3 and later
To use local embeddings: we support Bert and Jina.
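A minimal sketch of local text embedding (the model id and file path are illustrative):

```python
import embed_anything
from embed_anything import EmbeddingModel, WhichModel

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L6-v2"
)
data = embed_anything.embed_file("test_files/test.pdf", embedder=model)
```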
For multimodal embedding: we support CLIP
Requirements: a directory with the pictures you want to search; for example, test_files with images of cats, dogs, etc.
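A sketch of image search with CLIP: embed an image directory, embed a text query, and pick the closest image (numpy and PIL are illustrative helpers for the similarity step, not requirements of the library):

```python
import numpy as np
from PIL import Image

import embed_anything
from embed_anything import EmbeddingModel, WhichModel

model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Clip, model_id="openai/clip-vit-base-patch16"
)

# Embed every image in the directory
data = embed_anything.embed_image_directory("test_files", embedder=model)
embeddings = np.array([d.embedding for d in data])

# Embed the text query with the same CLIP model and rank images by dot product
query_embedding = np.array(
    embed_anything.embed_query(["Photo of a dog"], embedder=model)[0].embedding
)
similarities = embeddings @ query_embedding
best_match = data[int(np.argmax(similarities))]
Image.open(best_match.text).show()
```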
Audio Embedding using Whisper
Requirements: audio files in .wav format.
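A sketch of audio embedding: Whisper transcribes the audio and a text model embeds the transcript (class and argument names follow the Python examples and may differ in your version):

```python
import embed_anything
from embed_anything import AudioDecoderModel, EmbeddingModel, TextEmbedConfig, WhichModel

# choose any whisper or distilwhisper model from https://huggingface.co/distil-whisper
# or https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013
audio_decoder = AudioDecoderModel.from_pretrained_hf(
    "openai/whisper-tiny.en", revision="main", model_type="tiny-en", quantized=False
)

embedder = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, model_id="sentence-transformers/all-MiniLM-L6-v2"
)
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

data = embed_anything.embed_audio_file(
    "test_files/audio/samples_hp0.wav",
    audio_decoder=audio_decoder,
    embedder=embedder,
    text_embed_config=config,
)
```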
Using ONNX Models
To use ONNX models, you can either use the ONNXModel enum or the model_id from the Hugging Face model.
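A sketch of loading a model from the curated ONNXModel enum (the exact variant name is illustrative):

```python
from embed_anything import EmbeddingModel, ONNXModel, WhichModel

# Quantized all-MiniLM from the curated ONNX model list
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, model_name=ONNXModel.AllMiniLML6V2Q
)
```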
For some models, you can also specify the dtype to use for the model.
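For example (the dtype value is illustrative):

```python
from embed_anything import Dtype, EmbeddingModel, ONNXModel, WhichModel

# Load a quantized variant by passing a dtype
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.ModernBert, ONNXModel.ModernBERTBase, dtype=Dtype.Q4F16
)
```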
Using the above method is best to ensure that the model works correctly, as these models are tested. But if you want to use other models, like fine-tuned models, you can use hf_model_id and path_in_repo to load the model, as below.
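For example (the repo and path are illustrative):

```python
from embed_anything import EmbeddingModel, WhichModel

# Load an ONNX export directly from its Hugging Face repo
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Jina,
    hf_model_id="jinaai/jina-embeddings-v2-small-en",
    path_in_repo="model.onnx",
)
```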
To see all the ONNX models supported with model_name, see here
⁉️FAQ
Do I need to know Rust to use or contribute to EmbedAnything?
No. EmbedAnything provides PyO3 bindings, so you can run any function in Python without any issues. To contribute, check out our guidelines and the adapter examples in the python folder.
How is it different from fastembed?
We provide both backends, Candle and ONNX. On top of that, we provide an end-to-end pipeline: you can ingest different data types, index to any vector database, and run inference with any model. Fastembed is just an ONNX wrapper.
We've received quite a few questions about why we're using Candle.
One of the main reasons is that Candle doesn't require any specific ONNX format models, which means it can work seamlessly with any Hugging Face model. This flexibility has been a key factor for us. However, we also recognize that we’ve been compromising a bit on speed in favor of that flexibility.
🚧 Contributing to EmbedAnything
First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀
This document provides guidelines and best practices to help you contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.
🏎️ RoadMap
Accomplishments
One of the aims of EmbedAnything is to allow AI engineers to easily use state-of-the-art embedding models on typical files and documents. A lot has already been accomplished here: these are the formats we support right now, and a few more are still to come.
Adding Fine-tuning
One of the major goals for this year is to add fine-tuning of these models on your own data, much like a simple Sentence Transformers workflow.
🖼️ Modalities and Sources
We’re excited to share that we've expanded our platform to support multiple modalities, including:
- Audio files
- Markdowns
- Websites
- Images
- Videos
- Graph
This gives you the flexibility to work with various data types all in one place! 🌐
⚙️ Performance
We now support both the Candle and ONNX backends.
➡️ Support for GGUF models
🫐Embeddings:
We have had multimodality in our infrastructure from day one. We have already included it for websites, images, and audio, but we want to expand it further to:
➡️ Graph embedding -- build DeepWalk embeddings depth-first with word2vec
➡️ Video embedding
➡️ Yolo Clip
🌊Expansion to other Vector Adapters
We currently support a wide range of vector databases for streaming embeddings, including:
- Elastic: thanks to the amazing and active Elastic team for the contribution
- Weaviate
- Pinecone
- Qdrant
- Milvus
- Chroma
How to add an adapter: https://starlight-search.com/blog/2024/02/25/adapter-development-guide.md
💥 Create WASM demos to integrate EmbedAnything directly into the browser.
💜 Add support for ingestion from remote sources
➡️ Support for S3 buckets ➡️ Support for Azure storage ➡️ Support for Google Drive/Dropbox
But we're not stopping there! We're actively working to expand this list.
Want to contribute? If you'd like to add support for your favorite vector database, we'd love to have your help! Check out our contribution.md for guidelines, or feel free to reach out directly at starlight-search@proton.me. Let's build something amazing together! 💡
