embed_anything 0.6.7

Embed anything at lightning speed


<p align ="center">
<img width=400 src = "https://res.cloudinary.com/dltwftrgc/image/upload/v1712504276/Projects/EmbedAnything_500_x_200_px_a4l8xu.png">
</p>



<div align="center">

[![Downloads](https://static.pepy.tech/badge/embed-anything)](https://pepy.tech/project/embed-anything)
[![gpu](https://static.pepy.tech/badge/embed-anything-gpu)](https://www.pepy.tech/projects/embed-anything-gpu)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CowJrqZxDDYJzkclI-rbHaZHgL9C6K3p?usp=sharing)
[![roadmap](https://img.shields.io/badge/Discord-%235865F2.svg?style=flat&logo=discord&logoColor=white)](https://discord.gg/juETVTMdZu)
[![MkDocs](https://img.shields.io/badge/Blogs-F38020?.svg?logoColor=fff)](https://embed-anything.com/blog/)

</div>


<div align="center">

  <p align="center">
    <b> Highly Performant, Modular and Memory Safe</b>
    <br />
    <b> Ingestion, Inference and Indexing in Rust 🦀</b>
    <br />
    <a href="https://embed-anything.com/references/">Python docs »</a>
    <br />
    <a href="https://docs.rs/embed_anything/latest/embed_anything/">Rust docs »</a>
    <br />
    <a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#benchmarks"><strong>Benchmarks</strong></a>
    ·
    <a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#%EF%B8%8Ffaq"><strong>FAQ</strong></a>
    ·
    <a href="https://github.com/StarlightSearch/EmbedAnything/tree/main/examples/adapters"><strong>Adapters</strong></a>
    ·
    <a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#-our-past-collaborations"><strong>Collaborations</strong></a>
    ·
     <a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#-notebooks"><strong>Notebooks</strong></a>


    
  </p>
</div>


EmbedAnything is a minimalist yet highly performant, modular, lightning-fast, lightweight, multisource, multimodal, and local embedding pipeline built in Rust. Whether you're working with text, images, audio, PDFs, websites, or other media, EmbedAnything streamlines the process of generating embeddings from various sources and seamlessly streaming them (memory-efficient indexing) to a vector database. It supports dense, sparse, ONNX, model2vec, and late-interaction embeddings, offering flexibility for a wide range of use cases.

<p align ="center">
<img width=400 src = "https://res.cloudinary.com/dogbbs77y/image/upload/v1766251819/streaming_popagm.png">
</p>

<!-- TABLE OF CONTENTS -->
<details>
  <summary>Table of Contents</summary>
  <ol>
    <li>
      <a href="#about-the-project">About The Project</a>
      <ul>
        <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#the-benefit-of-rust-for-speed">Built With Rust</a></li>
        <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#why-candle">Why Candle?</a></li>
      </ul>
    </li>
    <li>
      <a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#-getting-started">Getting Started</a>
      <ul>
        <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#-installation">Installation</a></li>
      </ul>
    </li>
    <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#-getting-started">Usage</a></li>
    <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#roadmap">Roadmap</a></li>
    <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#quick-start">Contributing</a></li>
    <li><a href="https://github.com/StarlightSearch/EmbedAnything?tab=readme-ov-file#Supported-Models">How to add custom model and chunk size</a></li>
    
  </ol>
</details>


## 🚀 Key Features


- **No Dependency on PyTorch**: Easy to deploy in the cloud, with a low memory footprint.
- **Highly Modular**: Choose any vectorDB adapter for RAG, with ~~1 line~~ 1 word of code
- **Candle Backend**: Supports BERT, Jina, ColPali, Splade, ModernBERT, Reranker, Qwen
- **ONNX Backend**: Supports BERT, Jina, ColPali, ColBERT, Splade, Reranker, ModernBERT, Qwen
- **Cloud Embedding Models**: Supports OpenAI, Cohere, and Gemini.
- **MultiModality**: Works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV)
- **GPU Support**: Hardware acceleration on GPU as well.
- **Chunking**: In-built chunking methods like semantic and late chunking
- **Vector Streaming**: File processing, indexing, and inference run on separate threads, reducing latency.

## 💡What is Vector Streaming

 Embedding models are computationally expensive and time-consuming. By separating document preprocessing from model inference, you can significantly reduce pipeline latency and improve throughput.

Vector streaming transforms a sequential bottleneck into an efficient, concurrent workflow.

The embedding process runs separately from the main process, keeping performance high through Rust's MPSC channels, and avoids memory blow-ups because embeddings are saved directly to the vector database as they are produced. Read more in our [blog](https://starlight-search.com/blog/2025/02/25/vector%20database/).
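
The sketch below illustrates the idea in plain Python. It is not EmbedAnything's Rust internals (those use MPSC channels), and `make_chunks`, `embed_batch`, and `upsert` are stand-in functions for illustration only: one thread chunks documents while the other embeds each batch as soon as it is ready and streams it straight to the index, so embeddings never pile up in memory.

```python
import queue
import threading

# Toy stand-ins for the real pipeline pieces (illustration only).
def make_chunks(doc: str) -> list[str]:
    return [doc[i:i + 20] for i in range(0, len(doc), 20)]

def embed_batch(batch: list[str]) -> list[list[float]]:
    return [[float(len(chunk))] for chunk in batch]      # pretend embedding

def upsert(vectors: list[list[float]]) -> None:
    print(f"indexed a batch of {len(vectors)} vectors")  # pretend vector DB write

docs = ["first document " * 10, "second document " * 10]
chunk_queue: queue.Queue = queue.Queue(maxsize=64)       # bounded buffer (like buffer_size)

def producer() -> None:
    # Stage 1: parse and chunk files, independent of inference.
    for doc in docs:
        for chunk in make_chunks(doc):
            chunk_queue.put(chunk)
    chunk_queue.put(None)                                 # end-of-stream marker

def consumer(batch_size: int = 8) -> None:
    # Stage 2: embed batches as they arrive and stream them out immediately.
    batch: list[str] = []
    while (chunk := chunk_queue.get()) is not None:
        batch.append(chunk)
        if len(batch) == batch_size:
            upsert(embed_batch(batch))
            batch = []
    if batch:
        upsert(embed_batch(batch))

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
```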

[![EmbedAnythingXWeaviate](https://res.cloudinary.com/dltwftrgc/image/upload/v1731166897/demo_o8auu4.gif)](https://www.youtube.com/watch?v=OJRWPLQ44Dw)

## 🦀 Why Embed Anything 

➡️Faster execution. <br />
➡️No PyTorch dependency, thus a low memory footprint and easy cloud deployment. <br />
➡️True multithreading <br />
➡️Running embedding models locally and efficiently <br />
➡️In-built chunking methods like semantic, late-chunking <br/>
➡️Supports a range of models: dense, sparse, late-interaction, reranker, ModernBERT.<br />
➡️Memory Management: Rust enforces memory safety at compile time, preventing the memory leaks and crashes that can plague other languages <br />

## 🍓 Our Past Collaborations:

We have collaborated with reputable enterprises such as
[Elastic](https://www.youtube.com/live/OzQopxkxHyY?si=l6KasNNuCNOKky6f), [Weaviate](https://www.linkedin.com/posts/sonam-pankaj_machinelearning-data-ai-activity-7238832243622768644-gB8c?utm_source=share&utm_medium=member_desktop&rcm=ACoAABlF_IAB4Y74d5JJwj0CUwpTkhuskE0PAt4), [SingleStore](https://www.linkedin.com/events/buildingdomain-specificragappli7295319309566775297/theater/), [Milvus](https://milvus.io/docs/build_RAG_with_milvus_and_embedAnything.md) 
and [Analytics Vidhya DataHour](https://community.analyticsvidhya.com/c/datahour/multimodal-embeddings-and-search-with-embed-anything-6adba0)

You can get in touch with us for further collaborations.

## Benchmarks

### Inference Speed Benchmarks
Measures only embedding-model inference speed, on onnx-runtime. [Code](https://colab.research.google.com/drive/1nXvd25hDYO-j7QGOIIC0M7MDpovuPCaD?usp=sharing)

<img src="https://res.cloudinary.com/dltwftrgc/image/upload/v1730405688/embed_time_zusmua.png" width="500">


Benchmarks against other frameworks coming soon!! 🚀

# ⭐ Supported Models

We support any Hugging Face model on Candle, and we also support the ONNX runtime for BERT and ColPali.

## How to add a custom model on Candle: from_pretrained_hf

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load a custom BERT model from Hugging Face
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, 
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure embedding parameters
config = TextEmbedConfig(
    chunk_size=1000,      # Maximum characters per chunk
    batch_size=32,        # Number of chunks to process in parallel
    splitting_strategy="sentence"  # How to split text: "sentence", "word", or "semantic"
)

# Embed a file (supports PDF, TXT, MD, etc.)
data = embed_anything.embed_file("path/to/your/file.pdf", embedder=model, config=config)

# Access the embeddings and text
for item in data:
    print(f"Text: {item.text[:100]}...")  # First 100 characters
    print(f"Embedding shape: {len(item.embedding)}")
    print(f"Metadata: {item.metadata}")
    print("---" * 20)
```


| Model  | HF link |
| ------------- | ------------- | 
| Jina  | [Jina Models](https://huggingface.co/collections/jinaai/jina-embeddings-v2-65708e3ec4993b8fb968e744) | 
| Bert | All Bert based models |
| CLIP | openai/clip-* | 
| Whisper| [OpenAI Whisper models](https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013)|
| ColPali | starlight-ai/colpali-v1.2-merged-onnx|
| Colbert | answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2 and more |
| Splade | [Splade Models](https://huggingface.co/collections/naver/splade-667eb6df02c2f3b0c39bd248) and other Splade like models |
| Model2Vec | model2vec, minishlab/potion-base-8M |
| Qwen3-Embedding | Qwen/Qwen3-Embedding-0.6B |
| Reranker | [Jina Reranker Models](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual), Xenova/bge-reranker, Qwen/Qwen3-Reranker-4B |




## Splade Models (Sparse Embeddings)

Sparse embeddings are useful for keyword-based retrieval and hybrid search scenarios.

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load a SPLADE model for sparse embeddings
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.SparseBert, 
    model_id="prithivida/Splade_PP_en_v1"
)

# Configure the embedding process
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# Embed text files
data = embed_anything.embed_file("test_files/document.txt", embedder=model, config=config)

# Sparse embeddings are useful for hybrid search (combining dense and sparse)
for item in data:
    print(f"Text: {item.text}")
    print(f"Sparse embedding (non-zero values): {sum(1 for x in item.embedding if x != 0)}")
```
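
As a quick illustration of the hybrid-search idea mentioned above, the snippet below fuses dense and sparse relevance scores for a handful of documents. The scores, the min-max normalization, and the `alpha` weight are made-up choices for illustration; this fusion step happens outside EmbedAnything, typically in your vector database or retrieval code.

```python
import numpy as np

# Hypothetical relevance scores for three documents against one query.
dense_scores = np.array([0.82, 0.45, 0.67])   # e.g. cosine similarity from a dense model
sparse_scores = np.array([12.0, 30.5, 3.2])   # e.g. dot product of SPLADE sparse vectors

def min_max(x: np.ndarray) -> np.ndarray:
    # Rescale to [0, 1] so the two score types are comparable.
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

alpha = 0.6  # weight of the dense signal (arbitrary choice)
hybrid = alpha * min_max(dense_scores) + (1 - alpha) * min_max(sparse_scores)

print("Hybrid ranking (best first):", np.argsort(hybrid)[::-1])
```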

## ONNX-Runtime: from_pretrained_onnx

ONNX models provide faster inference and lower memory usage. Use the `ONNXModel` enum for pre-configured models or provide a custom model path.

### BERT Models

```python
from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype, TextEmbedConfig
import embed_anything

# Option 1: Use a pre-configured ONNX model (recommended)
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, 
    model_id=ONNXModel.BGESmallENV15Q  # Quantized BGE model for faster inference
)

# Option 2: Use a custom ONNX model from Hugging Face
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, 
    model_id="onnx_model_link",
    dtype=Dtype.F16  # Use half precision for faster inference
)

# Embed files with ONNX model
config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)
```

### ModernBERT (Quantized)

ModernBERT is a state-of-the-art BERT variant optimized for efficiency.

```python
from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype

# Load quantized ModernBERT for maximum efficiency
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert, 
    model_id=ONNXModel.ModernBERTBase, 
    dtype=Dtype.Q4F16  # 4-bit quantized for minimal memory usage
)

# Use it like any other model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)
```

### ColPali (Document Embedding)

ColPali is optimized for document and image-text embedding tasks.

```python
from embed_anything import ColpaliModel
import numpy as np

# Load ColPali ONNX model
model = ColpaliModel.from_pretrained_onnx(
    "starlight-ai/colpali-v1.2-merged-onnx", 
    None
)

# Embed a PDF file (ColPali processes pages as images)
data = model.embed_file("test_files/document.pdf", batch_size=1)

# Query the embedded document
query = "What is the main topic?"
query_embedding = model.embed_query(query)

# Calculate similarity scores
file_embeddings = np.array([e.embedding for e in data])
query_emb = np.array([e.embedding for e in query_embedding])

# Find most relevant pages
scores = np.einsum("bnd,csd->bcns", query_emb, file_embeddings).max(axis=3).sum(axis=2).squeeze()
top_pages = np.argsort(scores)[::-1][:5]

for page_idx in top_pages:
    print(f"Page {data[page_idx].metadata['page_number']}: {data[page_idx].text[:200]}")
```

### ColBERT (Late-Interaction Embeddings)

ColBERT provides token-level embeddings for fine-grained semantic matching.

```python
from embed_anything import ColbertModel
import numpy as np

# Load ColBERT ONNX model
model = ColbertModel.from_pretrained_onnx(
    "jinaai/jina-colbert-v2", 
    path_in_repo="onnx/model.onnx"
)

# Embed sentences
sentences = [
    "The quick brown fox jumps over the lazy dog", 
    "The cat is sleeping on the mat", 
    "The dog is barking at the moon", 
    "I love pizza", 
    "The dog is sitting in the park"
]

# ColBERT returns token-level embeddings
embeddings = model.embed(sentences, batch_size=2)

# Each embedding is a matrix: [num_tokens, embedding_dim]
for i, emb in enumerate(embeddings):
    print(f"Sentence {i+1}: {sentences[i]}")
    print(f"Embedding shape: {emb.shape}")  # Shape: (num_tokens, embedding_dim)
```
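
To actually rank these sentences against a query, late-interaction retrieval uses MaxSim: for every query token, take its best-matching sentence token and sum those maxima. A minimal sketch, reusing `model`, `sentences`, and `embeddings` from above, and assuming the query can be embedded with the same `embed` call:

```python
# MaxSim scoring: best-matching sentence token per query token, summed over query tokens.
query_tokens = np.array(model.embed(["Which sentence is about an animal resting?"], batch_size=1)[0])

scores = []
for emb in embeddings:
    doc_tokens = np.array(emb)              # shape: (num_tokens, embedding_dim)
    sim = query_tokens @ doc_tokens.T       # token-to-token similarity matrix
    scores.append(sim.max(axis=1).sum())    # MaxSim over doc tokens, summed over query tokens

best = int(np.argmax(scores))
print(f"Best match: {sentences[best]} (score={scores[best]:.2f})")
```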

### ReRankers

Rerankers improve retrieval quality by re-scoring candidate documents.

```python
from embed_anything import Reranker, Dtype, RerankerResult, DocumentRank

# Load a reranker model
reranker = Reranker.from_pretrained(
    "jinaai/jina-reranker-v1-turbo-en", 
    dtype=Dtype.F16
)

# Query and candidate documents
query = "What is the capital of France?"
candidates = [
    "France is a country in Europe.", 
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris."
]

# Rerank documents (returns top-k results)
results: list[RerankerResult] = reranker.rerank(
    [query], 
    candidates, 
    top_k=2  # Return top 2 results
)

# Access reranked results
for result in results:
    documents: list[DocumentRank] = result.documents
    for doc in documents:
        print(f"Score: {doc.score:.4f} | Text: {doc.text}")
```

### Cloud Embedding Models (Cohere Embed v4)

Use cloud models for high-quality embeddings without local model deployment.

```python
from embed_anything import EmbeddingModel, WhichModel
import embed_anything
import os

# Set your API key
os.environ["COHERE_API_KEY"] = "your-api-key-here"

# Initialize the cloud model
model = EmbeddingModel.from_pretrained_cloud(
    WhichModel.CohereVision, 
    model_id="embed-v4.0"
)

# Use it like any other model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)
```

### Qwen 3 - Embedding

Qwen3 supports over 100 languages including various programming languages.

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig, Dtype
import numpy as np

# Initialize Qwen3 embedding model
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Qwen3, 
    model_id="Qwen/Qwen3-Embedding-0.6B",
    dtype=Dtype.F32
)

# Configure embedding
config = TextEmbedConfig(
    chunk_size=1000,
    batch_size=2,
    splitting_strategy="sentence"
)

# Embed a file
data = model.embed_file("test_files/document.pdf", config=config)

# Query embedding
query = "Which GPU is used for training"
query_embedding = np.array(model.embed_query([query])[0].embedding)

# Calculate similarities
embedding_array = np.array([e.embedding for e in data])
similarities = np.matmul(query_embedding, embedding_array.T)

# Get top results
top_5_indices = np.argsort(similarities)[-5:][::-1]
for idx in top_5_indices:
    print(f"Score: {similarities[idx]:.4f} | {data[idx].text[:200]}")
```


## For Semantic Chunking

Semantic chunking preserves meaning by splitting text at semantically meaningful boundaries rather than fixed sizes.

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Main embedding model for generating final embeddings
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert, 
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Semantic encoder for determining chunk boundaries
# This model analyzes text to find natural semantic breaks
semantic_encoder = EmbeddingModel.from_pretrained_hf(
    WhichModel.Jina, 
    model_id="jinaai/jina-embeddings-v2-small-en"
)

# Configure semantic chunking
config = TextEmbedConfig(
    chunk_size=1000,                    # Target chunk size
    batch_size=32,                      # Batch processing size
    splitting_strategy="semantic",      # Use semantic splitting
    semantic_encoder=semantic_encoder    # Model for semantic analysis
)

# Embed with semantic chunking
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)

# Chunks will be split at semantically meaningful boundaries
for item in data:
    print(f"Chunk: {item.text[:200]}...")
    print("---" * 20)
```

## For Late-Chunking

Late chunking first splits text into smaller units, embeds the longer passage in a single pass, and only then pools the token embeddings into per-chunk embeddings, so each chunk keeps context from the surrounding text.

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig, EmbedData

# Load your embedding model
model = EmbeddingModel.from_pretrained_hf(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure late-chunking
config = TextEmbedConfig(
    chunk_size=1000,              # Maximum chunk size
    batch_size=8,                 # Batch size for processing
    splitting_strategy="sentence", # Split by sentences first
    late_chunking=True,           # Enable late-chunking
)

# Embed a file with late-chunking
data: list[EmbedData] = model.embed_file("test_files/attention.pdf", config=config)

# Late-chunking helps preserve context across sentence boundaries
for item in data:
    print(f"Text: {item.text}")
    print(f"Embedding dimension: {len(item.embedding)}")
    print("---" * 20)
```

# 🧑‍🚀 Getting Started

## 💚 Installation

```bash
pip install embed-anything
```

For GPUs and special models like ColPali:

```bash
pip install embed-anything-gpu
```

🚧❌ If you get a CUDA error while running on Windows, add the CUDA bin directory to the DLL search path before importing the library:

```python
import os
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin")
```
## 📒 Notebooks


| Notebook |
| ------------- |
| [End-to-End Retrieval and Reranking using VectorDB Adapters](https://colab.research.google.com/drive/1gct0lEplyW8VWGPXUgpLcQuMQeZDl6D5?usp=sharing) |
| [ColPali-Onnx](https://colab.research.google.com/drive/1yCVbpkoe53ymiCxG8ttJNbRhECy1Q-Du?usp=sharing) |
| [Adapters](https://github.com/StarlightSearch/EmbedAnything/tree/main/examples/adapters) |
| [Qwen3 Embeddings](https://colab.research.google.com/drive/1OlUJwTtPvj28h5tCVerf6ebEnAf8kPAh?usp=sharing) |
| [Benchmarks](https://colab.research.google.com/drive/1nXvd25hDYO-j7QGOIIC0M7MDpovuPCaD?usp=sharing) |


# Usage

## ➡️ Usage for version 0.3 and later

### Basic Text Embedding

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load a model from Hugging Face
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert, 
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Simple file embedding with default config
data = embed_anything.embed_file("test_files/test.pdf", embedder=model)

# Access results
for item in data:
    print(f"Text chunk: {item.text[:100]}...")
    print(f"Embedding shape: {len(item.embedding)}")
```

### Advanced Usage with Configuration

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load model
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Jina,
    model_id="jinaai/jina-embeddings-v2-small-en"
)

# Configure embedding parameters
config = TextEmbedConfig(
    chunk_size=1000,              # Characters per chunk
    batch_size=32,                # Process 32 chunks at once
    buffer_size=64,               # Buffer size for streaming
    splitting_strategy="sentence" # Split by sentences
)

# Embed with custom configuration
data = embed_anything.embed_file(
    "test_files/document.pdf", 
    embedder=model, 
    config=config
)

# Process embeddings
for item in data:
    print(f"Chunk: {item.text}")
    print(f"Metadata: {item.metadata}")
```

### Embedding Queries

```python
from embed_anything import EmbeddingModel, WhichModel
import embed_anything
import numpy as np

# Load model
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Embed a query
queries = ["What is machine learning?", "How do neural networks work?"]
query_embeddings = embed_anything.embed_query(queries, embedder=model)

# Use embeddings for similarity search
for i, query_emb in enumerate(query_embeddings):
    print(f"Query: {queries[i]}")
    print(f"Embedding shape: {len(query_emb.embedding)}")
```
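
To take this one step further and actually run a similarity search, you can embed a few candidate passages with the same `embed_query` call and compare them with cosine similarity. The passages below are made up for illustration:

```python
# Embed candidate passages with the same model, then rank them per query.
passages = [
    "Machine learning is a field of AI that learns patterns from data.",
    "A neural network is built from layers of interconnected neurons.",
]
passage_embeddings = embed_anything.embed_query(passages, embedder=model)

query_vecs = np.array([q.embedding for q in query_embeddings])
passage_vecs = np.array([p.embedding for p in passage_embeddings])

# Normalize rows so a matrix product gives cosine similarities.
query_vecs /= np.linalg.norm(query_vecs, axis=1, keepdims=True)
passage_vecs /= np.linalg.norm(passage_vecs, axis=1, keepdims=True)
similarities = query_vecs @ passage_vecs.T

for i, query in enumerate(queries):
    best = int(np.argmax(similarities[i]))
    print(f"{query} -> {passages[best]} ({similarities[i][best]:.3f})")
```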

### Embedding Directories

```python
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig
import embed_anything

# Load model
model = EmbeddingModel.from_pretrained_local(
    WhichModel.Bert,
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# Embed all files in a directory
data = embed_anything.embed_directory(
    "test_files/", 
    embedder=model, 
    config=config
)

print(f"Total chunks: {len(data)}")
```



### Using ONNX Models

ONNX models provide faster inference and lower memory usage. You can use pre-configured models via the `ONNXModel` enum or load custom ONNX models.

#### Using Pre-configured ONNX Models (Recommended)

```python
from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype, TextEmbedConfig
import embed_anything

# Use a pre-configured ONNX model (tested and optimized)
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert,
    model_id=ONNXModel.BGESmallENV15Q,  # Quantized BGE model
    dtype=Dtype.Q4F16                    # Quantized 4-bit float16
)

# Embed files
config = TextEmbedConfig(chunk_size=1000, batch_size=32)
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)
```

#### Using Custom ONNX Models

For custom or fine-tuned models, specify the Hugging Face model ID and path to the ONNX file:

```python
from embed_anything import EmbeddingModel, WhichModel, Dtype

# Load a custom ONNX model from Hugging Face
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Jina,
    hf_model_id="jinaai/jina-embeddings-v2-small-en",
    path_in_repo="model.onnx",  # Path to ONNX file in the repo
    dtype=Dtype.F16              # Use half precision
)

# Use the model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)
```

**Note**: Using pre-configured models (via `ONNXModel` enum) is recommended as these models are tested and optimized. For a complete list of supported ONNX models, see [ONNX Models Guide](/docs/guides/onnx_models.md).

## ⁉️FAQ

### Do I need to know Rust to use or contribute to EmbedAnything?
No. EmbedAnything provides PyO3 bindings, so you can call every function from Python without any issues. To contribute, check out our guidelines and the Python adapter examples.

### How is it different from fastembed?

We provide both backends, Candle and ONNX. On top of that, we offer an end-to-end pipeline: you can ingest different data types, run inference with any supported model, and index the results into any vector database. Fastembed is just an ONNX wrapper.

### We've received quite a few questions about why we're using Candle.

One of the main reasons is that Candle doesn't require models in a specific ONNX format, which means it can work seamlessly with any Hugging Face model. This flexibility has been a key factor for us. However, we also recognize that we've been compromising a bit on speed in favor of that flexibility.


## 🚧 Contributing to EmbedAnything

First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀

This document provides guidelines and best practices to help you contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.



<ul>
  <li><a href="#-roadmap">Roadmap</a></li>
  <li><a href="#quick-start">Quick Start</a></li>
  <li><a href="#-contributing-guidelines">Guidelines</a></li>
</ul>


# 🏎️ RoadMap 

## Accomplishments

One of the aims of EmbedAnything is to allow AI engineers to easily use state-of-the-art embedding models on typical files and documents. A lot has already been accomplished: the formats below are supported right now, and a few more are on the way. <br />


### 🖼️ Modalities and Source

We’re excited to share that we've expanded our platform to support multiple modalities, including:

- [x] Audio files

- [x] Markdowns

- [x] Websites

- [x] Images

- [ ] Videos

- [ ] Graph

This gives you the flexibility to work with various data types all in one place! 🌐 <br />



### ⚙️ Performance 


We now support both the Candle and ONNX backends<br/>
➡️ Support for GGUF models <br/>


### 🫐Embeddings:

Our infrastructure has been multimodal from day one. We already support websites, images, and audio, and we want to expand it further to:

➡️ Graph embedding -- build DeepWalk embeddings with depth-first walks and word2vec <br />
➡️ Video Embedding <br/>
➡️ Yolo Clip <br/>


### 🌊Expansion to other Vector Adapters

We currently support a wide range of vector databases for streaming embeddings, including:

- Elastic: thanks to the amazing and active Elastic team for the contribution <br/>
- Weaviate <br/>
- Pinecone <br/>
- Qdrant <br/>
- Milvus<br/>
- Chroma <br/>

How to add an adapter: https://starlight-search.com/blog/2024/02/25/adapter-development-guide.md
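
Very roughly, an adapter is a small class that knows how to create an index, convert EmbedAnything's `EmbedData` into the database's record format, and upsert the converted records as they stream out of the pipeline. The skeleton below is illustrative only; the exact base class and method names to implement are defined in the guide above, and `client` stands in for your database's own SDK.

```python
# Illustrative adapter skeleton (see the adapter development guide for the real interface).
class MyVectorDBAdapter:
    def __init__(self, client):
        self.client = client  # your vector database's SDK client

    def create_index(self, index_name: str, dimension: int):
        # Create (or connect to) a collection sized for the embedding dimension.
        self.client.create_collection(index_name, dimension=dimension)

    def convert(self, embeddings):
        # Map EmbedAnything's EmbedData objects to the database's record format.
        return [
            {"vector": e.embedding, "text": e.text, "metadata": e.metadata}
            for e in embeddings
        ]

    def upsert(self, embeddings):
        # Called with each batch of embeddings streamed out of the pipeline.
        self.client.upsert(self.convert(embeddings))
```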

### 💥 Create WASM demos to integrate EmbedAnything directly into the browser <br/>

### 💜 Add support for ingestion from remote sources
➡️ Support for S3 buckets <br/>
➡️ Support for Azure Storage <br/>
➡️ Support for Google Drive/Dropbox <br/>




But we're not stopping there! We're actively working to expand this list.

Want to Contribute?
If you’d like to add support for your favorite vector database, we’d love to have your help! Check out our contribution.md for guidelines, or feel free to reach out directly at turingatverge@gmail.com. Let's build something amazing together! 💡

## AWESOME Projects built on EmbedAnything.
1. A Rust-based, Cursor-like chat-with-your-codebase tool: https://github.com/timpratim/cargo-chat
2. A simple vector-based search engine that also supports ordinary text search: https://github.com/szuwgh/vectorbase2
3. A semantic file tracker CLI, operated through a daemon and built with Rust: https://github.com/sam-salehi/sophist
4. FogX-Store is a dataset store service that collects and serves large robotics datasets: https://github.com/J-HowHuang/FogX-Store
5. A Dart wrapper for the EmbedAnything crate: https://github.com/cotw-fabier/embedanythingindart
6. Generate embeddings in Rust with Tauri on macOS: https://github.com/do-me/tauri-embedanything-ios
7. RAG with EmbedAnything and Milvus: https://milvus.io/docs/v2.5.x/build_RAG_with_milvus_and_embedAnything.md




## A big thank you to all our stargazers

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=StarlightSearch/EmbedAnything&type=Date)](https://star-history.com/#StarlightSearch/EmbedAnything&Date)