//! Text embedding using Model2Vec
//!
//! Uses the potion-base-8M model for fast, lightweight text embeddings.
//! This model produces 256-dimensional embeddings and is optimized for
//! low-latency inference, making it suitable for WASM deployment.
//!
//! ## Model Details
//!
//! - **Model**: Model2Vec potion-base-8M
//! - **Dimensions**: 256
//! - **License**: MIT
//!
//! ## Example
//!
//! ```rust,ignore
//! use elid::models::embed_text;
//!
//! let embedding = embed_text("Hello, world!")?;
//! assert_eq!(embedding.len(), 256);
//! ```

use crate::ModelError;

/// Embed text into a vector representation
///
/// Uses the Model2Vec potion-base-8M model to generate a 256-dimensional
/// embedding vector from the input text.
///
/// # Arguments
///
/// * `text` - The input text to embed
///
/// # Returns
///
/// A 256-dimensional embedding vector as `Vec<f32>`
///
/// # Errors
///
/// Returns `ModelError::ModelLoad` if the model file is not found or cannot be loaded.
/// Returns `ModelError::Preprocessing` if text tokenization fails.
/// Returns `ModelError::Inference` if model inference fails.
///
/// # Example
///
/// ```rust,ignore
/// use elid::models::embed_text;
///
/// let embedding = embed_text("Hello, world!")?;
/// assert_eq!(embedding.len(), 256);
///
/// // Similar texts should produce similar embeddings
/// let emb1 = embed_text("The quick brown fox")?;
/// let emb2 = embed_text("The fast brown fox")?;
/// // emb1 and emb2 should be close in vector space
/// ```