# ripvec

[![CI](https://github.com/fnordpig/ripvec/actions/workflows/ci.yml/badge.svg)](https://github.com/fnordpig/ripvec/actions/workflows/ci.yml)
[![crates.io](https://img.shields.io/crates/v/ripvec.svg)](https://crates.io/crates/ripvec)
[![License: MIT/Apache-2.0](https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg)](LICENSE-MIT)

**Semantic code search + multi-language LSP. One binary, 19 grammars, zero setup.**

ripvec finds code by meaning, provides structural code intelligence across
every language it knows, and ranks results by how important each function is
in your codebase. It runs locally, bundles its own embedding model, and
uses whatever GPU you have.

```sh
$ ripvec "retry logic with exponential backoff" ~/src/my-project

 1. retry_handler.rs:42-78                                        [0.91]
    pub async fn with_retry<F, T>(f: F, max_attempts: u32) -> Result<T>
    where F: Fn() -> Future<Output = Result<T>> {
        let mut delay = Duration::from_millis(100);
        for attempt in 0..max_attempts {
            match f().await {
                Ok(v) => return Ok(v),
                Err(e) if attempt < max_attempts - 1 => {
                    sleep(delay).await;
                    delay *= 2;  // exponential backoff
    ...

 2. http_client.rs:156-189                                        [0.84]
    impl HttpClient {
        async fn request_with_backoff(&self, req: Request) -> Response {
    ...
```

The function is called `with_retry`, the variable is `delay` β€” "exponential
backoff" appears nowhere in the source. grep can't find this. ripvec can,
because it embeds both your query and the code into the same vector space
and measures similarity.
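
Similarity here is cosine similarity between embedding vectors. A minimal sketch (toy 3-dim vectors standing in for 768-dim embeddings; the function name is illustrative, not ripvec's API):

```rust
/// Cosine similarity: dot(a, b) / (|a| * |b|). ripvec's embeddings are
/// L2-normalized, so in practice the dot product alone suffices; the full
/// formula is shown for clarity.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let query = [0.6, 0.8, 0.0];
    let close = [0.6, 0.8, 0.1]; // nearly the same direction
    let far = [-0.8, 0.6, 0.0];  // orthogonal-ish direction
    assert!(cosine_similarity(&query, &close) > cosine_similarity(&query, &far));
}
```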

## When to use what

ripvec has three interfaces. Here's when each one matters:

| Interface | When to use it | Who uses it |
|-----------|---------------|-------------|
| **CLI** (`ripvec "query" .`) | Terminal search, interactive TUI, one-shot queries | You, directly |
| **MCP server** (`ripvec-mcp`) | AI agent needs to search or understand your codebase | Claude Code, Cursor, any MCP client |
| **LSP server** (`ripvec-mcp --lsp`) | Editor/agent needs symbols, definitions, diagnostics | Claude Code's LSP tool, editors |

The MCP server gives AI agents 7 tools (semantic search, repo maps, etc.).
The LSP server gives editors structural intelligence (outlines, go-to-definition,
syntax diagnostics). The CLI is for humans. Same binary for all three.

If you're using **Claude Code**, install the plugin β€” it sets up both MCP and LSP
automatically. Claude will use `search_code` when you ask conceptual questions
and the LSP for symbol navigation.

## Workflow: orient, search, navigate

```mermaid
graph LR
    A["πŸ—ΊοΈ Orient<br/>get_repo_map"] --> B["πŸ” Search<br/>search_code"]
    B --> C["🧭 Navigate<br/>LSP operations"]
    C -->|"need more context"| B
    C -->|"found it"| D["✏️ Edit"]
```

**Orient** β€” `get_repo_map` returns a structural overview ranked by function-level
importance. One tool call replaces 10+ sequential file reads. Start here when
working on unfamiliar code.

**Search** β€” `search_code "authentication middleware"` finds implementations by
meaning across all 19 languages simultaneously. Results are ranked by relevance
and structural importance.

**Navigate** β€” LSP `documentSymbol` shows the file outline. `goToDefinition`
jumps to the likely definition. `findReferences` shows usage sites.
`incomingCalls`/`outgoingCalls` traces the call graph.

## Semantic search

You describe behavior, ripvec finds the implementation:

| What you want | grep / ripgrep | ripvec |
|---------------|----------------|--------|
| "retry with backoff" | Nothing (code says `delay *= 2`) | Finds the retry handler |
| "database connection pool" | Comments mentioning "pool" | The pool implementation |
| "authentication middleware" | `// TODO: add auth` | The auth guard |
| "WebSocket lifecycle" | String "WebSocket" | Connect/disconnect handlers |

Search modes: `--mode hybrid` (default, semantic + BM25 fusion), `--mode semantic`
(pure vector similarity), `--mode keyword` (pure BM25). Hybrid is usually best.

## Multi-language LSP

ripvec serves LSP from a single binary for all 19 grammars. No per-language
server installs. It provides:

- **`documentSymbol`** β€” file outline: functions, fields, enum variants, constants, types, headings
- **`workspaceSymbol`** β€” cross-language symbol search with PageRank boost
- **`goToDefinition`** β€” name-based resolution ranked by structural importance
- **`findReferences`** β€” usage sites via hybrid search + content filtering
- **`hover`** β€” scope chain, signature, enriched context
- **`publishDiagnostics`** β€” tree-sitter syntax error detection after every edit
- **`incomingCalls` / `outgoingCalls`** β€” function-level call graph

For languages with dedicated LSPs (Rust, Python, Go, TypeScript), ripvec runs
alongside them β€” the dedicated server handles types, ripvec handles semantic
search and cross-language features. For languages without dedicated LSPs
(bash, HCL, Ruby, Kotlin, Swift, Scala), ripvec is the primary code intelligence.

JSON, YAML, TOML, and Markdown get structural outlines (keys, mappings, headings)
and syntax diagnostics β€” useful for navigating large config files, not comparable
to language-aware intelligence.

## Function-level PageRank

```mermaid
graph LR
    subgraph "Call Graph"
        A["main()"] --> B["handle_request()"]
        A --> C["init_db()"]
        B --> D["authenticate()"]
        B --> E["dispatch()"]
        D --> F["verify_token()"]
        E --> D
    end
    subgraph "PageRank"
        D2["authenticate() β˜…β˜…β˜…"]
        B2["handle_request() β˜…β˜…"]
        E2["dispatch() β˜…"]
    end
```

ripvec extracts call expressions from every function body using tree-sitter,
resolves callee names to definitions, and computes PageRank on the resulting
call graph. Functions called by many others rank higher β€” `authenticate()` in
the example above is more structurally important than `dispatch()` because
more code depends on it.

This directly improves search: when two functions both match your query,
the one that's more central to the codebase ranks first.
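
The core computation can be sketched as a power iteration over the call graph from the diagram above (a simplified stand-in; ripvec's real implementation differs in detail):

```rust
/// Minimal PageRank power iteration. `edges[i]` lists the functions that
/// function `i` calls; rank flows from caller to callee.
fn pagerank(edges: &[Vec<usize>], damping: f64, iters: usize) -> Vec<f64> {
    let n = edges.len();
    let mut rank = vec![1.0 / n as f64; n];
    for _ in 0..iters {
        let mut next = vec![(1.0 - damping) / n as f64; n];
        for (caller, callees) in edges.iter().enumerate() {
            if callees.is_empty() {
                // Dangling node: spread its rank evenly.
                for r in next.iter_mut() {
                    *r += damping * rank[caller] / n as f64;
                }
            } else {
                for &callee in callees {
                    next[callee] += damping * rank[caller] / callees.len() as f64;
                }
            }
        }
        rank = next;
    }
    rank
}

fn main() {
    // 0: main, 1: handle_request, 2: init_db, 3: authenticate, 4: dispatch, 5: verify_token
    let edges = vec![vec![1, 2], vec![3, 4], vec![], vec![5], vec![3], vec![]];
    let r = pagerank(&edges, 0.85, 50);
    // authenticate() has two callers, so it outranks dispatch().
    assert!(r[3] > r[4]);
}
```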

## Install

### Pre-built binaries (fastest)

```sh
cargo binstall ripvec ripvec-mcp
```

Requires [cargo-binstall](https://github.com/cargo-bins/cargo-binstall).
Downloads a pre-built binary for your platform β€” no compilation.

### From source

```sh
cargo install ripvec ripvec-mcp
```

For CUDA (Linux with NVIDIA GPU):

```sh
cargo install ripvec ripvec-mcp --features cuda
```

### Claude Code plugin

```sh
claude plugin install ripvec@fnordpig-my-claude-plugins
```

The plugin auto-downloads the binary for your platform on first use and
configures both MCP and LSP servers. It includes 3 skills (codebase orientation,
semantic discovery, change impact analysis), 3 commands (`/map`, `/find`,
`/repo-index`), and a code exploration agent. CUDA is auto-detected via `nvidia-smi`.

### Platforms

| Platform | Backends | GPU |
|----------|----------|-----|
| macOS Apple Silicon | Metal + MLX + CPU (Accelerate) | Metal auto-enabled |
| Linux x86_64 | CPU (OpenBLAS) | CUDA with `--features cuda` |
| Linux ARM64 (Graviton) | CPU (OpenBLAS) | CUDA with `--features cuda` |

Model weights (~100MB) download automatically on first run.

## Usage

### CLI

```sh
ripvec "error handling" .                    # Search current directory
ripvec "form validation hooks" -n 5          # Top 5 results
ripvec "database migration" --mode keyword   # BM25 only
ripvec "auth flow" --fast                    # Lighter model (BGE-small, 4x faster)
ripvec -i --index .                          # Interactive TUI with persistent index
```

### MCP server

```json
{ "mcpServers": { "ripvec": { "command": "ripvec-mcp" } } }
```

Tools: `search_code`, `search_text`, `find_similar`, `get_repo_map`,
`reindex`, `index_status`, `up_to_date`.

### LSP server

```sh
ripvec-mcp --lsp   # serves LSP over stdio
```

Same binary, `--lsp` flag selects protocol.

## Index architecture

ripvec works without an index (`ripvec "query" .` embeds on-the-fly), but
persistent indexing makes subsequent searches instant.

### Cache layout

```mermaid
graph TD
    subgraph "~/.cache/ripvec/&lt;project_hash&gt;/v3-modernbert/"
        M["manifest.json<br/>file entries + Merkle hashes"]
        L["manifest.lock<br/>advisory fd-lock"]
        subgraph "objects/ (content-addressed)"
            O1["ab/cdef12...<br/>(zstd-compressed FileCache)"]
            O2["3f/a891bc...<br/>(zstd-compressed FileCache)"]
        end
    end
```

Each file's chunks and embeddings are serialized into a `FileCache` object,
compressed with zstd (~8x), and stored by blake3 content hash in a git-style
`xx/hash` sharded object store. The manifest tracks metadata: mtime, size,
content hash, chunk count per file, plus Merkle directory hashes.
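
The git-style sharding maps a hex hash to `objects/xx/rest`. A small sketch of the path layout described above (the helper name is illustrative):

```rust
/// First two hex chars of the content hash become the shard directory,
/// the remainder the filename -- the same layout git uses for loose objects.
fn object_path(hex_hash: &str) -> String {
    let (shard, rest) = hex_hash.split_at(2);
    format!("objects/{shard}/{rest}")
}

fn main() {
    assert_eq!(object_path("abcdef1234"), "objects/ab/cdef1234");
}
```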

### Change detection β€” two-level diff

```mermaid
graph TD
    F["File on disk"] --> M{"mtime + size<br/>match manifest?"}
    M -->|"yes"| SKIP["Unchanged<br/>(fast path, no I/O)"]
    M -->|"no"| HASH{"blake3 content hash<br/>matches manifest?"}
    HASH -->|"yes"| TOUCH["Touched but identical<br/>(heal mtime in manifest)"]
    HASH -->|"no"| DIRTY["Dirty β†’ re-embed"]
```

Level 1 (mtime+size) is a stat call β€” microseconds. Level 2 (blake3 hash)
reads the file but avoids re-embedding if content hasn't changed. After
`git clone` (where all mtimes are wrong), the first run hashes everything
but re-embeds nothing β€” then heals the manifest mtimes for fast-path on
subsequent runs.
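
The decision logic above can be sketched as a pure function (hypothetical types, not ripvec's actual API):

```rust
#[derive(Debug, PartialEq)]
enum FileStatus {
    Unchanged,        // fast path: stat matches manifest
    TouchedIdentical, // content hash matches; heal mtime in manifest
    Dirty,            // content changed; re-embed
}

struct ManifestEntry {
    mtime: u64,
    size: u64,
    content_hash: [u8; 32],
}

fn classify(
    entry: &ManifestEntry,
    disk_mtime: u64,
    disk_size: u64,
    // Hashing reads the file, so it only runs when the stat check fails.
    hash_file: impl FnOnce() -> [u8; 32],
) -> FileStatus {
    if disk_mtime == entry.mtime && disk_size == entry.size {
        return FileStatus::Unchanged; // level 1: stat only, no file I/O
    }
    if hash_file() == entry.content_hash {
        FileStatus::TouchedIdentical // level 2: same bytes, stale mtime
    } else {
        FileStatus::Dirty
    }
}

fn main() {
    let entry = ManifestEntry { mtime: 100, size: 10, content_hash: [7; 32] };
    assert_eq!(classify(&entry, 100, 10, || [7; 32]), FileStatus::Unchanged);
    // After `git clone`: mtimes differ but content is identical.
    assert_eq!(classify(&entry, 999, 10, || [7; 32]), FileStatus::TouchedIdentical);
    assert_eq!(classify(&entry, 999, 10, || [8; 32]), FileStatus::Dirty);
}
```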

### Serialization β€” two formats

| Format | Used for | Portable? |
|--------|----------|-----------|
| **rkyv** (zero-copy) | User-level cache (~/.cache) | No (architecture-dependent) |
| **bitcode** | Repo-level cache (.ripvec/) | Yes (cross-architecture) |

Auto-detected on read via magic bytes: `0x42 0x43` = bitcode, otherwise rkyv.
Both are zstd-compressed. Repo-level indices use bitcode so they can be
committed to git and shared between x86 CI and ARM developer machines.
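
The magic-byte check is a two-byte sniff before deserialization. A sketch of that dispatch (enum and function names are illustrative):

```rust
#[derive(Debug, PartialEq)]
enum CacheFormat {
    Bitcode, // portable, repo-level (.ripvec/)
    Rkyv,    // zero-copy, user-level (~/.cache)
}

/// `0x42 0x43` ("BC") marks bitcode; anything else is treated as rkyv.
fn detect_format(bytes: &[u8]) -> CacheFormat {
    match bytes {
        [0x42, 0x43, ..] => CacheFormat::Bitcode,
        _ => CacheFormat::Rkyv,
    }
}

fn main() {
    assert_eq!(detect_format(&[0x42, 0x43, 0x01]), CacheFormat::Bitcode);
    assert_eq!(detect_format(&[0x00, 0xff]), CacheFormat::Rkyv);
}
```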

### Concurrency and locking

```mermaid
sequenceDiagram
    participant MCP as MCP Server
    participant Watcher as File Watcher
    participant Lock as manifest.lock
    participant Cache as Object Store

    Note over MCP: Query arrives
    MCP->>Lock: acquire read lock
    MCP->>Cache: load objects
    Lock-->>MCP: release

    Note over Watcher: File change detected (2s debounce)
    Watcher->>Lock: acquire write lock
    Watcher->>Cache: re-embed dirty files
    Watcher->>Cache: write new objects
    Watcher->>Cache: save manifest + GC
    Lock-->>Watcher: release
```

The file watcher debounces for 2 seconds of quiet before triggering
re-indexing. Advisory `fd-lock` on `manifest.lock` prevents readers from
seeing a half-written manifest. The MCP server holds a read lock during
queries; the watcher holds a write lock during index updates. Multiple
readers can proceed concurrently; writers block all readers.

Garbage collection runs after each incremental update β€” unreferenced objects
(from deleted or re-embedded files) are removed from the store.

### Dual search index

```mermaid
graph LR
    subgraph "HybridIndex"
        subgraph "SearchIndex (dense vectors)"
            EMB["768-dim embeddings<br/>(TurboQuant 4-bit compressed)"]
            EMB --> CS["Cosine similarity scan"]
        end
        subgraph "Bm25Index (tantivy)"
            TAN["Inverted index<br/>(code-aware tokenizer)"]
            TAN --> BM["BM25 scoring<br/>(name 3Γ— / path 1.5Γ— / body 1Γ—)"]
        end
        CS --> RRF["RRF fusion (k=60)"]
        BM --> RRF
    end
```

The BM25 index uses a code-aware tokenizer that splits `parseJsonConfig` into
`[parse, json, config]` and `my_func_name` into `[my, func, name]` β€” so keyword
search finds `json config parser` even if the function is named in camelCase.
Function names are boosted 3x over body text.
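
The identifier splitting can be sketched like this (a simplified stand-in for the actual tokenizer, which also handles digits, acronyms, etc.):

```rust
/// Split an identifier on underscores and camelCase boundaries into
/// lowercase tokens.
fn split_identifier(ident: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    let mut current = String::new();
    for ch in ident.chars() {
        if ch == '_' {
            if !current.is_empty() {
                tokens.push(current.clone());
                current.clear();
            }
        } else {
            if ch.is_uppercase() && !current.is_empty() {
                // camelCase boundary: flush the token built so far.
                tokens.push(current.clone());
                current.clear();
            }
            current.extend(ch.to_lowercase());
        }
    }
    if !current.is_empty() {
        tokens.push(current);
    }
    tokens
}

fn main() {
    assert_eq!(split_identifier("parseJsonConfig"), ["parse", "json", "config"]);
    assert_eq!(split_identifier("my_func_name"), ["my", "func", "name"]);
}
```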

TurboQuant compresses 768-dim vectors from 3KB to ~380 bytes (4-bit) with a
rotation matrix for better quantization. This enables ~5x faster scanning for
large indices while maintaining ranking quality through exact re-ranking of
the top candidates.
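
To illustrate the precision/size tradeoff only: a toy per-vector 4-bit scalar quantizer. The real TurboQuant scheme additionally applies the rotation matrix and packs two codes per byte; none of this is ripvec's actual code.

```rust
/// Map each f32 to one of 16 levels spanning the vector's own [min, max].
fn quantize_4bit(v: &[f32]) -> (Vec<u8>, f32, f32) {
    let min = v.iter().cloned().fold(f32::INFINITY, f32::min);
    let max = v.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let scale = if max > min { 15.0 / (max - min) } else { 0.0 };
    let codes = v.iter().map(|&x| ((x - min) * scale).round() as u8).collect();
    (codes, min, max)
}

fn dequantize_4bit(codes: &[u8], min: f32, max: f32) -> Vec<f32> {
    let step = if max > min { (max - min) / 15.0 } else { 0.0 };
    codes.iter().map(|&c| min + c as f32 * step).collect()
}

fn main() {
    let v = [-1.0f32, -0.5, 0.0, 0.5, 1.0];
    let (codes, min, max) = quantize_4bit(&v);
    let back = dequantize_4bit(&codes, min, max);
    // Round-trip error is bounded by half a quantization step.
    for (a, b) in v.iter().zip(&back) {
        assert!((a - b).abs() <= (max - min) / 15.0 / 2.0 + 1e-6);
    }
}
```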

### Repo-level indexing

```sh
ripvec --index --repo-level "query"
git add .ripvec/ && git commit -m "add search index"
```

Creates `.ripvec/config.toml` (pins model + version) and `.ripvec/cache/`
(manifest + objects). Teammates who clone get instant search. The config
is validated on load β€” if the model doesn't match the runtime model, ripvec
falls back to the user-level cache with a warning.

### Cache resolution

```mermaid
graph TD
    A["--cache-dir override"] -->|"highest priority"| R["Resolved cache dir"]
    B[".ripvec/config.toml<br/>(repo-local)"] -->|"if model matches"| R
    C["RIPVEC_CACHE env var"] --> R
    D["~/.cache/ripvec/<br/>(XDG default)"] -->|"lowest priority"| R
```
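
The priority chain above is a first-match-wins fallthrough. A sketch (types and names are illustrative, not ripvec's actual API):

```rust
/// Resolve the cache directory: CLI flag, then repo config (if the model
/// matched), then RIPVEC_CACHE, then the XDG default.
fn resolve_cache_dir(
    cli_override: Option<&str>,
    repo_config: Option<&str>, // .ripvec/config.toml, only if model matches
    env_var: Option<&str>,     // RIPVEC_CACHE
    xdg_default: &str,         // ~/.cache/ripvec/
) -> String {
    cli_override
        .or(repo_config)
        .or(env_var)
        .unwrap_or(xdg_default)
        .to_string()
}

fn main() {
    assert_eq!(resolve_cache_dir(Some("/tmp/c"), Some("repo"), None, "xdg"), "/tmp/c");
    assert_eq!(resolve_cache_dir(None, None, Some("/env"), "xdg"), "/env");
    assert_eq!(resolve_cache_dir(None, None, None, "xdg"), "xdg");
}
```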

## Supported languages

19 tree-sitter grammars, 30 file extensions:

| Language | Extensions | Extracted elements |
|----------|-----------|-------------------|
| Rust | `.rs` | functions, structs, enums, variants, fields, impls, traits, consts, mods |
| Python | `.py` | functions, classes, assignments |
| JavaScript | `.js` `.jsx` | functions, classes, methods, variables |
| TypeScript | `.ts` `.tsx` | functions, classes, interfaces, type aliases, enums |
| Go | `.go` | functions, methods, types, constants |
| Java | `.java` | methods, classes, interfaces, enums, fields, constructors |
| C | `.c` `.h` | functions, structs, enums, typedefs |
| C++ | `.cpp` `.cc` `.cxx` `.hpp` | functions, classes, namespaces, enums, fields |
| Bash | `.sh` `.bash` `.bats` | functions, variables |
| Ruby | `.rb` | methods, classes, modules, constants |
| HCL / Terraform | `.tf` `.tfvars` `.hcl` | blocks (resources, data, variables) |
| Kotlin | `.kt` `.kts` | functions, classes, objects, properties |
| Swift | `.swift` | functions, classes, protocols, properties |
| Scala | `.scala` | functions, classes, traits, objects, vals, types |
| TOML | `.toml` | tables, key-value pairs |
| JSON | `.json` | object keys |
| YAML | `.yaml` `.yml` | mapping keys |
| Markdown | `.md` | headings |

Unsupported file types get sliding-window plain-text chunking. The embedding
model handles any language β€” tree-sitter just provides better chunk boundaries.
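
Sliding-window chunking looks roughly like this (window and overlap sizes here are illustrative, not ripvec's actual values):

```rust
/// Chunk a file into fixed-size windows of lines, with consecutive windows
/// sharing `overlap` lines so no context is lost at boundaries.
fn sliding_window_chunks(lines: &[&str], window: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < window);
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < lines.len() {
        let end = (start + window).min(lines.len());
        chunks.push(lines[start..end].join("\n"));
        if end == lines.len() {
            break;
        }
        start = end - overlap;
    }
    chunks
}

fn main() {
    let text = ["a", "b", "c", "d", "e", "f", "g"];
    let chunks = sliding_window_chunks(&text, 4, 1);
    assert_eq!(chunks, ["a\nb\nc\nd", "d\ne\nf\ng"]);
}
```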

## Performance

**Without an index** (first run on a codebase):

| Hardware | Throughput | Time (Flask corpus, 2383 chunks) |
|----------|-----------|----------------------------------|
| RTX 4090 (CUDA) | 435 chunks/s | ~5s |
| M2 Max (Metal) | 73.8 chunks/s | ~32s |
| M2 Max (CPU/Accelerate) | 73.5 chunks/s | ~32s |

Metal and CPU show similar throughput on M2 Max because macOS Accelerate
routes BLAS operations through the AMX coprocessor regardless of backend.
The Metal backend has headroom on larger batches and non-BLAS operations.

**With an index**: milliseconds. Merkle diff skips unchanged files entirely.

**Memory**: ~500MB during embedding (model weights + batch buffers). Index
queries use ~100MB (loaded embeddings + BM25 inverted index).

## How it compares

| Tool | Type | Key difference from ripvec |
|------|------|--------------------------|
| ripgrep | Text search | No semantic understanding |
| Sourcegraph | Cloud AI platform | $49-59/user/month, code leaves your machine |
| grepai | Local semantic search | Requires Ollama for embeddings |
| mgrep | Semantic search | Uses cloud embeddings (Mixedbread AI) |
| Serena | MCP symbol navigation | Requires per-language LSP servers installed |
| Bloop | Was semantic + navigation | Archived Jan 2025 |
| VS Code anycode | Tree-sitter outlines | Editor-only, no cross-file search |
| Cursor @Codebase | IDE semantic search | Cursor-only, sends embeddings to cloud |

ripvec is self-contained (no Ollama, no cloud, no per-language setup), runs
on your GPU, and combines search + LSP + structural ranking in one binary.

## Scoring pipeline

```mermaid
graph TD
    Q["query"] --> E["ModernBERT embedding<br/>(768-dim)"]
    E --> S["Cosine similarity<br/>ranking"]
    E --> K["BM25 keyword<br/>ranking"]
    S --> F["Reciprocal Rank Fusion<br/>(k=60)"]
    K --> F
    F --> N["Normalize to [0, 1]"]
    N --> P["Γ— PageRank boost<br/>(log-saturated, per-function)"]
    P --> T["Threshold + top-k"]
```

The RRF fusion follows Cormack et al. (2009) β€” rank-based combination that
handles the scale mismatch between cosine similarity and BM25 without tuning.
The PageRank boost is multiplicative: zero-relevance stays at zero regardless
of structural importance.
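
RRF itself is a few lines: each ranked list contributes `1 / (k + rank)` per document, with k = 60 as above. A sketch (function name and document labels are illustrative):

```rust
use std::collections::HashMap;

/// Fuse several ranked lists by Reciprocal Rank Fusion (Cormack et al. 2009).
fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranking in rankings {
        for (rank, doc) in ranking.iter().enumerate() {
            // `rank` is 0-based here; adding 1 gives the paper's 1-based rank.
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut out: Vec<_> = scores.into_iter().collect();
    out.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    out
}

fn main() {
    let semantic = vec!["retry_handler.rs", "http_client.rs", "util.rs"];
    let bm25 = vec!["retry_handler.rs", "http_client.rs", "config.rs"];
    let fused = rrf_fuse(&[semantic, bm25], 60.0);
    // The document ranked first in both lists wins.
    assert_eq!(fused[0].0, "retry_handler.rs");
}
```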

Min-max normalization maps the best result to 1.0 within each query, making
the threshold relative rather than absolute. This is a known tradeoff; future
versions may switch to z-score normalization for better calibration.

## Limitations

- **goToDefinition is best-effort**: resolves by name matching and structural
  importance, not by type system analysis. Use dedicated LSPs (rust-analyzer,
  pyright, gopls) when you need exact resolution for overloaded symbols.
- **Call graph is approximate**: common names like `new`, `run`, `render` may
  resolve to the wrong definition. Cross-crate resolution limited to workspace
  members.
- **Cold start**: first search without an index embeds everything β€” 5s on CUDA,
  32s on Apple Silicon for a medium codebase. Use `--index` for repeated searches.
- **English-centric**: ModernBERT was trained primarily on English text. Queries
  and code comments in other languages will have lower recall.

## Architecture

Cargo workspace with three crates:

| Crate | Role |
|-------|------|
| [`ripvec-core`](crates/ripvec-core) | Backends, chunking, embedding, search, repo map, cache, call graph |
| [`ripvec`](crates/ripvec) | CLI binary (clap + ratatui TUI) |
| [`ripvec-mcp`](crates/ripvec-mcp) | MCP + LSP server binary (rmcp + tower-lsp-server) |

### Driver / Architecture split

The core design insight: the forward pass is written **once** as a generic
`ModernBertArch<D: Driver>`, and each backend implements the `Driver` trait
with platform-specific operations. Same model, same math, different hardware.
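
In miniature, the split looks like this (trait and method names are illustrative, not ripvec's actual API, and a real layer involves far more than one GEMM):

```rust
/// Each backend supplies the primitive ops the forward pass needs.
trait Driver {
    /// Matrix multiply: C[m x n] = A[m x k] * B[k x n], row-major.
    fn gemm(&self, a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32>;
}

struct CpuDriver;

impl Driver for CpuDriver {
    fn gemm(&self, a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
        // Naive triple loop standing in for Accelerate/OpenBLAS.
        let mut c = vec![0.0; m * n];
        for i in 0..m {
            for l in 0..k {
                for j in 0..n {
                    c[i * n + j] += a[i * k + l] * b[l * n + j];
                }
            }
        }
        c
    }
}

/// Written once; runs on any backend that implements `Driver`.
struct ModernBertArch<D: Driver> {
    driver: D,
}

impl<D: Driver> ModernBertArch<D> {
    fn forward(&self, hidden: &[f32], weights: &[f32], dim: usize) -> Vec<f32> {
        // A real layer would add attention, RoPE, GeGLU, residuals, etc.
        self.driver.gemm(hidden, weights, 1, dim, dim)
    }
}

fn main() {
    let arch = ModernBertArch { driver: CpuDriver };
    let out = arch.forward(&[1.0, 2.0], &[1.0, 0.0, 0.0, 1.0], 2);
    assert_eq!(out, [1.0, 2.0]); // identity weights pass input through
}
```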

```mermaid
graph TB
    subgraph "Architecture (written once)"
        FP["ModernBertArch&lt;D: Driver&gt;<br/>forward()"]
        FP --> L1["Layer 1: Attention + FFN"]
        L1 --> L2["Layer 2: Attention + FFN"]
        L2 --> LN["...22 layers..."]
        LN --> Pool["Mean pool + L2 norm"]
    end
    subgraph "Driver trait implementations"
        FP -.->|"D = Metal"| M["MetalDriver<br/>MPS GEMMs + custom MSL kernels"]
        FP -.->|"D = CUDA"| CU["CudaDriver<br/>cuBLAS tensor cores + NVRTC kernels"]
        FP -.->|"D = CPU"| CP["CpuDriver<br/>ndarray + Accelerate/OpenBLAS"]
        FP -.->|"D = MLX"| ML["MlxDriver<br/>lazy eval β†’ auto-fused Metal"]
    end
```

### What each backend actually does per layer

Each of the 22 ModernBERT layers runs attention + FFN. Here's how the same
operations map to different hardware:

```mermaid
graph LR
    subgraph "Attention"
        LN1["LayerNorm"] --> QKV["QKV projection<br/>(GEMM)"]
        QKV --> PAD["Pad + Split"]
        PAD --> ROPE["RoPE rotation"]
        ROPE --> ATTN["Q @ K^T<br/>(batched GEMM)"]
        ATTN --> SM["Scale + Mask<br/>+ Softmax"]
        SM --> AV["Scores @ V<br/>(batched GEMM)"]
        AV --> UNPAD["Reshape + Unpad"]
        UNPAD --> OPROJ["Output proj<br/>(GEMM)"]
        OPROJ --> RES1["Residual add"]
    end
    subgraph "FFN"
        RES1 --> LN2["LayerNorm"]
        LN2 --> WI["Wi projection<br/>(GEMM)"]
        WI --> GEGLU["Split + GeGLU"]
        GEGLU --> WO["Wo projection<br/>(GEMM)"]
        WO --> RES2["Residual add"]
    end
```

| Operation | Metal | CUDA | CPU | MLX |
|-----------|-------|------|-----|-----|
| **GEMM** | MPS (AMX) | cuBLAS FP16 tensor cores | Accelerate / OpenBLAS | Auto-fused |
| **Softmax+Scale+Mask** | Fused MSL kernel | Fused NVRTC kernel | Scalar loop | Auto-fused |
| **RoPE** | Custom MSL kernel | Custom NVRTC kernel | Scalar loop | Lazy ops |
| **GeGLU (split+gelu+gate)** | Fused MSL kernel | Fused NVRTC kernel | Scalar loop | Auto-fused |
| **Pad/Unpad/Reshape** | Custom MSL kernels | Custom NVRTC kernels | Rust loops | Free (metadata) |
| **FP16 support** | Yes (all kernels) | Yes (all kernels) | No | No |

Metal and CUDA have **hand-written fused kernels** for softmax, GeGLU, and
attention reshape β€” these eliminate intermediate buffers and reduce memory
bandwidth. MLX gets fusion automatically via lazy evaluation (the entire
forward pass typically compiles to 2-3 Metal kernel dispatches). CPU uses
explicit scalar loops for everything except GEMM.

### Embedding pipeline

```mermaid
graph LR
    subgraph "Stage 1: Chunk (rayon)"
        F["Files"] --> TS["Tree-sitter<br/>parse"]
        TS --> C["Semantic<br/>chunks"]
    end
    subgraph "Stage 2: Tokenize"
        C --> T["ModernBERT<br/>tokenizer"]
        T --> B["Padded<br/>batches"]
    end
    subgraph "Stage 3: Embed (GPU)"
        B --> FW["Forward pass<br/>(22 layers)"]
        FW --> P["Mean pool<br/>+ L2 norm"]
        P --> V["768-dim<br/>vectors"]
    end
    C -.->|"bounded channel<br/>backpressure"| T
    T -.->|"bounded channel<br/>backpressure"| FW
```

For large corpora (1000+ files), stages run concurrently as a streaming
pipeline with bounded channels for backpressure. The GPU starts embedding
after the first batch (~50ms), not after all files are chunked.
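
The backpressure mechanism is a bounded channel: `send` blocks once the buffer fills, so a fast chunker can't run arbitrarily ahead of the embedder. A minimal two-stage sketch (stage names are illustrative):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

/// Producer thread (chunker) feeds a bounded channel; the consumer
/// (embedder) starts as soon as the first item arrives.
fn run_pipeline(n: usize, capacity: usize) -> Vec<String> {
    let (tx, rx) = sync_channel::<String>(capacity);
    let chunker = thread::spawn(move || {
        for i in 0..n {
            // Blocks when `capacity` unconsumed chunks are in flight.
            tx.send(format!("chunk-{i}")).unwrap();
        }
        // Dropping `tx` closes the channel; the consumer's loop then ends.
    });
    let out: Vec<String> = rx.into_iter().collect();
    chunker.join().unwrap();
    out
}

fn main() {
    let embedded = run_pipeline(5, 2);
    assert_eq!(embedded.len(), 5);
}
```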

### Embedding models

- **ModernBERT** (default) β€” 768-dim, mean pooling, 22 layers
- **BGE-small** (`--fast`) β€” 384-dim, CLS pooling, 12 layers

## Development

```sh
cargo fmt --check && cargo clippy --all-targets -- -D warnings && cargo test --workspace
```

See [CLAUDE.md](CLAUDE.md) for detailed development conventions, architecture
notes, and MCP tool namespace resolution.

### Docs

- [Metal/MPS Architecture](docs/METAL_MPS_ARCHITECTURE.md)
- [CUDA Architecture](docs/CUDA_ARCHITECTURE.md)
- [Development Learnings](docs/LEARNINGS.md)

## License

Licensed under either of [Apache-2.0](LICENSE-APACHE) or [MIT](LICENSE-MIT) at your option.