lnmp-embedding 0.5.4

Vector embedding support for LNMP (LLM Native Minimal Protocol) with similarity calculations and delta encoding
# LNMP-Embedding Performance Analysis

## Benchmark Results Summary

### Vector Creation (from_f32)
- 128-dim: ~101 ns
- 256-dim: ~153 ns
- 512-dim: ~227 ns
- 768-dim: ~312 ns
- 1536-dim: ~594 ns

**Analysis**: Scales roughly linearly with dimension on top of a fixed per-call overhead; reasonable performance for an allocating constructor.

### Encoding
- 128-dim: ~64 ns
- 256-dim: ~67 ns  
- 512-dim: ~77 ns
- 768-dim: ~87 ns
- 1536-dim: ~133 ns

**Analysis**: Excellent performance. Cost is dominated by fixed per-call overhead, with only a slight increase for larger vectors. Very efficient.

### Decoding
- 128-dim: ~30 ns
- 256-dim: ~36 ns
- 512-dim: ~43 ns
- 768-dim: ~50 ns
- 1536-dim: ~94 ns

**Analysis**: Even faster than encoding! Minimal overhead.

### Similarity Calculations

#### Cosine Similarity
- 128-dim: ~298 ns
- 256-dim: ~654 ns
- 512-dim: ~1.3 µs
- 768-dim: ~2.1 µs
- 1536-dim: ~4.5 µs

#### Euclidean Distance
- 128-dim: ~227 ns
- 256-dim: ~423 ns
- 512-dim: ~767 ns
- 768-dim: ~1.2 µs
- 1536-dim: ~2.5 µs

#### Dot Product
- 128-dim: ~218 ns
- 256-dim: ~416 ns
- 512-dim: ~762 ns
- 768-dim: ~1.2 µs
- 1536-dim: ~2.5 µs

**Analysis**: Similarity calculations show the most room for optimization. They scale linearly with dimension and dominate performance for larger vectors.
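The crate's internal kernels are not shown here, but a plain scalar cosine similarity (a hypothetical reference implementation, not necessarily the crate's code) makes the linear cost visible: one multiply-add per element for the dot product plus two squared-norm accumulations.

```rust
/// Scalar cosine similarity. Every element contributes one multiply-add
/// to the dot product and one to each squared norm, so runtime grows
/// linearly with dimension, matching the benchmark numbers above.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let mut dot = 0.0f32;
    let mut norm_a = 0.0f32;
    let mut norm_b = 0.0f32;
    for i in 0..a.len() {
        dot += a[i] * b[i];
        norm_a += a[i] * a[i];
        norm_b += b[i] * b[i];
    }
    dot / (norm_a.sqrt() * norm_b.sqrt())
}
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is a quick sanity check for any optimized variant.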

### Round-trip (Encode + Decode)
- 128-dim: ~106 ns
- 256-dim: ~114 ns
- 512-dim: ~132 ns
- 768-dim: ~149 ns
- 1536-dim: ~250 ns

**Analysis**: Excellent round-trip performance, validates efficient serialization.

## Optimization Opportunities

### High Priority
1. **SIMD Optimization for Similarity Calculations**: Use platform-specific SIMD instructions for vectorized operations. Can improve similarity calculations by 2-4x.
2. **Unsafe Optimizations**: Use unsafe code for critical paths to eliminate bounds checking.
3. **Memory Alignment**: Ensure vectors are properly aligned for SIMD operations.

### Medium Priority
1. **Parallel Processing**: For very large batches of similarity calculations, use rayon for parallelization.
2. **Caching**: Add caching for frequently computed norms in cosine similarity.
3. **Specialized Functions**: Create fast paths for common dimensions (128, 256, 512, 768, 1536).

### Low Priority (Already Good)
1. Encoding/Decoding is already very fast (~30-133 ns)
2. Vector creation is efficient
3. Round-trip performance is excellent

## Recommended Optimizations

### 1. SIMD for F32 Operations (Immediate)
Add `portable-simd` (nightly-only `std::simd`) for vectorized operations on F32 arrays. Note that `packed_simd` is no longer maintained, so on stable Rust an autovectorization-friendly loop shape or a maintained SIMD crate is the practical route.
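On stable Rust, processing fixed-size lanes with independent accumulators gives LLVM a shape it can autovectorize even without explicit SIMD types. A sketch (the `LANES` width of 8 is an assumption; with `std::simd` the lanes would be explicit):

```rust
// Lane-chunked dot product: independent partial sums per lane break the
// serial dependency chain and invite autovectorization.
const LANES: usize = 8;

fn dot_chunked(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let mut acc = [0.0f32; LANES];
    let chunks = a.len() / LANES;
    for c in 0..chunks {
        for l in 0..LANES {
            let i = c * LANES + l;
            acc[l] += a[i] * b[i];
        }
    }
    // Handle the tail that doesn't fill a whole lane group.
    let mut tail = 0.0f32;
    for i in chunks * LANES..a.len() {
        tail += a[i] * b[i];
    }
    // Reduce the partial sums.
    acc.iter().sum::<f32>() + tail
}
```

The common embedding dimensions in this crate (128-1536) are all multiples of 8, so the tail loop is usually empty.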

### 2. Unsafe Optimizations (Immediate)
Remove bounds checking in hot loops for similarity calculations.
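One way this can look (a sketch, not the crate's actual code): assert the lengths once up front, then index with `get_unchecked` so the loop body carries no per-element bounds checks.

```rust
/// Dot product with bounds checks removed via `get_unchecked`.
fn dot_unchecked(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let mut dot = 0.0f32;
    for i in 0..a.len() {
        // SAFETY: i < a.len() == b.len() by the loop bound and the
        // assertion above, so both accesses are in range.
        unsafe {
            dot += a.get_unchecked(i) * b.get_unchecked(i);
        }
    }
    dot
}
```

Worth noting: iterating with `a.iter().zip(b)` often eliminates the same bounds checks safely, so `unsafe` should only be kept where profiling shows it actually wins.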

### 3. Pre-computed Values (Easy Win)
Cache norm calculations for cosine similarity when vectors are reused.
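A minimal sketch of this idea, assuming a hypothetical wrapper type (not part of the crate's current API): compute the L2 norm once at construction, so repeated cosine similarities against the same vector only pay for the dot product.

```rust
/// Vector with its L2 norm precomputed at construction time.
struct NormedVector {
    data: Vec<f32>,
    norm: f32, // cached L2 norm
}

impl NormedVector {
    fn new(data: Vec<f32>) -> Self {
        let norm = data.iter().map(|x| x * x).sum::<f32>().sqrt();
        Self { data, norm }
    }

    /// Cosine similarity using the cached norms: only the dot product
    /// is computed per call, dropping two of the three O(d) loops.
    fn cosine(&self, other: &NormedVector) -> f32 {
        let dot: f32 = self.data.iter().zip(&other.data).map(|(x, y)| x * y).sum();
        dot / (self.norm * other.norm)
    }
}
```

Since the norm loops account for roughly two thirds of the scalar work, this alone could explain much of the gap between the cosine and dot-product timings above.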

## Performance Targets
- Cosine similarity 1536-dim: Target < 2 µs (current: 4.5 µs) = 2.25x improvement
- Euclidean distance 1536-dim: Target < 1.5 µs (current: 2.5 µs) = 1.67x improvement
- Dot product 1536-dim: Target < 1.5 µs (current: 2.5 µs) = 1.67x improvement