#!/usr/bin/env python3
"""Verify embedding lookup"""
import numpy as np
from gguf import GGUFReader
from llama_cpp import Llama

MODEL_PATH = "model.gguf"  # placeholder; the original path was not preserved
reader = GGUFReader(MODEL_PATH)

# Find token_embd.weight and inspect its layout
for tensor in reader.tensors:
    if tensor.name != "token_embd.weight":
        continue
    print(list(tensor.shape))       # [896, 151936]
    print(int(tensor.tensor_type))  # 6 (see below)
    print(tensor.data.shape)       # (151936, 616)
    # Q4_K structure (from ggml):
    # block_q4_K: QK_K = 256 values per super-block
    # - d: fp16 super-block scale for the packed 6-bit sub-block scales (2 bytes)
    # - dmin: fp16 super-block scale for the packed 6-bit sub-block mins (2 bytes)
    # - scales: 12 bytes of packed 6-bit scales and mins (8 sub-blocks of 32)
    # - qs: 128 bytes of 4-bit quantized values
    # Total: 2 + 2 + 12 + 128 = 144 bytes per super-block
    # But 896 values per row = 896/256 = 3.5 super-blocks, and a Q4_K row must
    # contain a whole number of super-blocks, so this tensor can't be Q4_K.
    # Padding to 1024 values (4 super-blocks) would give 4 * 144 = 576 bytes,
    # which still doesn't match the observed 616 bytes per row.
    # Resolution: ggml type 6 is Q5_0, not Q4_K (Q4_K is type 12). llama.cpp
    # falls back from Q4_K to Q5_0 for tensors whose row size isn't divisible
    # by QK_K = 256. A Q5_0 block packs 32 values into 22 bytes:
    # 2 (fp16 d) + 4 (qh, the 5th bits) + 16 (qs, low nibbles).
    # 896 / 32 = 28 blocks, 28 * 22 = 616 bytes per row. Exact match.
    # The data layout should be [vocab, packed_hidden], like output.weight
    data = tensor.data  # shape (151936, 616)
    # Each row packs one token's 896 hidden values as 28 Q5_0 blocks
    # (4 low bits per value in qs, plus the 5th bit in qh and an fp16 scale)
    # Key question: is the embedding for token T in row T of the data?
    # Based on the output.weight analysis, yes - rows index vocab tokens
    # (the dequantizer sketch below pulls out row 0 to check)
    break
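
# A hand-rolled Q5_0 row dequantizer - a minimal numpy sketch following the
# block layout in ggml's dequantize_row_q5_0 (each value is (q - 16) * d,
# with the 5th bit of each 5-bit q packed into the 4-byte qh field).
# This helper is my addition for illustration; it assumes a little-endian host.
def dequantize_q5_0_row(row_bytes, n_values):
    blocks = np.asarray(row_bytes, dtype=np.uint8).reshape(-1, 22)
    d = blocks[:, 0:2].copy().view(np.float16).astype(np.float32)  # (nb, 1)
    qh = blocks[:, 2:6].copy().view(np.uint32)                     # (nb, 1)
    qs = blocks[:, 6:22]                                           # (nb, 16)
    # Values 0..15 of a block use the low nibbles, values 16..31 the high ones
    lo = np.concatenate([qs & 0x0F, qs >> 4], axis=1).astype(np.int32)
    # Bit i of qh is the 5th bit of value i (same ordering as above)
    hi = ((qh >> np.arange(32, dtype=np.uint32)) & 1).astype(np.int32)
    vals = d * ((lo | (hi << 4)) - 16).astype(np.float32)
    return vals.reshape(-1)[:n_values]

emb_row0 = dequantize_q5_0_row(data[0], 896)  # embedding row for token id 0
print(emb_row0[:8])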

# Save a reference embedding for comparison
# The hand-rolled dequantizer above still needs an independent check
# Let's use llama-cpp to get a reference embedding
llm = Llama(model_path=MODEL_PATH, embedding=True, verbose=False)
# Get embedding for a single token
token_id = llm.tokenize(b"=", add_bos=False)[0]  # "="
emb = llm.embed("=")
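
# Quick sanity check (my addition): depending on the llama-cpp-python version
# and pooling mode, embed() returns one pooled vector or one vector per token;
# either way, each vector should have n_embd = 896 floats for this model.
vec = emb[0] if isinstance(emb[0], list) else emb
print(len(vec))  # expect 896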
# Actually let me try a different approach - compute with the model and save
# the first layer's input (which is just the embedding plus any normalization).
# Unfortunately llama-cpp doesn't expose raw token embeddings easily: embed()
# returns the model's last-layer output, not the token_embd row, so it can't
# be compared against our dequantized row directly.
# Let's just verify our dequantization for Q5_0 is working
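
# Sketch of an independent check (my addition; assumes a recent gguf-py):
# newer releases of the gguf package ship their own numpy dequantizer.
# If gguf.quants.dequantize exists in your install, compare it against the
# hand-rolled routine above; otherwise fall back to a basic sanity check.
try:
    from gguf.quants import dequantize as gguf_dequantize
    ref = gguf_dequantize(np.asarray(data[0:1]), tensor.tensor_type)[0]
    ours = dequantize_q5_0_row(data[0], 896)
    print("max abs diff vs gguf-py:", float(np.abs(ref - ours).max()))
except ImportError:
    ours = dequantize_q5_0_row(data[0], 896)
    print("row 0:", ours.shape, float(ours.min()), float(ours.max()))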