#!/usr/bin/env python3
"""Verify embedding values by manually dequantizing"""
import numpy as np
from gguf import GGUFReader

MODEL_PATH = "model.gguf"  # hypothetical placeholder; point this at the real GGUF file

reader = GGUFReader(MODEL_PATH)

# Q4_K dequantization is complex, but let's try to understand the layout first
# by looking at the raw data shape.
embed = next(t for t in reader.tensors if t.name == "token_embd.weight")
print(list(embed.shape))       # [896, 151936]
print(int(embed.tensor_type))  # 6 (we assumed this meant Q4_K; see below)
print(embed.data.shape)        # (151936, 616)
# (151936, 616)
# The raw data shape (151936, 616) suggests:
# - 151936 rows, one per vocab token
# - 616 bytes per row = packed hidden dimension
# For Q4_K with 896 hidden dim:
# - Q4_K uses super-blocks of 256 values
# - 896 / 256 = 3.5, so need to understand how GGUF handles this
# Actually, looking at bytes: 616 bytes
# Q4_K block size 256: 144 bytes per 256 values
# 896 values = ?
#
# Let me check if maybe they pad to 1024:
# 1024 values = 4 blocks * 144 = 576 bytes (not 616)
#
# Or maybe the format is different for sub-super-block sizes
# Let's compare with output.weight (Q8_0) which we know works:
# output.weight: shape (151936, 952) where 952 = 28 Q8_0 blocks * 34 bytes
# 28 * 32 = 896 elements
# For Q4_K with 616 bytes per row:
# If we assume similar structure (row = vocab entry's hidden dim weights)
# then we need to figure out how 896 floats pack into 616 bytes
# Q4_K: 4 bits per value = 0.5 bytes per value
# 896 * 0.5 = 448 bytes just for quants
# Plus scales, mins, etc = ~616 bytes seems reasonable
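# A quick sanity check of the row-size arithmetic above; values-per-block
# and bytes-per-block are taken from ggml's block structs.
for name, vals_per_block, bytes_per_block in [
    ("Q4_K", 256, 144),
    ("Q8_0", 32, 34),
    ("Q5_0", 32, 22),
]:
    n_blocks = -(-896 // vals_per_block)     # ceil(896 / values per block)
    print(name, n_blocks * bytes_per_block)  # Q4_K: 576, Q8_0: 952, Q5_0: 616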
# Let's manually extract the embedding for token 28 by taking its row.
row = np.asarray(embed.data[28])  # 616 raw bytes for token id 28
# Save it for comparison. Q4_K would be hard to dequantize by hand in Python
# without a proper implementation, but Q5_0 is simple enough; a sketch follows.
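# A minimal Q5_0 dequantizer, following ggml's block layout: each block
# packs 32 values into 22 bytes (a 2-byte f16 scale d, 4 bytes of packed
# fifth bits qh, and 16 bytes of low nibbles qs). This is a sketch written
# from ggml's dequantize_row_q5_0 reference; verify against llama.cpp
# before trusting the numbers.
def dequantize_q5_0_row(raw: np.ndarray) -> np.ndarray:
    """Dequantize one packed Q5_0 row (uint8 bytes) to float32 values."""
    blocks = raw.reshape(-1, 22)
    out = np.empty(blocks.shape[0] * 32, dtype=np.float32)
    for i, block in enumerate(blocks):
        d = float(np.frombuffer(block[0:2].tobytes(), dtype="<f2")[0])
        qh = int.from_bytes(block[2:6].tobytes(), "little")
        qs = block[6:22]
        for j in range(16):
            # reassemble 5-bit values: low nibble from qs, fifth bit from qh
            x0 = ((int(qs[j]) & 0x0F) | (((qh >> j) << 4) & 0x10)) - 16
            x1 = ((int(qs[j]) >> 4) | ((qh >> (j + 12)) & 0x10)) - 16
            out[i * 32 + j] = x0 * d
            out[i * 32 + j + 16] = x1 * d
    return out

vec = dequantize_q5_0_row(row)
print(vec.shape)  # (896,)
print(vec[:8])    # first few embedding values for token 28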
# Alternative: use llama-cpp to get the embedding and save it.
from llama_cpp import Llama

llm = Llama(model_path=MODEL_PATH, embedding=True)  # constructor flags are assumptions
# Try to get the embedding for token 28.
# Note: llama-cpp's embed() works on text, not token ids, so we need a
# different approach; perhaps we can reach the internal state somehow.
# For now, let's just save our computed embedding stats and compare.
# The dequantized values look reasonable for an embedding; the question
# is whether they match what llama-cpp uses internally.
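# One minimal way to persist the vector and its stats for later comparison
# (the output file name is arbitrary):
stats = {
    "mean": float(vec.mean()),
    "std": float(vec.std()),
    "min": float(vec.min()),
    "max": float(vec.max()),
}
print(stats)
np.save("token28_embedding.npy", vec)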
# Let me try a different approach: compute logits from a known hidden state.
# If we pass an all-ones vector through the final layer norm and output.weight,
# each logit becomes a (norm-weighted) sum of that row's weights, which
# helps verify the matrix multiplication.
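# A sketch of that check, assuming output.weight has already been
# dequantized to a float matrix W of shape (vocab, hidden) and the final
# norm weight is available as a float vector g; W and g are hypothetical
# names, and RMSNorm is assumed as the final norm.
def logits_from_ones(W: np.ndarray, g: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Push an all-ones hidden state through RMSNorm + output projection."""
    h = np.ones(W.shape[1], dtype=np.float32)
    rms = np.sqrt(np.mean(h * h) + eps)  # RMS of an all-ones vector is ~1
    x = h / rms * g                      # normalized, gamma-scaled hidden state
    return W @ x                         # logit[i] = sum_j W[i, j] * g[j]

# Compare logits_from_ones(W, g) against llama.cpp evaluated with the same
# forced hidden state to localize any matmul discrepancy.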