//! # zensim
//!
//! Fast psychovisual image similarity metric combining ideas from
//! SSIMULACRA2 and butteraugli. Multi-scale SSIM + edge + high-frequency
//! features in XYB color space, with trained weights and AVX2/AVX-512 SIMD.
//!
//! ## Quick start
//!
//! ```
//! use zensim::{Zensim, ZensimProfile, RgbSlice};
//! # let (src_pixels, dst_pixels) = (vec![[0u8; 3]; 64], vec![[0u8; 3]; 64]);
//! let z = Zensim::new(ZensimProfile::latest());
//! let source = RgbSlice::new(&src_pixels, 8, 8);
//! let distorted = RgbSlice::new(&dst_pixels, 8, 8);
//! let result = z.compute(&source, &distorted)?;
//! println!("{}: {:.2}", result.profile(), result.score());
//! # Ok::<(), zensim::ZensimError>(())
//! ```
//!
//! ## Batch comparison (one reference, many distorted)
//!
//! ```
//! use zensim::{Zensim, ZensimProfile, RgbSlice};
//! # let (ref_pixels, width, height) = (vec![[0u8; 3]; 64], 8usize, 8usize);
//! # let distorted_images: Vec<Vec<[u8; 3]>> = vec![];
//! let z = Zensim::new(ZensimProfile::latest());
//! let source = RgbSlice::new(&ref_pixels, width, height);
//! let precomputed = z.precompute_reference(&source)?;
//! for dst_pixels in &distorted_images {
//!     let dst = RgbSlice::new(dst_pixels, width, height);
//!     let result = z.compute_with_ref(&precomputed, &dst)?;
//!     println!("score: {:.2}", result.score());
//! }
//! # Ok::<(), zensim::ZensimError>(())
//! ```
//!
//! ## RGBA support
//!
//! ```
//! use zensim::{Zensim, ZensimProfile, RgbaSlice};
//! # let (src_rgba, dst_rgba) = (vec![[0u8; 4]; 64], vec![[0u8; 4]; 64]);
//! let z = Zensim::new(ZensimProfile::latest());
//! let source = RgbaSlice::new(&src_rgba, 8, 8);
//! let distorted = RgbaSlice::new(&dst_rgba, 8, 8);
//! let result = z.compute(&source, &distorted)?;
//! # Ok::<(), zensim::ZensimError>(())
//! ```
//!
//! ## zenpixels support
//!
//! With the `zenpixels` feature, any [`zenpixels::PixelSlice`] or
//! [`zenpixels::PixelBuffer`] can be used directly via [`ZenpixelsSource`]:
//!
//! ```ignore
//! use zensim::{Zensim, ZensimProfile, ZenpixelsSource};
//!
//! let source = ZenpixelsSource::try_from_slice(&pixel_slice)?;
//! let distorted = ZenpixelsSource::try_from_slice(&other_slice)?;
//! let result = Zensim::new(ZensimProfile::latest()).compute(&source, &distorted)?;
//! ```
//!
//! Supported: Rgb8, Rgba8, Bgra8, Rgbx8, Bgrx8, Rgba16, RgbaF32 (sRGB/BT.709/linear).
//! Premultiplied alpha is un-premultiplied automatically, and RGBX/BGRX padding
//! bytes are treated as opaque. HDR (PQ, HLG) and grayscale inputs are rejected
//! with [`UnsupportedFormat`].
//!
//! ## Input requirements
//!
//! - **Color space:** All inputs must be **sRGB-encoded** (gamma ~2.2) — the
//! standard output of JPEG, PNG, and WebP decoders. For linear-light data,
//! use `PixelFormat::LinearF32Rgba` via [`StridedBytes`].
//! - **Wide gamut:** Display P3 and BT.2020 primaries are accepted via
//! [`ColorPrimaries`] on [`StridedBytes`] — gamut-mapped to sRGB internally.
//! Passing wide-gamut data as sRGB will produce incorrect scores.
//! - **Pixel formats:** [`RgbSlice`] (sRGB u8), [`RgbaSlice`] (sRGB u8 + alpha),
//! `imgref::ImgRef` (sRGB u8, stride-aware, default feature),
//! [`ZenpixelsSource`] (zenpixels `PixelSlice`/`PixelBuffer`, `zenpixels` feature),
//! [`StridedBytes`] (any of `Srgb8Rgb`, `Srgb8Rgba`, `Srgb8Bgra`,
//! `Srgb16Rgba`, `LinearF32Rgba`), or implement [`ImageSource`] directly.
//! - **Alpha:** RGBA inputs are composited over a deterministic noise
//! background so alpha differences are detected without the structured-pattern
//! amplification of a checkerboard. Supports `Straight` and `Opaque` alpha modes.
//! - **Dimensions:** Both images must be the same width × height, minimum 8×8.
//!
//! ## Score semantics
//!
//! 100 = identical, higher = more similar. Score mapping:
//! `100 - 18 × d^0.7` where `d` is the per-scale weighted feature distance.
//! Calibrated from 0–100 on 344k training pairs; extreme distortions can
//! score below 0 (uncalibrated outside the training range).
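//!
//! The mapping can be sketched as a plain function (illustrative only;
//! `score_from_distance` is a hypothetical name, not part of the API):
//!
//! ```
//! fn score_from_distance(d: f64) -> f64 {
//!     // 100 at d = 0; the exponent < 1 makes the penalty concave, so each
//!     // additional unit of distance costs slightly less than the last
//!     100.0 - 18.0 * d.powf(0.7)
//! }
//! assert_eq!(score_from_distance(0.0), 100.0);
//! assert!(score_from_distance(2.0) < score_from_distance(1.0));
//! ```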
//!
//! [`ZensimResult`] also provides [`approx_ssim2()`](ZensimResult::approx_ssim2),
//! [`approx_dssim()`](ZensimResult::approx_dssim), and
//! [`approx_butteraugli()`](ZensimResult::approx_butteraugli) for direct
//! metric approximations. The [`mapping`] module has bidirectional interpolation
//! tables for score-level conversions.
//!
//! ## Determinism
//!
//! Deterministic for the same input on the same architecture. Cross-architecture
//! results (e.g. AVX2 vs scalar vs AVX-512) may differ by small ULP due to
//! different FMA contraction behavior.
//!
//! ## Design
//!
//! - **XYB color space** — cube root LMS, same perceptual space as ssimulacra2/butteraugli
//! - **Modified SSIM** — ssimulacra2's variant: drops the luminance denominator
//! (no C1), uses `1 - (mu1-mu2)²` directly. Correct for perceptually-uniform spaces.
//! - **19 features per channel per scale** — 13 basic (SSIM, edge artifact/detail
//! loss, MSE, high-frequency) + 6 peak features, all scored
//! - **4-scale pyramid** — 1×, 2×, 4×, 8× via box downscale (ssimulacra2 uses 6)
//! - **O(1)-per-pixel box blur** — single-pass with fused SIMD kernel
//! - **228 trained weights** — optimized on 344k synthetic pairs across 6 codecs
//! - **AVX2/AVX-512 SIMD** throughout via [archmage](https://crates.io/crates/archmage)
//!
//! See the `metric` module source for the full feature extraction math.
// --- Primary API ---
pub use ZensimError;
pub use ;
/// Classification API — requires `features = ["classification"]`.
///
/// Exposes `classify()`, error categorization, and per-pixel delta statistics
/// for regression testing workflows.
pub use ;
pub use ZensimProfile;
pub use ;
pub use ;
pub use PrecomputedReference;
/// Training/research API — requires `features = ["training"]`.
///
/// These items expose metric internals (blur kernel shape, scale count,
/// masking, weight vectors) that change metric behavior. Scores produced
/// with non-default `ZensimConfig` are **not comparable** to the default
/// trained weights or the 0-100 score scale.
pub use ;
pub use UnsupportedFormat;
pub use ZenpixelsSource;
/// Number of downscale levels. Each level halves resolution.
/// 4 scales cover 1×, 2×, 4×, 8× — sufficient for most perceptual effects.
pub const NUM_SCALES: usize = 4;