moonpool_explorer/sancov.rs
//! LLVM SanitizerCoverage (`inline-8bit-counters`) for code edge coverage.
//!
//! This module hooks into LLVM's SanitizerCoverage instrumentation to track
//! which code edges the simulation actually executes. The explorer uses this
//! as a second coverage signal — alongside the assertion-path fork bitmap —
//! to decide whether a forked timeline discovered something new. Together,
//! the two signals make the adaptive loop significantly more precise at
//! distinguishing productive forks from barren ones.
//!
//! All public functions are no-ops when sancov instrumentation is not
//! present (i.e., `COUNTERS_PTR` is null).
//!
//! # Why two coverage systems?
//!
//! The explorer has two independent coverage tracking mechanisms:
//!
//! ```text
//! System                Where it lives         What it tracks
//! ────────────────────  ─────────────────────  ─────────────────────────────────
//! Fork bitmap           coverage.rs            Which assert_sometimes! /
//! (8192 bits)                                  assert_sometimes_each! fired.
//!                                              Hash-based, high collision rate.
//!
//! Sancov edge coverage  sancov.rs (this file)  Which branches/loops/conditions
//! (one u8 per edge)                            in the Rust source were executed.
//!                                              LLVM-instrumented, no collisions.
//! ```
//!
//! The fork bitmap answers "did we trigger a new assertion path?" while
//! sancov answers "did we execute new code?". Two timelines can trigger
//! the exact same assertions but take radically different code paths through
//! the system under test — sancov catches that.
//!
//! Both signals feed into the adaptive loop's `batch_has_new` flag in
//! [`split_loop`](crate::split_loop). A timeline is considered productive
//! if it contributes new bits to **either** system:
//!
//! ```text
//! batch_has_new |= fork_bitmap.has_new_bits()   // assertion-level
//! batch_has_new |= has_new_sancov_coverage()    // code-edge-level
//! ```
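//!
//! The combination rule itself is tiny. As a toy sketch (plain booleans
//! standing in for the real calls, not the actual loop body):
//!
//! ```rust
//! // Two independent novelty signals OR into one "productive?" decision.
//! let mut batch_has_new = false;
//! let fork_new = false;  // no new assertion paths this batch
//! let sancov_new = true; // but new code edges were executed
//! batch_has_new |= fork_new;
//! batch_has_new |= sancov_new;
//! assert!(batch_has_new); // either signal alone keeps the fork alive
//! ```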
//!
//! # How LLVM inline-8bit-counters work
//!
//! When you compile with the right flags, LLVM inserts a `counter[edge_id]++`
//! instruction at every control-flow edge in the program. The counters live
//! in a BSS array (zero-initialized, process-global):
//!
//! ```text
//! rustc + LLVM passes
//!   │
//!   ▼
//! BSS counter array (one u8 per code edge)
//!   │
//!   ▼  __sanitizer_cov_8bit_counters_init(start, stop)
//!   │    called during static constructors, before main()
//!   │
//!   ▼
//! COUNTERS_PTR + COUNTERS_LEN captured in global atomics
//! ```
//!
//! Multiple compilation units each call the init callback with their own
//! array bounds. We merge them via `min(start)` / `max(stop)` so a single
//! contiguous range covers all TUs. See [`__sanitizer_cov_8bit_counters_init`].
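//!
//! The merge is plain interval arithmetic. A standalone sketch of the rule,
//! operating on address pairs (the real callback uses the module's atomics):
//!
//! ```rust
//! // Merged range = [min(start), max(stop)) across all registered TUs.
//! // Non-contiguous TUs are covered by one span that includes the gap.
//! fn merge(ranges: &[(usize, usize)]) -> (usize, usize) {
//!     let min_start = ranges.iter().map(|r| r.0).min().unwrap();
//!     let max_stop = ranges.iter().map(|r| r.1).max().unwrap();
//!     (min_start, max_stop)
//! }
//!
//! // Two TUs registered out of address order, with a gap between them:
//! let (start, stop) = merge(&[(0x1000, 0x1200), (0x0800, 0x0900)]);
//! assert_eq!(start, 0x0800);
//! assert_eq!(stop, 0x1200);
//! assert_eq!(stop - start, 0x0A00); // span covers both TUs plus the gap
//! ```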
//!
//! # Selective instrumentation with `SANCOV_CRATES`
//!
//! You don't want to instrument *everything* — the simulation runtime,
//! exploration engine, and chaos framework have thousands of edges that
//! are irrelevant to the system under test. Instrumenting only the
//! application crate(s) keeps the edge count small and meaningful.
//!
//! The build pipeline:
//!
//! ```text
//! flake.nix
//! └── RUSTC_WRAPPER="$PWD/scripts/sancov-rustc.sh"
//!
//! sancov-rustc.sh intercepts every rustc invocation:
//!   1. Reads --crate-name from args
//!   2. Checks if crate is in SANCOV_CRATES (comma-separated whitelist)
//!   3. If yes → adds LLVM flags:
//!        -Cpasses=sancov-module
//!        -Cllvm-args=-sanitizer-coverage-level=3
//!        -Cllvm-args=-sanitizer-coverage-inline-8bit-counters
//!        -Ccodegen-units=1
//!   4. If no → pass-through (no instrumentation)
//!   5. Build scripts and proc-macros are never instrumented
//! ```
//!
//! When `SANCOV_CRATES` is unset or empty, the wrapper is a pure
//! pass-through and all functions in this module are no-ops.
//!
//! The `xtask` runner (`cargo xtask sim`) sets `SANCOV_CRATES` per binary
//! and builds into `target/sancov` to avoid cache conflicts with normal
//! (non-instrumented) builds.
//!
//! # The shared memory data flow
//!
//! The BSS counters are process-local — a forked child increments its own
//! copy, but the parent can't see them. Shared memory bridges the gap:
//!
//! ```text
//! CHILD process                      PARENT process
//! ─────────────                      ──────────────
//! BSS counters increment
//! during simulation
//!       │
//!       ▼  copy_counters_to_shared()
//! TRANSFER buffer ──── MAP_SHARED ── classify_counts()
//! (or pool slot)                     bucketed values
//!                                          │
//!                                          ▼  has_new_coverage_inner()
//!                                    HISTORY map ── global max per edge
//!                                          │
//!                                    batch_has_new = true if any
//!                                    bucketed > history[i]
//! ```
//!
//! Three shared memory regions:
//!
//! - **Transfer buffer** ([`SANCOV_TRANSFER`]): child writes raw counters
//!   via [`copy_counters_to_shared`] before `_exit()`. In sequential mode,
//!   one buffer is reused. In parallel mode, each concurrent child gets
//!   its own pool slot instead.
//!
//! - **History map** (`SANCOV_HISTORY`): global maximum of bucketed values
//!   per edge. Never reset within a seed. Preserved across seeds by
//!   [`prepare_next_seed()`](crate::prepare_next_seed) (cumulative, like
//!   the explored map).
//!
//! - **Pool** ([`SANCOV_POOL`]): parallel mode allocates
//!   `slot_count × edge_count` bytes. Each concurrent child writes to its
//!   own slot. Parent reads the slot after `waitpid()`. Allocated lazily
//!   by [`get_or_init_sancov_pool`].
//!
//! # AFL-style bucketing
//!
//! Raw counter values are noisy — an edge hit 5 times vs 7 times is the
//! same execution pattern, but hit 1 time vs 5 times is meaningfully
//! different. The `COUNT_CLASS_LOOKUP` table maps raw counts to coarser
//! buckets, following AFL's proven approach:
//!
//! ```text
//! Raw count   Bucket   Meaning
//! ─────────   ──────   ─────────────────────
//! 0           0        not hit
//! 1           1        hit once
//! 2           2        hit twice
//! 3           4        hit a few times
//! 4–7         8        hit several times
//! 8–15        16       hit many times
//! 16–31       32       hit frequently
//! 32–127      64       hit very frequently
//! 128–255     128      hit extremely often
//! ```
//!
//! This means: an edge going from 5 hits to 7 hits (both bucket 8) is
//! not novel. But going from 1 hit to 5 hits (bucket 1 → bucket 8) *is*
//! novel — the code exercised that edge in a meaningfully different way.
//!
//! Bucketing is applied in-place by `classify_counts` before comparison.
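//!
//! The table and its effect can be modeled in a few lines (a standalone
//! sketch mirroring `COUNT_CLASS_LOOKUP`, not the module's actual static):
//!
//! ```rust
//! // AFL bucket table: raw hit count → coarse power-of-two bucket.
//! let mut lookup = [0u8; 256];
//! for (i, slot) in lookup.iter_mut().enumerate() {
//!     *slot = match i {
//!         0 => 0,
//!         1 => 1,
//!         2 => 2,
//!         3 => 4,
//!         4..=7 => 8,
//!         8..=15 => 16,
//!         16..=31 => 32,
//!         32..=127 => 64,
//!         _ => 128,
//!     };
//! }
//!
//! // 5 hits and 7 hits land in the same bucket: not novel.
//! assert_eq!(lookup[5], lookup[7]);
//! // 1 hit vs 5 hits: bucket 1 vs bucket 8, meaningfully different.
//! assert_ne!(lookup[1], lookup[5]);
//! ```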
//!
//! # Novelty detection
//!
//! `has_new_coverage_inner` is the core novelty check:
//!
//! 1. Apply AFL bucketing to the buffer in-place
//! 2. For each edge: if `bucketed > history[i]`, update history and mark novel
//! 3. Does **not** early-return — must update all history entries in one pass
//!    (otherwise a novel edge in position 100 would cause edges 101+ to
//!    be skipped, leaving stale history values)
//! 4. Zero entries (unvisited edges) are skipped
//!
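//! The comparison rule can be sketched with slices (a standalone model of
//! steps 2–4; the real function works on raw pointers):
//!
//! ```rust
//! // One-pass novelty check over already-bucketed counts.
//! fn novel(bucketed: &[u8], history: &mut [u8]) -> bool {
//!     let mut found = false;
//!     for (b, h) in bucketed.iter().zip(history.iter_mut()) {
//!         if *b != 0 && *b > *h {
//!             *h = *b; // record every improvement: no early return
//!             found = true;
//!         }
//!     }
//!     found
//! }
//!
//! let mut history = [0u8; 4];
//! assert!(novel(&[1, 0, 8, 0], &mut history));  // first sighting: novel
//! assert!(!novel(&[1, 0, 8, 0], &mut history)); // identical rerun: known
//! assert_eq!(history, [1, 0, 8, 0]);            // every entry was recorded
//! ```
//!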
//! The public API has two entry points:
//! - [`has_new_sancov_coverage`]: reads from the transfer buffer (sequential)
//! - [`has_new_sancov_coverage_from`]: reads from a specific pool slot (parallel)
//!
//! Novelty feeds into `batch_has_new` in [`split_loop`](crate::split_loop)
//! alongside the fork bitmap's
//! [`has_new_bits()`](crate::coverage::ExploredMap::has_new_bits).
//!
//! # Integration with the fork loop
//!
//! This module hooks into [`split_loop`](crate::split_loop) at five points:
//!
//! ```text
//! setup_child()     After fork: reset_bss_counters() so child
//!                   captures only its OWN edges. Reset pool pointers
//!                   so nested splits allocate fresh pools.
//!
//! exit_child()      Before _exit(): copy_counters_to_shared() so
//!                   the parent can read the child's coverage.
//!
//! Sequential reap   After waitpid: has_new_sancov_coverage() checks
//!                   the transfer buffer for novelty.
//!
//! Parallel reap     After waitpid: has_new_sancov_coverage_from(slot)
//!                   checks the child's pool slot for novelty.
//!
//! batch_has_new     Both coverage signals are OR'd together:
//!                   batch_has_new |= sancov_novelty
//!                   A barren/productive decision uses BOTH signals.
//! ```
//!
//! # Why binary targets? (sancov requires `main()`)
//!
//! LLVM calls [`__sanitizer_cov_8bit_counters_init`] during static
//! constructors, before `main()`. In `cargo test`, the test harness is
//! `main()` — the counter array is initialized once for the harness, not
//! for each `#[test]` function. Worse, the BSS counters are process-global
//! state that accumulates across test functions, making per-test
//! measurement impossible. And `fork()` in a test harness interacts badly
//! with the harness's own process management.
//!
//! Solution: each simulation runs as a standalone `[[bin]]` target
//! managed by `cargo xtask sim`. The `xtask` sets `SANCOV_CRATES` per
//! binary and uses `--target-dir target/sancov` to separate instrumented
//! from non-instrumented builds.
//!
//! # Reporting
//!
//! Coverage stats flow through the reporting pipeline:
//!
//! - [`sancov_edge_count`] and [`sancov_edges_covered`] provide the raw numbers
//! - [`ExplorationStats`](crate::shared_stats::ExplorationStats) includes
//!   `sancov_edges_total` and `sancov_edges_covered`
//! - `ExplorationReport` carries them to the `SimulationReport`
//! - The terminal display shows a "Code Cov" progress bar alongside
//!   the "Exploration" (fork bitmap) progress bar
//! - Percentage = `edges_covered / edges_total`
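//!
//! The percentage is integer counts promoted to floating point. A minimal
//! sketch with literal values (the real numbers come from the stats structs):
//!
//! ```rust
//! let edges_total: usize = 4096;
//! let edges_covered: usize = 1024;
//! // Guard the zero-edge case (sancov unavailable → edges_total == 0).
//! let pct = if edges_total == 0 {
//!     0.0
//! } else {
//!     edges_covered as f64 / edges_total as f64 * 100.0
//! };
//! assert_eq!(pct, 25.0);
//! ```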
//!
//! # Running code coverage
//!
//! ```bash
//! # Run all simulations with sancov:
//! cargo xtask sim run-all
//!
//! # Run a specific simulation:
//! cargo xtask sim run maze
//!
//! # List available binaries:
//! cargo xtask sim list
//!
//! # Instrument specific crates manually:
//! SANCOV_CRATES=moonpool_sim_examples cargo run \
//!     --bin sim-maze-explore --target-dir target/sancov
//! ```
//!
//! # Lifecycle summary
//!
//! ```text
//! init_sancov_shared()               allocate transfer + history in MAP_SHARED
//!   │
//!   ├── per-child:
//!   │     reset_bss_counters()       zero BSS array after fork
//!   │     ... simulation runs ...    BSS counters increment
//!   │     copy_counters_to_shared()  copy BSS → transfer/pool slot
//!   │     exit_child()               _exit()
//!   │
//!   ├── per-reap:
//!   │     has_new_sancov_coverage()  bucket + compare against history
//!   │
//!   ├── prepare_next_seed():
//!   │     clear_transfer_buffer()    zero transfer buffer
//!   │     reset_bss_counters()       zero BSS array
//!   │     (history preserved)        cumulative across seeds
//!   │
//!   └── cleanup_sancov_shared()      free transfer + history + pool
//! ```

use std::cell::Cell;
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};

// ---------------------------------------------------------------------------
// Global statics — set during static init, before main()
// ---------------------------------------------------------------------------

/// Pointer to the LLVM-generated BSS counter array.
///
/// Set by [`__sanitizer_cov_8bit_counters_init`] during static
/// constructors. Remains null if sancov is not enabled.
static COUNTERS_PTR: AtomicPtr<u8> = AtomicPtr::new(std::ptr::null_mut());

/// Number of edges (counters) in the instrumented binary.
static COUNTERS_LEN: AtomicUsize = AtomicUsize::new(0);

// ---------------------------------------------------------------------------
// Thread-local state — set during init() from main()
// ---------------------------------------------------------------------------

thread_local! {
    /// MAP_SHARED buffer for child→parent counter transfer.
    ///
    /// Public so `split_loop.rs` can redirect per-child in parallel mode.
    pub static SANCOV_TRANSFER: Cell<*mut u8> = const { Cell::new(std::ptr::null_mut()) };

    /// MAP_SHARED global max map (history of highest bucketed values).
    static SANCOV_HISTORY: Cell<*mut u8> = const { Cell::new(std::ptr::null_mut()) };

    /// MAP_SHARED pool base for parallel mode (one slot per concurrent child).
    ///
    /// Public so `setup_child()` can reset for nested splits.
    pub static SANCOV_POOL: Cell<*mut u8> = const { Cell::new(std::ptr::null_mut()) };

    /// Number of slots in the sancov pool.
    ///
    /// Public so `setup_child()` can reset for nested splits.
    pub static SANCOV_POOL_SLOTS: Cell<usize> = const { Cell::new(0) };
}

// ---------------------------------------------------------------------------
// LLVM callbacks
// ---------------------------------------------------------------------------

/// Called by LLVM during static initialization for each compilation unit.
///
/// Merges ranges via min(start)/max(stop) so multiple TUs are handled.
///
/// # Safety
///
/// `start` and `stop` must point to valid memory. `stop` must be ≥ `start`.
/// This is only called by LLVM instrumentation infrastructure.
#[unsafe(no_mangle)]
pub unsafe extern "C" fn __sanitizer_cov_8bit_counters_init(start: *mut u8, stop: *mut u8) {
    if start.is_null() || stop.is_null() || stop <= start {
        return;
    }

    let start_addr = start as usize;
    let stop_addr = stop as usize;

    // Merge: keep the lowest start and highest stop across all TUs.
    // Compute the previous stop BEFORE moving the start — deriving it from
    // the merged start would silently shrink the old range whenever a later
    // TU registers a lower start address.
    let prev = COUNTERS_PTR.load(Ordering::Relaxed);
    let (min_start, prev_stop) = if prev.is_null() {
        (start_addr, stop_addr)
    } else {
        let prev_start = prev as usize;
        let prev_stop = prev_start + COUNTERS_LEN.load(Ordering::Relaxed);
        (prev_start.min(start_addr), prev_stop)
    };

    // The total span is max(stop) - min(start). TUs may be non-contiguous,
    // so the span can include gaps between their arrays.
    let max_stop = prev_stop.max(stop_addr);
    COUNTERS_PTR.store(min_start as *mut u8, Ordering::Relaxed);
    COUNTERS_LEN.store(max_stop - min_start, Ordering::Relaxed);
}

/// Called by LLVM for PC table initialization. Stub — we don't use PC info.
///
/// # Safety
///
/// Only called by LLVM instrumentation infrastructure.
#[unsafe(no_mangle)]
pub unsafe extern "C" fn __sanitizer_cov_pcs_init(_pcs_beg: *const usize, _pcs_end: *const usize) {}

// ---------------------------------------------------------------------------
// AFL bucketing
// ---------------------------------------------------------------------------

/// AFL-style hit-count bucketing table.
///
/// Maps raw edge counts to coarser buckets to reduce noise from
/// minor count variations. The mapping is:
/// - 0 → 0 (not hit)
/// - 1 → 1 (hit once)
/// - 2 → 2 (hit twice)
/// - 3 → 4
/// - 4..=7 → 8
/// - 8..=15 → 16
/// - 16..=31 → 32
/// - 32..=127 → 64
/// - 128..=255 → 128
const COUNT_CLASS_LOOKUP: [u8; 256] = {
    let mut table = [0u8; 256];
    // 0 stays 0
    table[1] = 1;
    table[2] = 2;
    table[3] = 4;
    let mut i = 4;
    while i <= 7 {
        table[i] = 8;
        i += 1;
    }
    i = 8;
    while i <= 15 {
        table[i] = 16;
        i += 1;
    }
    i = 16;
    while i <= 31 {
        table[i] = 32;
        i += 1;
    }
    i = 32;
    while i <= 127 {
        table[i] = 64;
        i += 1;
    }
    i = 128;
    while i <= 255 {
        table[i] = 128;
        i += 1;
    }
    table
};

/// Apply AFL bucketing to a buffer of edge counts in-place.
fn classify_counts(buffer: *mut u8, len: usize) {
    for i in 0..len {
        // Safety: caller ensures buffer has at least `len` bytes
        unsafe {
            let val = *buffer.add(i);
            *buffer.add(i) = COUNT_CLASS_LOOKUP[val as usize];
        }
    }
}

// ---------------------------------------------------------------------------
// Novelty detection
// ---------------------------------------------------------------------------

/// Check for novel coverage in a buffer against a history map.
///
/// Applies AFL bucketing to `buffer` in-place, then compares each
/// bucketed entry against `history`. If `bucketed > history[i]`,
/// updates history and marks novelty found.
///
/// Does NOT early-return: must update all history entries in one pass.
/// Skips zero entries (unvisited edges).
fn has_new_coverage_inner(buffer: *mut u8, history: *mut u8, len: usize) -> bool {
    classify_counts(buffer, len);

    let mut found_new = false;
    for i in 0..len {
        // Safety: caller ensures both buffer and history have at least `len` bytes
        unsafe {
            let bucketed = *buffer.add(i);
            if bucketed == 0 {
                continue;
            }
            let prev = *history.add(i);
            if bucketed > prev {
                *history.add(i) = bucketed;
                found_new = true;
            }
        }
    }
    found_new
}

/// Check for novel sancov coverage in the transfer buffer (sequential path).
///
/// Returns `false` when sancov is unavailable.
pub fn has_new_sancov_coverage() -> bool {
    if !sancov_is_available() {
        return false;
    }
    let transfer = SANCOV_TRANSFER.with(|c| c.get());
    let history = SANCOV_HISTORY.with(|c| c.get());
    if transfer.is_null() || history.is_null() {
        return false;
    }
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    has_new_coverage_inner(transfer, history, len)
}

/// Check for novel sancov coverage from a specific pool slot (parallel path).
///
/// Returns `false` when sancov is unavailable.
pub fn has_new_sancov_coverage_from(slot_ptr: *mut u8) -> bool {
    if !sancov_is_available() || slot_ptr.is_null() {
        return false;
    }
    let history = SANCOV_HISTORY.with(|c| c.get());
    if history.is_null() {
        return false;
    }
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    has_new_coverage_inner(slot_ptr, history, len)
}

// ---------------------------------------------------------------------------
// Public query API
// ---------------------------------------------------------------------------

/// Check if LLVM sancov instrumentation is present.
///
/// Returns `true` when the binary was compiled with sancov and the
/// LLVM callback has registered the counter array.
pub fn sancov_is_available() -> bool {
    !COUNTERS_PTR.load(Ordering::Relaxed).is_null()
}

/// Return the number of instrumented edges.
///
/// Returns 0 when sancov is unavailable.
pub fn sancov_edge_count() -> usize {
    COUNTERS_LEN.load(Ordering::Relaxed)
}

/// Count non-zero entries in the history map (edges ever covered).
///
/// Returns 0 when sancov is unavailable or history is not initialized.
pub fn sancov_edges_covered() -> usize {
    if !sancov_is_available() {
        return 0;
    }
    let history = SANCOV_HISTORY.with(|c| c.get());
    if history.is_null() {
        return 0;
    }
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    let mut count = 0usize;
    for i in 0..len {
        // Safety: history was allocated with at least `len` bytes
        if unsafe { *history.add(i) } != 0 {
            count += 1;
        }
    }
    count
}

// ---------------------------------------------------------------------------
// Lifecycle
// ---------------------------------------------------------------------------

/// Initialize sancov shared memory buffers (transfer + history).
///
/// No-op when sancov instrumentation is not available.
///
/// # Errors
///
/// Returns an error if shared memory allocation fails.
pub fn init_sancov_shared() -> Result<(), std::io::Error> {
    if !sancov_is_available() {
        return Ok(());
    }
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    if len == 0 {
        return Ok(());
    }

    let transfer = crate::shared_mem::alloc_shared(len)?;
    let history = crate::shared_mem::alloc_shared(len)?;

    SANCOV_TRANSFER.with(|c| c.set(transfer));
    SANCOV_HISTORY.with(|c| c.set(history));

    Ok(())
}

/// Free sancov shared memory (transfer, history, and pool).
///
/// Nulls all pointers after freeing. No-op if not initialized.
pub fn cleanup_sancov_shared() {
    let len = COUNTERS_LEN.load(Ordering::Relaxed);

    let transfer = SANCOV_TRANSFER.with(|c| c.get());
    if !transfer.is_null() {
        // Safety: transfer was returned by alloc_shared(len) in init_sancov_shared().
        // Pointer is non-null (checked above) and has not been previously freed.
        unsafe { crate::shared_mem::free_shared(transfer, len) };
        SANCOV_TRANSFER.with(|c| c.set(std::ptr::null_mut()));
    }

    let history = SANCOV_HISTORY.with(|c| c.get());
    if !history.is_null() {
        // Safety: history was returned by alloc_shared(len) in init_sancov_shared().
        // Pointer is non-null (checked above) and has not been previously freed.
        unsafe { crate::shared_mem::free_shared(history, len) };
        SANCOV_HISTORY.with(|c| c.set(std::ptr::null_mut()));
    }

    let pool = SANCOV_POOL.with(|c| c.get());
    if !pool.is_null() {
        let slots = SANCOV_POOL_SLOTS.with(|c| c.get());
        if slots > 0 {
            // Safety: pool was returned by alloc_shared(slots * len) in
            // get_or_init_sancov_pool(). Size matches the original allocation.
            unsafe { crate::shared_mem::free_shared(pool, slots * len) };
        }
        SANCOV_POOL.with(|c| c.set(std::ptr::null_mut()));
        SANCOV_POOL_SLOTS.with(|c| c.set(0));
    }
}

/// Zero the transfer buffer before forking a child.
///
/// No-op when sancov is unavailable or transfer buffer is null.
pub fn clear_transfer_buffer() {
    if !sancov_is_available() {
        return;
    }
    let transfer = SANCOV_TRANSFER.with(|c| c.get());
    if transfer.is_null() {
        return;
    }
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    // Safety: transfer was returned by alloc_shared(len) and is non-null (checked
    // above). write_bytes zeroes exactly len bytes, matching the allocation size.
    unsafe {
        std::ptr::write_bytes(transfer, 0, len);
    }
}

// ---------------------------------------------------------------------------
// Child operations
// ---------------------------------------------------------------------------

/// Copy BSS counters to the shared transfer buffer.
///
/// Call in the child process before `_exit()` so the parent can
/// inspect coverage. No-op when sancov is unavailable.
pub fn copy_counters_to_shared() {
    if !sancov_is_available() {
        return;
    }
    let transfer = SANCOV_TRANSFER.with(|c| c.get());
    if transfer.is_null() {
        return;
    }
    let src = COUNTERS_PTR.load(Ordering::Relaxed);
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    // Safety: src points to the LLVM-generated BSS counter array (set by
    // __sanitizer_cov_8bit_counters_init), transfer points to alloc_shared(len).
    // Both are valid for len bytes. The regions do not overlap (BSS vs mmap).
    unsafe {
        std::ptr::copy_nonoverlapping(src, transfer, len);
    }
}

/// Zero BSS counters after fork.
///
/// Call in the child process immediately after `fork()` so the child's
/// counters start from zero. No-op when sancov is unavailable.
pub fn reset_bss_counters() {
    if !sancov_is_available() {
        return;
    }
    let ptr = COUNTERS_PTR.load(Ordering::Relaxed);
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    // Safety: ptr points to the LLVM-generated BSS counter array (set by
    // __sanitizer_cov_8bit_counters_init). It is valid for len bytes and writable
    // (BSS is read-write). write_bytes zeroes exactly len bytes.
    unsafe {
        std::ptr::write_bytes(ptr, 0, len);
    }
}

// ---------------------------------------------------------------------------
// Parallel pool
// ---------------------------------------------------------------------------

/// Get or initialize the sancov pool for parallel exploration.
///
/// Returns the pool base pointer. Reuses the existing pool if it has
/// enough slots; otherwise frees and reallocates.
/// Returns null if sancov is unavailable or allocation fails.
pub fn get_or_init_sancov_pool(slot_count: usize) -> *mut u8 {
    if !sancov_is_available() {
        return std::ptr::null_mut();
    }
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    if len == 0 {
        return std::ptr::null_mut();
    }

    let existing = SANCOV_POOL.with(|c| c.get());
    let existing_slots = SANCOV_POOL_SLOTS.with(|c| c.get());

    if !existing.is_null() && existing_slots >= slot_count {
        return existing;
    }

    // Free old pool if too small
    if !existing.is_null() {
        // Safety: existing was returned by alloc_shared(existing_slots * len)
        // in a prior call. Size matches the original allocation.
        unsafe {
            crate::shared_mem::free_shared(existing, existing_slots * len);
        }
        SANCOV_POOL.with(|c| c.set(std::ptr::null_mut()));
        SANCOV_POOL_SLOTS.with(|c| c.set(0));
    }

    match crate::shared_mem::alloc_shared(slot_count * len) {
        Ok(ptr) => {
            SANCOV_POOL.with(|c| c.set(ptr));
            SANCOV_POOL_SLOTS.with(|c| c.set(slot_count));
            ptr
        }
        Err(_) => std::ptr::null_mut(),
    }
}

/// Return a pointer to slot `idx` within the sancov pool.
///
/// # Safety
///
/// Caller must ensure `idx < slot_count` and `pool_base` was returned
/// by [`get_or_init_sancov_pool`].
pub unsafe fn sancov_pool_slot(pool_base: *mut u8, idx: usize) -> *mut u8 {
    let len = COUNTERS_LEN.load(Ordering::Relaxed);
    // Safety: pool_base is valid for slot_count * len bytes, idx < slot_count
    unsafe { pool_base.add(idx * len) }
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_bucketing_table() {
        assert_eq!(COUNT_CLASS_LOOKUP[0], 0);
        assert_eq!(COUNT_CLASS_LOOKUP[1], 1);
        assert_eq!(COUNT_CLASS_LOOKUP[2], 2);
        assert_eq!(COUNT_CLASS_LOOKUP[3], 4);
        for i in 4..=7 {
            assert_eq!(COUNT_CLASS_LOOKUP[i], 8, "bucket mismatch at {i}");
        }
        for i in 8..=15 {
            assert_eq!(COUNT_CLASS_LOOKUP[i], 16, "bucket mismatch at {i}");
        }
        for i in 16..=31 {
            assert_eq!(COUNT_CLASS_LOOKUP[i], 32, "bucket mismatch at {i}");
        }
        for i in 32..=127 {
            assert_eq!(COUNT_CLASS_LOOKUP[i], 64, "bucket mismatch at {i}");
        }
        for i in 128..=255 {
            assert_eq!(COUNT_CLASS_LOOKUP[i], 128, "bucket mismatch at {i}");
        }
    }

    #[test]
    fn test_novelty_detection_basic() {
        // First observation should be novel
        let mut buffer = [0u8; 8];
        let mut history = [0u8; 8];
        buffer[0] = 1; // edge 0 hit once

        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 8);
        assert!(novel);
        // History should now have the bucketed value
        assert_eq!(history[0], 1);
    }

    #[test]
    fn test_known_coverage_skipped() {
        let mut buffer = [0u8; 8];
        let mut history = [0u8; 8];

        // First pass: establish coverage
        buffer[0] = 1;
        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 8);
        assert!(novel);

        // Second pass: same coverage → not novel
        buffer[0] = 1; // re-set since bucketing was applied in-place
        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 8);
        assert!(!novel);
    }

    #[test]
    fn test_higher_bucket_is_novel() {
        let mut buffer = [0u8; 4];
        let mut history = [0u8; 4];

        // Hit once → bucket 1
        buffer[0] = 1;
        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 4);
        assert!(novel);
        assert_eq!(history[0], 1);

        // Hit 5 times → bucket 8 (higher than 1) → novel
        buffer[0] = 5;
        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 4);
        assert!(novel);
        assert_eq!(history[0], 8);

        // Hit 3 times → bucket 4 (lower than 8) → not novel
        buffer[0] = 3;
        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 4);
        assert!(!novel);
        assert_eq!(history[0], 8); // unchanged
    }

    #[test]
    fn test_zeros_skipped() {
        let mut buffer = [0u8; 8];
        let mut history = [0u8; 8];

        // All zeros → no novelty
        let novel = has_new_coverage_inner(buffer.as_mut_ptr(), history.as_mut_ptr(), 8);
        assert!(!novel);
    }

    #[test]
    fn test_unavailable_noop() {
        // COUNTERS_PTR is null in test builds (no LLVM instrumentation)
        assert!(!sancov_is_available());
        assert_eq!(sancov_edge_count(), 0);
        assert_eq!(sancov_edges_covered(), 0);
        assert!(!has_new_sancov_coverage());
        assert!(!has_new_sancov_coverage_from(std::ptr::null_mut()));

        // These should all be safe no-ops
        copy_counters_to_shared();
        reset_bss_counters();
        clear_transfer_buffer();
        cleanup_sancov_shared();

        let pool = get_or_init_sancov_pool(4);
        assert!(pool.is_null());
    }

    #[test]
    fn test_init_cleanup_lifecycle() {
        // When sancov is unavailable, init/cleanup are no-ops
        init_sancov_shared().expect("init should succeed as no-op");
        let transfer = SANCOV_TRANSFER.with(|c| c.get());
        assert!(transfer.is_null(), "no buffers allocated without sancov");
        cleanup_sancov_shared();
    }

    #[test]
    fn test_classify_counts_in_place() {
        let mut buf = [0u8, 1, 2, 3, 5, 10, 20, 50, 200];
        classify_counts(buf.as_mut_ptr(), buf.len());
        assert_eq!(buf, [0, 1, 2, 4, 8, 16, 32, 64, 128]);
    }
}
837}