//! Bounded rayon global pool for CPU render dispatches.
//!
//! Rationale
//! ---------
//! Premiere renders many frames in parallel (observed `concurrent` up to 16).
//! The default rayon global pool has `num_cpus` workers. That is fine for a
//! single dispatch but becomes problematic when Premiere itself is CPU-bound
//! on UI, audio, and layout threads — they get starved of cores.
//!
//! We re-initialize the rayon **global** pool once with a bounded worker
//! count (default: `max(1, num_cpus - 2)`). `par_chunks_mut(...).for_each(...)`
//! then naturally runs on that bounded pool without needing `install()`
//! wrappers (which, with N > workers concurrent installs, serialize the
//! outer closures and starve the inner fork-join — exactly the 20x regression
//! observed when we tried `install()`).
//!
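//! A typical dispatch then looks like the sketch below (`frame` and
//! `row_bytes` are placeholder names, not identifiers from this crate):
//!
//! ```ignore
//! frame.par_chunks_mut(row_bytes).for_each(|row: &mut [u8]| {
//!     // per-row CPU work runs on the bounded global pool
//! });
//! ```
//!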
//! Overridable at runtime via `EX_RENDER_WORKERS`.

use std::sync::Once;
use std::sync::atomic::{AtomicUsize, Ordering};

static INIT: Once = Once::new();
static ACTIVE_WORKERS: AtomicUsize = AtomicUsize::new(0);

/// Default policy: leave two logical cores for the host UI/audio/layout
/// threads, i.e. `max(1, num_cpus - 2)`. Computed from the logical CPU count
/// rather than `rayon::current_num_threads()`, since querying rayon first
/// would lazily install the default global pool ahead of our bounded one.
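// A minimal sketch of the default policy; `default_workers` is an assumed
// name, and `num_cpus` is the crate already referenced in the module docs.
fn default_workers() -> usize {
    num_cpus::get().saturating_sub(2).max(1)
}
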
/// Initialize the rayon global pool with a bounded worker count. Safe to
/// invoke multiple times — the underlying `build_global()` is guarded by
/// `Once`. If another library already installed the global pool, we fall
/// back silently and still report the observed worker count.
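// A sketch of the init routine described above; the function name and the
// exact env-var parsing are assumptions, but the policy (bounded worker
// count, `EX_RENDER_WORKERS` override, silent fallback) follows the module
// docs.
pub fn init_render_pool() {
    INIT.call_once(|| {
        let workers = std::env::var("EX_RENDER_WORKERS")
            .ok()
            .and_then(|v| v.parse::<usize>().ok())
            .filter(|&n| n > 0)
            .unwrap_or_else(default_workers);

        // `build_global` errors if another library already installed the
        // global pool; in that case we keep whatever pool exists.
        let _ = rayon::ThreadPoolBuilder::new()
            .num_threads(workers)
            .build_global();

        // Record the worker count actually in effect, whichever pool won.
        ACTIVE_WORKERS.store(rayon::current_num_threads(), Ordering::Relaxed);
    });
}
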
/// Number of workers active in the render pool. Reported in the per-dispatch
/// diagnostic line so we can verify the policy is active.
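// Assumed accessor name; reads the value recorded during `init_render_pool`.
pub fn active_workers() -> usize {
    ACTIVE_WORKERS.load(Ordering::Relaxed)
}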