// SPDX-License-Identifier: MIT OR Apache-2.0
//! GPU memory measurement dispatchers and backend modules.
//!
//! Each backend (`nvml`, `dxgi`, `nvidia_smi`) is gated by a Cargo
//! feature; the dispatchers below try them in priority order and surface
//! the first success.
use crate::{HypomnesisError, ProcessGpuInfo};
/// Number of NVIDIA GPUs visible to `NVML` (`NVML`-canonical ordering).
///
/// On Windows the count uses `NVML`; if `NVML` is unavailable, it falls
/// back to counting `DXGI` adapters with non-zero dedicated `VRAM`.
///
/// # Errors
///
/// Returns [`HypomnesisError::Nvml`] if `NVML` fails to load or report a count,
/// or [`HypomnesisError::NoGpuSource`] if no measurement backend is enabled.
/// Device-wide info for a specific GPU index (`NVML`-canonical ordering).
///
/// On Windows: `NVML` if available; otherwise `DXGI`'s first NVIDIA adapter
/// with non-zero dedicated `VRAM`. iGPUs and the Microsoft Basic Render
/// Driver are skipped.
///
/// # Errors
///
/// Returns [`HypomnesisError::DeviceIndexOutOfRange`] if `index` exceeds the device count.
/// Returns [`HypomnesisError::NoGpuSource`] if no backend can satisfy the query.
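The `DXGI` fallback described above can be sketched as a simple filter: keep only adapters with non-zero dedicated `VRAM` and drop the software rasterizer. This is a minimal illustration; `Adapter` and `count_gpu_adapters` are hypothetical names, and real code would read `DXGI_ADAPTER_DESC1` via the `DXGI` FFI rather than a plain struct.

```rust
/// Stand-in for a DXGI adapter description (illustrative only).
struct Adapter {
    description: &'static str,
    dedicated_video_memory: u64,
}

/// Count adapters that look like real GPUs: non-zero dedicated VRAM
/// and not the Microsoft Basic Render Driver (a software rasterizer).
fn count_gpu_adapters(adapters: &[Adapter]) -> usize {
    adapters
        .iter()
        .filter(|a| a.dedicated_video_memory > 0)
        .filter(|a| a.description != "Microsoft Basic Render Driver")
        .count()
}

fn main() {
    let adapters = [
        Adapter { description: "NVIDIA GeForce RTX 3080", dedicated_video_memory: 10 << 30 },
        Adapter { description: "Intel(R) UHD Graphics", dedicated_video_memory: 0 },
        Adapter { description: "Microsoft Basic Render Driver", dedicated_video_memory: 0 },
    ];
    // Only the discrete NVIDIA adapter survives the filter.
    println!("visible GPU adapters: {}", count_gpu_adapters(&adapters));
}
```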
/// Per-process GPU memory used by the calling process on the given device.
///
/// Tries (in order): `DXGI` on Windows, `NVML`, then `nvidia-smi` fallback.
/// The returned `ProcessGpuInfo` carries an `is_per_process` flag and a
/// `source` discriminator so callers can distinguish a true per-process
/// reading from a device-wide fallback.
///
/// # Errors
///
/// Returns [`HypomnesisError::NoGpuSource`] if every available backend fails.
// Wave 2 impl will do FFI dispatch (not const)
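The priority-order dispatch described in the module docs ("try backends in order, surface the first success") can be sketched with `Result::or_else` chaining. The backend stubs and names below are illustrative assumptions, not the crate's real FFI; in the actual implementation each call would be feature-gated and go through `NVML`, `DXGI`, or `nvidia-smi`.

```rust
#[derive(Debug, PartialEq)]
enum GpuError {
    NoGpuSource,
}

// Stub backends standing in for the feature-gated FFI calls.
fn query_dxgi() -> Result<u64, GpuError> { Err(GpuError::NoGpuSource) }
fn query_nvml() -> Result<u64, GpuError> { Ok(512 * 1024 * 1024) }
fn query_nvidia_smi() -> Result<u64, GpuError> { Err(GpuError::NoGpuSource) }

/// Try each backend in priority order and surface the first success;
/// if every backend fails, the final error propagates to the caller.
fn process_gpu_memory_bytes() -> Result<u64, GpuError> {
    query_dxgi()
        .or_else(|_| query_nvml())
        .or_else(|_| query_nvidia_smi())
}

fn main() {
    // DXGI fails here, so the NVML stub's value is surfaced.
    println!("{:?}", process_gpu_memory_bytes());
}
```

`or_else` chaining keeps the dispatch flat and makes the fallback order explicit at a glance, which matches the "first success wins" contract in the doc comments.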