Module numeric

Numeric types used across the CUDA stack.

These are thin #[repr(transparent)] / #[repr(C)] wrappers chosen to match the layout NVIDIA’s headers use for __half, __nv_bfloat16, cuFloatComplex, and cuDoubleComplex. All conversion methods return the same bit patterns the CUDA runtime would produce for typical inputs; exact agreement with NVIDIA’s rounding on edge cases is tested in the integration suite against the half crate and CUDA itself.
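
As a layout illustration only (a sketch; the crate's actual field names and visibility may differ), the guarantees above amount to:

```rust
// Layout sketch only; the field names are illustrative assumptions,
// not necessarily the crate's actual (and likely private) fields.

/// 16 bits of storage, matching CUDA's __half.
#[repr(transparent)]
pub struct Half(pub u16);

/// 16 bits of storage, matching CUDA's __nv_bfloat16.
#[repr(transparent)]
pub struct BFloat16(pub u16);

/// Two consecutive f32s, real part first: cuFloatComplex / float2.
#[repr(C)]
pub struct Complex32 {
    pub re: f32,
    pub im: f32,
}

/// Two consecutive f64s, real part first: cuDoubleComplex / double2.
#[repr(C)]
pub struct Complex64 {
    pub re: f64,
    pub im: f64,
}
```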

If you already depend on half / num-complex, enable the half-crate / num-complex-crate features for zero-cost From/Into adapters.
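
For instance, with the half-crate feature enabled, round-tripping through half::f16 is just a pair of conversions. A sketch that assumes the From/Into impls described above and the wrapper types in scope:

```rust
use half::f16;
// Assumes the Half wrapper from this module is imported, e.g.
// `use your_crate::numeric::Half;` (path is hypothetical).

let h: Half = f16::from_f32(1.5).into(); // half::f16 -> Half, same bits
let back: f16 = h.into();                // Half -> half::f16, also free
assert_eq!(back, f16::from_f32(1.5));
```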

Structs

BFloat16
Brain Floating Point 16 (__nv_bfloat16 in CUDA). The top 16 bits of an IEEE 754 f32 (see the bit-level sketch after this list).
Complex32
Single-precision complex number (cuFloatComplex, layout-compatible with float2).
Complex64
Double-precision complex number (cuDoubleComplex, layout-compatible with double2).
Half
IEEE 754 binary16 (“half-precision”, __half in CUDA).
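
As noted in the BFloat16 entry above, bfloat16 is literally the high half of an f32, so a truncating conversion is a single shift. This sketch uses raw bit manipulation rather than the crate's own conversion methods, which (per the module description) may round to nearest rather than truncate:

```rust
// Truncating f32 -> bfloat16 -> f32 round trip, demonstrating that bf16
// is the top 16 bits of an f32. Layout illustration only.
fn f32_to_bf16_bits(x: f32) -> u16 {
    (x.to_bits() >> 16) as u16
}

fn bf16_bits_to_f32(bits: u16) -> f32 {
    f32::from_bits((bits as u32) << 16)
}

fn main() {
    let x = 3.140625f32; // exactly representable in bf16 (7 mantissa bits)
    let b = f32_to_bf16_bits(x);
    assert_eq!(bf16_bits_to_f32(b), x); // round trip is lossless here
}
```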