//! Abstraction layer for OpenCL and CUDA.
//!
//! Environment variables
//! ---------------------
//!
//! - `RUST_GPU_TOOLS_CUSTOM_GPU`
//!
//! rust-gpu-tools has a hard-coded list of GPUs and their CUDA core count. If your card is not
//! part of that list, you can add it via `RUST_GPU_TOOLS_CUSTOM_GPU`. The value is a comma
//! separated list of `name:cores`. Example:
//!
//! ```text
//! RUST_GPU_TOOLS_CUSTOM_GPU="GeForce RTX 2080 Ti:4352,GeForce GTX 1060:1280"
//! ```
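//!
//! For illustration, `name:cores` entries of such a value can be split apart like this (a
//! sketch only, not the crate's actual parser):
//!
//! ```
//! let value = "GeForce RTX 2080 Ti:4352,GeForce GTX 1060:1280";
//! for entry in value.split(',') {
//!     // `rsplit_once` splits on the last colon, so colons inside the device name are kept.
//!     let (name, cores) = entry.rsplit_once(':').expect("missing `:cores` suffix");
//!     let cores: u32 = cores.parse().expect("core count must be an integer");
//!     println!("{name}: {cores} cores");
//! }
//! ```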
//!
//! Feature flags
//! -------------
//!
//! There are two [feature flags], `cuda` and `opencl`. By default `opencl` is enabled. You can
//! enable both at the same time. At least one of them needs to be enabled at any time.
//!
//! [feature flags]: https://doc.rust-lang.org/cargo/reference/manifest.html#the-features-section
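//!
//! For example, a downstream `Cargo.toml` that wants CUDA support only might contain (version
//! number illustrative):
//!
//! ```toml
//! [dependencies]
//! rust-gpu-tools = { version = "0.7", default-features = false, features = ["cuda"] }
//! ```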
mod error;
pub mod device;
pub mod program;

#[cfg(feature = "cuda")]
pub mod cuda;
#[cfg(feature = "opencl")]
pub mod opencl;

pub use device::{Device, Framework, Vendor, CUDA_CORES};
pub use error::GPUError;
pub use program::Program;

#[cfg(not(any(feature = "opencl", feature = "cuda")))]
compile_error!("At least one of the features `opencl` or `cuda` must be enabled.");
/// A buffer on the GPU.
///
/// The concept of a local buffer comes from OpenCL. In CUDA you don't allocate such a buffer
/// directly via an API call; instead, you specify the amount of shared memory that should be
/// used.
///
/// There can be at most a single local buffer per kernel. On CUDA, a null pointer is passed in
/// place of an actual value, and the memory that should be allocated is then supplied
/// automatically with the kernel call.