Enum rust_gpu_tools::Program
pub enum Program {
    Cuda(cuda::Program),
    Opencl(opencl::Program),
}
Abstraction for running programs on CUDA or OpenCL.
Variants
Implementations
impl Program
pub fn run<F1, F2, R, E, A>(&self, fun: (F1, F2), arg: A) -> Result<R, E>
where
    E: From<GPUError>,
    F1: FnOnce(&cuda::Program, A) -> Result<R, E>,
    F2: FnOnce(&opencl::Program, A) -> Result<R, E>,
Run some code in the context of the program.
There is an implementation for OpenCL and one for CUDA. Both use different Rust types, but opencl::Program and cuda::Program implement the same API. This means that the same code can be used to run on either of them; the only difference is the type of the Program.
You need to pass in two closures, one for CUDA and one for OpenCL; each gets its corresponding program type as a parameter. For convenience there is the [program_closures!] macro, which helps reduce code duplication by creating the two closures out of a single one.
CUDA and OpenCL support can be enabled/disabled via the opencl and cuda features. If one of them is disabled, you still need to pass in two closures. This way the API stays the same, but you can disable a backend at compile time.
The second parameter is a single arbitrary argument, which will be passed on into the closure. This is useful when you e.g. need to pass in a mutable reference. Such a reference cannot be shared between closures, hence we pass it on, so that the compiler knows that it is used at most once.
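The dispatch and the argument passing described above can be sketched without a GPU. In the sketch below, `FakeCuda`, `FakeOpencl`, and the simplified `Program` and `GpuError` types are stand-ins invented for illustration, not the library's real types; they only mirror the shape of `run` and show how a mutable reference can be handed in as the single argument:

```rust
// Stand-in error and backend types (hypothetical; the real ones live in the
// library's `cuda` and `opencl` modules and require a GPU context).
#[derive(Debug)]
struct GpuError;

struct FakeCuda;
struct FakeOpencl;

enum Program {
    Cuda(FakeCuda),
    Opencl(FakeOpencl),
}

impl Program {
    // Mirrors the real signature: a tuple of closures (one per backend) plus a
    // single arbitrary argument that is passed to whichever closure runs.
    fn run<F1, F2, R, E, A>(&self, fun: (F1, F2), arg: A) -> Result<R, E>
    where
        E: From<GpuError>,
        F1: FnOnce(&FakeCuda, A) -> Result<R, E>,
        F2: FnOnce(&FakeOpencl, A) -> Result<R, E>,
    {
        match self {
            Program::Cuda(p) => (fun.0)(p, arg),
            Program::Opencl(p) => (fun.1)(p, arg),
        }
    }
}

fn main() {
    // Passing a mutable reference as the argument: both closures name it, but
    // only one of them ever runs, so the borrow is moved into it, not shared.
    let mut acc: Vec<u32> = vec![1, 2, 3];
    let closures = (
        |_: &FakeCuda, acc: &mut Vec<u32>| -> Result<u32, GpuError> {
            acc.push(4);
            Ok(acc.iter().sum())
        },
        |_: &FakeOpencl, acc: &mut Vec<u32>| -> Result<u32, GpuError> {
            acc.push(4);
            Ok(acc.iter().sum())
        },
    );
    let program = Program::Opencl(FakeOpencl);
    let sum = program.run(closures, &mut acc).unwrap();
    println!("sum = {}", sum);
}
```

This is what `program_closures!` removes the duplication for: the two closure bodies above are identical except for the backend parameter type.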
Examples found in repository
pub fn main() {
    // Define some data that should be operated on.
    let aa: Vec<u32> = vec![1, 2, 3, 4];
    let bb: Vec<u32> = vec![5, 6, 7, 8];

    // This is the core. Here we write the interaction with the GPU independent of whether it is
    // CUDA or OpenCL.
    let closures = program_closures!(|program, _args| -> Result<Vec<u32>, GPUError> {
        // Make sure the input data has the same length.
        assert_eq!(aa.len(), bb.len());
        let length = aa.len();

        // Copy the data to the GPU.
        let aa_buffer = program.create_buffer_from_slice(&aa)?;
        let bb_buffer = program.create_buffer_from_slice(&bb)?;

        // The result buffer has the same length as the input buffers.
        let result_buffer = unsafe { program.create_buffer::<u32>(length)? };

        // Get the kernel.
        let kernel = program.create_kernel("add", 1, 1)?;

        // Execute the kernel.
        kernel
            .arg(&(length as u32))
            .arg(&aa_buffer)
            .arg(&bb_buffer)
            .arg(&result_buffer)
            .run()?;

        // Get the resulting data.
        let mut result = vec![0u32; length];
        program.read_into_buffer(&result_buffer, &mut result)?;

        Ok(result)
    });

    // Get the first available device.
    let device = *Device::all().first().unwrap();

    // First we run it on CUDA.
    let cuda_program = cuda(device);
    let cuda_result = cuda_program.run(closures, ()).unwrap();
    assert_eq!(cuda_result, [6, 8, 10, 12]);
    println!("CUDA result: {:?}", cuda_result);

    // Then we run it on OpenCL.
    let opencl_program = opencl(device);
    let opencl_result = opencl_program.run(closures, ()).unwrap();
    assert_eq!(opencl_result, [6, 8, 10, 12]);
    println!("OpenCL result: {:?}", opencl_result);
}
pub fn device_name(&self) -> &str
Returns the name of the GPU, e.g. “GeForce RTX 3090”.
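Conceptually, this accessor just delegates to whichever backend is active. The sketch below uses hypothetical stand-in types (the real backends require a GPU context) to illustrate that dispatch:

```rust
// Hypothetical stand-ins for the real backend program types, invented so
// this sketch compiles without a GPU.
struct CudaProgram { name: String }
struct OpenclProgram { name: String }

enum Program {
    Cuda(CudaProgram),
    Opencl(OpenclProgram),
}

impl Program {
    // Delegate to the active variant's stored device name.
    fn device_name(&self) -> &str {
        match self {
            Program::Cuda(p) => &p.name,
            Program::Opencl(p) => &p.name,
        }
    }
}

fn main() {
    let program = Program::Cuda(CudaProgram { name: "GeForce RTX 3090".to_string() });
    println!("{}", program.device_name()); // prints "GeForce RTX 3090"
}
```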