Multi-GPU context management with per-device context pools.
When working with multiple GPUs, it is common to maintain one CUDA
context per device and dispatch work across them. DevicePool
automates context lifecycle management and provides scheduling
helpers (round-robin, best-available) for multi-GPU workloads.
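The round-robin helper can be pictured with a small self-contained sketch. `MockPool` and `next_device` below are hypothetical stand-ins, not part of the oxicuda_driver API; only the pattern (an atomic counter cycling over device ordinals) is illustrated.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical round-robin scheduler over a fixed set of device ordinals.
struct MockPool {
    ordinals: Vec<usize>,
    next: AtomicUsize,
}

impl MockPool {
    fn new(n: usize) -> Self {
        MockPool { ordinals: (0..n).collect(), next: AtomicUsize::new(0) }
    }

    /// Return the next device ordinal in round-robin order.
    /// fetch_add lets many threads bump the counter without a lock.
    fn next_device(&self) -> usize {
        let i = self.next.fetch_add(1, Ordering::Relaxed);
        self.ordinals[i % self.ordinals.len()]
    }
}

fn main() {
    let pool = MockPool::new(3);
    let picks: Vec<usize> = (0..6).map(|_| pool.next_device()).collect();
    println!("{:?}", picks); // cycles through the three devices: [0, 1, 2, 0, 1, 2]
}
```

A best-available strategy would replace `next_device` with a selection based on per-device load or free memory rather than a counter.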
§Thread safety
DevicePool is Send + Sync. Each context is wrapped in an
Arc<Context> so it can be shared across threads. The caller is
responsible for calling Context::set_current on the appropriate
thread before issuing driver calls.
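The sharing pattern described above can be sketched with a mock type; `MockContext` and its `set_current` are hypothetical stand-ins for the crate's `Context`, showing only how each thread clones an `Arc` handle and binds the context before doing work.

```rust
use std::sync::Arc;
use std::thread;

/// Hypothetical stand-in for a CUDA context (the real type is
/// oxicuda_driver's Context); only the Arc-sharing pattern is shown.
struct MockContext {
    ordinal: usize,
}

impl MockContext {
    /// Stand-in for Context::set_current: binds the context to the
    /// calling thread before any driver calls are issued.
    fn set_current(&self) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    let contexts: Vec<Arc<MockContext>> =
        (0..2).map(|i| Arc::new(MockContext { ordinal: i })).collect();

    let handles: Vec<_> = contexts
        .iter()
        .map(|ctx| {
            let ctx = Arc::clone(ctx); // each thread owns its own Arc handle
            thread::spawn(move || {
                // Bind the context on this thread before issuing work.
                ctx.set_current().expect("set_current failed");
                ctx.ordinal
            })
        })
        .collect();

    let ordinals: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    println!("{:?}", ordinals); // [0, 1]
}
```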
§Example
use oxicuda_driver::multi_gpu::DevicePool;
oxicuda_driver::init()?;
let pool = DevicePool::new()?;
println!("managing {} devices", pool.device_count());
for (dev, ctx) in pool.iter() {
    ctx.set_current()?;
    println!("device {}: {}", dev.ordinal(), dev.name()?);
}
Structs§
- DevicePool - Per-device context pool for multi-GPU management.