pub struct Batch { /* private fields */ }
A batch of dispatches recorded into a single command buffer.
Created via Device::batch.
Use barrier between dispatches that have data
dependencies (where one dispatch reads from another’s output).
Implementations

impl Batch
pub fn run(
    &mut self,
    kernel: &Kernel,
    buffers: &[&dyn GpuBuf],
    invocations: u32,
) -> Result<&mut Self>
Record a kernel dispatch with auto-calculated workgroups.
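A minimal sketch of a single dispatch, assuming a Device named gpu, a compiled kernel, and buffers named input and output created elsewhere (all illustrative names, not part of this API's documented surface):

```rust
// Record one dispatch sized for `n` invocations; the workgroup
// count is derived from `n` automatically, per the docs above.
let mut batch = gpu.batch()?;
batch.run(&kernel, &[&input, &output], n)?;
batch.submit()?;
```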
pub fn run_with_push_constants(
    &mut self,
    kernel: &Kernel,
    buffers: &[&dyn GpuBuf],
    invocations: u32,
    push_constants: &[u8],
) -> Result<&mut Self>
Record a kernel dispatch with push constants.
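A sketch of passing a small parameter via push constants. The f32 scale factor and the buffer name data are illustrative assumptions; the only documented contract is that push constants arrive as raw bytes:

```rust
// Encode an f32 as little-endian bytes and hand it to the kernel
// as a push constant alongside the dispatch.
let scale: f32 = 2.0;
let mut batch = gpu.batch()?;
batch.run_with_push_constants(&kernel, &[&data], n, &scale.to_le_bytes())?;
batch.submit()?;
```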
pub fn run_configured(
    &mut self,
    kernel: &Kernel,
    buffers: &[&dyn GpuBuf],
    workgroups: [u32; 3],
    push_constants: Option<&[u8]>,
) -> Result<&mut Self>
Record a kernel dispatch with explicit workgroups and optional push constants.
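A sketch of an explicit 2D workgroup grid, e.g. for an image-shaped kernel. The 16x16 tile size, the image buffer, and the width/height variables are assumptions for illustration:

```rust
// Dispatch a 2D grid sized to cover a width x height image,
// assuming the kernel declares a 16x16 local workgroup size.
let mut batch = gpu.batch()?;
batch.run_configured(
    &kernel,
    &[&image],
    [width.div_ceil(16), height.div_ceil(16), 1],
    None, // no push constants for this dispatch
)?;
batch.submit()?;
```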
pub fn barrier(&mut self) -> &mut Self
Insert a compute-to-compute barrier.
Use this between dispatches where a later dispatch reads from an earlier dispatch’s output buffer. Without a barrier, the GPU may execute dispatches out of order or overlap writes with reads.
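A sketch of the dependent-dispatch pattern described above, assuming two kernels first and second and a mid buffer that first writes and second reads (illustrative names):

```rust
// `second` reads the buffer `first` writes, so a barrier is
// required between the two recorded dispatches.
let mut batch = gpu.batch()?;
batch
    .run(&first, &[&input, &mid], n)?
    .barrier()
    .run(&second, &[&mid, &output], n)?;
batch.submit()?;
```

Because run returns Result<&mut Self> and barrier returns &mut Self, dependent dispatches chain naturally with ? between the fallible calls.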
pub fn submit(self) -> Result<()>
Submit all recorded dispatches and wait for completion.
All dispatches execute in a single command buffer with one fence wait, eliminating per-dispatch synchronization overhead.
Equivalent to self.submit_async()?.wait().
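A sketch of the batching benefit described above: many dispatches recorded up front, then a single submit and fence wait (kernel, buf, and n are illustrative):

```rust
// Ten dependent dispatches in one command buffer; submit()
// performs a single fence wait covering all of them.
let mut batch = gpu.batch()?;
for _ in 0..10 {
    batch.run(&kernel, &[&buf], n)?.barrier();
}
batch.submit()?;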
pub fn submit_async(self) -> Result<Ticket>
Submit all recorded dispatches and return a Ticket for
non-blocking completion tracking.
The GPU work is queued immediately. Use Ticket::wait to block
until completion, or Ticket::is_ready to poll.
Example
let mut batch = gpu.batch()?;
batch.run(&kernel, &[&input, &output], n)?;
let ticket = batch.submit_async()?;
// ... CPU work while GPU runs ...
ticket.wait()?;
let result: Vec<f32> = output.download()?;