pub struct Stream { /* private fields */ }
A CUDA stream (GPU command queue).
Streams provide ordered, asynchronous execution of GPU commands. Commands enqueued on the same stream execute sequentially, while commands on different streams may execute concurrently.
The stream holds an Arc<Context> to ensure the parent context
outlives the stream.
Implementations
impl Stream
pub fn new(ctx: &Arc<Context>) -> CudaResult<Self>
Creates a new stream with the CU_STREAM_NON_BLOCKING flag.
Non-blocking streams do not implicitly synchronise with the default (NULL) stream, allowing maximum concurrency.
§Errors
Returns a CudaError if the driver
call fails (e.g. invalid context, out of resources).
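A minimal sketch of constructing a stream, assuming a `Context::new(device)`-style constructor (the exact `Context` API is not shown on this page, so that name is an assumption):

```rust
use std::sync::Arc;

// Hypothetical context setup; the real constructor may differ.
let ctx: Arc<Context> = Arc::new(Context::new(0)?);

// The stream clones the Arc internally, so the parent context is
// guaranteed to outlive the stream even if `ctx` is dropped first.
let stream = Stream::new(&ctx)?;
```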
pub fn with_priority(ctx: &Arc<Context>, priority: i32) -> CudaResult<Self>
Creates a new stream with the specified priority and the
CU_STREAM_NON_BLOCKING flag.
Lower numerical values indicate higher priority. The valid range
can be queried via cuCtxGetStreamPriorityRange.
§Errors
Returns a CudaError if the priority
is out of range or the driver call otherwise fails.
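A sketch of creating a high-priority stream. The driver exposes the valid range via cuCtxGetStreamPriorityRange; whether this crate wraps that call (e.g. as a `stream_priority_range` method, assumed here) is not shown on this page:

```rust
// Hypothetical wrapper around cuCtxGetStreamPriorityRange.
// `greatest` is the highest priority (numerically lowest value).
let (least, greatest) = ctx.stream_priority_range()?;

// A high-priority stream for latency-sensitive work.
let hi_prio = Stream::with_priority(&ctx, greatest)?;

// A low-priority stream for bulk background work.
let background = Stream::with_priority(&ctx, least)?;
```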
pub fn synchronize(&self) -> CudaResult<()>
Blocks the calling host thread until all work previously submitted to this stream has completed.
§Errors
Returns a CudaError if the driver call fails (e.g. a preceding asynchronous operation on the stream failed).
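A typical usage pattern is to enqueue asynchronous work, overlap it with CPU-side work, then block until the GPU catches up. The launch API here is illustrative, not this crate's actual interface:

```rust
// Enqueue asynchronous GPU work (launch call name is an assumption).
kernel.launch_on(&stream, grid, block, &args)?;

// The host thread is free to do unrelated work here while the GPU runs.

// Block the calling thread until everything enqueued above has finished.
stream.synchronize()?;
```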
pub fn wait_event(&self, event: &Event) -> CudaResult<()>
Makes all future work submitted to this stream wait until the given event has been recorded and completed.
This is the primary mechanism for inter-stream synchronisation:
record an Event on one stream, then call wait_event on
another stream to establish an ordering dependency.
§Errors
Returns a CudaError if the driver
call fails (e.g. invalid event handle).
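The record-then-wait pattern described above might be sketched as follows; the `Event::new` and `record` signatures are assumptions, as the `Event` API is not shown on this page:

```rust
let producer = Stream::new(&ctx)?;
let consumer = Stream::new(&ctx)?;
let event = Event::new(&ctx)?; // constructor name is an assumption

// Producer: enqueue work, then record an event marking its completion.
produce_kernel.launch_on(&producer, /* ... */)?;
event.record(&producer)?; // `record` signature is an assumption

// Consumer: everything enqueued after this call waits for `event`,
// without blocking the host thread.
consumer.wait_event(&event)?;
consume_kernel.launch_on(&consumer, /* ... */)?;
```

Because wait_event only orders GPU work, the host continues immediately; use synchronize on the consumer stream if the host must also wait for the result.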