// vulkano/sync/future/mod.rs

//! Represents an event that will happen on the GPU in the future.
//!
//! Whenever you ask the GPU to start an operation by using a function of the vulkano library (for
//! example executing a command buffer), this function will return a *future*. A future is an
//! object that implements [the `GpuFuture` trait](GpuFuture) and that represents the
//! point in time when this operation is over.
//!
//! No function in vulkano immediately sends an operation to the GPU (with the exception of some
//! unsafe low-level functions). Instead they return a future that is in the pending state. Before
//! the GPU actually starts doing anything, you have to *flush* the future by calling the `flush()`
//! method or one of its derivatives.
//!
//! Futures serve several roles:
//!
//! - Futures can be used to build dependencies between operations, making it possible to ask
//!   that an operation start only after a previous operation is finished.
//! - Submitting an operation to the GPU is a costly operation. By chaining multiple operations
//!   with futures you will submit them all at once instead of one by one, thereby reducing this
//!   cost.
//! - Futures keep alive the resources and objects used by the GPU so that they don't get destroyed
//!   while they are still in use.
//!
//! The last point means that you should keep futures alive in your program for as long as their
//! corresponding operation is potentially still being executed by the GPU. Dropping a future
//! earlier will block the current thread (after flushing, if necessary) until the GPU has finished
//! the operation, which is usually not what you want.
//!
//! If you write a function that submits an operation to the GPU in your program, you are
//! encouraged to let this function return the corresponding future and let the caller handle it.
//! This way the caller will be able to chain multiple futures together and decide when it wants to
//! keep the future alive or drop it.
//!
//! # Executing an operation after a future
//!
//! Respecting the order of operations on the GPU is important, as it is what *proves* to vulkano
//! that what you are doing is indeed safe. For example if you submit two operations that modify
//! the same buffer, then you need to execute one after the other instead of submitting them
//! independently. Failing to do so would mean that these two operations could potentially execute
//! simultaneously on the GPU, which would be unsafe.
//!
//! This is done by calling one of the methods of the `GpuFuture` trait. For example calling
//! `prev_future.then_execute(command_buffer)` takes ownership of `prev_future` and will make sure
//! to only start executing `command_buffer` after the moment corresponding to `prev_future`
//! happens. The object returned by the `then_execute` function is itself a future that corresponds
//! to the moment when the execution of `command_buffer` ends.
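//!
//! For example, a complete chain from submission to CPU-side wait can look like this (a sketch:
//! it assumes that `device`, `queue` and `command_buffer` have already been created):
//!
//! ```no_run
//! # use std::sync::Arc;
//! # use vulkano::command_buffer::PrimaryAutoCommandBuffer;
//! # use vulkano::device::{Device, Queue};
//! use vulkano::sync::{self, GpuFuture};
//! #
//! # fn example(
//! #     device: Arc<Device>,
//! #     queue: Arc<Queue>,
//! #     command_buffer: Arc<PrimaryAutoCommandBuffer>,
//! # ) {
//! // Start from an empty future, execute the command buffer after it, then signal a
//! // fence and flush so that the work is actually sent to the GPU.
//! let future = sync::now(device.clone())
//!     .then_execute(queue.clone(), command_buffer)
//!     .unwrap()
//!     .then_signal_fence_and_flush()
//!     .unwrap();
//!
//! // Block the current thread until the GPU has finished executing the command buffer.
//! future.wait(None).unwrap();
//! # }
//! ```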
//!
//! ## Between two different GPU queues
//!
//! When you want to perform an operation after another operation on two different queues, you
//! **must** put a *semaphore* between them. Failure to do so would result in a runtime error.
//! Adding a semaphore is as simple as replacing `prev_future.then_execute(...)` with
//! `prev_future.then_signal_semaphore().then_execute(...)`.
//!
//! > **Note**: A common use-case is using a transfer queue (i.e. a queue that is only capable of
//! > performing transfer operations) to write data to a buffer, then read that data from the
//! > rendering queue.
//!
//! What happens when you do so is that the first queue will execute the first set of operations
//! (represented by `prev_future` in the example), then put a semaphore in the signalled state.
//! Meanwhile the second queue blocks (if necessary) until that same semaphore gets signalled, and
//! only then will it execute the second set of operations.
//!
//! Since you want to avoid blocking the second queue as much as possible, you probably want to
//! flush the operation to the first queue as soon as possible. This can easily be done by calling
//! `then_signal_semaphore_and_flush()` instead of `then_signal_semaphore()`.
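//!
//! The transfer-then-render pattern from the note above can be sketched like this (the queue and
//! command buffer names are illustrative, and everything is assumed to have been created already):
//!
//! ```no_run
//! # use std::sync::Arc;
//! # use vulkano::command_buffer::PrimaryAutoCommandBuffer;
//! # use vulkano::device::{Device, Queue};
//! use vulkano::sync::{self, GpuFuture};
//! #
//! # fn example(
//! #     device: Arc<Device>,
//! #     transfer_queue: Arc<Queue>,
//! #     render_queue: Arc<Queue>,
//! #     upload_command_buffer: Arc<PrimaryAutoCommandBuffer>,
//! #     draw_command_buffer: Arc<PrimaryAutoCommandBuffer>,
//! # ) {
//! // Submit the upload to the transfer queue right away, signalling a semaphore.
//! let upload_future = sync::now(device.clone())
//!     .then_execute(transfer_queue.clone(), upload_command_buffer)
//!     .unwrap()
//!     .then_signal_semaphore_and_flush()
//!     .unwrap();
//!
//! // The draw waits on the semaphore, so it is allowed to run on a different queue.
//! let frame_future = upload_future
//!     .then_execute(render_queue.clone(), draw_command_buffer)
//!     .unwrap()
//!     .then_signal_fence_and_flush()
//!     .unwrap();
//! # }
//! ```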
//!
//! ## Between several different GPU queues
//!
//! The `then_signal_semaphore()` method is appropriate when you perform an operation in one queue,
//! and want to see the result in another queue. However in some situations you want to start
//! multiple operations on several different queues.
//!
//! TODO: this is not yet implemented
//!
//! # Fences
//!
//! A `Fence` is an object that is used to signal the CPU when an operation on the GPU is finished.
//!
//! Signalling a fence is done by calling `then_signal_fence()` on a future. Just like with
//! semaphores, you are encouraged to use `then_signal_fence_and_flush()` instead.
//!
//! Signalling a fence acts as a "terminator" for a chain of futures.
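//!
//! A minimal sketch of waiting on a signalled fence from the CPU (it assumes that `future` was
//! obtained from `then_signal_fence_and_flush()`):
//!
//! ```no_run
//! # use std::time::Duration;
//! # use vulkano::sync::{future::FenceSignalFuture, GpuFuture};
//! # fn example(future: FenceSignalFuture<impl GpuFuture>) {
//! // Wait at most one second for the GPU to finish; `None` would wait without a timeout.
//! future.wait(Some(Duration::from_secs(1))).unwrap();
//! # }
//! ```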

pub use self::{
    fence_signal::{FenceSignalFuture, FenceSignalFutureBehavior},
    join::JoinFuture,
    now::{now, NowFuture},
    semaphore_signal::SemaphoreSignalFuture,
};
use super::{fence::Fence, semaphore::Semaphore};
use crate::{
    buffer::{Buffer, BufferState},
    command_buffer::{
        CommandBufferExecError, CommandBufferExecFuture, CommandBufferResourcesUsage,
        CommandBufferState, CommandBufferSubmitInfo, CommandBufferUsage,
        PrimaryCommandBufferAbstract, SubmitInfo,
    },
    device::{DeviceOwned, Queue},
    image::{Image, ImageLayout, ImageState},
    memory::sparse::BindSparseInfo,
    swapchain::{self, PresentFuture, PresentInfo, Swapchain, SwapchainPresentInfo},
    DeviceSize, Validated, ValidationError, VulkanError, VulkanObject,
};
use foldhash::HashMap;
use parking_lot::MutexGuard;
use smallvec::{smallvec, SmallVec};
use std::{
    error::Error,
    fmt::{Display, Error as FmtError, Formatter},
    ops::Range,
    sync::{atomic::Ordering, Arc},
};

mod fence_signal;
mod join;
mod now;
mod semaphore_signal;

/// Represents an event that will happen on the GPU in the future.
///
/// See the documentation of the `sync` module for explanations about futures.
// TODO: consider switching all methods to take `&mut self` for optimization purposes
pub unsafe trait GpuFuture: DeviceOwned {
    /// If possible, checks whether the submission has finished. If so, gives up ownership of the
    /// resources used by these submissions.
    ///
    /// It is highly recommended to call `cleanup_finished` from time to time. Doing so will
    /// prevent memory usage from increasing over time, and will also destroy the locks on
    /// resources used by the GPU.
    fn cleanup_finished(&mut self);

    /// Builds a submission that, if submitted, makes sure that the event represented by this
    /// `GpuFuture` will happen, and possibly contains extra elements (e.g. a semaphore wait or an
    /// event wait) that make the dependency with subsequent operations work.
    ///
    /// It is the responsibility of the caller to ensure that the submission is going to be
    /// submitted only once. However keep in mind that this function can perfectly well be called
    /// multiple times (as long as the returned object is only submitted once).
    /// Also note that calling `flush()` on the future may change the value returned by
    /// `build_submission()`.
    ///
    /// It is however the responsibility of the implementation not to return the same submission
    /// from multiple different future objects. For example if you implement `GpuFuture` on
    /// `Arc<Foo>` then `build_submission()` must always return `SubmitAnyBuilder::Empty`,
    /// otherwise it would be possible for the user to clone the `Arc` and make the same
    /// submission be submitted multiple times.
    ///
    /// It is also the responsibility of the implementation to ensure that it works if you call
    /// `build_submission()` and submit the returned value without calling `flush()` first. In
    /// other words, `build_submission()` should perform an implicit flush if necessary.
    ///
    /// Once the caller has submitted the submission and has determined that the GPU has finished
    /// executing it, it should call `signal_finished`. Failure to do so will incur a large runtime
    /// overhead, as the future will have to block to make sure that it is finished.
    unsafe fn build_submission(&self) -> Result<SubmitAnyBuilder, Validated<VulkanError>>;

    /// Flushes the future and submits to the GPU the actions that will permit this future to
    /// occur.
    ///
    /// The implementation must remember that it was flushed. If the function is called multiple
    /// times, only the first call must result in a flush.
    fn flush(&self) -> Result<(), Validated<VulkanError>>;

    /// Sets the future to its "complete" state, meaning that it can safely be destroyed.
    ///
    /// This must only be done if you called `build_submission()`, submitted the returned
    /// submission, and determined that it was finished.
    ///
    /// The implementation must be aware that this function can be called multiple times on the
    /// same future.
    unsafe fn signal_finished(&self);

    /// Returns the queue that triggers the event. Returns `None` if unknown or irrelevant.
    ///
    /// If this function returns `None` and `queue_change_allowed` returns `false`, then a panic
    /// is likely to occur if you use this future. This is only a problem if you implement
    /// the `GpuFuture` trait yourself for a type outside of vulkano.
    fn queue(&self) -> Option<Arc<Queue>>;

    /// Returns `true` if elements submitted after this future can be submitted to a different
    /// queue than the one returned by `queue()`.
    fn queue_change_allowed(&self) -> bool;

    /// Checks whether submitting something after this future grants access (exclusive or shared,
    /// depending on the parameter) to the given buffer on the given queue.
    ///
    /// > **Note**: Returning `Ok` means "access granted", while returning `Err` means
    /// > "don't know". Therefore returning `Err` is never unsafe.
    fn check_buffer_access(
        &self,
        buffer: &Buffer,
        range: Range<DeviceSize>,
        exclusive: bool,
        queue: &Queue,
    ) -> Result<(), AccessCheckError>;

    /// Checks whether submitting something after this future grants access (exclusive or shared,
    /// depending on the parameter) to the given image on the given queue.
    ///
    /// Implementations must ensure that the image is in the given layout. However if the `layout`
    /// is `Undefined` then the implementation should accept any actual layout.
    ///
    /// > **Note**: Returning `Ok` means "access granted", while returning `Err` means
    /// > "don't know". Therefore returning `Err` is never unsafe.
    ///
    /// > **Note**: Keep in mind that changing the layout of an image also requires exclusive
    /// > access.
    fn check_image_access(
        &self,
        image: &Image,
        range: Range<DeviceSize>,
        exclusive: bool,
        expected_layout: ImageLayout,
        queue: &Queue,
    ) -> Result<(), AccessCheckError>;

    /// Checks whether accessing a swapchain image is permitted.
    ///
    /// > **Note**: Setting `before` to `true` should skip checking the current future and always
    /// > forward the call to the future before.
    fn check_swapchain_image_acquired(
        &self,
        swapchain: &Swapchain,
        image_index: u32,
        before: bool,
    ) -> Result<(), AccessCheckError>;

    /// Joins this future with another one, representing the moment when both events have
    /// happened.
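    ///
    /// A sketch (it assumes that `future_a` and `future_b` are two existing futures on the same
    /// device):
    ///
    /// ```no_run
    /// # use vulkano::sync::GpuFuture;
    /// # fn example(future_a: impl GpuFuture, future_b: impl GpuFuture) {
    /// // `joined` represents the moment when both `future_a` and `future_b` have happened.
    /// let joined = future_a.join(future_b);
    /// let _fence_future = joined.then_signal_fence_and_flush().unwrap();
    /// # }
    /// ```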
    // TODO: handle errors
    fn join<F>(self, other: F) -> JoinFuture<Self, F>
    where
        Self: Sized,
        F: GpuFuture,
    {
        join::join(self, other)
    }

    /// Executes a command buffer after this future.
    ///
    /// > **Note**: This is just a shortcut function. The actual implementation is in the
    /// > `PrimaryCommandBufferAbstract` trait.
    fn then_execute(
        self,
        queue: Arc<Queue>,
        command_buffer: Arc<impl PrimaryCommandBufferAbstract + 'static>,
    ) -> Result<CommandBufferExecFuture<Self>, CommandBufferExecError>
    where
        Self: Sized,
    {
        command_buffer.execute_after(self, queue)
    }

    /// Executes a command buffer after this future, on the same queue as the future.
    ///
    /// > **Note**: This is just a shortcut function. The actual implementation is in the
    /// > `PrimaryCommandBufferAbstract` trait.
    fn then_execute_same_queue(
        self,
        command_buffer: Arc<impl PrimaryCommandBufferAbstract + 'static>,
    ) -> Result<CommandBufferExecFuture<Self>, CommandBufferExecError>
    where
        Self: Sized,
    {
        let queue = self.queue().unwrap();
        command_buffer.execute_after(self, queue)
    }

    /// Signals a semaphore after this future. Returns another future that represents the signal.
    ///
    /// Call this function when you want to execute some operations on a queue and want to see the
    /// result on another queue.
    #[inline]
    fn then_signal_semaphore(self) -> SemaphoreSignalFuture<Self>
    where
        Self: Sized,
    {
        semaphore_signal::then_signal_semaphore(self)
    }

    /// Signals a semaphore after this future and flushes it. Returns another future that
    /// represents the moment when the semaphore is signalled.
    ///
    /// This is just a shortcut for `then_signal_semaphore()` followed by `flush()`.
    ///
    /// When you want to execute some operations A on a queue and some operations B on another
    /// queue that need to see the results of A, it can be a good idea to submit A as soon as
    /// possible while you're preparing B.
    ///
    /// If you ran A and B on the same queue, you would have to decide between submitting A then
    /// B, or A and B simultaneously. Both approaches have their trade-offs. But if A and B are
    /// on two different queues, then you would need two submits anyway and it is always
    /// advantageous to submit A as soon as possible.
    #[inline]
    fn then_signal_semaphore_and_flush(
        self,
    ) -> Result<SemaphoreSignalFuture<Self>, Validated<VulkanError>>
    where
        Self: Sized,
    {
        let f = self.then_signal_semaphore();
        f.flush()?;

        Ok(f)
    }

    /// Signals a fence after this future. Returns another future that represents the signal.
    ///
    /// > **Note**: More often than not you want to immediately flush the future after calling this
    /// > function. If so, consider using `then_signal_fence_and_flush`.
    #[inline]
    fn then_signal_fence(self) -> FenceSignalFuture<Self>
    where
        Self: Sized,
    {
        fence_signal::then_signal_fence(self, FenceSignalFutureBehavior::Continue)
    }

    /// Signals a fence after this future. Returns another future that represents the signal.
    ///
    /// This is just a shortcut for `then_signal_fence()` followed by `flush()`.
    #[inline]
    fn then_signal_fence_and_flush(self) -> Result<FenceSignalFuture<Self>, Validated<VulkanError>>
    where
        Self: Sized,
    {
        let f = self.then_signal_fence();
        f.flush()?;

        Ok(f)
    }

    /// Presents a swapchain image after this future.
    ///
    /// You should only ever do this indirectly after a `SwapchainAcquireFuture` of the same image,
    /// otherwise an error will occur when flushing.
    ///
    /// > **Note**: This is just a shortcut for the `Swapchain::present()` function.
    #[inline]
    fn then_swapchain_present(
        self,
        queue: Arc<Queue>,
        swapchain_info: SwapchainPresentInfo,
    ) -> PresentFuture<Self>
    where
        Self: Sized,
    {
        swapchain::present(self, queue, swapchain_info)
    }

    /// Turns the current future into a `Box<dyn GpuFuture>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture>`.
    #[inline]
    fn boxed(self) -> Box<dyn GpuFuture>
    where
        Self: Sized + 'static,
    {
        Box::new(self) as _
    }

    /// Turns the current future into a `Box<dyn GpuFuture + Send>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture + Send>`.
    #[inline]
    fn boxed_send(self) -> Box<dyn GpuFuture + Send>
    where
        Self: Sized + Send + 'static,
    {
        Box::new(self) as _
    }

    /// Turns the current future into a `Box<dyn GpuFuture + Sync>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture + Sync>`.
    #[inline]
    fn boxed_sync(self) -> Box<dyn GpuFuture + Sync>
    where
        Self: Sized + Sync + 'static,
    {
        Box::new(self) as _
    }

    /// Turns the current future into a `Box<dyn GpuFuture + Send + Sync>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture + Send +
    /// Sync>`.
    #[inline]
    fn boxed_send_sync(self) -> Box<dyn GpuFuture + Send + Sync>
    where
        Self: Sized + Send + Sync + 'static,
    {
        Box::new(self) as _
    }
}

unsafe impl<F: ?Sized> GpuFuture for Box<F>
where
    F: GpuFuture,
{
    fn cleanup_finished(&mut self) {
        (**self).cleanup_finished()
    }

    unsafe fn build_submission(&self) -> Result<SubmitAnyBuilder, Validated<VulkanError>> {
        unsafe { (**self).build_submission() }
    }

    fn flush(&self) -> Result<(), Validated<VulkanError>> {
        (**self).flush()
    }

    unsafe fn signal_finished(&self) {
        unsafe { (**self).signal_finished() }
    }

    fn queue_change_allowed(&self) -> bool {
        (**self).queue_change_allowed()
    }

    fn queue(&self) -> Option<Arc<Queue>> {
        (**self).queue()
    }

    fn check_buffer_access(
        &self,
        buffer: &Buffer,
        range: Range<DeviceSize>,
        exclusive: bool,
        queue: &Queue,
    ) -> Result<(), AccessCheckError> {
        (**self).check_buffer_access(buffer, range, exclusive, queue)
    }

    fn check_image_access(
        &self,
        image: &Image,
        range: Range<DeviceSize>,
        exclusive: bool,
        expected_layout: ImageLayout,
        queue: &Queue,
    ) -> Result<(), AccessCheckError> {
        (**self).check_image_access(image, range, exclusive, expected_layout, queue)
    }

    #[inline]
    fn check_swapchain_image_acquired(
        &self,
        swapchain: &Swapchain,
        image_index: u32,
        before: bool,
    ) -> Result<(), AccessCheckError> {
        (**self).check_swapchain_image_acquired(swapchain, image_index, before)
    }
}

/// Contains all the possible submission builders.
#[derive(Debug)]
pub enum SubmitAnyBuilder {
    Empty,
    SemaphoresWait(SmallVec<[Arc<Semaphore>; 8]>),
    CommandBuffer(SubmitInfo, Option<Arc<Fence>>),
    QueuePresent(PresentInfo),
    BindSparse(SmallVec<[BindSparseInfo; 1]>, Option<Arc<Fence>>),
}

impl SubmitAnyBuilder {
    /// Returns true if equal to `SubmitAnyBuilder::Empty`.
    #[inline]
    pub fn is_empty(&self) -> bool {
        matches!(self, SubmitAnyBuilder::Empty)
    }
}

/// Access to a resource was denied.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum AccessError {
    /// The resource is already in use, and there is no tracking of concurrent usages.
    AlreadyInUse,

    /// The image was requested in a layout that differs from the one that is allowed.
    UnexpectedImageLayout {
        allowed: ImageLayout,
        requested: ImageLayout,
    },

    /// Trying to use an image without transitioning it from the "undefined" or "preinitialized"
    /// layouts first.
    ImageNotInitialized {
        /// The layout that was requested for the image.
        requested: ImageLayout,
    },

    /// Trying to use a swapchain image without depending on a corresponding acquire image future.
    SwapchainImageNotAcquired,
}

impl Error for AccessError {}

impl Display for AccessError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        let value = match self {
            AccessError::AlreadyInUse => {
                "the resource is already in use, and there is no tracking of concurrent usages"
            }
            AccessError::UnexpectedImageLayout { allowed, requested } => {
                return write!(
                    f,
                    "unexpected image layout: requested {:?}, allowed {:?}",
                    requested, allowed
                )
            }
            AccessError::ImageNotInitialized { .. } => {
                "trying to use an image without transitioning it from the undefined or \
                preinitialized layouts first"
            }
            AccessError::SwapchainImageNotAcquired => {
                "trying to use a swapchain image without depending on a corresponding acquire \
                image future"
            }
        };

        write!(f, "{}", value)
    }
}

/// Error that can happen when checking whether we have access to a resource.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum AccessCheckError {
    /// Access to the resource has been denied.
    Denied(AccessError),
    /// The resource is unknown, therefore we cannot possibly answer whether we have access or not.
    Unknown,
}

impl Error for AccessCheckError {}

impl Display for AccessCheckError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        match self {
            AccessCheckError::Denied(err) => {
                write!(f, "access to the resource has been denied: {}", err)
            }
            AccessCheckError::Unknown => write!(f, "the resource is unknown"),
        }
    }
}

impl From<AccessError> for AccessCheckError {
    fn from(err: AccessError) -> AccessCheckError {
        AccessCheckError::Denied(err)
    }
}

pub(crate) unsafe fn queue_bind_sparse(
    queue: &Arc<Queue>,
    bind_infos: impl IntoIterator<Item = BindSparseInfo>,
    fence: Option<Arc<Fence>>,
) -> Result<(), Validated<VulkanError>> {
    let bind_infos: SmallVec<[_; 4]> = bind_infos.into_iter().collect();
    queue
        .with(|mut queue_guard| unsafe { queue_guard.bind_sparse(&bind_infos, fence.as_ref()) })?;

    Ok(())
}

pub(crate) unsafe fn queue_present(
    queue: &Arc<Queue>,
    present_info: PresentInfo,
) -> Result<impl ExactSizeIterator<Item = Result<bool, VulkanError>>, Validated<VulkanError>> {
    let results: SmallVec<[_; 1]> = queue
        .with(|mut queue_guard| unsafe { queue_guard.present(&present_info) })?
        .collect();

    let PresentInfo {
        wait_semaphores: _,
        swapchain_infos: swapchains,
        _ne: _,
    } = &present_info;

    // If a presentation results in a loss of full-screen exclusive mode,
    // signal that to the relevant swapchain.
    for (&result, swapchain_info) in results.iter().zip(swapchains) {
        if result == Err(VulkanError::FullScreenExclusiveModeLost) {
            unsafe { swapchain_info.swapchain.full_screen_exclusive_held() }
                .store(false, Ordering::SeqCst);
        }
    }

    Ok(results.into_iter())
}

pub(crate) unsafe fn queue_submit(
    queue: &Arc<Queue>,
    submit_info: SubmitInfo,
    fence: Option<Arc<Fence>>,
    future: &dyn GpuFuture,
) -> Result<(), Validated<VulkanError>> {
    let submit_infos: SmallVec<[_; 4]> = smallvec![submit_info];
    let mut states = States::from_submit_infos(&submit_infos);

    for submit_info in &submit_infos {
        for command_buffer_submit_info in &submit_info.command_buffers {
            let &CommandBufferSubmitInfo {
                ref command_buffer,
                _ne: _,
            } = command_buffer_submit_info;

            let state = states
                .command_buffers
                .get(&command_buffer.handle())
                .unwrap();

            match command_buffer.usage() {
                CommandBufferUsage::OneTimeSubmit => {
                    if state.has_been_submitted() {
                        return Err(Box::new(ValidationError {
                            problem: "a command buffer, or one of the secondary \
                                command buffers it executes, was created with the \
                                `CommandBufferUsage::OneTimeSubmit` usage, but \
                                it has already been submitted in the past"
                                .into(),
                            vuids: &["VUID-vkQueueSubmit2-commandBuffer-03874"],
                            ..Default::default()
                        })
                        .into());
                    }
                }
                CommandBufferUsage::MultipleSubmit => {
                    if state.is_submit_pending() {
                        return Err(Box::new(ValidationError {
                            problem: "a command buffer, or one of the secondary \
                                command buffers it executes, was not created with the \
                                `CommandBufferUsage::SimultaneousUse` usage, but \
                                it is already in use by the device"
                                .into(),
                            vuids: &["VUID-vkQueueSubmit2-commandBuffer-03875"],
                            ..Default::default()
                        })
                        .into());
                    }
                }
                CommandBufferUsage::SimultaneousUse => (),
            }

            let CommandBufferResourcesUsage {
                buffers,
                images,
                buffer_indices: _,
                image_indices: _,
            } = command_buffer.resources_usage();

            for usage in buffers {
                let state = states.buffers.get_mut(&usage.buffer.handle()).unwrap();

                for (range, range_usage) in usage.ranges.iter() {
                    match future.check_buffer_access(
                        &usage.buffer,
                        range.clone(),
                        range_usage.mutable,
                        queue,
                    ) {
                        Err(AccessCheckError::Denied(error)) => {
                            return Err(Box::new(ValidationError {
                                problem: format!(
                                    "access to a resource has been denied \
                                    (resource use: {:?}, error: {})",
                                    range_usage.first_use, error
                                )
                                .into(),
                                ..Default::default()
                            })
                            .into());
                        }
                        Err(AccessCheckError::Unknown) => {
                            let result = if range_usage.mutable {
                                state.check_gpu_write(range.clone())
                            } else {
                                state.check_gpu_read(range.clone())
                            };

                            if let Err(error) = result {
                                return Err(Box::new(ValidationError {
                                    problem: format!(
                                        "access to a resource has been denied \
                                        (resource use: {:?}, error: {})",
                                        range_usage.first_use, error
                                    )
                                    .into(),
                                    ..Default::default()
                                })
                                .into());
                            }
                        }
                        _ => (),
                    }
                }
            }

            for usage in images {
                let state = states.images.get_mut(&usage.image.handle()).unwrap();

                for (range, range_usage) in usage.ranges.iter() {
                    match future.check_image_access(
                        &usage.image,
                        range.clone(),
                        range_usage.mutable,
                        range_usage.expected_layout,
                        queue,
                    ) {
                        Err(AccessCheckError::Denied(error)) => {
                            return Err(Box::new(ValidationError {
                                problem: format!(
                                    "access to a resource has been denied \
                                    (resource use: {:?}, error: {})",
                                    range_usage.first_use, error
                                )
                                .into(),
                                ..Default::default()
                            })
                            .into());
                        }
                        Err(AccessCheckError::Unknown) => {
                            let result = if range_usage.mutable {
                                state.check_gpu_write(range.clone(), range_usage.expected_layout)
                            } else {
                                state.check_gpu_read(range.clone(), range_usage.expected_layout)
                            };

                            if let Err(error) = result {
                                return Err(Box::new(ValidationError {
                                    problem: format!(
                                        "access to a resource has been denied \
                                        (resource use: {:?}, error: {})",
                                        range_usage.first_use, error
                                    )
                                    .into(),
                                    ..Default::default()
                                })
                                .into());
                            }
                        }
                        _ => (),
                    };
                }
            }
        }
    }

748    queue.with(|mut queue_guard| unsafe { queue_guard.submit(&submit_infos, fence.as_ref()) })?;
749
750    for submit_info in &submit_infos {
751        let SubmitInfo {
752            wait_semaphores: _,
753            command_buffers,
754            signal_semaphores: _,
755            _ne: _,
756        } = submit_info;
757
758        for command_buffer_submit_info in command_buffers {
759            let CommandBufferSubmitInfo {
760                command_buffer,
761                _ne: _,
762            } = command_buffer_submit_info;
763
764            let state = states
765                .command_buffers
766                .get_mut(&command_buffer.handle())
767                .unwrap();
768            unsafe { state.add_queue_submit() };
769
770            let CommandBufferResourcesUsage {
771                buffers,
772                images,
773                buffer_indices: _,
774                image_indices: _,
775            } = command_buffer.resources_usage();
776
777            for usage in buffers {
778                let state = states.buffers.get_mut(&usage.buffer.handle()).unwrap();
779
780                for (range, range_usage) in usage.ranges.iter() {
781                    if range_usage.mutable {
782                        unsafe { state.gpu_write_lock(range.clone()) };
783                    } else {
784                        unsafe { state.gpu_read_lock(range.clone()) };
785                    }
786                }
787            }
788
789            for usage in images {
790                let state = states.images.get_mut(&usage.image.handle()).unwrap();
791
792                for (range, range_usage) in usage.ranges.iter() {
793                    if range_usage.mutable {
794                        unsafe { state.gpu_write_lock(range.clone(), range_usage.final_layout) };
795                    } else {
796                        unsafe { state.gpu_read_lock(range.clone()) };
797                    }
798                }
799            }
800        }
801    }
802
803    Ok(())
804}
805
// This struct exists to ensure that every object gets locked exactly once.
// Without it, an object that is used by more than one command buffer in the
// submission would have its mutex locked twice, causing a deadlock.
#[derive(Debug)]
struct States<'a> {
    buffers: HashMap<ash::vk::Buffer, MutexGuard<'a, BufferState>>,
    command_buffers: HashMap<ash::vk::CommandBuffer, MutexGuard<'a, CommandBufferState>>,
    images: HashMap<ash::vk::Image, MutexGuard<'a, ImageState>>,
}

impl<'a> States<'a> {
    fn from_submit_infos(submit_infos: &'a [SubmitInfo]) -> Self {
        let mut buffers = HashMap::default();
        let mut command_buffers = HashMap::default();
        let mut images = HashMap::default();

        for submit_info in submit_infos {
            let SubmitInfo {
                wait_semaphores: _,
                command_buffers: info_command_buffers,
                signal_semaphores: _,
                _ne: _,
            } = submit_info;

            for command_buffer_submit_info in info_command_buffers {
                let &CommandBufferSubmitInfo {
                    ref command_buffer,
                    _ne: _,
                } = command_buffer_submit_info;

                command_buffers
                    .entry(command_buffer.handle())
                    .or_insert_with(|| command_buffer.state());

                let CommandBufferResourcesUsage {
                    buffers: buffers_usage,
                    images: images_usage,
                    buffer_indices: _,
                    image_indices: _,
                } = command_buffer.resources_usage();

                for usage in buffers_usage {
                    let buffer = &usage.buffer;
                    buffers
                        .entry(buffer.handle())
                        .or_insert_with(|| buffer.state());
                }

                for usage in images_usage {
                    let image = &usage.image;
                    images
                        .entry(image.handle())
                        .or_insert_with(|| image.state());
                }
            }
        }

        Self {
            buffers,
            command_buffers,
            images,
        }
    }
}