// wgpu_core/hub.rs

/*! Allocating resource ids, and tracking the resources they refer to.

The `wgpu_core` API uses identifiers of type [`Id<R>`] to refer to
resources of type `R`. For example, [`id::DeviceId`] is an alias for
`Id<Device<Empty>>`, and [`id::BufferId`] is an alias for
`Id<Buffer<Empty>>`. `Id` implements `Copy`, `Hash`, `Eq`, `Ord`, and
of course `Debug`.

Each `Id` contains not only an index for the resource it denotes but
also a [`Backend`] indicating which `wgpu` backend it belongs to. You
can use the [`gfx_select`] macro to dynamically dispatch on an id's
backend to a function specialized at compile time for a specific
backend. See that macro's documentation for details.
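
For example, a dynamically dispatched call looks roughly like this (a
sketch; `global`, `device_id`, `desc`, and `id_in` are illustrative
names, and `device_create_buffer` is discussed below):

```ignore
gfx_select!(device_id => global.device_create_buffer(device_id, &desc, id_in))
```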

`Id`s also incorporate a generation number, for additional validation.

The resources to which identifiers refer are freed explicitly.
Attempting to use an identifier for a resource that has been freed
elicits an error result.

## Assigning ids to resources

The users of `wgpu_core` generally want resource ids to be assigned
in one of two ways:

- Users like `wgpu` want `wgpu_core` to assign ids to resources itself.
  For example, `wgpu` expects to call `Global::device_create_buffer`
  and have the return value indicate the newly created buffer's id.

- Users like `player` and Firefox want to allocate ids themselves, and
  pass `Global::device_create_buffer` and friends the id to assign to
  the new resource.

To accommodate either pattern, `wgpu_core` methods that create
resources all expect an `id_in` argument that the caller can use to
specify the id, and they all return the id used. For example, the
declaration of `Global::device_create_buffer` looks like this:

```ignore
impl<G: GlobalIdentityHandlerFactory> Global<G> {
    /* ... */
    pub fn device_create_buffer<A: HalApi>(
        &self,
        device_id: id::DeviceId,
        desc: &resource::BufferDescriptor,
        id_in: Input<G, id::BufferId>,
    ) -> (id::BufferId, Option<resource::CreateBufferError>) {
        /* ... */
    }
    /* ... */
}
```

Users that want to assign resource ids themselves pass in the id they
want as the `id_in` argument, whereas users that want `wgpu_core`
itself to choose ids always pass `()`. In either case, the id
ultimately assigned is returned as the first element of the tuple.
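
Concretely, the two styles of call look roughly like this (a sketch;
`desc` and `chosen_id` are illustrative, and the two calls assume
`Global`s with different `G` type parameters):

```ignore
// A `G` whose `Input` type is `()`, as in `wgpu`: `wgpu_core` picks the id.
let (buffer_id, error) = global.device_create_buffer::<A>(device_id, &desc, ());

// A pass-through `G`, as in `player`: the caller picks the id.
let (buffer_id, error) = global.device_create_buffer::<A>(device_id, &desc, chosen_id);
```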

Producing true identifiers from `id_in` values is the job of an
[`IdentityHandler`] implementation, which has an associated type
[`Input`] saying what type of `id_in` values it accepts, and a
[`process`] method that turns such values into true identifiers of
type `I`. There are two kinds of `IdentityHandler`s:

- Users that want `wgpu_core` to assign ids generally use
  [`IdentityManager`] ([wrapped in a mutex]). Its `Input` type is
  `()`, and it tracks assigned ids and generation numbers as
  necessary. (This is what `wgpu` does.)

- Users that want to assign ids themselves use an `IdentityHandler`
  whose `Input` type is `I` itself, and whose `process` method simply
  passes the `id_in` argument through unchanged. For example, the
  `player` crate uses an `IdentityPassThrough` type whose `process`
  method simply adjusts the id's backend (since recordings can be
  replayed on a different backend than the one they were created on)
  but passes the rest of the id's content through unchanged.
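
In outline, [`IdentityHandler`] looks roughly like this (simplified;
see its actual definition for details):

```ignore
pub trait IdentityHandler<I>: Debug {
    /// The type of the `id_in` argument that callers pass in.
    type Input: Clone + Debug;
    /// Turn an `id_in` value into a true id.
    fn process(&self, id: Self::Input, backend: Backend) -> I;
    /// Return `id` to the handler for possible reuse.
    fn free(&self, id: I);
}
```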

Because an `IdentityHandler<I>` can only create ids for a single
resource type `I`, constructing a [`Global`] entails constructing a
separate `IdentityHandler<I>` for each resource type `I` that the
`Global` will manage: an `IdentityHandler<DeviceId>`, an
`IdentityHandler<TextureId>`, and so on.

The [`Global::new`] function could simply take a large collection of
`IdentityHandler<I>` implementations as arguments, but that would be
ungainly. Instead, `Global::new` expects a `factory` argument that
implements the [`GlobalIdentityHandlerFactory`] trait, which extends
[`IdentityHandlerFactory<I>`] for each resource id type `I`. This
trait, in turn, has a `spawn` method that constructs an
`IdentityHandler<I>` for the `Global` to use.
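
In outline (again simplified from the actual definitions):

```ignore
pub trait IdentityHandlerFactory<I> {
    /// The kind of `IdentityHandler<I>` this factory constructs.
    type Filter: IdentityHandler<I>;
    /// Construct an identity handler for ids of type `I`.
    fn spawn(&self) -> Self::Filter;
}
```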

What this means is that the types of resource creation functions'
`id_in` arguments depend on the `Global`'s `G` type parameter. A
`Global<G>`'s `IdentityHandler<I>` implementation is:

```ignore
<G as IdentityHandlerFactory<I>>::Filter
```

where `Filter` is an associated type of the `IdentityHandlerFactory` trait.
Thus, its `id_in` type is:

```ignore
<<G as IdentityHandlerFactory<I>>::Filter as IdentityHandler<I>>::Input
```

The [`Input<G, I>`] type is an alias for this construction.
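
That is, its definition is equivalent to:

```ignore
pub type Input<G, I> =
    <<G as IdentityHandlerFactory<I>>::Filter as IdentityHandler<I>>::Input;
```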

## Id allocation and streaming

Perhaps surprisingly, allowing users to assign resource ids themselves
enables major performance improvements in some applications.

The `wgpu_core` API is designed for use by Firefox's [WebGPU]
implementation. For security, web content and GPU use must be kept
segregated in separate processes, with all interaction between them
mediated by an inter-process communication protocol. As web content uses
the WebGPU API, the content process sends messages to the GPU process,
which interacts with the platform's GPU APIs on content's behalf,
occasionally sending results back.

In a classic Rust API, a resource allocation function takes parameters
describing the resource to create, and if creation succeeds, it returns
the resource id in a `Result::Ok` value. However, this design is a poor
fit for the split-process design described above: content must wait for
the reply to its buffer-creation message (say) before it can know which
id it can use in the next message that uses that buffer. In a common
usage pattern, the classic Rust design imposes the latency of a full
cross-process round trip.

We can avoid incurring these round-trip latencies simply by letting the
content process assign resource ids itself. With this approach, content
can choose an id for the new buffer, send a message to create the
buffer, and then immediately send the next message operating on that
buffer, since it already knows its id. Allowing content and GPU process
activity to be pipelined greatly improves throughput.
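
In pseudocode, the content-process side of such an exchange might look
like this (a sketch; the message and allocator names are hypothetical):

```ignore
let buffer_id = id_allocator.allocate();              // no round trip needed
send(Message::CreateBuffer { id: buffer_id, desc });  // fire and forget
send(Message::WriteBuffer { id: buffer_id, data });   // pipelined immediately
```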

To help propagate errors correctly in this style of usage, when resource
creation fails, the id supplied for that resource is marked to indicate
as much, allowing subsequent operations using that id to be properly
flagged as errors as well.

[`Backend`]: wgt::Backend
[`Global`]: crate::global::Global
[`Global::new`]: crate::global::Global::new
[`gfx_select`]: crate::gfx_select
[`IdentityHandler`]: crate::identity::IdentityHandler
[`Input`]: crate::identity::IdentityHandler::Input
[`process`]: crate::identity::IdentityHandler::process
[`Id<R>`]: crate::id::Id
[wrapped in a mutex]: ../identity/trait.IdentityHandler.html#impl-IdentityHandler%3CI%3E-for-Mutex%3CIdentityManager%3E
[WebGPU]: https://www.w3.org/TR/webgpu/
[`IdentityManager`]: crate::identity::IdentityManager
[`Input<G, I>`]: crate::identity::Input
[`IdentityHandlerFactory<I>`]: crate::identity::IdentityHandlerFactory
*/

use crate::{
    binding_model::{BindGroup, BindGroupLayout, PipelineLayout},
    command::{CommandBuffer, RenderBundle},
    device::Device,
    hal_api::HalApi,
    id,
    identity::GlobalIdentityHandlerFactory,
    instance::{Adapter, HalSurface, Instance, Surface},
    pipeline::{ComputePipeline, RenderPipeline, ShaderModule},
    registry::Registry,
    resource::{Buffer, QuerySet, Sampler, StagingBuffer, Texture, TextureClearMode, TextureView},
    storage::{Element, Storage, StorageReport},
};

use wgt::{strict_assert_eq, strict_assert_ne};

#[cfg(any(debug_assertions, feature = "strict_asserts"))]
use std::cell::Cell;
use std::{fmt::Debug, marker::PhantomData};

/// Type system for enforcing the lock order on [`Hub`] fields.
///
/// If type `A` implements `Access<B>`, that means we are allowed to
/// proceed with locking resource `B` after we lock `A`.
///
/// The implementations of `Access` basically describe the edges in an
/// acyclic directed graph of lock transitions. As long as it doesn't have
/// cycles, any number of threads can acquire locks along paths through
/// the graph without deadlock. That is, if you look at each thread's
/// lock acquisitions as steps along a path in the graph, then because
/// there are no cycles in the graph, there must always be some thread
/// that is able to acquire its next lock, or that is about to release
/// a lock. (Assume that no thread just sits on its locks forever.)
///
/// Locks must be acquired in the following order:
///
/// - [`Adapter`]
/// - [`Device`]
/// - [`CommandBuffer`]
/// - [`RenderBundle`]
/// - [`PipelineLayout`]
/// - [`BindGroupLayout`]
/// - [`BindGroup`]
/// - [`ComputePipeline`]
/// - [`RenderPipeline`]
/// - [`ShaderModule`]
/// - [`Buffer`]
/// - [`StagingBuffer`]
/// - [`Texture`]
/// - [`TextureView`]
/// - [`Sampler`]
/// - [`QuerySet`]
///
/// That is, you may only acquire a new lock on a `Hub` field if it
/// appears in the list after all the other fields you're already
/// holding locks for. When you are holding no locks, you can start
/// anywhere.
///
/// It's fine to add more `Access` implementations as needed, as long
/// as you do not introduce a cycle. In other words, as long as there
/// is some ordering you can put the resource types in that respects
/// the extant `Access` implementations, that's fine.
///
/// See the documentation for [`Hub`] for more details.
pub trait Access<A> {}

pub enum Root {}

// These impls are arranged so that the target types (that is, the `T`
// in `Access<T>`) appear in locking order.
//
// TODO: establish an order instead of declaring all the pairs.
impl Access<Instance> for Root {}
impl Access<Surface> for Root {}
impl Access<Surface> for Instance {}
impl<A: HalApi> Access<Adapter<A>> for Root {}
impl<A: HalApi> Access<Adapter<A>> for Surface {}
impl<A: HalApi> Access<Device<A>> for Root {}
impl<A: HalApi> Access<Device<A>> for Surface {}
impl<A: HalApi> Access<Device<A>> for Adapter<A> {}
impl<A: HalApi> Access<CommandBuffer<A>> for Root {}
impl<A: HalApi> Access<CommandBuffer<A>> for Device<A> {}
impl<A: HalApi> Access<RenderBundle<A>> for Device<A> {}
impl<A: HalApi> Access<RenderBundle<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<PipelineLayout<A>> for Root {}
impl<A: HalApi> Access<PipelineLayout<A>> for Device<A> {}
impl<A: HalApi> Access<PipelineLayout<A>> for RenderBundle<A> {}
impl<A: HalApi> Access<BindGroupLayout<A>> for Root {}
impl<A: HalApi> Access<BindGroupLayout<A>> for Device<A> {}
impl<A: HalApi> Access<BindGroupLayout<A>> for PipelineLayout<A> {}
impl<A: HalApi> Access<BindGroupLayout<A>> for QuerySet<A> {}
impl<A: HalApi> Access<BindGroup<A>> for Root {}
impl<A: HalApi> Access<BindGroup<A>> for Device<A> {}
impl<A: HalApi> Access<BindGroup<A>> for BindGroupLayout<A> {}
impl<A: HalApi> Access<BindGroup<A>> for PipelineLayout<A> {}
impl<A: HalApi> Access<BindGroup<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<ComputePipeline<A>> for Device<A> {}
impl<A: HalApi> Access<ComputePipeline<A>> for BindGroup<A> {}
impl<A: HalApi> Access<RenderPipeline<A>> for Device<A> {}
impl<A: HalApi> Access<RenderPipeline<A>> for BindGroup<A> {}
impl<A: HalApi> Access<RenderPipeline<A>> for ComputePipeline<A> {}
impl<A: HalApi> Access<ShaderModule<A>> for Device<A> {}
impl<A: HalApi> Access<ShaderModule<A>> for BindGroupLayout<A> {}
impl<A: HalApi> Access<Buffer<A>> for Root {}
impl<A: HalApi> Access<Buffer<A>> for Device<A> {}
impl<A: HalApi> Access<Buffer<A>> for BindGroupLayout<A> {}
impl<A: HalApi> Access<Buffer<A>> for BindGroup<A> {}
impl<A: HalApi> Access<Buffer<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<Buffer<A>> for ComputePipeline<A> {}
impl<A: HalApi> Access<Buffer<A>> for RenderPipeline<A> {}
impl<A: HalApi> Access<Buffer<A>> for QuerySet<A> {}
impl<A: HalApi> Access<StagingBuffer<A>> for Device<A> {}
impl<A: HalApi> Access<Texture<A>> for Root {}
impl<A: HalApi> Access<Texture<A>> for Device<A> {}
impl<A: HalApi> Access<Texture<A>> for Buffer<A> {}
impl<A: HalApi> Access<TextureView<A>> for Root {}
impl<A: HalApi> Access<TextureView<A>> for Device<A> {}
impl<A: HalApi> Access<TextureView<A>> for Texture<A> {}
impl<A: HalApi> Access<Sampler<A>> for Root {}
impl<A: HalApi> Access<Sampler<A>> for Device<A> {}
impl<A: HalApi> Access<Sampler<A>> for TextureView<A> {}
impl<A: HalApi> Access<QuerySet<A>> for Root {}
impl<A: HalApi> Access<QuerySet<A>> for Device<A> {}
impl<A: HalApi> Access<QuerySet<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<QuerySet<A>> for RenderPipeline<A> {}
impl<A: HalApi> Access<QuerySet<A>> for ComputePipeline<A> {}
impl<A: HalApi> Access<QuerySet<A>> for Sampler<A> {}

#[cfg(any(debug_assertions, feature = "strict_asserts"))]
thread_local! {
    /// Per-thread state checking `Token<Root>` creation in debug builds.
    ///
    /// This is the number of `Token` values alive on the current
    /// thread. Since `Token` creation respects the [`Access`] graph,
    /// there can never be more tokens alive than there are fields of
    /// [`Hub`], so a `u8` is plenty.
    static ACTIVE_TOKEN: Cell<u8> = Cell::new(0);
}

/// A zero-size permission token to lock some fields of [`Hub`].
///
/// Access to a `Token<T>` grants permission to lock any field of
/// [`Hub`] following the one of type [`Registry<T, ...>`], where
/// "following" is as defined by the [`Access`] implementations.
///
/// Calling [`Token::root()`] returns a `Token<Root>`, which grants
/// permission to lock any field. Dynamic checks ensure that each
/// thread has at most one `Token<Root>` live at a time, in debug
/// builds.
///
/// The locking methods on `Registry<T, ...>` take a `&'t mut
/// Token<A>`, and return a fresh `Token<'t, T>` and a lock guard with
/// lifetime `'t`, so the caller cannot access their `Token<A>` again
/// until they have dropped both the `Token<T>` and the lock guard.
///
/// Tokens are `!Send`, so one thread can't send its permissions to
/// another.
pub(crate) struct Token<'a, T: 'a> {
    // The `*const` makes us `!Send` and `!Sync`.
    level: PhantomData<&'a *const T>,
}

impl<'a, T> Token<'a, T> {
    /// Return a new token for a locked field.
    ///
    /// This should only be used by `Registry` locking methods.
    pub(crate) fn new() -> Self {
        #[cfg(any(debug_assertions, feature = "strict_asserts"))]
        ACTIVE_TOKEN.with(|active| {
            let old = active.get();
            strict_assert_ne!(old, 0, "Root token was dropped");
            active.set(old + 1);
        });
        Self { level: PhantomData }
    }
}

impl Token<'static, Root> {
    /// Return a `Token<Root>`, granting permission to lock any [`Hub`] field.
    ///
    /// Debug builds check dynamically that each thread has at most
    /// one root token at a time.
    pub fn root() -> Self {
        #[cfg(any(debug_assertions, feature = "strict_asserts"))]
        ACTIVE_TOKEN.with(|active| {
            strict_assert_eq!(0, active.replace(1), "Root token is already active");
        });

        Self { level: PhantomData }
    }
}

impl<'a, T> Drop for Token<'a, T> {
    fn drop(&mut self) {
        #[cfg(any(debug_assertions, feature = "strict_asserts"))]
        ACTIVE_TOKEN.with(|active| {
            let old = active.get();
            active.set(old - 1);
        });
    }
}

#[derive(Debug)]
pub struct HubReport {
    pub adapters: StorageReport,
    pub devices: StorageReport,
    pub pipeline_layouts: StorageReport,
    pub shader_modules: StorageReport,
    pub bind_group_layouts: StorageReport,
    pub bind_groups: StorageReport,
    pub command_buffers: StorageReport,
    pub render_bundles: StorageReport,
    pub render_pipelines: StorageReport,
    pub compute_pipelines: StorageReport,
    pub query_sets: StorageReport,
    pub buffers: StorageReport,
    pub textures: StorageReport,
    pub texture_views: StorageReport,
    pub samplers: StorageReport,
}

impl HubReport {
    pub fn is_empty(&self) -> bool {
        self.adapters.is_empty()
    }
}

#[allow(rustdoc::private_intra_doc_links)]
/// All the resources for a particular backend in a [`Global`].
///
/// To obtain `global`'s `Hub` for some [`HalApi`] backend type `A`,
/// call [`A::hub(global)`].
///
/// ## Locking
///
/// Each field in `Hub` is a [`Registry`] holding all the values of a
/// particular type of resource, all protected by a single [`RwLock`].
/// So for example, to access any [`Buffer`], you must acquire a read
/// lock on the `Hub`'s entire [`buffers`] registry. The lock guard
/// gives you access to the `Registry`'s [`Storage`], which you can
/// then index with the buffer's id. (Yes, this design causes
/// contention; see [#2272].)
///
/// But most `wgpu` operations require access to several different
/// kinds of resource, so you often need to hold locks on several
/// different fields of your [`Hub`] simultaneously. To avoid
/// deadlock, there is an ordering imposed on the fields, and you may
/// only acquire new locks on fields that come *after* all those you
/// are already holding locks on, in this ordering. (The ordering is
/// described in the documentation for the [`Access`] trait.)
///
/// We use Rust's type system to statically check that `wgpu_core` can
/// only ever acquire locks in the correct order:
///
/// - A value of type [`Token<T>`] represents proof that the owner
///   only holds locks on the `Hub` fields holding resources of type
///   `T` or earlier in the lock ordering. A special value of type
///   `Token<Root>`, obtained by calling [`Token::root`], represents
///   proof that no `Hub` field locks are held.
///
/// - To lock the `Hub` field holding resources of type `T`, you must
///   call its [`read`] or [`write`] methods. These require you to
///   pass in a `&mut Token<A>`, for some `A` that implements
///   [`Access<T>`]. This implementation exists only if `T` follows `A`
///   in the field ordering, which statically ensures that you are
///   indeed allowed to lock this new `Hub` field.
///
/// - The locking methods return both an [`RwLock`] guard that you can
///   use to access the field's resources, and a new `Token<T>` value.
///   These both borrow from the lifetime of your `Token<A>`, so since
///   you passed that by mutable reference, you cannot access it again
///   until you drop the new token and lock guard.
///
/// Because a thread only ever has access to the `Token<T>` for the
/// last resource type `T` it holds a lock for, and the `Access` trait
/// implementations only permit acquiring locks for types `U` that
/// follow `T` in the lock ordering, it is statically impossible for a
/// program to violate the locking order.
///
/// This does assume that threads cannot call [`Token::root`] when they
/// already hold locks (dynamically enforced in debug builds) and that
/// threads cannot send their `Token`s to other threads (enforced by
/// making `Token` neither `Send` nor `Sync`).
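///
/// For example, a use that follows the rules looks roughly like this
/// (a sketch; see [`read`] and [`write`] for the real signatures):
///
/// ```ignore
/// let mut token = Token::root();
/// let (devices, mut token) = hub.devices.read(&mut token);
/// let (buffers, _token) = hub.buffers.read(&mut token);
/// // Locking `hub.adapters` here would not compile: `Adapter` precedes
/// // `Buffer` in the lock order, so `Buffer<A>` has no
/// // `Access<Adapter<A>>` implementation.
/// ```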
///
/// [`Global`]: crate::global::Global
/// [`A::hub(global)`]: HalApi::hub
/// [`RwLock`]: parking_lot::RwLock
/// [`buffers`]: Hub::buffers
/// [`read`]: Registry::read
/// [`write`]: Registry::write
/// [`Token<T>`]: Token
/// [`Access<T>`]: Access
/// [#2272]: https://github.com/gfx-rs/wgpu/pull/2272
pub struct Hub<A: HalApi, F: GlobalIdentityHandlerFactory> {
    pub adapters: Registry<Adapter<A>, id::AdapterId, F>,
    pub devices: Registry<Device<A>, id::DeviceId, F>,
    pub pipeline_layouts: Registry<PipelineLayout<A>, id::PipelineLayoutId, F>,
    pub shader_modules: Registry<ShaderModule<A>, id::ShaderModuleId, F>,
    pub bind_group_layouts: Registry<BindGroupLayout<A>, id::BindGroupLayoutId, F>,
    pub bind_groups: Registry<BindGroup<A>, id::BindGroupId, F>,
    pub command_buffers: Registry<CommandBuffer<A>, id::CommandBufferId, F>,
    pub render_bundles: Registry<RenderBundle<A>, id::RenderBundleId, F>,
    pub render_pipelines: Registry<RenderPipeline<A>, id::RenderPipelineId, F>,
    pub compute_pipelines: Registry<ComputePipeline<A>, id::ComputePipelineId, F>,
    pub query_sets: Registry<QuerySet<A>, id::QuerySetId, F>,
    pub buffers: Registry<Buffer<A>, id::BufferId, F>,
    pub staging_buffers: Registry<StagingBuffer<A>, id::StagingBufferId, F>,
    pub textures: Registry<Texture<A>, id::TextureId, F>,
    pub texture_views: Registry<TextureView<A>, id::TextureViewId, F>,
    pub samplers: Registry<Sampler<A>, id::SamplerId, F>,
}

impl<A: HalApi, F: GlobalIdentityHandlerFactory> Hub<A, F> {
    fn new(factory: &F) -> Self {
        Self {
            adapters: Registry::new(A::VARIANT, factory),
            devices: Registry::new(A::VARIANT, factory),
            pipeline_layouts: Registry::new(A::VARIANT, factory),
            shader_modules: Registry::new(A::VARIANT, factory),
            bind_group_layouts: Registry::new(A::VARIANT, factory),
            bind_groups: Registry::new(A::VARIANT, factory),
            command_buffers: Registry::new(A::VARIANT, factory),
            render_bundles: Registry::new(A::VARIANT, factory),
            render_pipelines: Registry::new(A::VARIANT, factory),
            compute_pipelines: Registry::new(A::VARIANT, factory),
            query_sets: Registry::new(A::VARIANT, factory),
            buffers: Registry::new(A::VARIANT, factory),
            staging_buffers: Registry::new(A::VARIANT, factory),
            textures: Registry::new(A::VARIANT, factory),
            texture_views: Registry::new(A::VARIANT, factory),
            samplers: Registry::new(A::VARIANT, factory),
        }
    }

    //TODO: instead of having a hacky `with_adapters` parameter,
    // we should have `clear_device(device_id)` that specifically destroys
    // everything related to a logical device.
    pub(crate) fn clear(
        &self,
        surface_guard: &mut Storage<Surface, id::SurfaceId>,
        with_adapters: bool,
    ) {
        use crate::resource::TextureInner;
        use hal::{Device as _, Surface as _};

        let mut devices = self.devices.data.write();
        for element in devices.map.iter_mut() {
            if let Element::Occupied(ref mut device, _) = *element {
                device.prepare_to_die();
            }
        }

        // destroy command buffers first, since otherwise DX12 isn't happy
        for element in self.command_buffers.data.write().map.drain(..) {
            if let Element::Occupied(command_buffer, _) = element {
                let device = &devices[command_buffer.device_id.value];
                device.destroy_command_buffer(command_buffer);
            }
        }

        for element in self.samplers.data.write().map.drain(..) {
            if let Element::Occupied(sampler, _) = element {
                unsafe {
                    devices[sampler.device_id.value]
                        .raw
                        .destroy_sampler(sampler.raw);
                }
            }
        }

        for element in self.texture_views.data.write().map.drain(..) {
            if let Element::Occupied(texture_view, _) = element {
                let device = &devices[texture_view.device_id.value];
                unsafe {
                    device.raw.destroy_texture_view(texture_view.raw);
                }
            }
        }

        for element in self.textures.data.write().map.drain(..) {
            if let Element::Occupied(texture, _) = element {
                let device = &devices[texture.device_id.value];
                if let TextureInner::Native { raw: Some(raw) } = texture.inner {
                    unsafe {
                        device.raw.destroy_texture(raw);
                    }
                }
                if let TextureClearMode::RenderPass { clear_views, .. } = texture.clear_mode {
                    for view in clear_views {
                        unsafe {
                            device.raw.destroy_texture_view(view);
                        }
                    }
                }
            }
        }
        for element in self.buffers.data.write().map.drain(..) {
            if let Element::Occupied(buffer, _) = element {
                //TODO: unmap if needed
                devices[buffer.device_id.value].destroy_buffer(buffer);
            }
        }
        for element in self.bind_groups.data.write().map.drain(..) {
            if let Element::Occupied(bind_group, _) = element {
                let device = &devices[bind_group.device_id.value];
                unsafe {
                    device.raw.destroy_bind_group(bind_group.raw);
                }
            }
        }

        for element in self.shader_modules.data.write().map.drain(..) {
            if let Element::Occupied(module, _) = element {
                let device = &devices[module.device_id.value];
                unsafe {
                    device.raw.destroy_shader_module(module.raw);
                }
            }
        }
        for element in self.bind_group_layouts.data.write().map.drain(..) {
            if let Element::Occupied(bgl, _) = element {
                let device = &devices[bgl.device_id.value];
                if let Some(inner) = bgl.into_inner() {
                    unsafe {
                        device.raw.destroy_bind_group_layout(inner.raw);
                    }
                }
            }
        }
        for element in self.pipeline_layouts.data.write().map.drain(..) {
            if let Element::Occupied(pipeline_layout, _) = element {
                let device = &devices[pipeline_layout.device_id.value];
                unsafe {
                    device.raw.destroy_pipeline_layout(pipeline_layout.raw);
                }
            }
        }
        for element in self.compute_pipelines.data.write().map.drain(..) {
            if let Element::Occupied(pipeline, _) = element {
                let device = &devices[pipeline.device_id.value];
                unsafe {
                    device.raw.destroy_compute_pipeline(pipeline.raw);
                }
            }
        }
        for element in self.render_pipelines.data.write().map.drain(..) {
            if let Element::Occupied(pipeline, _) = element {
                let device = &devices[pipeline.device_id.value];
                unsafe {
                    device.raw.destroy_render_pipeline(pipeline.raw);
                }
            }
        }

        for element in surface_guard.map.iter_mut() {
            if let Element::Occupied(ref mut surface, _epoch) = *element {
                if surface
                    .presentation
                    .as_ref()
                    .map_or(wgt::Backend::Empty, |p| p.backend())
                    != A::VARIANT
                {
                    continue;
                }
                if let Some(present) = surface.presentation.take() {
                    let device = &devices[present.device_id.value];
                    let suf = A::get_surface_mut(surface);
                    unsafe {
                        suf.unwrap().raw.unconfigure(&device.raw);
                        //TODO: we could destroy the surface here
                    }
                }
            }
        }

        for element in self.query_sets.data.write().map.drain(..) {
            if let Element::Occupied(query_set, _) = element {
                let device = &devices[query_set.device_id.value];
                unsafe {
                    device.raw.destroy_query_set(query_set.raw);
                }
            }
        }

        for element in devices.map.drain(..) {
            if let Element::Occupied(device, _) = element {
                device.dispose();
            }
        }

        if with_adapters {
            drop(devices);
            self.adapters.data.write().map.clear();
        }
    }

    pub(crate) fn surface_unconfigure(
        &self,
        device_id: id::Valid<id::DeviceId>,
        surface: &mut HalSurface<A>,
    ) {
        use hal::Surface as _;

        let devices = self.devices.data.read();
        let device = &devices[device_id];
        unsafe {
            surface.raw.unconfigure(&device.raw);
        }
    }

    pub fn generate_report(&self) -> HubReport {
        HubReport {
            adapters: self.adapters.data.read().generate_report(),
            devices: self.devices.data.read().generate_report(),
            pipeline_layouts: self.pipeline_layouts.data.read().generate_report(),
            shader_modules: self.shader_modules.data.read().generate_report(),
            bind_group_layouts: self.bind_group_layouts.data.read().generate_report(),
            bind_groups: self.bind_groups.data.read().generate_report(),
            command_buffers: self.command_buffers.data.read().generate_report(),
            render_bundles: self.render_bundles.data.read().generate_report(),
            render_pipelines: self.render_pipelines.data.read().generate_report(),
            compute_pipelines: self.compute_pipelines.data.read().generate_report(),
            query_sets: self.query_sets.data.read().generate_report(),
            buffers: self.buffers.data.read().generate_report(),
            textures: self.textures.data.read().generate_report(),
            texture_views: self.texture_views.data.read().generate_report(),
            samplers: self.samplers.data.read().generate_report(),
        }
    }
}

pub struct Hubs<F: GlobalIdentityHandlerFactory> {
    #[cfg(all(feature = "vulkan", not(target_arch = "wasm32")))]
    pub(crate) vulkan: Hub<hal::api::Vulkan, F>,
    #[cfg(all(feature = "metal", any(target_os = "macos", target_os = "ios")))]
    pub(crate) metal: Hub<hal::api::Metal, F>,
    #[cfg(all(feature = "dx12", windows))]
    pub(crate) dx12: Hub<hal::api::Dx12, F>,
    #[cfg(all(feature = "dx11", windows))]
    pub(crate) dx11: Hub<hal::api::Dx11, F>,
    #[cfg(feature = "gles")]
    pub(crate) gl: Hub<hal::api::Gles, F>,
    #[cfg(all(
        not(all(feature = "vulkan", not(target_arch = "wasm32"))),
        not(all(feature = "metal", any(target_os = "macos", target_os = "ios"))),
        not(all(feature = "dx12", windows)),
        not(all(feature = "dx11", windows)),
        not(feature = "gles"),
    ))]
    pub(crate) empty: Hub<hal::api::Empty, F>,
}

impl<F: GlobalIdentityHandlerFactory> Hubs<F> {
    pub(crate) fn new(factory: &F) -> Self {
        Self {
            #[cfg(all(feature = "vulkan", not(target_arch = "wasm32")))]
            vulkan: Hub::new(factory),
            #[cfg(all(feature = "metal", any(target_os = "macos", target_os = "ios")))]
            metal: Hub::new(factory),
            #[cfg(all(feature = "dx12", windows))]
            dx12: Hub::new(factory),
            #[cfg(all(feature = "dx11", windows))]
            dx11: Hub::new(factory),
            #[cfg(feature = "gles")]
            gl: Hub::new(factory),
            #[cfg(all(
                not(all(feature = "vulkan", not(target_arch = "wasm32"))),
                not(all(feature = "metal", any(target_os = "macos", target_os = "ios"))),
                not(all(feature = "dx12", windows)),
                not(all(feature = "dx11", windows)),
                not(feature = "gles"),
            ))]
            empty: Hub::new(factory),
        }
    }
}