wgpu_hal/lib.rs
//! A cross-platform unsafe graphics abstraction.
//!
//! This crate defines a set of traits abstracting over modern graphics APIs,
//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
//!
//! `wgpu-hal` is a spiritual successor to
//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
//! oriented towards WebGPU implementation goals. It has no overhead for
//! validation or tracking, and the API translation overhead is kept to the bare
//! minimum by the design of WebGPU. This API can be used for resource-demanding
//! applications and engines.
//!
//! The `wgpu-hal` crate's main design choices:
//!
//! - Our traits are meant to be *portable*: proper use
//!   should get equivalent results regardless of the backend.
//!
//! - Our traits' contracts are *unsafe*: implementations perform minimal
//!   validation, if any, and incorrect use will often cause undefined behavior.
//!   This allows us to minimize the overhead we impose over the underlying
//!   graphics system. If you need safety, the [`wgpu-core`] crate provides a
//!   safe API for driving `wgpu-hal`, implementing all necessary validation,
//!   resource state tracking, and so on. (Note that `wgpu-core` is designed for
//!   use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
//!   `wgpu-core`.) Or, you can do your own validation.
//!
//! - In the same vein, returned errors *only cover cases the user can't
//!   anticipate*, like running out of memory or losing the device. Any errors
//!   that the user could reasonably anticipate are their responsibility to
//!   avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
//!   not mappable: as the buffer creator, the user should already know if they
//!   can map it.
//!
//! - We use *static dispatch*. The traits are not
//!   generally object-safe. You must select a specific backend type
//!   like [`vulkan::Api`] or [`metal::Api`], and then use that
//!   according to the main traits, or call backend-specific methods.
//!
//! - We use *idiomatic Rust parameter passing*,
//!   taking objects by reference, returning them by value, and so on,
//!   unlike `wgpu-core`, which refers to objects by ID.
//!
//! - We map buffer contents *persistently*. This means that the buffer can
//!   remain mapped on the CPU while the GPU reads or writes to it. You must
//!   explicitly indicate when data might need to be transferred between CPU and
//!   GPU, if [`Device::map_buffer`] indicates that this is necessary.
//!
//! - You must record *explicit barriers* between different usages of a
//!   resource. For example, if a buffer is written to by a compute
//!   shader, and then used as an index buffer in a draw call, you
//!   must use [`CommandEncoder::transition_buffers`] between those two
//!   operations.
//!
//! - Pipeline layouts are *explicitly specified* when setting bind groups.
//!   Incompatible layouts disturb groups bound at higher indices.
//!
//! - The API *accepts collections as iterators*, to avoid forcing the user to
//!   store data in particular containers. The implementation doesn't guarantee
//!   that any of the iterators are drained, unless stated otherwise by the
//!   function documentation. For this reason, we recommend that iterators don't
//!   do any mutating work.
//!
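//! As an illustration of the explicit-barrier rule above, here is a hedged,
//! untested sketch: the usage values shown are illustrative, and the
//! surrounding pass recording and buffer setup are omitted.
//!
//! ```ignore
//! // `buffer` was just written by a compute shader; transition it before a
//! // draw call consumes it as an index buffer.
//! encoder.transition_buffers(core::iter::once(BufferBarrier {
//!     buffer: &buffer,
//!     usage: wgt::BufferUses::STORAGE_READ_WRITE..wgt::BufferUses::INDEX,
//! }));
//! ```
//!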
//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
//! Ideally, all trait methods would have doc comments setting out the
//! requirements users must meet to ensure correct and portable behavior. If you
//! are aware of a specific requirement that a backend imposes that is not
//! ensured by the traits' documented rules, please file an issue. Or, if you are
//! a capable technical writer, please file a pull request!
//!
//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
//! [`wgpu`]: https://crates.io/crates/wgpu
//! [`vulkan::Api`]: vulkan/struct.Api.html
//! [`metal::Api`]: metal/struct.Api.html
//!
//! ## Primary backends
//!
//! The `wgpu-hal` crate has full-featured backends implemented on the following
//! platform graphics APIs:
//!
//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
//!   Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
//!
//! - Metal on macOS, using the [`metal`] crate's bindings.
//!
//! - Direct3D 12 on Windows, using the [`windows`] crate's bindings.
//!
//! [`ash`]: https://crates.io/crates/ash
//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
//! [`metal`]: https://crates.io/crates/metal
//! [`windows`]: https://crates.io/crates/windows
//!
//! ## Secondary backends
//!
//! The `wgpu-hal` crate has a partial implementation based on the following
//! platform graphics API:
//!
//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
//!   available. See the [`gles`] module documentation for details.
//!
//! [`gles`]: gles/index.html
//!
//! You can see what capabilities an adapter is missing by checking the
//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
//! from [`Instance::enumerate_adapters`].
//!
//! The API is generally designed to fit the primary backends better than the
//! secondary backends, so the latter may impose more overhead.
//!
//! [tdc]: wgt::DownlevelCapabilities
//!
//! ## Traits
//!
//! The `wgpu-hal` crate defines a handful of traits that together
//! represent a cross-platform abstraction for modern GPU APIs.
//!
//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
//!   own, only a collection of associated types.
//!
//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
//!   creates an instance value, which you can use to enumerate the adapters
//!   available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
//!   returns an instance that can enumerate the Vulkan physical devices on your
//!   system.
//!
//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
//!   particular device from a particular backend. For example, a Vulkan instance
//!   might have a Lavapipe software adapter and a GPU-based adapter.
//!
//! - [`Api::Device`] implements the [`Device`] trait, representing an active
//!   link to a device. You get a device value by calling [`Adapter::open`], and
//!   then use it to create buffers, textures, shader modules, and so on.
//!
//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
//!   command buffers to a given device.
//!
//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
//!   use to build buffers of commands to submit to a queue. This has all the
//!   methods for drawing and running compute shaders, which is presumably what
//!   you're here for.
//!
//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
//!   swapchain for presenting images on the screen, via interaction with the
//!   system's window manager.
//!
//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
//! [`Api::Texture`] that represent resources the rest of the interface can
//! operate on, but these generally do not have their own traits.
//!
//! [Ii]: Instance::init
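//!
//! Pulling these together, a typical initialization sequence looks roughly
//! like this (an untested sketch: error handling and descriptor contents are
//! elided, and `A` stands for a concrete backend such as [`api::Vulkan`]):
//!
//! ```ignore
//! unsafe {
//!     // Create an instance of the chosen backend.
//!     let instance = <A as Api>::Instance::init(&instance_desc)?;
//!     // Enumerate adapters (physical devices) and pick one.
//!     let exposed = instance
//!         .enumerate_adapters(None)
//!         .into_iter()
//!         .next()
//!         .expect("no adapters found");
//!     // Open the adapter, obtaining a device and its queue.
//!     let OpenDevice { device, queue } = exposed.adapter.open(
//!         wgt::Features::empty(),
//!         &wgt::Limits::default(),
//!         &wgt::MemoryHints::default(),
//!     )?;
//! }
//! ```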
//!
//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
//!
//! As much as possible, `wgpu-hal` traits place the burden of validation,
//! resource tracking, and state tracking on the caller, not on the trait
//! implementations themselves. Anything which can reasonably be handled in
//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
//! to provide portable behavior, and report conditions that the calling code
//! can't reasonably anticipate, like device loss or running out of memory.
//!
//! The `wgpu` crate collection is intended for use in security-sensitive
//! applications, like web browsers, where the API is available to untrusted
//! code. This means that `wgpu-core`'s validation is not simply a service to
//! developers, to be provided opportunistically when the performance costs are
//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
//! validation must be exhaustive, to ensure that even malicious content cannot
//! provoke and exploit undefined behavior in the platform's graphics API.
//!
//! Because graphics APIs' requirements are complex, the only practical way for
//! `wgpu` to provide exhaustive validation is to comprehensively track the
//! lifetime and state of all the resources in the system. Implementing this
//! separately for each backend is infeasible; effort would be better spent
//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
//! Fortunately, the requirements are largely similar across the various
//! platforms, so cross-platform validation is practical.
//!
//! Some backends have specific requirements that aren't practical to foist off
//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
//! Microsoft COM reference counts is best handled by using appropriate pointer
//! types within the backend.
//!
//! A desire for "defense in depth" may suggest performing additional validation
//! in `wgpu-hal` when the opportunity arises, but this must be done with
//! caution. Even experienced contributors infer the expectations their changes
//! must meet by considering not just requirements made explicit in types, tests,
//! assertions, and comments, but also those implicit in the surrounding code.
//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
//! about it - that would be redundant!" The responsibility for exhaustive
//! validation always rests with `wgpu-core`, regardless of what may or may not
//! be checked in `wgpu-hal`.
//!
//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
//! for requirements that `wgpu-core` should have enforced should report failure
//! via the `unreachable!` macro, because problems detected at this stage always
//! indicate a bug in `wgpu-core`.
//!
//! ## Debugging
//!
//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
//! page still applies to this API, with the exception of API tracing/replay
//! functionality, which is only available in `wgpu-core`.
//!
//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications

#![no_std]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
#![allow(
    // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
    clippy::arc_with_non_send_sync,
    // We don't use syntax sugar where it's not necessary.
    clippy::match_like_matches_macro,
    // Redundant matching is more explicit.
    clippy::redundant_pattern_matching,
    // Explicit lifetimes are often easier to reason about.
    clippy::needless_lifetimes,
    // No need for defaults in the internal types.
    clippy::new_without_default,
    // Matches are good and extendable, no need to make an exception here.
    clippy::single_match,
    // Push commands are more regular than macros.
    clippy::vec_init_then_push,
    // We unsafe impl `Send` for a reason.
    clippy::non_send_fields_in_send_ty,
    // TODO!
    clippy::missing_safety_doc,
    // It gets in the way a lot and does not prevent bugs in practice.
    clippy::pattern_type_mismatch,
)]
#![warn(
    clippy::alloc_instead_of_core,
    clippy::ptr_as_ptr,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    trivial_casts,
    trivial_numeric_casts,
    unsafe_op_in_unsafe_fn,
    unused_extern_crates,
    unused_qualifications
)]

extern crate alloc;
extern crate wgpu_types as wgt;
// Each of these backends needs `std` in some fashion; usually `std::thread` functions.
#[cfg(any(dx12, gles_with_std, metal, vulkan))]
#[macro_use]
extern crate std;

/// DirectX12 API internals.
#[cfg(dx12)]
pub mod dx12;
/// GLES API internals.
#[cfg(gles)]
pub mod gles;
/// Metal API internals.
#[cfg(metal)]
pub mod metal;
/// A dummy API implementation.
// TODO(https://github.com/gfx-rs/wgpu/issues/7120): this should have a cfg
pub mod noop;
/// Vulkan API internals.
#[cfg(vulkan)]
pub mod vulkan;

pub mod auxil;
pub mod api {
    #[cfg(dx12)]
    pub use super::dx12::Api as Dx12;
    #[cfg(gles)]
    pub use super::gles::Api as Gles;
    #[cfg(metal)]
    pub use super::metal::Api as Metal;
    pub use super::noop::Api as Noop;
    #[cfg(vulkan)]
    pub use super::vulkan::Api as Vulkan;
}

mod dynamic;
#[cfg(feature = "validation_canary")]
mod validation_canary;

#[cfg(feature = "validation_canary")]
pub use validation_canary::{ValidationCanary, VALIDATION_CANARY};

pub(crate) use dynamic::impl_dyn_resource;
pub use dynamic::{
    DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
    DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
    DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
    DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
    DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
};

#[allow(unused)]
use alloc::boxed::Box;
use alloc::{borrow::Cow, string::String, vec::Vec};
use core::{
    borrow::Borrow,
    error::Error,
    fmt,
    num::NonZeroU32,
    ops::{Range, RangeInclusive},
    ptr::NonNull,
};

use bitflags::bitflags;
use thiserror::Error;
use wgt::WasmNotSendSync;

cfg_if::cfg_if! {
    if #[cfg(supports_ptr_atomics)] {
        use alloc::sync::Arc;
    } else if #[cfg(feature = "portable-atomic")] {
        use portable_atomic_util::Arc;
    }
}

// - Vertex + Fragment
// - Compute
// - Task + Mesh + Fragment
pub const MAX_CONCURRENT_SHADER_STAGES: usize = 3;
pub const MAX_ANISOTROPY: u8 = 16;
pub const MAX_BIND_GROUPS: usize = 8;
pub const MAX_VERTEX_BUFFERS: usize = 16;
pub const MAX_COLOR_ATTACHMENTS: usize = 8;
pub const MAX_MIP_LEVELS: u32 = 16;
/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
/// cbindgen:ignore
pub const QUERY_SIZE: wgt::BufferAddress = 8;

pub type Label<'a> = Option<&'a str>;
pub type MemoryRange = Range<wgt::BufferAddress>;
pub type FenceValue = u64;
#[cfg(supports_64bit_atomics)]
pub type AtomicFenceValue = core::sync::atomic::AtomicU64;
#[cfg(not(supports_64bit_atomics))]
pub type AtomicFenceValue = portable_atomic::AtomicU64;

/// A callback to signal that wgpu is no longer using a resource.
#[cfg(any(gles, vulkan))]
pub type DropCallback = Box<dyn FnOnce() + Send + Sync + 'static>;

#[cfg(any(gles, vulkan))]
pub struct DropGuard {
    callback: Option<DropCallback>,
}

#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
impl DropGuard {
    fn from_option(callback: Option<DropCallback>) -> Option<Self> {
        callback.map(|callback| Self {
            callback: Some(callback),
        })
    }
}

#[cfg(any(gles, vulkan))]
impl Drop for DropGuard {
    fn drop(&mut self) {
        if let Some(cb) = self.callback.take() {
            (cb)();
        }
    }
}

#[cfg(any(gles, vulkan))]
impl fmt::Debug for DropGuard {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("DropGuard").finish()
    }
}

#[derive(Clone, Debug, PartialEq, Eq, Error)]
pub enum DeviceError {
    #[error("Out of memory")]
    OutOfMemory,
    #[error("Device is lost")]
    Lost,
    #[error("Unexpected error variant (driver implementation is at fault)")]
    Unexpected,
}

#[allow(dead_code)] // may be unused on some platforms
#[cold]
fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
    panic!("wgpu-hal invariant was violated (usage error): {txt}")
}

#[allow(dead_code)] // may be unused on some platforms
#[cold]
fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
    panic!("wgpu-hal ran into a preventable internal error: {txt}")
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum ShaderError {
    #[error("Compilation failed: {0:?}")]
    Compilation(String),
    #[error(transparent)]
    Device(#[from] DeviceError),
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum PipelineError {
    #[error("Linkage failed for stage {0:?}: {1}")]
    Linkage(wgt::ShaderStages, String),
    #[error("Entry point for stage {0:?} is invalid")]
    EntryPoint(naga::ShaderStage),
    #[error(transparent)]
    Device(#[from] DeviceError),
    #[error("Pipeline constant error for stage {0:?}: {1}")]
    PipelineConstants(wgt::ShaderStages, String),
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum PipelineCacheError {
    #[error(transparent)]
    Device(#[from] DeviceError),
}

#[derive(Clone, Debug, Eq, PartialEq, Error)]
pub enum SurfaceError {
    #[error("Surface is lost")]
    Lost,
    #[error("Surface is outdated, needs to be re-created")]
    Outdated,
    #[error(transparent)]
    Device(#[from] DeviceError),
    #[error("Other reason: {0}")]
    Other(&'static str),
}

/// Error occurring while trying to create an instance, or create a surface from an instance;
/// typically relating to the state of the underlying graphics API or hardware.
#[derive(Clone, Debug, Error)]
#[error("{message}")]
pub struct InstanceError {
    /// These errors are very platform specific, so do not attempt to encode them as an enum.
    ///
    /// This message should describe the problem in sufficient detail to be useful for a
    /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
    /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
    message: String,

    /// Underlying error value, if any is available.
    #[source]
    source: Option<Arc<dyn Error + Send + Sync + 'static>>,
}

impl InstanceError {
    #[allow(dead_code)] // may be unused on some platforms
    pub(crate) fn new(message: String) -> Self {
        Self {
            message,
            source: None,
        }
    }
    #[allow(dead_code)] // may be unused on some platforms
    pub(crate) fn with_source(message: String, source: impl Error + Send + Sync + 'static) -> Self {
        cfg_if::cfg_if! {
            if #[cfg(supports_ptr_atomics)] {
                let source = Arc::new(source);
            } else {
                // TODO(https://github.com/rust-lang/rust/issues/18598): avoid indirection via Box once arbitrary types support unsized coercion
                let source: Box<dyn Error + Send + Sync + 'static> = Box::new(source);
                let source = Arc::from(source);
            }
        }
        Self {
            message,
            source: Some(source),
        }
    }
}

pub trait Api: Clone + fmt::Debug + Sized {
    type Instance: DynInstance + Instance<A = Self>;
    type Surface: DynSurface + Surface<A = Self>;
    type Adapter: DynAdapter + Adapter<A = Self>;
    type Device: DynDevice + Device<A = Self>;

    type Queue: DynQueue + Queue<A = Self>;
    type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;

    /// This API's command buffer type.
    ///
    /// The only thing you can do with `CommandBuffer`s is build them
    /// with a [`CommandEncoder`] and then pass them to
    /// [`Queue::submit`] for execution, or destroy them by passing
    /// them to [`CommandEncoder::reset_all`].
    ///
    /// [`CommandEncoder`]: Api::CommandEncoder
    type CommandBuffer: DynCommandBuffer;

    type Buffer: DynBuffer;
    type Texture: DynTexture;
    type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
    type TextureView: DynTextureView;
    type Sampler: DynSampler;
    type QuerySet: DynQuerySet;

    /// A value you can block on to wait for something to finish.
    ///
    /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
    /// [`Device::wait`] to block until a fence reaches or passes a value you
    /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
    /// store in it when the submitted work is complete.
    ///
    /// Attempting to set a fence to a value less than its current value has no
    /// effect.
    ///
    /// Waiting on a fence returns as soon as the fence reaches *or passes* the
    /// requested value. This implies that, in order to reliably determine when
    /// an operation has completed, operations must finish in order of
    /// increasing fence values: if a higher-valued operation were to finish
    /// before a lower-valued operation, then waiting for the fence to reach the
    /// lower value could return before the lower-valued operation has actually
    /// finished.
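    ///
    /// For example, a hedged sketch of signaling and waiting (untested; the
    /// exact signatures of [`Queue::submit`] and [`Device::wait`] are
    /// abbreviated, and `queue`, `device`, `fence`, and `cmd_buf` are assumed
    /// to exist):
    ///
    /// ```ignore
    /// // Ask the queue to store 42 in `fence` when this submission finishes.
    /// queue.submit(&[&cmd_buf], &[], (&mut fence, 42))?;
    /// // Block until `fence` reaches or passes 42.
    /// device.wait(&fence, 42, timeout_ms)?;
    /// ```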
    type Fence: DynFence;

    type BindGroupLayout: DynBindGroupLayout;
    type BindGroup: DynBindGroup;
    type PipelineLayout: DynPipelineLayout;
    type ShaderModule: DynShaderModule;
    type RenderPipeline: DynRenderPipeline;
    type ComputePipeline: DynComputePipeline;
    type PipelineCache: DynPipelineCache;

    type AccelerationStructure: DynAccelerationStructure + 'static;
}

pub trait Instance: Sized + WasmNotSendSync {
    type A: Api;

    unsafe fn init(desc: &InstanceDescriptor) -> Result<Self, InstanceError>;
    unsafe fn create_surface(
        &self,
        display_handle: raw_window_handle::RawDisplayHandle,
        window_handle: raw_window_handle::RawWindowHandle,
    ) -> Result<<Self::A as Api>::Surface, InstanceError>;
    /// `surface_hint` is only used by the GLES backend targeting WebGL2
    unsafe fn enumerate_adapters(
        &self,
        surface_hint: Option<&<Self::A as Api>::Surface>,
    ) -> Vec<ExposedAdapter<Self::A>>;
}

pub trait Surface: WasmNotSendSync {
    type A: Api;

    /// Configure `self` to use `device`.
    ///
    /// # Safety
    ///
    /// - All GPU work using `self` must have been completed.
    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
    /// - The surface `self` must not currently be configured to use any other [`Device`].
    unsafe fn configure(
        &self,
        device: &<Self::A as Api>::Device,
        config: &SurfaceConfiguration,
    ) -> Result<(), SurfaceError>;

    /// Unconfigure `self` on `device`.
    ///
    /// # Safety
    ///
    /// - All GPU work that uses `surface` must have been completed.
    /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
    /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
    /// - The surface `self` must have been configured on `device`.
    unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);

    /// Return the next texture to be presented by `self`, for the caller to draw on.
    ///
    /// On success, return an [`AcquiredSurfaceTexture`] representing the
    /// texture into which the caller should draw the image to be displayed on
    /// `self`.
    ///
    /// If `timeout` elapses before `self` has a texture ready to be acquired,
    /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
    /// timeout.
    ///
    /// # Using an [`AcquiredSurfaceTexture`]
    ///
    /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
    /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
    /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
    /// carries some metadata about that [`SurfaceTexture`].
    ///
    /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
    /// include the [`SurfaceTexture`] in the `surface_textures` argument.
    ///
    /// When you are done drawing on the texture, you can display it on `self`
    /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
    ///
    /// If you do not wish to display the texture, you must pass the
    /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
    /// by future acquisitions.
    ///
    /// # Portability
    ///
    /// Some backends can't support a timeout when acquiring a texture. On these
    /// backends, `timeout` is ignored.
    ///
    /// # Safety
    ///
    /// - The surface `self` must currently be configured on some [`Device`].
    ///
    /// - The `fence` argument must be the same [`Fence`] passed to all calls to
    ///   [`Queue::submit`] that used [`Texture`]s acquired from this surface.
    ///
    /// - You may only have one texture acquired from `self` at a time. When
    ///   `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
    ///   [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
    ///   [`Surface::discard_texture`] before calling `acquire_texture` again.
    ///
    /// [`texture`]: AcquiredSurfaceTexture::texture
    /// [`SurfaceTexture`]: Api::SurfaceTexture
    /// [`borrow`]: alloc::borrow::Borrow::borrow
    /// [`Texture`]: Api::Texture
    /// [`Fence`]: Api::Fence
    /// [`self.discard_texture`]: Surface::discard_texture
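    ///
    /// A hedged sketch of one frame (untested; command recording, error
    /// handling, and fence-value bookkeeping are elided):
    ///
    /// ```ignore
    /// if let Some(ast) = surface.acquire_texture(None, &fence)? {
    ///     // Record commands that draw to `ast.texture.borrow()`, then:
    ///     queue.submit(&[&cmd_buf], &[&ast.texture], (&mut fence, next_value))?;
    ///     queue.present(&surface, ast.texture)?;
    /// }
    /// ```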
    unsafe fn acquire_texture(
        &self,
        timeout: Option<core::time::Duration>,
        fence: &<Self::A as Api>::Fence,
    ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;

    /// Relinquish an acquired texture without presenting it.
    ///
    /// After this call, the texture underlying [`SurfaceTexture`] may be
    /// returned by subsequent calls to [`self.acquire_texture`].
    ///
    /// # Safety
    ///
    /// - The surface `self` must currently be configured on some [`Device`].
    ///
    /// - `texture` must be a [`SurfaceTexture`] returned by a call to
    ///   [`self.acquire_texture`] that has not yet been passed to
    ///   [`Queue::present`].
    ///
    /// [`SurfaceTexture`]: Api::SurfaceTexture
    /// [`self.acquire_texture`]: Surface::acquire_texture
    unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
}

pub trait Adapter: WasmNotSendSync {
    type A: Api;

    unsafe fn open(
        &self,
        features: wgt::Features,
        limits: &wgt::Limits,
        memory_hints: &wgt::MemoryHints,
    ) -> Result<OpenDevice<Self::A>, DeviceError>;

    /// Return the set of supported capabilities for a texture format.
    unsafe fn texture_format_capabilities(
        &self,
        format: wgt::TextureFormat,
    ) -> TextureFormatCapabilities;

    /// Returns the capabilities of working with a specified surface.
    ///
    /// `None` means presentation is not supported for it.
    unsafe fn surface_capabilities(
        &self,
        surface: &<Self::A as Api>::Surface,
    ) -> Option<SurfaceCapabilities>;

    /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
    ///
    /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
    unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
}

/// A connection to a GPU and a pool of resources to use with it.
///
/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
/// used for creating resources. Each `Device` has an associated [`Queue`] used
/// for command submission.
///
/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
/// `Adapter`.
///
/// A `Device`'s life cycle is generally:
///
/// 1) Obtain a `Device` and its associated [`Queue`] by calling
///    [`Adapter::open`].
///
///    Alternatively, the backend-specific types that implement [`Adapter`] often
///    have methods for creating a `wgpu-hal` `Device` from a platform-specific
///    handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
///    [`vulkan::Device`] from an [`ash::Device`].
///
/// 1) Create resources to use on the device by calling methods like
///    [`Device::create_texture`] or [`Device::create_shader_module`].
///
/// 1) Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
///    which you can use to build [`CommandBuffer`]s holding commands to be
///    executed on the GPU.
///
/// 1) Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
///    [`CommandBuffer`]s for execution on the GPU. If needed, call
///    [`Device::wait`] to wait for them to finish execution.
///
/// 1) Free resources with methods like [`Device::destroy_texture`] or
///    [`Device::destroy_shader_module`].
///
/// 1) Drop the device.
///
/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
/// [`wgpu_hal::Adapter`]: Adapter
/// [`wgpu_hal::Device`]: Device
/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
/// [`vulkan::Device`]: vulkan/struct.Device.html
/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
/// [`CommandBuffer`]: Api::CommandBuffer
///
/// # Safety
///
/// As with other `wgpu-hal` APIs, [validation] is the caller's
/// responsibility. Here are the general requirements for all `Device`
/// methods:
///
/// - Any resource passed to a `Device` method must have been created by that
///   `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
///   have been created with the `Device` passed as `self`.
///
/// - Resources may not be destroyed if they are used by any submitted command
///   buffers that have not yet finished execution.
///
/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
/// [`Texture`]: Api::Texture
pub trait Device: WasmNotSendSync {
    type A: Api;

    /// Creates a new buffer.
    ///
    /// The initial usage is `wgt::BufferUses::empty()`.
    unsafe fn create_buffer(
        &self,
        desc: &BufferDescriptor,
    ) -> Result<<Self::A as Api>::Buffer, DeviceError>;

    /// Free `buffer` and any GPU resources it owns.
    ///
    /// Note that backends are allowed to allocate GPU memory for buffers from
    /// allocation pools, and this call is permitted to simply return `buffer`'s
    /// storage to that pool, without making it available to other applications.
    ///
    /// # Safety
    ///
    /// - The given `buffer` must not currently be mapped.
    unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);

    /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
    unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);

    /// Return a pointer to CPU memory mapping the contents of `buffer`.
    ///
    /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
    /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
    /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
    /// `wgpu_hal` buffer is also unmapped.)
    ///
    /// If this function returns `Ok(mapping)`, then:
    ///
    /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
    ///
    /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
    ///   memory are immediately visible on the GPU, and vice versa.
    ///
    /// # Safety
    ///
    /// - The given `buffer` must have been created with the [`MAP_READ`] or
    ///   [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
    ///
    /// - The given `range` must fall within the size of `buffer`.
    ///
    /// - The caller must avoid data races between the CPU and the GPU. A data
    ///   race is any pair of accesses to a particular byte, one of which is a
    ///   write, that are not ordered with respect to each other by some sort of
    ///   synchronization operation.
    ///
    /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
    ///   `false`, then:
    ///
    ///   - Every CPU write to a mapped byte followed by a GPU read of that byte
    ///     must have at least one call to [`Device::flush_mapped_ranges`]
    ///     covering that byte that occurs between those two accesses.
    ///
    ///   - Every GPU write to a mapped byte followed by a CPU read of that byte
    ///     must have at least one call to [`Device::invalidate_mapped_ranges`]
    ///     covering that byte that occurs between those two accesses.
    ///
    ///   Note that the data race rule above requires that all such access pairs
    ///   be ordered, so it is meaningful to talk about what must occur
    ///   "between" them.
    ///
    /// - Zero-sized mappings are not allowed.
    ///
    /// - The returned [`BufferMapping::ptr`] must not be used after a call to
    ///   [`Device::unmap_buffer`].
    ///
    /// [`MAP_READ`]: wgt::BufferUses::MAP_READ
    /// [`MAP_WRITE`]: wgt::BufferUses::MAP_WRITE
815 unsafe fn map_buffer(
816 &self,
817 buffer: &<Self::A as Api>::Buffer,
818 range: MemoryRange,
819 ) -> Result<BufferMapping, DeviceError>;
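For a non-coherent mapping, the flush rule above amounts to a small bookkeeping protocol: record every CPU-written range, then hand all of them to [`Device::flush_mapped_ranges`] before submitting GPU work that reads the buffer. A minimal, self-contained sketch of that bookkeeping (the `MappingTracker` type is hypothetical, not part of `wgpu-hal`):

```rust
use std::ops::Range;

// Hypothetical caller-side bookkeeping for a mapped buffer. If the mapping
// is not coherent, CPU writes are recorded so they can all be flushed
// (via `Device::flush_mapped_ranges`) before the GPU reads the buffer.
struct MappingTracker {
    is_coherent: bool,
    dirty: Vec<Range<u64>>,
}

impl MappingTracker {
    fn new(is_coherent: bool) -> Self {
        Self { is_coherent, dirty: Vec::new() }
    }

    /// Record a CPU write through the mapped pointer.
    fn note_cpu_write(&mut self, range: Range<u64>) {
        if !self.is_coherent {
            self.dirty.push(range);
        }
    }

    /// Drain the ranges that must be flushed before the GPU reads the
    /// buffer. For a coherent mapping this is always empty.
    fn take_ranges_to_flush(&mut self) -> Vec<Range<u64>> {
        std::mem::take(&mut self.dirty)
    }
}
```

The symmetric discipline applies to GPU writes followed by CPU reads, with [`Device::invalidate_mapped_ranges`] in place of the flush.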
820
821 /// Remove the mapping established by the last call to [`Device::map_buffer`].
822 ///
823 /// # Safety
824 ///
825 /// - The given `buffer` must be currently mapped.
826 unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
827
828 /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
829 ///
830 /// # Safety
831 ///
832 /// - The given `buffer` must be currently mapped.
833 ///
834 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
835 unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
836 where
837 I: Iterator<Item = MemoryRange>;
838
839 /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
840 ///
841 /// # Safety
842 ///
843 /// - The given `buffer` must be currently mapped.
844 ///
845 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
846 unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
847 where
848 I: Iterator<Item = MemoryRange>;
849
850 /// Creates a new texture.
851 ///
852 /// The initial usage for all subresources is `wgt::TextureUses::UNINITIALIZED`.
853 unsafe fn create_texture(
854 &self,
855 desc: &TextureDescriptor,
856 ) -> Result<<Self::A as Api>::Texture, DeviceError>;
857 unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
858
859 /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
860 unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
861
862 unsafe fn create_texture_view(
863 &self,
864 texture: &<Self::A as Api>::Texture,
865 desc: &TextureViewDescriptor,
866 ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
867 unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
868 unsafe fn create_sampler(
869 &self,
870 desc: &SamplerDescriptor,
871 ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
872 unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
873
874 /// Create a fresh [`CommandEncoder`].
875 ///
876 /// The new `CommandEncoder` is in the "closed" state.
877 unsafe fn create_command_encoder(
878 &self,
879 desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
880 ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
881
882 /// Creates a bind group layout.
883 unsafe fn create_bind_group_layout(
884 &self,
885 desc: &BindGroupLayoutDescriptor,
886 ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
887 unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
888 unsafe fn create_pipeline_layout(
889 &self,
890 desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
891 ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
892 unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
893
894 #[allow(clippy::type_complexity)]
895 unsafe fn create_bind_group(
896 &self,
897 desc: &BindGroupDescriptor<
898 <Self::A as Api>::BindGroupLayout,
899 <Self::A as Api>::Buffer,
900 <Self::A as Api>::Sampler,
901 <Self::A as Api>::TextureView,
902 <Self::A as Api>::AccelerationStructure,
903 >,
904 ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
905 unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
906
907 unsafe fn create_shader_module(
908 &self,
909 desc: &ShaderModuleDescriptor,
910 shader: ShaderInput,
911 ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
912 unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
913
914 #[allow(clippy::type_complexity)]
915 unsafe fn create_render_pipeline(
916 &self,
917 desc: &RenderPipelineDescriptor<
918 <Self::A as Api>::PipelineLayout,
919 <Self::A as Api>::ShaderModule,
920 <Self::A as Api>::PipelineCache,
921 >,
922 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
923 #[allow(clippy::type_complexity)]
924 unsafe fn create_mesh_pipeline(
925 &self,
926 desc: &MeshPipelineDescriptor<
927 <Self::A as Api>::PipelineLayout,
928 <Self::A as Api>::ShaderModule,
929 <Self::A as Api>::PipelineCache,
930 >,
931 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
932 unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
933
934 #[allow(clippy::type_complexity)]
935 unsafe fn create_compute_pipeline(
936 &self,
937 desc: &ComputePipelineDescriptor<
938 <Self::A as Api>::PipelineLayout,
939 <Self::A as Api>::ShaderModule,
940 <Self::A as Api>::PipelineCache,
941 >,
942 ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
943 unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
944
945 unsafe fn create_pipeline_cache(
946 &self,
947 desc: &PipelineCacheDescriptor<'_>,
948 ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
949 fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
950 None
951 }
952 unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
953
954 unsafe fn create_query_set(
955 &self,
956 desc: &wgt::QuerySetDescriptor<Label>,
957 ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
958 unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
959 unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
960 unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
961 unsafe fn get_fence_value(
962 &self,
963 fence: &<Self::A as Api>::Fence,
964 ) -> Result<FenceValue, DeviceError>;
965
966 /// Wait for `fence` to reach `value`.
967 ///
968 /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
969 /// [`FenceValue`] to store in it, so you can use this `wait` function
970 /// to wait for a given queue submission to finish execution.
971 ///
972 /// The `value` argument must be a value that some actual operation you have
973 /// already presented to the device is going to store in `fence`. You cannot
974 /// wait for values yet to be submitted. (This restriction accommodates
975 /// implementations like the `vulkan` backend's [`FencePool`] that must
976 /// allocate a distinct synchronization object for each fence value one is
977 /// able to wait for.)
978 ///
979 /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
980 /// returns immediately.
981 ///
982 /// Returns `Ok(true)` on success and `Ok(false)` on timeout.
983 ///
984 /// [`Fence`]: Api::Fence
985 /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
986 unsafe fn wait(
987 &self,
988 fence: &<Self::A as Api>::Fence,
989 value: FenceValue,
990 timeout_ms: u32,
991 ) -> Result<bool, DeviceError>;
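The restriction above, that only values already associated with a submission may be waited on, can be modeled with a mock fence. This is an illustrative sketch (`MockFence` and `classify_wait` are hypothetical, not part of the trait):

```rust
// `highest_submitted` is the largest value any submission so far will store
// in the fence; `completed` is the value the GPU has actually reached.
struct MockFence {
    highest_submitted: u64,
    completed: u64,
}

#[derive(Debug, PartialEq)]
enum WaitOutcome {
    /// `value` has already been reached: `wait` returns without blocking.
    AlreadySignaled,
    /// A real backend would block until the value is reached or the
    /// timeout expires.
    MustBlock,
    /// `value` was never associated with a submission: a contract violation.
    ContractViolation,
}

fn classify_wait(fence: &MockFence, value: u64) -> WaitOutcome {
    if value > fence.highest_submitted {
        WaitOutcome::ContractViolation
    } else if value <= fence.completed {
        WaitOutcome::AlreadySignaled
    } else {
        WaitOutcome::MustBlock
    }
}
```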
992
993 /// Start a graphics debugger capture.
994 ///
995 /// # Safety
996 ///
997 /// See [`wgpu::Device::start_graphics_debugger_capture`][api] for more details.
998 ///
999 /// [api]: ../wgpu/struct.Device.html#method.start_graphics_debugger_capture
1000 unsafe fn start_graphics_debugger_capture(&self) -> bool;
1001
1002 /// Stop a graphics debugger capture.
1003 ///
1004 /// # Safety
1005 ///
1006 /// See [`wgpu::Device::stop_graphics_debugger_capture`][api] for more details.
1007 ///
1008 /// [api]: ../wgpu/struct.Device.html#method.stop_graphics_debugger_capture
1009 unsafe fn stop_graphics_debugger_capture(&self);
1010
1011 #[allow(unused_variables)]
1012 unsafe fn pipeline_cache_get_data(
1013 &self,
1014 cache: &<Self::A as Api>::PipelineCache,
1015 ) -> Option<Vec<u8>> {
1016 None
1017 }
1018
1019 unsafe fn create_acceleration_structure(
1020 &self,
1021 desc: &AccelerationStructureDescriptor,
1022 ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
1023 unsafe fn get_acceleration_structure_build_sizes(
1024 &self,
1025 desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
1026 ) -> AccelerationStructureBuildSizes;
1027 unsafe fn get_acceleration_structure_device_address(
1028 &self,
1029 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1030 ) -> wgt::BufferAddress;
1031 unsafe fn destroy_acceleration_structure(
1032 &self,
1033 acceleration_structure: <Self::A as Api>::AccelerationStructure,
1034 );
1035 fn tlas_instance_to_bytes(&self, instance: TlasInstance) -> Vec<u8>;
1036
1037 fn get_internal_counters(&self) -> wgt::HalCounters;
1038
1039 fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
1040 None
1041 }
1042
1043 fn check_if_oom(&self) -> Result<(), DeviceError>;
1044}
1045
1046pub trait Queue: WasmNotSendSync {
1047 type A: Api;
1048
1049 /// Submit `command_buffers` for execution on GPU.
1050 ///
1051 /// Update `fence` to `value` when the operation is complete. See
1052 /// [`Fence`] for details.
1053 ///
1054 /// A `wgpu_hal` queue is "single threaded": all command buffers are
1055 /// executed in the order they're submitted, with each buffer able to see
1056 /// previous buffers' results. Specifically:
1057 ///
1058 /// - If two calls to `submit` on a single `Queue` occur in a particular
1059 /// order (that is, they happen on the same thread, or on two threads that
1060 /// have synchronized to establish an ordering), then the first
1061 /// submission's commands all complete execution before any of the second
1062 /// submission's commands begin. All results produced by one submission
1063 /// are visible to the next.
1064 ///
1065 /// - Within a submission, command buffers execute in the order in which they
1066 /// appear in `command_buffers`. All results produced by one buffer are
1067 /// visible to the next.
1068 ///
1069 /// If two calls to `submit` on a single `Queue` from different threads are
1070 /// not synchronized to occur in a particular order, they must pass distinct
1071 /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1072 /// operations to complete is only trustworthy when operations finish in
1073 /// order of increasing fence value, but submissions from different threads
1074 /// cannot determine how to order the fence values if the submissions
1075 /// themselves are unordered. If each thread uses a separate [`Fence`], this
1076 /// problem does not arise.
1077 ///
1078 /// # Safety
1079 ///
1080 /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1081 /// from a [`CommandEncoder`][ce] that was constructed from the
1082 /// [`Device`][d] associated with this [`Queue`].
1083 ///
1084 /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1085 /// commands have finished execution. Since command buffers must not
1086 /// outlive their encoders, this implies that the encoders must remain
1087 /// alive as well.
1088 ///
1089 /// - All resources used by a submitted [`CommandBuffer`][cb]
1090 /// ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1091 /// on) must remain alive until the command buffer finishes execution.
1092 ///
1093 /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1094 /// writes to must appear in the `surface_textures` argument.
1095 ///
1096 /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1097 /// argument more than once.
1098 ///
1099 /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1100 /// for use with the [`Device`][d] associated with this [`Queue`],
1101 /// typically by calling [`Surface::configure`].
1102 ///
1103 /// - All calls to this function that include a given [`SurfaceTexture`][st]
1104 /// in `surface_textures` must use the same [`Fence`].
1105 ///
1106 /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1107 /// all submissions that will signal it have completed.
1108 ///
1109 /// [`Fence`]: Api::Fence
1110 /// [cb]: Api::CommandBuffer
1111 /// [ce]: Api::CommandEncoder
1112 /// [d]: Api::Device
1113 /// [t]: Api::Texture
1114 /// [bg]: Api::BindGroup
1115 /// [rp]: Api::RenderPipeline
1116 /// [st]: Api::SurfaceTexture
1117 unsafe fn submit(
1118 &self,
1119 command_buffers: &[&<Self::A as Api>::CommandBuffer],
1120 surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1121 signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1122 ) -> Result<(), DeviceError>;
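Because waiting is only trustworthy when fence values increase with submission order, each ordered stream of submissions can own one fence plus a monotonically increasing counter. A hedged sketch of that pattern (the `SubmissionCounter` helper is hypothetical, not part of `wgpu-hal`):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical per-fence counter: every submission signaling a given fence
// takes the next value, so the values stored in that fence strictly increase.
struct SubmissionCounter {
    next: AtomicU64,
}

impl SubmissionCounter {
    fn new() -> Self {
        Self { next: AtomicU64::new(1) }
    }

    /// The value to pass as `signal_fence.1` for the next submission.
    fn next_value(&self) -> u64 {
        self.next.fetch_add(1, Ordering::Relaxed)
    }
}
```

Two threads submitting without mutual synchronization would each use their own fence and counter, which is exactly the rule stated above.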
1123 unsafe fn present(
1124 &self,
1125 surface: &<Self::A as Api>::Surface,
1126 texture: <Self::A as Api>::SurfaceTexture,
1127 ) -> Result<(), SurfaceError>;
1128 unsafe fn get_timestamp_period(&self) -> f32;
1129}
1130
1131/// Encoder and allocation pool for `CommandBuffer`s.
1132///
1133/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1134/// acts as the allocation pool that owns the buffers' underlying
1135/// storage. Thus, `CommandBuffer`s must not outlive the
1136/// `CommandEncoder` that created them.
1137///
1138/// The life cycle of a `CommandBuffer` is as follows:
1139///
1140/// - Call [`Device::create_command_encoder`] to create a new
1141/// `CommandEncoder`, in the "closed" state.
1142///
1143/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1144/// recording commands. This puts the `CommandEncoder` in the
1145/// "recording" state.
1146///
1147/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1148/// etc. on a "recording" `CommandEncoder` to add commands to the
1149/// list. (If an error occurs, you must call `discard_encoding`; see
1150/// below.)
1151///
1152/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1153/// encoder and construct a fresh `CommandBuffer` consisting of the
1154/// list of commands recorded up to that point.
1155///
1156/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1157/// the commands recorded thus far and close the encoder. This is
1158/// the only safe thing to do on a `CommandEncoder` if an error has
1159/// occurred while recording commands.
1160///
1161/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
///   live `CommandBuffer`s built from it. All the `CommandBuffer`s
1163/// are destroyed, and their resources are freed.
1164///
1165/// # Safety
1166///
1167/// - The `CommandEncoder` must be in the states described above to
1168/// make the given calls.
1169///
1170/// - A `CommandBuffer` that has been submitted for execution on the
1171/// GPU must live until its execution is complete.
1172///
1173/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1174/// built it.
1175///
/// It is the user's responsibility to meet these requirements. This
1177/// allows `CommandEncoder` implementations to keep their state
1178/// tracking to a minimum.
1179pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1180 type A: Api;
1181
1182 /// Begin encoding a new command buffer.
1183 ///
1184 /// This puts this `CommandEncoder` in the "recording" state.
1185 ///
1186 /// # Safety
1187 ///
1188 /// This `CommandEncoder` must be in the "closed" state.
1189 unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1190
1191 /// Discard the command list under construction.
1192 ///
1193 /// If an error has occurred while recording commands, this
1194 /// is the only safe thing to do with the encoder.
1195 ///
1196 /// This puts this `CommandEncoder` in the "closed" state.
1197 ///
1198 /// # Safety
1199 ///
1200 /// This `CommandEncoder` must be in the "recording" state.
1201 ///
1202 /// Callers must not assume that implementations of this
1203 /// function are idempotent, and thus should not call it
1204 /// multiple times in a row.
1205 unsafe fn discard_encoding(&mut self);
1206
1207 /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1208 ///
1209 /// The returned [`CommandBuffer`] holds all the commands recorded
1210 /// on this `CommandEncoder` since the last call to
1211 /// [`begin_encoding`].
1212 ///
1213 /// This puts this `CommandEncoder` in the "closed" state.
1214 ///
1215 /// # Safety
1216 ///
1217 /// This `CommandEncoder` must be in the "recording" state.
1218 ///
1219 /// The returned [`CommandBuffer`] must not outlive this
1220 /// `CommandEncoder`. Implementations are allowed to build
1221 /// `CommandBuffer`s that depend on storage owned by this
1222 /// `CommandEncoder`.
1223 ///
1224 /// [`CommandBuffer`]: Api::CommandBuffer
1225 /// [`begin_encoding`]: CommandEncoder::begin_encoding
1226 unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1227
1228 /// Reclaim all resources belonging to this `CommandEncoder`.
1229 ///
1230 /// # Safety
1231 ///
1232 /// This `CommandEncoder` must be in the "closed" state.
1233 ///
1234 /// The `command_buffers` iterator must produce all the live
1235 /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1236 /// is, every extant `CommandBuffer` returned from `end_encoding`.
1237 ///
1238 /// [`CommandBuffer`]: Api::CommandBuffer
1239 unsafe fn reset_all<I>(&mut self, command_buffers: I)
1240 where
1241 I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
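The closed/recording life cycle described on this trait is a small state machine that callers must respect. A self-contained model of it (mock types, not the real trait; the real methods also allocate and return actual command buffers):

```rust
#[derive(Debug, PartialEq)]
enum EncoderState {
    Closed,
    Recording,
}

// Hypothetical model of the CommandEncoder life cycle. The asserts encode
// the state requirements that the real trait leaves to the caller.
struct EncoderModel {
    state: EncoderState,
    live_buffers: usize, // command buffers built and not yet reset
}

impl EncoderModel {
    fn new() -> Self {
        Self { state: EncoderState::Closed, live_buffers: 0 }
    }

    fn begin_encoding(&mut self) {
        assert_eq!(self.state, EncoderState::Closed);
        self.state = EncoderState::Recording;
    }

    fn end_encoding(&mut self) {
        assert_eq!(self.state, EncoderState::Recording);
        self.state = EncoderState::Closed;
        self.live_buffers += 1; // a fresh CommandBuffer now exists
    }

    fn discard_encoding(&mut self) {
        assert_eq!(self.state, EncoderState::Recording);
        self.state = EncoderState::Closed; // recorded commands are dropped
    }

    fn reset_all(&mut self) {
        assert_eq!(self.state, EncoderState::Closed);
        self.live_buffers = 0; // all live buffers are destroyed
    }
}
```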
1242
1243 unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1244 where
1245 T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1246
1247 unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1248 where
1249 T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1250
1251 // copy operations
1252
1253 unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1254
1255 unsafe fn copy_buffer_to_buffer<T>(
1256 &mut self,
1257 src: &<Self::A as Api>::Buffer,
1258 dst: &<Self::A as Api>::Buffer,
1259 regions: T,
1260 ) where
1261 T: Iterator<Item = BufferCopy>;
1262
    /// Copy from an external image to an internal texture.
    /// Works with a single array layer.
    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
    /// Note: the copy extent is in physical size (rounded to the block size).
1267 #[cfg(webgl)]
1268 unsafe fn copy_external_image_to_texture<T>(
1269 &mut self,
1270 src: &wgt::CopyExternalImageSourceInfo,
1271 dst: &<Self::A as Api>::Texture,
1272 dst_premultiplication: bool,
1273 regions: T,
1274 ) where
1275 T: Iterator<Item = TextureCopy>;
1276
    /// Copy from one texture to another.
    /// Works with a single array layer.
    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
    /// Note: the copy extent is in physical size (rounded to the block size).
1281 unsafe fn copy_texture_to_texture<T>(
1282 &mut self,
1283 src: &<Self::A as Api>::Texture,
1284 src_usage: wgt::TextureUses,
1285 dst: &<Self::A as Api>::Texture,
1286 regions: T,
1287 ) where
1288 T: Iterator<Item = TextureCopy>;
1289
    /// Copy from buffer to texture.
    /// Works with a single array layer.
    /// Note: `dst`'s current usage must be `wgt::TextureUses::COPY_DST`.
    /// Note: the copy extent is in physical size (rounded to the block size).
1294 unsafe fn copy_buffer_to_texture<T>(
1295 &mut self,
1296 src: &<Self::A as Api>::Buffer,
1297 dst: &<Self::A as Api>::Texture,
1298 regions: T,
1299 ) where
1300 T: Iterator<Item = BufferTextureCopy>;
1301
    /// Copy from texture to buffer.
    /// Works with a single array layer.
    /// Note: the copy extent is in physical size (rounded to the block size).
1305 unsafe fn copy_texture_to_buffer<T>(
1306 &mut self,
1307 src: &<Self::A as Api>::Texture,
1308 src_usage: wgt::TextureUses,
1309 dst: &<Self::A as Api>::Buffer,
1310 regions: T,
1311 ) where
1312 T: Iterator<Item = BufferTextureCopy>;
1313
1314 unsafe fn copy_acceleration_structure_to_acceleration_structure(
1315 &mut self,
1316 src: &<Self::A as Api>::AccelerationStructure,
1317 dst: &<Self::A as Api>::AccelerationStructure,
1318 copy: wgt::AccelerationStructureCopy,
1319 );
1320 // pass common
1321
1322 /// Sets the bind group at `index` to `group`.
1323 ///
1324 /// If this is not the first call to `set_bind_group` within the current
1325 /// render or compute pass:
1326 ///
1327 /// - If `layout` contains `n` bind group layouts, then any previously set
1328 /// bind groups at indices `n` or higher are cleared.
1329 ///
1330 /// - If the first `m` bind group layouts of `layout` are equal to those of
1331 /// the previously passed layout, but no more, then any previously set
1332 /// bind groups at indices `m` or higher are cleared.
1333 ///
1334 /// It follows from the above that passing the same layout as before doesn't
1335 /// clear any bind groups.
1336 ///
1337 /// # Safety
1338 ///
1339 /// - This [`CommandEncoder`] must be within a render or compute pass.
1340 ///
1341 /// - `index` must be the valid index of some bind group layout in `layout`.
1342 /// Call this the "relevant bind group layout".
1343 ///
1344 /// - The layout of `group` must be equal to the relevant bind group layout.
1345 ///
1346 /// - The length of `dynamic_offsets` must match the number of buffer
1347 /// bindings [with dynamic offsets][hdo] in the relevant bind group
1348 /// layout.
1349 ///
1350 /// - If those buffer bindings are ordered by increasing [`binding` number]
1351 /// and paired with elements from `dynamic_offsets`, then each offset must
1352 /// be a valid offset for the binding's corresponding buffer in `group`.
1353 ///
1354 /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1355 /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
1356 unsafe fn set_bind_group(
1357 &mut self,
1358 layout: &<Self::A as Api>::PipelineLayout,
1359 index: u32,
1360 group: &<Self::A as Api>::BindGroup,
1361 dynamic_offsets: &[wgt::DynamicOffset],
1362 );
1363
1364 /// Sets a range in push constant data.
1365 ///
1366 /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1367 ///
1368 /// # Safety
1369 ///
1370 /// - `offset_bytes` must be a multiple of 4.
1371 /// - The range of push constants written must be valid for the pipeline layout at draw time.
1372 unsafe fn set_push_constants(
1373 &mut self,
1374 layout: &<Self::A as Api>::PipelineLayout,
1375 stages: wgt::ShaderStages,
1376 offset_bytes: u32,
1377 data: &[u32],
1378 );
1379
1380 unsafe fn insert_debug_marker(&mut self, label: &str);
1381 unsafe fn begin_debug_marker(&mut self, group_label: &str);
1382 unsafe fn end_debug_marker(&mut self);
1383
1384 // queries
1385
    /// # Safety
1387 ///
1388 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1389 unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
    /// # Safety
1391 ///
1392 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1393 unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1394 unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1395 unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1396 unsafe fn copy_query_results(
1397 &mut self,
1398 set: &<Self::A as Api>::QuerySet,
1399 range: Range<u32>,
1400 buffer: &<Self::A as Api>::Buffer,
1401 offset: wgt::BufferAddress,
1402 stride: wgt::BufferSize,
1403 );
1404
1405 // render passes
1406
1407 /// Begin a new render pass, clearing all active bindings.
1408 ///
1409 /// This clears any bindings established by the following calls:
1410 ///
1411 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1412 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1413 /// - [`begin_query`](CommandEncoder::begin_query)
1414 /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1415 /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1416 /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1417 ///
1418 /// # Safety
1419 ///
1420 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1421 /// by a call to [`end_render_pass`].
1422 ///
1423 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1424 /// by a call to [`end_compute_pass`].
1425 ///
1426 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1427 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1428 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1429 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1430 unsafe fn begin_render_pass(
1431 &mut self,
1432 desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1433 ) -> Result<(), DeviceError>;
1434
1435 /// End the current render pass.
1436 ///
1437 /// # Safety
1438 ///
1439 /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1440 /// that has not been followed by a call to [`end_render_pass`].
1441 ///
1442 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1443 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1444 unsafe fn end_render_pass(&mut self);
1445
1446 unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1447
1448 unsafe fn set_index_buffer<'a>(
1449 &mut self,
1450 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1451 format: wgt::IndexFormat,
1452 );
1453 unsafe fn set_vertex_buffer<'a>(
1454 &mut self,
1455 index: u32,
1456 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1457 );
1458 unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1459 unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1460 unsafe fn set_stencil_reference(&mut self, value: u32);
1461 unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1462
1463 unsafe fn draw(
1464 &mut self,
1465 first_vertex: u32,
1466 vertex_count: u32,
1467 first_instance: u32,
1468 instance_count: u32,
1469 );
1470 unsafe fn draw_indexed(
1471 &mut self,
1472 first_index: u32,
1473 index_count: u32,
1474 base_vertex: i32,
1475 first_instance: u32,
1476 instance_count: u32,
1477 );
1478 unsafe fn draw_indirect(
1479 &mut self,
1480 buffer: &<Self::A as Api>::Buffer,
1481 offset: wgt::BufferAddress,
1482 draw_count: u32,
1483 );
1484 unsafe fn draw_indexed_indirect(
1485 &mut self,
1486 buffer: &<Self::A as Api>::Buffer,
1487 offset: wgt::BufferAddress,
1488 draw_count: u32,
1489 );
1490 unsafe fn draw_indirect_count(
1491 &mut self,
1492 buffer: &<Self::A as Api>::Buffer,
1493 offset: wgt::BufferAddress,
1494 count_buffer: &<Self::A as Api>::Buffer,
1495 count_offset: wgt::BufferAddress,
1496 max_count: u32,
1497 );
1498 unsafe fn draw_indexed_indirect_count(
1499 &mut self,
1500 buffer: &<Self::A as Api>::Buffer,
1501 offset: wgt::BufferAddress,
1502 count_buffer: &<Self::A as Api>::Buffer,
1503 count_offset: wgt::BufferAddress,
1504 max_count: u32,
1505 );
1506 unsafe fn draw_mesh_tasks(
1507 &mut self,
1508 group_count_x: u32,
1509 group_count_y: u32,
1510 group_count_z: u32,
1511 );
1512 unsafe fn draw_mesh_tasks_indirect(
1513 &mut self,
1514 buffer: &<Self::A as Api>::Buffer,
1515 offset: wgt::BufferAddress,
1516 draw_count: u32,
1517 );
1518 unsafe fn draw_mesh_tasks_indirect_count(
1519 &mut self,
1520 buffer: &<Self::A as Api>::Buffer,
1521 offset: wgt::BufferAddress,
1522 count_buffer: &<Self::A as Api>::Buffer,
1523 count_offset: wgt::BufferAddress,
1524 max_count: u32,
1525 );
1526
1527 // compute passes
1528
1529 /// Begin a new compute pass, clearing all active bindings.
1530 ///
1531 /// This clears any bindings established by the following calls:
1532 ///
1533 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1534 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1535 /// - [`begin_query`](CommandEncoder::begin_query)
1536 /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1537 ///
1538 /// # Safety
1539 ///
1540 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1541 /// by a call to [`end_render_pass`].
1542 ///
1543 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1544 /// by a call to [`end_compute_pass`].
1545 ///
1546 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1547 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1548 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1549 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1550 unsafe fn begin_compute_pass(
1551 &mut self,
1552 desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1553 );
1554
1555 /// End the current compute pass.
1556 ///
1557 /// # Safety
1558 ///
1559 /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1560 /// that has not been followed by a call to [`end_compute_pass`].
1561 ///
1562 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1563 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1564 unsafe fn end_compute_pass(&mut self);
1565
1566 unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1567
1568 unsafe fn dispatch(&mut self, count: [u32; 3]);
1569 unsafe fn dispatch_indirect(
1570 &mut self,
1571 buffer: &<Self::A as Api>::Buffer,
1572 offset: wgt::BufferAddress,
1573 );
1574
    /// Build or update acceleration structures.
    ///
    /// To get the required sizes for the buffer allocations, use
    /// [`Device::get_acceleration_structure_build_sizes`] per descriptor.
    ///
    /// All buffers must be synchronized externally. Any buffer region that is
    /// written to may only be passed once per call, with the exception of
    /// updates within the same descriptor. Consequences of this limitation:
    ///
    /// - Scratch buffers must be unique.
    /// - A TLAS cannot be built in the same call as a BLAS it contains.
1582 unsafe fn build_acceleration_structures<'a, T>(
1583 &mut self,
1584 descriptor_count: u32,
1585 descriptors: T,
1586 ) where
1587 Self::A: 'a,
1588 T: IntoIterator<
1589 Item = BuildAccelerationStructureDescriptor<
1590 'a,
1591 <Self::A as Api>::Buffer,
1592 <Self::A as Api>::AccelerationStructure,
1593 >,
1594 >;
1595
1596 unsafe fn place_acceleration_structure_barrier(
1597 &mut self,
1598 barrier: AccelerationStructureBarrier,
1599 );
1600 // Modeled on DX12, because DX12's behavior can be polyfilled in Vulkan, but not the other way around.
1601 unsafe fn read_acceleration_structure_compact_size(
1602 &mut self,
1603 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
1604 buf: &<Self::A as Api>::Buffer,
1605 );
1606}
1607
1608bitflags!(
1609 /// Pipeline layout creation flags.
1610 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1611 pub struct PipelineLayoutFlags: u32 {
1612 /// D3D12: Add support for `first_vertex` and `first_instance` builtins
1613 /// via push constants for direct execution.
1614 const FIRST_VERTEX_INSTANCE = 1 << 0;
1615 /// D3D12: Add support for `num_workgroups` builtins via push constants
1616 /// for direct execution.
1617 const NUM_WORK_GROUPS = 1 << 1;
1618 /// D3D12: Add support for the builtins that the other flags enable for
1619 /// indirect execution.
1620 const INDIRECT_BUILTIN_UPDATE = 1 << 2;
1621 }
1622);
1623
1624bitflags!(
1625 /// Bind group layout creation flags.
1626 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1627 pub struct BindGroupLayoutFlags: u32 {
1628 /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1629 const PARTIALLY_BOUND = 1 << 0;
1630 }
1631);
1632
1633bitflags!(
1634 /// Texture format capability flags.
1635 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1636 pub struct TextureFormatCapabilities: u32 {
1637 /// Format can be sampled.
1638 const SAMPLED = 1 << 0;
1639 /// Format can be sampled with a linear sampler.
1640 const SAMPLED_LINEAR = 1 << 1;
1641 /// Format can be sampled with a min/max reduction sampler.
1642 const SAMPLED_MINMAX = 1 << 2;
1643
1644 /// Format can be used as storage with read-only access.
1645 const STORAGE_READ_ONLY = 1 << 3;
1646 /// Format can be used as storage with write-only access.
1647 const STORAGE_WRITE_ONLY = 1 << 4;
1648 /// Format can be used as storage with both read and write access.
1649 const STORAGE_READ_WRITE = 1 << 5;
1650 /// Format can be used as storage with atomics.
1651 const STORAGE_ATOMIC = 1 << 6;
1652
1653 /// Format can be used as color and input attachment.
1654 const COLOR_ATTACHMENT = 1 << 7;
1655 /// Format can be used as color (with blending) and input attachment.
1656 const COLOR_ATTACHMENT_BLEND = 1 << 8;
1657 /// Format can be used as depth-stencil and input attachment.
1658 const DEPTH_STENCIL_ATTACHMENT = 1 << 9;
1659
1660 /// Format can be multisampled by x2.
1661 const MULTISAMPLE_X2 = 1 << 10;
1662 /// Format can be multisampled by x4.
1663 const MULTISAMPLE_X4 = 1 << 11;
1664 /// Format can be multisampled by x8.
1665 const MULTISAMPLE_X8 = 1 << 12;
1666 /// Format can be multisampled by x16.
1667 const MULTISAMPLE_X16 = 1 << 13;
1668
1669 /// Format can be used for render pass resolve targets.
1670 const MULTISAMPLE_RESOLVE = 1 << 14;
1671
1672 /// Format can be copied from.
1673 const COPY_SRC = 1 << 15;
1674 /// Format can be copied to.
1675 const COPY_DST = 1 << 16;
1676 }
1677);
1678
1679bitflags!(
1680 /// Texture format aspect flags.
1681 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1682 pub struct FormatAspects: u8 {
1683 const COLOR = 1 << 0;
1684 const DEPTH = 1 << 1;
1685 const STENCIL = 1 << 2;
1686 const PLANE_0 = 1 << 3;
1687 const PLANE_1 = 1 << 4;
1688 const PLANE_2 = 1 << 5;
1689
1690 const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1691 }
1692);
1693
1694impl FormatAspects {
1695 pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1696 let aspect_mask = match aspect {
1697 wgt::TextureAspect::All => Self::all(),
1698 wgt::TextureAspect::DepthOnly => Self::DEPTH,
1699 wgt::TextureAspect::StencilOnly => Self::STENCIL,
1700 wgt::TextureAspect::Plane0 => Self::PLANE_0,
1701 wgt::TextureAspect::Plane1 => Self::PLANE_1,
1702 wgt::TextureAspect::Plane2 => Self::PLANE_2,
1703 };
1704 Self::from(format) & aspect_mask
1705 }
1706
1707 /// Returns `true` if exactly one flag is set.
1708 pub fn is_one(&self) -> bool {
1709 self.bits().is_power_of_two()
1710 }
1711
1712 pub fn map(&self) -> wgt::TextureAspect {
1713 match *self {
1714 Self::COLOR => wgt::TextureAspect::All,
1715 Self::DEPTH => wgt::TextureAspect::DepthOnly,
1716 Self::STENCIL => wgt::TextureAspect::StencilOnly,
1717 Self::PLANE_0 => wgt::TextureAspect::Plane0,
1718 Self::PLANE_1 => wgt::TextureAspect::Plane1,
1719 Self::PLANE_2 => wgt::TextureAspect::Plane2,
1720 _ => unreachable!(),
1721 }
1722 }
1723}
1724
1725impl From<wgt::TextureFormat> for FormatAspects {
1726 fn from(format: wgt::TextureFormat) -> Self {
1727 match format {
1728 wgt::TextureFormat::Stencil8 => Self::STENCIL,
1729 wgt::TextureFormat::Depth16Unorm
1730 | wgt::TextureFormat::Depth32Float
1731 | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1732 wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1733 Self::DEPTH_STENCIL
1734 }
1735 wgt::TextureFormat::NV12 => Self::PLANE_0 | Self::PLANE_1,
1736 _ => Self::COLOR,
1737 }
1738 }
1739}
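The aspect-selection logic above can be sketched standalone. This is a minimal mirror of `FormatAspects::new` and the `From<wgt::TextureFormat>` impl, assuming simplified local `Format`/`Aspect` enums and plain `u8` constants in place of the `wgt` types and the bitflags struct:

```rust
// Plain-bitmask sketch of the aspect intersection done by `FormatAspects::new`.
// The enums here are hypothetical stand-ins, not the real `wgt` types.
const COLOR: u8 = 1 << 0;
const DEPTH: u8 = 1 << 1;
const STENCIL: u8 = 1 << 2;
const DEPTH_STENCIL: u8 = DEPTH | STENCIL;

#[derive(Clone, Copy)]
enum Format {
    Depth24PlusStencil8,
    Depth32Float,
    Rgba8Unorm,
}

#[derive(Clone, Copy)]
enum Aspect {
    All,
    DepthOnly,
    StencilOnly,
}

// Mirror of `From<wgt::TextureFormat> for FormatAspects`.
fn format_aspects(format: Format) -> u8 {
    match format {
        Format::Depth24PlusStencil8 => DEPTH_STENCIL,
        Format::Depth32Float => DEPTH,
        Format::Rgba8Unorm => COLOR,
    }
}

// Mirror of `FormatAspects::new`: intersect the format's aspects with the
// mask implied by the requested aspect.
fn select_aspects(format: Format, aspect: Aspect) -> u8 {
    let mask = match aspect {
        Aspect::All => u8::MAX,
        Aspect::DepthOnly => DEPTH,
        Aspect::StencilOnly => STENCIL,
    };
    format_aspects(format) & mask
}

fn main() {
    // A depth-stencil format viewed through `DepthOnly` keeps only DEPTH.
    assert_eq!(select_aspects(Format::Depth24PlusStencil8, Aspect::DepthOnly), DEPTH);
    // Requesting the stencil aspect of a pure-depth format yields no aspects.
    assert_eq!(select_aspects(Format::Depth32Float, Aspect::StencilOnly), 0);
}
```

Note how an invalid combination (e.g. a stencil view of a depth-only format) naturally collapses to an empty aspect set rather than panicking, matching the masking behavior above.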
1740
1741bitflags!(
1742 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1743 pub struct MemoryFlags: u32 {
1744 const TRANSIENT = 1 << 0;
1745 const PREFER_COHERENT = 1 << 1;
1746 }
1747);
1748
1749 // TODO: it's not intuitive for the backends to consider `LOAD` being optional.
1750
1751bitflags!(
1752 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1753 pub struct AttachmentOps: u8 {
1754 const LOAD = 1 << 0;
1755 const STORE = 1 << 1;
1756 }
1757);
1758
1759#[derive(Clone, Debug)]
1760pub struct InstanceDescriptor<'a> {
1761 pub name: &'a str,
1762 pub flags: wgt::InstanceFlags,
1763 pub memory_budget_thresholds: wgt::MemoryBudgetThresholds,
1764 pub backend_options: wgt::BackendOptions,
1765}
1766
1767#[derive(Clone, Debug)]
1768pub struct Alignments {
1769 /// The alignment of the start of the buffer used as a GPU copy source.
1770 pub buffer_copy_offset: wgt::BufferSize,
1771
1772 /// The alignment of the row pitch of the texture data stored in a buffer that is
1773 /// used in a GPU copy operation.
1774 pub buffer_copy_pitch: wgt::BufferSize,
1775
1776 /// The finest alignment of bound range checking for uniform buffers.
1777 ///
1778 /// When `wgpu_hal` restricts shader references to the [accessible
1779 /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1780 /// is the bind group binding's stated [size], rounded up to the next
1781 /// multiple of this value.
1782 ///
1783 /// We don't need an analogous field for storage buffer bindings, because
1784 /// all our backends promise to enforce the size at least to a four-byte
1785 /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1786 /// of four anyway.
1787 ///
1788 /// [ar]: struct.BufferBinding.html#accessible-region
1789 /// [`Uniform`]: wgt::BufferBindingType::Uniform
1790 /// [size]: BufferBinding::size
1791 pub uniform_bounds_check_alignment: wgt::BufferSize,
1792
1793 /// The size, in bytes, of a raw TLAS instance.
1794 pub raw_tlas_instance_size: usize,
1795
1796 /// The required alignment of the scratch buffer used to build an acceleration structure.
1797 pub ray_tracing_scratch_buffer_alignment: u32,
1798}
1799
1800#[derive(Clone, Debug)]
1801pub struct Capabilities {
1802 pub limits: wgt::Limits,
1803 pub alignments: Alignments,
1804 pub downlevel: wgt::DownlevelCapabilities,
1805}
1806
1807#[derive(Debug)]
1808pub struct ExposedAdapter<A: Api> {
1809 pub adapter: A::Adapter,
1810 pub info: wgt::AdapterInfo,
1811 pub features: wgt::Features,
1812 pub capabilities: Capabilities,
1813}
1814
1815 /// Describes a `Surface`'s presentation capabilities.
1816 /// Fetch this with [`Adapter::surface_capabilities`].
1817#[derive(Debug, Clone)]
1818pub struct SurfaceCapabilities {
1819 /// List of supported texture formats.
1820 ///
1821 /// Must contain at least one format.
1822 pub formats: Vec<wgt::TextureFormat>,
1823
1824 /// Range for the number of queued frames.
1825 ///
1826 /// Depending on the backend, this either sets the swapchain frame count to `value + 1`, calls `SetMaximumFrameLatency` with the given value,
1827 /// or uses a wait-for-present in the acquire method so that rendering behaves as if the swapchain had `value + 1` frames.
1828 ///
1829 /// - `maximum_frame_latency.start` must be at least 1.
1830 /// - `maximum_frame_latency.end` must be greater than or equal to `maximum_frame_latency.start`.
1831 pub maximum_frame_latency: RangeInclusive<u32>,
1832
1833 /// Current extent of the surface, if known.
1834 pub current_extent: Option<wgt::Extent3d>,
1835
1836 /// Supported texture usage flags.
1837 ///
1838 /// Must include at least `wgt::TextureUses::COLOR_TARGET`.
1839 pub usage: wgt::TextureUses,
1840
1841 /// List of supported V-sync modes.
1842 ///
1843 /// Must contain at least one mode.
1844 pub present_modes: Vec<wgt::PresentMode>,
1845
1846 /// List of supported alpha composition modes.
1847 ///
1848 /// Must contain at least one mode.
1849 pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1850}
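The invariants on `maximum_frame_latency` (start at least 1, end at least start) mean a caller can always clamp a requested latency into the supported range. A small sketch, using a hypothetical helper that is not part of the wgpu-hal API:

```rust
use std::ops::RangeInclusive;

// Fit a requested frame latency into the `maximum_frame_latency` range
// reported by `SurfaceCapabilities` (hypothetical caller-side helper).
fn clamp_frame_latency(requested: u32, supported: RangeInclusive<u32>) -> u32 {
    // `clamp` requires start <= end, which the capabilities guarantee.
    requested.clamp(*supported.start(), *supported.end())
}

fn main() {
    let supported = 1..=3;
    // Requests below the minimum are raised to it...
    assert_eq!(clamp_frame_latency(0, supported.clone()), 1);
    // ...in-range requests pass through...
    assert_eq!(clamp_frame_latency(2, supported.clone()), 2);
    // ...and requests above the maximum are lowered to it.
    assert_eq!(clamp_frame_latency(10, supported), 3);
}
```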
1851
1852#[derive(Debug)]
1853pub struct AcquiredSurfaceTexture<A: Api> {
1854 pub texture: A::SurfaceTexture,
1855 /// The presentation configuration no longer matches
1856 /// the surface properties exactly, but can still be used to present
1857 /// to the surface successfully.
1858 pub suboptimal: bool,
1859}
1860
1861#[derive(Debug)]
1862pub struct OpenDevice<A: Api> {
1863 pub device: A::Device,
1864 pub queue: A::Queue,
1865}
1866
1867#[derive(Clone, Debug)]
1868pub struct BufferMapping {
1869 pub ptr: NonNull<u8>,
1870 pub is_coherent: bool,
1871}
1872
1873#[derive(Clone, Debug)]
1874pub struct BufferDescriptor<'a> {
1875 pub label: Label<'a>,
1876 pub size: wgt::BufferAddress,
1877 pub usage: wgt::BufferUses,
1878 pub memory_flags: MemoryFlags,
1879}
1880
1881#[derive(Clone, Debug)]
1882pub struct TextureDescriptor<'a> {
1883 pub label: Label<'a>,
1884 pub size: wgt::Extent3d,
1885 pub mip_level_count: u32,
1886 pub sample_count: u32,
1887 pub dimension: wgt::TextureDimension,
1888 pub format: wgt::TextureFormat,
1889 pub usage: wgt::TextureUses,
1890 pub memory_flags: MemoryFlags,
1891 /// Allows views of this texture to have a different format
1892 /// than the texture does.
1893 pub view_formats: Vec<wgt::TextureFormat>,
1894}
1895
1896impl TextureDescriptor<'_> {
1897 pub fn copy_extent(&self) -> CopyExtent {
1898 CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
1899 }
1900
1901 pub fn is_cube_compatible(&self) -> bool {
1902 self.dimension == wgt::TextureDimension::D2
1903 && self.size.depth_or_array_layers % 6 == 0
1904 && self.sample_count == 1
1905 && self.size.width == self.size.height
1906 }
1907
1908 pub fn array_layer_count(&self) -> u32 {
1909 match self.dimension {
1910 wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
1911 wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
1912 }
1913 }
1914}
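The `TextureDescriptor` helpers above can be mirrored standalone. This sketch assumes simplified local `Dim`/`Extent` types in place of `wgt::TextureDimension` and `wgt::Extent3d`:

```rust
// Standalone mirror of `TextureDescriptor::array_layer_count` and
// `is_cube_compatible`; the types are hypothetical stand-ins.
#[derive(Clone, Copy, PartialEq)]
enum Dim {
    D1,
    D2,
    D3,
}

struct Extent {
    width: u32,
    height: u32,
    depth_or_array_layers: u32,
}

struct Desc {
    size: Extent,
    dimension: Dim,
    sample_count: u32,
}

impl Desc {
    // Only 2D textures carry array layers; 1D and 3D report a single layer.
    fn array_layer_count(&self) -> u32 {
        match self.dimension {
            Dim::D1 | Dim::D3 => 1,
            Dim::D2 => self.size.depth_or_array_layers,
        }
    }

    // A cube-compatible texture is 2D, square, single-sampled, and has a
    // layer count divisible by 6 (one layer per cube face).
    fn is_cube_compatible(&self) -> bool {
        self.dimension == Dim::D2
            && self.size.depth_or_array_layers % 6 == 0
            && self.sample_count == 1
            && self.size.width == self.size.height
    }
}

fn main() {
    let cube = Desc {
        size: Extent { width: 64, height: 64, depth_or_array_layers: 6 },
        dimension: Dim::D2,
        sample_count: 1,
    };
    assert!(cube.is_cube_compatible());
    assert_eq!(cube.array_layer_count(), 6);
}
```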
1915
1916/// TextureView descriptor.
1917///
1918/// Valid usage:
1919 /// - `format` has to be the same as `TextureDescriptor::format`
1920 /// - `dimension` has to be compatible with `TextureDescriptor::dimension`
1921 /// - `usage` has to be a subset of `TextureDescriptor::usage`
1922 /// - `range` has to be a subset of the parent texture
1923#[derive(Clone, Debug)]
1924pub struct TextureViewDescriptor<'a> {
1925 pub label: Label<'a>,
1926 pub format: wgt::TextureFormat,
1927 pub dimension: wgt::TextureViewDimension,
1928 pub usage: wgt::TextureUses,
1929 pub range: wgt::ImageSubresourceRange,
1930}
1931
1932#[derive(Clone, Debug)]
1933pub struct SamplerDescriptor<'a> {
1934 pub label: Label<'a>,
1935 pub address_modes: [wgt::AddressMode; 3],
1936 pub mag_filter: wgt::FilterMode,
1937 pub min_filter: wgt::FilterMode,
1938 pub mipmap_filter: wgt::FilterMode,
1939 pub lod_clamp: Range<f32>,
1940 pub compare: Option<wgt::CompareFunction>,
1941 // Must be in the range [1, 16].
1942 //
1943 // Anisotropic filtering must be supported if this is not 1.
1944 pub anisotropy_clamp: u16,
1945 pub border_color: Option<wgt::SamplerBorderColor>,
1946}
1947
1948/// BindGroupLayout descriptor.
1949///
1950/// Valid usage:
1951/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
1952#[derive(Clone, Debug)]
1953pub struct BindGroupLayoutDescriptor<'a> {
1954 pub label: Label<'a>,
1955 pub flags: BindGroupLayoutFlags,
1956 pub entries: &'a [wgt::BindGroupLayoutEntry],
1957}
1958
1959#[derive(Clone, Debug)]
1960pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
1961 pub label: Label<'a>,
1962 pub flags: PipelineLayoutFlags,
1963 pub bind_group_layouts: &'a [&'a B],
1964 pub push_constant_ranges: &'a [wgt::PushConstantRange],
1965}
1966
1967/// A region of a buffer made visible to shaders via a [`BindGroup`].
1968///
1969/// [`BindGroup`]: Api::BindGroup
1970///
1971/// ## Accessible region
1972///
1973/// `wgpu_hal` guarantees that shaders compiled with
1974 /// bounds checks enabled in [`ShaderModuleDescriptor::runtime_checks`] cannot read or
1975/// write data via this binding outside the *accessible region* of [`buffer`]:
1976///
1977/// - The accessible region starts at [`offset`].
1978///
1979/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
1980/// which must be a multiple of 4.
1981///
1982/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
1983/// rounded up to the next multiple of
1984/// [`Alignments::uniform_bounds_check_alignment`].
1985///
1986/// Note that this guarantee is stricter than WGSL's requirements for
1987/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
1988/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
1989/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
1990/// never be read by the application before they are overwritten. This
1991/// optimization consults bind group buffer binding regions to determine which
1992/// parts of which buffers shaders might observe. This optimization is only
1993/// sound if shader access is bounds-checked.
1994///
1995/// [`buffer`]: BufferBinding::buffer
1996/// [`offset`]: BufferBinding::offset
1997/// [`size`]: BufferBinding::size
1998/// [`Storage`]: wgt::BufferBindingType::Storage
1999/// [`Uniform`]: wgt::BufferBindingType::Uniform
2000/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
2001#[derive(Debug)]
2002pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
2003 /// The buffer being bound.
2004 pub buffer: &'a B,
2005
2006 /// The offset at which the bound region starts.
2007 ///
2008 /// This must be less than the size of the buffer. Some back ends
2009 /// cannot tolerate zero-length regions; for example, see
2010 /// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
2011 /// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
2012 /// documentation for GLES's [glBindBufferRange][bbr].
2013 ///
2014 /// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
2015 /// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
2016 /// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
2017 pub offset: wgt::BufferAddress,
2018
2019 /// The size of the region bound, in bytes.
2020 ///
2021 /// If `None`, the region extends from `offset` to the end of the
2022 /// buffer. Given the restrictions on `offset`, this means that
2023 /// the size is always greater than zero.
2024 pub size: Option<wgt::BufferSize>,
2025}
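The accessible-region sizing described above can be sketched in isolation: storage bindings use the stated `size` directly (a multiple of 4), while uniform bindings round it up to `Alignments::uniform_bounds_check_alignment`. The helper and types below are simplified stand-ins, not the wgpu-hal API:

```rust
// Hypothetical sketch of accessible-region sizing per the docs above.
#[derive(Clone, Copy)]
enum BindingType {
    Uniform,
    Storage,
}

fn accessible_region_size(bound_size: u64, ty: BindingType, uniform_align: u64) -> u64 {
    match ty {
        // Storage bindings are checked against `size` as-is.
        BindingType::Storage => bound_size,
        // Uniform bindings round `size` up to the next multiple of the
        // backend's bounds-check alignment.
        BindingType::Uniform => (bound_size + uniform_align - 1) / uniform_align * uniform_align,
    }
}

fn main() {
    // With a 256-byte alignment, a 100-byte uniform binding is checked
    // against a 256-byte accessible region...
    assert_eq!(accessible_region_size(100, BindingType::Uniform, 256), 256);
    // ...an already-aligned size is unchanged...
    assert_eq!(accessible_region_size(512, BindingType::Uniform, 256), 512);
    // ...and storage bindings are not rounded at all.
    assert_eq!(accessible_region_size(100, BindingType::Storage, 256), 100);
}
```

This rounding is why uniform bindings may expose a little more of the buffer to the shader than the stated `size`, which in turn matters for the uninitialized-memory optimization described above.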
2026
2027impl<'a, T: DynBuffer + ?Sized> Clone for BufferBinding<'a, T> {
2028 fn clone(&self) -> Self {
2029 BufferBinding {
2030 buffer: self.buffer,
2031 offset: self.offset,
2032 size: self.size,
2033 }
2034 }
2035}
2036
2037#[derive(Debug)]
2038pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2039 pub view: &'a T,
2040 pub usage: wgt::TextureUses,
2041}
2042
2043impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2044 fn clone(&self) -> Self {
2045 TextureBinding {
2046 view: self.view,
2047 usage: self.usage,
2048 }
2049 }
2050}
2051
2052/// cbindgen:ignore
2053#[derive(Clone, Debug)]
2054pub struct BindGroupEntry {
2055 pub binding: u32,
2056 pub resource_index: u32,
2057 pub count: u32,
2058}
2059
2060/// BindGroup descriptor.
2061///
2062/// Valid usage:
2063 /// - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2064 /// - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2065 /// - each entry has to be compatible with the `layout`
2066 /// - each entry's `BindGroupEntry::resource_index` is within range
2067/// of the corresponding resource array, selected by the relevant
2068/// `BindGroupLayoutEntry`.
2069#[derive(Clone, Debug)]
2070pub struct BindGroupDescriptor<
2071 'a,
2072 Bgl: DynBindGroupLayout + ?Sized,
2073 B: DynBuffer + ?Sized,
2074 S: DynSampler + ?Sized,
2075 T: DynTextureView + ?Sized,
2076 A: DynAccelerationStructure + ?Sized,
2077> {
2078 pub label: Label<'a>,
2079 pub layout: &'a Bgl,
2080 pub buffers: &'a [BufferBinding<'a, B>],
2081 pub samplers: &'a [&'a S],
2082 pub textures: &'a [TextureBinding<'a, T>],
2083 pub entries: &'a [BindGroupEntry],
2084 pub acceleration_structures: &'a [&'a A],
2085}
2086
2087#[derive(Clone, Debug)]
2088pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2089 pub label: Label<'a>,
2090 pub queue: &'a Q,
2091}
2092
2093/// Naga shader module.
2094#[derive(Default)]
2095pub struct NagaShader {
2096 /// Shader module IR.
2097 pub module: Cow<'static, naga::Module>,
2098 /// Analysis information of the module.
2099 pub info: naga::valid::ModuleInfo,
2100 /// Source code for debugging.
2101 pub debug_source: Option<DebugSource>,
2102}
2103
2104// Custom implementation avoids the need to generate Debug impl code
2105// for the whole Naga module and info.
2106impl fmt::Debug for NagaShader {
2107 fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2108 write!(formatter, "Naga shader")
2109 }
2110}
2111
2112/// Shader input.
2113#[allow(clippy::large_enum_variant)]
2114pub enum ShaderInput<'a> {
2115 Naga(NagaShader),
2116 Msl {
2117 shader: String,
2118 entry_point: String,
2119 num_workgroups: (u32, u32, u32),
2120 },
2121 SpirV(&'a [u32]),
2122 Dxil {
2123 shader: &'a [u8],
2124 entry_point: String,
2125 num_workgroups: (u32, u32, u32),
2126 },
2127 Hlsl {
2128 shader: &'a str,
2129 entry_point: String,
2130 num_workgroups: (u32, u32, u32),
2131 },
2132}
2133
2134pub struct ShaderModuleDescriptor<'a> {
2135 pub label: Label<'a>,
2136
2137 /// # Safety
2138 ///
2139 /// See the documentation for each flag in [`ShaderRuntimeChecks`][src].
2140 ///
2141 /// [src]: wgt::ShaderRuntimeChecks
2142 pub runtime_checks: wgt::ShaderRuntimeChecks,
2143}
2144
2145#[derive(Debug, Clone)]
2146pub struct DebugSource {
2147 pub file_name: Cow<'static, str>,
2148 pub source_code: Cow<'static, str>,
2149}
2150
2151/// Describes a programmable pipeline stage.
2152#[derive(Debug)]
2153pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2154 /// The compiled shader module for this stage.
2155 pub module: &'a M,
2156 /// The name of the entry point in the compiled shader. There must be a function with this name
2157 /// in the shader.
2158 pub entry_point: &'a str,
2159 /// Pipeline constants
2160 pub constants: &'a naga::back::PipelineConstants,
2161 /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2162 ///
2163 /// This is required by the WebGPU spec, but may have overhead which can be avoided
2164 /// for cross-platform applications.
2165 pub zero_initialize_workgroup_memory: bool,
2166}
2167
2168impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2169 fn clone(&self) -> Self {
2170 Self {
2171 module: self.module,
2172 entry_point: self.entry_point,
2173 constants: self.constants,
2174 zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2175 }
2176 }
2177}
2178
2179/// Describes a compute pipeline.
2180#[derive(Clone, Debug)]
2181pub struct ComputePipelineDescriptor<
2182 'a,
2183 Pl: DynPipelineLayout + ?Sized,
2184 M: DynShaderModule + ?Sized,
2185 Pc: DynPipelineCache + ?Sized,
2186> {
2187 pub label: Label<'a>,
2188 /// The layout of bind groups for this pipeline.
2189 pub layout: &'a Pl,
2190 /// The compiled compute stage and its entry point.
2191 pub stage: ProgrammableStage<'a, M>,
2192 /// The cache which will be used and filled when compiling this pipeline
2193 pub cache: Option<&'a Pc>,
2194}
2195
2196pub struct PipelineCacheDescriptor<'a> {
2197 pub label: Label<'a>,
2198 pub data: Option<&'a [u8]>,
2199}
2200
2201/// Describes how the vertex buffer is interpreted.
2202#[derive(Clone, Debug)]
2203pub struct VertexBufferLayout<'a> {
2204 /// The stride, in bytes, between elements of this buffer.
2205 pub array_stride: wgt::BufferAddress,
2206 /// How often this vertex buffer is "stepped" forward.
2207 pub step_mode: wgt::VertexStepMode,
2208 /// The list of attributes which comprise a single vertex.
2209 pub attributes: &'a [wgt::VertexAttribute],
2210}
2211
2212/// Describes a render (graphics) pipeline.
2213#[derive(Clone, Debug)]
2214pub struct RenderPipelineDescriptor<
2215 'a,
2216 Pl: DynPipelineLayout + ?Sized,
2217 M: DynShaderModule + ?Sized,
2218 Pc: DynPipelineCache + ?Sized,
2219> {
2220 pub label: Label<'a>,
2221 /// The layout of bind groups for this pipeline.
2222 pub layout: &'a Pl,
2223 /// The format of any vertex buffers used with this pipeline.
2224 pub vertex_buffers: &'a [VertexBufferLayout<'a>],
2225 /// The vertex stage for this pipeline.
2226 pub vertex_stage: ProgrammableStage<'a, M>,
2227 /// The properties of the pipeline at the primitive assembly and rasterization level.
2228 pub primitive: wgt::PrimitiveState,
2229 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2230 pub depth_stencil: Option<wgt::DepthStencilState>,
2231 /// The multi-sampling properties of the pipeline.
2232 pub multisample: wgt::MultisampleState,
2233 /// The fragment stage for this pipeline.
2234 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2235 /// The effect of draw calls on the color aspect of the output target.
2236 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2237 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2238 /// layers the attachments will have.
2239 pub multiview: Option<NonZeroU32>,
2240 /// The cache which will be used and filled when compiling this pipeline
2241 pub cache: Option<&'a Pc>,
2242}

/// Describes a mesh shading pipeline.
2243pub struct MeshPipelineDescriptor<
2244 'a,
2245 Pl: DynPipelineLayout + ?Sized,
2246 M: DynShaderModule + ?Sized,
2247 Pc: DynPipelineCache + ?Sized,
2248> {
2249 pub label: Label<'a>,
2250 /// The layout of bind groups for this pipeline.
2251 pub layout: &'a Pl,
2252 pub task_stage: Option<ProgrammableStage<'a, M>>,
2253 pub mesh_stage: ProgrammableStage<'a, M>,
2254 /// The properties of the pipeline at the primitive assembly and rasterization level.
2255 pub primitive: wgt::PrimitiveState,
2256 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2257 pub depth_stencil: Option<wgt::DepthStencilState>,
2258 /// The multi-sampling properties of the pipeline.
2259 pub multisample: wgt::MultisampleState,
2260 /// The fragment stage for this pipeline.
2261 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2262 /// The effect of draw calls on the color aspect of the output target.
2263 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2264 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2265 /// layers the attachments will have.
2266 pub multiview: Option<NonZeroU32>,
2267 /// The cache which will be used and filled when compiling this pipeline
2268 pub cache: Option<&'a Pc>,
2269}
2270
2271#[derive(Debug, Clone)]
2272pub struct SurfaceConfiguration {
2273 /// Maximum number of queued frames. Must be in
2274 /// `SurfaceCapabilities::maximum_frame_latency` range.
2275 pub maximum_frame_latency: u32,
2276 /// Vertical synchronization mode.
2277 pub present_mode: wgt::PresentMode,
2278 /// Alpha composition mode.
2279 pub composite_alpha_mode: wgt::CompositeAlphaMode,
2280 /// Format of the surface textures.
2281 pub format: wgt::TextureFormat,
2282 /// Requested texture extent. Must be compatible with
2283 /// `SurfaceCapabilities::current_extent`, if known.
2284 pub extent: wgt::Extent3d,
2285 /// Allowed usage of surface textures.
2286 pub usage: wgt::TextureUses,
2287 /// Allows views of swapchain texture to have a different format
2288 /// than the texture does.
2289 pub view_formats: Vec<wgt::TextureFormat>,
2290}
2291
2292#[derive(Debug, Clone)]
2293pub struct Rect<T> {
2294 pub x: T,
2295 pub y: T,
2296 pub w: T,
2297 pub h: T,
2298}
2299
2300#[derive(Debug, Clone, PartialEq)]
2301pub struct StateTransition<T> {
2302 pub from: T,
2303 pub to: T,
2304}
2305
2306#[derive(Debug, Clone)]
2307pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2308 pub buffer: &'a B,
2309 pub usage: StateTransition<wgt::BufferUses>,
2310}
2311
2312#[derive(Debug, Clone)]
2313pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2314 pub texture: &'a T,
2315 pub range: wgt::ImageSubresourceRange,
2316 pub usage: StateTransition<wgt::TextureUses>,
2317}
2318
2319#[derive(Clone, Copy, Debug)]
2320pub struct BufferCopy {
2321 pub src_offset: wgt::BufferAddress,
2322 pub dst_offset: wgt::BufferAddress,
2323 pub size: wgt::BufferSize,
2324}
2325
2326#[derive(Clone, Debug)]
2327pub struct TextureCopyBase {
2328 pub mip_level: u32,
2329 pub array_layer: u32,
2330 /// Origin within a texture.
2331 /// Note: for 1D and 2D textures, Z must be 0.
2332 pub origin: wgt::Origin3d,
2333 pub aspect: FormatAspects,
2334}
2335
2336#[derive(Clone, Copy, Debug)]
2337pub struct CopyExtent {
2338 pub width: u32,
2339 pub height: u32,
2340 pub depth: u32,
2341}
2342
2343#[derive(Clone, Debug)]
2344pub struct TextureCopy {
2345 pub src_base: TextureCopyBase,
2346 pub dst_base: TextureCopyBase,
2347 pub size: CopyExtent,
2348}
2349
2350#[derive(Clone, Debug)]
2351pub struct BufferTextureCopy {
2352 pub buffer_layout: wgt::TexelCopyBufferLayout,
2353 pub texture_base: TextureCopyBase,
2354 pub size: CopyExtent,
2355}
2356
2357#[derive(Clone, Debug)]
2358pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2359 pub view: &'a T,
2360 /// Contains either a single mutating usage as a target,
2361 /// or a valid combination of read-only usages.
2362 pub usage: wgt::TextureUses,
2363}
2364
2365#[derive(Clone, Debug)]
2366pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2367 pub target: Attachment<'a, T>,
2368 pub depth_slice: Option<u32>,
2369 pub resolve_target: Option<Attachment<'a, T>>,
2370 pub ops: AttachmentOps,
2371 pub clear_value: wgt::Color,
2372}
2373
2374#[derive(Clone, Debug)]
2375pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2376 pub target: Attachment<'a, T>,
2377 pub depth_ops: AttachmentOps,
2378 pub stencil_ops: AttachmentOps,
2379 pub clear_value: (f32, u32),
2380}
2381
2382#[derive(Clone, Debug)]
2383pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2384 pub query_set: &'a Q,
2385 pub beginning_of_pass_write_index: Option<u32>,
2386 pub end_of_pass_write_index: Option<u32>,
2387}
2388
2389#[derive(Clone, Debug)]
2390pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2391 pub label: Label<'a>,
2392 pub extent: wgt::Extent3d,
2393 pub sample_count: u32,
2394 pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2395 pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2396 pub multiview: Option<NonZeroU32>,
2397 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2398 pub occlusion_query_set: Option<&'a Q>,
2399}
2400
2401#[derive(Clone, Debug)]
2402pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2403 pub label: Label<'a>,
2404 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2405}
2406
2407#[test]
2408fn test_default_limits() {
2409 let limits = wgt::Limits::default();
2410 assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2411}
2412
2413#[derive(Clone, Debug)]
2414pub struct AccelerationStructureDescriptor<'a> {
2415 pub label: Label<'a>,
2416 pub size: wgt::BufferAddress,
2417 pub format: AccelerationStructureFormat,
2418 pub allow_compaction: bool,
2419}
2420
2421#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2422pub enum AccelerationStructureFormat {
2423 TopLevel,
2424 BottomLevel,
2425}
2426
2427#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2428pub enum AccelerationStructureBuildMode {
2429 Build,
2430 Update,
2431}
2432
2433 /// Information about the required sizes for a corresponding entries struct (+ flags).
2434#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2435pub struct AccelerationStructureBuildSizes {
2436 pub acceleration_structure_size: wgt::BufferAddress,
2437 pub update_scratch_size: wgt::BufferAddress,
2438 pub build_scratch_size: wgt::BufferAddress,
2439}
2440
2441 /// Updates use `source_acceleration_structure` if present; otherwise the update is performed in place.
2442 /// For updates, only the data is allowed to change (not the metadata or sizes).
2443#[derive(Clone, Debug)]
2444pub struct BuildAccelerationStructureDescriptor<
2445 'a,
2446 B: DynBuffer + ?Sized,
2447 A: DynAccelerationStructure + ?Sized,
2448> {
2449 pub entries: &'a AccelerationStructureEntries<'a, B>,
2450 pub mode: AccelerationStructureBuildMode,
2451 pub flags: AccelerationStructureBuildFlags,
2452 pub source_acceleration_structure: Option<&'a A>,
2453 pub destination_acceleration_structure: &'a A,
2454 pub scratch_buffer: &'a B,
2455 pub scratch_buffer_offset: wgt::BufferAddress,
2456}
2457
2458/// - All buffers, buffer addresses and offsets will be ignored.
2459/// - The build mode will be ignored.
2460 /// - Reducing the number of instances, triangle groups, or AABB groups (or the number of triangles/AABBs in the corresponding groups)
2461 /// may result in reduced size requirements.
2462/// - Any other change may result in a bigger or smaller size requirement.
2463#[derive(Clone, Debug)]
2464pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2465 pub entries: &'a AccelerationStructureEntries<'a, B>,
2466 pub flags: AccelerationStructureBuildFlags,
2467}
2468
2469/// Entries for a single descriptor
2470/// * `Instances` - Multiple instances for a top level acceleration structure
2471/// * `Triangles` - Multiple triangle meshes for a bottom level acceleration structure
2472 /// * `AABBs` - Lists of axis-aligned bounding boxes for a bottom level acceleration structure
2473#[derive(Debug)]
2474pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2475 Instances(AccelerationStructureInstances<'a, B>),
2476 Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2477 AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2478}
2479
2480/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2481/// * `indices` - optional index buffer with attributes
2482/// * `transform` - optional transform
2483#[derive(Clone, Debug)]
2484pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2485 pub vertex_buffer: Option<&'a B>,
2486 pub vertex_format: wgt::VertexFormat,
2487 pub first_vertex: u32,
2488 pub vertex_count: u32,
2489 pub vertex_stride: wgt::BufferAddress,
2490 pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2491 pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2492 pub flags: AccelerationStructureGeometryFlags,
2493}
2494
/// * `offset` - offset in bytes
#[derive(Clone, Debug)]
pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
    pub buffer: Option<&'a B>,
    pub offset: u32,
    pub count: u32,
    pub stride: wgt::BufferAddress,
    pub flags: AccelerationStructureGeometryFlags,
}

/// Parameters for copying one acceleration structure into another.
pub struct AccelerationStructureCopy {
    pub copy_flags: wgt::AccelerationStructureCopy,
    pub type_flags: wgt::AccelerationStructureType,
}

/// * `offset` - offset in bytes
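///
/// A hypothetical sketch of describing the input for a top level acceleration
/// structure build, assuming `instance_buffer` already holds `count` packed
/// instance records (not compiled as a doc test):
///
/// ```ignore
/// let entries = AccelerationStructureEntries::Instances(
///     AccelerationStructureInstances {
///         buffer: Some(&instance_buffer),
///         offset: 0,
///         count: 4,
///     },
/// );
/// ```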
#[derive(Clone, Debug)]
pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
    pub buffer: Option<&'a B>,
    pub offset: u32,
    pub count: u32,
}

/// * `offset` - offset in bytes
#[derive(Clone, Debug)]
pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
    pub format: wgt::IndexFormat,
    pub buffer: Option<&'a B>,
    pub offset: u32,
    pub count: u32,
}

/// * `offset` - offset in bytes
#[derive(Clone, Debug)]
pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
    pub buffer: &'a B,
    pub offset: u32,
}

pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
pub use wgt::AccelerationStructureGeometryFlags;

bitflags::bitflags! {
    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
    pub struct AccelerationStructureUses: u8 {
        /// A BLAS used as an input for a TLAS build.
        const BUILD_INPUT = 1 << 0;
        /// The target of an acceleration structure build.
        const BUILD_OUTPUT = 1 << 1;
        /// A TLAS read in a shader.
        const SHADER_INPUT = 1 << 2;
        /// A BLAS queried for its compacted size.
        const QUERY_INPUT = 1 << 3;
        /// A BLAS used as the source of a copy operation.
        const COPY_SRC = 1 << 4;
        /// A BLAS used as the destination of a copy operation.
        const COPY_DST = 1 << 5;
    }
}

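/// Describes a transition in how an acceleration structure is used.
///
/// A hypothetical sketch: transitioning a structure from being the target of a
/// build to being read in a shader (not compiled as a doc test, and the
/// `StateTransition` field names are assumed):
///
/// ```ignore
/// let barrier = AccelerationStructureBarrier {
///     usage: StateTransition {
///         from: AccelerationStructureUses::BUILD_OUTPUT,
///         to: AccelerationStructureUses::SHADER_INPUT,
///     },
/// };
/// ```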
#[derive(Debug, Clone)]
pub struct AccelerationStructureBarrier {
    pub usage: StateTransition<AccelerationStructureUses>,
}

/// An instance within a top level acceleration structure.
#[derive(Debug, Copy, Clone)]
pub struct TlasInstance {
    /// Row-major 3x4 affine transform of the instance.
    pub transform: [f32; 12],
    /// Custom index made available to shaders (only the lower 24 bits are used).
    pub custom_data: u32,
    /// Visibility mask; the instance is only hit if `mask & ray_mask != 0`.
    pub mask: u8,
    /// Device address of the bottom level acceleration structure this instance refers to.
    pub blas_address: u64,
}