Struct bevy::wgpu::WgpuResourcesReadLock

pub struct WgpuResourcesReadLock<'a> {
    pub buffers: RwLockReadGuard<'a, RawRwLock, HashMap<BufferId, Arc<Buffer>, RandomState>>,
    pub textures: RwLockReadGuard<'a, RawRwLock, HashMap<TextureId, TextureView, RandomState>>,
    pub swap_chain_frames: RwLockReadGuard<'a, RawRwLock, HashMap<TextureId, SwapChainFrame, RandomState>>,
    pub render_pipelines: RwLockReadGuard<'a, RawRwLock, HashMap<Handle<PipelineDescriptor>, RenderPipeline, RandomState>>,
    pub bind_groups: RwLockReadGuard<'a, RawRwLock, HashMap<BindGroupDescriptorId, WgpuBindGroupInfo, RandomState>>,
    pub used_bind_group_sender: Sender<BindGroupId>,
}

Grabs a read lock on all wgpu resources. When paired with WgpuResourceRefs, this allows you to pass wgpu resources into a wgpu::RenderPass<'a> with the appropriate lifetime. This is accomplished by grabbing a WgpuResourcesReadLock before creating a wgpu::RenderPass, getting a WgpuResourceRefs from it, and storing that in the pass.

This is only a problem because each call to RwLock::read() returns a guard with a new anonymous lifetime. If you call RwLock::read() during a pass, the resulting reference has an anonymous lifetime that lives for less than the pass, which violates the lifetime constraints in place.

The biggest implication of this design (other than the additional boilerplate here) is that beginning a render pass blocks writes to these resources. This means that if the pass attempts to write any resource, a deadlock will occur. WgpuResourceRefs only has immutable references, so the only way to make a deadlock happen is to access WgpuResources directly in the pass. It also means that other threads attempting to write resources will need to wait for pass encoding to finish. Almost all writes should occur before passes start, so this hopefully won't be a problem.

It is worth comparing the performance of this to transactional / copy-based approaches. This lock-based design guarantees consistency, performs no redundant allocations, and only blocks while a write is occurring. A copy-based approach would never block, but would require more allocations and state synchronization, which I expect will be more expensive. It would also be "eventually consistent" instead of "strongly consistent".

Single-threaded implementations don't need to worry about these lifetime constraints at all. RenderPasses can use a RenderContext's WgpuResources directly, because the RenderContext already outlives the RenderPass.

Fields

buffers: RwLockReadGuard<'a, RawRwLock, HashMap<BufferId, Arc<Buffer>, RandomState>>

textures: RwLockReadGuard<'a, RawRwLock, HashMap<TextureId, TextureView, RandomState>>

swap_chain_frames: RwLockReadGuard<'a, RawRwLock, HashMap<TextureId, SwapChainFrame, RandomState>>

render_pipelines: RwLockReadGuard<'a, RawRwLock, HashMap<Handle<PipelineDescriptor>, RenderPipeline, RandomState>>

bind_groups: RwLockReadGuard<'a, RawRwLock, HashMap<BindGroupDescriptorId, WgpuBindGroupInfo, RandomState>>

used_bind_group_sender: Sender<BindGroupId>

Implementations

impl<'a> WgpuResourcesReadLock<'a>

pub fn refs(&'a self) -> WgpuResourceRefs<'a>

Trait Implementations

impl<'a> Debug for WgpuResourcesReadLock<'a>

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Any for T where
    T: Any

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> Component for T where
    T: 'static + Send + Sync

impl<T> Downcast for T where
    T: Any

impl<T> DowncastSync for T where
    T: Send + Sync + Any

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>, 

impl<T> Resource for T where
    T: 'static + Send + Sync

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,