// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
//! Location in memory that contains data.
//!
//! A Vulkan buffer is very similar to a buffer that you would use in programming languages in
//! general, in the sense that it is a location in memory that contains data. The difference
//! between a Vulkan buffer and a regular buffer is that the content of a Vulkan buffer is
//! accessible from the GPU.
//!
//! Vulkano does not perform any specific marshalling of buffer data. The representation of the
//! buffer in memory is identical between the CPU and GPU. Because the Rust compiler is allowed to
//! reorder struct fields at will by default when using `#[repr(Rust)]`, it is advised to mark each
//! struct requiring input assembly as `#[repr(C)]`. This forces Rust to use the standard C
//! layout: each field is laid out in memory in the order of declaration and aligned to a
//! multiple of its alignment.
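//!
//! For example, a struct intended to be read by the GPU might be declared as follows (a sketch;
//! `BufferContents` is the trait vulkano uses to mark types that are valid buffer contents):
//!
//! ```
//! use vulkano::buffer::BufferContents;
//!
//! // `#[repr(C)]` guarantees a C-compatible field order and alignment,
//! // so the layout matches what the shader expects.
//! #[derive(BufferContents)]
//! #[repr(C)]
//! struct Vertex {
//!     position: [f32; 3],
//!     color: [f32; 4],
//! }
//! ```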
//!
//! # Multiple levels of abstraction
//!
//! - The low-level implementation of a buffer is [`RawBuffer`], which corresponds directly to a
//! `VkBuffer`, and as such doesn't hold onto any memory.
//! - [`Buffer`] is a `RawBuffer` with memory bound to it, and with state tracking.
//! - [`Subbuffer`] is what you will use most of the time, as it is what all the APIs expect. It is
//! a reference to a portion of a `Buffer`. `Subbuffer` also has a type parameter, which is a
//! hint for how the data in the portion of the buffer is going to be interpreted.
//!
//! # `Subbuffer` allocation
//!
//! There are two ways to get a `Subbuffer`:
//!
//! - By using the functions on `Buffer`, which create a new buffer and memory allocation each
//! time, and give you a `Subbuffer` that has an entire `Buffer` dedicated to it.
//! - By using the [`SubbufferAllocator`], which creates `Subbuffer`s by suballocating existing
//! `Buffer`s such that the `Buffer`s can keep being reused.
//!
//! Which of these you should choose depends on the use case. For example, if you need to upload
//! data to the device each frame, then you should use the `SubbufferAllocator`. The same applies
//! if you need to download data very frequently, or if you need to allocate many intermediary
//! buffers that are only accessed by the device. On the other hand, if you only need to upload
//! some data once, or if you can keep reusing the same buffer (because its size is unchanging),
//! it's best to use a dedicated `Buffer` for that.
//!
//! # Buffer usage
//!
//! When you create a buffer, you have to specify its *usage*. In other words, you have to
//! specify the way it is going to be used. Trying to use a buffer in a way that wasn't specified
//! when you created it will result in a runtime error.
//!
//! You can use buffers for the following purposes:
//!
//! - Can contain arbitrary data that can be transferred from/to other buffers and images.
//! - Can be read and modified from a shader.
//! - Can be used as a source of vertices and indices.
//! - Can be used as a source of parameters for indirect draw commands.
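//!
//! For example, the usage is specified through [`BufferCreateInfo`] at creation time (a sketch
//! showing only the create info, not a full buffer creation):
//!
//! ```
//! use vulkano::buffer::{BufferCreateInfo, BufferUsage};
//!
//! let create_info = BufferCreateInfo {
//!     // This buffer may be used as a vertex buffer and as a transfer destination.
//!     usage: BufferUsage::VERTEX_BUFFER | BufferUsage::TRANSFER_DST,
//!     ..Default::default()
//! };
//! ```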
//!
//! Accessing a buffer from a shader can be done in the following ways:
//!
//! - As a uniform buffer. Uniform buffers are read-only.
//! - As a storage buffer. Storage buffers can be read and written.
//! - As a uniform texel buffer. Contrary to a uniform buffer, the data is interpreted by the GPU
//! and can, for example, be normalized.
//! - As a storage texel buffer. Additionally, some data formats can be modified with atomic
//! operations.
//!
//! Using uniform/storage texel buffers requires creating a *buffer view*. See [the `view` module]
//! for how to create a buffer view.
//!
//! See also [the `shader` module documentation] for information about how buffer contents need to
//! be laid out in accordance with the shader interface.
//!
//! [`RawBuffer`]: self::sys::RawBuffer
//! [`SubbufferAllocator`]: self::allocator::SubbufferAllocator
//! [the `view` module]: self::view
//! [the `shader` module documentation]: crate::shader
pub use self::{subbuffer::*, sys::*, usage::*};
use crate::{
device::{physical::PhysicalDevice, Device, DeviceOwned},
macros::{vulkan_bitflags, vulkan_enum},
memory::{
allocator::{
AllocationCreateInfo, AllocationType, DeviceLayout, MemoryAllocator,
MemoryAllocatorError,
},
DedicatedAllocation, ExternalMemoryHandleType, ExternalMemoryHandleTypes,
ExternalMemoryProperties, MemoryRequirements, ResourceMemory,
},
range_map::RangeMap,
sync::{future::AccessError, AccessConflict, CurrentAccess, Sharing},
DeviceSize, NonNullDeviceAddress, NonZeroDeviceSize, Requires, RequiresAllOf, RequiresOneOf,
Validated, ValidationError, Version, VulkanError, VulkanObject,
};
use parking_lot::{Mutex, MutexGuard};
use smallvec::SmallVec;
use std::{
error::Error,
fmt::{Display, Formatter},
hash::{Hash, Hasher},
ops::Range,
sync::Arc,
};
pub mod allocator;
pub mod subbuffer;
pub mod sys;
mod usage;
pub mod view;
/// A storage for raw bytes.
///
/// Unlike [`RawBuffer`], a `Buffer` has memory backing it, and can be used normally.
///
/// See [the module-level documentation] for more information about buffers.
///
/// # Examples
///
/// Sometimes, you need a buffer that is rarely accessed by the host. To get the best performance
/// in this case, one should use a buffer in device-local memory, which is inaccessible from the
/// host. As such, to initialize or otherwise access such a buffer, we need a *staging buffer*.
///
/// The following example outlines the general strategy one may take when initializing a
/// device-local buffer.
///
/// ```
/// use vulkano::{
/// buffer::{BufferUsage, Buffer, BufferCreateInfo},
/// command_buffer::{
/// AutoCommandBufferBuilder, CommandBufferUsage, CopyBufferInfo,
/// PrimaryCommandBufferAbstract,
/// },
/// memory::allocator::{AllocationCreateInfo, MemoryTypeFilter},
/// sync::GpuFuture,
/// DeviceSize,
/// };
///
/// # let device: std::sync::Arc<vulkano::device::Device> = return;
/// # let queue: std::sync::Arc<vulkano::device::Queue> = return;
/// # let memory_allocator: std::sync::Arc<vulkano::memory::allocator::StandardMemoryAllocator> = return;
/// # let command_buffer_allocator: vulkano::command_buffer::allocator::StandardCommandBufferAllocator = return;
/// #
/// // Simple iterator to construct test data.
/// let data = (0..10_000).map(|i| i as f32);
///
/// // Create a host-accessible buffer initialized with the data.
/// let temporary_accessible_buffer = Buffer::from_iter(
/// memory_allocator.clone(),
/// BufferCreateInfo {
/// // Specify that this buffer will be used as a transfer source.
/// usage: BufferUsage::TRANSFER_SRC,
/// ..Default::default()
/// },
/// AllocationCreateInfo {
/// // Specify use for upload to the device.
/// memory_type_filter: MemoryTypeFilter::PREFER_HOST
/// | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE,
/// ..Default::default()
/// },
/// data,
/// )
/// .unwrap();
///
/// // Create a buffer in device-local memory with enough space for a slice of `10_000` floats.
/// let device_local_buffer = Buffer::new_slice::<f32>(
/// memory_allocator.clone(),
/// BufferCreateInfo {
/// // Specify use as a storage buffer and transfer destination.
/// usage: BufferUsage::STORAGE_BUFFER | BufferUsage::TRANSFER_DST,
/// ..Default::default()
/// },
/// AllocationCreateInfo {
/// // Specify use by the device only.
/// memory_type_filter: MemoryTypeFilter::PREFER_DEVICE,
/// ..Default::default()
/// },
/// 10_000 as DeviceSize,
/// )
/// .unwrap();
///
/// // Create a one-time command to copy between the buffers.
/// let mut cbb = AutoCommandBufferBuilder::primary(
/// &command_buffer_allocator,
/// queue.queue_family_index(),
/// CommandBufferUsage::OneTimeSubmit,
/// )
/// .unwrap();
/// cbb.copy_buffer(CopyBufferInfo::buffers(
/// temporary_accessible_buffer,
/// device_local_buffer.clone(),
/// ))
/// .unwrap();
/// let cb = cbb.build().unwrap();
///
/// // Execute the copy command and wait for completion before proceeding.
/// cb.execute(queue.clone())
/// .unwrap()
/// .then_signal_fence_and_flush()
/// .unwrap()
/// .wait(None /* timeout */)
/// .unwrap();
/// ```
///
/// [the module-level documentation]: self
#[derive(Debug)]
pub struct Buffer {
inner: RawBuffer,
memory: BufferMemory,
state: Mutex<BufferState>,
}
/// The type of backing memory that a buffer can have.
#[derive(Debug)]
pub enum BufferMemory {
/// The buffer is backed by normal memory, bound with [`bind_memory`].
///
/// [`bind_memory`]: RawBuffer::bind_memory
Normal(ResourceMemory),
/// The buffer is backed by sparse memory, bound with [`bind_sparse`].
///
/// [`bind_sparse`]: crate::device::QueueGuard::bind_sparse
Sparse,
}
impl Buffer {
/// Creates a new `Buffer` and writes `data` in it. Returns a [`Subbuffer`] spanning the whole
/// buffer.
///
/// > **Note**: This only works with memory types that are host-visible. If you want to upload
/// > data to a buffer allocated in device-local memory, you will need to create a staging
/// > buffer and copy the contents over.
///
/// # Panics
///
/// - Panics if `create_info.size` is not zero.
/// - Panics if the chosen memory type is not host-visible.
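///
/// # Examples
///
/// A minimal sketch, assuming `memory_allocator` has already been created:
///
/// ```
/// use vulkano::buffer::{Buffer, BufferCreateInfo, BufferUsage};
/// use vulkano::memory::allocator::{AllocationCreateInfo, MemoryTypeFilter};
///
/// # let memory_allocator: std::sync::Arc<vulkano::memory::allocator::StandardMemoryAllocator> = return;
/// #
/// let subbuffer = Buffer::from_data(
///     memory_allocator.clone(),
///     BufferCreateInfo {
///         usage: BufferUsage::UNIFORM_BUFFER,
///         ..Default::default()
///     },
///     AllocationCreateInfo {
///         // The memory type must be host-visible for the initial write to succeed.
///         memory_type_filter: MemoryTypeFilter::PREFER_DEVICE
///             | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE,
///         ..Default::default()
///     },
///     [1.0f32, 0.0, 0.0, 1.0],
/// )
/// .unwrap();
/// ```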
pub fn from_data<T>(
allocator: Arc<dyn MemoryAllocator>,
create_info: BufferCreateInfo,
allocation_info: AllocationCreateInfo,
data: T,
) -> Result<Subbuffer<T>, Validated<AllocateBufferError>>
where
T: BufferContents,
{
let buffer = Buffer::new_sized(allocator, create_info, allocation_info)?;
{
let mut write_guard = buffer.write().unwrap();
*write_guard = data;
}
Ok(buffer)
}
/// Creates a new `Buffer` and writes all elements of `iter` in it. Returns a [`Subbuffer`]
/// spanning the whole buffer.
///
/// > **Note**: This only works with memory types that are host-visible. If you want to upload
/// > data to a buffer allocated in device-local memory, you will need to create a staging
/// > buffer and copy the contents over.
///
/// # Panics
///
/// - Panics if `create_info.size` is not zero.
/// - Panics if the chosen memory type is not host-visible.
/// - Panics if `iter` is empty.
pub fn from_iter<T, I>(
allocator: Arc<dyn MemoryAllocator>,
create_info: BufferCreateInfo,
allocation_info: AllocationCreateInfo,
iter: I,
) -> Result<Subbuffer<[T]>, Validated<AllocateBufferError>>
where
T: BufferContents,
I: IntoIterator<Item = T>,
I::IntoIter: ExactSizeIterator,
{
let iter = iter.into_iter();
let buffer = Buffer::new_slice(
allocator,
create_info,
allocation_info,
iter.len().try_into().unwrap(),
)?;
{
let mut write_guard = buffer.write().unwrap();
for (o, i) in write_guard.iter_mut().zip(iter) {
*o = i;
}
}
Ok(buffer)
}
/// Creates a new uninitialized `Buffer` for sized data. Returns a [`Subbuffer`] spanning the
/// whole buffer.
///
/// # Panics
///
/// - Panics if `create_info.size` is not zero.
pub fn new_sized<T>(
allocator: Arc<dyn MemoryAllocator>,
create_info: BufferCreateInfo,
allocation_info: AllocationCreateInfo,
) -> Result<Subbuffer<T>, Validated<AllocateBufferError>>
where
T: BufferContents,
{
let layout = T::LAYOUT.unwrap_sized();
let buffer = Subbuffer::new(Buffer::new(
allocator,
create_info,
allocation_info,
layout,
)?);
Ok(unsafe { buffer.reinterpret_unchecked() })
}
/// Creates a new uninitialized `Buffer` for a slice. Returns a [`Subbuffer`] spanning the
/// whole buffer.
///
/// # Panics
///
/// - Panics if `create_info.size` is not zero.
/// - Panics if `len` is zero.
pub fn new_slice<T>(
allocator: Arc<dyn MemoryAllocator>,
create_info: BufferCreateInfo,
allocation_info: AllocationCreateInfo,
len: DeviceSize,
) -> Result<Subbuffer<[T]>, Validated<AllocateBufferError>>
where
T: BufferContents,
{
Buffer::new_unsized(allocator, create_info, allocation_info, len)
}
/// Creates a new uninitialized `Buffer` for unsized data. Returns a [`Subbuffer`] spanning the
/// whole buffer.
///
/// # Panics
///
/// - Panics if `create_info.size` is not zero.
/// - Panics if `len` is zero.
pub fn new_unsized<T>(
allocator: Arc<dyn MemoryAllocator>,
create_info: BufferCreateInfo,
allocation_info: AllocationCreateInfo,
len: DeviceSize,
) -> Result<Subbuffer<T>, Validated<AllocateBufferError>>
where
T: BufferContents + ?Sized,
{
let len = NonZeroDeviceSize::new(len).expect("empty slices are not valid buffer contents");
let layout = T::LAYOUT.layout_for_len(len).unwrap();
let buffer = Subbuffer::new(Buffer::new(
allocator,
create_info,
allocation_info,
layout,
)?);
Ok(unsafe { buffer.reinterpret_unchecked() })
}
/// Creates a new uninitialized `Buffer` with the given `layout`.
///
/// # Panics
///
/// - Panics if `create_info.size` is not zero.
/// - Panics if `layout.alignment()` is greater than 64.
pub fn new(
allocator: Arc<dyn MemoryAllocator>,
mut create_info: BufferCreateInfo,
allocation_info: AllocationCreateInfo,
layout: DeviceLayout,
) -> Result<Arc<Self>, Validated<AllocateBufferError>> {
assert!(layout.alignment().as_devicesize() <= 64);
// TODO: Enable once sparse binding materializes
// assert!(!create_info.flags.contains(BufferCreateFlags::SPARSE_BINDING));
assert_eq!(
create_info.size, 0,
"`Buffer::new*` functions set the `create_info.size` field themselves, you should not \
set it yourself"
);
create_info.size = layout.size();
let raw_buffer =
RawBuffer::new(allocator.device().clone(), create_info).map_err(|err| match err {
Validated::Error(err) => Validated::Error(AllocateBufferError::CreateBuffer(err)),
Validated::ValidationError(err) => err.into(),
})?;
let mut requirements = *raw_buffer.memory_requirements();
requirements.layout = requirements.layout.align_to(layout.alignment()).unwrap();
let allocation = allocator
.allocate(
requirements,
AllocationType::Linear,
allocation_info,
Some(DedicatedAllocation::Buffer(&raw_buffer)),
)
.map_err(AllocateBufferError::AllocateMemory)?;
let allocation = unsafe { ResourceMemory::from_allocation(allocator, allocation) };
let buffer = raw_buffer.bind_memory(allocation).map_err(|(err, _, _)| {
err.map(AllocateBufferError::BindMemory)
.map_validation(|err| err.add_context("RawBuffer::bind_memory"))
})?;
Ok(Arc::new(buffer))
}
fn from_raw(inner: RawBuffer, memory: BufferMemory) -> Self {
let state = Mutex::new(BufferState::new(inner.size()));
Buffer {
inner,
memory,
state,
}
}
/// Returns the type of memory that is backing this buffer.
#[inline]
pub fn memory(&self) -> &BufferMemory {
&self.memory
}
/// Returns the memory requirements for this buffer.
#[inline]
pub fn memory_requirements(&self) -> &MemoryRequirements {
self.inner.memory_requirements()
}
/// Returns the flags the buffer was created with.
#[inline]
pub fn flags(&self) -> BufferCreateFlags {
self.inner.flags()
}
/// Returns the size of the buffer in bytes.
#[inline]
pub fn size(&self) -> DeviceSize {
self.inner.size()
}
/// Returns the usage the buffer was created with.
#[inline]
pub fn usage(&self) -> BufferUsage {
self.inner.usage()
}
/// Returns the sharing the buffer was created with.
#[inline]
pub fn sharing(&self) -> &Sharing<SmallVec<[u32; 4]>> {
self.inner.sharing()
}
/// Returns the external memory handle types that are supported with this buffer.
#[inline]
pub fn external_memory_handle_types(&self) -> ExternalMemoryHandleTypes {
self.inner.external_memory_handle_types()
}
/// Returns the device address for this buffer.
///
/// Returns an error if the `buffer_device_address` feature is not enabled on the device, or if
/// the buffer was not created with the `SHADER_DEVICE_ADDRESS` usage.
// TODO: Caching?
pub fn device_address(&self) -> Result<NonNullDeviceAddress, Box<ValidationError>> {
self.validate_device_address()?;
unsafe { Ok(self.device_address_unchecked()) }
}
fn validate_device_address(&self) -> Result<(), Box<ValidationError>> {
let device = self.device();
if !device.enabled_features().buffer_device_address {
return Err(Box::new(ValidationError {
requires_one_of: RequiresOneOf(&[RequiresAllOf(&[Requires::Feature(
"buffer_device_address",
)])]),
vuids: &["VUID-vkGetBufferDeviceAddress-bufferDeviceAddress-03324"],
..Default::default()
}));
}
if !self.usage().intersects(BufferUsage::SHADER_DEVICE_ADDRESS) {
return Err(Box::new(ValidationError {
context: "self.usage()".into(),
problem: "does not contain `BufferUsage::SHADER_DEVICE_ADDRESS`".into(),
vuids: &["VUID-VkBufferDeviceAddressInfo-buffer-02601"],
..Default::default()
}));
}
Ok(())
}
#[cfg_attr(not(feature = "document_unchecked"), doc(hidden))]
pub unsafe fn device_address_unchecked(&self) -> NonNullDeviceAddress {
let device = self.device();
let info_vk = ash::vk::BufferDeviceAddressInfo {
buffer: self.handle(),
..Default::default()
};
let ptr = {
let fns = device.fns();
let f = if device.api_version() >= Version::V1_2 {
fns.v1_2.get_buffer_device_address
} else if device.enabled_extensions().khr_buffer_device_address {
fns.khr_buffer_device_address.get_buffer_device_address_khr
} else {
fns.ext_buffer_device_address.get_buffer_device_address_ext
};
f(device.handle(), &info_vk)
};
NonNullDeviceAddress::new(ptr).unwrap()
}
pub(crate) fn state(&self) -> MutexGuard<'_, BufferState> {
self.state.lock()
}
}
unsafe impl VulkanObject for Buffer {
type Handle = ash::vk::Buffer;
#[inline]
fn handle(&self) -> Self::Handle {
self.inner.handle()
}
}
unsafe impl DeviceOwned for Buffer {
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
impl PartialEq for Buffer {
#[inline]
fn eq(&self, other: &Self) -> bool {
self.inner == other.inner
}
}
impl Eq for Buffer {}
impl Hash for Buffer {
fn hash<H: Hasher>(&self, state: &mut H) {
self.inner.hash(state);
}
}
/// Error that can happen when allocating a new buffer.
#[derive(Clone, Debug)]
pub enum AllocateBufferError {
CreateBuffer(VulkanError),
AllocateMemory(MemoryAllocatorError),
BindMemory(VulkanError),
}
impl Error for AllocateBufferError {
fn source(&self) -> Option<&(dyn Error + 'static)> {
match self {
Self::CreateBuffer(err) => Some(err),
Self::AllocateMemory(err) => Some(err),
Self::BindMemory(err) => Some(err),
}
}
}
impl Display for AllocateBufferError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::CreateBuffer(_) => write!(f, "creating the buffer failed"),
Self::AllocateMemory(_) => write!(f, "allocating memory for the buffer failed"),
Self::BindMemory(_) => write!(f, "binding memory to the buffer failed"),
}
}
}
impl From<AllocateBufferError> for Validated<AllocateBufferError> {
fn from(err: AllocateBufferError) -> Self {
Self::Error(err)
}
}
/// The current state of a buffer.
#[derive(Debug)]
pub(crate) struct BufferState {
ranges: RangeMap<DeviceSize, BufferRangeState>,
}
impl BufferState {
fn new(size: DeviceSize) -> Self {
BufferState {
ranges: [(
0..size,
BufferRangeState {
current_access: CurrentAccess::Shared {
cpu_reads: 0,
gpu_reads: 0,
},
},
)]
.into_iter()
.collect(),
}
}
pub(crate) fn check_cpu_read(&self, range: Range<DeviceSize>) -> Result<(), AccessConflict> {
for (_range, state) in self.ranges.range(&range) {
match &state.current_access {
CurrentAccess::CpuExclusive { .. } => return Err(AccessConflict::HostWrite),
CurrentAccess::GpuExclusive { .. } => return Err(AccessConflict::DeviceWrite),
CurrentAccess::Shared { .. } => (),
}
}
Ok(())
}
pub(crate) unsafe fn cpu_read_lock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
CurrentAccess::Shared { cpu_reads, .. } => {
*cpu_reads += 1;
}
_ => unreachable!("Buffer is being written by the CPU or GPU"),
}
}
}
pub(crate) unsafe fn cpu_read_unlock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
CurrentAccess::Shared { cpu_reads, .. } => *cpu_reads -= 1,
_ => unreachable!("Buffer was not locked for CPU read"),
}
}
}
pub(crate) fn check_cpu_write(&self, range: Range<DeviceSize>) -> Result<(), AccessConflict> {
for (_range, state) in self.ranges.range(&range) {
match &state.current_access {
CurrentAccess::CpuExclusive => return Err(AccessConflict::HostWrite),
CurrentAccess::GpuExclusive { .. } => return Err(AccessConflict::DeviceWrite),
CurrentAccess::Shared {
cpu_reads: 0,
gpu_reads: 0,
} => (),
CurrentAccess::Shared { cpu_reads, .. } if *cpu_reads > 0 => {
return Err(AccessConflict::HostRead);
}
CurrentAccess::Shared { .. } => return Err(AccessConflict::DeviceRead),
}
}
Ok(())
}
pub(crate) unsafe fn cpu_write_lock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
state.current_access = CurrentAccess::CpuExclusive;
}
}
pub(crate) unsafe fn cpu_write_unlock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
CurrentAccess::CpuExclusive => {
state.current_access = CurrentAccess::Shared {
cpu_reads: 0,
gpu_reads: 0,
}
}
_ => unreachable!("Buffer was not locked for CPU write"),
}
}
}
pub(crate) fn check_gpu_read(&self, range: Range<DeviceSize>) -> Result<(), AccessError> {
for (_range, state) in self.ranges.range(&range) {
match &state.current_access {
CurrentAccess::Shared { .. } => (),
_ => return Err(AccessError::AlreadyInUse),
}
}
Ok(())
}
pub(crate) unsafe fn gpu_read_lock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
CurrentAccess::GpuExclusive { gpu_reads, .. }
| CurrentAccess::Shared { gpu_reads, .. } => *gpu_reads += 1,
_ => unreachable!("Buffer is being written by the CPU"),
}
}
}
pub(crate) unsafe fn gpu_read_unlock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
CurrentAccess::GpuExclusive { gpu_reads, .. }
| CurrentAccess::Shared { gpu_reads, .. } => *gpu_reads -= 1,
_ => unreachable!("Buffer was not locked for GPU read"),
}
}
}
pub(crate) fn check_gpu_write(&self, range: Range<DeviceSize>) -> Result<(), AccessError> {
for (_range, state) in self.ranges.range(&range) {
match &state.current_access {
CurrentAccess::Shared {
cpu_reads: 0,
gpu_reads: 0,
} => (),
_ => return Err(AccessError::AlreadyInUse),
}
}
Ok(())
}
pub(crate) unsafe fn gpu_write_lock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
CurrentAccess::GpuExclusive { gpu_writes, .. } => *gpu_writes += 1,
&mut CurrentAccess::Shared {
cpu_reads: 0,
gpu_reads,
} => {
state.current_access = CurrentAccess::GpuExclusive {
gpu_reads,
gpu_writes: 1,
}
}
_ => unreachable!("Buffer is being accessed by the CPU"),
}
}
}
pub(crate) unsafe fn gpu_write_unlock(&mut self, range: Range<DeviceSize>) {
self.ranges.split_at(&range.start);
self.ranges.split_at(&range.end);
for (_range, state) in self.ranges.range_mut(&range) {
match &mut state.current_access {
&mut CurrentAccess::GpuExclusive {
gpu_reads,
gpu_writes: 1,
} => {
state.current_access = CurrentAccess::Shared {
cpu_reads: 0,
gpu_reads,
}
}
CurrentAccess::GpuExclusive { gpu_writes, .. } => *gpu_writes -= 1,
_ => unreachable!("Buffer was not locked for GPU write"),
}
}
}
}
/// The current state of a specific range of bytes in a buffer.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct BufferRangeState {
current_access: CurrentAccess,
}
vulkan_bitflags! {
#[non_exhaustive]
/// Flags specifying additional properties of a buffer.
BufferCreateFlags = BufferCreateFlags(u32);
/* TODO: enable
/// The buffer will be backed by sparse memory binding (through queue commands) instead of
/// regular binding (through [`bind_memory`]).
///
/// The [`sparse_binding`] feature must be enabled on the device.
///
/// [`bind_memory`]: sys::RawBuffer::bind_memory
/// [`sparse_binding`]: crate::device::Features::sparse_binding
SPARSE_BINDING = SPARSE_BINDING,*/
/* TODO: enable
/// The buffer can be used without being fully resident in memory at the time of use.
///
/// This requires the `sparse_binding` flag as well.
///
/// The [`sparse_residency_buffer`] feature must be enabled on the device.
///
/// [`sparse_residency_buffer`]: crate::device::Features::sparse_residency_buffer
SPARSE_RESIDENCY = SPARSE_RESIDENCY,*/
/* TODO: enable
/// The buffer's memory can alias with another buffer or a different part of the same buffer.
///
/// This requires the `sparse_binding` flag as well.
///
/// The [`sparse_residency_aliased`] feature must be enabled on the device.
///
/// [`sparse_residency_aliased`]: crate::device::Features::sparse_residency_aliased
SPARSE_ALIASED = SPARSE_ALIASED,*/
/* TODO: enable
/// The buffer is protected, and can only be used in combination with protected memory and other
/// protected objects.
///
/// The device API version must be at least 1.1.
PROTECTED = PROTECTED
RequiresOneOf([
RequiresAllOf([APIVersion(V1_1)]),
]),*/
/* TODO: enable
/// The buffer's device address can be saved and reused on a subsequent run.
///
/// The device API version must be at least 1.2, or either the [`khr_buffer_device_address`] or
/// [`ext_buffer_device_address`] extension must be enabled on the device.
DEVICE_ADDRESS_CAPTURE_REPLAY = DEVICE_ADDRESS_CAPTURE_REPLAY {
api_version: V1_2,
device_extensions: [khr_buffer_device_address, ext_buffer_device_address],
},*/
}
/// The buffer configuration to query in [`PhysicalDevice::external_buffer_properties`].
///
/// [`PhysicalDevice::external_buffer_properties`]: crate::device::physical::PhysicalDevice::external_buffer_properties
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExternalBufferInfo {
/// The flags that will be used.
pub flags: BufferCreateFlags,
/// The usage that the buffer will have.
pub usage: BufferUsage,
/// The external handle type that will be used with the buffer.
pub handle_type: ExternalMemoryHandleType,
pub _ne: crate::NonExhaustive,
}
impl ExternalBufferInfo {
/// Returns an `ExternalBufferInfo` with the specified `handle_type`.
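///
/// # Examples
///
/// A sketch; `usage` and `flags` start out empty and can be filled in afterwards:
///
/// ```
/// use vulkano::buffer::ExternalBufferInfo;
/// use vulkano::memory::ExternalMemoryHandleType;
///
/// // Query info for an opaque POSIX file descriptor handle.
/// let info = ExternalBufferInfo::handle_type(ExternalMemoryHandleType::OpaqueFd);
/// ```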
#[inline]
pub fn handle_type(handle_type: ExternalMemoryHandleType) -> Self {
Self {
flags: BufferCreateFlags::empty(),
usage: BufferUsage::empty(),
handle_type,
_ne: crate::NonExhaustive(()),
}
}
pub(crate) fn validate(
&self,
physical_device: &PhysicalDevice,
) -> Result<(), Box<ValidationError>> {
let &Self {
flags,
usage,
handle_type,
_ne: _,
} = self;
flags
.validate_physical_device(physical_device)
.map_err(|err| {
err.add_context("flags")
.set_vuids(&["VUID-VkPhysicalDeviceExternalBufferInfo-flags-parameter"])
})?;
usage
.validate_physical_device(physical_device)
.map_err(|err| {
err.add_context("usage")
.set_vuids(&["VUID-VkPhysicalDeviceExternalBufferInfo-usage-parameter"])
})?;
if usage.is_empty() {
return Err(Box::new(ValidationError {
context: "usage".into(),
problem: "is empty".into(),
vuids: &["VUID-VkPhysicalDeviceExternalBufferInfo-usage-requiredbitmask"],
..Default::default()
}));
}
handle_type
.validate_physical_device(physical_device)
.map_err(|err| {
err.add_context("handle_type")
.set_vuids(&["VUID-VkPhysicalDeviceExternalBufferInfo-handleType-parameter"])
})?;
Ok(())
}
}
/// The external memory properties supported for buffers with a given configuration.
#[derive(Clone, Debug)]
#[non_exhaustive]
pub struct ExternalBufferProperties {
/// The properties for external memory.
pub external_memory_properties: ExternalMemoryProperties,
}
vulkan_enum! {
#[non_exhaustive]
/// An enumeration of all valid index types.
IndexType = IndexType(i32);
/// Indices are 8-bit unsigned integers.
U8 = UINT8_EXT
RequiresOneOf([
RequiresAllOf([DeviceExtension(ext_index_type_uint8)]),
]),
/// Indices are 16-bit unsigned integers.
U16 = UINT16,
/// Indices are 32-bit unsigned integers.
U32 = UINT32,
}
impl IndexType {
/// Returns the size in bytes of indices of this type.
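///
/// # Examples
///
/// ```
/// use vulkano::buffer::IndexType;
///
/// assert_eq!(IndexType::U16.size(), 2);
/// assert_eq!(IndexType::U32.size(), 4);
/// ```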
#[inline]
pub fn size(self) -> DeviceSize {
match self {
IndexType::U8 => 1,
IndexType::U16 => 2,
IndexType::U32 => 4,
}
}
}
/// A buffer holding index values, which index into buffers holding vertex data.
#[derive(Clone, Debug)]
pub enum IndexBuffer {
/// An index buffer containing unsigned 8-bit indices.
///
/// The [`index_type_uint8`] feature must be enabled on the device.
///
/// [`index_type_uint8`]: crate::device::Features::index_type_uint8
U8(Subbuffer<[u8]>),
/// An index buffer containing unsigned 16-bit indices.
U16(Subbuffer<[u16]>),
/// An index buffer containing unsigned 32-bit indices.
U32(Subbuffer<[u32]>),
}
impl IndexBuffer {
/// Returns an `IndexType` value corresponding to the type of the buffer.
#[inline]
pub fn index_type(&self) -> IndexType {
match self {
Self::U8(_) => IndexType::U8,
Self::U16(_) => IndexType::U16,
Self::U32(_) => IndexType::U32,
}
}
/// Returns the buffer reinterpreted as a buffer of bytes.
#[inline]
pub fn as_bytes(&self) -> &Subbuffer<[u8]> {
match self {
IndexBuffer::U8(buffer) => buffer.as_bytes(),
IndexBuffer::U16(buffer) => buffer.as_bytes(),
IndexBuffer::U32(buffer) => buffer.as_bytes(),
}
}
/// Returns the number of elements in the buffer.
#[inline]
pub fn len(&self) -> DeviceSize {
match self {
IndexBuffer::U8(buffer) => buffer.len(),
IndexBuffer::U16(buffer) => buffer.len(),
IndexBuffer::U32(buffer) => buffer.len(),
}
}
}
impl From<Subbuffer<[u8]>> for IndexBuffer {
#[inline]
fn from(value: Subbuffer<[u8]>) -> Self {
Self::U8(value)
}
}
impl From<Subbuffer<[u16]>> for IndexBuffer {
#[inline]
fn from(value: Subbuffer<[u16]>) -> Self {
Self::U16(value)
}
}
impl From<Subbuffer<[u32]>> for IndexBuffer {
#[inline]
fn from(value: Subbuffer<[u32]>) -> Self {
Self::U32(value)
}
}