# memkit-gpu


<div align="center">

**High-Performance GPU Memory Management for the Memkit Ecosystem.**

[![Crates.io](https://img.shields.io/crates/v/memkit-gpu.svg?style=for-the-badge&color=blue)](https://crates.io/crates/memkit-gpu)
[![Documentation](https://img.shields.io/badge/docs.rs-memkit--gpu-green?style=for-the-badge)](https://docs.rs/memkit-gpu)
[![License: MPL-2.0](https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg?style=for-the-badge)](https://opensource.org/licenses/MPL-2.0)

</div>

---

## 🚀 Overview

`memkit-gpu` provides a **backend-agnostic** GPU memory management layer that bridges the gap between high-level intent and low-level hardware constraints. It allows you to write graphics-heavy applications once and target Vulkan, Metal, or DX12 with minimal changes.

## 💎 The GPU Advantage

| Feature | System (Global) | memkit-gpu |
|---------|-----------------|------------|
| **Latency** | Non-deterministic | Deterministic & Predictable |
| **Backend** | Native-only | Multi-backend Abstraction |
| **Safety** | Manual pointer math | Type-safe Staging/Device buffers |
| **Efficiency** | Ad-hoc transfers | Coalesced & Batched transfers |

## ✨ Features

- ๐ŸŽ๏ธ **Backend-Agnostic** โ€” Write once, run on Vulkan, Metal (WIP), or DX12 (WIP).
- ๐Ÿ“ฆ **Staging/Device Split** โ€” Explicit management of CPU-visible and GPU-local memory.
- โšก **Batch Transfers** โ€” Optimized transfer queues to minimize command buffer overhead.
- ๐Ÿ—„๏ธ **Smart Pooling** โ€” Reusable buffer pools to eliminate allocation judder.
- ๐Ÿงช **Dummy Backend** โ€” Full CPU-based GPU simulation for testing without hardware.

## ๐Ÿ› ๏ธ Quick Start


```rust
use memkit_gpu::{MkGpu, MkBufferUsage, DummyBackend};

// Initialize with the Dummy backend for testing
let gpu = MkGpu::new(DummyBackend::new());

// Create a staging buffer (CPU-visible)
let vertices: &[f32] = &[0.0, 1.0, 0.0, 1.0, 0.0, 0.0];
let staging = gpu.staging_buffer_with_data(vertices).unwrap();

// Create a device-local buffer (GPU-fast)
let device = gpu.create_device_buffer(
    std::mem::size_of_val(vertices),
    MkBufferUsage::VERTEX | MkBufferUsage::TRANSFER_DST,
).unwrap();

// Efficiently transfer data from Staging to Device
gpu.transfer(&staging, &device).unwrap();
```

## 🔌 Supported Backends

`memkit-gpu` is designed as a trait-based system. Enable your preferred backend via features:

| Backend | Feature Flag | Status |
|---------|--------------|--------|
| **Vulkan** | `vulkan` | ✅ Production Ready |
| **Dummy** | (default) | ✅ Test/Debug Ready |
| **Metal** | `metal` | 🚧 Coming Soon |
| **DirectX 12** | `dx12` | 🚧 Coming Soon |

### Vulkan Setup

```toml
[dependencies]
memkit-gpu = { version = "0.1.1-beta.1", features = ["vulkan"] }
```

## ๐Ÿ—๏ธ Architecture


```mermaid
graph TD
    A[User Code] --> B[MkGpu]
    B --> C{MkGpuBackend}
    C -->|vulkan| D[VulkanBackend]
    C -->|dummy| E[DummyBackend]
    D --> F[Hardware GPU]
    E --> G[CPU Simulation]
```

## โš–๏ธ License


Licensed under the **Mozilla Public License 2.0**. See [LICENSE.md](../LICENSE.md) for details.