⚠️ Deprecation Notice
framealloc is being deprecated in favor of memkit.
As of v0.11.1, all public types emit deprecation warnings. The library remains fully functional, but will not receive new features. memkit is a complete rewrite with a cleaner architecture, better modularity, and expanded capabilities.
I sincerely apologize if this transition causes disruption to your projects. Maintaining multiple versions long-term isn't feasible, and memkit represents the better path forward. A migration guide will be provided with memkit's release.
Thank you for using framealloc.
– Yelena
Overview
framealloc is a deterministic, frame-based memory allocation library for Rust game engines and real-time applications. It provides predictable performance through explicit lifetimes and scales seamlessly from single-threaded to multi-threaded workloads.
Not a general-purpose allocator replacement. Purpose-built for game engines, renderers, simulations, and real-time systems.
Key Capabilities
| Capability | Description |
|---|---|
| Frame Arenas | Lock-free bump allocation, reset per frame |
| Object Pools | O(1) reuse for small, frequent allocations |
| Thread Coordination | Explicit transfers, barriers, per-thread budgets |
| Static Analysis | cargo fa catches memory mistakes at build time |
| Runtime Diagnostics | Behavior filter detects pattern violations |
Why framealloc?
Traditional allocators (malloc, jemalloc) optimize for general-case throughput. Game engines have different needs:
The Problem: allocating scratch buffers through the system heap inside the frame loop means a malloc/free pair on every one of a million frames, with unpredictable latency spikes and creeping fragmentation.
The framealloc Solution: take the same scratch buffers from a frame arena and reset the arena in O(1) at each frame boundary, so the hot loop never touches the system heap. Both loops are sketched below.
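A minimal sketch of both loops, assuming a top-level `FrameAllocator` constructor and a `frame_alloc` method (the type name and exact signature are illustrative; `begin_frame`/`end_frame` match the calls shown later in this README):

```rust
use framealloc::FrameAllocator; // type name assumed for illustration

fn main() {
    // Heap-based version: one allocation and free per frame.
    for _frame in 0..1_000_000 {
        let scratch = vec![0u8; 4096]; // hits the system allocator every iteration
        std::hint::black_box(&scratch);
    }

    // Frame-arena version: bump allocation, O(1) reset per frame.
    let alloc = FrameAllocator::new();
    for _frame in 0..1_000_000 {
        alloc.begin_frame();
        let scratch = alloc.frame_alloc([0u8; 4096]); // frame-lifetime buffer (signature assumed)
        std::hint::black_box(&scratch);
        alloc.end_frame(); // reclaims every frame allocation at once
    }
}
```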
Results:
- 139x faster than malloc for batch allocations
- Stable frame times – no GC pauses, no fragmentation
- Explicit lifetimes – frame, pool, and heap lifetimes are visible in code
- Observable – know exactly where memory goes
Documentation & Learning Path
Getting Started (0-2 hours)
Getting Started Guide – install, write your first allocation, understand core concepts.
Start here if: You're evaluating framealloc or just installed it.
Common Patterns (2-20 hours)
Patterns Guide – frame loops, threading, organization, common pitfalls.
Start here if: You've used framealloc basics and want to structure real applications.
Domain Guides
| Domain | Guide | Description |
|---|---|---|
| Game Development | Game Dev Guide | ECS, rendering, audio, level streaming |
| Physics | Rapier Integration | Contact generation, queries, performance |
| Async | Async Guide | Safe patterns, TaskAlloc, avoiding frame violations |
| Performance | Performance Guide | Batch allocation, profiling, benchmarks |
Advanced Topics (20-100 hours)
Advanced Guide – custom allocators, internals, NUMA awareness, instrumentation.
Start here if: You're extending framealloc or need maximum performance.
Reference
| Resource | Description |
|---|---|
| API Documentation | Complete API reference |
| Cookbook | Copy-paste recipes for common tasks |
| Migration Guide | Coming from other allocators |
| Troubleshooting | Common issues and solutions |
| TECHNICAL.md | Architecture and implementation details |
| CHANGELOG.md | Version history |
Examples
Examples are grouped by experience level:
- Beginner (0-2 hours)
- Intermediate (2-20 hours)
- Advanced (20+ hours)
Coming From...
Default Rust (Vec, Box): per-frame temporaries built with `vec![...]` or `Box::new(...)` move onto the frame arena; see the sketch below.
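A before/after sketch under the same assumptions (`FrameAllocator` type name and `frame_alloc` signature are illustrative):

```rust
use framealloc::FrameAllocator; // type name assumed for illustration

fn build_scratch(alloc: &FrameAllocator) {
    // Before: a fresh heap allocation on every call.
    let scratch_heap: Vec<u32> = vec![0; 1024];

    // After: the same buffer taken from the current frame arena,
    // reclaimed automatically at end_frame() (frame_alloc signature assumed).
    let scratch_frame = alloc.frame_alloc([0u32; 1024]);

    std::hint::black_box((&scratch_heap, &scratch_frame));
}
```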
bumpalo: the mapping is nearly one-to-one – `Bump::new()` corresponds to creating a framealloc allocator, `bump.alloc(..)` to a frame allocation, and `bump.reset()` to the `end_frame()`/`begin_frame()` boundary; see the sketch below.
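A side-by-side sketch; the bumpalo calls are real, while the framealloc side reuses the assumptions above:

```rust
use bumpalo::Bump;
use framealloc::FrameAllocator; // type name assumed for illustration

fn bumpalo_style() {
    let mut bump = Bump::new();
    let x = bump.alloc(42u32); // lives until the bump is reset
    std::hint::black_box(x);
    bump.reset(); // frees everything allocated from the bump at once
}

fn framealloc_style(alloc: &FrameAllocator) {
    alloc.begin_frame();
    let x = alloc.frame_alloc(42u32); // frame-lifetime value (signature assumed)
    std::hint::black_box(x);
    alloc.end_frame(); // everything allocated this frame is reclaimed
}
```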
C++ game allocators: Frame allocators → frame_alloc() | Object pools → pool_alloc() | Custom → AllocatorBackend trait
See Migration Guide for detailed conversion steps.
Quick Start
Basic Usage
Create an allocator once, wrap each frame's work in `begin_frame()`/`end_frame()`, and allocate scratch data in between; everything is reclaimed at the frame boundary. A sketch follows.
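A minimal sketch, assuming a `FrameAllocator` type with a `frame_alloc` method (names and signature illustrative, consistent with the calls shown elsewhere in this README):

```rust
use framealloc::FrameAllocator; // type name assumed for illustration

fn main() {
    let alloc = FrameAllocator::new();

    alloc.begin_frame();
    let scratch = alloc.frame_alloc([0u8; 4096]); // frame-lifetime scratch buffer (signature assumed)
    std::hint::black_box(&scratch);
    alloc.end_frame(); // everything allocated this frame is reclaimed in O(1)
}
```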
Bevy Integration
Add framealloc's `SmartAllocPlugin` to your Bevy `App` alongside your other plugins; a sketch follows.
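A sketch of the setup, assuming `SmartAllocPlugin` lives in a `framealloc::bevy` module and registers like any ordinary Bevy plugin:

```rust
use bevy::prelude::*;
use framealloc::bevy::SmartAllocPlugin; // module path assumed for illustration

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(SmartAllocPlugin) // framealloc's Bevy ECS integration
        .run();
}
```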
Features
Core Allocation
Create the allocator at startup and drive it from the main loop: frame allocations for per-frame scratch data, pool allocations for small objects that are reused frequently, and the heap only as a fallback for oversized requests. A sketch follows.
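A sketch of the main-loop shape, using the `frame_alloc()` and `pool_alloc()` names mentioned under "Coming From..."; the constructor, the exact signatures, and the `Particle` type here are illustrative:

```rust
use framealloc::FrameAllocator; // type name assumed for illustration

#[derive(Default)]
struct Particle {
    position: [f32; 3],
    velocity: [f32; 3],
}

fn main() {
    let alloc = FrameAllocator::new();

    loop {
        alloc.begin_frame();

        // Per-frame scratch: reclaimed wholesale at end_frame().
        let visible_ids = alloc.frame_alloc([0u32; 256]);

        // Small, frequently reused objects: O(1) pool reuse (pool_alloc signature assumed).
        let particle = alloc.pool_alloc(Particle::default());

        std::hint::black_box((&visible_ids, &particle));

        alloc.end_frame();
        break; // keep the sketch terminating; a real game loops until exit
    }
}
```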
Thread Coordination (v0.6.0)
- Explicit cross-thread transfers: `frame_box_for_transfer()` wraps frame data in a handle that can be sent to a worker thread.
- Frame barriers for deterministic sync: workers call `signal_frame_complete()`, and the frame is only reset after `wait_all()`.
- Per-thread budgets: `set_thread_frame_budget()` caps how much of the frame arena each thread may use.

A sketch combining the three follows.
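A sketch combining the three APIs above; the argument lists, the `FrameBarrier` type name, and the assumption that a barrier can be shared across threads are all illustrative:

```rust
use std::sync::mpsc;
use std::thread;

use framealloc::{FrameAllocator, FrameBarrier}; // type names assumed for illustration

fn main() {
    let alloc = FrameAllocator::new();

    // Per-thread budget: cap this thread's share of the frame arena (units assumed to be bytes).
    alloc.set_thread_frame_budget(2 * 1024 * 1024);

    alloc.begin_frame();

    // Explicit cross-thread transfer: wrap frame data in a sendable handle.
    let (worker_channel, worker_rx) = mpsc::channel();
    let handle = alloc.frame_box_for_transfer([1u32, 2, 3]);
    worker_channel.send(handle).unwrap();

    // Frame barrier for deterministic sync between two participants (constructor assumed).
    let barrier = FrameBarrier::new(2);
    let worker_barrier = &barrier;

    thread::scope(|s| {
        s.spawn(move || {
            let data = worker_rx.recv().unwrap();
            std::hint::black_box(&data);
            worker_barrier.signal_frame_complete(); // worker is done with this frame's data
        });

        barrier.signal_frame_complete();
        barrier.wait_all(); // the frame is not reset until every participant has signaled
    });

    alloc.end_frame();
}
```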
IDE Integration (v0.7.0)
fa-insight – a VS Code extension for framealloc-aware development:
Features: CodeLens memory display, trend graphs, budget alerts at 80%+ usage.
Install: Search "FA Insight" in VS Code Marketplace
Tokio Integration (v0.8.0)
On the main thread, frame allocations work as usual between `begin_frame()` and `end_frame()`. Inside spawned async tasks, use `TaskAlloc` instead; it is pool-backed and cleans up automatically when the task completes. A sketch follows.
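A sketch under the `tokio` feature; `tokio::spawn` and `#[tokio::main]` are standard Tokio, while the way `TaskAlloc` is constructed and its `alloc` method are assumptions:

```rust
use framealloc::{FrameAllocator, TaskAlloc}; // export paths assumed for illustration

#[tokio::main]
async fn main() {
    let alloc = FrameAllocator::new();

    // Main thread / frame context: frame allocations are fine here.
    alloc.begin_frame();
    let scratch = alloc.frame_alloc([0u8; 1024]);
    std::hint::black_box(&scratch);

    // Async tasks must not hold frame allocations across .await;
    // they use the pool-backed TaskAlloc instead (constructor and method assumed).
    let task = tokio::spawn(async {
        let task_alloc = TaskAlloc::new();
        let buf = task_alloc.alloc([0u8; 256]);
        std::hint::black_box(&buf);
        // TaskAlloc cleans up automatically when the task completes.
    });

    alloc.end_frame();
    task.await.unwrap();
}
```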
Key principle: Frame allocations stay on main thread, async tasks use pool/heap.
Enable: framealloc = { version = "0.10", features = ["tokio"] }
See the Async Guide for the full treatment of async safety.
Batch Allocations (v0.9.0)
⚠️ SAFETY FIRST: Batch APIs use raw pointers
Batch allocation is 139x faster than allocating items individually, but it requires unsafe: the call returns raw, uninitialized storage for `count` elements, and the caller takes on four obligations:

1. Indices must stay within `0..count`.
2. Each slot must be initialized with `std::ptr::write` before it is read.
3. The pointers become invalid after `end_frame()`.
4. The batch is not `Send`/`Sync`; don't pass it to other threads.

A sketch of the pattern follows.
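A sketch of the pattern; the `frame_alloc_batch` method name and its return shape (a raw base pointer for `count` elements) are assumptions, and the numbered comments map to the obligations above:

```rust
use framealloc::FrameAllocator; // type name assumed for illustration

fn main() {
    let alloc = FrameAllocator::new();
    alloc.begin_frame();

    let count = 1024;
    // Assumed API shape: a raw base pointer with room for `count` u32 elements.
    let items: *mut u32 = alloc.frame_alloc_batch::<u32>(count);

    unsafe {
        // 1 + 2: stay inside 0..count and initialize every slot before reading it.
        for i in 0..count {
            std::ptr::write(items.add(i), i as u32);
        }
        let sum: u32 = (0..count).map(|i| *items.add(i)).sum();
        std::hint::black_box(sum);
    }

    // 3: `items` must not be dereferenced after this point.
    alloc.end_frame();
}
```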
Specialized sizes (zero overhead, no unsafe): safe helpers are also provided for fixed-size batches (pairs, quads, and cache-line-sized blocks) that avoid the raw-pointer contract above.
Rapier Physics Integration (v0.10.0)
Frame-aware wrappers for Rapier physics engine v0.31:
Wrap the Rapier world in the integration's frame-aware types, step it between `begin_frame()` and `end_frame()`, and read the contact events back from frame-backed storage; a sketch follows.
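A sketch of the loop; the wrapper type name (`FramePhysics` here) and the `step_with_events` argument are assumptions, while `step_with_events` and the `contacts` field follow the description above:

```rust
use framealloc::rapier::FramePhysics; // wrapper type name and path assumed
use framealloc::FrameAllocator;       // type name assumed for illustration

fn main() {
    let alloc = FrameAllocator::new();
    let mut physics = FramePhysics::new();

    alloc.begin_frame();

    // Step the Rapier world; contact events come back in frame-backed storage (argument assumed).
    let events = physics.step_with_events(&alloc);
    for contact in events.contacts {
        std::hint::black_box(contact);
    }

    alloc.end_frame(); // event storage is reclaimed with the frame
}
```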
Why Rapier v0.31 matters: Rapier v0.31 refactored the broad-phase and query APIs. If you're on Rapier v0.30 or earlier, use framealloc v0.9.0 instead.
Enable: framealloc = { version = "0.10", features = ["rapier"] }
See Rapier Integration Guide for full documentation.
GPU Support (v0.11.0)
framealloc now supports unified CPU-GPU memory management with clean separation and optional GPU backends.
Architecture
- CPU Module: Always available, zero GPU dependencies
- GPU Module: Feature-gated (`gpu`), backend-agnostic traits
- Coordinator Module: Bridges CPU and GPU (`coordinator` feature)
Feature Flags
# Enable GPU support (no backend yet)
framealloc = { version = "0.11", features = ["gpu"] }
# Enable Vulkan backend
framealloc = { version = "0.11", features = ["gpu-vulkan"] }
# Enable unified CPU-GPU coordination
framealloc = { version = "0.11", features = ["gpu-vulkan", "coordinator"] }
Quick Example
Create a `UnifiedAllocator`, begin the frame, write your upload data into a staging buffer through `cpu_slice_mut()`, call `transfer_to_gpu()`, optionally check combined CPU/GPU usage with `get_usage()`, and end the frame. A sketch follows.
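A sketch of that flow; the buffer size, error handling, and usage field names are assumptions, while `create_staging_buffer`, `cpu_slice_mut`, `transfer_to_gpu`, and `get_usage` are the calls named above:

```rust
use framealloc::coordinator::UnifiedAllocator; // module path assumed for illustration

fn upload_frame(unified: &mut UnifiedAllocator) -> Result<(), Box<dyn std::error::Error>> {
    unified.begin_frame();

    // Staging buffer for CPU-GPU transfer (size argument assumed to be in bytes).
    let mut staging = unified.create_staging_buffer(64 * 1024)?;
    if let Some(bytes) = staging.cpu_slice_mut() {
        bytes.fill(0); // write vertex/upload data here
    }

    // Explicit transfer: no hidden synchronization.
    unified.transfer_to_gpu(staging)?;

    // Unified budgeting across CPU and GPU memory (field names assumed).
    let usage = unified.get_usage();
    println!("CPU: {} bytes, GPU: {} bytes", usage.cpu_bytes, usage.gpu_bytes);

    unified.end_frame();
    Ok(())
}
```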
Key Benefits
- Zero overhead for CPU-only users (no new deps)
- Backend-agnostic GPU traits (Vulkan today, more tomorrow)
- Unified budgeting across CPU and GPU memory
- Explicit transfers - no hidden synchronization costs
GPU Backend Roadmap
Why Vulkan First? Vulkan provides the most explicit control over memory allocation, making it ideal for demonstrating framealloc's intent-driven approach. Its low-level nature exposes all the memory concepts we abstract (device-local, host-visible, staging buffers), serving as the perfect reference implementation.
Planned Backend Support
| Platform | Status | Notes |
|---|---|---|
| Vulkan | Available | Low-level, explicit memory control |
| Direct3D 11/12 | Planned | Windows gaming platforms |
| Metal | Planned | Apple ecosystem (iOS/macOS) |
| WebGPU | Future | Browser-based applications |
Generic GPU Usage
You can use framealloc's GPU traits without committing to a specific backend:
Describe the allocation you need as an intent-driven request and hand it to whichever backend implements the GPU allocator trait; a sketch follows.
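A sketch of what that looks like; the `AllocationRequest`, `GpuAllocator`, and `MemoryIntent` names and the trait's associated types are assumptions about the trait surface, with `allocate` as the backend-agnostic entry point:

```rust
use framealloc::gpu::{AllocationRequest, GpuAllocator, MemoryIntent}; // names assumed for illustration

// Intent-driven allocation: describe what the memory is for and let the
// backend pick the right heap. Works with the Vulkan backend today and
// with future backends without changing this function.
fn make_vertex_buffer<A: GpuAllocator>(allocator: &mut A) -> Result<A::Buffer, A::Error> {
    let req = AllocationRequest::new(256 * 1024, MemoryIntent::DeviceLocal);
    allocator.allocate(req)
}
```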
The intent-based design ensures your code remains portable as new backends are added. Simply swap the allocator implementation without changing allocation logic.
Static Analysis
cargo-fa detects memory intent violations before runtime.
cargo fa can be scoped to specific diagnostic categories and run as part of your CI pipeline; the diagnostic ranges are listed below.
| Range | Category | Examples |
|---|---|---|
| FA2xx | Threading | Cross-thread access, barrier mismatch |
| FA6xx | Lifetime | Frame escape, hot loops, missing boundaries |
| FA7xx | Async | Allocation across await, closure capture |
| FA9xx | Rapier | QueryFilter import, step_with_events usage |
Cargo Features
| Feature | Description |
|---|---|
| `bevy` | Bevy ECS plugin integration |
| `rapier` | Rapier physics engine integration |
| `tokio` | Async/await support with Tokio |
| `parking_lot` | Faster mutex implementation |
| `debug` | Memory poisoning, allocation backtraces |
| `minimal` | Disable statistics for max performance |
| `prefetch` | Hardware prefetch hints (x86_64) |
Performance
Allocation priority minimizes latency:
- Frame arena – bump pointer increment, no synchronization
- Thread-local pools – free list pop, no contention
- Global pool refill – mutex-protected, batched
- System heap – fallback for oversized allocations
In typical game workloads, 90%+ of allocations hit the frame arena path.
License
Licensed under either of:
at your option.