A high-performance memory pooling library for Rust with type-safe handles and zero-cost abstractions
🚀 Up to 1.4x faster allocation with predictable latency and zero fragmentation
🛠 Perfect for: Game engines, real-time systems, embedded applications, and high-churn workloads
📖 Overview
fastalloc is a memory pooling library that provides efficient, type-safe memory management with minimal overhead. It's designed for performance-critical applications where allocation speed and memory locality matter.
Why fastalloc?
- ⚡ Blazing Fast: Significantly reduces allocation/deallocation overhead
- 🧠 Smart Memory Management: Reduces memory fragmentation and improves cache locality
- 🛡️ Memory Safe: Leverages Rust's type system for safety without sacrificing performance
- 🔄 Flexible: Multiple allocation strategies and pool types for different use cases
- 🌐 no_std Support: Works in embedded and bare-metal environments
✨ Features
-
Multiple Pool Types:
- Fixed-size pools for predictable memory usage
- Growing pools for dynamic workloads
- Thread-local and thread-safe variants
-
Advanced Allocation Strategies:
- Stack-based (LIFO) for maximum speed
- Free-list for better memory utilization
- Bitmap-based for precise control
-
Performance Optimizations:
- Lock-free operations where possible
- Cache-line alignment
- Zero-copy access patterns
-
Developer Experience:
- Type-safe handles with RAII
- Detailed metrics and statistics
- Comprehensive documentation with examples
- Extensive test coverage
A memory pooling library for Rust with type-safe handles and RAII-based memory management. Provides 1.3-1.4x faster allocation than standard heap with the key benefits of predictable latency, zero fragmentation, and excellent cache locality.
Version 1.5.0 - Production-ready release with performance optimizations and comprehensive documentation. Repository: TIVerse/fastalloc.
🚀 Perfect for: Real-time systems, game engines, embedded devices, and high-churn workloads
💡 Key Benefits: Predictable latency, zero fragmentation, improved cache locality, deterministic behavior
Documentation:
- API Documentation - Complete API reference
- BENCHMARKS.md - Real benchmark results and methodology
- SAFETY.md - Safety guarantees and unsafe code documentation
- CONTRIBUTING.md - Contribution guidelines
✨ Key Features
- 🚀 Multiple pool types: Fixed-size, growing, thread-local, and thread-safe pools
- 🔒 Type-safe handles: RAII-based handles that automatically return objects to the pool
- ⚙️ Flexible configuration: Builder pattern with extensive customization options
- 📊 Optional statistics: Track allocation patterns and pool usage
- 🔧 Multiple allocation strategies: Stack (LIFO), free-list, and bitmap allocators
- 🌐 no_std support: Works in embedded and bare-metal environments
- ⚡ Zero-copy: Direct memory access without extra indirection
- 🛡️ Memory safe: Leverage Rust's type system to prevent leaks and use-after-free
- 🎯 Cache-friendly: Configurable alignment for optimal CPU cache utilization
- 📦 Small footprint: Minimal dependencies, < 3K SLOC core library
🚀 Quick Start
Installation
Add this to your Cargo.toml:
[]
= "1.0"
Basic Usage
use FixedPool;
Thread-Safe Usage
use Arc;
use ThreadSafePool;
use thread;
let mut handle = pool.allocate.unwrap;
// Use the value
assert_eq!;
*handle = 100;
assert_eq!;
// Automatically returned to pool when handle is dropped
drop;
🎯 Why Use Memory Pools?
Memory pools significantly improve performance in scenarios with frequent allocations:
Perfect Use Cases
| Domain | Use Case | Why It Matters |
|---|---|---|
| 🎮 Game Development | Entities, particles, physics objects | Maintain 60+ FPS by eliminating allocation stutter |
| 🎵 Real-Time Systems | Audio buffers, robotics control loops | Predictable latency for hard real-time constraints |
| 🌐 Web Servers | Request handlers, connection pooling | Handle 100K+ req/sec with minimal overhead |
| 📊 Data Processing | Temporary objects in hot paths | 50-100x speedup in tight loops |
| 🔬 Scientific Computing | Matrices, particles, graph nodes | Process millions of objects efficiently |
| 📱 Embedded Systems | Sensor data, IoT devices | Predictable memory usage, no fragmentation |
| 🤖 Machine Learning | Tensor buffers, batch processing | Reduce training time, optimize inference |
| 💰 Financial Systems | Order books, market data | Ultra-low latency trading systems |
⚡ Performance
Benchmark Results (criterion.rs, release mode with LTO):
| Operation | fastalloc | Standard Heap | Improvement |
|---|---|---|---|
| Fixed pool allocation (i32) | ~3.5 ns | ~4.8 ns | 1.3-1.4x faster |
| Growing pool allocation | ~4.6 ns | ~4.8 ns | ~1.05x faster |
| Allocation reuse (LIFO) | ~7.2 ns | N/A | Excellent cache locality |
See BENCHMARKS.md for detailed methodology and results.
When Pools Excel
Memory pools provide benefits beyond raw speed:
- Predictable Latency: No allocation spikes or fragmentation slowdowns
- Cache Locality: Objects stored contiguously improve cache hit rates
- Reduced Fragmentation: Eliminates long-term heap fragmentation
- Real-Time Guarantees: Bounded worst-case allocation time
Best use cases:
- High allocation/deallocation churn (game entities, particles)
- Real-time systems requiring bounded latency
- Embedded systems with constrained memory
- Long-running processes avoiding fragmentation
Note: Modern system allocators (jemalloc, mimalloc) are highly optimized. Pools excel in specific scenarios rather than universally. Always benchmark your specific workload.
Examples
Growing Pool with Configuration
use ;
let config = builder
.capacity
.max_capacity
.growth_strategy
.alignment // Cache-line aligned
.build
.unwrap;
let pool = with_config.unwrap;
Thread-Safe Pool
use ThreadSafePool;
use Arc;
use thread;
let pool = new;
let mut handles = vec!;
for i in 0..4
for handle in handles
Custom Initialization
use ;
let config = builder
.capacity
.reset_fn
.build
.unwrap;
Batch Allocation
use FixedPool;
let pool = new.unwrap;
// Allocate multiple objects efficiently in one operation
let values = vec!;
let handles = pool.allocate_batch.unwrap;
assert_eq!;
// All handles automatically returned when dropped
Statistics Tracking
🏊 Pool Types
Comparison Table
| Pool Type | Thread Safety | Growth | Overhead | Best For |
|---|---|---|---|---|
| FixedPool | ❌ | Fixed | Minimal | Single-threaded, predictable load |
| GrowingPool | ❌ | Dynamic | Low | Variable workloads |
| ThreadLocalPool | ⚠️ Per-thread | Fixed | Minimal | High-throughput parallel |
| ThreadSafePool | ✅ | Fixed | Medium | Shared state, moderate contention |
FixedPool
Pre-allocated fixed-size pool with O(1) operations and zero fragmentation.
let pool = new.unwrap;
When to use: Known maximum capacity, need absolute predictability
GrowingPool
Dynamic pool that grows based on demand according to a configurable strategy.
let pool = with_config.unwrap;
When to use: Variable load, want automatic scaling
ThreadLocalPool
Per-thread pool that avoids synchronization overhead.
let pool = new.unwrap;
When to use: Rayon/parallel iterators, zero-contention needed
ThreadSafePool
Lock-based concurrent pool safe for multi-threaded access.
let pool = new.unwrap;
When to use: Shared pool across threads, moderate contention acceptable
🎛️ Optional Features
Enable optional features in your Cargo.toml:
[]
= { = "1.0", = ["stats", "serde", "parking_lot"] }
Available features:
| Feature | Description | Performance Impact |
|---|---|---|
std (default) |
Standard library support | N/A |
stats |
Pool statistics & monitoring | ~2% overhead |
serde |
Serialization support | None when unused |
parking_lot |
Faster mutex (vs std::sync) | 10-20% faster locking |
crossbeam |
Lock-free data structures | 30-50% better under contention |
tracing |
Structured instrumentation | Minimal when disabled |
lock-free |
Experimental lock-free pool | 2-3x faster (requires crossbeam) |
no_std Support
fastalloc works in no_std environments:
[]
= { = "1.0", = false }
Benchmarks
Run benchmarks with:
Benchmark results are available in the target/criterion directory after running the benchmarks.
Documentation
API Reference
Full API documentation is available on docs.rs.
Examples
Explore the examples/ directory for more usage examples:
basic_usage.rs- Basic pool usagethread_safe.rs- Thread-safe poolingcustom_allocator.rs- Implementing custom allocation strategiesembedded.rs- no_std usage example
Changelog
See CHANGELOG.md for a detailed list of changes in each version.
Contributing
We welcome contributions of all kinds! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
How to Contribute
- Read our Code of Conduct
- Check out the open issues
- Fork the repository and create your feature branch
- Make your changes and add tests
- Ensure all tests pass and code is properly formatted
- Submit a pull request with a clear description of your changes
Development Workflow
# Clone the repository
# Install development dependencies
# Run tests
# Run benchmarks
# Run lints
# Check for unused dependencies
# Check for security vulnerabilities
Security
Security is important to us. If you discover any security related issues:
- Do NOT open a public GitHub issue
- Email the maintainer directly: eshanized@proton.me
- Include:
- Detailed description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)
We will acknowledge receipt within 48 hours and provide a timeline for a fix. Security issues will be prioritized and patched in expedited releases.
See SECURITY.md for our full security policy.
License
Licensed under either of:
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
at your option.
Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Acknowledgments
- The Rust community for creating an amazing ecosystem
- All contributors who have helped improve this project
- Inspired by various memory pooling techniques and existing implementations
- Built with ❤️ and Rust
🚀 Who's Using fastalloc?
We're building a list of projects using fastalloc. If you're using it, please consider adding your project!
Open Source Projects:
- Your project here! - Open a PR to add your project
Use Cases in Production:
- Game engines (entity/component systems, particle effects)
- Real-time audio processing pipelines
- High-frequency trading systems
- Embedded robotics control loops
- IoT device firmware
- Web server request pooling
Research & Education:
- Memory management tutorials
- Rust performance optimization courses
- Embedded systems projects
Want to be listed? Open a PR or issue with your project details!
📚 Resources
Documentation
- API Documentation - Complete API reference with examples
- BENCHMARKS.md - Real benchmark results, methodology, and library comparisons
- SAFETY.md - Memory safety guarantees and unsafe code documentation
- ARCHITECTURE.md - Internal design and implementation details
- ERROR_HANDLING.md - Pool exhaustion strategies and error recovery
- CHANGELOG.md - Version history and breaking changes
- CONTRIBUTING.md - How to contribute to the project
- SECURITY.md - Security policy and vulnerability reporting
Examples
- examples/ - Working code examples:
basic_usage.rs- Getting started with FixedPoolthread_safe.rs- Concurrent pool usagecustom_allocator.rs- Custom allocation strategiesgame_entities.rs- Game entity pooling exampleparticle_system.rs- High-performance particle systemasync_usage.rs- Using pools with async/awaitembedded.rs- no_std embedded examplestatistics.rs- Pool monitoring and statistics
See CHANGELOG.md for version history.