# vec_parallel

A library for building vectors in parallel using async tasks.
This crate provides an efficient, executor-agnostic way to construct `Vec<T>` by dividing
the work into multiple async tasks that can run concurrently. It is particularly useful for
CPU-bound initialization where each element can be computed independently.
## Overview
vec_parallel allows you to parallelize the construction of vectors by splitting the work
across multiple async tasks. Each task is responsible for computing a portion of the vector,
writing directly to the final memory location to avoid unnecessary copies.
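The write-in-place idea can be sketched in standalone std Rust, with scoped threads standing in for async tasks. This is an illustration of the technique under stated assumptions, not the crate's actual implementation:

```rust
use std::thread;

// Build a Vec by letting each worker write its chunk directly into the
// final allocation, so no intermediate buffers or copies are needed.
fn build_squares(len: usize, tasks: usize) -> Vec<u64> {
    let mut v: Vec<u64> = Vec::with_capacity(len);
    let chunk = len.div_ceil(tasks);
    // Non-overlapping views of the uninitialized spare capacity,
    // one per task.
    let spare = &mut v.spare_capacity_mut()[..len];
    thread::scope(|s| {
        for (t, slice) in spare.chunks_mut(chunk).enumerate() {
            s.spawn(move || {
                for (j, slot) in slice.iter_mut().enumerate() {
                    let i = t * chunk + j;
                    slot.write((i * i) as u64);
                }
            });
        }
    });
    // SAFETY: every slot in 0..len was initialized by exactly one task.
    unsafe { v.set_len(len) };
    v
}

fn main() {
    let squares = build_squares(100, 4);
    assert_eq!(squares[10], 100);
}
```

Because each task receives a disjoint chunk of the spare capacity, no synchronization is needed while writing; only the final `set_len` depends on all tasks having finished.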
## Key Features
- **Flexible parallelization strategies**: Control task creation with [`Strategy`]
- **Zero-copy construction**: Elements are written directly to their final location
- **Executor-agnostic**: Works with any async runtime (tokio, async-std, smol, etc.)
- **Optional executor integration**: Use the `some_executor` feature for convenient spawning
- **WASM support**: Works in browser environments with wasm-bindgen
- **Safe abstraction**: Careful use of unsafe code with documented invariants
## Usage Patterns
### Basic Usage
```rust
use vec_parallel::{build_vec, Strategy};
// Build a vector of squares using multiple tasks
let builder = build_vec(100, Strategy::TasksPerCore(4), |i| i * i);
// Run the tasks (in a real application, these would be spawned on an executor)
for task in builder.tasks {
    test_executors::spin_on(task);
}
// Get the final result
let squares = test_executors::spin_on(builder.result);
assert_eq!(squares[10], 100); // 10² = 100
```
### With Async Executors
```rust
use vec_parallel::{build_vec, Strategy};
// With tokio (or any async runtime)
let builder = build_vec(1000, Strategy::TasksPerCore(4), |i| {
    // Expensive computation
    (0..100).map(|j| (i + j) * 2).sum::<usize>()
});
// Spawn tasks on your executor
for task in builder.tasks {
    // In a real app: tokio::spawn(task);
    test_executors::spin_on(task);
}
// Await the result
let result = test_executors::spin_on(builder.result);
assert_eq!(result.len(), 1000);
```
## Choosing a Strategy
The [`Strategy`] enum controls how work is divided:
- [`Strategy::One`]: a single task, no parallelism
- [`Strategy::Tasks`]`(n)`: exactly `n` tasks
- [`Strategy::Max`]: one task per element (maximum parallelism)
- [`Strategy::TasksPerCore`]`(n)`: `n` tasks per CPU core (recommended for CPU-bound work)
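The task counts implied by each strategy reduce to simple chunk arithmetic. The following is a simplified model of that division (the crate's actual splitting logic may differ):

```rust
// Hypothetical model of how each Strategy might translate into a task
// count and a set of contiguous index ranges.
#[derive(Clone, Copy)]
enum Strategy {
    One,
    Tasks(usize),
    Max,
    TasksPerCore(usize),
}

fn task_count(strategy: Strategy, len: usize, cores: usize) -> usize {
    match strategy {
        Strategy::One => 1,
        Strategy::Tasks(n) => n,
        Strategy::Max => len,
        Strategy::TasksPerCore(n) => n * cores,
    }
}

// Split `len` elements into `tasks` contiguous, non-overlapping ranges.
fn chunk_ranges(len: usize, tasks: usize) -> Vec<std::ops::Range<usize>> {
    let chunk = len.div_ceil(tasks);
    (0..tasks)
        .map(|t| (t * chunk).min(len)..((t + 1) * chunk).min(len))
        .collect()
}

fn main() {
    let tasks = task_count(Strategy::Tasks(4), 100, 8);
    let ranges = chunk_ranges(100, tasks);
    assert_eq!(ranges, vec![0..25, 25..50, 50..75, 75..100]);
}
```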
## Performance Considerations
- For CPU-bound work, `Strategy::TasksPerCore(n)` with `n` between 4 and 8 is a good starting point
- For I/O-bound work, consider higher task counts
- For small vectors (<100 elements), parallelization overhead may not be worth it
- The library uses atomic operations for synchronization, avoiding locks
## Safety
This library uses `unsafe` code internally for performance, but maintains safety through:
- Non-overlapping slice assignments for each task
- Atomic counters for task completion tracking
- Careful lifetime management with `Arc` and `Weak` references
- Documented safety invariants on every `unsafe` operation
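The completion-tracking idea can be illustrated with a standalone, lock-free sketch using std atomics and threads. This is not the crate's actual internals, just the underlying pattern:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Run `tasks` workers and detect completion with an atomic countdown.
fn run_tasks(tasks: usize) -> usize {
    let remaining = Arc::new(AtomicUsize::new(tasks));
    let mut handles = Vec::new();
    for _ in 0..tasks {
        let remaining = Arc::clone(&remaining);
        handles.push(thread::spawn(move || {
            // ... a real task would fill its slice of the vector here ...
            // fetch_sub returns the previous value, so the worker that
            // observes 1 is the last to finish and can signal completion
            // (e.g. wake the future that yields the finished Vec).
            if remaining.fetch_sub(1, Ordering::AcqRel) == 1 {
                // last task: signal completion
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    remaining.load(Ordering::Acquire)
}

fn main() {
    assert_eq!(run_tasks(4), 0);
}
```

The `AcqRel` ordering on the decrement ensures each task's writes are visible to whichever task observes the counter reach zero, without taking any lock.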
## Optional Features
- `some_executor`: Enables integration with the `some_executor` crate for convenient task spawning