# FerroMPI
Lightweight Rust bindings for MPI 4.x with persistent collectives support.
FerroMPI provides Rust bindings to MPI through a thin C wrapper layer, enabling access to MPI 4.0+ features like persistent collectives that are not available in other Rust MPI bindings.
## Features
- 🚀 MPI 4.0+ support: Persistent collectives, large-count operations
- 🪶 Lightweight: Minimal C wrapper (~700 lines), focused API
- 🔒 Safe: Rust-idiomatic API with proper error handling and RAII
- 🔧 Flexible: Works with MPICH, Open MPI, Intel MPI, and Cray MPI
- ⚡ Fast: Zero-cost abstractions, direct FFI calls
## Why FerroMPI?
| Feature | FerroMPI | rsmpi |
|---|---|---|
| MPI Version | 4.1 | 3.1 |
| Persistent Collectives | ✅ | ❌ |
| Large Count (>2³¹) | ✅ | ❌ |
| API Style | Minimal, focused | Comprehensive |
| C Wrapper | ~700 lines | None (direct bindings) |
FerroMPI is ideal for:
- Iterative algorithms benefiting from persistent collectives (10-30% speedup)
- Applications with large data transfers (>2GB)
- Users who want a simple, focused MPI API
## Quick Start

### Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
ferrompi = "0.1"
```
### Requirements

- Rust 1.74+
- MPICH 4.0+ (recommended) or Open MPI 5.0+

Ubuntu/Debian: `sudo apt install mpich libmpich-dev`

macOS: `brew install mpich`
### Hello World

```rust
use ferrompi::init;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize MPI; it is finalized automatically when `mpi` is dropped (RAII)
    let mpi = init()?;
    let world = &mpi.world;

    println!("Hello from rank {}", world.rank);
    Ok(())
}
```
## Examples

The snippets below sketch the intended call patterns for the `_f64` entry points; exact signatures are in the crate documentation.

### Blocking Collectives

```rust
use ferrompi::{init, Op}; // `Op` is the reduction-operation type (name illustrative)

let mpi = init()?;
let world = &mpi.world;

// Broadcast: rank 0 fills the buffer, every rank ends up with a copy
let mut data = vec![0.0_f64; 4];
if world.rank == 0 {
    data = vec![1.0, 2.0, 3.0, 4.0];
}
world.broadcast_f64(&mut data, 0)?; // root rank = 0

// All-reduce: element-wise sum across all ranks
let send = vec![world.rank as f64; 4];
let mut recv = vec![0.0_f64; 4];
world.allreduce_f64(&send, &mut recv, Op::Sum)?;

// Gather: rank 0 collects each rank's contribution
let my_data = vec![world.rank as f64];
let mut gathered = vec![0.0_f64; world.size]; // one slot per rank, used on the root
world.gather_f64(&my_data, &mut gathered, 0)?;
```
### Nonblocking Collectives

```rust
use ferrompi::{init, Op};

let mpi = init()?;
let world = &mpi.world;

let send = vec![1.0_f64; 1_000_000];
let mut recv = vec![0.0_f64; 1_000_000];

// Start nonblocking operation
let request = world.iallreduce_f64(&send, &mut recv, Op::Sum)?;

// Do other work while communication proceeds...
expensive_computation();

// Wait for completion
request.wait()?;
// recv now contains the result
```
### Persistent Collectives (MPI 4.0+)

```rust
use ferrompi::init;

let mpi = init()?;
let world = &mpi.world;

// Buffer used for all iterations (registered with MPI exactly once)
let mut data = vec![0.0_f64; 1024];

// Initialize ONCE
let mut persistent = world.bcast_init_f64(&mut data, 0)?;

// Use MANY times - amortizes setup cost!
for _iter in 0..10_000 {
    persistent.start()?; // begin this round's broadcast (mirrors MPI_Start)
    persistent.wait()?;  // complete it (mirrors MPI_Wait)
}

// Cleanup on drop: the persistent request frees its MPI resources when it goes out of scope
```
## API Reference

### Core Types

| Type | Description |
|---|---|
| `Mpi` | MPI environment handle (init/finalize) |
| `Communicator` | MPI communicator wrapper |
| `Request` | Nonblocking operation handle |
| `PersistentRequest` | Persistent operation handle (MPI 4.0+) |
### Collective Operations

| Operation | Blocking | Nonblocking | Persistent |
|---|---|---|---|
| Broadcast | `broadcast_f64` | `ibroadcast_f64` | `bcast_init_f64` |
| Reduce | `reduce_f64` | - | - |
| Allreduce | `allreduce_f64` | `iallreduce_f64` | `allreduce_init_f64` |
| Gather | `gather_f64` | - | - |
| Allgather | `allgather_f64` | - | - |
| Scatter | `scatter_f64` | - | - |
### Reduction Operations

Reducing collectives (`reduce_f64`, `allreduce_f64`, and their variants) take a reduction-operation argument selecting one of the standard MPI operations such as sum, product, min, and max.
## Running Tests

```bash
# Build examples
cargo build --examples

# Run hello world (use the example name from examples/)
mpiexec -n 4 target/debug/examples/hello_world

# Run all examples, 4 ranks each
for ex in examples/*.rs; do
    mpiexec -n 4 "target/debug/examples/$(basename "$ex" .rs)"
done
```
## Configuration

### Environment Variables

| Variable | Description | Example |
|---|---|---|
| `MPI_PKG_CONFIG` | pkg-config name | `mpich`, `ompi` |
| `MPICC` | MPI compiler wrapper | `/opt/mpich/bin/mpicc` |
| `CRAY_MPICH_DIR` | Cray MPI installation | `/opt/cray/pe/mpich/8.1.25` |
### Build Configuration

FerroMPI automatically detects MPI installations by trying, in order (sketched below):

1. the `MPI_PKG_CONFIG` environment variable
2. pkg-config (`mpich`, `ompi`, `mpi`)
3. `mpicc -show` output
4. `CRAY_MPICH_DIR` (for Cray systems)
5. common installation paths
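As a rough sketch of that probing order, a `build.rs` along these lines could drive the detection. It assumes the `pkg-config` crate as a build dependency and is illustrative rather than FerroMPI's actual build script:

```rust
// build.rs - illustrative sketch of the detection order (not the actual build script)
use std::{env, process::Command};

fn probe_mpi() -> Option<String> {
    // 1. Explicit override via the MPI_PKG_CONFIG environment variable
    if let Ok(name) = env::var("MPI_PKG_CONFIG") {
        if pkg_config::probe_library(&name).is_ok() {
            return Some(name);
        }
    }

    // 2. Well-known pkg-config package names
    for name in ["mpich", "ompi", "mpi"] {
        if pkg_config::probe_library(name).is_ok() {
            return Some(name.to_string());
        }
    }

    // 3. Ask the compiler wrapper how it compiles/links MPI programs
    let mpicc = env::var("MPICC").unwrap_or_else(|_| "mpicc".to_string());
    if let Ok(out) = Command::new(&mpicc).arg("-show").output() {
        if out.status.success() {
            // A real script would parse the -I/-L/-l flags from this output
            // and emit the matching cargo:rustc-link-* directives.
            return Some(mpicc);
        }
    }

    // 4. CRAY_MPICH_DIR and 5. common installation prefixes would be tried here
    None
}

fn main() {
    let mpi = probe_mpi().expect("Could not find MPI installation");
    println!("cargo:warning=building FerroMPI against `{mpi}`");
}
```

In this sketch, `pkg_config::probe_library` already emits the needed `cargo:rustc-link-*` directives; the `mpicc -show` branch would have to parse the wrapper's link line and emit them itself.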
## Troubleshooting

### "Could not find MPI installation"

```bash
# Check if MPI is installed
mpicc -show

# Set pkg-config name explicitly
export MPI_PKG_CONFIG=mpich
cargo build
```
"Persistent collectives not available"
Persistent collectives require MPI 4.0+. Check your MPI version:
# MPICH Version: 4.2.0 โ
# Open MPI 5.0.0 โ
# MPICH Version: 3.4.2 โ (too old)
### macOS linking issues

If the build links against the wrong MPI (or none at all), point it at your MPI compiler wrapper explicitly, e.g. `MPICC=$(brew --prefix mpich)/bin/mpicc cargo build`.
## Architecture

```
┌───────────────────────────┐
│     Rust Application      │
├───────────────────────────┤
│   ferrompi (Safe Rust)    │
├───────────────────────────┤
│     ffi.rs (bindings)     │
├───────────────────────────┤
│   ferrompi.c (C layer)    │  ← ~700 lines
├───────────────────────────┤
│      MPICH / Open MPI     │
└───────────────────────────┘
```
The C layer provides:
- Handle tables for MPI opaque objects
- Automatic large-count operation selection
- Thread-safe request management
- Graceful degradation for MPI <4.0 (see the FFI sketch below)
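To make the layering concrete, here is a rough sketch of what the boundary between `ffi.rs` and `ferrompi.c` looks like in spirit. The symbol names, handle scheme, and error codes shown are illustrative, not the crate's actual exports:

```rust
// Illustrative sketch of the FFI boundary; actual symbol names may differ.
use std::os::raw::{c_double, c_int};

extern "C" {
    // The C layer identifies communicators by small integer handles
    // looked up in its handle table, never by raw MPI_Comm values.
    fn ferrompi_allreduce_f64(
        comm: c_int,          // handle into the C layer's communicator table
        send: *const c_double,
        recv: *mut c_double,
        count: i64,           // 64-bit count: the C side switches to MPI_Allreduce_c when needed
        op: c_int,            // reduction operation code
    ) -> c_int;               // 0 on success, an error code otherwise
}

pub struct Communicator {
    handle: c_int,
}

impl Communicator {
    pub fn allreduce_f64(&self, send: &[f64], recv: &mut [f64], op: i32) -> Result<(), i32> {
        assert_eq!(send.len(), recv.len());
        // Safety: both slices are valid for `count` elements and outlive the call.
        let rc = unsafe {
            ferrompi_allreduce_f64(
                self.handle,
                send.as_ptr(),
                recv.as_mut_ptr(),
                send.len() as i64,
                op,
            )
        };
        if rc == 0 { Ok(()) } else { Err(rc) }
    }
}
```

Because every wrapped operation follows this shape (integer handle in, status code out), the safe layer can reduce to bounds checks, pointer extraction, and converting status codes into `Result`s.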
## License

Licensed under the MIT license ([LICENSE](LICENSE)).
## Contributing

Contributions welcome! Please ensure:

- All examples pass with `mpiexec -n 4`
- New features include tests and documentation
- Code follows Rust style guidelines (`cargo fmt`, `cargo clippy`)
## Acknowledgments
FerroMPI was inspired by:
- rsmpi - Comprehensive MPI bindings for Rust
- The MPI Forum for the excellent MPI 4.0 specification