§ferrompi
Safe, generic Rust bindings for MPI (Message Passing Interface).
This crate wraps MPI functionality through a thin C layer, providing:
- Type-safe generic API for all MPI datatypes
- Blocking, nonblocking, and persistent (MPI 4.0+) collectives
- Communicator management (split, duplicate)
- RMA shared memory windows (with `rma` feature)
- SLURM environment helpers (with `numa` feature)
- Large count support (MPI 4.0+ `_c` variants)
§Supported Types
All communication operations are generic over `MpiDatatype`:
`f32`, `f64`, `i32`, `i64`, `u8`, `u32`, `u64`
§Quick Start
```rust
use ferrompi::{Mpi, ReduceOp};

fn main() -> Result<(), ferrompi::Error> {
    let mpi = Mpi::init()?;
    let world = mpi.world();
    let rank = world.rank();
    let size = world.size();
    println!("Hello from rank {} of {}", rank, size);

    // Generic broadcast — works with any MpiDatatype
    let mut data = vec![0.0f64; 100];
    if rank == 0 {
        data.fill(42.0);
    }
    world.broadcast(&mut data, 0)?;

    // Generic all-reduce
    let sum = world.allreduce_scalar(rank as f64, ReduceOp::Sum)?;
    println!("Rank {rank}: sum of all ranks = {sum}");
    Ok(())
}
```

§Feature Flags
| Feature | Description | Dependencies |
|---|---|---|
| `rma` | RMA shared memory window operations | — |
| `numa` | NUMA-aware windows and SLURM helpers | `rma` |
| `debug` | Detailed debug output | — |
§Capabilities
- Generic API: All operations work with any `MpiDatatype` (`f32`, `f64`, `i32`, `i64`, `u8`, `u32`, `u64`)
- Blocking collectives: barrier, broadcast, reduce, allreduce, gather, scatter, allgather, alltoall, scan, exscan, reduce_scatter_block, plus V-variants (gatherv, scatterv, allgatherv, alltoallv)
- Nonblocking collectives: All 13 `i`-prefixed variants with `Request` handles
- Persistent collectives (MPI 4.0+): All 11+ `_init` variants with `PersistentRequest` handles
- Scalar and in-place variants: `reduce_scalar`, `allreduce_scalar`, `reduce_inplace`, `allreduce_inplace`, `scan_scalar`, `exscan_scalar`
- Point-to-point: `send`, `recv`, `isend`, `irecv`, `sendrecv`, `probe`, `iprobe`
- Communicator management: `split`, `split_type`, `split_shared`, `duplicate`
- Shared memory windows (feature `rma`): [`SharedWindow<T>`] with RAII lock guards
- SLURM helpers (feature `numa`): Job topology queries via the `slurm` module
- Rich error handling: `MpiErrorClass` categorization with messages from the MPI runtime
§Thread Safety
Communicator is Send + Sync to support hybrid MPI + threads programs
(e.g., MPI between nodes, std::thread::scope within a node).
The actual thread-safety guarantees depend on the thread level requested at initialization:
| Thread Level | Who can call MPI | Synchronization |
|---|---|---|
| `ThreadLevel::Single` | Main thread only | N/A |
| `ThreadLevel::Funneled` | Main thread only | N/A |
| `ThreadLevel::Serialized` | Any thread | User must serialize |
| `ThreadLevel::Multiple` | Any thread | None needed |
```rust
use ferrompi::{Mpi, ThreadLevel};

// Request funneled thread support for hybrid MPI + threads
let mpi = Mpi::init_thread(ThreadLevel::Funneled).unwrap();
assert!(mpi.thread_level() >= ThreadLevel::Funneled);
```

`Mpi` itself is `!Send + !Sync` — MPI initialization and finalization
must occur on the same thread. Only `Communicator` handles (and the
operations on them) may cross thread boundaries.
§Hybrid MPI+OpenMP
For hybrid parallelism, use Mpi::init_thread() with the appropriate level:
- `Funneled` (recommended): Only the main thread makes MPI calls. OpenMP threads handle computation between MPI calls.
- `Serialized`: Any thread can make MPI calls, but only one at a time.
- `Multiple`: Full concurrent MPI from any thread (highest overhead).
```rust
use ferrompi::{Mpi, ThreadLevel, ReduceOp};

let mpi = Mpi::init_thread(ThreadLevel::Funneled).unwrap();
assert!(mpi.thread_level() >= ThreadLevel::Funneled);
let world = mpi.world();

// Worker threads compute locally, main thread calls MPI
let local = 42.0_f64;
let global = world.allreduce_scalar(local, ReduceOp::Sum).unwrap();
```

§SLURM Configuration
```bash
#SBATCH --ntasks-per-node=4   # MPI ranks per node
#SBATCH --cpus-per-task=8     # OpenMP threads per rank
#SBATCH --bind-to core        # Pin MPI ranks

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_program
```

Use the `slurm` module (with the `numa` feature) to read these values at runtime.
See `examples/hybrid_openmp.rs` for the full pattern.
Structs§
- `Communicator`: An MPI communicator.
- `Info`: An MPI info object for passing hints to MPI operations.
- `Mpi`: MPI environment handle.
- `PersistentRequest`: A persistent MPI request handle.
- `Request`: A handle to a nonblocking MPI operation.
- `Status`: Information about a probed or received MPI message.
Enums§
- `DatatypeTag`: Tag values matching C-side `FERROMPI_*` defines.
- `Error`: Error types for MPI operations.
- `MpiErrorClass`: MPI error class, categorizing the type of MPI error.
- `ReduceOp`: Reduction operations.
- `SplitType`: Split types for `Communicator::split_type`.
- `ThreadLevel`: MPI thread support levels.
Traits§
- `MpiDatatype`: Trait for types that can be used in MPI communication operations.

Type Aliases§
- `Result`: Result type for MPI operations.