MPItrampoline

A forwarding MPI implementation that can use any other MPI implementation via an MPI ABI

Overview

MPI is the de-facto standard for inter-node communication on HPC systems, and has been for the past 25 years. While highly successful, MPI is a standard for source code: it defines an API, but it does not define an ABI, so it provides no binary compatibility. This means that applications running on HPC systems need to be compiled anew on every system. This is tedious, since the software available on each HPC system differs slightly.

This project attempts to remedy this. It defines an ABI for MPI, and provides an MPI implementation based on this ABI. That is, MPItrampoline does not implement any MPI functions itself; it only forwards them to a "real" implementation via this ABI. The advantage is that one can produce "portable" applications that can use any given MPI implementation. For example, this will make it possible to build external packages for Julia via Yggdrasil that run efficiently on almost any HPC system.

A small and simple MPIwrapper library is used to provide this ABI for any given MPI installation. MPIwrapper needs to be compiled for each MPI installation that is to be used with MPItrampoline, but this is quick and easy.
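
The forwarding works by loading the MPIwrapper library at startup and calling MPI functions through function pointers with fixed, implementation-independent types. The following is a minimal sketch of this idea, assuming a wrapper library libmpiwrapper.so; the symbol name MPIwrapper_Init and the helper load_wrapper are illustrative, not MPItrampoline's actual names.

// Minimal sketch of the trampoline idea (illustrative names throughout).
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

// Pointer to the wrapped implementation's MPI_Init, using fixed ABI types.
static int (*wrapped_Init)(int *argc, char ***argv);

// Load the wrapper library chosen at run time and look up its symbols once.
static void load_wrapper(const char *libname) {
  void *handle = dlopen(libname, RTLD_NOW | RTLD_GLOBAL);
  if (!handle) {
    fprintf(stderr, "could not load %s: %s\n", libname, dlerror());
    exit(1);
  }
  wrapped_Init = (int (*)(int *, char ***))dlsym(handle, "MPIwrapper_Init");
}

// The application-facing function simply forwards through the pointer.
int MPI_Init(int *argc, char ***argv) { return wrapped_Init(argc, argv); }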

Successfully Tested

  • Debian 11.0 via Docker (MPICH; arm32v5, arm32v7, arm64v8, mips64le, ppc64le, riscv64; C/C++ only)
  • Debian 11.0 via Docker (MPICH; i386, x86-64)
  • macOS laptop (MPICH, OpenMPI; x86-64)
  • macOS via GitHub Actions (OpenMPI; x86-64)
  • Ubuntu 20.04 via Docker (MPICH; x86-64)
  • Ubuntu 20.04 via GitHub Actions (MPICH, OpenMPI; x86-64)
  • Blue Waters, HPC system at the NCSA (Cray MPICH; x86-64)
  • Graham, HPC system at Compute Canada (Intel MPI; x86-64)
  • Marconi A3, HPC system at Cineca (Intel MPI; x86-64)
  • Niagara, HPC system at Compute Canada (OpenMPI; x86-64)
  • Summit, HPC system at ORNL (Spectrum MPI; IBM POWER 9)
  • Symmetry, in-house HPC system at the Perimeter Institute (MPICH, OpenMPI; x86-64)

Workflow

Preparing an HPC system

Install MPIwrapper on the HPC system, wrapping the MPI installation you want to use there. You can install MPIwrapper multiple times if you want to wrap more than one MPI implementation.

This may be as simple as running, from the MPIwrapper source directory:

cmake -S . -B build -DMPIEXEC_EXECUTABLE=mpiexec -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=$HOME/mpiwrapper
cmake --build build
cmake --install build

but nothing is ever simple on an HPC system. It might be necessary to load certain modules, or to pass additional CMake options so that CMake finds the right MPI installation.

The MPIwrapper libraries remain on the HPC system; they are installed independently of any application.

Building an application

Build your application as usual, using MPItrampoline as the MPI library.
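
No MPItrampoline-specific code is required; any standard MPI program works unchanged. A generic example, which can be compiled with the mpicc wrapper that MPItrampoline installs:

// hello.c -- a plain MPI program; nothing here is MPItrampoline-specific.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Hello from rank %d of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}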

Running an application

At startup time, MPItrampoline needs to be told which MPIwrapper library to use. This is done via the environment variable MPITRAMPOLINE_LIB. You also need to point MPItrampoline's mpiexec to the corresponding launcher created by MPIwrapper, using the environment variable MPITRAMPOLINE_MPIEXEC.

For example:

env MPITRAMPOLINE_MPIEXEC=$HOME/mpiwrapper/bin/mpiwrapper-mpiexec MPITRAMPOLINE_LIB=$HOME/mpiwrapper/lib/libmpiwrapper.so mpiexec -n 4 ./your-application

The mpiexec you run here needs to be the one provided by MPItrampoline.

Current state

MPItrampoline uses the C preprocessor to create wrapper functions for each MPI function. This is how MPI_Send is wrapped:

FUNCTION(int, Send,
         (const void *buf, int count, MT(Datatype) datatype, int dest, int tag,
          MT(Comm) comm),
         (buf, count, (MP(Datatype))datatype, dest, tag, (MP(Comm))comm))
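
Conceptually, this macro expands to a thin forwarding function. The sketch below shows the kind of code that results, with made-up type and symbol names (MT(X) stands in for the trampoline's ABI type for X, MP(X) for the wrapped implementation's type); the actual generated code differs.

// Hypothetical expansion of FUNCTION(int, Send, ...) -- illustrative only.
typedef void *MPItrampoline_Datatype; /* stands in for MT(Datatype) */
typedef void *MPItrampoline_Comm;     /* stands in for MT(Comm) */
typedef void *MPIwrapper_Datatype;    /* stands in for MP(Datatype) */
typedef void *MPIwrapper_Comm;        /* stands in for MP(Comm) */

// Pointer into the MPIwrapper library, resolved when it is loaded.
extern int (*mpiwrapper_Send)(const void *, int, MPIwrapper_Datatype, int,
                              int, MPIwrapper_Comm);

int MPI_Send(const void *buf, int count, MPItrampoline_Datatype datatype,
             int dest, int tag, MPItrampoline_Comm comm) {
  return mpiwrapper_Send(buf, count, (MPIwrapper_Datatype)datatype, dest, tag,
                         (MPIwrapper_Comm)comm);
}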

Unfortunately, MPItrampoline does not yet wrap the Fortran API. Your help is welcome.

Certain MPI types, constants, and functions are difficult to wrap. Theoretically, there could be MPI libraries for which the current MPI ABI cannot be implemented. If you encounter this, please let me know -- maybe there is a workaround.

Comments
  • Add support for MPI profiling interface

    Each standard MPI function can be called with an MPI_ or PMPI_ prefix (quoting from https://www.open-mpi.org/faq/?category=perftools#PMPI); I think MPItrampoline doesn't currently support the PMPI_ calls. (A sketch of the PMPI interposition pattern appears after this list.)

    opened by ocaisa 6
  • Supporting `MPIX_Query_cuda_support()`

    While not part of the standard, MPIX_Query_cuda_support() is available in a number of MPI implementations (see https://github.com/pmodels/mpich/pull/4741). It's also used by a number of applications (and hopefully that will grow; see my issue at https://github.com/lammps/lammps/issues/3140, which links back to the support in GROMACS). This would be a valuable inclusion in MPItrampoline, since it would then be able to handle the runtime detection of CUDA support in the MPI implementation. (A guarded usage sketch appears after this list.)

    opened by ocaisa 6
  • Allow overriding default compilation options

    Currently, default compilation options are set during the build of MPItrampoline; it would probably be useful to be able to fully override these, e.g.,

    exec ${MPITRAMPOLINE_CC:-@CMAKE_C_COMPILER@} ${CFLAGS:-"@CMAKE_C_FLAGS@"} -I@CMAKE_INSTALL_PREFIX@/@CMAKE_INSTALL_INCLUDEDIR@ @LINK_FLAGS@ -L@CMAKE_INSTALL_PREFIX@/@CMAKE_INSTALL_LIBDIR@ -Wl,-rpath,@CMAKE_INSTALL_PREFIX@/@CMAKE_INSTALL_LIBDIR@ "$@" -lmpi -ldl
    
    opened by ocaisa 5
  • Allow use of a fallback/default value for `MPITRAMPOLINE_MPIEXEC`

    It's possible to configure a default MPI library with -DMPITRAMPOLINE_DEFAULT_LIB=XXX; it would be good to also be able to configure a default mpiexec (-DMPITRAMPOLINE_DEFAULT_MPIEXEC=XXX), so that one can have a fully functional fallback in place at build time.

    opened by ocaisa 3
  • Incomplete installation

    Hello! I am looking at installing MPItrampoline, and following the steps outlined in the README.md, I cannot access the files referenced later.

    For instance, on Summit at ORNL, using the following script:

    INSTALLDIR=$HOME/mpiwrapper
    
    module load cmake/3.18
    module load gcc
    
    cmake -S . -B build -DMPIEXEC_EXECUTABLE=mpiexec \
        -DCMAKE_BUILD_TYPE=RelWithDebInfo \
        -DCMAKE_INSTALL_PREFIX=$INSTALLDIR
    cmake --build build
    cmake --install build
    

    The following installation is generated:

    $ tree mpiwrapper/
    mpiwrapper/
    |-- bin
    |   |-- mpicc
    |   |-- mpicxx
    |   |-- mpiexec
    |   |-- mpifc
    |   `-- mpifort
    |-- include
    |   |-- mpi.h
    |   |-- mpi.mod
    |   |-- mpi_declarations.h
    |   |-- mpi_declarations_fortran.h
    |   |-- mpi_declarations_fortran90.h
    |   |-- mpi_defaults.h
    |   |-- mpi_f08.mod
    |   |-- mpi_version.h
    |   |-- mpiabi.h
    |   |-- mpiabif.h
    |   |-- mpif.h
    |   `-- mpio.h
    |-- lib
    |   |-- cmake
    |   |   `-- MPItrampoline
    |   |       |-- MPItrampolineConfig.cmake
    |   |       |-- MPItrampolineConfigVersion.cmake
    |   |       |-- MPItrampolineTargets-relwithdebinfo.cmake
    |   |       `-- MPItrampolineTargets.cmake
    |   `-- pkgconfig
    |       `-- MPItrampoline.pc
    `-- lib64
        |-- libmpi.a
        `-- libmpifort.a
    
    7 directories, 24 files
    

    The README.md says to use the wrapped libraries, but no shared libraries are available here. I looked through the options in the CMakeLists.txt, but the only one defined in the project is the Fortran flag.

    Am I missing something ?

    opened by spoutn1k 2
  •  Issues building Global Arrays

    I'm trying to build the Global Arrays library with MPItrampoline and have run into some issues. This package is a dependency for Molpro and several other computational chemistry applications. Any assistance that you can provide to get it working would be greatly appreciated.

    The compilation fails at https://github.com/GlobalArrays/ga/blob/f4016b869dfd1a2b2856f74a73dd9452dbbc8ae4/comex/src-mpi-pr/comex.c#L4968 with the error message:

    libtool: compile:  mpicc -DHAVE_CONFIG_H -I. -I./src-common -I./src-mpi-pr -g -O2 -MT src-mpi-pr/comex.lo -MD -MP -MF src-mpi-pr/.deps/comex.Tpo -c src-mpi-pr/comex.c -o src-mpi-pr/comex.o
    src-mpi-pr/comex.c: In function ‘str_mpi_retval’:
    src-mpi-pr/comex.c:4932:9: error: case label does not reduce to an integer constant
             case MPI_SUCCESS       : msg = "MPI_SUCCESS"; break;
             ^
    

    Steps to reproduce:

    git clone -b develop https://github.com/GlobalArrays/ga
    cd ga
    ./autogen.sh
    ./configure --with-mpi-pr --with-blas=no --with-lapack=no --with-scalapack=no --disable-f77
    make
    

    I've tried both the develop and master branches.

    There are lots of configurations which can be used; we're most interested in the recommended port, which uses MPI-1 with progress ranks (--with-mpi-pr), but I tried several other configurations without success.

    I'm not sure whether it's an MPItrampoline issue or whether it's non-standard use of MPI within Global Arrays. (A portable rewrite of the failing switch is sketched after this list.)

    I've attached the full output from the build: build-ga.txt

    I've tried on

    • RHEL 7.9 with gcc 4.8.5
    • Ubuntu 22.04 with gcc 11.3.0
    opened by nick-wilson 9
  • Issues building CP2K

    I thought I would give this a full test with Fortran, and CP2K is a good benchmark for that. The build (v8.2) is failing with:

    /project/60005/easybuild/build/CP2K/8.2/gmtfbf-2021a/cp2k-8.2/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F:1669:21:
    
     1669 |       CALL mpi_bcast(msg, msglen, MPI_LOGICAL, source, gid, ierr)
          |                     1
    ......
     3160 |       CALL mpi_bcast(msg, msglen, ${mpi_type1}$, source, gid, ierr)
          |                     2
    Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/COMPLEX(4)).
    
    opened by ocaisa 24
  • Building shared and static libraries at once

    Currently the default behaviour is to only build static libraries. It might be good to build both static and shared libraries: if a libmpi.so is in the default search path, it would then not be selected over the library from MPItrampoline (MPItrampoline would shadow both libmpi.so and libmpi.a).

    opened by ocaisa 4
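
Regarding the profiling-interface comment above: the PMPI_ names are part of the MPI standard. A profiling tool interposes on the MPI_ name and forwards to the PMPI_ one, roughly as in this sketch (illustrative; whether MPItrampoline forwards the PMPI_ symbols is exactly what that issue asks):

// Sketch of the standard PMPI profiling pattern: a tool defines MPI_Send,
// records what it wants, and forwards to PMPI_Send, the real implementation.
#include <mpi.h>
#include <stdio.h>

int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest,
             int tag, MPI_Comm comm) {
  printf("MPI_Send: count=%d dest=%d tag=%d\n", count, dest, tag);
  return PMPI_Send(buf, count, datatype, dest, tag, comm);
}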
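
For the MPIX_Query_cuda_support() comment: where the function exists, applications typically guard the call at compile time. The sketch below follows Open MPI's documented pattern (the OPEN_MPI and MPIX_CUDA_AWARE_SUPPORT macros are Open MPI's); other implementations expose the function differently.

#include <mpi.h>
#if defined(OPEN_MPI) && OPEN_MPI
#include <mpi-ext.h> /* Open MPI declares MPIX_CUDA_AWARE_SUPPORT here */
#endif

// Returns 1 if the MPI library reports CUDA awareness at run time.
int mpi_has_cuda_support(void) {
#if defined(MPIX_CUDA_AWARE_SUPPORT)
  return MPIX_Query_cuda_support();
#else
  return 0; /* no compile-time evidence of CUDA awareness */
#endif
}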
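
The Global Arrays failure above occurs because, under MPItrampoline's ABI, error codes such as MPI_SUCCESS are apparently not compile-time integer constants, which C requires for case labels. A portable rewrite replaces the switch with comparisons; a sketch (not the actual Global Arrays fix):

// Sketch: an if/else chain only needs the error codes as run-time values,
// so it works whether or not MPI_SUCCESS etc. are constant expressions.
#include <mpi.h>

static const char *str_mpi_retval(int retval) {
  if (retval == MPI_SUCCESS)  return "MPI_SUCCESS";
  if (retval == MPI_ERR_COMM) return "MPI_ERR_COMM";
  if (retval == MPI_ERR_RANK) return "MPI_ERR_RANK";
  return "unknown MPI error code";
}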