The paper presents Heterogeneous MPI (HeteroMPI), an extension of MPI for programming high-performance computations on heterogeneous networks of computers. It allows the application programmer to describe the performance model of the implemented algorithm in a generic form. This model captures all the main features of the underlying parallel algorithm that affect its execution performance: the total number of parallel processes, the total volume of computations to be performed by each process, the total volume of data to be transferred between each pair of processes, and how exactly the processes interact during the execution of the algorithm. Given a description of the performance model, HeteroMPI tries to create a group of processes that executes the algorithm faster than any other group. The principal extensions to MPI are presented. The features of the library are demonstrated by experiments with parallel simulation of the interaction of electric and magnetic fields and with parallel matrix multiplication.
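The abstract's central idea is that, given per-process performance characteristics, the runtime can assign each process a volume of computation matched to its speed. The sketch below illustrates that proportional-partitioning idea in isolation; the function name and interface are illustrative only and are not part of the HeteroMPI API, which instead derives the mapping from the programmer-supplied performance model.

```python
def distribute(n, speeds):
    """Split n units of work among processes in proportion to their
    relative speeds, so faster processes receive more work.

    A simplified, hypothetical sketch of the kind of model-based
    mapping a runtime like HeteroMPI performs -- not its real API.
    """
    total = sum(speeds)
    # Each process gets the floor of its proportional share.
    share = [int(n * s / total) for s in speeds]
    # Units lost to rounding down go to the fastest process.
    share[speeds.index(max(speeds))] += n - sum(share)
    return share

# Three processes with relative speeds 1 : 2 : 3 splitting 600 units.
print(distribute(600, [1.0, 2.0, 3.0]))  # [100, 200, 300]
```

In HeteroMPI itself this decision is made when a group of processes is created from the performance-model description, rather than by an explicit call like the one above.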

References in zbMATH (referenced in 3 articles, 1 standard article)


  1. Ma, Yan; Chen, Lajiao; Liu, Peng; Lu, Ke: Parallel programing templates for remote sensing image processing on GPU architectures: design and implementation (2016)
  2. Plaza, Antonio; Plaza, Javier; Vegas, Hugo: Improving the performance of hyperspectral image and signal processing algorithms using parallel, distributed and specialized hardware-based systems (2010)
  3. Lastovetsky, Alexey; Reddy, Ravi: HeteroMPI: towards a message-passing library for heterogeneous networks of computers (2006)

Further publications can be found at: http://hcl.ucd.ie/biblio/keyword/HeteroMPI