Intel MPI Benchmarks

The Intel® MPI Benchmarks perform a set of MPI performance measurements for point-to-point and global communication operations over a range of message sizes. The generated benchmark data fully characterizes the performance of a cluster system, including node performance, network latency, and throughput, as well as the efficiency of the MPI implementation used.
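
As an illustration of what such a point-to-point measurement looks like, here is a minimal PingPong-style sketch (an independent simplification, not Intel's code; the message-size sweep and repetition count are arbitrary choices): rank 0 sends a buffer to rank 1, times the echoed round trip, and reports one-way latency and throughput per message size.

    /* Minimal PingPong-style sketch (illustrative; not the IMB implementation). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        if (nprocs < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }
        const int reps = 100;                          /* round trips per message size */
        for (int size = 1; size <= (1 << 22); size <<= 1) {  /* 1 B .. 4 MiB */
            char *buf = malloc((size_t)size);
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int i = 0; i < reps; i++) {
                if (rank == 0) {
                    MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            /* Half the averaged round-trip time approximates the one-way latency. */
            double one_way = (MPI_Wtime() - t0) / reps / 2.0;
            if (rank == 0)
                printf("%8d bytes  %10.2f us  %10.2f MB/s\n",
                       size, one_way * 1e6, size / one_way / 1e6);
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }

The suite itself ships ready-made benchmark drivers; the point-to-point tests are commonly launched as, for example, "mpirun -np 2 IMB-MPI1 PingPong".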


References in zbMATH (referenced in 8 articles)

  1. Ma, Teng; Bosilca, George; Bouteiller, Aurelien; Dongarra, Jack J.: Kernel-assisted and topology-aware MPI collective communications on multicore/many-core platforms (2013)
  2. Wu, Xingfu; Taylor, Valerie: Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers (2013)
  3. Salnikov, A. N.; Andreev, D. Yu.; Lebedev, R. D.: Toolkit for analyzing the communication environment characteristics of a computational cluster based on MPI standard functions (2012)
  4. Feichtinger, Christian; Habich, Johannes; Köstler, Harald; Hager, Georg; Rüde, Ulrich; Wellein, Gerhard: A flexible patch-based lattice Boltzmann parallelization approach for heterogeneous GPU-CPU clusters (2011)
  5. Goglin, Brice: High-performance message-passing over generic Ethernet hardware with Open-MX (2011)
  6. Habich, J.; Zeiser, T.; Hager, G.; Wellein, G.: Performance analysis and optimization strategies for a D3Q19 lattice Boltzmann kernel on nVIDIA GPUs using CUDA (2011)
  7. Bull, J. Mark; Enright, James; Guo, Xu; Maynard, Chris; Reid, Fiona: Performance evaluation of mixed-mode OpenMP/MPI implementations (2010)
  8. Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias: Performance evaluation of supercomputers using HPCC and IMB benchmarks (2008)