Global Arrays

The Global Arrays (GA) toolkit provides an efficient and portable "shared-memory" programming interface for distributed-memory computers. Each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed dense multi-dimensional arrays, without the need for explicit cooperation by other processes. Unlike other shared-memory environments, the GA model exposes to the programmer the non-uniform memory access (NUMA) characteristics of high-performance computers and acknowledges that access to a remote portion of the shared data is slower than access to the local portion. Locality information for the shared data is available, and direct access to the local portions of shared data is provided.

Global Arrays have been designed to complement, rather than substitute for, the message-passing programming model. The programmer is free to use both the shared-memory and message-passing paradigms in the same program, and to take advantage of existing message-passing software libraries; Global Arrays are compatible with the Message Passing Interface (MPI). The Global Arrays toolkit has been in the public domain since 1994 and has been actively supported and employed in several large codes since then. EMSL software products such as NWChem use the Global Arrays programming toolkit to provide high-performance parallel processing.
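The programming model described above (every process can asynchronously read or write any block of a distributed array, and can query which block it owns locally) can be sketched with a small, self-contained Python analogy. This is not the actual GA C/Fortran API; it merely illustrates the one-sided put/get pattern using the standard-library `multiprocessing` module, and the names `distribution`, `worker`, `NPROCS`, and `BLOCK` are invented for the example.

```python
# Illustrative analogy of the GA one-sided access model (not the GA API):
# a shared array partitioned into per-process blocks, where each process
# writes ("puts") its own block without cooperation from the others, and
# any process can read ("get") a remote block directly.
from multiprocessing import Process, Array

NPROCS = 4   # number of worker processes
BLOCK = 5    # elements owned by each process

# The "global array": shared memory visible to all processes.
ga = Array('d', NPROCS * BLOCK, lock=True)

def distribution(rank):
    """Locality query: the [lo, hi) index range owned by `rank`
    (analogous to asking GA which block is local)."""
    return rank * BLOCK, (rank + 1) * BLOCK

def worker(rank, ga):
    lo, hi = distribution(rank)
    # "Put": write the locally owned block; no other process is involved.
    ga[lo:hi] = [float(rank)] * BLOCK

def main():
    procs = [Process(target=worker, args=(r, ga)) for r in range(NPROCS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # "Get": read a block owned by another process directly.
    lo, hi = distribution(2)
    print(list(ga[lo:hi]))

if __name__ == "__main__":
    main()
```

In the real toolkit the same pattern is expressed through the GA library calls (array creation, one-sided put/get, and a distribution query), with the toolkit handling the communication on distributed-memory hardware rather than relying on a single node's shared memory.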


References in zbMATH (referenced in 8 articles)


  1. Steefel, C.I.; Appelo, C.A.J.; Arora, B.; Jacques, D.; Kalbacher, T.; Kolditz, O.; Lagneau, V.; Lichtner, P.C.; Mayer, K.U.; Meeussen, J.C.L.; Molins, S.; Moulton, D.; Shao, H.; Šimůnek, J.; Spycher, N.; Yabusaki, S.B.; Yeh, G.T.: Reactive transport codes for subsurface environmental simulation (2015)
  2. Fraguela, Basilio B.; Bikshandi, Ganesh; Guo, Jia; Garzarán, María J.; Padua, David; Von Praun, Christoph: Optimization techniques for efficient HTA programs (2012)
  3. Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram; Baumgartner, Gerald; Ramanujam, J.; Sadayappan, P.: Empirical performance model-driven data layout optimization and library call selection for tensor contraction expressions (2012)
  4. Rauber, Thomas; Rünger, Gudula: Parallel programming for multicore and cluster systems (2010)
  5. Valiev, M.; Bylaska, E.J.; Govind, N.; Kowalski, K.; Straatsma, T.P.; Van Dam, H.J.J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T.L.; de Jong, W.A.: NWChem: a comprehensive and scalable open-source solution for large scale molecular simulations (2010)
  6. Jakušev, Alexander; Čiegis, Raimondas; Laukaitytė, Inga; Trofimov, Vyacheslav: Parallelization of linear algebra algorithms using ParSol library of mathematical objects (2009)
  7. Tipparaju, Vinod; Krishnan, Manoj; Palmer, Bruce; Petrini, Fabrizio; Nieplocha, Jarek: Towards fault resilient global arrays (2008)
  8. Chen, Guo-Liang; Sun, Guang-Zhong; Zhang, Yun-Quan; Mo, Ze-Yao: Study on parallel computing (2006)


Further publications can be found at: http://www.emsl.pnl.gov/docs/global/papers/index.shtml