ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK. It is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers and for networks of workstations supporting Parallel Virtual Machine (PVM) and/or Message Passing Interface (MPI). It is a continuation of the LAPACK project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines for solving systems of linear equations, least squares problems, and eigenvalue problems. The goals of both projects are efficiency, scalability, reliability, portability, flexibility, and ease of use.

ScaLAPACK includes routines for the solution of dense, band, and tridiagonal linear systems of equations, with condition estimation and iterative refinement for the LU and Cholesky factorizations; matrix inversion; full-rank linear least squares problems; orthogonal and generalized orthogonal factorizations; orthogonal transformation routines; reductions to upper Hessenberg, bidiagonal, and tridiagonal form; reduction of a symmetric-definite or Hermitian-definite generalized eigenproblem to standard form; and the symmetric/Hermitian, generalized symmetric/Hermitian, and nonsymmetric eigenproblems. Prototype codes are provided for out-of-core LU, Cholesky, and QR solvers, the matrix sign function for eigenproblems, and an HPF interface to a subset of ScaLAPACK routines.

Software is available in single-precision real, double-precision real, single-precision complex, and double-precision complex arithmetic.
The software has been written to be portable across a wide range of distributed-memory environments, such as the Cray T3 series, the IBM SP, the Intel series, the Thinking Machines CM-5, clusters of workstations, and any system for which PVM or MPI is available.

Each Users' Guide includes a CD-ROM containing the HTML version of the ScaLAPACK Users' Guide, the source code for the package, testing and timing programs, prebuilt versions of the library for a number of computers, example programs, and the full set of LAPACK Working Notes.
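One detail the summary above leaves implicit is how ScaLAPACK distributes a dense matrix across the process grid: it uses a two-dimensional block-cyclic layout, and its TOOLS routines INDXG2P and INDXG2L map a global row or column index to the owning process coordinate and the local index on that process. A minimal Python sketch of that mapping, assuming zero-based global indices and that the first block resides on process row/column 0:

```python
def indxg2p(i, nb, nprocs):
    """Process (row or column) coordinate that owns global index i,
    for block size nb cycled over nprocs processes."""
    return (i // nb) % nprocs

def indxg2l(i, nb, nprocs):
    """Local index of global index i within the owning process."""
    local_block = (i // nb) // nprocs   # how many of my blocks precede it
    return local_block * nb + i % nb    # offset within the local block

# Distribute 10 global rows with block size 2 over 2 process rows:
# blocks {0,1}, {4,5}, {8,9} land on process 0; {2,3}, {6,7} on process 1.
owners = [indxg2p(i, 2, 2) for i in range(10)]
print(owners)            # [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
print(indxg2l(4, 2, 2))  # global row 4 is local row 2 on process 0
```

In the library itself these mappings are computed by the Fortran functions INDXG2P and INDXG2L, which additionally take the coordinate of the process holding the first block; the sketch fixes that source process to zero for brevity.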

References in zbMATH (referenced in 349 articles, 3 standard articles)

Showing results 1 to 20 of 349.
Sorted by year (citations)


  1. Gentle, James E.: Matrix algebra. Theory, computations and applications in statistics (2017)
  2. Hadjiantoni, Stella; Kontoghiorghes, Erricos John: Estimating large-scale general linear and seemingly unrelated regressions models after deleting observations (2017)
  3. Stoykov, S.; Margenov, S.: Numerical methods and parallel algorithms for computation of periodic responses of plates (2017)
  4. Misawa, Takahiro; Morita, Satoshi; Yoshimi, Kazuyoshi; Kawamura, Mitsuaki; Motoyama, Yuichi; Ido, Kota; Ohgoe, Takahiro; Imada, Masatoshi; Kato, Takeo: mVMC - Open-source software for many-variable variational Monte Carlo method (2017) arXiv
  5. Wang, D.S.; Hill, Charles D.; Hollenberg, L.C.L.: Simulations of Shor’s algorithm using matrix product states (2017)
  6. Xin, Zixing; Xia, Jianlin; de Hoop, Maarten V.; Cauley, Stephen; Balakrishnan, Venkataramanan: A distributed-memory randomized structured multifrontal method for sparse direct solutions (2017)
  7. Beliakov, Gleb; Matiyasevich, Yuri: A parallel algorithm for calculation of determinants and minors using arbitrary precision arithmetic (2016)
  8. Drmač, Zlatko; Gugercin, Serkan: A new selection operator for the discrete empirical interpolation method -- improved a priori error bound and extensions (2016)
  9. Houska, Boris; Frasch, Janick; Diehl, Moritz: An augmented Lagrangian based algorithm for distributed nonconvex optimization (2016)
  10. Lācis, Uģis; Taira, Kunihiko; Bagheri, Shervin: A stable fluid-structure-interaction solver for low-density rigid bodies using the immersed boundary projection method (2016)
  11. Liu, Xiao; Xia, Jianlin; de Hoop, Maarten V.: Parallel randomized and matrix-free direct solvers for large structured dense linear systems (2016)
  12. Loffeld, John; Woodward, Carol S.: Considerations on the implementation and use of Anderson acceleration on distributed memory and GPU-based parallel computers (2016)
  13. Meiyue Shao, Chao Yang: BSEPACK User’s Guide (2016) arXiv
  14. Michailidis, Panagiotis D.; Margaritis, Konstantinos G.: Scientific computations on multi-core systems using different programming frameworks (2016)
  15. Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter; Napov, Artem: A distributed-memory package for dense hierarchically semi-separable matrix computations using randomization (2016)
  16. Schatz, Martin D.; van de Geijn, Robert A.; Poulson, Jack: Parallel matrix multiplication: a systematic journey (2016)
  17. Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; Deslippe, Jack; Louie, Steven G.: Structure preserving parallel algorithms for solving the Bethe-Salpeter eigenvalue problem (2016)
  18. Shevchenko, I.V.; Berloff, P.S.; Guerrero-López, D.; Roman, J.E.: On low-frequency variability of the midlatitude ocean gyres (2016)
  19. Stoykov, S.; Margenov, S.: Scalable parallel implementation of shooting method for large-scale dynamical systems. Application to bridge components (2016)
  20. Wang, Shen; Li, Xiaoye S.; Rouet, François-Henry; Xia, Jianlin; de Hoop, Maarten V.: A parallel geometric multifrontal solver using hierarchically semiseparable structure (2016)
