LAPACK is written in Fortran 90 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.

The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors. On these machines, LINPACK and EISPACK are inefficient because their memory access patterns disregard the multi-layered memory hierarchies of the machines, so they spend too much time moving data instead of doing useful floating-point operations. LAPACK addresses this problem by reorganizing the algorithms to use block matrix operations, such as matrix multiplication, in the innermost loops. These block operations can be optimized for each architecture to account for the memory hierarchy, and so provide a transportable way to achieve high efficiency on diverse modern machines. We use the term "transportable" instead of "portable" because, for the fastest possible performance, LAPACK requires that highly optimized block matrix operations be already implemented on each machine.

LAPACK routines are written so that as much of the computation as possible is performed by calls to the Basic Linear Algebra Subprograms (BLAS). LAPACK was designed at the outset to exploit the Level 3 BLAS, a set of specifications for Fortran subprograms that perform various types of matrix multiplication and solve triangular systems with multiple right-hand sides. Because of the coarse granularity of the Level 3 BLAS operations, their use promotes high efficiency on many high-performance computers, particularly when specially coded implementations are provided by the manufacturer. Highly efficient machine-specific implementations of the BLAS are available for many modern high-performance computers; for details of known vendor- or ISV-provided BLAS, consult the BLAS FAQ. Alternatively, the user can download ATLAS to automatically generate an optimized BLAS library for the architecture. A Fortran 77 reference implementation of the BLAS is available from netlib, but its use is discouraged because it will not perform as well as a specifically tuned implementation.
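As a concrete illustration of the driver-routine interface, the short Fortran program below solves a small dense linear system with DGESV, which computes a blocked LU factorization with partial pivoting and then solves the system. This is a minimal sketch; the matrix and right-hand side are hypothetical values chosen only for the example.

```fortran
program demo_dgesv
  implicit none
  integer, parameter :: n = 3, nrhs = 1
  real(8) :: a(n, n), b(n, nrhs)
  integer :: ipiv(n), info

  ! Hypothetical 3x3 system A x = b (arrays filled column by column),
  ! for illustration only.
  a = reshape([2.0d0, 1.0d0, 1.0d0,  &
               1.0d0, 3.0d0, 2.0d0,  &
               1.0d0, 0.0d0, 2.0d0], [n, n])
  b(:, 1) = [4.0d0, 5.0d0, 6.0d0]

  ! DGESV factors A = P*L*U with partial pivoting and solves for x;
  ! the solution overwrites b, and the LU factors overwrite a.
  call dgesv(n, nrhs, a, n, ipiv, b, n, info)

  if (info == 0) then
     print '(a, 3f10.4)', ' x =', b(:, 1)
  else
     print '(a, i0)', ' dgesv failed, info = ', info
  end if
end program demo_dgesv
```

Compile and link against LAPACK and the BLAS, for example with `gfortran demo_dgesv.f90 -llapack -lblas`.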
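The Level 3 BLAS kernel that blocked LAPACK algorithms rely on most heavily is the general matrix multiply, DGEMM. A minimal sketch of a call, again with hypothetical data, looks like this:

```fortran
program demo_dgemm
  implicit none
  integer, parameter :: m = 2, n = 2, k = 2
  real(8) :: a(m, k), b(k, n), c(m, n)

  ! Hypothetical operands, filled column by column.
  a = reshape([1.0d0, 3.0d0, 2.0d0, 4.0d0], [m, k])
  b = reshape([5.0d0, 7.0d0, 6.0d0, 8.0d0], [k, n])
  c = 0.0d0

  ! DGEMM computes C := alpha*op(A)*op(B) + beta*C; 'N' selects op(X) = X.
  call dgemm('N', 'N', m, n, k, 1.0d0, a, m, b, k, 0.0d0, c, m)

  print '(2f8.2)', transpose(c)   ! prints the rows of C
end program demo_dgemm
```

Swapping in a vendor-tuned BLAS at link time, rather than the netlib reference implementation, changes nothing in this calling code; that separation is what makes LAPACK transportable in the sense described above.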
This software is also referenced in ORMS.
References in zbMATH (referenced in 1604 articles, 4 standard articles)
- Almeida Guimarães, Dilson; Salles da Cunha, Alexandre; Pereira, Dilson Lucas: Semidefinite programming lower bounds and branch-and-bound algorithms for the quadratic minimum spanning tree problem (2020)
- Finley, Andrew; Datta, Abhirup; Banerjee, Sudipto: R package for Nearest Neighbor Gaussian Process models (2020) arXiv
- Barrera, Javiera; Moreno, Eduardo; Varas K., Sebastián: A decomposition algorithm for computing income taxes with pass-through entities and its application to the Chilean case (2020)
- Hermans, Ben; Themelis, Andreas; Patrinos, Panagiotis: QPALM: A Proximal Augmented Lagrangian Method for Nonconvex Quadratic Programs (2020) arXiv
- Bollhöfer, Matthias; Schenk, Olaf; Janalik, Radim; Hamm, Steve; Gullapalli, Kiran: State-of-the-art sparse direct solvers (2020)
- Brás, C. P.; Martínez, J. M.; Raydan, M.: Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization (2020)
- Cambier, Léopold; Chen, Chao; Boman, Erik G.; Rajamanickam, Sivasankaran; Tuminaro, Raymond S.; Darve, Eric: An algebraic sparsified nested dissection algorithm using low-rank approximations (2020)
- De Luca, Pasquale; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia; Raei, Marzie: Performance analysis of a multicore implementation for solving a two-dimensional inverse anomalous diffusion problem (2020)
- Essaouini, M.; Abouzaid, B.; Gaudreau, P.; Safouhi, H.: Computation of energy eigenvalues of the anharmonic Coulombic potential with irregular singularities (2020)
- Fabien, Maurice S.; Knepley, Matthew; Riviere, Beatrice: A high order hybridizable discontinuous Galerkin method for incompressible miscible displacement in heterogeneous media (2020)
- Folberth, James; Becker, Stephen: Safe feature elimination for non-negativity constrained convex optimization (2020)
- Hatič, Vanja; Mavrič, Boštjan; Šarler, Božidar: Simulation of macrosegregation in direct-chill casting -- a model based on meshless diffuse approximate method (2020)
- Lange, Kenneth: Algorithms from THE BOOK (2020)
- Manguoğlu, Murat; Polizzi, Eric; Sameh, Ahmed H.: Parallel hybrid sparse linear system solvers (2020)
- Niemöller, Ansgar; Schlottke-Lakemper, Michael; Meinke, Matthias; Schröder, Wolfgang: Dynamic load balancing for direct-coupled multiphysics simulations (2020)
- Peeters, Carel F. W.; van de Wiel, Mark A.; van Wieringen, Wessel N.: The spectral condition number plot for regularization parameter evaluation (2020)
- Reguly, István Z.; Mudalige, Gihan R.: Productivity, performance, and portability for computational fluid dynamics applications (2020)
- Sauk, Benjamin; Ploskas, Nikolaos; Sahinidis, Nikolaos: GPU parameter tuning for tall and skinny dense linear least squares problems (2020)
- Temizer, İ.; Motamarri, P.; Gavini, V.: NURBS-based non-periodic finite element framework for Kohn-Sham density functional theory calculations (2020)
- Tsachouridis, Vassilios A.; Giantamidis, Georgios; Basagiannis, Stylianos; Kouramas, Kostas: Formal analysis of the Schulz matrix inversion algorithm: a paradigm towards computer aided verification of general matrix flow solvers (2020)