TTC: A Tensor Transposition Compiler for Multiple Architectures.

We consider the problem of transposing tensors of arbitrary dimension and describe TTC, an open-source domain-specific parallel compiler. TTC generates optimized parallel C++/CUDA C code that achieves a significant fraction of the system’s peak memory bandwidth. TTC exhibits high performance across multiple architectures, including modern AVX-based systems (e.g., Intel Haswell, AMD Steamroller) and Intel’s Knights Corner, as well as different CUDA-based GPUs such as NVIDIA’s Kepler and Maxwell architectures. We report speedups of TTC over a meaningful baseline implementation generated by external C++ compilers; the results suggest that a domain-specific compiler can significantly outperform its general-purpose counterpart: for instance, compared with Intel’s latest C++ compiler on the Haswell and Knights Corner architectures, TTC yields speedups of up to 8× and 32×, respectively. We also showcase TTC’s support for multiple leading dimensions, making it a suitable candidate for generating the performance-critical packing functions at the core of the ubiquitous BLAS 3 routines.
References in zbMATH (referenced in 3 articles)
- Matthews, Devin A.: High-performance tensor contraction without transposition (2018)
- Hynninen, Antti-Pekka; Lyakh, Dmitry I.: cuTT: A High-Performance Tensor Transpose Library for CUDA Compatible GPUs (2017) arXiv
- Springer, Paul; Su, Tong; Bientinesi, Paolo: HPTT: A High-Performance Tensor Transposition C++ Library (2017) arXiv