hiCUDA: High-Level GPGPU Programming

This project aims to create a high-level interface for GPGPU programming. More specifically, we have defined a directive-based language called hiCUDA (for high-level CUDA) for programming NVIDIA GPUs. It provides the programmer with high-level abstractions for the main tasks of GPU programming, namely identifying code to execute on the GPU and managing GPU memory, and these abstractions are applied directly to the original sequential code. More importantly, the use of hiCUDA directives makes it easier to experiment with different ways of identifying and extracting GPU computation, and of managing GPU memory.

Along with the language, we have designed and implemented a prototype source-to-source compiler that translates a hiCUDA program (i.e., a sequential C program with hiCUDA directives) into an equivalent CUDA program. The resulting CUDA program can then be compiled to a binary using NVIDIA's existing CUDA compiler toolchain.

There are two aspects of hiCUDA we would like to evaluate. The first is performance: how much slower a hiCUDA program runs compared to a hand-written CUDA version that implements the same algorithm. Using seven CUDA benchmarks (most of which are from the Parboil suite developed at UIUC), we found that the performance of the compiler-generated CUDA code is very close to that of the hand-written version, although we had to modify the sequential programs so that they implement the same algorithms as the CUDA versions. This result encourages us to share the hiCUDA language and its compiler support with the GPGPU programming community, and it leads to the second aspect of evaluation: usability. We very much welcome you to try hiCUDA and give us feedback so that we can improve both the language design and the compiler implementation.
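To give a concrete sense of the programming model, the sketch below shows what a directive-annotated matrix multiplication might look like. The directive spellings used here (global alloc/copyout/free, kernel, loop_partition, kernel_end) and their clause syntax are illustrative assumptions chosen to match the two tasks described above (extracting GPU computation and managing GPU memory); they are not an authoritative reference for the hiCUDA grammar.

    /*
     * Minimal sketch: matrix multiplication annotated with hiCUDA-style
     * directives.  The pragma spellings are assumptions for illustration,
     * not the official hiCUDA syntax.  The key point is that the sequential
     * loop nest stays untouched; the directives around it manage GPU memory
     * and mark the region to be extracted into a CUDA kernel.  A plain C
     * compiler ignores the unknown pragmas, so the code still builds and
     * runs on the CPU.
     */
    #define N 1024

    float A[N][N], B[N][N], C[N][N];

    void matmul(void)
    {
        int i, j, k;

        /* Data directives (assumed syntax): allocate GPU copies of the
           arrays and copy the inputs into GPU global memory. */
    #pragma hicuda global alloc A[*][*] copyin
    #pragma hicuda global alloc B[*][*] copyin
    #pragma hicuda global alloc C[*][*]

        /* Computation directive (assumed syntax): extract the loop nest
           into a CUDA kernel launched over a 64x64 grid of 16x16 thread
           blocks, partitioning the i and j loops over blocks and threads. */
    #pragma hicuda kernel mm_kernel tblock(64, 64) thread(16, 16)
    #pragma hicuda loop_partition over_tblock over_thread
        for (i = 0; i < N; i++) {
    #pragma hicuda loop_partition over_tblock over_thread
            for (j = 0; j < N; j++) {
                float sum = 0.0f;
                for (k = 0; k < N; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }
        }
    #pragma hicuda kernel_end

        /* Data directives (assumed syntax): copy the result back to the
           host and release the GPU copies. */
    #pragma hicuda global copyout C[*][*]
    #pragma hicuda global free A B C
    }

Because the directives sit alongside otherwise unchanged sequential code, experimenting with a different block/thread geometry or a different data placement amounts to editing a directive rather than rewriting kernel and memory-management code by hand, which is the flexibility described above.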