cuDNN: Efficient Primitives for Deep Learning

We present a library of efficient implementations of deep learning primitives. Deep learning workloads are computationally intensive, and optimizing their kernels is difficult and time-consuming. As parallel architectures evolve, kernels must be reoptimized, which makes maintaining codebases difficult over time. Similar issues have long been addressed in the HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS). However, there is no analogous library for deep learning. Without such a library, researchers implementing deep learning workloads on parallel processors must create and optimize their own implementations of the main computational kernels, and this work must be repeated as new parallel processors emerge. To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads. Our implementation contains routines for GPUs, although, similarly to the BLAS library, these routines could be implemented for other platforms. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage. For example, integrating cuDNN into Caffe, a popular framework for convolutional networks, improves performance by 36% on a standard model while also reducing memory consumption.
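The BLAS analogy in the abstract rests on the fact that the dominant deep learning primitive, convolution, can be lowered to a single matrix multiply (GEMM). The sketch below illustrates this with an explicit im2col step in NumPy; it is only a minimal illustration of the idea, not cuDNN's actual implementation, which avoids materializing the unfolded matrix.

```python
import numpy as np

def conv2d_im2col(x, w):
    """2D convolution (valid padding, stride 1) lowered to one GEMM.

    x: input of shape (C, H, W); w: filters of shape (K, C, R, S).
    Returns output of shape (K, H-R+1, W-S+1).
    """
    C, H, W = x.shape
    K, Cw, R, S = w.shape
    assert C == Cw, "input and filter channel counts must match"
    Ho, Wo = H - R + 1, W - S + 1

    # im2col: unfold every receptive field into one column, so that
    # convolution becomes a dense matrix product.
    cols = np.empty((C * R * S, Ho * Wo))
    idx = 0
    for i in range(Ho):
        for j in range(Wo):
            cols[:, idx] = x[:, i:i + R, j:j + S].ravel()
            idx += 1

    # A single GEMM applies all K filters at all output positions.
    out = w.reshape(K, -1) @ cols
    return out.reshape(K, Ho, Wo)
```

Lowering to GEMM lets one highly tuned kernel serve many layer shapes, which is exactly the maintenance benefit the abstract attributes to a BLAS-like library.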

References in zbMATH (referenced in 14 articles)

Sorted by year (citations)

  1. Bridge, Christopher P.; Gorman, Chris; Pieper, Steven; Doyle, Sean W.; Lennerz, Jochen K.; Kalpathy-Cramer, Jayashree; Clunie, David A.; Fedorov, Andriy Y.; Herrmann, Markus D.: Highdicom: a Python library for standardized encoding of image annotations and machine learning model outputs in pathology and radiology (2021) arXiv
  2. Huang, Di; Zhang, Rui; Zhang, Xishan; Wu, Fan; Wang, Xianzhuo; Jin, Pengwei; Liu, Shaoli; Li, Ling; Chen, Yunji: A decomposable Winograd method for N-D convolution acceleration in video analysis (2021)
  3. You, Hojun; Kim, Chongam: Direct reconstruction method for discontinuous Galerkin methods on higher-order mixed-curved meshes III. Code optimization via tensor contraction (2021)
  4. Hackel, Timo; Usvyatsov, Mikhail; Galliani, Silvano; Wegner, Jan D.; Schindler, Konrad: Inference, learning and attention mechanisms that exploit and preserve sparsity in CNNs (2020)
  5. Ju, Caleb; Solomonik, Edgar: Derivation and analysis of fast bilinear algorithms for convolution (2020)
  6. Tian, Chunwei; Fei, Lunke; Zheng, Wenxian; Xu, Yong; Zuo, Wangmeng; Lin, Chia-Wen: Deep learning on image denoising: an overview (2020)
  7. Wang, Bao; Yin, Penghang; Bertozzi, Andrea Louise; Brantingham, P. Jeffrey; Osher, Stanley Joel; Xin, Jack: Deep learning for real-time crime forecasting and its ternarization (2019)
  8. Baydin, Atılım Güneş; Pearlmutter, Barak A.; Radul, Alexey Andreyevich; Siskind, Jeffrey Mark: Automatic differentiation in machine learning: a survey (2018)
  9. Hazan, Hananel; Saunders, Daniel J.; Khan, Hassaan; Sanghavi, Darpan T.; Siegelmann, Hava T.; Kozma, Robert: BindsNET: a machine learning-oriented spiking neural networks library in Python (2018) arXiv
  10. Springer, Paul; Bientinesi, Paolo: Design of a high-performance GEMM-like tensor-tensor multiplication (2018)
  11. Springer, Paul; Hammond, Jeff R.; Bientinesi, Paolo: TTC: a high-performance compiler for tensor transpositions (2017)
  12. Moskewicz, Matthew; Iandola, Forrest; Keutzer, Kurt: Boda-RTC: productive generation of portable, efficient code for convolutional neural networks on mobile computing platforms (2016) arXiv
  13. Žbontar, Jure; LeCun, Yann: Stereo matching by training a convolutional neural network to compare image patches (2016)
  14. Lavin, Andrew: maxDNN: an efficient convolution kernel for deep learning with Maxwell GPUs (2015) arXiv