BinaryConnect

BinaryConnect: Training Deep Neural Networks with binary weights during propagations. Deep Neural Networks (DNNs) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained on large training sets with large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in the research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights constrained to only two possible values (e.g., -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations with simple accumulations, as multipliers are the most space- and power-hungry components of digital implementations of neural networks. We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored real-valued weights in which gradients are accumulated. Like other dropout schemes, BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results with BinaryConnect on permutation-invariant MNIST, CIFAR-10 and SVHN.
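To make the training scheme concrete, here is a minimal NumPy sketch of one BinaryConnect-style training loop on a toy linear layer; it is not the authors' implementation, and the function and variable names (binarize, W_real) are illustrative. It shows the key idea: binary weights are used in the forward and backward passes, while gradient updates are accumulated in the real-valued weights, which are clipped to [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(W, stochastic=False):
    """Binarize real-valued weights to {-1, +1}.

    Deterministic mode takes the sign; stochastic mode samples +1 with
    probability given by the hard sigmoid clip((W + 1) / 2, 0, 1).
    """
    if stochastic:
        p = np.clip((W + 1.0) / 2.0, 0.0, 1.0)
        return np.where(rng.random(W.shape) < p, 1.0, -1.0)
    return np.where(W >= 0.0, 1.0, -1.0)

# Toy single linear layer: the real-valued weights store the accumulated updates.
W_real = rng.uniform(-1, 1, size=(4, 3))   # input dim 4, output dim 3
x = rng.normal(size=(8, 4))                # batch of 8 inputs
y = rng.normal(size=(8, 3))                # regression targets
lr = 0.01

for step in range(100):
    Wb = binarize(W_real, stochastic=True)  # binary weights used in both passes
    out = x @ Wb                            # forward pass with binary weights
    grad_out = 2.0 * (out - y) / len(x)     # gradient of squared error, averaged over the batch
    grad_W = x.T @ grad_out                 # backward pass, also through the binary weights
    W_real -= lr * grad_W                   # accumulate the update in the real-valued weights
    W_real = np.clip(W_real, -1.0, 1.0)     # keep weights in [-1, 1] so binarization stays meaningful
```

At test time one would either use the deterministically binarized weights or, in the stochastic variant, the real-valued weights directly; the sketch above only illustrates the training-time discipline of propagating through binary weights while updating real ones.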


References in zbMATH (referenced in 17 articles)


  1. Connolly, Michael P.; Higham, Nicholas J.; Mary, Theo: Stochastic rounding and its probabilistic backward error analysis (2021)
  2. Gambella, Claudio; Ghaddar, Bissan; Naoum-Sawaya, Joe: Optimization problems for machine learning: a survey (2021)
  3. Gripon, Vincent; Löwe, Matthias; Vermet, Franck: Some remarks on replicated simulated annealing (2021)
  4. Ju, Xiping; Fang, Biao; Yan, Rui; Xu, Xiaoliang; Tang, Huajin: An FPGA implementation of deep spiking neural networks for low-power and fast classification (2020)
  5. Yokoi, Soma; Otsuka, Takuma; Sato, Issei: Weak approximation of transformed stochastic gradient MCMC (2020)
  6. Wang, Bao; Yin, Penghang; Bertozzi, Andrea Louise; Brantingham, P. Jeffrey; Osher, Stanley Joel; Xin, Jack: Deep learning for real-time crime forecasting and its ternarization (2019)
  7. Yin, Penghang; Zhang, Shuai; Lyu, Jiancheng; Osher, Stanley; Qi, Yingyong; Xin, Jack: Blended coarse gradient descent for full quantization of deep neural networks (2019)
  8. Baydin, Atılım Güneş; Pearlmutter, Barak A.; Radul, Alexey Andreyevich; Siskind, Jeffrey Mark: Automatic differentiation in machine learning: a survey (2018)
  9. Deng, Lei; Jiao, Peng; Pei, Jing; Wu, Zhenzhi; Li, Guoqi: GXNOR-Net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework (2018)
  10. Hubara, Itay; Courbariaux, Matthieu; Soudry, Daniel; El-Yaniv, Ran; Bengio, Yoshua: Quantized neural networks: training neural networks with low precision weights and activations (2018)
  11. Li, Qianxiao; Chen, Long; Tai, Cheng; E, Weinan: Maximum principle based algorithms for deep learning (2018)
  12. Needell, Deanna; Saab, Rayan; Woolf, Tina: Simple classification using binary data (2018)
  13. Sun, Jian; Li, Jie: A stable distributed neural controller for physically coupled networked discrete-time system via online reinforcement learning (2018)
  14. Yin, Penghang; Zhang, Shuai; Lyu, Jiancheng; Osher, Stanley; Qi, Yingyong; Xin, Jack: BinaryRelax: a relaxation approach for training deep neural networks with quantized weights (2018)
  15. Huang, Haiping: Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses (2017)
  16. Rawat, Waseem; Wang, Zenghui: Deep convolutional neural networks for image classification: a comprehensive review (2017)
  17. Krueger, David; Maharaj, Tegan; Kramár, János; Pezeshki, Mohammad; Ballas, Nicolas; Ke, Nan Rosemary; Goyal, Anirudh; Bengio, Yoshua; Courville, Aaron; Pal, Chris: Zoneout: regularizing RNNs by randomly preserving hidden activations (2016) arXiv