The CIFAR-10 and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly selected images from each class. The training batches contain the remaining images in random order; some training batches may contain more images from one class than another, but between them the training batches contain exactly 5000 images from each class.

The CIFAR-100 dataset is just like CIFAR-10, except that it has 100 classes containing 600 images each. There are 500 training images and 100 test images per class. The 100 classes in CIFAR-100 are grouped into 20 superclasses, so each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
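The batch layout above can be illustrated with a short sketch. In the Python distribution of CIFAR-10, each batch file is a pickled dict whose data entry holds one 3072-byte row per image (the 1024 red, 1024 green, and 1024 blue pixel values of a 32x32 image, in that order). The loading snippet below is a minimal sketch, not the official loader; the filename and the `batch_to_images` helper are illustrative, and a synthetic batch stands in for a real one:

```python
import numpy as np

# A real batch would be read with pickle, e.g.:
#   with open("data_batch_1", "rb") as f:
#       batch = pickle.load(f, encoding="bytes")

def batch_to_images(batch):
    """Reshape a CIFAR-style batch dict into (N, 32, 32, 3) uint8 images."""
    data = batch[b'data']                 # (N, 3072): R plane, G plane, B plane
    images = data.reshape(-1, 3, 32, 32)  # split each row into 3 channel planes
    return images.transpose(0, 2, 3, 1)   # move channels last -> (N, 32, 32, 3)

# Synthetic stand-in for one unpickled batch (real batches hold 10000 rows):
fake_batch = {b'data': np.zeros((5, 3072), dtype=np.uint8),
              b'labels': [0, 1, 2, 3, 4]}
print(batch_to_images(fake_batch).shape)  # (5, 32, 32, 3)
```

The reshape works because 3 x 32 x 32 = 3072, matching the per-image row length; a CIFAR-100 batch has the same row format but carries both fine and coarse label lists.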

References in zbMATH (referenced in 151 articles)

Showing results 121 to 140 of 151, sorted by year (citations).


  1. Aggarwal, Charu C.: Neural networks and deep learning. A textbook (2018)
  2. Baldi, Pierre; Sadowski, Peter; Lu, Zhiqin: Learning in the machine: random backpropagation and the deep learning channel (2018)
  3. Bellinger, Colin; Drummond, Christopher; Japkowicz, Nathalie: Manifold-based synthetic oversampling with manifold conformance estimation (2018)
  4. Chaudhari, Pratik; Oberman, Adam; Osher, Stanley; Soatto, Stefano; Carlier, Guillaume: Deep relaxation: partial differential equations for optimizing deep neural networks (2018)
  5. Chen, Ning; Zhu, Jun; Chen, Jianfei; Chen, Ting: Dropout training for SVMs with data augmentation (2018)
  6. Loroch, Dominik Marek; Pfreundt, Franz-Josef; Wehn, Norbert; Keuper, Janis: Sparsity in deep neural networks - an empirical investigation with TensorQuant (2018) arXiv
  7. Fawzi, Alhussein; Fawzi, Omar; Frossard, Pascal: Analysis of classifiers’ robustness to adversarial perturbations (2018)
  8. Jin, Haifeng; Song, Qingquan; Hu, Xia: Auto-Keras: an efficient neural architecture search system (2018) arXiv
  9. Li, Lisha; Jamieson, Kevin; DeSalvo, Giulia; Rostamizadeh, Afshin; Talwalkar, Ameet: Hyperband: a novel bandit-based approach to hyperparameter optimization (2018)
  10. Liu, Yang; Feng, Lin; Liu, Shenglan; Sun, Muxin: Global similarity preserving hashing (2018)
  11. Yin, Penghang; Xin, Jack; Qi, Yingyong: Linear feature transform and enhancement of classification on deep neural network (2018)
  12. Yin, Penghang; Zhang, Shuai; Lyu, Jiancheng; Osher, Stanley; Qi, Yingyong; Xin, Jack: BinaryRelax: a relaxation approach for training deep neural networks with quantized weights (2018)
  13. Zennaro, Fabio Massimo; Chen, Ke: Towards understanding sparse filtering: a theoretical perspective (2018)
  14. Chintala, Soumith; Ranzato, Marc’Aurelio; Szlam, Arthur; Tian, Yuandong; Tygert, Mark; Zaremba, Wojciech: Scale-invariant learning and convolutional networks (2017)
  15. Hasenclever, Leonard; Webb, Stefan; Lienart, Thibaut; Vollmer, Sebastian; Lakshminarayanan, Balaji; Blundell, Charles; Teh, Yee Whye: Distributed Bayesian learning with stochastic natural gradient expectation propagation and the posterior server (2017)
  16. Mahsereci, Maren; Hennig, Philipp: Probabilistic line searches for stochastic optimization (2017)
  17. Rawat, Waseem; Wang, Zenghui: Deep convolutional neural networks for image classification: a comprehensive review (2017)
  18. Han, Xixuan; Clemmensen, Line: Regularized generalized eigen-decomposition with applications to sparse supervised feature extraction and sparse discriminant analysis (2016)
  19. Rebuffi, Sylvestre-Alvise; Kolesnikov, Alexander; Sperl, Georg; Lampert, Christoph H.: iCaRL: incremental classifier and representation learning (2016) arXiv
  20. Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur: A mathematical motivation for complex-valued convolutional networks (2016)
