CIFAR

The CIFAR-10 and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly selected images from each class. The training batches contain the remaining images in random order; an individual training batch may contain more images from one class than another, but between them the training batches contain exactly 5000 images from each class.

The CIFAR-100 dataset is just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
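As a minimal sketch of how the batch layout above can be read in practice, the snippet below assumes the Python version of the CIFAR-10 archive (the cifar-10-batches-py directory distributed on the dataset's download page), in which each training or test batch is a pickled dict holding raw pixel data and labels; the file path used in the usage comment is only an illustration.

```python
import pickle
import numpy as np

def load_cifar10_batch(path):
    """Load one CIFAR-10 batch file (Python version of the archive)."""
    # Each batch file is a pickled dict; with encoding="bytes" the keys are
    # byte strings such as b"data" and b"labels".
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    # b"data" is a 10000x3072 uint8 array: each row is one 32x32 image stored
    # as 1024 red, then 1024 green, then 1024 blue values.
    images = batch[b"data"].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch[b"labels"])
    return images, labels

# Example usage (illustrative path): load the first of the five training batches.
# images, labels = load_cifar10_batch("cifar-10-batches-py/data_batch_1")
# images.shape == (10000, 32, 32, 3); labels.shape == (10000,)
```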


References in zbMATH (referenced in 151 articles)

Showing results 21 to 40 of 151, sorted by year (citations).


  1. Huang, Junhao; Sun, Weize; Huang, Lei: Joint structure and parameter optimization of multiobjective sparse neural network (2021)
  2. Imaizumi, Masaaki: Analysis on mechanism of deep learning: perspective of generalization error (2021)
  3. Jones, Ilenna Simone; Kording, Konrad Paul: Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? (2021)
  4. Kafka, Dominic; Wilke, Daniel N.: Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches (2021)
  5. Kao, Yu-Wei; Chen, Hung-Hsuan: Associated learning: decomposing end-to-end backpropagation based on autoencoders and target propagation (2021)
  6. Kobayashi, Masaki: Stability conditions of bicomplex-valued Hopfield neural networks (2021)
  7. Kobayashi, Masaki: Noise robust projection rule for Klein Hopfield neural networks (2021)
  8. Kong, Hao; Lu, Canyi; Lin, Zhouchen: Tensor Q-rank: new data dependent definition of tensor rank (2021)
  9. Liang, Senwei; Khoo, Yuehaw; Yang, Haizhao: Drop-activation: implicit parameter reduction and harmonious regularization (2021)
  10. Li, Haoliang; Wan, Renjie; Wang, Shiqi; Kot, Alex C.: Unsupervised domain adaptation in the wild via disentangling representation learning (2021)
  11. Liu, Chunlei; Ding, Wenrui; Hu, Yuan; Zhang, Baochang; Liu, Jianzhuang; Guo, Guodong; Doermann, David: Rectified binary convolutional networks with generative adversarial learning (2021)
  12. Chen, Mingxiang; Chang, Zhanguo; Lu, Haonan; Yang, Bitao; Li, Zhuang; Guo, Liufang; Wang, Zhecheng: AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation (2021) arXiv
  13. Nakkiran, Preetum; Kaplun, Gal; Bansal, Yamini; Yang, Tristan; Barak, Boaz; Sutskever, Ilya: Deep double descent: where bigger models and more data hurt (2021)
  14. Newman, Elizabeth; Ruthotto, Lars; Hart, Joseph; van Bloemen Waanders, Bart: Train like a (Var)pro: efficient training of neural networks with variable projection (2021)
  15. Northcutt, Curtis G.; Jiang, Lu; Chuang, Isaac L.: Confident learning: estimating uncertainty in dataset labels (2021)
  16. Qin, Shanshan; Mudur, Nayantara; Pehlevan, Cengiz: Contrastive similarity matching for supervised learning (2021)
  17. Ramezani-Kebrya, Ali; Faghri, Fartash; Markov, Ilya; Aksenov, Vitalii; Alistarh, Dan; Roy, Daniel M.: NUQSGD: provably communication-efficient data-parallel SGD via nonuniform quantization (2021)
  18. Shifat-E-Rabbi, Mohammad; Yin, Xuwang; Rubaiyat, Abu Hasnat Mohammad; Li, Shiying; Kolouri, Soheil; Aldroubi, Akram; Nichols, Jonathan M.; Rohde, Gustavo K.: Radon cumulative distribution transform subspace modeling for image classification (2021)
  19. Wang, Bao; Osher, Stan J.: Graph interpolating activation improves both natural and robust accuracies in data-efficient deep learning (2021)
  20. Wang, Weiwei; Zhang, Haofeng; Zhang, Zheng; Liu, Li; Shao, Ling: Sparse graph based self-supervised hashing for scalable image retrieval (2021)
