STL-10 dataset

The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. It is inspired by the CIFAR-10 dataset, but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided for learning image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution than the labeled data) to build a useful prior. The higher resolution of this dataset (96x96) also makes it a challenging benchmark for developing more scalable unsupervised learning methods.

Reference: Adam Coates, Honglak Lee, Andrew Y. Ng: An Analysis of Single-Layer Networks in Unsupervised Feature Learning
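As a minimal sketch of working with the dataset's raw binary files: per the official format description, each 96x96 RGB image is stored as three 96x96 channel planes in column-major order, so decoding amounts to a reshape and transpose. The filename mentioned in the comment (e.g. `train_X.bin`) is an assumption based on the official distribution; the function itself operates on any raw byte buffer.

```python
import numpy as np

IMG_SIZE = 96       # STL-10 images are 96x96
CHANNELS = 3        # RGB
IMG_BYTES = CHANNELS * IMG_SIZE * IMG_SIZE  # 27648 bytes per image

def read_stl10_images(raw: bytes) -> np.ndarray:
    """Decode raw STL-10 binary image data into an (N, 96, 96, 3) uint8 array.

    Each image is stored as 3 channel planes of 96x96 pixels in
    column-major order, so we reshape to (N, channels, cols, rows)
    and transpose to (N, rows, cols, channels).
    """
    data = np.frombuffer(raw, dtype=np.uint8)
    images = data.reshape(-1, CHANNELS, IMG_SIZE, IMG_SIZE)
    return images.transpose(0, 3, 2, 1)

# Example on synthetic bytes standing in for a file such as train_X.bin
# (a zero buffer here, just to demonstrate the expected shape):
fake = bytes(2 * IMG_BYTES)
imgs = read_stl10_images(fake)
print(imgs.shape)  # (2, 96, 96, 3)
```

The same decoding applies to the labeled train/test splits and the unlabeled split; labels, when present, are stored in a separate file as one byte per image.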

References in zbMATH (referenced in 23 articles)

Showing results 1 to 20 of 23.
Sorted by year (citations)


  1. Czaja, Wojciech; Dong, Dong; Jabin, Pierre-Emmanuel; Ndjakou Njeunje, Franck Olivier: Transport model for feature extraction (2021)
  2. Mingxiang Chen, Zhanguo Chang, Haonan Lu, Bitao Yang, Zhuang Li, Liufang Guo, Zhecheng Wang: AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation (2021) arXiv
  3. Arridge, S.; Hauptmann, A.: Networks for nonlinear diffusion problems in imaging (2020)
  4. Boutin, Victor; Franciosini, Angelo; Ruffier, Franck; Perrinet, Laurent: Effect of top-down connections in hierarchical sparse coding (2020)
  5. dos Santos, Fernando P.; Zor, Cemre; Kittler, Josef; Ponti, Moacir A.: Learning image features with fewer labels using a semi-supervised deep convolutional network (2020)
  6. Duan, Jia; Zhou, Jiantao; Li, Yuanman: Privacy-preserving distributed deep learning based on secret sharing (2020)
  7. Ghose, Amur; Jaini, Priyank; Poupart, Pascal: Learning directed acyclic graph SPNs in sub-quadratic time (2020)
  8. Pei, Yan Ru; Manukian, Haik; Di Ventra, Massimiliano: Generating weighted MAX-2-SAT instances with frustrated loops: an RBM case study (2020)
  9. Ruthotto, Lars; Haber, Eldad: Deep neural networks motivated by partial differential equations (2020)
  10. He, Bo; Song, Yan; Zhu, Yuemei; Sha, Qixin; Shen, Yue; Yan, Tianhong; Nian, Rui; Lendasse, Amaury: Local receptive fields based extreme learning machine with hybrid filter kernels for image classification (2019)
  11. Vergari, Antonio; Di Mauro, Nicola; Esposito, Floriana: Visualizing and understanding sum-product networks (2019)
  12. Chaudhari, Pratik; Oberman, Adam; Osher, Stanley; Soatto, Stefano; Carlier, Guillaume: Deep relaxation: partial differential equations for optimizing deep neural networks (2018)
  13. Jiang, Bai; Wu, Tung-Yu; Jin, Yifan; Wong, Wing H.: Convergence of contrastive divergence algorithm in exponential family (2018)
  14. López-Sánchez, Daniel; Corchado, Juan Manuel; González Arrieta, Angélica: Data-independent random projections from the feature-map of the homogeneous polynomial kernel of degree two (2018)
  15. Puškarov, Tatjana; Cortés Cubero, Axel: Machine learning algorithms based on generalized Gibbs ensembles (2018)
  16. Kim, Minyoung: Efficient histogram dictionary learning for text/image modeling and classification (2017)
  17. Rawat, Waseem; Wang, Zenghui: Deep convolutional neural networks for image classification: a comprehensive review (2017)
  18. Chen, Zhong; Xiong, Shengwu; Fang, Zhixiang; Zhang, Ruiling; Kong, Xiangzhen; Rong, Yi: Topologically ordered feature extraction based on sparse group restricted Boltzmann machines (2015)
  19. Yilmaz, Ozgur: Symbolic computation using cellular automata-based hyperdimensional computing (2015)
  20. Gheorghe, Ionut; Li, Weidong; Popham, Thomas; Gaszczak, Anna; Burnham, Keith J.: Key learning features as means for terrain classification (2014)
