darch: Package for deep architectures and restricted Boltzmann machines. The darch package is built on the code from G. E. Hinton and R. R. Salakhutdinov (available under "Matlab Code for deep belief nets"; last visited: 01.08.2013). The package generates neural networks with many layers (deep architectures) and trains them with the method introduced in the publications "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton, R. R. Salakhutdinov). This method combines pre-training with the contrastive divergence method published by G. E. Hinton (2002) and fine-tuning with commonly used training algorithms such as backpropagation or conjugate gradient.
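The contrastive divergence pre-training step mentioned above can be sketched in a few lines. The following is a minimal illustration of CD-1 for a binary restricted Boltzmann machine, not the darch implementation itself; the layer sizes and the toy training pattern are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One contrastive-divergence (CD-1) step for a binary RBM.

    v0 : batch of visible vectors, shape (n, n_visible)
    W  : weight matrix, shape (n_visible, n_hidden)
    b  : visible biases, c : hidden biases
    Returns the mean squared reconstruction error for the batch.
    """
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Gradient approximation: data correlations minus model correlations.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return ((v0 - pv1) ** 2).mean()

# Tiny demo: pre-train one RBM layer on a repeated 6-bit pattern.
n_visible, n_hidden = 6, 4
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)
data = np.tile([1.0, 0.0, 1.0, 0.0, 1.0, 0.0], (20, 1))
errors = [cd1_update(data, W, b, c) for _ in range(200)]
```

In the full method, one such RBM is trained per layer, each layer's hidden activations serving as the next layer's input; the stacked weights then initialize a deep network that is fine-tuned with backpropagation or conjugate gradient.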

References in zbMATH (referenced in 90 articles)

Showing results 1 to 20 of 90.
Sorted by year (citations)


  1. Jiang, Yiming; Yang, Chenguang; Na, Jing; Li, Guang; Li, Yanan; Zhong, Junpei: A brief review of neural networks based learning and control and their applications for robots (2017)
  2. Li, Huaxiong; Zhang, Libo; Zhou, Xianzhong; Huang, Bing: Cost-sensitive sequential three-way decision modeling using a deep neural network (2017)
  3. Tran, Truyen; Phung, Dinh; Bui, Hung; Venkatesh, Svetha: Hierarchical semi-Markov conditional random fields for deep recursive sequential data (2017)
  4. Yin, Rujie; Gao, Tingran; Lu, Yue M.; Daubechies, Ingrid: A tale of two bases: local-nonlocal regularization on image patches with convolution framelets (2017)
  5. Carbone, Anna; Jensen, Meiko; Sato, Aki-Hiro: Challenges in data science: a complex systems perspective (2016)
  6. Cárdenas-Peña, David; Collazos-Huertas, Diego; Castellanos-Dominguez, German: Centered kernel alignment enhancing neural network pretraining for MRI-based dementia diagnosis (2016)
  7. Chen, Yutian; Bornn, Luke; de Freitas, Nando; Eskelin, Mareija; Fang, Jing; Welling, Max: Herded Gibbs sampling (2016)
  8. Mocanu, Decebal Constantin; Mocanu, Elena; Nguyen, Phuong H.; Gibescu, Madeleine; Liotta, Antonio: A topological insight into restricted Boltzmann machines (2016)
  9. Read, Jesse; Reutemann, Peter; Pfahringer, Bernhard; Holmes, Geoff: MEKA: a multi-label/multi-target extension to WEKA (2016)
  10. Sokolovska, Nataliya; Clément, Karine; Zucker, Jean-Daniel: Deep kernel dimensionality reduction for scalable data integration (2016)
  11. Yuille, Alan; Mottaghi, Roozbeh: Complexity of representation and inference in compositional models with part sharing (2016)
  12. Zhang, Shiliang; Jiang, Hui; Dai, Lirong: Hybrid orthogonal projection and estimation (HOPE): a new framework to learn neural networks (2016)
  13. Arora, Sanjeev; Ge, Rong; Moitra, Ankur; Sachdeva, Sushant: Provable ICA with unknown Gaussian noise, and implications for Gaussian mixtures and autoencoders (2015)
  14. Bengio, Yoshua (ed.): Editorial introduction to the neural networks special issue on deep learning of representations (2015)
  15. Berglund, Mathias; Raiko, Tapani; Cho, Kyunghyun: Measuring the usefulness of hidden units in Boltzmann machines with mutual information (2015) ioport
  16. Bhattacharyya, Malay; Bandyopadhyay, Sanghamitra: Finding quasi core with simulated stacked neural networks (2015)
  17. Cang, Zixuan; Mu, Lin; Wu, Kedi; Opron, Kristopher; Xia, Kelin; Wei, Guo-Wei: A topological approach for protein classification (2015)
  18. Elfwing, S.; Uchibe, E.; Doya, K.: Expected energy-based restricted Boltzmann machine for classification (2015)
  19. Fischer, Asja; Igel, Christian: A bound for the convergence rate of parallel tempering for sampling restricted Boltzmann machines (2015)
  20. Iocchi, Luca; Holz, Dirk; Ruiz-del-Solar, Javier; Sugiura, Komei; van der Zant, Tijn: RoboCup@Home: analysis and results of evolving competitions for domestic and service robots (2015) ioport
