darch

darch: Package for deep architectures and restricted Boltzmann machines. The darch package is built on the code from G. E. Hinton and R. R. Salakhutdinov (available as "Matlab Code for deep belief nets"; last visited: 01.08.2013). The package generates neural networks with many layers (deep architectures) and trains them with the method introduced in the publications "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton, R. R. Salakhutdinov). This method comprises a pre-training step using the contrastive divergence method published by G. E. Hinton (2002) and a fine-tuning step using well-known training algorithms such as backpropagation or conjugate gradient.
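
The sketch below (in Python with NumPy, not the darch package's R interface) illustrates the contrastive divergence step that this pre-training relies on: a CD-1 update for a single binary restricted Boltzmann machine. The data, hyperparameters, and function names are illustrative assumptions, not taken from the package; in darch, such layer-wise RBM training is followed by supervised fine-tuning with backpropagation or conjugate gradient.

    # Illustrative sketch of CD-1 (contrastive divergence with one Gibbs step)
    # for a single binary RBM; not the darch package API.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b, c, lr=0.05):
        """One CD-1 step for a binary RBM.

        v0 : batch of visible vectors, shape (n, n_visible)
        W  : weights, shape (n_visible, n_hidden)
        b  : visible biases, c : hidden biases
        """
        # Positive phase: sample hidden units given the data.
        h0_prob = sigmoid(v0 @ W + c)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

        # Negative phase: one Gibbs step (reconstruct visibles, re-infer hiddens).
        v1_prob = sigmoid(h0 @ W.T + b)
        h1_prob = sigmoid(v1_prob @ W + c)

        # Approximate log-likelihood gradient (Hinton, 2002).
        n = v0.shape[0]
        W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
        b += lr * (v0 - v1_prob).mean(axis=0)
        c += lr * (h0_prob - h1_prob).mean(axis=0)
        return W, b, c

    # Toy data: 100 binary vectors, 6 visible units, 3 hidden units (assumed sizes).
    data = (rng.random((100, 6)) > 0.5).astype(float)
    W = 0.01 * rng.standard_normal((6, 3))
    b = np.zeros(6)
    c = np.zeros(3)
    for epoch in range(50):
        W, b, c = cd1_update(data, W, b, c)

    # The hidden activations sigmoid(data @ W + c) would feed the next RBM in a
    # stack; after pre-training all layers, the weights initialize a feed-forward
    # network that is fine-tuned with backpropagation or conjugate gradient.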


References in zbMATH (referenced in 96 articles)

Showing results 1 to 20 of 96, sorted by year (citations).


  1. Sun, Yuhang; Liu, Qingjie: Attribute recognition from clothing using a faster R-CNN based multitask network (2018)
  2. Bach, Stephen H.; Broecheler, Matthias; Huang, Bert; Getoor, Lise: Hinge-loss Markov random fields and probabilistic soft logic (2017)
  3. Jiang, Yiming; Yang, Chenguang; Na, Jing; Li, Guang; Li, Yanan; Zhong, Junpei: A brief review of neural networks based learning and control and their applications for robots (2017)
  4. Li, Huaxiong; Zhang, Libo; Zhou, Xianzhong; Huang, Bing: Cost-sensitive sequential three-way decision modeling using a deep neural network (2017)
  5. Tran, Truyen; Phung, Dinh; Bui, Hung; Venkatesh, Svetha: Hierarchical semi-Markov conditional random fields for deep recursive sequential data (2017)
  6. Yin, Rujie; Gao, Tingran; Lu, Yue M.; Daubechies, Ingrid: A tale of two bases: local-nonlocal regularization on image patches with convolution framelets (2017)
  7. Alain, Guillaume; Bengio, Yoshua; Yao, Li; Yosinski, Jason; Thibodeau-Laufer, Éric; Zhang, Saizheng; Vincent, Pascal: GSNs: generative stochastic networks (2016)
  8. Carbone, Anna; Jensen, Meiko; Sato, Aki-Hiro: Challenges in data science: a complex systems perspective (2016)
  9. Cárdenas-Peña, David; Collazos-Huertas, Diego; Castellanos-Dominguez, German: Centered kernel alignment enhancing neural network pretraining for MRI-based dementia diagnosis (2016)
  10. Cheng, Xiuyuan; Chen, Xu; Mallat, Stéphane: Deep Haar scattering networks (2016)
  11. Chen, Yutian; Bornn, Luke; de Freitas, Nando; Eskelin, Mareija; Fang, Jing; Welling, Max: Herded Gibbs sampling (2016)
  12. Mocanu, Decebal Constantin; Mocanu, Elena; Nguyen, Phuong H.; Gibescu, Madeleine; Liotta, Antonio: A topological insight into restricted Boltzmann machines (2016)
  13. Read, Jesse; Reutemann, Peter; Pfahringer, Bernhard; Holmes, Geoff: MEKA: a multi-label/multi-target extension to WEKA (2016)
  14. Sokolovska, Nataliya; Clément, Karine; Zucker, Jean-Daniel: Deep kernel dimensionality reduction for scalable data integration (2016)
  15. Yuille, Alan; Mottaghi, Roozbeh: Complexity of representation and inference in compositional models with part sharing (2016)
  16. Zhang, Shiliang; Jiang, Hui; Dai, Lirong: Hybrid orthogonal projection and estimation (HOPE): a new framework to learn neural networks (2016)
  17. Arora, Sanjeev; Ge, Rong; Moitra, Ankur; Sachdeva, Sushant: Provable ICA with unknown Gaussian noise, and implications for Gaussian mixtures and autoencoders (2015)
  18. Bengio, Yoshua (ed.): Editorial introduction to the neural networks special issue on deep learning of representations (2015)
  19. Berglund, Mathias; Raiko, Tapani; Cho, Kyunghyun: Measuring the usefulness of hidden units in Boltzmann machines with mutual information (2015)
  20. Bhattacharyya, Malay; Bandyopadhyay, Sanghamitra: Finding quasi core with simulated stacked neural networks (2015)
