The human brain is a recurrent neural network (RNN): a network of neurons with feedback connections. It can learn many behaviors, sequence-processing tasks, algorithms, and programs that are not learnable by traditional machine learning methods. This explains the rapidly growing interest in artificial RNNs for technical applications: general computers that can learn algorithms to map input sequences to output sequences, with or without a teacher. They are computationally more powerful and biologically more plausible than other adaptive approaches such as Hidden Markov Models (which lack continuous internal states) or feedforward networks and Support Vector Machines (which lack internal states altogether). Our recent applications include adaptive robotics and control, handwriting recognition, speech recognition, keyword spotting, music composition, attentive vision, protein analysis, stock market prediction, and many other sequence problems.
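
To make the notion of an internal state concrete, here is a minimal sketch of a vanilla (Elman-style) RNN forward pass in Python with NumPy. The function name, weight shapes, and dimensions are illustrative assumptions for this sketch, not the API of any particular RNN library; the point is only that each output depends on the current input and on a state carried over from earlier inputs.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy, b_h, b_y):
    """Run a vanilla (Elman-style) RNN over an input sequence.

    xs       : list of input vectors, one per time step
    W_xh     : input-to-hidden weights
    W_hh     : hidden-to-hidden (feedback) weights; this recurrence
               is what gives the network its internal state
    W_hy     : hidden-to-output weights
    b_h, b_y : bias vectors
    Returns one output vector per time step.
    """
    h = np.zeros(W_hh.shape[0])   # internal state, initially zero
    ys = []
    for x in xs:
        # The new state depends on the current input AND the previous
        # state -- unlike a feedforward network or an SVM, which have
        # no state at all.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        ys.append(W_hy @ h + b_y)
    return ys

# Illustrative dimensions: 3-dimensional inputs, 5 hidden units, 2 outputs.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 2
params = (rng.standard_normal((n_hid, n_in)) * 0.1,
          rng.standard_normal((n_hid, n_hid)) * 0.1,
          rng.standard_normal((n_out, n_hid)) * 0.1,
          np.zeros(n_hid), np.zeros(n_out))
sequence = [rng.standard_normal(n_in) for _ in range(4)]
print([y.round(3) for y in rnn_forward(sequence, *params)])
```

Feeding the same input vector at two different time steps generally produces two different outputs, because the hidden state h differs; this is the sequence-processing capability the description above refers to.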

References in zbMATH (referenced in 13 articles, 1 standard article)


  1. Schmidhuber, Jürgen: Deep learning in neural networks: an overview (2015)
  2. Graves, Alex: Supervised sequence labelling with recurrent neural networks (2012)
  3. Liwicki, Marcus; Bunke, Horst: Combining diverse on-line and off-line systems for handwritten text line recognition (2009)
  4. Namikawa, Jun; Tani, Jun: A model for learning to segment temporal sequences, utilizing a mixture of RNN experts together with adaptive variance (2008)
  5. Kara, Sadik; Okandan, Mustafa: Atrial fibrillation classification with artificial neural networks (2007)
  6. Skowronski, Mark D.; Harris, John G.: Automatic speech recognition using a predictive echo state network classifier (2007)
  7. Grüning, André: Stack-like and queue-like dynamics in recurrent neural networks (2006)
  8. Jacobsson, Henrik: Rule extraction from recurrent neural networks: A taxonomy and review (2005)
  9. Gabrijel, Ivan; Dobnikar, Andrej: On-line identification and reconstruction of finite automata with generalized recurrent neural networks (2003)
  10. Schmidhuber, J.; Gers, F.; Eck, D.: Learning nonregular languages: A comparison of simple recurrent networks and LSTM (2002)
  11. Hochreiter, Sepp; Mozer, Michael C.: A discrete probabilistic memory model for discovering dependencies in time (2001)
  12. Kremer, Stefan C.: Spatiotemporal connectionist networks: A taxonomy and review (2001)
  13. Hochreiter, Sepp: The vanishing gradient problem during learning recurrent neural nets and problem solutions (1998)