hebbRNN

hebbRNN: A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks.

How does our brain learn to produce the large, impressive, and flexible array of motor behaviors we possess? In recent years, there has been renewed interest in modeling complex human behaviors such as memory and motor skills using neural networks. However, training these networks to produce meaningful behavior has proven difficult. Furthermore, the most common methods are generally not biologically plausible: they rely on information that is not local to the synapses of individual neurons, as well as on instantaneous reward signals. The current package is a Matlab implementation of a biologically plausible training rule for recurrent neural networks using a delayed and sparse reward signal. On individual trials, input is perturbed randomly at the synapses of individual neurons, and these potential weight changes are accumulated in a Hebbian manner (multiplying pre- and post-synaptic activity) in an eligibility trace. At the end of each trial, a reward signal is determined based on the overall performance of the network in achieving the desired goal, and this reward is compared to the expected reward. The difference between the observed and expected reward is used, in combination with the eligibility trace, to strengthen or weaken the corresponding synapses within the network, leading to proper network performance over time.
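
The update can be summarized in a short sketch. The Matlab fragment below is only illustrative: the network size, gain, learning rate, target trajectory, readout, and the exact form of the eligibility trace (perturbation times presynaptic rate) are assumptions made for the example and do not reflect the actual hebbRNN code or parameter names.

    % Minimal sketch of a reward-modulated Hebbian update for a rate-based RNN.
    % All sizes, constants, and the target are illustrative assumptions; none of
    % the variable names correspond to the actual hebbRNN implementation.
    N = 100;                               % number of recurrent units
    T = 200;                               % time steps per trial
    dt = 0.1; tau = 1;                     % integration step and neuronal time constant
    eta = 0.05;                            % learning rate
    g = 1.5;                               % recurrent gain
    J = g * randn(N) / sqrt(N);            % recurrent weights (plastic)
    w_out = randn(N, 1) / sqrt(N);         % fixed linear readout (assumption)
    target = sin(linspace(0, 2*pi, T))';   % assumed one-dimensional target output
    Rbar = 0;                              % running estimate of the expected reward
    alphaR = 0.75;                         % smoothing factor for the expected reward

    for trial = 1:500
        x = zeros(N, 1); r = tanh(x);
        E = zeros(N);                      % eligibility trace, one entry per synapse
        z = zeros(T, 1);
        for t = 1:T
            r_pre = r;                                 % presynaptic rates before the update
            xi = 0.1 * randn(N, 1);                    % random exploratory perturbation
            x = x + dt/tau * (-x + J*r_pre + xi);      % leaky integration of recurrent input
            r = tanh(x);
            z(t) = w_out' * r;                         % network output
            E = E + xi * r_pre';                       % Hebbian accumulation: perturbation times presynaptic rate
        end
        R = -mean((z - target).^2);                    % delayed reward from whole-trial performance
        J = J + eta * (R - Rbar) * E;                  % reward prediction error gates the eligibility trace
        Rbar = alphaR * Rbar + (1 - alphaR) * R;       % update the expected reward
    end

Because the scalar reward prediction error (R - Rbar) multiplies the entire trace, a single delayed reward delivered at the end of the trial is enough to assign credit to individual synapses, which is the central idea of the rule.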

References in zbMATH (referenced in 1 article, 1 standard article)

  1. Jonathan A. Michaels; Hansjörg Scherberger: hebbRNN: A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks (2016)