A recurrent network model trained to transcribe temporally scaled spoken digits into handwritten digits suggests that the brain flexibly encodes time-varying stimuli as neural trajectories that can be traversed at different speeds.
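The core idea can be illustrated with a toy sketch (an assumed two-unit flow, not the paper's trained network): if a fixed dynamical flow f(x) is scaled by a speed gain, dx/dt = s·f(x), the network visits the same sequence of states but reaches each one earlier or later, so doubling the speed compresses the trajectory in time without changing its path.

```python
import numpy as np

def f(x):
    # Assumed toy nonlinear flow: a damped rotation, standing in for
    # learned recurrent dynamics.
    B = np.array([[0.0, -2.0],
                  [2.0,  0.0]])
    return np.tanh(B @ x) - 0.1 * x

def simulate(speed, steps, dt=0.005):
    # Euler integration of dx/dt = speed * f(x): same vector field,
    # traversed at a different rate.
    x = np.array([1.0, 0.0])
    traj = []
    for _ in range(steps):
        x = x + dt * speed * f(x)
        traj.append(x)
    return np.array(traj)

slow = simulate(1.0, 2000)  # speed 1, simulated twice as long
fast = simulate(2.0, 1000)  # speed 2, half the steps

# The fast trajectory at step t lands (up to integration error) where the
# slow trajectory is at step 2t: same path, different traversal speed.
gap = np.linalg.norm(fast[999] - slow[1999])
print(gap)
```

Because only the scalar gain differs, the two runs trace the same curve through state space; the printed gap is just Euler discretization error.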
The interplay of recurrent excitation and short-term plasticity enables nonlinear transient amplification, an ideal mechanism for selective amplification, pattern completion, and pattern separation in recurrent neural networks.
A biologically plausible learning rule allows recurrent neural networks to learn nontrivial tasks using only sparse, delayed rewards, and the trained networks exhibit the complex dynamics observed in animal frontal cortices.
A two-part neural network models reward-based training and provides a unified framework in which to study diverse computations that can be compared to electrophysiological recordings from behaving animals.
Friedrich Schuessler, Francesca Mastrogiuseppe ... Omri Barak
An analysis of the relation between neural activity and behavioral output uncovers two dynamical regimes, shows how to model them, and demonstrates how to find them in experimental data.
Recurrent neural networks trained to navigate and infer latent states exhibit remapping patterns strikingly similar to those observed in navigational brain areas, inspiring new analyses of published data and suggesting that spontaneous remapping may function to support context-dependent navigation.
A computational model shows that preparation arises as an optimal control strategy in input-driven recurrent neural networks performing a delayed-reaching task.
Ching Fang, Dmitriy Aronov ... Emily L Mackevicius
A recurrent network using a simple, biologically plausible learning rule can learn the successor representation, suggesting that long-horizon predictions are computations that are easily accessible in neural circuits.
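The successor representation itself is a well-defined object, M = Σ_t γ^t T^t for transition matrix T, and a simple temporal-difference rule converges to it. The sketch below uses plain tabular TD on an assumed toy environment (a random walk on a ring), not the paper's recurrent-network rule, to show why the computation is so accessible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy environment: unbiased random walk on a ring of 8 states.
n_states, gamma, lr = 8, 0.9, 0.1
M = np.zeros((n_states, n_states))  # estimated successor matrix
I = np.eye(n_states)

s = 0
for _ in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states
    # TD update: row M[s] moves toward onehot(s) + gamma * M[s_next],
    # i.e. "I am here now, plus discounted expected future occupancy."
    M[s] += lr * (I[s] + gamma * M[s_next] - M[s])
    s = s_next

# Closed-form target for comparison: M* = (I - gamma * T)^-1, where T is
# the random walk's transition matrix.
T = (np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1)) / 2
M_true = np.linalg.inv(I - gamma * T)
print(np.abs(M - M_true).max())  # small after learning
```

Each update needs only the current state, the next state, and a local error signal, which is what makes long-horizon predictions of this kind plausible for neural circuits.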