A two-part neural network models reward-based training and provides a unified framework for studying diverse computations that can be compared to electrophysiological recordings from behaving animals.
Deep neural networks can be trained to automatically find mechanistic models that quantitatively agree with experimental data, providing new opportunities for building and visualizing interpretable models of neural dynamics.
Syntactic structure-building processes can be applied to task-irrelevant speech that should be ignored, demonstrating that selective attention does not fully eliminate linguistic processing of competing speech.
The conductance-based encoding model creates a new bridge between statistical models and biophysical models of neurons, and infers visually-evoked excitatory and inhibitory synaptic conductances from spike trains in macaque retina.
Random fluctuations in neuronal firing may enable a single brain region, the medial entorhinal cortex, to perform distinct roles in cognition (by generating gamma waves) and in spatial navigation (by producing a grid cell map).