Learning of state representation in recurrent network: the power of random feedback and biological constraints

  1. Physical and Health Education, Graduate School of Education, The University of Tokyo, Tokyo, Japan
  2. Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology, Tancha, Japan
  3. Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
  4. Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
  5. International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.


Editors

  • Reviewing Editor
    Richard Naud
    University of Ottawa, Ottawa, Canada
  • Senior Editor
    Panayiota Poirazi
    FORTH Institute of Molecular Biology and Biotechnology, Heraklion, Greece

Reviewer #1 (Public review):

Summary:

Can a plastic RNN serve as a basis function for learning to estimate value? In previous work this was shown to be the case, with an architecture similar to the one proposed here. The learning rule in the previous work was back-prop with an objective function that was the squared TD error (delta). Such a learning rule is non-local, as the changes in the weights within the RNN, and from the inputs to the RNN, depend on the weights from the RNN to the output, which estimates value. In addition, these weights themselves change over learning. The main idea in this paper is to examine whether replacing these non-local, changing weights used for credit assignment with fixed random weights can still produce results similar to those obtained with full back-prop. This random feedback approach is motivated by a similar approach used for deep feed-forward neural networks.

This work shows that random feedback in credit assignment performs well, though not as well as the precise gradient-based approach. When additional biological-plausibility constraints are imposed, performance degrades. These results are not surprising given previous results on random feedback. This work is incomplete because the delays used were only a few time steps, and it is not clear how well random feedback would operate with longer delays. Additionally, the simulated examples, with a single cue and a single reward, are overly simplistic, and the field should move beyond these exceptionally simple examples.
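
For concreteness, here is a minimal sketch of the kind of setup described above: a discrete-time sigmoidal value-RNN trained with a one-step-truncated TD update, in which credit assignment uses either the value weights themselves (backprop-like) or a fixed random feedback vector. The architecture, parameter values, and update equations are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_value_rnn(feedback, n=20, gamma=0.8, alpha=0.1,
                    T=10, cue_t=2, rew_t=5, n_trials=2000, seed=0):
    """Discrete-time sigmoidal value-RNN trained with a one-step-truncated TD update.
    feedback='bp': credit assignment uses the value weights w (backprop-like).
    feedback='rf': credit assignment uses a fixed random vector c (random feedback)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.3, (n, n))       # recurrent weights
    a = rng.normal(0, 0.3, n)            # cue-input weights
    w = np.zeros(n)                      # value (readout) weights
    c = rng.normal(0, 0.3, n)            # fixed random feedback vector
    for _ in range(n_trials):
        x_pp, x_p, v_p = np.zeros(n), np.zeros(n), 0.0
        for t in range(T):
            o = 1.0 if t == cue_t else 0.0
            r = 1.0 if t == rew_t else 0.0
            x = sigmoid(W @ x_p + a * o)
            v = w @ x
            delta = r + gamma * v - v_p            # TD reward-prediction error
            f = w if feedback == 'bp' else c       # non-local (bp) vs fixed random (rf) feedback
            # RPE x feedback x postsynaptic sigmoid slope x presynaptic activity
            W += alpha * delta * np.outer(f * x_p * (1 - x_p), x_pp)
            w += alpha * delta * x_p               # readout trained with its true gradient
            x_pp, x_p, v_p = x_p, x, v
    return W, a, w

for fb in ('bp', 'rf'):
    W, a, w = train_value_rnn(fb)
    x, values = np.zeros(len(w)), []
    for t in range(10):                            # probe trial: inspect value estimates over time
        x = sigmoid(W @ x + a * (1.0 if t == 2 else 0.0))
        values.append(round(float(w @ x), 2))
    print(fb, values)
```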

Strengths:

• The authors show that random feedback can closely approximate a model trained with detailed credit assignment.
• The authors simulate several experiments, including some with probabilistic reward schedules, and show results similar to those obtained with detailed credit assignment, as well as to experimental findings.
• The paper examines the impact of more biologically realistic learning rules, and the results remain quite similar to those of the detailed back-prop model.

Weaknesses:

• The authors also show that an untrained RNN does not perform as well as the trained RNN. However, they never explain what they mean by an untrained RNN. It should be clearly explained. These results are actually surprising. An untrained RNN with enough units and a sufficiently large variance of recurrent weights can have high dimensionality and generate a complete or nearly complete basis, though not an orthonormal one (e.g., Rajan & Abbott 2006). It should be possible to use such a basis to learn this simple classical conditioning paradigm. It would be useful to measure the dimensionality of the network dynamics in both trained and untrained RNNs.

• The impact of the article is limited by using a network with discrete time steps, and only a small number of time steps from stimulus to reward. What is the length of each time step? If it is on the order of the membrane time constant, then a few time steps are only tens of ms. In classical conditioning experiments, typical delays are on the order of hundreds of milliseconds to seconds. The authors should test whether random feedback weights work as well for larger time spans. This can be done by simply using a much larger number of time steps.

• In the section with more biologically constrained learning rules, while the output weights are restricted to be positive (as are the random feedback weights), the recurrent weights and the weights from the input to the RNN are still bipolar and can change sign during learning. Why is the constraint imposed only on the output weights? It seems reasonable that the whole setup would fail if the recurrent weights were only positive, as in such a case most neurons would have very similar dynamics and the network dimensionality would be very low. However, it is possible that only negative weights might work. It is unclear to me how to justify that bipolar weights that change sign are appropriate for the recurrent connections but inappropriate for the output connections. On the other hand, an RNN with excitatory and inhibitory neurons in which weight signs do not change could possibly work.

• Like most papers in the field, this work assumes a world composed of a single cue. In the real world there are many more cues than rewards; some cues are not associated with any reward, and some are associated with other rewards or even punishments. In the simplest case, it would be useful to show that this network could actually work if there are additional distractor cues that appear at random either before the CS or between the CS and US. There are good reasons to believe such distractor cues will be fatal for an untrained RNN, but they might work with a trained RNN, using either BPTT or random feedback. Although this assumption is a common flaw in most work in the field, we should no longer ignore these slightly more realistic scenarios.

Reviewer #2 (Public review):

Summary:

Tsurumi et al. show that recurrent neural networks can learn state and value representations in simple reinforcement learning tasks when trained with random feedback weights. The traditional method of learning for recurrent networks in such tasks (backpropagation through time) requires feedback weights that are a transposed copy of the feed-forward weights, a biologically implausible assumption. This manuscript builds on previous work regarding "random feedback alignment" and "value-RNNs", and extends them to a reinforcement learning context. The authors also demonstrate that certain non-negative constraints can enforce a "loose alignment" of feedback weights. The authors' results suggest that random feedback may be a powerful tool for learning in biological networks, even in reinforcement learning tasks.

Strengths:

The authors describe well the issues regarding biologically plausible learning in recurrent networks and in reinforcement learning tasks. They take care to propose networks that might be implemented in biological systems and compare their proposed learning rules to those already existing in the literature. Further, they use small networks on relatively simple tasks, which allows for easier intuition into the learning dynamics.

Weaknesses:

The principles discovered by the authors in these smaller networks are not applied to deeper networks or more complicated tasks, so it remains unclear to what degree these methods can scale up or be used more generally.

Reviewer #3 (Public review):

Summary:

The paper studies learning rules in a simple sigmoidal recurrent neural network setting. The recurrent network has a single layer of 10 to 40 units. It is first confirmed that feedback alignment (FA) can learn a value function in this setting. Then, so-called bio-plausible constraints are added: (1) the value (readout) weights are non-negative, (2) the activity is non-negative (a normal sigmoid rather than one downscaled to between -0.5 and 0.5), (3) the feedback weights are non-negative, (4) the learning rule is revised to be monotonic: the weights are not downregulated. In the simple task considered, none of these four biological features appears to totally impair learning.

Strengths:

(1) The learning rules are implemented in a low-level fashion of the form (pre-synaptic activity) x (post-synaptic activity) x feedback x RPE, which is therefore interpretable in terms of quantities measurable in the wet lab.

(2) I find that non-negative FA (FA with non-negative c and w) is the most valuable theoretical insight of this paper: I understand why the alignment between w and c is automatically better at initialization.

(3) The task choice is relevant since it connects with experimental settings of reward conditioning in which plasticity measurements are possible.

Weaknesses:

(4) The task is rather easy, so it is not clear that it really captures the computational gap that exists between FA (gradient-like learning) and a simpler learning rule like a delta rule: RPE x (pre-synaptic) x (post-synaptic). To check that the task is not too trivial, I suggest adding a control where the vector c is constant, c_i = 1.

(5) Related to point 3), the main strength of this paper is to draw potential connections with experimental data. It would be good to highlight more concretely the predictions of the theory for experimental findings. (Ideally, what should be observed with non-negative FA that is not expected with FA or a delta rule (constant global feedback)?)

(6a) Random feedback with RNNs in RL has been studied in the past, so it may be worth giving some insight into how the results and analyses compare to this previous line of work (for instance, in [1]). For instance, I am not very surprised that FA also works for value prediction with TD error. It is also expected from the literature that the RL + RNN + FA setting would scale to tasks that are more complex than the conditioning problem proposed here, so is there a more specific take-home message about non-negative FA, or a benefit from this simpler toy task?
(6b) Related to task complexity, it is not clear to me whether non-negative value and feedback weights would generally scale to harder tasks. If the task is so simple that a global RPE signal is sufficient to learn (see 4 and 5), then it could be good to extend the task to find a substantial gap between global RPE, non-negative FA, FA, and BP. For a well-chosen task, I expect to see a performance gap between any pair of these four learning rules. In the context of the present paper, it would be particularly interesting to study the failure modes of non-negative FA and the cases where it does perform as well as FA.

(7) I find that the writing could be improved; it mostly feels more technical and difficult than it should be. Here are some recommendations:
(7a) For instance, the task (CSC) is not fully described and requires background knowledge from other papers, which is not desirable.
(7b) Also, the rationale for the added difficulty with the stochastic reward and the new state is not well explained.
(7c) In the technical description of the results, the text dives into descriptive comments on the figures, but high-level take-home messages would be helpful to guide the reader. I got a bit lost, although I feel there is probably a lot of depth in these paragraphs.

(8) Related to the writing issue and to 5), I wish that "bio-plausibility" were not the only reason to study positive feedback and value weights. Is it possible to develop a bit more specifically what is interesting about this positivity, and why? Is there an expected finding with non-negative FA in terms of model capability? Or perhaps a simpler, crisper take-home message communicating the experimental predictions to the community would be useful.

(1) https://www.nature.com/articles/s41467-020-17236-y

Author response:

Reviewer #1 (Public review):

Summary:

Can a plastic RNN serve as a basis function for learning to estimate value? In previous work this was shown to be the case, with an architecture similar to the one proposed here. The learning rule in the previous work was back-prop with an objective function that was the squared TD error (delta). Such a learning rule is non-local, as the changes in the weights within the RNN, and from the inputs to the RNN, depend on the weights from the RNN to the output, which estimates value. In addition, these weights themselves change over learning. The main idea in this paper is to examine whether replacing these non-local, changing weights used for credit assignment with fixed random weights can still produce results similar to those obtained with full back-prop. This random feedback approach is motivated by a similar approach used for deep feed-forward neural networks.

This work shows that random feedback in credit assignment performs well, though not as well as the precise gradient-based approach. When additional biological-plausibility constraints are imposed, performance degrades. These results are not surprising given previous results on random feedback. This work is incomplete because the delays used were only a few time steps, and it is not clear how well random feedback would operate with longer delays. Additionally, the simulated examples, with a single cue and a single reward, are overly simplistic, and the field should move beyond these exceptionally simple examples.

Strengths:

• The authors show that random feedback can closely approximate a model trained with detailed credit assignment.

• The authors simulate several experiments, including some with probabilistic reward schedules, and show results similar to those obtained with detailed credit assignment, as well as to experimental findings.

• The paper examines the impact of more biologically realistic learning rules, and the results remain quite similar to those of the detailed back-prop model.

Weaknesses:

• The authors also show that an untrained RNN does not perform as well as the trained RNN. However, they never explain what they mean by an untrained RNN. It should be clearly explained. These results are actually surprising. An untrained RNN with enough units and a sufficiently large variance of recurrent weights can have high dimensionality and generate a complete or nearly complete basis, though not an orthonormal one (e.g., Rajan & Abbott 2006). It should be possible to use such a basis to learn this simple classical conditioning paradigm. It would be useful to measure the dimensionality of the network dynamics in both trained and untrained RNNs.

Thank you for pointing out the lack of explanation of the untrained RNN. The untrained RNN in our simulations (except Fig. 6D/6E, gray dotted) was a randomly initialized RNN (i.e., connection weights were drawn pseudo-randomly from a normal distribution) of the kind used as the initial RNN for training the value-RNNs. As you noted, the performance of the untrained RNN indeed improved as the number of units increased (Fig. 2J), and at its best it was almost comparable to the highest performance of the trained value-RNNs (Fig. 2I). In the revision, we will show the dimensionality of the network dynamics (as you suggested), as well as the eigenvalue spectrum of the network.
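
As an illustration of the planned analysis, the dimensionality could, for example, be quantified with the participation ratio of the eigenvalues of the activity covariance. The sketch below is a minimal example of such a measurement applied to a randomly initialized sigmoidal RNN; the gain, noise drive, and other parameters are assumptions for illustration, not our actual simulation settings.

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of an activity matrix X (time x units), measured as the
    participation ratio of the covariance eigenvalues: (sum lam)^2 / sum(lam^2)."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# example: activity of a randomly initialized ("untrained") sigmoidal RNN
rng = np.random.default_rng(1)
n, T, g = 40, 500, 1.5
W = rng.normal(0, g / np.sqrt(n), (n, n))        # random recurrent weights (gain g)
x = rng.uniform(0, 1, n)
X = np.empty((T, n))
for t in range(T):
    x = 1.0 / (1.0 + np.exp(-(W @ x + rng.normal(0, 0.2, n))))   # noise-driven dynamics
    X[t] = x
print("participation ratio :", round(participation_ratio(X), 2))
print("spectral radius of W:", round(np.abs(np.linalg.eigvals(W)).max(), 2))
# the same measures could be applied to trained value-RNN weights and activity for comparison
```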

• The impact of the article is limited by using a network with discrete time steps, and only a small number of time steps from stimulus to reward. What is the length of each time step? If it is on the order of the membrane time constant, then a few time steps are only tens of ms. In classical conditioning experiments, typical delays are on the order of hundreds of milliseconds to seconds. The authors should test whether random feedback weights work as well for larger time spans. This can be done by simply using a much larger number of time steps.

Thank you for pointing out this important issue, for which our explanation was lacking and our examination was insufficient. We do not consider that a single time step in our models corresponds to the neuronal membrane time constant. Rather, for the following reasons, we assume that a time step corresponds to several hundred milliseconds:

- We assume that a single RNN unit corresponds to a small neuronal population that intrinsically (for genetic/developmental reasons) shares inputs/outputs and is mutually connected via excitatory collaterals.

- Cortical activity is suggested to be sustained not only by fast synaptic transmission and spiking but also, even predominantly, by slower synaptic neurochemical dynamics (Mongillo et al., 2008, Science "Synaptic Theory of Working Memory" https://www.science.org/doi/10.1126/science.1150769).

- In line with this theoretical suggestion, previous research examining excitatory interactions between pyramidal cells, to which one of us (the corresponding author, Morita) contributed by conducting model fitting (Morishima, Morita, Kubota, Kawaguchi, 2011, J Neurosci, https://www.jneurosci.org/content/31/28/10380), showed that the mean recovery time constant from facilitation for recurrent excitation among one of the two types of cortico-striatal pyramidal cells was around 500 milliseconds.

If a single time step corresponds to 500 milliseconds, the three time steps from cue to reward in our simulations correspond to 1.5 s, which matches the delay in the conditioning task used in Schultz et al. (1997, Science). Nevertheless, as you pointed out, it is necessary to examine whether our random feedback models can work for longer delays, and we will do so in our revision.
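
As an illustration, such a test would amount to parameterizing the cue-reward delay in the trial generator. The sketch below is a hypothetical example (our actual task structure, including its CSC state representation and inter-trial interval, may differ).

```python
import numpy as np

def cue_reward_trial(delay, iti=5, p_reward=1.0, rng=None):
    """One trial: a single cue at t = 0 and a reward `delay` steps later
    (delivered with probability p_reward), followed by an inter-trial interval."""
    rng = rng or np.random.default_rng()
    T = delay + 1 + iti
    obs, rew = np.zeros(T), np.zeros(T)
    obs[0] = 1.0
    if rng.random() < p_reward:
        rew[delay] = 1.0
    return obs, rew

# if one model time step ~ 500 ms: delay=3 ~ 1.5 s (as in the current simulations),
# delay=12 ~ 6 s (a longer span at which random feedback still needs to be tested)
for d in (3, 12):
    obs, rew = cue_reward_trial(delay=d, rng=np.random.default_rng(0))
    print(f"delay={d}:", obs.astype(int), rew.astype(int))
```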

• In the section with more biologically constrained learning rules, while the output weights are restricted to be positive (as are the random feedback weights), the recurrent weights and the weights from the input to the RNN are still bipolar and can change sign during learning. Why is the constraint imposed only on the output weights? It seems reasonable that the whole setup would fail if the recurrent weights were only positive, as in such a case most neurons would have very similar dynamics and the network dimensionality would be very low. However, it is possible that only negative weights might work. It is unclear to me how to justify that bipolar weights that change sign are appropriate for the recurrent connections but inappropriate for the output connections. On the other hand, an RNN with excitatory and inhibitory neurons in which weight signs do not change could possibly work.

Our explanation and examination of this issue were insufficient; thank you for pointing this out and for the helpful suggestion. In the Discussion (Lines 507-510) of the original manuscript, we wrote, "Regarding the connectivity, in our models, recurrent/feed-forward connections could take both positive and negative values. This could be justified because there are both excitatory and inhibitory connections in the cortex and the net connection sign between two units can be positive or negative depending on whether excitation or inhibition exceeds the other." However, we admit that the meaning of this description was not clear, and more explicit modeling will be necessary, as you suggested.

Therefore, in our revision, we will examine models in which inhibitory units (modeling fast-spiking (FS) GABAergic cells) are incorporated and neurons follow Dale's law.
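
As an illustration of how such a constraint could be enforced, one option is to learn non-negative weight magnitudes and assign each presynaptic unit a fixed sign. The sketch below is a hypothetical example of that bookkeeping; the unit counts, learning rate, and the stand-in update are placeholders, not our planned implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i = 16, 4                                        # excitatory / inhibitory (FS-like) units
sign = np.concatenate([np.ones(n_e), -np.ones(n_i)])    # Dale's law: one fixed sign per presynaptic unit
W_mag = rng.uniform(0.0, 0.3, (n_e + n_i, n_e + n_i))   # non-negative weight magnitudes

def effective_W(W_mag):
    """Effective recurrent weights: column j inherits the sign of presynaptic unit j."""
    return W_mag * sign[None, :]

def dale_update(W_mag, dW_eff, lr=0.05):
    """Apply an update expressed in effective-weight space while preserving signs:
    magnitudes are rectified at zero so no weight can cross to the opposite sign."""
    W_mag += lr * dW_eff * sign[None, :]    # convert the update to magnitude space
    np.clip(W_mag, 0.0, None, out=W_mag)
    return W_mag

dW = rng.normal(0, 1, W_mag.shape)          # stand-in for a TD-error-driven update
W_mag = dale_update(W_mag, dW)
W = effective_W(W_mag)
assert (W[:, :n_e] >= 0).all() and (W[:, n_e:] <= 0).all()   # excitatory columns >= 0, inhibitory <= 0
```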

• Like most papers in the field, this work assumes a world composed of a single cue. In the real world there are many more cues than rewards; some cues are not associated with any reward, and some are associated with other rewards or even punishments. In the simplest case, it would be useful to show that this network could actually work if there are additional distractor cues that appear at random either before the CS or between the CS and US. There are good reasons to believe such distractor cues will be fatal for an untrained RNN, but they might work with a trained RNN, using either BPTT or random feedback. Although this assumption is a common flaw in most work in the field, we should no longer ignore these slightly more realistic scenarios.

Thank you very much for this insightful comment. In our revision, we will examine situations in which there is not only a reward-associated cue but also distractor cues that appear at random.
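
As an illustration, such trials could be generated as follows; the cue counts, timings, and probabilities in this sketch are placeholders rather than our planned task parameters.

```python
import numpy as np

def trial_with_distractors(cs_t=2, delay=3, T=12, n_cues=3, p_distractor=0.5, rng=None):
    """Observations are a (T x n_cues) array: cue 0 is the reward-predicting CS,
    presented at cs_t; the other cues are distractors that appear at random times
    (before the CS or between the CS and US) and predict nothing."""
    rng = rng or np.random.default_rng()
    obs, rew = np.zeros((T, n_cues)), np.zeros(T)
    obs[cs_t, 0] = 1.0                      # CS
    rew[cs_t + delay] = 1.0                 # US
    for k in range(1, n_cues):
        if rng.random() < p_distractor:
            t_d = rng.choice([t for t in range(cs_t + delay) if t != cs_t])
            obs[t_d, k] = 1.0               # distractor at a random pre-US time
    return obs, rew

obs, rew = trial_with_distractors(rng=np.random.default_rng(3))
print(obs.astype(int).T)                    # rows: CS and distractor cues over time
print(rew.astype(int))
```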

Reviewer #2 (Public review):

Summary:

Tsurumi et al. show that recurrent neural networks can learn state and value representations in simple reinforcement learning tasks when trained with random feedback weights. The traditional method of learning for recurrent networks in such tasks (backpropagation through time) requires feedback weights that are a transposed copy of the feed-forward weights, a biologically implausible assumption. This manuscript builds on previous work regarding "random feedback alignment" and "value-RNNs", and extends them to a reinforcement learning context. The authors also demonstrate that certain non-negative constraints can enforce a "loose alignment" of feedback weights. The authors' results suggest that random feedback may be a powerful tool for learning in biological networks, even in reinforcement learning tasks.

Strengths:

The authors describe well the issues regarding biologically plausible learning in recurrent networks and in reinforcement learning tasks. They take care to propose networks that might be implemented in biological systems and compare their proposed learning rules to those already existing in the literature. Further, they use small networks on relatively simple tasks, which allows for easier intuition into the learning dynamics.

Weaknesses:

The principles discovered by the authors in these smaller networks are not applied to deeper networks or more complicated tasks, so it remains unclear to what degree these methods can scale up or be used more generally.

In our revision, we will examine more biologically realistic models with excitatory and inhibitory units, as well as more complicated tasks with distractor cues. We will also consider whether and how the depth of the networks can be increased, though we do not currently have a concrete idea on this last point. Thank you also for the detailed and insightful 'recommendations for authors'; we will address them in our revision as well.

Reviewer #3 (Public review):

Summary:

The paper studies learning rules in a simple sigmoidal recurrent neural network setting. The recurrent network has a single layer of 10 to 40 units. It is first confirmed that feedback alignment (FA) can learn a value function in this setting. Then, so-called bio-plausible constraints are added: (1) the value (readout) weights are non-negative, (2) the activity is non-negative (a normal sigmoid rather than one downscaled to between -0.5 and 0.5), (3) the feedback weights are non-negative, (4) the learning rule is revised to be monotonic: the weights are not downregulated. In the simple task considered, none of these four biological features appears to totally impair learning.

Strengths:

(1) The learning rules are implemented in a low-level fashion of the form (pre-synaptic activity) x (post-synaptic activity) x feedback x RPE, which is therefore interpretable in terms of quantities measurable in the wet lab.

(2) I find that non-negative FA (FA with non-negative c and w) is the most valuable theoretical insight of this paper: I understand why the alignment between w and c is automatically better at initialization.

(3) The task choice is relevant since it connects with experimental settings of reward conditioning in which plasticity measurements are possible.

Weaknesses:

(4) The task is rather easy, so it is not clear that it really captures the computational gap that exists between FA (gradient-like learning) and a simpler learning rule like a delta rule: RPE x (pre-synaptic) x (post-synaptic). To check that the task is not too trivial, I suggest adding a control where the vector c is constant, c_i = 1.

Thank you for this insightful comment. We have realized that this issue requires consideration from several angles. A previous study by one of us (Wärnberg & Kumar, 2023, PNAS) assumed that DA represents a vector error rather than a scalar RPE, and thus homogeneous DA was used as a negative control because it cannot represent any vector error other than one in the direction of (1, 1, ..., 1). In contrast, the present work assumed that DA represents a scalar RPE, in which case homogeneous DA (i.e., constant feedback) would not be a failure mode: it can actually represent a scalar RPE, and alignment toward the direction of (1, 1, ..., 1) should in fact occur. This alignment toward (1, 1, ..., 1) may itself be interesting, because it implies that if the heterogeneity of DA inputs is not large and the feedback is not far from (1, 1, ..., 1), states come to be represented in such a way that a simple summation of cortical neuronal activity approximates value. This could potentially explain why value is often correlated with regional activation (fMRI BOLD signal) of not only striatal but also cortical regions, which one of us has regarded as an unresolved mystery. On the other hand, as you pointed out, the case with constant feedback is the same as a simple delta rule, and then what the present analyses would show is that FA is actually at work behind the successful operation of such a simple rule. In any case, we will make further examinations of, and considerations on, this issue.
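
To make the point about constant feedback explicit, with homogeneous feedback c_i = 1 the feedback-aligned weight update reduces to a delta-rule-like three-factor update driven by a single global RPE. The sketch below is a minimal numerical illustration; the notation is assumed for illustration rather than taken from our manuscript.

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, delta = 10, 0.1, 0.7              # units, learning rate, scalar TD-RPE
x_pre = rng.uniform(0, 1, n)                # presynaptic activity
x_post = rng.uniform(0, 1, n)               # postsynaptic (sigmoid) activity

def fa_update(c):
    """Feedback-aligned update: RPE x feedback x postsynaptic slope x presynaptic activity."""
    return alpha * delta * np.outer(c * x_post * (1 - x_post), x_pre)

dW_heterogeneous = fa_update(rng.uniform(0, 1, n))   # heterogeneous (e.g., non-negative) feedback
dW_homogeneous = fa_update(np.ones(n))               # homogeneous feedback, c_i = 1

# with c_i = 1 the feedback factor drops out, leaving a delta-rule-like three-factor
# update driven by a single global RPE signal
assert np.allclose(dW_homogeneous,
                   alpha * delta * np.outer(x_post * (1 - x_post), x_pre))
```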

(5) Related to point 3), the main strength of this paper is to draw potential connections with experimental data. It would be good to highlight more concretely the predictions of the theory for experimental findings. (Ideally, what should be observed with non-negative FA that is not expected with FA or a delta rule (constant global feedback)?)

In response to this insightful comment, we considered concrete predictions of our models. In the FA model, the feedback vector c and the value-weight vector w are initially in a random (on average orthogonal) relationship and become gradually aligned, whereas in the non-negative model, the vectors c and w are loosely aligned from the beginning. We then considered how the vectors c and w could be experimentally measured. Each element of the feedback vector c is multiplied by the TD-RPE, modulating the degree of update in each pyramidal cell (more accurately, the pyramidal-cell population that corresponds to a single RNN unit). Thus, each element of c could be measured as the magnitude of each pyramidal cell's response to DA stimulation. The element of the value-weight vector w corresponding to a given pyramidal cell could be measured, if a striatal neuron receiving input from that pyramidal cell can be identified (although this is technically demanding), as the magnitude of that striatal neuron's response to activation of the pyramidal cell.

The abovementioned predictions could then be tested by (i) identifying cortical, striatal, and VTA regions connected by the meso-cortico-limbic and cortico-striatal-VTA pathways, (ii) identifying pairs of connected cortical pyramidal cells and striatal neurons, (iii) measuring the responses of the identified pyramidal cells to DA stimulation, as well as the responses of the identified striatal neurons to activation of the connected pyramidal cells, and (iv) testing whether the DA->pyramidal responses and the pyramidal->striatal responses are associated across pyramidal cells, and whether such associations develop through learning. We will elaborate this tentative idea, as well as other ideas, in our revision.
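
The "loose alignment from the beginning" can already be illustrated at the level of the initial vectors: two independent zero-mean random vectors are on average orthogonal, whereas two independent non-negative random vectors have a clearly positive expected cosine similarity. The sketch below is a minimal numerical illustration; the distributions are chosen for illustration and are not necessarily those used in our models.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_samples = 20, 10000

def mean_cosine(sample):
    """Average cosine similarity between pairs of independently sampled vectors."""
    sims = []
    for _ in range(n_samples):
        c, w = sample(), sample()
        sims.append(c @ w / (np.linalg.norm(c) * np.linalg.norm(w)))
    return float(np.mean(sims))

# zero-mean random c and w (plain FA): on average orthogonal at initialization
print("signed      :", round(mean_cosine(lambda: rng.normal(0, 1, n)), 3))   # close to 0
# non-negative random c and w (non-negative FA): positively aligned from the start
print("non-negative:", round(mean_cosine(lambda: rng.uniform(0, 1, n)), 3))  # roughly 0.75
```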

(6a) Random feedback with RNNs in RL has been studied in the past, so it may be worth giving some insight into how the results and analyses compare to this previous line of work (for instance, in this paper [https://www.nature.com/articles/s41467-020-17236-y]). For instance, I am not very surprised that FA also works for value prediction with TD error. It is also expected from the literature that the RL + RNN + FA setting would scale to tasks that are more complex than the conditioning problem proposed here, so is there a more specific take-home message about non-negative FA, or a benefit from this simpler toy task?

In reply to this suggestion, we will explore how our results compare to previous studies, including the paper [https://www.nature.com/articles/s41467-020-17236-y], and explore the benefits of our models. At present, we can think of one possible direction. According to our results (Fig. 6E), under the non-negativity constraint, the model with random feedback and a monotonic plasticity rule (bioVRNNrf) performed better, on average, than the model with backprop and a non-monotonic plasticity rule (revVRNNbp) when the number of units was large, though the difference in performance was not drastic. We will explore the reasons for this and examine whether it also holds for more realistic models, e.g., with separate excitatory and inhibitory units (as suggested by another reviewer).

(6b) Related to task complexity, it is not clear to me whether non-negative value and feedback weights would generally scale to harder tasks. If the task is so simple that a global RPE signal is sufficient to learn (see 4 and 5), then it could be good to extend the task to find a substantial gap between global RPE, non-negative FA, FA, and BP. For a well-chosen task, I expect to see a performance gap between any pair of these four learning rules. In the context of the present paper, it would be particularly interesting to study the failure modes of non-negative FA and the cases where it does perform as well as FA.

In reply to this comment, and also to another reviewer's comment, we will examine the performance of the different models in more complex tasks, e.g., with distractor cues or longer delays. We will also examine whether the advantage of bioVRNNrf over revVRNNbp mentioned in the previous point carries over to these different tasks.

(7) I find that the writing could be improved; it mostly feels more technical and difficult than it should be. Here are some recommendations:

(7a) For instance, the task (CSC) is not fully described and requires background knowledge from other papers, which is not desirable.

(7b) Also, the rationale for the added difficulty with the stochastic reward and the new state is not well explained.

(7c) In the technical description of the results, the text dives into descriptive comments on the figures, but high-level take-home messages would be helpful to guide the reader. I got a bit lost, although I feel there is probably a lot of depth in these paragraphs.

Thank you for your helpful suggestions. We will thoroughly revise our writing.

(8) Related to the writing issue and to 5), I wish that "bio-plausibility" were not the only reason to study positive feedback and value weights. Is it possible to develop a bit more specifically what is interesting about this positivity, and why? Is there an expected finding with non-negative FA in terms of model capability? Or perhaps a simpler, crisper take-home message communicating the experimental predictions to the community would be useful.

We will consider whether and how the non-negative constraints could have benefits beyond biological plausibility, in particular in theoretical aspects or in applications using neuromorphic hardware, while also elaborating the links to biology and making the model's predictions concrete.
