Local online learning in recurrent networks with random feedback

James M. Murray (corresponding author), Columbia University, United States

Abstract

Recurrent neural networks (RNNs) enable the production and processing of time-dependent signals such as those involved in movement or working memory. Classic gradient-based algorithms for training RNNs have been available for decades, but are inconsistent with biological features of the brain, such as causality and locality. We derive an approximation to gradient-based learning that comports with these constraints by requiring synaptic weight updates to depend only on local information about pre- and postsynaptic activities, in addition to a random feedback projection of the RNN output error. In addition to providing mathematical arguments for the effectiveness of the new learning rule, we show through simulations that it can be used to train an RNN to perform a variety of tasks. Finally, to overcome the difficulty of training over very large numbers of timesteps, we propose an augmented circuit architecture that allows the RNN to concatenate short-duration patterns into longer sequences.

https://doi.org/10.7554/eLife.43299.001

Introduction

Many tasks require computations that unfold over time. To accomplish tasks involving motor control, working memory, or other time-dependent phenomena, neural circuits must learn to produce the correct output at the correct time. Such learning is a difficult computational problem, as it generally involves temporal credit assignment, requiring synaptic weight updates at a particular time to minimize errors not only at the time of learning but also at earlier and later times. The problem is also a very general one, as such learning occurs in numerous brain areas and is thought to underlie many complex cognitive and motor tasks encountered in experiments.

To obtain insight into how the brain might perform challenging time-dependent computations, an increasingly common approach is to train high-dimensional dynamical systems known as recurrent neural networks (RNNs) to perform tasks similar to those performed by circuits of the brain, often with the goal of comparing the RNN with neural data to obtain insight about how the brain solves computational problems (Mante et al., 2013; Carnevale et al., 2015; Sussillo et al., 2015; Remington et al., 2018). While such an approach can lead to useful insights about the neural representations that are formed once a task is learned, it so far cannot address in a satisfying way the process of learning itself, as the standard learning rules for training RNNs suffer from highly nonbiological features such as nonlocality and acausality, as we describe below.

The most straightforward approach to training an RNN to produce a desired output is to define a loss function based on the difference between the RNN output and the target output that we would like it to match, then to update each parameter in the RNN—typically the synaptic weights—by an amount proportional to the gradient of the loss function with respect to that parameter. The most widely used among these algorithms is backpropagation through time (BPTT) (Rumelhart et al., 1985). As its name suggests, BPTT is acausal, requiring that errors in the RNN output be accumulated incrementally from the end of a trial to the beginning in order to update synaptic weights. Real-time recurrent learning (RTRL) (Williams and Zipser, 1989), the other classic gradient-based learning rule, is causal but nonlocal, with the update to a particular synaptic weight in the RNN depending on the full state of the network—a limitation shared by more modern reservoir computing methods (Jaeger and Haas, 2004; Sussillo and Abbott, 2009). What’s more, both BPTT and RTRL require fine tuning, in the sense that the feedback weights carrying the RNN output error back into the network must precisely match the readout weights from the RNN to its output. Such precise matching requires a highly particular initial configuration of the synaptic weights, typically with no justification as to how such a configuration might come about in a biologically plausible manner. Further, if the readout weights are modified during training of the RNN, then the feedback weights must also be updated to match them, and it is unclear how this could be done without nonlocal information.

The goal of this work is to derive a learning rule for RNNs that is both causal and local, without requiring fine tuning of the feedback weights. Our results depend crucially on two approximations. First, locality is enforced by dropping the nonlocal part of the loss function gradient, making our learning rule only approximately gradient-based. Second, we replace the finely tuned feedback weights required by gradient-based learning with random feedback weights, inspired by the success of a similar approach in nonrecurrent feedforward networks (Lillicrap et al., 2016; Liao et al., 2016). While these two approximations address distinct shortcomings of gradient-based learning and can be made independently (as discussed below in Results), only when both are made together does a learning rule emerge that is fully biologically plausible in the sense of being causal, local, and avoiding fine tuning of feedback weights. In the sections that follow, we show that, even with these approximations, RNNs can be effectively trained to perform a variety of tasks. In the Appendices, we provide supplementary mathematical arguments showing why the algorithm remains effective despite its use of an inexact loss function gradient.

Results

The RFLO learning rule

To begin, we consider an RNN, as shown in Figure 1, in which a time-dependent input vector 𝐱(t) provides input to a recurrently connected hidden layer of N units described by activity vector 𝐡(t), and this activity is read out to form a time-dependent output 𝐲(t). Such a network is defined by the following equations:

(1) $h_i(t+1) = h_i(t) + \frac{1}{\tau}\left[-h_i(t) + \phi\!\left(\sum_{j=1}^{N} W_{ij} h_j(t) + \sum_{j=1}^{N_x} W^{\mathrm{in}}_{ij} x_j(t+1)\right)\right], \qquad y_k(t) = \sum_{i=1}^{N} W^{\mathrm{out}}_{ki} h_i(t).$
Figure 1. Schematic illustration of a recurrent neural network.

The network receives time-dependent input 𝐱(t), and its synaptic weights are trained so that the output 𝐲(t) matches a target function 𝐲*(t). The projection of the error 𝜺(t) with feedback weights is used for learning the input weights and recurrent weights.

https://doi.org/10.7554/eLife.43299.002

For concreteness, we take the nonlinear function appearing in Equation (1) to be $\phi(\cdot)=\tanh(\cdot)$. The goal is to train this network to produce a target output function 𝐲*(t) given a specified input function 𝐱(t) and initial activity vector 𝐡(0). The error is then the difference between the target output and the actual output, and the loss function is the squared error integrated over time:

(2) $\varepsilon_k(t) = y_k^*(t) - y_k(t), \qquad L = \frac{1}{2T}\sum_{t=1}^{T}\sum_{k=1}^{N_y}\left[\varepsilon_k(t)\right]^2.$

The goal of producing the target output function 𝐲*(t) is equivalent to minimizing this loss function.
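
As a concrete illustration of these definitions, the dynamics of Equation (1) and the loss of Equation (2) can be written in a few lines of NumPy. This sketch is ours (it is not the published Source code 1), and the function and variable names are illustrative assumptions:

```python
import numpy as np

def run_rnn(W, W_in, W_out, x, h0, tau=10.0):
    """Integrate the dynamics of Equation (1) forward in time.

    x has shape (T, N_x); returns hidden states h (T+1, N) and outputs y (T, N_y).
    """
    T = x.shape[0]
    h = np.zeros((T + 1, h0.shape[0]))
    h[0] = h0
    y = np.zeros((T, W_out.shape[0]))
    for t in range(T):
        u = W @ h[t] + W_in @ x[t]                    # total input current
        h[t + 1] = h[t] + (-h[t] + np.tanh(u)) / tau  # leaky update with phi = tanh
        y[t] = W_out @ h[t + 1]                       # linear readout
    return h, y

def loss(y, y_target):
    """Squared error integrated over time, Equation (2)."""
    return 0.5 * np.mean(np.sum((y_target - y) ** 2, axis=1))
```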

In order to minimize the loss function with respect to the recurrent weights, we take the derivative with respect to these weights:

(3) $\frac{\partial L}{\partial W_{ab}} = -\frac{1}{T}\sum_{t=1}^{T}\sum_{j=1}^{N}\left[(\mathbf{W}^{\mathrm{out}})^{\mathrm{T}}\boldsymbol{\varepsilon}(t)\right]_j \frac{\partial h_j(t)}{\partial W_{ab}}.$

Next, using the update Equation (1), we obtain the following recursion relation:

(4) $\frac{\partial h_j(t)}{\partial W_{ab}} = \left(1-\frac{1}{\tau}\right)\frac{\partial h_j(t-1)}{\partial W_{ab}} + \frac{1}{\tau}\,\delta_{ja}\,\phi'(u_a(t))\,h_b(t-1) + \frac{1}{\tau}\sum_k \phi'(u_j(t))\,W_{jk}\,\frac{\partial h_k(t-1)}{\partial W_{ab}},$

where $\delta_{ja}$ is the Kronecker delta function, $u_a(t)$ is the input current to unit $a$, and the recursion terminates with $\partial h_j(0)/\partial W_{ab}=0$. This gradient can be updated online at each timestep as the RNN is run, and implementing gradient descent to update the weights using Equation (3), we have $\Delta W_{ab} = -\eta\,\partial L/\partial W_{ab}$, where $\eta$ is a learning rate. This approach, known as RTRL (Williams and Zipser, 1989), is one of the two classic gradient-based algorithms for training RNNs. It can also be used for training the input and output weights of the RNN. The full derivation is presented in Appendix 1. (The other classic gradient-based algorithm, BPTT, involves a different approach for taking partial derivatives but is equivalent to RTRL; its derivation and relation to RTRL are also provided in Appendix 1.)
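
As an illustration of what the RTRL recursion entails, the sketch below (an assumed implementation with $\phi = \tanh$, not code from the published source) maintains the rank-3 sensitivity tensor of Equation (4) and accumulates the gradient of Equation (3); the sum over all recurrent weights is what makes the rule nonlocal and expensive:

```python
import numpy as np

def rtrl_step(P, W, u, h_prev, tau=10.0):
    """One update of the sensitivity tensor P[j, a, b] = dh_j/dW_ab, Equation (4)."""
    N = W.shape[0]
    dphi = 1.0 - np.tanh(u) ** 2                      # phi'(u) for phi = tanh
    # Nonlocal term: every entry of P mixes with all recurrent weights.
    nonlocal_term = np.einsum('j,jk,kab->jab', dphi, W, P)
    # Local term: nonzero only for j == a (the Kronecker delta).
    local_term = np.zeros((N, N, N))
    local_term[np.arange(N), np.arange(N), :] = dphi[:, None] * h_prev[None, :]
    return (1.0 - 1.0 / tau) * P + (nonlocal_term + local_term) / tau

def rtrl_gradient_term(P, W_out, eps):
    """Contribution of one timestep to dL/dW_ab, Equation (3), without the 1/T factor."""
    return -np.einsum('j,jab->ab', W_out.T @ eps, P)
```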

From a biological perspective, there are two problems with RTRL as a plausible rule for synaptic plasticity. The first problem is that it is nonlocal, with the update to synaptic weight Wab depending, through the last term in Equation (4), on every other synaptic weight in the RNN. This information would be inaccessible to a synapse in an actual neural circuit. The second problem is the appearance of (𝐖out)T in Equation (3), which means that the error in the RNN output must be fed back into the network with synaptic weights that are precisely symmetric with the readout weights. It is unclear how the readout and feedback weights could be made to match one another in a neural circuit in the brain.

In order to address these two shortcomings, we make two approximations to the RTRL learning rule. The first approximation consists of dropping a nonlocal term from the gradient, so that computing the update to a given synaptic weight requires only pre- and postsynaptic activities, rather than information about the entire state of the RNN including all of its synaptic weights. Second, as described in more detail below, we project the error back into the network for learning using random feedback weights, rather than feedback weights that are tuned to match the readout weights. These approximations, described more fully in Appendix 1, result in the following weight update equations:

(5) $\delta W^{\mathrm{out}}_{ab}(t) = \eta_1\,\varepsilon_a(t)\,h_b(t), \qquad \delta W_{ab}(t) = \eta_2\left[\mathbf{B}\boldsymbol{\varepsilon}(t)\right]_a p_{ab}(t), \qquad \delta W^{\mathrm{in}}_{ab}(t) = \eta_3\left[\mathbf{B}\boldsymbol{\varepsilon}(t)\right]_a q_{ab}(t),$

where $\eta_\alpha$ are learning rates, and 𝐁 is a random matrix of feedback weights. Here we have defined

(6) $p_{ab}(t) = \frac{1}{\tau}\,\phi'(u_a(t))\,h_b(t-1) + \left(1-\frac{1}{\tau}\right)p_{ab}(t-1), \qquad q_{ab}(t) = \frac{1}{\tau}\,\phi'(u_a(t))\,x_b(t-1) + \left(1-\frac{1}{\tau}\right)q_{ab}(t-1),$

which are the accumulated products of the pre- and (the derivative of the) postsynaptic activity at the recurrent and input synapses, respectively. We have also defined $u_a(t) \equiv \sum_c W_{ac} h_c(t-1) + \sum_c W^{\mathrm{in}}_{ac} x_c(t)$ as the total input current to unit $a$. While this form of the update equations does not require explicit integration and hence is more efficient for numerical simulation, it is instructive to take the continuous-time ($\tau \gg 1$) limit of Equation (5) and the integral of Equation (6), which yields

(7) $\delta W^{\mathrm{out}}_{ab}(t) = \eta_1\,\varepsilon_a(t)\,h_b(t), \qquad \delta W_{ab}(t) = \eta_2\left[\mathbf{B}\boldsymbol{\varepsilon}(t)\right]_a \int_0^t \frac{dt'}{\tau}\,e^{-t'/\tau}\,\phi'(u_a(t-t'))\,h_b(t-t'), \qquad \delta W^{\mathrm{in}}_{ab}(t) = \eta_3\left[\mathbf{B}\boldsymbol{\varepsilon}(t)\right]_a \int_0^t \frac{dt'}{\tau}\,e^{-t'/\tau}\,\phi'(u_a(t-t'))\,x_b(t-t').$

In this way, it becomes clear that the integrals in the second and third equations are eligibility traces that accumulate the correlations between pre- and post-synaptic activity over a time window of duration τ. The weight update is then proportional to this eligibility trace, multiplied by a feedback projection of the readout error. The fact that the timescale for the eligibility trace matches the RNN time constant τ reflects the fact that the RNN dynamics are typically correlated only up to this timescale, so that the error is associated only with RNN activity up to time τ in the past. If the error feedback were delayed rather than provided instantaneously, then eligibility traces with longer timescales might be beneficial (Gerstner et al., 2018).
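
To make the simplicity of the resulting rule explicit, the sketch below implements one timestep of RFLO learning, combining the dynamics of Equation (1) with the updates of Equations (5) and (6). The function name, learning rates, and the use of the current input in the trace q are illustrative assumptions rather than the published implementation:

```python
import numpy as np

def rflo_step(W, W_in, W_out, B, p, q, h, x, y_target, tau=10.0,
              eta=(0.1, 0.01, 0.01)):
    """One online RFLO update; h is the previous hidden state h(t-1), x the input x(t)."""
    u = W @ h + W_in @ x                        # total input current u(t)
    h_new = h + (-h + np.tanh(u)) / tau         # Equation (1)
    eps = y_target - W_out @ h_new              # readout error
    dphi = 1.0 - np.tanh(u) ** 2                # phi'(u) for phi = tanh
    # Eligibility traces, Equation (6): low-pass-filtered pre/post products.
    p = (1.0 - 1.0 / tau) * p + np.outer(dphi, h) / tau
    q = (1.0 - 1.0 / tau) * q + np.outer(dphi, x) / tau   # Equation (6) uses x(t-1)
    # Weight updates, Equation (5): random feedback projection of the error times the traces.
    fb = B @ eps                                # [B eps]_a
    W_out += eta[0] * np.outer(eps, h_new)
    W += eta[1] * fb[:, None] * p
    W_in += eta[2] * fb[:, None] * q
    return W, W_in, W_out, p, q, h_new
```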

Three features of the above learning rules are especially important. First, the updates are local, requiring information about the presynaptic activity and the postsynaptic input current, but no information about synaptic weights and activity levels elsewhere in the network. Second, the updates are online and can either be made at each timestep or accumulated over many timesteps and made at the end of each trial or of several trials. In either case, unlike the BPTT algorithm, it is not necessary to run the dynamics backward in time at the end of each trial to compute the weight updates. Third, the readout error is projected back to each unit in the network with weights 𝐁 that are fixed and random. An exact gradient of the loss function, on the other hand, would lead to (𝐖out)T, where ()T denotes matrix transpose, appearing in the place of 𝐁. As described above, the use of random feedback weights is inspired by a similar approach in feedforward networks (Lillicrap et al., 2016; see also Nøkland, 2016, as well as a recent implementation in feedforward spiking networks [Samadi et al., 2017]), and we shall show below that the same feedback alignment mechanism that is responsible for the success of the feedforward version is also at work in our recurrent version. (While an RNN is often described as being ‘unrolled in time’, so that it becomes a feedforward network in which each layer corresponds to one timestep, it is important to note that the unrolled version of the problem that we consider here is not identical to the feedforward case considered in Lillicrap et al. (2016) and Nøkland, 2016. In the RNN, a readout error is defined at every ‘layer’ t, whereas in the feedforward case, the error is defined only at the last layer (t=T) and is fed back to update weights in all preceding layers.)

With the above observations in mind, we refer to the above learning rule as random feedback local online (RFLO) learning. In Appendix 1, we provide a full derivation of the learning rule, and describe in detail its relation to the other gradient-based methods mentioned above, BPTT and RTRL. It should be noted that the approximations applied above to the RTRL algorithm are distinct from recent approximations made in the machine learning literature (Tallec and Ollivier, 2018; Mujika et al., 2018), where the goal was to decrease the computational cost of RTRL, rather than to increase its biological plausibility.

Because the RFLO learning rule uses an approximation of the loss function gradient rather than the exact gradient for updating the synaptic weights, a natural question to ask is whether it can be expected to decrease the loss function at all. In Appendix 2 we show that, under certain simplifying assumptions including linearization of the RNN, the loss function does indeed decrease on average with each step of RFLO learning. In particular, we show that, as in the feedforward case (Lillicrap et al., 2016), reduction of the loss function requires alignment between the learned readout weights 𝐖out and the fixed feedback weights 𝐁. We then proceed to show that this alignment tends to increase during training due to coordinated learning of the recurrent weights 𝐖 and readout weights 𝐖out. The mathematical approach for showing that alignment between readout and feedback weights occurs is similar to that used previously in the feedforward case (Lillicrap et al., 2016). In particular, the network was made fully linear in both cases in order to make mathematical headway possible, and a statistical average over inputs (in the feedforward case) or the activity vector (for the RNN) was performed. However, because a feedforward network retains no state information from one timestep to the next and because the network architectures are distinct (even if one thinks about an RNN as a feedforward network ‘unrolled in time’), the results in Appendix 2 are not simply a straightforward generalization of the feedforward case.

A number of simplifying assumptions have been made in the mathematical derivations of Appendix 2, including linear dynamics, uncorrelated neurons, and random synaptic weights, none of which will necessarily hold in a nonlinear network trained to perform a dynamical computation. Hence, although such mathematical arguments provide reason to hope that RFLO learning might be successful and insight into the mechanism by which learning occurs, it remains to be shown that RFLO learning can be used to successfully train a nonlinear RNN in practice. In the following section, therefore, we show using simulated examples that RFLO learning can perform well on a variety of tasks.

Performance of RFLO learning

In this section we illustrate the performance of the RFLO learning algorithm on a number of simulated tasks. These tasks require an RNN to produce sequences of output values and/or delayed responses to an input to the RNN, and hence are beyond the capabilities of feedforward networks. As a benchmark, we compare the performance of RFLO learning with BPTT, the standard algorithm for training RNNs. (As described in Appendix 1, the weight updates in RTRL, when performed in batches at the end of each trial, are completely equivalent to those in BPTT. Hence we compare RFLO learning only with BPTT in what follows.)

Autonomous production of continuous outputs

Figure 2 illustrates the performance of an RNN trained with RFLO learning to produce a one-dimensional periodic output given no external input. Figure 2a shows the decrease of the loss function (the mean squared error of the RNN output) as the RNN is trained over many trials, where each trial corresponds to one period consisting of T timesteps, as well as the performance of the RNN at the end of training. As a benchmark for comparison with the RFLO learning rule, BPTT was also used to train the RNN. In addition, we show in Figure 2—figure supplement 1 that a variant of RFLO learning in which all outbound synapses from a given unit were constrained to be of the same sign—a biological constraint known as Dale’s law (Dale, 1935)—also yields effective learning. (A similar result, in this case using nonlocal learning rules, was recently obtained in other modeling work [Song et al., 2016].)

Figure 2 with 3 supplements.
Periodic output task.

(a) Left panels: The mean squared output error during training for an RNN with N=30 recurrent units and no external input, trained to produce a one-dimensional periodic output with period of duration T=20τ (left) or T=160τ (right), where τ=10 is the RNN time constant. The learning rules used for training were backpropagation through time (BPTT) and random feedback local online (RFLO) learning. Solid line is median loss over nine realizations, and shaded regions show 25/75 percentiles. Right panels: The RNN output at the end of training for each type of learning (dashed lines are target outputs, offset for clarity). (b) The loss function at the end of training for target outputs having different periods. The colored lines correspond to the two learning rules from (a), while the gray line is the loss computed for an untrained RNN. (c) The normalized alignment between the vector of readout weights 𝐖out and the vector of feedback weights 𝐁 during training with RFLO learning. (d) The loss function during training with T=80τ for BPTT and RFLO, as well as versions of RFLO in which locality is enforced without random feedback (magenta) or random feedback is used without enforcing locality (cyan).

https://doi.org/10.7554/eLife.43299.003

Figure 2b shows that, when the number of timesteps in the target output is not too great, both versions of RFLO learning perform comparably well to BPTT. BPTT shows an advantage, however, when the number of timesteps becomes very large. Intuitively, this difference in performance is due to the accumulation of small errors in the estimated gradient of the loss function over many timesteps with RFLO learning. This is less of a problem for BPTT, in which the exact gradient is used.

Figure 2c shows the increase in the alignment between the vector of readout weights 𝐖out and the vector of feedback weights 𝐁 during training with RFLO learning. As in the case of feedforward networks (Lillicrap et al., 2016; Nøkland, 2016), the readout weights evolve over time to become increasingly similar to the feedback weights, which are fixed during training. In Appendix 2 we provide mathematical arguments for why this alignment occurs, showing that the alignment is not due to the change in 𝐖out alone, but rather to coordinated changes in the readout and recurrent weights.

In deriving the RFLO learning rule, two independent approximations were made: locality was enforced by dropping the nonlocal term from the loss function gradient, and feedback weights were chosen randomly rather than tuned to match the readout weights. If these approximations are instead made independently, which will have the greater effect on the performance of the RNN? Figure 2d answers this question by comparing RFLO and BPTT with two alternative learning rules: one in which the local approximation is made while symmetric error feedback is maintained, and another in which the nonlocal part of the loss function gradient is retained but the error feedback is random. The results show that the local approximation is essentially fully responsible for the performance difference between RFLO and BPTT, while there is no significant loss in performance due to the random feedback alone.

It is also worthwhile to consider the relative contributions of the two types of learning in Figure 2, namely the learning of recurrent and of readout weights. Given that the learning rule for the readout weights makes use of the exact loss function gradient while that for the recurrent weights does not, it could be that the former are fully responsible for the successful training. In Figure 2—figure supplement 2 we show that this is not the case, and that training of both recurrent and readout weights significantly outperforms training of the readout weights only (with the readout fed back as an input to the RNN for stability; see Materials and methods). Also shown is the performance of an RNN in which recurrent weights but not readout weights are trained. In this case learning is completely unsuccessful. The reason is that, in order for successful credit assignment to take place, there must be some alignment between the readout weights and feedback weights. Such alignment cannot occur, however, if the readout weights are frozen. In the case of a linearized network, the necessity of coordinated learning between the two sets of weights can be shown mathematically, as done in Appendix 2.

As with other RNN training methods, performance of the trained RNN generally improves for larger network sizes (Figure 2—figure supplement 3). While the computational cost of training the RNN increases with RNN size, leading to a tradeoff between fast training and high performance for a given number of training trials, it is worthwhile to note that the cost is much lower than that of RTRL ($N^4$ operations per timestep) and is on par with BPTT (both require $N^2$ operations per timestep, as shown in Appendix 1).

Interval matching

Figure 3 illustrates the performance of the RFLO algorithm on a ‘Ready Set Go’ task, in which the RNN is required to produce an output pulse after a time delay matching the delay between two preceding input pulses (Jazayeri and Shadlen, 2010). This task is more difficult than the production of a periodic output due to the requirement that the RNN must learn to store the information about the interpulse delay, and then produce responses at different times depending on what the delay was. Figure 3b,c illustrate the testing performance of an RNN trained with either RFLO learning or BPTT. If the RNN is trained and tested on interpulse delays satisfying $T_{\mathrm{delay}} \le 15\tau$, the performance is similarly good for the two algorithms. If the RNN is trained and tested with longer $T_{\mathrm{delay}}$, however, then BPTT performs better than RFLO learning. As in the case of the periodic output task from Figure 2, RFLO learning performs well for tasks on short and intermediate timescales, but not as well as BPTT for tasks involving longer timescales. In the following subsection, we shall address this shortcoming by constructing a network in which learned subsequence elements of short duration can be concatenated to form longer-duration sequences.
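
For concreteness, a single trial of this task can be generated along the following lines. This is a sketch with assumed names and pulse placement; the simulations here used Gaussian pulses with a standard deviation of 15 timesteps (see Materials and methods):

```python
import numpy as np

def ready_set_go_trial(T, t_first, delay, width=15.0):
    """Two input pulses separated by `delay`, and a target pulse trailing the second by `delay`."""
    t = np.arange(T)
    pulse = lambda t0: np.exp(-0.5 * ((t - t0) / width) ** 2)  # Gaussian pulse centered at t0
    x = (pulse(t_first) + pulse(t_first + delay))[:, None]     # 'ready' and 'set' input pulses
    y_target = pulse(t_first + 2 * delay)[:, None]             # 'go' target pulse
    return x, y_target
```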

Figure 3. Interval-matching task.

(a) In the task, the RNN input consists of two input pulses, with a random delay $T_{\mathrm{delay}}$ between pulses in each trial. The target output (dashed line) is a pulse trailing the second input pulse by $T_{\mathrm{delay}}$. (b) The time of the peak in the RNN output is observed after training with RFLO learning and testing in trials with various interpulse delays in the input. Red (blue) shows the case in which the RNN is trained with interpulse delays satisfying $T_{\mathrm{delay}} \le 15\tau$ ($20\tau$). (c) Same as (b), but with the RNN trained using BPTT, with interpulse delays $T_{\mathrm{delay}} \le 25\tau$ used for training and testing.

https://doi.org/10.7554/eLife.43299.007

Learning a sequence of actions

In the above examples, it was shown that, while the performance of RFLO learning is comparable to that of BPTT for tasks over short and intermediate timescales, it is less impressive for tasks involving longer timescales. From the perspective of machine learning, this represents a failure of RFLO learning. From the perspective of neuroscience, however, we can adopt a more constructive attitude. The brain, after all, suffers the same limitations that we have imposed in constructing the RFLO learning rule—namely, causality and locality—and cannot be performing BPTT for learned movements and working memory tasks over long timescales of seconds or more. So how might recurrent circuits in the brain learn to perform tasks over these long timescales? One possibility is that they use a more sophisticated learning rule than the one that we have constructed. While we cannot rule out this possibility, it is worth keeping in mind that, due to the problem of vanishing or exploding gradients, all gradient-based training methods for RNNs fail eventually at long timescales. Another possibility is that a simple, fully connected recurrent circuit in the brain, like an RNN trained with RFLO learning, can only be trained directly with supervised learning over short timescales, and that a more complex circuit architecture is necessary for longer timescales.

It has long been recognized that long-duration behaviors tend to be sequences composed of short, stereotyped actions concatenated together (Lashley, 1951). Further, a great deal of experimental work suggests that learning of this type involves training of synaptic weights from cortex to striatum (Graybiel, 1998), the input structure of the basal ganglia, which in turn modifies cortical activity via thalamus. In this section we propose a circuit architecture, largely borrowed from Logiaco et al. (2018) and inspired by the subcortical loop involving basal ganglia and thalamus, that allows an RNN to learn and perform sequences of ‘behavioral syllables’.

As illustrated in Figure 4a, the first stage of learning in this scheme involves training an RNN to produce a distinct time-dependent output in response to the activation of each of its tonic inputs. In this case, the RNN output is a two-dimensional vector giving the velocity of a cursor moving in a plane. Once the RNN has been trained in this way, the circuit is augmented with a loop structure, shown schematically in Figure 4b. At one end of the loop, the RNN activity is read out with weights 𝐖s. At the other end of the loop, this readout is used to control the input to the RNN. The weights 𝐖s can be learned such that, at the end of one behavioral syllable, the RNN input driving the next syllable in the sequence is activated by the auxiliary loop. This is done most easily by gating the RNN readout so that it can only drive changes at the end of a syllable.

Figure 4. An RNN with multiple inputs controlled by an auxiliary loop learns to produce sequences.

(a) An RNN with a two-dimensional readout controlling the velocity of a cursor is trained to move the cursor in a different direction for each of the four possible inputs. (b) The RNN is augmented with a loop structure, which allows a readout from the RNN via learned weights 𝐖s to change the state of the input to the RNN, enabling the RNN state at the end of each cursor movement to trigger the beginning of the next movement. (c) The trajectory of a cursor performing four movements and four holds, where RFLO learning was used to train the individual movements as in (a), and learning of the weights 𝐖s was used to join these movements into a sequence, as illustrated in (b). Lower traces show comparison of this trajectory with those obtained by using either RFLO or BPTT to train an RNN to perform the entire sequence without the auxiliary loop.

https://doi.org/10.7554/eLife.43299.008

In this example, each time the end of a syllable is reached, four readout units receive input $z_i = \sum_{j=1}^{N} W^s_{ij} h_j$, and a winner-take-all rule is applied such that the most active unit activates a corresponding RNN input unit, which drives the RNN to produce the next syllable. Meanwhile, the weights are updated with the reward-modulated Hebbian learning rule $\Delta W^s_{ij} = \eta_s R z_i h_j$, where $R=1$ if the syllable transition matches the target and $R=0$ otherwise. By training over many trials, the network learns to match the target sequence of syllables. Figure 4c shows the output from an RNN trained in this way to produce a sequence of reaches and holds in a two-dimensional space. Importantly, while the duration of each behavioral syllable in this example ($20\tau$) is relatively short, the full concatenated sequence is long ($160\tau$) and would be very difficult to train directly in an RNN lacking such a loop structure.
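
A minimal sketch of this transition rule, assuming a one-hot RNN input per syllable and the exploration probability given in Materials and methods (the learning rate and function name are illustrative), could look as follows:

```python
import numpy as np

def syllable_transition(Ws, h, target_next, eta_s=0.01, p_explore=0.1):
    """At the end of a syllable: winner-take-all readout selects the next RNN input,
    and Ws is updated with the reward-modulated Hebbian rule dWs_ij = eta_s * R * z_i * h_j."""
    z = Ws @ h                                      # z_i = sum_j Ws_ij h_j
    if np.random.rand() < p_explore:
        winner = np.random.randint(Ws.shape[0])     # occasional random exploration
    else:
        winner = int(np.argmax(z))                  # winner-take-all
    R = 1.0 if winner == target_next else 0.0       # reward for a correct transition
    Ws += eta_s * R * np.outer(z, h)                # reward-modulated Hebbian update
    x_next = np.zeros(Ws.shape[0])
    x_next[winner] = 1.0                            # activate the corresponding RNN input unit
    return x_next, Ws
```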

How might the loop architecture illustrated in Figure 4 be instantiated in the brain? For learned motor control, motor cortex likely plays the role of the recurrent circuit controlling movements. In addition to projections to spinal cord for controlling movement directly, motor cortex also projects to striatum, and experimental evidence has suggested that modification of these corticostriatal synapses plays an important role in the learning of action sequences (Jin and Costa, 2010). Via a loop through the basal ganglia output nucleus GPi and motor thalamus, these signals pass back to motor cortex, as illustrated schematically in Figure 4. According to the model, then, behavioral syllables are stored in motor cortex, and the role of striatum is to direct the switching from one syllable to the next. Experimental evidence for both the existence of behavioral syllables and the role played by striatum in switching between syllables on subsecond timescales has been found recently in mice (Wiltschko et al., 2015; Markowitz et al., 2018). How might the weights from motor cortex in this model be gated so that this projection is active at behavioral transitions? It is well known that dopamine, in addition to modulating plasticity at corticostriatal synapses, also modulates the gain of cortical inputs to striatum (Gerfen et al., 2011). Further, it has recently been shown that transient dopamine signals occur at the beginning of each movement in a lever-press sequence in mice (da Silva et al., 2018). Together, these experimental results support a model in which dopamine bursts enable striatum to direct switching between behavioral syllables, thereby allowing for learned behavioral sequences to occur over long timescales by enabling the RNN to control its own input. Within this framework, RFLO learning provides a biologically plausible means by which the behavioral syllables making up these sequences might be learned.

Discussion

In this work we have derived an approximation to gradient-based learning rules for RNNs, yielding a learning rule that is local, online, and does not require fine tuning of feedback weights. We have shown that RFLO learning performs comparably well to BPTT when the duration of the task being trained is not too long, but that it performs less well when the task duration becomes very long. In this case, however, we showed that training can still be effective if the RNN architecture is augmented to enable the concatenation of short-duration outputs into longer output sequences. Further exploring how this augmented architecture might map onto cortical and subcortical circuits in the brain is an interesting direction for future work. Another promising area for future work is the use of layered recurrent architectures, which occur throughout cortex and have been shown to be beneficial in complex machine learning applications spanning long timescales (Pascanu et al., 2014). Finally, machine learning tasks with discrete timesteps and discrete outputs such as text prediction benefit greatly from the use of RNNs with cross-entropy loss functions and softmax output normalization. In general, these lead to additional nonlocal terms in gradient-based learning, and in future work it would be interesting to investigate whether RFLO learning can be adapted and applied to such problems while preserving locality, or whether new ideas are necessary about how such tasks are solved in the brain.

How might RFLO learning be implemented concretely in the brain? As we have discussed above, motor cortex is an example of a recurrent circuit that can be trained to produce a particular time-dependent output. Neurons in motor cortex receive information about planned actions (𝐲*(t) in the language of the model) from premotor cortical areas, as well as information about the current state of the body (𝐲(t)) from visual and/or proprioceptive inputs, giving them the information necessary to compute a time-dependent error 𝜺(t)=𝐲*(t)-𝐲(t). Hence it is possible that neurons within motor cortex might use a projection of this error signal to learn to produce a target output trajectory. Such a computation might feature a special role for apical dendrites, as in recently developed theories for learning in feedforward cortical networks (Guerguiev et al., 2017; Sacramento et al., 2017), though further work would be needed to build a detailed theory for its implementation in recurrent cortical circuits.

A possible alternative scenario is that neuromodulators might encode error signals. In particular, midbrain dopamine neurons project to many frontal cortical areas including prefrontal cortex and motor cortex, and their input is known to be necessary for learning certain time-dependent behaviors (Hosp et al., 2011; Li et al., 2017). Further, recent experiments have shown that the signals encoded by dopamine neurons are significantly richer than the reward prediction error that has traditionally been associated with dopamine, and include phasic modulation during movements (Howe and Dombeck, 2016; da Silva et al., 2018; Coddington and Dudman, 2018). This interpretation of dopamine as a continuous online error signal used for supervised learning would be distinct from and complementary to its well known role as an encoder of reward prediction error for reinforcement learning.

In addition to the gradient-based approaches (RTRL and BPTT) already discussed above, another widely used algorithm for training RNNs is FORCE learning (Sussillo and Abbott, 2009) and its more recent variants (Laje and Buonomano, 2013; DePasquale et al., 2018). The FORCE algorithm, unlike gradient-based approaches, makes use of chaotic fluctuations in RNN activity driven by strong recurrent input. These chaotic fluctuations, which are not necessary in gradient-based approaches, provide a temporally rich set of basis functions that can be summed together with trained readout weights in order to construct a desired time-dependent output. As with gradient-based approaches, however, FORCE learning is nonlocal, in this case because the update to any given readout weight depends not just on the presynaptic activity, but also on the activities of all other units in the network. Although FORCE learning is biologically implausible due to the nonlocality of the learning rule, it is, like RFLO learning, implemented online and does not require finely tuned feedback weights for the readout error. It is an open question whether approximations to the FORCE algorithm might exist that would obviate the need for nonlocal learning while maintaining sufficiently good performance.

In addition to RFLO learning, a number of other local and causal learning rules for training RNNs have been proposed. The oldest of these algorithms (Mazzoni et al., 1991; Williams, 1992) operate within the framework of reinforcement learning rather than supervised learning, meaning that only a scalar—and possibly temporally delayed—reward signal is available for training the RNN, rather than the full target function y*(t). Typical of such algorithms, which are often known as ‘node perturbation’ algorithms, is the REINFORCE learning rule (Williams, 1992), which in our notation gives the following weight update at the end of each trial:

(8) $\Delta W_{ab} = \frac{\eta}{T}\left(R-\bar{R}\right)\sum_{t=1}^{T}\xi_a(t)\,h_b(t),$

where $R$ is the scalar reward signal (which might be defined as the negative of the loss function that we have used in RFLO learning), $\bar{R}$ is the average reward over recent trials, and $\xi_a(t)$ is noise current injected into unit $a$ during training. This learning rule means, for example, that (assuming the presynaptic unit $b$ is active) if the postsynaptic unit $a$ is more active than usual in a given trial (i.e. $\xi_a(t)$ is positive) and the reward is greater than expected, then the synaptic weight $W_{ab}$ should be increased so that this postsynaptic unit will be more active in future trials. A slightly more elaborate version of this learning rule replaces the summand in Equation (8) with a low-pass filtered version of this same quantity, leading to eligibility traces of similar form to those appearing in Equation (7). This learning rule has also been adapted for a network of spiking neurons (Fiete et al., 2006).
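
Because Equation (8) is applied only once per trial, it is simple to state in code. The sketch below assumes that the noise and activity have been stored over the trial as arrays (names and shapes are our assumptions):

```python
import numpy as np

def reinforce_update(W, xi, h, R, R_bar, eta=0.001):
    """End-of-trial node-perturbation update, Equation (8).

    xi[t, a] is the noise injected into unit a at time t; h[t, b] is the activity of unit b.
    """
    T = xi.shape[0]
    return W + (eta / T) * (R - R_bar) * (xi.T @ h)
```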

A potential shortcoming of the REINFORCE learning rule is that it depends on the postsynaptic noise current rather than on the total postsynaptic input current (i.e. the noise current plus the input current from presynaptic units). Because it is arguably implausible that a neuron could keep track of these sources of input current separately, a recently proposed version (Miconi, 2017) replaces $\xi_a(t) \to f(u_a(t)-\bar{u}_a(t))$, where $f(\cdot)$ is a supralinear function, $u_a(t)$ is the total input current (including noise) to unit $a$, and $\bar{u}_a(t)$ is the low-pass-filtered input current. This substitution is logical since the quantity $u_a(t)-\bar{u}_a(t)$ tracks the fast fluctuations of each unit, which are mainly due to the rapidly fluctuating input noise rather than to the more slowly varying recurrent and feedforward inputs.

A severe limitation of reinforcement learning as formulated in Equation (8) is the sparsity of reward information, which comes in the form of a single scalar value at the end of each trial. Clearly this provides the RNN with much less information to learn from than a vector of errors $\boldsymbol{\varepsilon}(t) = \mathbf{y}^*(t)-\mathbf{y}(t)$ at every timestep, which is assumed to be available in supervised learning. As one would expect from this observation, reinforcement learning is typically much slower than supervised learning in RNNs, as in feedforward neural networks. A hybrid approach is to assume that reward information is scalar, as in reinforcement learning, but available at every timestep, as in supervised learning. This might correspond to setting $R(t) \equiv -|\boldsymbol{\varepsilon}(t)|^2$ and including this reward in a learning rule such as the REINFORCE rule in Equation (8). To our knowledge this has not been done for training recurrent weights in an RNN, though a similar idea has recently been used for training the readout weights of an RNN (Legenstein et al., 2010; Hoerzer et al., 2014). Ultimately, whether recurrent neural circuits in the brain use reinforcement learning or supervised learning is likely to depend on the task being learned and what feedback information about performance is available. For example, in a reach-to-target task such as the one modeled in Figure 4, it is plausible that a human or nonhuman primate might have a mental template of an ideal reach, and might make corrections to make the hand match the target trajectory at each timepoint in the trial. On the other hand, if only delayed binary feedback is provided in an interval-matching task such as the one modeled in Figure 3, neural circuits in the brain might be more likely to use reinforcement learning.

More recently, local, online algorithms for supervised learning in RNNs with spiking neurons have been proposed. Gilra and Gerstner (2017) and Alemi et al. (2017) have trained spiking RNNs to produce particular dynamical trajectories of RNN readouts. These works constitute a large step toward greater biological plausibility, particularly in their use of local learning rules and spiking neurons. Here we describe the most important differences between those works and RFLO learning. In both Gilra and Gerstner (2017) and Alemi et al. (2017), the RNN is driven by an input 𝐱(t) as well as the error signal 𝜺(t)=𝐲*(t)-𝐲(t), where the target output is related to the input 𝐱(t) according to

(9) $\dot{y}^*_i = f_i(\mathbf{y}^*) + g_i(\mathbf{x}),$

where gi(𝐱)=xi(t) in Alemi et al. (2017), but is arbitrary in Gilra and Gerstner (2017). In either case, however, it is not possible to learn arbitrary, time-dependent mappings between inputs and outputs in these networks, since the RNN output must take the form of a dynamical system driven by the RNN input. This is especially limiting if one desires that the RNN dynamics should be autonomous, so that 𝐱(t)=0 in Equation (9). It is not obvious, for example, what dynamical equations having the form of (9) would provide a solution to the interval-matching task studied in Figure 3. Of course, it is always possible to obtain an arbitrarily complex readout by making 𝐱(t) sufficiently large such that 𝐲(t) simply follows 𝐱(t) from Equation (9). However, since 𝐱(t) is provided as input, the RNN essentially becomes an autoencoder in this limit.

Two other features of Gilra and Gerstner (2017) and Alemi et al. (2017) differ from RFLO learning. First, the readout weights and the error feedback weights are related to one another in a highly specific way, being either symmetric with one another (Alemi et al., 2017), or else configured such that the loop from the RNN to the readout and back to the RNN via the error feedback pathway forms an autoencoder (Gilra and Gerstner, 2017). In either case these weights are preset to these values before training of the RNN begins, unlike the randomly set feedback weights used in RFLO learning. Second, both approaches require that the error signal 𝜺(t) be fed back to the network with (at least initially) sufficiently large gain such that the RNN dynamics are essentially slaved to produce the target readout 𝐲*(t), so that one has 𝐲(t)𝐲*(t) immediately from the beginning of training. (This follows as a consequence of the relation between the readout and feedback weights described above.) With RFLO learning, in contrast, forcing the output to always follow the target in this way is not necessary, and learning can work even if the RNN dynamics early in learning do not resemble the dynamics of the ultimate solution.

In summary, the random feedback learning rule that we propose offers a potential advantage over previous biologically plausible learning rules by making use of the full time-dependent, possibly multidimensional error signal, and also by training all weights in the network, including input, output, and recurrent weights. In addition, it does not require any special relation between the RNN inputs and outputs, nor any special relationship between the readout and feedback weights, nor a mechanism that restricts the RNN dynamics to always match the target from the start of training. Especially when extended to allow for sequence learning such as depicted in Figure 4, RFLO learning provides a plausible mechanism by which supervised learning might be implemented in recurrent circuits in the brain.

Materials and methods

Source code

Request a detailed protocol

A Python notebook implementing a simple, self-contained example of RFLO learning has been included as Source code 1 to accompany this publication. The example trains an RNN on the periodic output task from Figure 2 using RFLO learning, as well as using BPTT and RTRL for comparison.

Simulation details

Request a detailed protocol

In all simulations, the RNN time constant was τ=10. Learning rates were selected by grid search over $\eta_{1,2,3} = \eta \in \{10^{-4}, 3\times 10^{-4}, 10^{-3}, \ldots, 3\times 10^{-1}\}$. Input and readout weights were initialized randomly and uniformly over $[-1,1]$ and $[-1/N,1/N]$, respectively. Recurrent weights were initialized randomly as $W_{ij} \sim \mathcal{N}(0, g^2/N)$, where $g=1.5$ and $\mathcal{N}(0,\sigma^2)$ is the normal distribution with zero mean and variance $\sigma^2$. The fixed feedback weights were chosen randomly as $B_{ij} \sim \mathcal{N}(0,1)$. The nonlinear activation function of the RNN units was $\phi(\cdot)=\tanh(\cdot)$.
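
These choices translate directly into an initialization routine; the following sketch (with assumed function and argument names, including the random seed) reproduces the weight statistics just described:

```python
import numpy as np

def init_weights(N, N_in, N_out, g=1.5, seed=0):
    """Weight initialization described in Materials and methods."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-1.0, 1.0, size=(N, N_in))               # input weights, uniform on [-1, 1]
    W_out = rng.uniform(-1.0 / N, 1.0 / N, size=(N_out, N))     # readout weights
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))            # recurrent weights ~ N(0, g^2/N)
    B = rng.normal(0.0, 1.0, size=(N, N_out))                   # fixed random feedback weights
    return W, W_in, W_out, B
```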

In Figure 2, the RNN size was N=30. For task durations of T=(200,400,800,1600) timesteps, the optimal learning rates after grid search were η=(0.03,0.01,0.001,0.0003) for RFLO and (0.03,0.03,0.01,0.03) for BPTT. The target output waveform was y*(t)=sin(2πt/T)+0.5sin(4πt/T)+0.25sin(8πt/T). The shaded regions in panels a, b, and d are 25/75 percentiles of performance computed over nine randomly initialized networks, and the solid curves show the median performance.

In the version of the periodic output task that enforces Dale's law with sign-constrained synapses (Figure 2—figure supplement 1), half of the RNN units were assigned to be excitatory and half inhibitory. Recurrent weights were initialized as above, with the additional step $W_{ij} \to \xi_j |W_{ij}|$, where $\xi_j=\pm 1$ for excitatory or inhibitory units. During learning in this network, recurrent weights were updated normally but clipped at zero to prevent them from changing sign.

In the version of the periodic output task in which only readout weights were trained (Figure 2—figure supplement 2), the readout was fed back into the RNN as a separate input current to the recurrent units via the random feedback weights 𝐁. This is necessary to stabilize the RNN dynamics in the absence of learning of the recurrent weights, as they would be either chaotic (for large recurrent weights) or quickly decaying (for small recurrent weights) in the absence of such stabilization. The RNN was initialized as described above, and the learning rate for the readout weights was η=0.03, determined by grid search.

In Figure 3, the RNN size was N=100. The input and target output pulses were Gaussian with a standard deviation of 15 timesteps. The RNNs were trained for 5000 trials. With BPTT, the learning rate was $\eta_{1,2,3}=0.003$, while with RFLO learning it was 0.001. Rather than performing weight updates in every trial, the updates were continuously accumulated but only implemented after batches of 10 trials.

In Figure 4, networks of size N=100 were used. In the version with the loop architecture, RFLO learning was first used to train the network to produce a particular reach trajectory in response to each of four tonic inputs for 10,000 trials, with a random input chosen in each trial, subject to the constraint that the trajectory could not move the cursor out of bounds. Next, the RNN weights were held fixed and the weights $\mathbf{W}^s$ were learned for 10,000 additional trials while the RNN controlled its own input via the auxiliary loop. The active unit in ‘striatum’ was chosen randomly with probability $p_{\mathrm{explore}}=0.1$ and was otherwise chosen deterministically based on the RNN input via the weights $\mathbf{W}^s$, again subject to the constraint that the trajectory could not move the cursor out of bounds. In the comparison shown in subpanel (c), RNNs without the loop architecture were trained for 20,000 trials with either RFLO learning or BPTT to autonomously produce the entire sequence of $160\tau$ timesteps.

Appendix 1

Gradient-based RNN learning and RFLO learning

In the first subsection of this appendix, we begin by reviewing the derivation of RTRL, the classic gradient-based learning rule. We show that the update equation for the recurrent weights under the RTRL rule has two undesirable features from a biological point of view. First, the learning rule is nonlocal, with the update to weight Wij depending on all of the other weights in the RNN, rather than just on information that is locally available to that particular synapse. Second, the RTRL learning rule requires that the error in the RNN readout be fed back into the RNN with weights that are precisely symmetric with the readout weights. In the second subsection, we implement approximations to the RTRL gradient in order to overcome these undesirable features, leading to the RFLO learning rules.

In the third subsection of this appendix, we review the derivation of BPTT, the most widely used algorithm for training RNNs. Because it is the standard gradient-based learning rule for RNN training, BPTT is the learning rule against which we compare RFLO learning in the main text. Finally, in the last subsection of this appendix we demonstrate the equivalence of RTRL and BPTT. Although this is not strictly necessary for any of the results given in the main text, we expect that readers interested in gradient-based learning rules for training RNNs will appreciate this correspondence, which to our knowledge has not been very clearly explicated in the literature.

Real-time recurrent learning

In this section we review the derivation of the real-time recurrent learning (RTRL) algorithm (Williams and Zipser, 1989) for an RNN such as the one shown in Figure 1. This rule is obtained by taking a gradient of the mean-squared output error of the RNN with respect to the synaptic weights, and, as we will show later in this appendix, is equivalent (when implemented in batches rather than online) to the more widely used backpropagation through time (BPTT) algorithm.

The standard RTRL algorithm is obtained by calculating the gradient of the loss function Equation (2) with respect to the RNN weights, and then using gradient descent to find the weights that minimize the loss function (Goodfellow et al., 2016). Specifically, for each run of the network, one can calculate L/Wab and then update the weights by an amount proportional to this gradient: ΔWab=-ηL/Wab, where η determines the learning rate. This can be done similarly for the input and output weights, Wabin and Wabout, respectively. This results in the following update equations:

(10) $\Delta W^{\mathrm{out}}_{ab} = \frac{\eta_1}{T}\sum_{t=1}^{T}\varepsilon_a(t)\,h_b(t), \qquad \Delta W_{ab} = \frac{\eta_2}{T}\sum_{t=1}^{T}\sum_{j=1}^{N}\left[(\mathbf{W}^{\mathrm{out}})^{\mathrm{T}}\boldsymbol{\varepsilon}(t)\right]_j \frac{\partial h_j(t)}{\partial W_{ab}}, \qquad \Delta W^{\mathrm{in}}_{ab} = \frac{\eta_3}{T}\sum_{t=1}^{T}\sum_{j=1}^{N}\left[(\mathbf{W}^{\mathrm{out}})^{\mathrm{T}}\boldsymbol{\varepsilon}(t)\right]_j \frac{\partial h_j(t)}{\partial W^{\mathrm{in}}_{ab}}.$

In these equations, ()T denotes matrix transpose, and the gradients of the hidden layer activities with respect to the recurrent and input weights are given by

(11) $P^j_{ab}(t) = \left(1-\frac{1}{\tau}\right)P^j_{ab}(t-1) + \frac{1}{\tau}\sum_k \phi'(u_j(t))\,W_{jk}\,P^k_{ab}(t-1) + \frac{1}{\tau}\,\delta_{ja}\,\phi'(u_a(t))\,h_b(t-1),$
$\quad\;\, Q^j_{ab}(t) = \left(1-\frac{1}{\tau}\right)Q^j_{ab}(t-1) + \frac{1}{\tau}\sum_k \phi'(u_j(t))\,W_{jk}\,Q^k_{ab}(t-1) + \frac{1}{\tau}\,\delta_{ja}\,\phi'(u_a(t))\,x_b(t-1),$

where we have defined

(12) $P^j_{ab}(t) \equiv \frac{\partial h_j(t)}{\partial W_{ab}}, \qquad Q^j_{ab}(t) \equiv \frac{\partial h_j(t)}{\partial W^{\mathrm{in}}_{ab}},$

and 𝐮(t) is the total input to each recurrent unit at time t:

(13) $u_i(t) = \sum_{j=1}^{N} W_{ij}\,h_j(t-1) + \sum_{j=1}^{N_x} W^{\mathrm{in}}_{ij}\,x_j(t).$

The recursions in Equation (11) terminate with

(14) $\frac{\partial h_j(0)}{\partial W_{ab}} = 0, \qquad \frac{\partial h_j(0)}{\partial W^{\mathrm{in}}_{ab}} = 0.$

As many others have recognized previously, the synaptic weight updates given in the second and third lines of Equation (10) are not biologically realistic for a number of reasons. First, the error is projected back into the network with the particular weight matrix (Wout)T, so that the feedback and readout weights must be related to one another in a highly specific way. Second, the terms involving 𝐖 in Equation (11) mean that information about the entire network is required to update any given synaptic weight, making the rules nonlocal. In contrast, a biologically plausible learning rule for updating a weight Wab or Wabin ought to depend only on the activity levels of the pre- and post-synaptic units a and b, in addition to the error signal that is fed back into the network. Both of these shortcomings will be addressed in the following subsection.

Random feedback local online learning

In order to obtain a biologically plausible learning rule, we can attempt to relax some of the requirements in the RTRL learning rule and see whether the RNN is still able to learn effectively. Inspired by a recently used approach in feedforward networks (Lillicrap et al., 2016), we do this by replacing the (Wout)T appearing in the second and third lines of Equation (10) with a fixed random matrix 𝐁, so that the feedback projection of the output error no longer needs to be tuned to match the other weights in the network in a precise way. Second, we simply drop the terms involving 𝐖 in Equation (11), so that nonlocal information about all recurrent weights in the network is no longer required to update a particular synaptic weight. In this case we can rewrite the approximate weight-update equations as

(15) $\Delta W^{\mathrm{out}}_{ab} = \frac{1}{T}\sum_{t=1}^{T}\delta W^{\mathrm{out}}_{ab}(t), \qquad \Delta W_{ab} = \frac{1}{T}\sum_{t=1}^{T}\delta W_{ab}(t), \qquad \Delta W^{\mathrm{in}}_{ab} = \frac{1}{T}\sum_{t=1}^{T}\delta W^{\mathrm{in}}_{ab}(t),$

where

(16) $\delta W^{\mathrm{out}}_{ab}(t) = \eta_1\,\varepsilon_a(t)\,h_b(t), \qquad \delta W_{ab}(t) = \eta_2\left[\mathbf{B}\boldsymbol{\varepsilon}(t)\right]_a p_{ab}(t), \qquad \delta W^{\mathrm{in}}_{ab}(t) = \eta_3\left[\mathbf{B}\boldsymbol{\varepsilon}(t)\right]_a q_{ab}(t).$

Here we have defined rank-2 versions of the eligibility trace tensors from (12):

(17) $p_{ab}(t) = \frac{1}{\tau}\,\phi'(u_a(t-1))\,h_b(t-1) + \left(1-\frac{1}{\tau}\right)p_{ab}(t-1), \qquad q_{ab}(t) = \frac{1}{\tau}\,\phi'(u_a(t-1))\,x_b(t-1) + \left(1-\frac{1}{\tau}\right)q_{ab}(t-1).$

As desired, the update rules in Equation (15) are local, depending only on the pre- and post-synaptic activity, together with a random feedback projection of the error signal. In addition, because all of the quantities appearing in Equation (15) are computed in real time as the RNN is run, the weight updates can be performed online, in contrast to BPTT, for which the dynamics over all timesteps must be run first forward and then backward before making any weight updates. Hence, we refer to the learning rule given by Equations (15)–(17) as random feedback local online (RFLO) learning.

Backpropagation through time

Because it is the standard algorithm used for training RNNs, in this section we review the derivation of the learning rules for backpropagation through time (BPTT) (Rumelhart et al., 1985) in order to compare it with the learning rules presented above. The derivation here follows Lecun (1988).

Consider the following Lagrangian function:

(18) $\mathcal{L}[\mathbf{h},\mathbf{z},\mathbf{W},\mathbf{W}^{\mathrm{in}},\mathbf{W}^{\mathrm{out}},t] = \sum_i z_i(t)\left\{h_i(t) - h_i(t-1) + \frac{1}{\tau}\left[h_i(t-1) - \phi\!\left(\left[\mathbf{W}\mathbf{h}(t-1) + \mathbf{W}^{\mathrm{in}}\mathbf{x}(t)\right]_i\right)\right]\right\}$
$\qquad + \frac{1}{2}\sum_i\left(y^*_i(t) - \left[\mathbf{W}^{\mathrm{out}}\mathbf{h}(t)\right]_i\right)^2.$

The second line is the cost function that is to be minimized, while the first line uses the Lagrange multiplier 𝐳(t) to enforce the constraint that the dynamics of the RNN should follow Equation (1). From Equation (18) we can also define the following action:

(19) $S[\mathbf{h},\mathbf{z},\mathbf{W},\mathbf{W}^{\mathrm{in}},\mathbf{W}^{\mathrm{out}}] = \frac{1}{T}\sum_{t=1}^{T}\mathcal{L}[\mathbf{h},\mathbf{z},\mathbf{W},\mathbf{W}^{\mathrm{in}},\mathbf{W}^{\mathrm{out}},t].$

We now proceed by minimizing Equation (19) with respect to each of its arguments. First, taking $\partial S/\partial z_i(t)$ just gives the dynamical Equation (1). Next, we set $\partial S/\partial h_i(t)=0$, which yields

(20) $z_i(t) = \left(1-\frac{1}{\tau}\right)z_i(t+1) + \frac{1}{\tau}\sum_j z_j(t+1)\,\phi'\!\left(\left[\mathbf{W}\mathbf{h}(t) + \mathbf{W}^{\mathrm{in}}\mathbf{x}(t+1)\right]_j\right)W_{ji} + \left[(\mathbf{W}^{\mathrm{out}})^{\mathrm{T}}\boldsymbol{\varepsilon}(t)\right]_i,$

which applies at timesteps $t=1,\ldots,T-1$. To obtain the value at the final timestep, we take $\partial S/\partial h_i(T)$, which leads to

(21) $z_i(T) = \left[(\mathbf{W}^{\mathrm{out}})^{\mathrm{T}}\boldsymbol{\varepsilon}(T)\right]_i.$

Finally, taking the derivative with respect to the weights leads to the following:

(22) $\frac{\partial S}{\partial W_{ij}} = -\frac{1}{T\tau}\sum_{t=1}^{T} z_i(t)\,\phi'\!\left(\left[\mathbf{W}\mathbf{h}(t-1)+\mathbf{W}^{\mathrm{in}}\mathbf{x}(t)\right]_i\right)h_j(t-1),$
$\quad\;\, \frac{\partial S}{\partial W^{\mathrm{in}}_{ij}} = -\frac{1}{T\tau}\sum_{t=1}^{T} z_i(t)\,\phi'\!\left(\left[\mathbf{W}\mathbf{h}(t-1)+\mathbf{W}^{\mathrm{in}}\mathbf{x}(t)\right]_i\right)x_j(t),$
$\quad\;\, \frac{\partial S}{\partial W^{\mathrm{out}}_{ij}} = -\frac{1}{T}\sum_{t=1}^{T}\varepsilon_i(t)\,h_j(t).$

Rather than setting these derivatives equal to zero, which may lead to an undesired solution that corresponds to a maximum or saddle point of the action and would in any case be intractable, we use the gradients in Equation (22) to perform gradient descent, reducing the error in an iterative fashion:

(23) $\Delta W_{ij} = \frac{\eta_2}{T\tau}\sum_{t=1}^{T} z_i(t)\,\phi'\!\left(\left[\mathbf{W}\mathbf{h}(t-1)+\mathbf{W}^{\mathrm{in}}\mathbf{x}(t)\right]_i\right)h_j(t-1),$
$\quad\;\, \Delta W^{\mathrm{in}}_{ij} = \frac{\eta_3}{T\tau}\sum_{t=1}^{T} z_i(t)\,\phi'\!\left(\left[\mathbf{W}\mathbf{h}(t-1)+\mathbf{W}^{\mathrm{in}}\mathbf{x}(t)\right]_i\right)x_j(t),$
$\quad\;\, \Delta W^{\mathrm{out}}_{ij} = \frac{\eta_1}{T}\sum_{t=1}^{T}\varepsilon_i(t)\,h_j(t),$

where $\eta_i$ are learning rates.

The BPTT algorithm then proceeds in three steps. First, the dynamical Equation (1) for 𝐡(t) is integrated forward in time, beginning with the initial condition 𝐡(0). Second, the auxiliary variable 𝐳(t) is integrated backward in time using Equation (20), with the 𝐡(t) saved from the forward pass and the boundary condition 𝐳(T) from Equation (21). Third, the weights are updated according to Equation (23), using 𝐡(t) and 𝐳(t) saved from the preceding two steps.
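
A sketch of the backward pass and the resulting updates, assuming the hidden states h(0),…,h(T) and outputs y(1),…,y(T) have been saved from a forward pass of Equation (1) (the function and variable names are ours), might look as follows:

```python
import numpy as np

def bptt_updates(W, W_in, W_out, x, y_target, h, y, tau=10.0):
    """Integrate z(t) backward in time (Equations 20-21) and return the sums of
    Equation (23), to be scaled by the learning rates. h has shape (T+1, N)."""
    T = x.shape[0]
    dW, dW_in = np.zeros_like(W), np.zeros_like(W_in)
    dW_out = np.zeros_like(W_out)
    eps = y_target - y
    z = W_out.T @ eps[T - 1]                        # boundary condition z(T), Equation (21)
    for t in range(T - 1, -1, -1):                  # loop index t stands for timestep t+1
        u = W @ h[t] + W_in @ x[t]                  # input current at this timestep
        dphi = 1.0 - np.tanh(u) ** 2                # phi'(u) for phi = tanh
        dW += np.outer(z * dphi, h[t]) / tau
        dW_in += np.outer(z * dphi, x[t]) / tau
        dW_out += np.outer(eps[t], h[t + 1])
        if t > 0:                                   # propagate z one step backward, Equation (20)
            z = (1.0 - 1.0 / tau) * z + (W.T @ (z * dphi)) / tau + W_out.T @ eps[t - 1]
    return dW / T, dW_in / T, dW_out / T
```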

Note that no approximations have been made in computing the gradients using either the RTRL or BPTT procedures. In fact, as we will show in the following section, the two algorithms are completely equivalent, at least in the case where RFLO weight updates are performed only at the end of each trial rather than at every timestep.

A unified view of gradient-based learning in recurrent networks

As pointed out previously (Beaufays and Wan, 1994; Srinivasan et al., 1994), the difference between RTRL and BPTT can ultimately be traced to distinct methods of bookkeeping in applying the chain rule to the gradient of the loss function. (Thanks to A. Litwin-Kumar for discussion about this correspondence). In order to make this explicit, we begin by noting that, when taking implicit dependences into account, the loss function defined in Equation (2) has the form

(24) $L = L\!\left(\mathbf{h}_0,\ldots,\mathbf{h}_t\!\left(\mathbf{W},\mathbf{h}_{t-1}(\mathbf{W},\mathbf{h}_{t-2}(\cdots))\right),\ldots\right).$

In this section, we write $\mathbf{h}_t \equiv \mathbf{h}(t)$ for notational convenience, and consider only updates to the recurrent weights 𝐖, ignoring the input 𝐱(t) to the RNN. In any gradient-based learning scheme, the weight update $\Delta W_{ab}$ should be proportional to the gradient of the loss function, which has the form

(25) $\frac{\partial L}{\partial W_{ab}} = \sum_t \frac{\partial L}{\partial\mathbf{h}_t}\cdot\frac{\partial\mathbf{h}_t}{\partial W_{ab}}.$

The difference between RTRL and BPTT arises from the two possible ways of keeping track of the implicit dependencies from Equation (24), which give rise to the following equivalent formulations of Equation (25):

(26) \frac{\partial L}{\partial W_{ab}} = \begin{cases} \displaystyle\sum_t \frac{\partial L(\ldots,\mathbf{h}^t,\ldots)}{\partial \mathbf{h}^t}\,\frac{\partial \mathbf{h}^t(\mathbf{W},\mathbf{h}^{t-1}(\mathbf{W},\mathbf{h}^{t-2}(\cdots)))}{\partial W_{ab}}, & \text{RTRL} \\[2ex] \displaystyle\sum_t \frac{\partial L(\ldots,\mathbf{h}^t(\mathbf{W},\mathbf{h}^{t-1}(\cdots)),\mathbf{h}^{t+1}(\mathbf{W},\mathbf{h}^t(\cdots)),\ldots)}{\partial \mathbf{h}^t}\,\frac{\partial \mathbf{h}^t(\mathbf{W},\mathbf{h}^{t-1})}{\partial W_{ab}}. & \text{BPTT} \end{cases}

In RTRL, the first derivative is simple to compute because the loss function is treated as an explicit function of the variables 𝐡ᵗ. The dependence of 𝐡ᵗ on 𝐖 and on the earlier activities 𝐡ᵗ′ (where t′ < t) is then taken into account in the second derivative, which must be computed recursively due to the nested dependence on 𝐖. In BPTT, on the other hand, the implicit dependencies are dealt with in the first derivative, which in this case must be computed recursively because all terms at times t′ > t depend implicitly on 𝐡ᵗ. The second derivative then becomes simple since these dependencies are no longer present.
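As a minimal illustration of the two bookkeeping schemes, consider a trial with only two timesteps, writing 𝐏_ab(t) for the vector with components P^i_ab(t) defined in Equation (27) below and treating the products as contractions over the hidden-unit index:

```latex
% RTRL grouping: all dependence on W up to time t is collected into P_ab(t).
\frac{dL}{dW_{ab}}
  = \underbrace{\frac{\partial L}{\partial \mathbf{h}(1)}}_{\text{explicit}}
    \underbrace{\frac{\partial \mathbf{h}(1)}{\partial W_{ab}}}_{\mathbf{P}_{ab}(1)}
  + \underbrace{\frac{\partial L}{\partial \mathbf{h}(2)}}_{\text{explicit}}
    \underbrace{\left[\frac{\partial \mathbf{h}(2)}{\partial W_{ab}}
      + \frac{\partial \mathbf{h}(2)}{\partial \mathbf{h}(1)}
        \frac{\partial \mathbf{h}(1)}{\partial W_{ab}}\right]}_{\mathbf{P}_{ab}(2)}
% BPTT grouping: all downstream dependence on h(t) is collected into z(t).
  = \underbrace{\left[\frac{\partial L}{\partial \mathbf{h}(1)}
      + \frac{\partial L}{\partial \mathbf{h}(2)}
        \frac{\partial \mathbf{h}(2)}{\partial \mathbf{h}(1)}\right]}_{-\,\mathbf{z}(1)}
    \underbrace{\frac{\partial \mathbf{h}(1)}{\partial W_{ab}}}_{\text{explicit}}
  + \underbrace{\frac{\partial L}{\partial \mathbf{h}(2)}}_{-\,\mathbf{z}(2)}
    \underbrace{\frac{\partial \mathbf{h}(2)}{\partial W_{ab}}}_{\text{explicit}}
```

Expanding either grouping gives the same three terms; the two algorithms differ only in which factor is accumulated recursively.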

Let us define the following:

(27) P^i_{ab}(t) \equiv \frac{\partial h^t_i(\mathbf{W},\mathbf{h}^{t-1}(\mathbf{W},\mathbf{h}^{t-2}(\cdots)))}{\partial W_{ab}}, \qquad
z_i(t) \equiv -\frac{\partial L(\ldots,\mathbf{h}^t(\mathbf{W},\mathbf{h}^{t-1}(\mathbf{W},\mathbf{h}^{t-2}(\cdots))),\ldots)}{\partial h^t_i}.

Then, using the definition of L from Equation (2) and the dynamical Equation (1) for 𝐡t to take the other derivatives appearing in Equation (26), we have

(28) \frac{\partial L}{\partial W_{ab}} = \begin{cases} \displaystyle -\frac{1}{T}\sum_t \sum_i \big[(\mathbf{W}^{\rm out})^{\rm T}\boldsymbol{\varepsilon}(t)\big]_i\, P^i_{ab}(t), & \text{RTRL} \\[2ex] \displaystyle -\frac{1}{\tau T}\sum_t z_a(t)\,\phi'(u_a(t))\, h^{t-1}_b. & \text{BPTT} \end{cases}

The recursion relations follow from application of the chain rule in the definitions from Equation (27):

(29) P^i_{ab}(t) = \Big(1-\frac{1}{\tau}\Big) P^i_{ab}(t-1) + \frac{1}{\tau}\sum_j \phi'(u_i(t))\, W_{ij}\, P^j_{ab}(t-1) + \frac{1}{\tau}\,\delta_{ia}\,\phi'(u_a(t))\, h_b(t-1),
z_i(t) = \Big(1-\frac{1}{\tau}\Big) z_i(t+1) + \frac{1}{\tau}\sum_j \phi'(u_j(t+1))\, W_{ji}\, z_j(t+1) + \sum_j W^{\rm out}_{ji}\,\varepsilon_j(t).

These recursion relations are identical to those appearing in Equation (11) and Equation (20). Notably, the first is computed forward in time, while the second is computed backward in time. Because no approximations have been made in computing the gradient in either line of Equation (28), the two methods are equivalent, at least if RTRL weight updates are made only at the end of each trial, rather than online. For this reason, only one of the algorithms (BPTT) was compared against RFLO learning in the main text.

As discussed in previous sections, RTRL has the advantages of obeying causality and of allowing the weights to be updated continuously. It has the disadvantage, however, of being nonlocal, and it also carries a greater computational cost, since a rank-3 tensor P^i_ab(t), rather than a vector z_i(t), must be updated at each timestep. By dropping the second term in the first line of Equation (29), RFLO learning eliminates both of these undesirable features, so that the resulting algorithm is causal, online, and local, with a computational complexity (O(N²) per timestep, vs. O(N⁴) for RTRL) on par with BPTT.
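This difference, and the approximation that turns RTRL into RFLO, can be made concrete in a single-timestep numpy sketch (an illustration only, with our own variable names rather than those of the code released with this article, and assuming φ = tanh):

```python
import numpy as np

def recurrent_update_step(W, W_in, W_out, B, h, x, y_star, P, p, tau=10.0, eta=1e-3):
    """One timestep of the recurrent-weight update, written two ways.

    P : full RTRL sensitivity tensor P^i_ab, shape (N, N, N) -- O(N^4) cost per step
    p : RFLO eligibility trace p_ab,         shape (N, N)    -- O(N^2) cost per step
    """
    N = len(h)
    u = W @ h + W_in @ x                          # net input, as in Equation (1)
    h_new = (1 - 1/tau) * h + (1/tau) * np.tanh(u)
    eps = y_star - W_out @ h_new                  # output error
    dphi = 1 - np.tanh(u)**2                      # phi'(u(t))

    # RTRL: exact sensitivity recursion, first line of Equation (29).
    P_new = (1 - 1/tau) * P \
        + (1/tau) * np.einsum('i,ij,jab->iab', dphi, W, P) \
        + (1/tau) * dphi[:, None, None] * np.eye(N)[:, :, None] * h[None, None, :]
    dW_rtrl = eta * np.einsum('i,iab->ab', W_out.T @ eps, P_new)

    # RFLO: drop the nonlocal (einsum) term, so that P^i_ab -> delta_ia * p_ab,
    # and feed the error back through the fixed random matrix B instead of (W_out)^T.
    p_new = (1 - 1/tau) * p + (1/tau) * np.outer(dphi, h)
    dW_rflo = eta * (B @ eps)[:, None] * p_new

    return h_new, P_new, p_new, dW_rtrl, dW_rflo
```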

https://doi.org/10.7554/eLife.43299.011

Appendix 2

Analysis of the RFLO learning rule

Given that the learning rules in Equation (7) do not move the weights directly along the steepest path that would minimize the loss function (as would the learning rules in Equation (10)), it is worthwhile to ask whether it can be shown that these learning rules in general decrease the loss function at all. To answer this question, we consider the change in weights after one trial lasting T timesteps, working in the continuous-time limit for convenience, and performing weight updates only at the end of the trial:

(30) \Delta\mathbf{W} = \frac{1}{T}\int_0^T dt\, \delta\mathbf{W}(t), \qquad \Delta\mathbf{W}^{\rm out} = \frac{1}{T}\int_0^T dt\, \delta\mathbf{W}^{\rm out}(t),

where δ𝐖 and δ𝐖out are given by Equation (7). For simplicity in this section we ignore the updates to the input weights, since the results in this case are very similar to those for recurrent weight updates.

In the first subsection of this appendix, we show that, under some approximations, the loss function tends to decrease on average under RFLO learning if there is positive alignment between the readout weights 𝐖out and the feedback weights 𝐁. In the second subsection, we show that this alignment tends to increase during RFLO learning.

Decrease of the loss function

We first consider the change in the loss function defined in Equation (2) after updating the weights:

(31) \Delta L = \frac{1}{2T}\int_0^T dt\left[\boldsymbol{\varepsilon}^2(t,\mathbf{W}+\Delta\mathbf{W},\mathbf{W}^{\rm out}+\Delta\mathbf{W}^{\rm out}) - \boldsymbol{\varepsilon}^2(t,\mathbf{W},\mathbf{W}^{\rm out})\right].

Assuming the weight updates to be small, we ignore terms beyond leading order in Δ𝐖 and Δ𝐖out. Then, using the update rules in Equation (30) and performing some algebra, Equation (31) becomes

(32) \Delta L = -\frac{\eta_1}{T}\sum_{ab}\left[\int_0^T\frac{dt}{T}\,\varepsilon_a(t)\, h_b(t)\right]^2
\qquad -\frac{\eta_2}{T}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\sum_{abijk} W^{\rm out}_{ij}\, B_{ak}\,\varepsilon_i(t)\, P^j_{ab}(t)\,\varepsilon_k(t')\, p_{ab}(t')
\equiv \Delta L^{(1)} + \Delta L^{(2)}.

Clearly the first term in Equation (32) always tends to decrease the loss function, as we would expect given that the precise gradient of L with respect to 𝐖out was used to determine this part of the learning rule. We now wish to show that, at least on average and with some simplifying assumptions, the second term in Equation (32) tends to be negative as well. Before beginning, we note in passing that this term would be manifestly nonpositive, like the first term, if we were to perform RTRL, in which case ∑_k B_ak ε_k(t′) p_ab(t′) → ∑_kl W^out_kl ε_k(t′) P^l_ab(t′) in Equation (32), making the gradient exact.
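Written out, this replacement lets the double time integral factorize, so that the second term becomes a sum of complete squares (with the overall factors following the conventions of Equation (32)):

```latex
\Delta L^{(2)}\Big|_{\rm RTRL}
  = -\frac{\eta_2}{T}\sum_{ab}\left[\int_0^T\frac{dt}{T}
      \sum_{kl} W^{\rm out}_{kl}\,\varepsilon_k(t)\, P^l_{ab}(t)\right]^2 \;\le\; 0 .
```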

In order to analyze ΔL^(2), we will assume that the RNN is linear, with φ(x) = x. Further, we will average over the RNN activity 𝐡(t), assuming that the activities are correlated from one timestep to the next, but not from one unit to the next:

(33) \left\langle h_i(t)\, h_j(t')\right\rangle_{\mathbf{h}} = \delta_{ij}\, C(t-t').

The correlation function should be peaked at a positive value at t−t′ = 0 and decay to 0 at much earlier and later times. Finally, because of the antisymmetry under x → −x, odd powers of 𝐡 will average to zero: ⟨h_i⟩_𝐡 = ⟨h_i h_j h_k⟩_𝐡 = 0.

With these assumptions, we can express the activity-averaged second line of Equation (32) as ⟨ΔL^(2)⟩_𝐡 = F₁ + F₂, with

(34) F_1 = -\frac{\eta_2 N}{T}\sum_{ajkl} W^{\rm out}_{kj}\, B_{al}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\, y^*_k(t)\, y^*_l(t')
\qquad\times \int_0^{t}\frac{du}{\tau}\left[e^{(1-\mathbf{W})(u-t)/\tau}\right]_{ja}\int_0^{t'}\frac{du'}{\tau}\, e^{-u'/\tau}\, C(t'-u-u'),

and

(35) F_2 = -\frac{\eta_2 N}{T}\sum_{ajklm} W^{\rm out}_{kj}\, W^{\rm out}_{kl}\, B_{am}\, W^{\rm out}_{ml}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\int_0^{t}\frac{du}{\tau}\int_0^{t'}\frac{du'}{\tau}
\qquad\times e^{-u'/\tau}\left[e^{(1-\mathbf{W})(u-t)/\tau}\right]_{ja}\, C(t-t')\, C(t'-u-u') + O(N^0).

In order to make further progress, we can perform an ensemble average over 𝐖, assuming that W_ij ∼ 𝒩(0, g²/N) is a random variable, which leads to

(36) \left\langle\left[e^{(1-\mathbf{W})(u-t)/\tau}\right]_{ja}\right\rangle_{\mathbf{W}} = \delta_{ja}\, e^{(u-t)/\tau} + O(g^2/N).

This leads to

(37) F_1 = -\frac{\eta_2 N}{T}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\,[\mathbf{y}^*(t)]^{\rm T}\,\mathbf{W}^{\rm out}\mathbf{B}\,\mathbf{y}^*(t')
\qquad\times \int_0^{t}\frac{du}{\tau}\int_0^{t'}\frac{du'}{\tau}\, e^{(-t+u-u')/\tau}\, C(t'-u-u') + O(N^0),

and

(38) F_2 = -\frac{\eta_2 N}{T}\,{\rm Tr}\!\left[(\mathbf{W}^{\rm out})^{\rm T}\mathbf{W}^{\rm out}\mathbf{B}\,\mathbf{W}^{\rm out}\right]\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\int_0^{t}\frac{du}{\tau}\int_0^{t'}\frac{du'}{\tau}
\qquad\times e^{(-t+u-u')/\tau}\, C(t-t')\, C(t'-u-u') + O(N^0).

Putting Equation (37) and Equation (38) together, changing one integration variable, and dropping the terms smaller than O(N) then gives

(39) \left\langle\Delta L^{(2)}\right\rangle_{\mathbf{h},\mathbf{W}} = -\frac{\eta_2 N}{T}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\int_0^{t}\frac{du}{\tau}\int_0^{t'}\frac{dv}{\tau}\, e^{(-t-t'+u+v)/\tau}\, C(u-v)
\qquad\times\left\{[\mathbf{y}^*(t)]^{\rm T}\,\mathbf{W}^{\rm out}\mathbf{B}\,\mathbf{y}^*(t') + C(t-t')\,{\rm Tr}\!\left[(\mathbf{W}^{\rm out})^{\rm T}\mathbf{W}^{\rm out}\mathbf{B}\,\mathbf{W}^{\rm out}\right]\right\}.

Because we have assumed that C(t) ≥ 0, the sign of this quantity depends only on the sign of the two terms in the second line of Equation (39).

Already we can see that Equation (39) will tend to be negative when 𝐖out is aligned with 𝐁. To see this, suppose that 𝐁 = α(𝐖out)ᵀ, with α > 0. Due to the exponential factor, the integrand will be vanishingly small except when t ≈ t′, so that the first term in the second line in this case can be written as α|(𝐖out)ᵀ𝐲*(t)|² ≥ 0. The second term, meanwhile, becomes αC(t−t′) Tr[((𝐖out)ᵀ𝐖out)²] ≥ 0.

The situation is most transparent if we assume that the RNN readout is one-dimensional, in which case the readout and feedback weights become vectors 𝐰out and 𝐛, respectively, and Equation (39) becomes

(40) \left\langle\Delta L^{(2)}\right\rangle_{\mathbf{h},\mathbf{W}} = -\frac{\eta_2 N}{T}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\int_0^{t}\frac{du}{\tau}\int_0^{t'}\frac{dv}{\tau}\, e^{(-t-t'+u+v)/\tau}\, C(u-v)
\qquad\times\left\{ y^*(t)\, y^*(t')\,\mathbf{w}^{\rm out}\!\cdot\mathbf{b} + C(t-t')\,|\mathbf{w}^{\rm out}|^2\,\mathbf{w}^{\rm out}\!\cdot\mathbf{b}\right\}.

In this case it is clear that, as in the case of feedforward networks (Lillicrap et al., 2016), the loss function tends to decrease when the readout weights become aligned with the feedback weights. In the following subsection we will show that, at least under similar approximations to the ones made here, such alignment does in fact occur.

Alignment of readout weights with feedback weights

In the preceding subsection it was shown that, assuming a linear RNN and averaging over activities and recurrent weights, the loss function tends to decrease when the alignment between the readout weights 𝐖out and the feedback weights 𝐁 becomes positive. In this subsection we ask whether such alignment does indeed occur.

In order to address this question, we consider the quantity Tr(𝐖out𝐁) and ask how it changes following one cycle of training, with combined weight updates on 𝐖 and 𝐖out. (As in the preceding subsection, external input to the RNN is ignored here for simplicity.) The effect of modifying the readout weights is obvious from Equation (15):

(41) \Delta\,{\rm Tr}(\mathbf{W}^{\rm out}\mathbf{B}) = {\rm Tr}\!\left((\Delta\mathbf{W}^{\rm out})\mathbf{B}\right) = \eta_1\sum_{ab} B_{ba}\int_0^T\frac{dt}{T}\,\varepsilon_a(t)\,h_b(t).

The update to the recurrent weights, on the other hand, modifies 𝐡(t) in the above equation. Because we are interested in the combined effect of the two weight updates and are free to make the learning rates arbitrarily small, we focus on the following quantity:

(42) G \equiv \frac{\partial^2}{\partial\eta_1\,\partial\eta_2}\bigg|_{\eta_1,\eta_2=0}\Delta\,{\rm Tr}(\mathbf{W}^{\rm out}\mathbf{B}) = \frac{\partial}{\partial\eta_2}\bigg|_{\eta_2=0}\sum_{ab} B_{ba}\int_0^T\frac{dt}{T}\,\varepsilon_a(t)\,h_b(t).

The goal of this subsection is thus to show that (at least on average) G>0.

In order to evaluate this quantity, we need to know how the RNN activity 𝐡(t) depends on the weight modification Δ𝐖. As in the preceding subsection, we will assume a linear RNN and will work in the continuous-time limit (τ ≫ 1) for convenience. In this case, the dynamics are given by

(43) \tau\frac{d}{dt}\mathbf{h}(t) = (\mathbf{W}+\Delta\mathbf{W})\,\mathbf{h}(t).

If we wish to integrate this equation to get 𝐡(t) and expand to leading order in Δ𝐖, care must be taken due to the fact that 𝐖 and Δ𝐖 are non-commuting matrices. Taking a cue from perturbation theory in quantum mechanics (Sakurai, 1994), we can work in the ‘interaction picture’ and obtain

(44) \mathbf{h}(t) = e^{\mathbf{W}t/\tau}\, e^{\Delta\hat{\mathbf{W}}\,t/\tau}\,\mathbf{h}(0),

where

(45) \Delta\hat{\mathbf{W}} \equiv e^{-\mathbf{W}t/\tau}\,\Delta\mathbf{W}\, e^{\mathbf{W}t/\tau}.

We can now expand Equation (44) to obtain

(46) \mathbf{h}(t) = \left[e^{\mathbf{W}t/\tau} + \frac{t}{\tau}\,\Delta\mathbf{W}\, e^{\mathbf{W}t/\tau} + O(\eta_2^2)\right]\mathbf{h}(0).

For a linear network, the update rule for 𝐖 from Equation (15) is then simply

(47) \Delta W_{ab} = \eta_2\int_0^T\frac{dt}{T}\sum_c B_{ac}\,\varepsilon_c(t)\,\bar{h}_b(t),

where the bar denotes low-pass filtering:

(48) \bar{\mathbf{h}}(t) = \int_0^t\frac{dt'}{\tau}\, e^{-t'/\tau}\,\mathbf{h}(t-t').

Combining Equations (46)–(48), the time-dependent activity vector to leading order in η₂ is

(49) h_i(t) = \hat{h}_i(t) + \eta_2\,\frac{t}{\tau}\sum_{jk} B_{ik}\int_0^T\frac{dt'}{T}\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\left[ y^*_k(t') - \sum_l W^{\rm out}_{kl}\,\hat{h}_l(t')\right]\hat{h}_j(t'-t'')\,\hat{h}_j(t),

where 𝐡^(t) is the unperturbed RNN activity vector (i.e. without the weight update Δ𝐖). With this result, we can express Equation (42) as G=G1+G2, where

(50) G_1 = \sum_{abij} B_{ba}\, B_{bj}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\,\frac{t}{\tau}\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\,\hat{\varepsilon}_a(t)\,\hat{\varepsilon}_j(t')\,\hat{h}_i(t)\,\hat{h}_i(t'-t'')

and

(51) G_2 = -\sum_{abijk} B_{ba}\, B_{ik}\, W^{\rm out}_{ai}\int_0^T\frac{dt}{T}\int_0^T\frac{dt'}{T}\,\frac{t}{\tau}\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\,\hat{h}_b(t)\,\hat{\varepsilon}_k(t')\,\hat{h}_j(t)\,\hat{h}_j(t'-t'').

Here we have defined 𝜺^(t) ≡ 𝐲*(t) − 𝐖out𝐡^(t).

In order to make further progress, we follow the approach of the previous subsection and perform an average over RNN activity vectors, which yields

(52) \langle G_1\rangle_{\hat{\mathbf{h}}} = \frac{N}{\tau}\int_0^T\frac{dt}{T}\, t\int_0^T\frac{dt'}{T}\left\{ [\mathbf{y}^*(t)]^{\rm T}\mathbf{B}^{\rm T}\mathbf{B}\,\mathbf{y}^*(t')\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\, C(t-t'+t'') \right.
\qquad\left. +\,{\rm Tr}\!\left[\mathbf{B}^{\rm T}\mathbf{B}\,\mathbf{W}^{\rm out}(\mathbf{W}^{\rm out})^{\rm T}\right]\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\, C(t-t')\, C(t-t'+t'') + O(1/N)\right\}

and

(53) \langle G_2\rangle_{\hat{\mathbf{h}}} = \frac{N}{\tau}\,{\rm Tr}\!\left(\mathbf{B}\,\mathbf{W}^{\rm out}\mathbf{B}\,\mathbf{W}^{\rm out}\right)\int_0^T\frac{dt}{T}\, t\int_0^T\frac{dt'}{T}\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\left[ C(t-t')\, C(t-t'+t'') + O(1/N)\right].

Similar to the integral in Equation (39), both of these quantities will tend to be positive if we assume that C(t) ≥ 0 with a peak at t = 0, and note that the integrand is large only when t ≈ t′.

In order to make the result even more transparent, we can again consider the case of a one-dimensional readout, in which case Equations (52) and (53) become

(54) \langle G_1\rangle_{\hat{\mathbf{h}}} = \frac{N|\mathbf{b}|^2}{\tau}\int_0^T\frac{dt}{T}\, t\int_0^T\frac{dt'}{T}\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\left[ y^*(t)\, y^*(t')\, C(t-t'+t'') + |\mathbf{w}^{\rm out}|^2\, C(t-t')\, C(t-t'+t'') + O(1/N)\right]

and

(55) \langle G_2\rangle_{\hat{\mathbf{h}}} = \frac{N}{\tau}\,(\mathbf{w}^{\rm out}\!\cdot\mathbf{b})^2\int_0^T\frac{dt}{T}\, t\int_0^T\frac{dt'}{T}\int_0^{t'}\frac{dt''}{\tau}\, e^{-t''/\tau}\left[ C(t-t')\, C(t-t'+t'') + O(1/N)\right].

This version illustrates even more clearly that the right hand sides of these equations tend to be positive.

Equation (52) (or, in the case of one-dimensional readout, Equation (54)) shows that the overlap between the readout weights and feedback weights tends to increase with training. Equation (39) (or Equation (40)) then shows that the readout error will tend to decrease during training given that this overlap is positive. While these mathematical results provide a compelling plausibility argument for the efficacy of RFLO learning, it is important to recall that some limiting assumptions were required in order to obtain them. Specifically, we assumed linearity of the RNN and vanishing of the cross-correlations in the RNN activity, neither of which is strictly true in a trained nonlinear network. In order to show that RFLO learning remains effective even without these limitations, we must turn to numerical simulations such as those performed in the main text.
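A simple numerical check of both claims is to train a small network with RFLO on a toy target while tracking the loss and the normalized overlap Tr(𝐖out𝐁)/(‖𝐖out‖‖𝐁‖) across trials. The numpy sketch below does this; the network size, target, and learning rates are arbitrary illustrative choices rather than the settings used for the simulations in the main text, and how well this particular toy run learns the target will depend on those choices.

```python
import numpy as np

# Toy RFLO run tracking the loss and the readout/feedback alignment.
rng = np.random.default_rng(0)
N, N_out, T, tau, trials = 50, 1, 200, 10.0, 500
eta1, eta2 = 1e-3, 1e-3

W = rng.normal(0.0, 1.5/np.sqrt(N), (N, N))        # recurrent weights (g = 1.5)
W_out = rng.normal(0.0, 1.0/np.sqrt(N), (N_out, N))
B = rng.normal(0.0, 1.0/np.sqrt(N), (N, N_out))    # fixed random feedback
h0 = rng.normal(0.0, 1.0, N)                       # fixed initial condition
y_star = np.sin(2*np.pi*np.arange(T)/T)[:, None]   # toy one-dimensional target

for trial in range(trials):
    h, p, loss = h0.copy(), np.zeros((N, N)), 0.0
    for t in range(T):
        u = W @ h                                  # no external input in this sketch
        dphi = 1.0 - np.tanh(u)**2
        p = (1 - 1/tau)*p + (1/tau)*np.outer(dphi, h)   # eligibility trace (uses h(t-1))
        h = (1 - 1/tau)*h + (1/tau)*np.tanh(u)
        eps = y_star[t] - W_out @ h
        loss += 0.5*np.sum(eps**2)/T
        W += eta2 * (B @ eps)[:, None] * p         # RFLO recurrent update
        W_out += eta1 * np.outer(eps, h)           # delta-rule readout update
    if trial % 100 == 0:
        align = np.trace(W_out @ B) / (np.linalg.norm(W_out) * np.linalg.norm(B))
        print(f"trial {trial:4d}   loss {loss:.4f}   alignment {align:+.3f}")
```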

https://doi.org/10.7554/eLife.43299.012

Data availability

Code implementing the RFLO learning algorithm for the example shown in Figure 2 has been included as a source code file accompanying this manuscript.

References

Dale H (1935) Pharmacology and Nerve-endings (Walter Ernest Dixon Memorial Lecture) (Section of Therapeutics and Pharmacology). Proceedings of the Royal Society of Medicine 28:319–332.

Goodfellow I, Bengio Y, Courville A (2016) Deep Learning. MIT Press.

Lashley KS (1951) The Problem of Serial Order in Behavior, 21. Bobbs-Merrill.

Lecun Y (1988) A theoretical framework for back-propagation. In: Touretzky D, Hinton G, Sejnowski T, editors. Proceedings of the 1988 Connectionist Models Summer School. Pittsburgh, PA: Morgan Kaufmann. pp. 21–28.

Liao Q, Leibo JZ, Poggio TA (2016) How important is weight symmetry in backpropagation? AAAI'16 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. pp. 1837–1844.

Logiaco L, Abbott LF, Escola GS (2018) The corticothalamic loop can control cortical dynamics for flexible robust motor output. Poster at Cosyne 2018.

Mujika A, Meier F, Steger A (2018) Approximating real-time recurrent learning with random Kronecker factors. Advances in Neural Information Processing Systems 31, pp. 6594–6603. Curran.

Pascanu R, Gülçehre Ç, Cho K, Bengio Y (2014) How to construct deep recurrent neural networks. 2nd International Conference on Learning Representations.

Sakurai JJ (1994) Modern Quantum Mechanics. Addison-Wesley.

Tallec C, Ollivier Y (2018) Unbiased online recurrent optimization. International Conference on Learning Representations.

Article and author information

Author details

  1. James M Murray

    Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, United States
    Contribution
    Conceptualization, Writing—original draft, Writing—review and editing
    For correspondence
    jm4347@columbia.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-3706-4895

Funding

National Institutes of Health (DP5 OD019897)

  • James M Murray

National Science Foundation (DBI-1707398)

  • James M Murray

Gatsby Charitable Foundation

  • James M Murray

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

The author is grateful to LF Abbott, GS Escola, and A Litwin-Kumar for helpful discussions and feedback on the manuscript. Support for this work was provided by the National Science Foundation NeuroNex program (DBI-1707398), the National Institutes of Health (DP5 OD019897), and the Gatsby Charitable Foundation.

Copyright

© 2019, Murray

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

