Interplay between external inputs and recurrent dynamics during movement preparation and execution in a network model of motor cortex
Abstract
The primary motor cortex has been shown to coordinate movement preparation and execution through computations in approximately orthogonal subspaces. The underlying network mechanisms, and the roles played by external and recurrent connectivity, are central open questions that need to be answered to understand the neural substrates of motor control. We develop a recurrent neural network model that recapitulates the temporal evolution of neuronal activity recorded from the primary motor cortex of a macaque monkey during an instructed delayed-reach task. In particular, it reproduces the observed dynamic patterns of covariation between neural activity and the direction of motion. We explore the hypothesis that the observed dynamics emerges from a synaptic connectivity structure that depends on the preferred directions of neurons in both preparatory and movement-related epochs, and we constrain the strength of both synaptic connectivity and external input parameters from data. While the model can reproduce neural activity for multiple combinations of the feedforward and recurrent connections, the solution that requires minimal external inputs is one where the observed patterns of covariance are shaped by external inputs during movement preparation, while they are dominated by strong direction-specific recurrent connectivity during movement execution. Our model also demonstrates that the way in which single-neuron tuning properties change over time can explain the level of orthogonality of preparatory and movement-related subspaces.
Editor's evaluation
The study develops a recurrent network model of M1 for center-out reaches, starting from a conventional tuning (or representational) perspective. Through recurrent connectivity, the model shows uncorrelated tuning for movement direction during preparation and execution, with a dynamic transition between the two states. The continuous attractor model provides an important example of flexible switching between neural representations and is supported by convincing simulations and analysis.
https://doi.org/10.7554/eLife.77690.sa0
Introduction
The activity of the primary motor cortex (M1) during movement preparation and execution plays a key role in the control of voluntary limb movement (Evarts, 1968; Whishaw et al., 1993; Whishaw, 2000; Graziano et al., 2002; Harrison et al., 2012; Scott, 2012; Brown and Teskey, 2014). Classic studies of motor preparation were performed in a delayed-reaching task setting, showing that firing rates correlate with task-relevant parameters during the delay period, despite no movement occurring (Hanes and Schall, 1996; Tanji and Evarts, 1976; Churchland et al., 2006a; Churchland et al., 2006b; Messier and Kalaska, 2000; Dorris et al., 1997; Glimcher and Sparks, 1992; Wurtz and Goldberg, 1972; Darlington et al., 2018; Darlington and Lisberger, 2020). More recent works have shown that preparatory activity is also displayed before non-delayed movements (Lara et al., 2018), that it is involved in reach correction (Ames et al., 2019), and that when multiple reaches are executed rapidly and continuously, each upcoming reach is prepared by motor cortical activity while the current reach is in action (Zimnik and Churchland, 2021). Preparation and execution of different reaches are thought to be processed simultaneously without interference in the motor cortex through computation along orthogonal dimensions (Zimnik and Churchland, 2021). Indeed, the preparatory and movement-related subspaces identified by linear dimensionality reduction methods are almost orthogonal (Elsayed et al., 2016), so that simple linear readouts that transform motor cortical activity into movement commands will not produce premature movement during the planning stage (Kaufman et al., 2014). However, response patterns in these two epochs are nevertheless linked, as demonstrated by the fact that a linear transformation can explain the flow of activity from the preparatory subspace to the movement subspace (Elsayed et al., 2016).
How this population-level strategy is implemented at the circuit level is still under investigation (Kao et al., 2021). A related open question (Malonis et al., 2021) is whether inputs from areas upstream of the primary motor cortex (such as the thalamus and other cortical regions, here referred to as external inputs), which have been shown to be necessary to sustain movement generation (Sauerbrei et al., 2020), are specific to the type of movement being generated throughout the whole course of the motor action, or whether they serve to set the initial conditions for the dynamics of the motor cortical network to evolve as shaped by recurrent connections (Churchland et al., 2012; Shenoy et al., 2013; Hennequin et al., 2014; Sussillo et al., 2015; Kaufman et al., 2016; Vyas et al., 2020).
In this work, we use a network modeling approach to explain the relationship between network connectivity, external inputs, and computations in orthogonal dimensions. Our analysis is based on electrophysiological recordings from M1 of a macaque monkey performing a delayed center-out reaching task. The dynamics of motor cortical neurons during reaching limb movements has been shown to be low-dimensional (Gallego et al., 2018). Here, we develop a low-dimensional description of the dynamics using order parameters that quantify the covariation between neural activity and the direction of motion (see Georgopoulos et al., 1986; Schwartz et al., 1988; Georgopoulos et al., 1989; Georgopoulos et al., 1993; but also Scott and Kalaska, 1997; Scott et al., 2001). Recorded neurons are tuned to the direction of motion both during movement preparation and execution, but their preferred direction and the amplitude of their tuning function change over time (Hatsopoulos et al., 2007; Churchland and Shenoy, 2007; Rickert et al., 2009). Interestingly, major changes happen when the activity flows from the preparatory to the movement-related subspaces. We describe neuronal selectivity during the task by four parameters: two angular variables corresponding to the preferred direction during movement preparation and execution, respectively; and two parameters that represent the strength of tuning in the two epochs. We characterized the empirical distribution of these parameters, and investigated potential network mechanisms that can generate the observed tuning properties by building a recurrent neural network model whose synaptic weights depend on the tuning properties of pre- and postsynaptic neurons, and whose external inputs can contain information about movement direction.
First, we analytically derived a low-dimensional description of the dynamics in terms of a few observables, denoted as order parameters, which recapitulate the temporal evolution of the population-level patterns of tuning to the direction of motion. Then, we inferred the strength of recurrent connections and external inputs from data, by imposing that the model reproduce the observed dynamics of the order parameters. There are multiple combinations of feedforward and recurrent connections that allow the model to generate neural activity that strongly resembles the recorded activity both at the single-neuron and population level, and that can be transformed into realistic patterns of muscle activity by a linear readout. To break the model degeneracy, we imposed an extra cost associated with large external inputs, which likely require more metabolic energy consumption than local recurrent inputs. The resulting solution suggests that different network mechanisms operate during movement preparation and execution. During the delay period, the population activity is shaped by external inputs that are tuned to the preferred directions of the neurons. During movement execution, the localized pattern of activity is maintained via strong direction-specific recurrent connections. Finally, we show how the specific way in which neurons' tuning properties rearrange over time produces the observed level of orthogonality between the preparatory and movement-related subspaces.
Results
Subjects and task
We analyzed multielectrode recordings from the primary motor cortex (M1) of two macaque monkeys performing a previously reported instructed-delay, center-out reaching task (Rubino et al., 2006). The monkey’s arm rested on a two-link exoskeletal robotic arm, so that the position of the monkey’s hand controlled the location of a cursor projected onto a horizontal screen. The task consisted of three periods (Figure 1a): a hold period, during which the monkey was trained to hold the cursor on a center target and wait 500ms for the instruction cue; an instruction period, during which the monkey was presented with one of eight evenly spaced peripheral targets and continued to hold at the center for an additional 1,000–1,500ms; and a movement period, signalled by a go cue, when the monkey initiated the reach to the peripheral target. Successful trials, in which the monkeys reached the target, were rewarded with juice or water. The peripheral target remained on the screen throughout the instruction and movement periods. In line with previous studies (e.g. Elsayed et al., 2016), the preparatory and movement-related epochs were defined as two 300ms time intervals beginning, respectively, 100ms after target onset and 50ms before the start of the movement.
Patterns of correlation between neural activity and movement direction
Studies of motor cortex have shown that tuning to movement direction is not a time-invariant property of motor cortical neurons, but rather varies in time throughout the course of the motor action; single-neuron encoding of entire movement trajectories has also been reported (e.g. Hatsopoulos et al., 2007; Churchland and Shenoy, 2007). We measured temporal variations in neurons' preferred direction by binning time into 160ms time bins and fitting the binned trial-averaged spike counts as a function of movement direction with a cosine function. As the variability in preferred direction during the delay period alone and during movement execution alone was significantly smaller than the variability during the entire duration of the task (Figure 1—figure supplement 1), we characterized neurons' tuning properties only in terms of these two epochs. This simplifying assumption is discussed further in Methods and Discussion. We will show later that such a simplification is enough for the model to recapitulate neuronal activity both at the level of single units and at the population level.
Figure 1b–c show two examples of tuning curves, where the trial-averaged and time-averaged activity during movement preparation and execution is plotted as a function of the location of the target on the screen, together with a cosine fitting curve. In the two epochs, neurons change their preferred direction, denoted respectively as ${\theta}_{A}$ and ${\theta}_{B}$, and their degree of participation, which is proportional to the amplitude of the cosine tuning function and denoted as ${\eta}_{A}$ and ${\eta}_{B}$. A scatter plot of the preferred directions in the preparatory (${\theta}_{A}$) and execution (${\theta}_{B}$) epochs for all neurons is shown in Figure 1d, while a similar scatter plot for the degrees of tuning in the preparatory (${\eta}_{A}$) and execution (${\eta}_{B}$) epochs is shown in Figure 1e. Neurons' preferred directions in the two epochs are moderately correlated (circular correlation coefficient $r=0.4$, $p=3\times{10}^{-5}$), and so are their degrees of participation, albeit to a lesser degree (correlation coefficient $r=0.3$, $p={10}^{-4}$). Neuronal tuning to movement direction is reflected in a population activity profile that is spatially localized, as shown in Figure 1f–g, where we plot the normalized firing rate of all neurons as a function of their preferred direction, separately for the two epochs. An example of the activity of a single neuron in the two epochs is highlighted in red. It illustrates how the activity of single neurons is drastically rearranged across time, while the population activity remains localized around the same angular location, which corresponds to the location of the target on the screen.
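Since $\eta\cos(\phi-\theta)=a\cos\phi+b\sin\phi$, the cosine fit described above reduces to ordinary linear least squares. The following sketch illustrates the procedure (the function name and synthetic responses are ours, for illustration only):

```python
import numpy as np

def fit_cosine_tuning(directions, rates):
    """Fit rates ~ b0 + eta*cos(direction - theta) by linear least squares.

    Uses the identity eta*cos(phi - theta) = a*cos(phi) + b*sin(phi),
    with eta = sqrt(a^2 + b^2) and theta = atan2(b, a).
    Returns (b0, eta, theta).
    """
    X = np.column_stack([np.ones_like(directions),
                         np.cos(directions), np.sin(directions)])
    b0, a, b = np.linalg.lstsq(X, rates, rcond=None)[0]
    eta = np.hypot(a, b)        # degree of participation
    theta = np.arctan2(b, a)    # preferred direction
    return b0, eta, theta

# Eight evenly spaced targets, as in the center-out task
phis = np.arange(8) * 2 * np.pi / 8
rates = 10 + 4 * np.cos(phis - 1.0)     # synthetic tuned responses
b0, eta, theta = fit_cosine_tuning(phis, rates)
```

On these noiseless synthetic rates the fit recovers the baseline, amplitude, and preferred direction exactly; on binned spike counts it yields the $(\theta,\eta)$ pairs analyzed above.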
Recurrent neural network model
The experimental observations described above motivated us to build a network model in which neuronal selectivity properties match the ones observed in the data. In particular, we built a network model in which neurons are characterized by the same four parameters we used to fit the data: the preferred directions ${\theta}_{A},{\theta}_{B}$ and the degrees of tuning ${\eta}_{A},{\eta}_{B}$, leading to a 4-dimensional selectivity space. The subspace defined by the coordinates $({\theta}_{A},{\eta}_{A})$ is denoted as map $A$, while the subspace defined by $({\theta}_{B},{\eta}_{B})$ is denoted as map $B$. We will denote the coordinate vector by
and refer to the units in the network as neurons, even though each unit in the model could instead represent a group of M1 neurons with similar functional properties; likewise, the connection between two units in our model could represent the effective connection between two functionally similar groups of neurons in M1.
In this model, neurons with coordinates (selectivity parameters) $x$ are described by their firing rate $r(x,t)$ whose temporal evolution is given by:
where $\tau $ is the time constant of the firing rate dynamics, $[\phantom{\rule{thickmathspace}{0ex}}{]}_{+}$ is the threshold-linear (a.k.a. ReLU) transfer function that converts synaptic inputs into firing rates, and ${I}^{\text{tot}}(x;t)$ is the total synaptic input to neurons with coordinates $x$. We set the time constant to $\tau =25$ ms, which is of the same order of magnitude as the membrane time constant, and we checked that for values of $\tau $ in the range 10ms–100ms our results did not change quantitatively. The total input to a neuron is
The first term on the r.h.s. of Equation 2 is the recurrent input, which depends on the firing rates of presynaptic neurons $r({x}^{\prime},t)$ and on $J(x,{x}^{\prime})$, the strength of recurrent connections from neurons with coordinates ${x}^{\prime}$ to neurons with coordinates $x$. ${I}^{\text{ext}}(x;t)$ is the external input, and $\rho (x)$ is the probability density of $x$.
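For a finite network of $N$ units, the rate dynamics of Equations 1–2 can be integrated with a simple forward Euler scheme. The sketch below replaces the integral over $\rho(x)$ with a population average $J r / N$ and uses a constant external input; all names and values are illustrative placeholders, not the fitted model:

```python
import numpy as np

def simulate_rates(J, I_ext, r0, tau=0.025, dt=0.001, T=0.3):
    """Forward-Euler integration of tau*dr/dt = -r + [J @ r / N + I_ext]_+ .

    J     : (N, N) recurrent coupling matrix (population average stands in
            for the integral over rho(x))
    I_ext : (N,) constant external input
    r0    : (N,) initial firing rates
    Returns the (steps, N) array of rates over time.
    """
    N = len(r0)
    r = r0.astype(float).copy()
    steps = int(T / dt)
    out = np.empty((steps, N))
    for t in range(steps):
        total_input = J @ r / N + I_ext
        # threshold-linear (ReLU) transfer function
        r += dt / tau * (-r + np.maximum(total_input, 0.0))
        out[t] = r
    return out

# Tiny sanity check: no recurrence, constant input -> rates relax to [I_ext]_+
N = 4
rates = simulate_rates(np.zeros((N, N)), np.full(N, 2.0), np.zeros(N))
```

With $\tau=25$ ms and a 300 ms window, the relaxation to the fixed point is essentially complete, matching the single-neuron time constant quoted above.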
The recurrent term in ${I}^{\text{tot}}$ is the sum of single neuron contributions: here, we assumed that the cortical network is sufficiently large that instead of summing over the contributions of single neurons, each one at a given coordinate $x=\{{\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}\}$, we can integrate over a continuous distribution of $x$. The probability density of the coordinates $\rho (x)$ is set to match the empirical distribution of the preferred direction and degree of participation. The preferred directions are not significantly correlated with the degrees of tuning (Figure 1—figure supplement 2), and we therefore took them to be independent:
The distribution of preferred directions was well fitted by:
as shown in Figure 1—figure supplement 2, while for the sake of simplicity we assumed ${\rho}_{p}({\eta}_{A},{\eta}_{B})={\rho}_{pA}({\eta}_{A}){\rho}_{pB}({\eta}_{B})$ and estimated ${\rho}_{pA},{\rho}_{pB}$ nonparametrically using kernel density estimation. The last two ingredients that we need to specify to define the network dynamics are the recurrent couplings and external inputs. The strength of synaptic connections from a presynaptic neuron with preferred directions $({\theta}_{A}^{\prime},{\theta}_{B}^{\prime})$ and participation strengths $({\eta}_{A}^{\prime},{\eta}_{B}^{\prime})$ to a postsynaptic neuron with preferred directions $({\theta}_{A},{\theta}_{B})$ and participation strengths $({\eta}_{A},{\eta}_{B})$ is given by
where ${j}_{0}$ represents a uniform inhibitory term; ${j}_{s}^{A}$ and ${j}_{s}^{B}$ measure the amplitude of the symmetric direction-specific connections in map $A$ and map $B$, respectively; and ${j}_{a}$ measures the amplitude of asymmetric connections from map $A$ to map $B$. In the Methods, we explain how the dynamics of a network with such recurrent connectivity can reproduce the spatially localized population activity that we observed in the data, and we provide an intuition for the role of the different coupling parameters. A schematic depiction of the network is shown in Figure 2a. We parameterized the external input in analogy with the recurrent input:
where ${C}_{0}$ represents an untuned homogeneous external input; ${C}_{A}$ and ${C}_{B}$ represent untuned map-specific inputs to maps $A$ and $B$, respectively; ${\epsilon}_{A}$ and ${\epsilon}_{B}$ represent directionally tuned inputs to maps $A$ and $B$, respectively; and ${\mathrm{\Phi}}^{\text{ext}}$ is the direction encoded by the external inputs. An example of the spatially localized population activity at a given time $t$, visualized in different 2D subspaces of the 4D space with coordinates $x=\{{\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}\}$, is shown in Figure 2b (blue), and an example of tuning curves from the network’s activity is shown in Figure 2c.
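For a finite network, a coupling matrix with the ingredients described around Equation 5 (uniform inhibition plus Hebbian-like cosine interactions in each map, weighted by the degrees of participation) can be assembled as follows. The parameter values, coordinate distributions, and normalization here are illustrative simplifications of ours, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Illustrative coordinates: preferred directions and degrees of participation
theta_A = rng.uniform(0, 2 * np.pi, N)
theta_B = rng.uniform(0, 2 * np.pi, N)
eta_A = rng.uniform(0.5, 1.5, N)
eta_B = rng.uniform(0.5, 1.5, N)

# Uniform inhibition j0, symmetric direction-specific terms in maps A and B,
# and an asymmetric A->B term (set to zero here, as in the inferred solution).
j0, jsA, jsB, ja = 1.0, 0.8, 1.2, 0.0
J = (-j0
     + jsA * np.outer(eta_A, eta_A) * np.cos(theta_A[:, None] - theta_A[None, :])
     + jsB * np.outer(eta_B, eta_B) * np.cos(theta_B[:, None] - theta_B[None, :])
     + ja * np.outer(eta_B, eta_A) * np.cos(theta_B[:, None] - theta_A[None, :]))
```

With ${j}_{a}=0$ the matrix is symmetric, and the uniform inhibitory term dominates its mean, keeping the overall recurrent drive inhibitory.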
Order parameters
The simplicity of the model described by Equations 1–6 allowed us to derive a low-dimensional description of the model dynamics in terms of a few population-level signals denoted as order parameters. The order parameters quantify the average population activity ${r}_{0}(t)$, and the degree to which the population activity is localized in map $A$ (${r}_{A}(t)$) and map $B$ (${r}_{B}(t)$). These parameters are defined by the following equations:
Note that ${r}_{A}$ and ${r}_{B}$ are the first Fourier coefficients of the population rate over the domains ${\theta}_{A}$ and ${\theta}_{B}$, respectively.
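In a finite network, these order parameters can be estimated by replacing the integral over $\rho(x)$ with an average over neurons. A minimal sketch on synthetic rates (ours; for simplicity the $\eta$ weighting is omitted, so ${r}_{A}$ and ${r}_{B}$ are plain first circular Fourier coefficients):

```python
import numpy as np

def order_parameters(rates, theta_A, theta_B):
    """Mean rate r0 and tuning amplitudes rA, rB at one time point.

    rA (rB) is the modulus of the first Fourier coefficient of the
    population rate over theta_A (theta_B); averages over neurons stand
    in for the integral over rho(x) under uniform sampling.
    """
    r0 = rates.mean()
    rA = np.abs(np.mean(rates * np.exp(1j * theta_A)))
    rB = np.abs(np.mean(rates * np.exp(1j * theta_B)))
    return r0, rA, rB

# A bump localized in map A: tuned to theta_A, untuned in theta_B
N = 1000
theta_A = np.arange(N) * 2 * np.pi / N
theta_B = np.random.default_rng(1).permutation(theta_A)
rates = 5.0 + 2.0 * np.cos(theta_A)      # bump centered at theta_A = 0
r0, rA, rB = order_parameters(rates, theta_A, theta_B)
```

For this synthetic bump the population is localized in map $A$ (${r}_{A}>0$) but not in map $B$ (${r}_{B}\sim 0$), as during the preparatory period in Figure 3a.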
These order parameters can be computed both in the model and from the data. Therefore, they provide us with a simple tool to compare model and data, and to infer network parameters from data. Figure 3a shows the dynamics of the order parameters computed from the data during the delayed reaching task. As expected, we see that during the preparatory period the activity is localized in map $A$ (${r}_{A}(t)>0$) but not in map $B$ (${r}_{B}(t)\sim 0$). Around the time of the go cue, network activity dynamically reorganizes, such that it becomes strongly localized in map $B$ (${r}_{B}(t)>0$), while the degree of modulation in map $A$ slowly decreases towards zero. We can think of map $A$ and map $B$ as two different coordinate systems that the network can use to produce distinct patterns of population activity, which are both localized around the location of the target on the screen, but in different ways. A spatially localized activity profile associated with either ${r}_{A}(t)>0$ or ${r}_{B}(t)>0$ is denoted as a bump; a definition of this term is provided in the Methods. At different times, the bump of activity has different shapes – either localized in map $A$, or in map $B$, or in both maps – but its location remains constant. Next, we studied possible mechanisms underlying these observations.
Tuned states in the model: External inputs vs recurrent connectivity
We next investigated the network model to understand the mechanisms underlying the observed dynamics. We first derived equations governing the temporal evolution of the order parameters (see Methods). We then analyzed the stationary solutions of the equations in the absence of tuned inputs (${C}_{A}={C}_{B}={\epsilon}_{A}={\epsilon}_{B}=0$), in the space of the three parameters defining the coupling strengths: ${j}_{0},{j}_{s}^{A},{j}_{s}^{B}$. We found that, similarly to the ring model (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998), there exist two qualitatively distinct regions in the space of parameters characterizing recurrent connectivity (Figure 3b). For weak recurrent connectivity (gray area in Figure 3b), the activity in the network is uniform in the absence of tuned inputs. Thus, in this region, tuned network activity must rely on external inputs. For strong recurrent connectivity (white area in Figure 3b), network activity is tuned even in the absence of tuned inputs. In this case, the location of the bump in network activity is determined by the initial conditions. These two regimes are separated by a bifurcation line, where the uniform solution becomes unstable due to a Turing instability (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998).
This analysis shows that a network activity profile that is localized in a given map, say map $A$ (${r}_{A}(t)>0$), can be sustained either by strong recurrent connectivity (i.e. a strong enough parameter ${j}_{s}^{A}$ in Equation 5) or by the external input term proportional to ${\epsilon}_{A}(t)$ (Equation 6). Hence, our model can potentially generate the observed dynamics of the order parameters (Figure 3a) for different choices of recurrent connection and external input parameters, ranging between the following two opposite scenarios:
Recurrent connections are absent: ${j}_{s}^{A}={j}_{s}^{B}={j}_{a}=0$. In this case, the system is simply driven by feedforward inputs. The localized activity is the result of external inputs that selectively excite or inhibit specific neurons during motor preparation and execution, thanks to sufficiently large values of ${\epsilon}_{A}(t)>0$ during preparation and ${\epsilon}_{B}(t)>0$ during execution. This scenario is depicted schematically in Figure 3c.
The network is strongly recurrent, with strongly direction-specific connections, and external inputs are untuned (${\epsilon}_{A}=0$, ${\epsilon}_{B}=0$). Analytical study of the dynamics (Methods) shows that, in order for such a system to exhibit tuning, the strength of the couplings has to exceed a critical value shown in Figure 3b. Moreover, when the synaptic strength exceeds this value, the activity can be localized in map $A$ during preparation and in map $B$ during execution simply as the result of homogeneous external inputs changing their strength, without the need for tuned external inputs to selectively excite/inhibit specific neurons. This scenario is depicted in Figure 3d.
We next turn to the question of which of these two scenarios best describes the data.
Fitting the model to the data
Our next goal is to infer the strengths of recurrent connections (${j}_{0},{j}_{s}^{A},{j}_{s}^{B}$) and feedforward inputs (${C}_{0}(t),{C}_{A}(t),{C}_{B}(t),{\epsilon}_{A}(t),{\epsilon}_{B}(t)$) that best describe the data. To do so, we imposed that our network reproduce the dynamics of the population-level order parameters $({r}_{0},{r}_{A},{r}_{B})$. Specifically, for a given set of recurrent connections and feedforward inputs, we can compute analytically the dynamics of the order parameters (Methods). Based on these calculations, we built an iterative procedure that minimizes the reconstruction error ${E}_{\text{rec}}$, i.e. the squared difference between the predicted and observed dynamics of the order parameters. The results show that there is a large region of the $({j}_{0},{j}_{s}^{A},{j}_{s}^{B})$ space where the reconstruction error has essentially the same value (Figure 4—figure supplement 2). Solutions range from a network with zero recurrent connections (i.e. a purely feedforward scenario, shown in Figure 4a–c) to a network with strong direction-specific connections.
To break the degeneracy of solutions, we added an energetic cost to the reconstruction error. This reflects the idea that a biological network where the computation is only driven by long-range connections from upstream areas is likely to consume more metabolic energy than a network where the computation is processed through shorter-range recurrent connections. Our inference procedure minimizes the cost function:
where ${E}_{\text{ext}}$ is the total magnitude of external inputs (see Methods, Equation 31). The hyperparameter $\alpha $ sets the relative weight of these two terms. For very small values of $\alpha $, the algorithm mainly minimizes external inputs, and the resulting reconstruction of the dynamics is poor. For very large values of $\alpha $, multiple solutions that yield similar dynamics are found (see Figure 4c and Figure 4i). Figure 4—figure supplement 1 shows the results we obtained by using intermediate values of $\alpha $, which are small enough to yield a unique solution, but large enough to guarantee a good reconstruction of the dynamics. Interestingly, all solutions cluster in a small region of the parameter space, close to the bifurcation surface, in such a way that the parameter ${j}_{s}^{B}$ is close to its critical value (either slightly larger or smaller), ${j}_{s}^{A}$ is much smaller than its critical value, and ${j}_{a}$ is zero.
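The effect of the energetic penalty can be illustrated on a deliberately simplified, linearized caricature of the fitting problem (ours, not the paper's mean-field inference): a steady state $r=jr+\epsilon$ is reproduced exactly by a whole family of pairs $(j,\epsilon)$ with $\epsilon=(1-j)\,r_{\text{target}}$, so the reconstruction error alone cannot identify the parameters; following the text, $\alpha$ weights the reconstruction error relative to the input cost:

```python
import numpy as np

# Degenerate toy problem: any (j, eps) with eps = (1 - j)*r_target
# reproduces the target steady state exactly.
r_target = 1.0
js = np.linspace(0.0, 0.95, 96)     # recurrent strength (j < 1 for stability)
epss = np.linspace(0.0, 1.2, 241)   # external input magnitude
Jg, Eg = np.meshgrid(js, epss, indexing="ij")
R = Eg / (1.0 - Jg)                 # steady state of r = j*r + eps
E_rec = (R - r_target) ** 2         # reconstruction error
E_ext = Eg ** 2                     # energetic cost of external input

def best(alpha):
    """Minimize alpha*E_rec + E_ext on the grid (alpha weights E_rec)."""
    cost = alpha * E_rec + E_ext
    i, k = np.unravel_index(np.argmin(cost), cost.shape)
    return js[i], epss[k]

j_weak, eps_weak = best(1e-3)     # small alpha: inputs dominate the cost
j_strong, eps_strong = best(1.0)  # larger alpha: reconstruction dominates
```

In both cases the penalty favors strong recurrence; with small $\alpha$ the reconstructed steady state falls well short of the target, while a larger $\alpha$ recovers the target at the price of larger external inputs, mirroring the trade-off described above.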
We show the results obtained with two distinct values of $\alpha $ in Figure 4d and Figure 4g; the corresponding dynamics of the inferred external inputs in Figure 4e and Figure 4h; and the resulting dynamics of the order parameters from the mean field equations in Figure 4f and Figure 4i. In particular, Figure 4d corresponds to a smaller value of $\alpha $, i.e. a stronger penalty on large external inputs: the resulting coupling parameters are stronger than their critical value. Figure 4g corresponds to a larger value of $\alpha $: the strength of the couplings is below the critical line, yielding larger external inputs (Figure 4h) and a smaller reconstruction error (Figure 4i). The purely feedforward case is shown in Figure 4a–c for comparison. While a purely feedforward network requires a strong input tuned to map $B$ right before movement onset (Figure 4b), in our solution such an input is either absent (Figure 4e) or much weaker than the corresponding untuned input (Figure 4h). Our analysis therefore suggests that recurrent connectivity plays a major role in maintaining the degree of spatial modulation observed in the data.
Importantly, the same analysis applied to a second dataset recorded from a different macaque monkey performing the same task yielded qualitatively similar results (see Figure 4—figure supplement 3).
The model generates realistic neural and muscle activity
We have shown that the model is able to reproduce the dynamics of the order parameters computed from data. Here, we ask how the activity of single neurons in our model compares to data, and whether a readout of neural activity can generate realistic patterns of muscle activity. We simulated the dynamics of a network of 16,000 neurons. To each neuron $i$, we assigned the coordinates ${\theta}_{A}^{(i)},{\theta}_{B}^{(i)},{\eta}_{A}^{(i)},{\eta}_{B}^{(i)}$ so as to match the empirical distribution of coordinates (Equation 3 and Figure 5—figure supplement 3a). We added a noise term to the total current that each neuron is subject to, modelled as an Ornstein–Uhlenbeck process (see Methods, Equation 33).
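Such a noise term can be generated with the exact one-step update of an Ornstein–Uhlenbeck process; the time constant and amplitude below are illustrative placeholders of ours, not the values used in Equation 33:

```python
import numpy as np

def ou_noise(n_neurons, steps, dt=0.001, tau_n=0.05, sigma=0.5, seed=0):
    """Ornstein-Uhlenbeck noise: tau_n * dz = -z dt + sigma*sqrt(2*tau_n) dW.

    Uses the exact update over one step dt, so the stationary standard
    deviation equals sigma; tau_n and sigma are illustrative values.
    """
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau_n)                  # one-step decay factor
    z = np.zeros((steps, n_neurons))
    for t in range(1, steps):
        z[t] = (a * z[t - 1]
                + sigma * np.sqrt(1 - a * a) * rng.standard_normal(n_neurons))
    return z

noise = ou_noise(n_neurons=100, steps=5000)
```

Each column is a temporally correlated noise trace that can be added to the total input current of one model neuron at each integration step.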
First, we checked that the dynamics of the order parameters – previously computed by numerically integrating the analytical equations – is also correctly reproduced by simulations (Figure 5—figure supplement 1b). Figure 5—figure supplement 1 shows the case of a network with coupling parameters stronger than their critical value, where the dynamics is dominated by recurrent currents and is very sensitive to noise. The location of the bump undergoes a small diffusion around the value predicted by the mean field analysis (Figure 5—figure supplement 1c and Figure 5—figure supplement 1d).
Then, we computed tuning curves during the preparatory and execution epochs from simulations, and estimated the values ${\theta}_{A}^{(i)},{\theta}_{B}^{(i)},{\eta}_{A}^{(i)},{\eta}_{B}^{(i)}$ from the location and amplitude of the tuning functions. The reconstructed values of neurons' tuning parameters are consistent with the values initially assigned to them when we built the network (Figure 5—figure supplement 2b). We noticed (Figure 5—figure supplement 2c) that the quality of the cosine fit strongly correlates with the magnitude of ${\eta}_{A},{\eta}_{B}$, as we see in the data (Figure 1—figure supplement 2e–f): for neurons with a low value of ${\eta}_{A},{\eta}_{B}$, the tuning functions poorly resemble a cosine. Overall, tuning curves from simulations strongly resemble those from data (see Figure 5—figure supplement 3, and Discussion).
Next, we showed that the network activity closely resembles that from recordings. At the population level, we used canonical correlation analysis to identify common patterns of activity between simulations and recordings, and to show that they strongly correlate (see Figure 5a and Methods). At the level of single units, for each neuron in the data we selected a neuron in the model network with the closest values of the ${\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}$ variables. A side-by-side comparison of the time course of responses shows good qualitative agreement (Figure 5c). We also noted that both the activity from recordings and that from simulations exhibit sequential activity and rotational dynamics (Figure 5—figure supplement 4). Finally, a linear readout of the activity from simulations can generate realistic patterns of muscle activity, which closely match electromyographic (EMG) signals recorded from Monkey Bx during a center-out reaching movement (Figure 5b).
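Canonical correlation analysis between two population activity matrices (time by units) can be sketched with a minimal SVD-based implementation, projecting each matrix onto its top principal components for regularization; the synthetic data and parameter choices below are ours, not the paper's analysis pipeline:

```python
import numpy as np

def canonical_correlations(X, Y, k=10):
    """Top-k canonical correlations between data matrices X, Y (time x units).

    Each matrix is centered and projected onto its top-k left singular
    vectors (an orthonormal, whitened basis); the canonical correlations
    of the projections are the singular values of their cross-product.
    """
    def whiten(M):
        M = M - M.mean(axis=0)
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        return U[:, :k]
    Ux, Uy = whiten(X), whiten(Y)
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

# Synthetic example: two populations driven by three shared latent signals
rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 3))
X = shared @ rng.standard_normal((3, 40)) + 0.1 * rng.standard_normal((200, 40))
Y = shared @ rng.standard_normal((3, 30)) + 0.1 * rng.standard_normal((200, 30))
corrs = canonical_correlations(X, Y, k=5)
```

On this synthetic example the first three canonical correlations are close to one (the shared latents), while the remaining ones drop to chance level, which is the signature of common population-level patterns.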
So far in this section, we have discussed results from a network with strong couplings; Figure 5—figure supplement 5 shows that the dynamics of a purely feedforward network is almost indistinguishable from the strongly recurrent case, although canonical correlations between the activity of a purely feedforward network and the data are slightly lower during movement execution than those for networks with strong recurrent couplings (average canonical correlation of 0.74 for the purely feedforward network, and 0.82 for the strongly recurrent one). We also noticed that networks with coupling parameters below the bifurcation surface are robust to noise in the ${\theta}_{A}^{(i)},{\theta}_{B}^{(i)}$ variables. Conversely, the solutions above the bifurcation surface, which do not require tuned external inputs, are very sensitive to such noise, and the location of the bump of activity drifts towards a few discrete attractors. When the noise is weak, the drift happens on a time scale that is much longer than the time of movement execution; the larger the level of the noise, the faster the drift.
PCA subspaces dedicated to movement preparation and execution
In the previous sections, we considered the low-dimensional description of the dynamics given by the order parameters. Here, we use principal component analysis (PCA) to compare the dimensionality of the data and the network model. In particular, we ask whether our formalism can explain the orthogonality of the preparatory and movement-related linear subspaces. As previously reported in the literature (Kaufman et al., 2014; Elsayed et al., 2016), the top panel in Figure 6a shows that the top principal components of the preparatory activity explain most of the variance of the preparatory activity, but very little variance of the movement-related activity; vice versa, the top movement-related principal components explain very little variance of the preparatory activity (Figure 6a, bottom panel). The activity of our network model also displays this property (Figure 6b). We also quantified the level of orthogonality of the two subspaces using the alignment index as defined in Elsayed et al., 2016, which measures the percentage of variance of each epoch’s activity explained by both sets of PCs. Figure 6c shows that the alignment index is much smaller than the one computed between two subspaces drawn at random (random alignment index, explained in the Methods), in both data and network simulations. Note that a model with uniform degrees of participation (${\eta}_{A}={\eta}_{B}=1$) displays a higher overlap between the preparatory and movement-related subspaces, as shown in Figure 6—figure supplement 2c. In the Methods section, we explain how the dimensionality of the activity depends on the connectivity of the network.
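The alignment index can be computed as the movement-epoch variance captured by the top-$d$ preparatory PCs, normalized by the variance captured by the movement epoch's own top-$d$ PCs, in the spirit of Elsayed et al., 2016. A sketch on synthetic data (names and data are ours):

```python
import numpy as np

def alignment_index(X_prep, X_move, d=10):
    """Alignment of movement activity with the top-d preparatory PCs.

    Returns the ratio of movement-epoch variance captured by the top-d
    preparatory principal axes to the variance captured by the movement
    epoch's own top-d axes (0 = orthogonal subspaces, 1 = fully aligned).
    """
    Xp = X_prep - X_prep.mean(axis=0)
    Xm = X_move - X_move.mean(axis=0)
    C_move = Xm.T @ Xm / len(Xm)
    _, _, Vt = np.linalg.svd(Xp, full_matrices=False)
    P = Vt[:d].T                              # top-d preparatory axes
    var_in_prep_pcs = np.trace(P.T @ C_move @ P)
    var_in_own_pcs = np.sort(np.linalg.eigvalsh(C_move))[::-1][:d].sum()
    return var_in_prep_pcs / var_in_own_pcs

# Two epochs of activity lying in random (hence nearly orthogonal) subspaces
rng = np.random.default_rng(0)
T, N, d = 300, 40, 5
A = rng.standard_normal((T, d)) @ rng.standard_normal((d, N))
B = rng.standard_normal((T, d)) @ rng.standard_normal((d, N))
idx_same = alignment_index(A, A, d=5)   # identical epochs: index 1
idx_diff = alignment_index(A, B, d=5)   # unrelated epochs: low index
```

In the paper's analysis, the observed index is compared against the distribution obtained from randomly drawn subspaces (the random alignment index), rather than against the idealized extremes used in this sketch.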
Discussion
Orthogonal spaces dedicated to movement preparation and execution
Studies on the dynamics of motor cortical activity during delayed reaching tasks have shown that the primary motor cortex employs an ‘orthogonal but linked strategy’ (Kaufman et al., 2014; Elsayed et al., 2016) to coordinate planning and execution of movements.
In this work, we explored the hypothesis that this strategy emerges as a result of a specific recurrent functional architecture, in which synaptic connections store information about two distinct patterns of activity that underlie movement preparation and movement execution. We built a network model in which neurons are characterized by their selectivity properties in both the preparatory and movement-execution epochs (preferred direction and degree of participation), and in which synaptic connectivity is shaped by these selectivity properties in a Hebbian-like fashion.
A strong correlation between the selectivity properties of the preparatory and movement-related epochs would produce strongly correlated patterns of activity in these two intervals and a strong overlap between the respective PCA subspaces. We inferred the distribution of these tuning features from the data and showed that the correlation between the preparatory and movement-related patterns of activity is small enough to allow for almost orthogonal subspaces, which is thought to be important for the preparatory activity not to cause premature movement. At the same time, the correlation is nonzero, which allows the activity to flow from the preparatory to the movement-related subspace with minimal external inputs, as summarized in the next section.
Interplay between external and recurrent currents
We analytically described the temporal evolution of the population activity in the low-dimensional space defined by maps A and B in terms of a few order parameters, which can be easily computed from data. Different combinations of the strength of direction-specific recurrent connections and of tuned external inputs allow the model to accurately reproduce the dynamics of the order parameters from data. We argue that solutions that require weaker inputs from areas upstream of the motor cortex are favorable in terms of metabolic energy consumption. With the addition of a cost proportional to the magnitude of external inputs, we find solutions where recurrent connections are strong and direction-specific. In the resulting scenario, during movement preparation, an external input tuned to map A sustains a population-level activity localized in map A and pins the location of the peak of activity. During movement execution, the localized activity is sustained mostly by the recurrent connectivity; the correlation between preferred directions ${\theta}_{A}$ and ${\theta}_{B}$ allows the activity in map B during movement execution to be localized around the same location as it was in map A during movement preparation. The inferred strength of recurrent connectivity is close to the critical value above which the recurrent network can generate localized patterns of activity in the absence of tuned inputs. Solutions well beyond the bifurcation line require an implausible fine-tuning of the recurrent connections, as heterogeneity in the connectivity causes a systematic drift of the encoded direction of motion on a typical time scale of seconds – the larger the structural noise in the couplings, the faster the drift, as has been extensively studied in continuous attractor models (Tsodyks and Sejnowski, 1995; Zhang, 1996; Renart et al., 2003; Itskov et al., 2011; Seeholzer et al., 2019; Compte et al., 2000).
It has been shown that homeostatic mechanisms could compensate for the heterogeneity in cellular excitability and synaptic inputs to reduce systematic drifts of the activity (Renart et al., 2003), and that short-term synaptic facilitation in recurrent connections could also significantly improve the robustness of the model (Itskov et al., 2011) – even when combined with short-term depression (Seeholzer et al., 2019). While a full characterization of our model in the presence of structural heterogeneity is beyond the scope of this work, we note that solutions close to but below the bifurcation line are stable with respect to perturbations of the couplings. In this case, tuned inputs are present also during movement execution, but their magnitude is much weaker than that of the untuned ones: direction-specific couplings are strong and amplify the weak external inputs tuned to map B, therefore playing a major role in shaping the observed dynamics during movement execution.
Our prediction that external inputs are direction-specific during movement preparation but mostly nonspecific during movement execution needs to be tested experimentally. Interestingly, it agrees with several previous studies on the activity of the primary motor cortex during limb movement. In particular, Kaufman et al., 2016 showed that the changes in neural activity that characterize the transition from movement preparation to execution reflect when movement is made but are invariant to movement direction and type, in monkeys performing a reaching task; Inagaki et al., 2022 detected a large, movement-nonspecific thalamic input to the cortex just before movement onset, in mice performing a licking task. The authors of Nashef et al., 2019 used high-frequency stimulation to interfere with the normal flow of information through the cerebellar-thalamo-cortical (CTC) pathway in monkeys performing a center-out reaching task. This perturbation produced reversible motor deficits, preceded by substantial changes in the activity of motor cortical neurons around movement execution. Interestingly, the spatial tuning of motor cortical cells was unaffected, and their early preparatory activity was mostly intact. These results are in line with our prediction, if we interpret the condition-invariant inputs that we inferred during movement execution as thalamic inputs that are part of the CTC pathway. We speculate that the direction-specific inputs that we inferred during movement preparation have a different origin. Further simultaneous recordings of M1 and upstream regions, as well as measures of synaptic strength between motor cortical neurons, will be necessary to test our predictions.
Cosine tuning
While cosine tuning functions have been extensively used to describe the firing properties of motor cortical neurons (e.g. Georgopoulos et al., 1982), and they have also been hypothesized to be the optimal tuning profile for minimizing the expected errors in force production (Todorov, 2002), more recent work has emphasized that tuning functions in the motor cortex have heterogeneous shapes. Specifically, the authors of Lalazar et al., 2016 showed that tuning functions are well fitted by the sum of a cosine-modulated component and an unstructured component, often including terms with higher spatial frequency. They also argued that the unstructured component is key for a readout of motor cortical activity to reproduce EMG activity. In this work, we showed that a noisy recurrent network model based on cosine tuning reproduces well the ${R}^{2}$ coefficient of the cosine fit of tuning curves from data (see a comparison between Figure 1—figure supplement 2ef and Figure 5—figure supplement 2c). Moreover, tuning curves from simulations have heterogeneous shapes (Figure 5—figure supplement 3), including bimodal profiles, especially for neurons with low values of ${\eta}_{A},{\eta}_{B}$. Indeed, we showed that the ${R}^{2}$ of the cosine fit strongly correlates with the variables $\eta $, both in the data and in the model. Neurons with a low degree of participation have a low ${R}^{2}$ coefficient of the cosine fit and present heterogeneous tuning curves. We also showed that a linear readout of the activity of our model can very accurately reconstruct EMG activity. Finally, our model could easily be extended to incorporate other tuning functions. While we leave the analysis of a model with an arbitrary shape of the tuning function to future work, we do not expect our results to depend strongly on the specific shape of the tuning function.
Comparison with the ring model
The idea that the tuning properties of motor cortical neurons could emerge from direction-specific synaptic connections goes back to the work of Lukashin and Georgopoulos, 1993. However, it was with the theoretical analysis of the so-called ring model (Amari, 1977; BenYishai et al., 1995; Somers et al., 1995; Hansel and Sompolinsky, 1998) that localized patterns of activity were formalized as attractor states of the dynamics in networks with strongly specific recurrent connections. Related models were later used to describe the maintenance of internal representations of continuous variables in various brain regions (Zhang, 1996; Redish et al., 1996; McNaughton et al., 1996; Seung et al., 2000; Tsodyks, 1999; Camperi and Wang, 1998; Compte et al., 2000; Samsonovich and McNaughton, 1997; Stringer et al., 2002; Burak and Fiete, 2009) and were extended to allow for the storage of multiple continuous manifolds (Battaglia and Treves, 1998; Romani and Tsodyks, 2010; Monasson and Rosay, 2015) to model the firing patterns of place cells in the hippocampus of rodents exploring multiple environments. While our formalism builds on the same theoretical framework as these previous works, we would like to stress two main differences between our model and the ones previously considered in the literature. First, we studied the dynamic interplay between fluctuating external inputs and recurrent currents, which causes the activity to flow from the preparatory map to the movement-related one and, consequently, neurons’ tuning curves and order parameters to change over time, while at the population level the pattern of activity remains localized around the same location.
Second, we introduced an extra dimension representing the degree of participation of single neurons in the population pattern of activity; this is an effective way to introduce neuron-to-neuron variability in the responses, which decreases the level of orthogonality between the preparatory and movement-related subspaces and yields tuning curves whose shapes resemble those computed from data – in contrast with the classic ring model, where all tuning curves have the same shape.
Representational vs dynamical system approaches
There has been a debate as to whether neuronal activity in motor cortex is better described by so-called ‘representational’ models or by dynamical systems models (Churchland and Shenoy, 2007; Michaels et al., 2016). Representational models (Michaels et al., 2016; Inoue et al., 2018) are models in which neuronal firing rates are related to movement parameters (Evarts, 1968; Georgopoulos et al., 1982; Georgopoulos et al., 1984; Paninski et al., 2004; Moran and Schwartz, 1999; Smith et al., 1975; HeppReymond et al., 1978; Cheney and Fetz, 1980; Kalaska et al., 1989; Taira et al., 1996; Cabel et al., 2001). Dynamical systems models (Sussillo et al., 2015) are instead recurrent neural networks whose synaptic connectivity is trained to produce a given pattern of muscle activity. Such models have been argued to better reproduce the dynamical patterns of population activity in motor cortex (Michaels et al., 2016).
In our model, firing rates are described by a system of coupled ODEs, and the synaptic connectivity is built from the neuronal tuning properties, in the spirit of the classical ring model discussed above, and of the model introduced by Lukashin and Georgopoulos, 1993. Our model is thus a dynamical system where kinematic parameters can be decoded from the population activity.
Importantly, our model is constrained to reproduce the dynamics of a few order parameters that are a lowdimensional representation of the activity of recorded neurons. In contrast to kinematicencoding models, our model can recapitulate the heterogeneity of singleunit responses. Moreover, as in trained recurrent network models, a linear readout of the network activity can reproduce realistic muscle signals.
The advantage of our model with respect to a trained RNN is that it yields a low-rank connectivity matrix that is simple enough to allow for analytical tractability of the dynamics. The model can be used to test specific hypotheses on the relationship between network connectivity, external inputs, and neural dynamics, and on the learning mechanisms that may lead to the emergence of a given connectivity structure. The model is also helpful to illustrate the problem of degeneracy of network models. An interesting future direction would be to compare the connectivity matrices of trained RNNs and of our model.
Extension to modeling activity underlying more complex tasks
Neurons’ directional tuning properties have been shown to be influenced by many contextual factors that we neglected in our analysis (HeppReymond et al., 1999; Muir and Lemon, 1983), to depend on the acquisition of new motor skills (Paz et al., 2003; Li et al., 2001; Wise et al., 1998), and to depend on other features of movement, such as the shoulder abduction/adduction angle even for similar hand kinematic profiles (Scott and Kalaska, 1997; Scott et al., 2001), or the speed of movement for similar trajectories (Churchland and Shenoy, 2007). A large body of work (Scott et al., 2001; Ajemian et al., 2000; Gribble and Scott, 2002; HeppReymond et al., 1999; Todorov, 2000; Holdefer and Miller, 2002; Sergio et al., 2005) has shown that the activity in the primary motor cortex covaries with many parameters of movement other than the hand kinematics – for a review, see Scott, 2003. More recent studies (Michaels et al., 2016; Russo et al., 2018; Sergio et al., 2005; Schroeder et al., 2021) have also suggested that the largest signals in motor cortex may not correlate with task-relevant variables at all.
Our model can be extended to more realistic scenarios in several ways. A simplifying assumption we made is that the task can be clearly separated into one preparatory and one movement-related phase. A possible extension is one where the motor action is composed of a sequence of epochs, corresponding to a sequence of maps in our model. It will be interesting to study the role of asymmetric connections in storing a sequence of maps. Such a network model could be used to study the storage of motor motifs in the motor cortex (Logiaco et al., 2021); external inputs could then combine these building blocks to compose complex actions. Moreover, incorporating variability in the degree of symmetry of the connections could allow the model to capture features that are not currently included, such as the speed of movement.
In summary, we proposed a simple model that can explain recordings during a reaching task. It provides a scaffold upon which more sophisticated models could be built, to explain neural activity underlying more complex tasks.
Methods
Animals
Figures 1—6 are based on electrophysiological recordings from the primary motor cortex of Macaque monkey Rk (141 units, 391 trials), while in Figure 4—figure supplement 3 we analyzed recordings from Macaque monkey Rj (57 units, 191 trials). For details on the electrophysiology and on the multielectrode array implants, see Russo et al., 2018.
Circular statistics
To define the statistical measures of angular variables (Fisher, 1995; Berens, 2009), let us consider a set of angles ${\{{\theta}_{i}\}}_{i}$, and visualize them as vectors on the plane:
$${\mathbf{v}}_{i}=(\mathrm{cos}\,{\theta}_{i},\mathrm{sin}\,{\theta}_{i}).$$
The mean vector is then defined as
$$\overline{\mathbf{v}}=\frac{1}{N}\sum_{i=1}^{N}{\mathbf{v}}_{i},$$
from which we can get the mean angular direction $\overline{\theta}$ using the four-quadrant inverse tangent function. The length of the mean resultant vector,
$$R=\|\overline{\mathbf{v}}\|,$$
is a measure of how concentrated the data sample is around the mean direction: the closer $R$ is to 1, the more concentrated the data. We quantify the spread in a data set through the circular variance, given by
$$V=1-R,$$
which ranges from 0 to 1. To compute the correlation between two sets of angular variables, ${\{{\theta}_{i}\}}_{i}$ and ${\{{\varphi}_{i}\}}_{i}$, we used the correlation coefficient (Jammalamadaka and SenGupta, 2001) defined as:
$$\rho=\frac{\sum_{i}\mathrm{sin}({\theta}_{i}-\overline{\theta})\,\mathrm{sin}({\varphi}_{i}-\overline{\varphi})}{\sqrt{\sum_{i}{\mathrm{sin}}^{2}({\theta}_{i}-\overline{\theta})\sum_{i}{\mathrm{sin}}^{2}({\varphi}_{i}-\overline{\varphi})}},$$
where $\overline{\theta}$ and $\overline{\varphi}$ are the mean angular directions of the two sets.
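These standard circular statistics can be computed compactly with complex exponentials; the sketch below implements the quantities defined above (the function names are ours).

```python
import numpy as np

def circ_mean_R(theta):
    """Mean angular direction and resultant length R of a sample of angles."""
    z = np.exp(1j * np.asarray(theta)).mean()
    return np.angle(z), np.abs(z)

def circ_variance(theta):
    """Circular variance V = 1 - R, ranging from 0 to 1."""
    return 1.0 - circ_mean_R(theta)[1]

def circ_corr(theta, phi):
    """Circular correlation coefficient (Jammalamadaka & SenGupta, 2001)."""
    t0, _ = circ_mean_R(theta)
    p0, _ = circ_mean_R(phi)
    num = np.sum(np.sin(theta - t0) * np.sin(phi - p0))
    den = np.sqrt(np.sum(np.sin(theta - t0) ** 2) * np.sum(np.sin(phi - p0) ** 2))
    return num / den
```

Taking the angle and modulus of the mean unit vector is equivalent to the four-quadrant inverse tangent and resultant-length definitions above.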
Preparatory and movementrelated epochs
In our analysis, we characterized neurons’ tuning properties during movement preparation and execution. The choice of considering only two epochs is an approximation based on the observation that neurons’ preferred directions are more conserved within the delay period alone and within movement execution alone than across periods (Figure 1—figure supplement 1). In Figure 1—figure supplement 1, we considered three temporal epochs: the delay period (blue lines), movement execution (red), and the whole duration of the task (grey). We binned each epoch into ${N}_{W}$ time bins of length 160 ms: the preparatory and execution epochs consist of ${N}_{W}=3$ bins each, while the whole task consists of ${N}_{W}=7$ bins. For each neuron, we first computed its preferred direction in each time bin by fitting the trial-averaged and time-averaged firing rate with a cosine function. Then, for each epoch and each neuron, we computed the circular variance of preferred directions across the ${N}_{W}$ bins. The cumulative distribution of circular variances is shown in Figure 1—figure supplement 1a: the variability of preferred direction within the delay epoch and within movement execution is nonzero, although their distributions are skewed towards zero.
For each epoch, we computed the median variability in preferred direction and used a bootstrap analysis to check its statistical significance. Let us consider the matrix of single-trial activity of a given neuron, for a specific condition, and during a given epoch; the size of the matrix is ${N}_{W}\times n$, where $n$ is the number of trials per condition (in the data, $n$ ranges between 41 and 47). We repeated the same procedure for all of the 8 conditions, and z-scored the 8 activity matrices across conditions (this is done because the average activity changes across time bins, and in the following we shuffle time bins). To check whether tuning curves are exactly conserved across time bins, we generated $B=1000$ bootstrap samples of the activity matrix for a given condition, where each entry is chosen at random (with repetition) among the possible ${N}_{W}\times n$ entries of the original matrix. For each bootstrap sample matrix, we computed the variance of preferred direction as we did for the original matrix. This was repeated for all neurons, all epochs, and all conditions. The cumulative distribution of all variances of preferred direction is shown in Figure 1—figure supplement 1b. Then, for each bootstrap sample, we computed the corresponding median variance of preferred direction; the histogram is shown in Figure 1—figure supplement 1c. Figure 1—figure supplement 1c shows that the median variance of preferred directions from the data is significantly larger than the one from the bootstrap samples. This suggests that neurons do change their preferred direction within epochs, even though major changes happen between epochs.
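The bootstrap null described above can be sketched as follows for a single neuron; the data layout (bins x trials x conditions) and the function names are our own, and the z-scoring step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def preferred_direction(rates, directions):
    """Preferred direction as the argument of the first Fourier component
    of the tuning curve (equivalent to a least-squares cosine fit for
    evenly spaced target directions)."""
    return np.angle(np.sum(rates * np.exp(1j * directions)))

def pd_circular_variance(act, directions):
    """act : (n_bins, n_trials, n_conditions) activity of one neuron.
    Returns the circular variance of the per-bin preferred directions."""
    pds = [preferred_direction(act[b].mean(0), directions) for b in range(len(act))]
    R = np.abs(np.exp(1j * np.array(pds)).mean())
    return 1.0 - R

def bootstrap_null(act, directions, n_boot=1000):
    """Null distribution under exact conservation of tuning: resample the
    (bin, trial) entries with replacement, separately for each condition,
    destroying any systematic bin-to-bin change in preferred direction."""
    n_bins, n_trials, n_cond = act.shape
    flat = act.reshape(n_bins * n_trials, n_cond)
    out = np.empty(n_boot)
    for b in range(n_boot):
        samp = np.empty_like(act)
        for c in range(n_cond):
            idx = rng.integers(0, n_bins * n_trials, size=(n_bins, n_trials))
            samp[:, :, c] = flat[idx, c]
        out[b] = pd_circular_variance(samp, directions)
    return out
```

A neuron with conserved tuning gives a small circular variance compatible with the null; a neuron whose preferred direction rotates across bins gives a much larger one.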
Analysis of the model
We studied the rate model with couplings defined by (5) with two complementary approaches. First, an analytic approximation based on mean-field arguments, valid in the limit of large network size, allowed us to derive a low-dimensional description of the network dynamics in terms of a few latent variables and to fit the model parameters to the data. Next, we checked that simulations of the dynamics of a network of $N={10}^{4}$ neurons reproduce the results that we derived in the limit $N\to \mathrm{\infty}$.
The mean-field equations are derived following the methods introduced in BenYishai et al., 1995; Romani and Tsodyks, 2010. In the limit where the number of neurons is large, the average activity of a neuron with coordinates ${\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}$ is described by Equations 1; 2, where we have defined $x=\{{\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}\}$. In order to derive a lower-dimensional description of the dynamics, we rewrite (1) in terms of the average activity rate (14) and of the second Fourier components of the activity rate modulated by ${\eta}_{A/B}$:
The phase ${\psi}_{A(/B)}$ is defined so that the parameter ${r}_{A(/B)}$ is a real nonnegative number, yielding Equation 14 for ${r}_{A(/B)}$, and the following equation for the phase ${\psi}_{A(/B)}$, representing the position of the peak of the activity profile in map $A(/B)$:
From (1), we see that the order parameters evolve in time according to the following set of equations,
where the total input (2) is rewritten as:
Stationary states for homogeneous external inputs
We first characterize the model by studying the fixed points of the network dynamics when the network is subject to a constant and homogeneous external input, focusing on the scenario where the joint distribution (3) of ${\theta}_{A},{\theta}_{B}$ is of the form
which fits the empirical distribution of the data well for $x=2/3$ (Figure 1—figure supplement 2). For now, we leave the distributions ${\rho}_{pA}({\eta}_{A})$ and ${\rho}_{pB}({\eta}_{B})$ unspecified. We first consider the case where the external input to the network is a constant that is independent of ${\theta}_{A},{\theta}_{B}$:
The stationary solutions of (1,16) are of the form:
where we have defined
Here, $\{{r}_{0},{r}_{A},{r}_{B}\}$ are solutions of the system (15) with the left-hand side set to zero. The second term on the r.h.s. in the last equation of (20) is obtained from (16) by observing that in the stationary state ${\psi}_{A}={\psi}_{B}$ if either ${j}_{a}>0$ or if ${\theta}_{A}$ and ${\theta}_{B}$ are correlated (as we assume here). As in BenYishai et al., 1995; Hansel and Sompolinsky, 1998, we can distinguish broad from narrow activity profiles. The term broad activity profile refers to the scenario where the activity of all the neurons is above threshold, the dynamics is linear, and the stationary state reduces to:
By inserting the above equation in (14), we find that the only solution is homogeneous over the maps ${\theta}_{A}$ and ${\theta}_{B}$:
where the notation $\langle \cdot \rangle$ represents an average over the measure $d\mu $, i.e.
First, we notice that a nonzero homogeneous state is present if
Then, the stability of this state with respect to a small perturbation $\{\delta {r}_{0},\delta {r}_{A},\delta {r}_{B}\}$ can be studied by linearizing (15) around the stationary solution, at fixed ${\psi}_{A}={\psi}_{B}=0$. The resulting Jacobian matrix is
where
The homogeneous solution (22) is stable if ${j}_{0}<1$ and if the couplings parameters $\{{j}_{s}^{A},{j}_{s}^{B},{j}_{a}\}$ satisfy the following system of inequalities:
If ${j}_{0}\ge 1$, the system undergoes an amplitude instability. For values of $\{{j}_{s}^{A},{j}_{s}^{B},{j}_{a}\}$ that exceed the threshold implicitly defined by (23), there is a region of the four-dimensional space $({\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B})$ where the total input ${I}^{\text{tot}}$ (19) is negative. The dynamics is no longer linear, and the activity profile at the fixed point is localized around a particular direction and characterized by positive stationary values of the order parameters ${r}_{A},{r}_{B}$. This ‘bump’ of activity can be localized either more strongly in map A (${r}_{A}>{r}_{B}>0$), in map B (${r}_{B}>{r}_{A}>0$), or at the same level in both maps (${r}_{A}={r}_{B}>0$). Since maps A and B are correlated according to (3), the location of the stationary bump of activity is constrained to be the same in the two maps: ${\psi}_{A}={\psi}_{B}\equiv \psi $, with arbitrary $\psi $. The asymmetric term in the connectivity, modulated by the parameter ${j}_{a}$, further contributes to aligning the location of the bump in map B to the one in map A. The system can relax to a continuous manifold of fixed points parameterized by $\psi $ that are marginally stable. In this continuous attractor regime, the system can store in memory any direction of motion $\psi $, as the activity is localized in the absence of tuned inputs.
Examples of two-dimensional cross-sections of the four-dimensional activity profile in the marginal phase are shown in Figure 2b. Figure 2c shows one-dimensional cross-sections of the activity as a function of ${\theta}_{A}$ or ${\theta}_{B}$, for different values of ${\eta}_{A}$ and ${\eta}_{B}$. These correspond to the tuning curves of the model for stationary external inputs. The tuning profile can have either a full cosine modulation or a rectified-cosine modulation, depending on the values of ${\eta}_{A},{\eta}_{B}$. The plots in Figure 1fg correspond to the activity as a function of ${\theta}_{A},{\theta}_{B}$ in the case of a discrete number of neurons – each neuron with a different value of ${\eta}_{A},{\eta}_{B}$.
Timedependent external input
The activity of motor cortical neurons shows no signature of being in a stationary state but instead displays complex transients. To model the data, we assumed that the neurons respond both to recurrent inputs and to fluctuating external inputs that can be either homogeneous or tuned to ${\theta}_{A},{\theta}_{B}$, with peak at a constant location ${\mathrm{\Phi}}_{A}={\mathrm{\Phi}}_{B}\equiv \mathrm{\Phi}$ (Equation 6). The total input (2) that neurons are subject to at a given time $t$ is:
where
In order to draw a correspondence between the model and the data, we note that the activity of each neuron in the data corresponds to the activity rate at a specific coordinate ${\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}$ in the model. From Equations 1 and 24 we see that, if we assume that changes in the external inputs happen on a time scale larger than $\tau $, the modulation of the activity profile in map A at each time $t$ is $\mathrm{cos}({\theta}_{A}-\mathrm{\Phi})$ with amplitude proportional to ${\eta}_{A}$, while the modulation of the activity profile in map B is $\mathrm{cos}({\theta}_{B}-\mathrm{\Phi})$ with amplitude proportional to ${\eta}_{B}$. Accordingly, for each recorded neuron, ${\theta}_{A}$ and ${\theta}_{B}$ represent the location of the peak of the neuron’s tuning curve computed during the preparatory and movement-related epoch, respectively, while ${\eta}_{A}$ and ${\eta}_{B}$ are proportional to the amplitudes of the tuning curves.
Fitting the model to the data
Neurons’ activity rates were computed by smoothing the spike trains with a Gaussian kernel with s.d. of 25 ms and averaging them across all trials with the same condition; ${r}_{k}^{(i)}(t)$ denotes the rate of neuron $i$ at time $t$ for condition $k$, each condition corresponding to one of the 8 angular locations of the target on the screen. Since trials had highly variable length, we normalized the responses along the temporal dimension before averaging them over trials, as follows. We divided the activity into three temporal intervals: from target onset to the go cue; from the go cue to the start of the movement; from the start of the movement to the end of the movement. For each interval, we normalized the response times to the average length of the interval across trials. We then aggregated the three intervals together. We defined the preparatory and execution epochs – denoted by A and B – as two 300 ms time intervals beginning, respectively, 100 ms after target onset and 50 ms before the start of the movement, in line with Elsayed et al., 2016. Figure 3—figure supplement 1 shows that our results do not change qualitatively when the lengths of the preparatory and execution intervals are increased. For each neuron $i$, we fitted the activity rate averaged across time within each epoch as a function of the angular position $\mathrm{\Phi}$ of the target with a cosine function:
where the parameters ${\theta}_{A}^{(i)}$ and ${\theta}_{B}^{(i)}$ represent the neuron’s preferred direction during the preparatory and execution epochs. In our model, the directional modulation of the rates, see (25), is proportional to ${\eta}_{A/B}$, which measures how strongly the neuron participates in the two epochs of movement; hence, we defined ${\eta}_{A/B}^{(i)}$ to be proportional to the amplitude of the tuning curve:
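The cosine fit can be implemented as a least-squares problem; for evenly spaced targets it reduces to taking the first Fourier component of the tuning curve. The function below is a sketch under that assumption (the names are ours), returning the preferred direction, the amplitude used to define $\eta $, the baseline, and the ${R}^{2}$ of the fit.

```python
import numpy as np

def fit_cosine_tuning(rates, targets):
    """Least-squares cosine fit  r(Phi) ~ b + a * cos(Phi - theta).

    rates   : (n_conditions,) epoch-averaged rate of one neuron
    targets : (n_conditions,) angular positions Phi of the targets,
              assumed evenly spaced around the circle
    Returns (theta, a, b, R2).
    """
    # First Fourier component of the tuning curve
    z = 2.0 / len(targets) * np.sum(rates * np.exp(1j * targets))
    theta, a = np.angle(z), np.abs(z)
    b = rates.mean()
    pred = b + a * np.cos(targets - theta)
    ss_res = np.sum((rates - pred) ** 2)
    ss_tot = np.sum((rates - b) ** 2)
    return theta, a, b, 1.0 - ss_res / ss_tot
```

For a perfectly cosine-tuned neuron the fit recovers the preferred direction and amplitude exactly, with ${R}^{2}=1$; heterogeneous tuning curves yield lower ${R}^{2}$ values.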
The scatter plot of ${\eta}_{A},{\eta}_{B}$ (Figure 1e) shows an outlier with ${\eta}_{B}\sim 1$. We checked that our results hold if we discard that point. Figure 1—figure supplement 2 shows that both ${\eta}_{A}$ and ${\eta}_{B}$ strongly correlate with the ${R}^{2}$ coefficient of the cosine fit. In other words, neurons with higher values of ${\eta}_{A},{\eta}_{B}$ are the ones whose tuning curves more strongly resemble a cosine (this holds true in our simulations too, see Figure 5—figure supplement 2c, and Discussion). The order parameters (14) are computed at each time $t$ by approximating the integrals with the sums:
where $N$ is the number of neurons and ${n}_{c}=8$ is the number of conditions. The angular location of the localized activity ${\psi}_{A/B}^{(k)}(t)$ can be computed from (14) as
However, this estimate is strongly affected by the heterogeneity in the empirical distribution of ${\theta}_{A},{\theta}_{B}$, which deviates from the rotational symmetry of the model. This is why in Figure 2 we approximated ${\psi}_{A/B}^{(k)}(t)$ by the $k$-th angular location of the target on the screen. We checked that computing ${\psi}_{A/B}^{(k)}(t)$ with either method does not affect the dynamics of the order parameters ${r}_{A/B}^{\text{data}}(t)$.
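The empirical order parameters can be sketched as follows for a single condition and time point; since the exact normalization conventions of (14) are not reproduced here, the prefactors and function names below are our assumptions.

```python
import numpy as np

def order_parameters(rates, th_A, th_B, eta_A, eta_B, psi):
    """Empirical order parameters for one condition at one time point
    (a sketch of the sums approximating the mean-field integrals).

    rates : (N,) trial-averaged rates of the recorded neurons
    psi   : angular location of the target for this condition, used in
            place of the decoded bump position
    """
    N = len(rates)
    r0 = rates.mean()
    # Modulation of the activity profile in each map, evaluated at psi
    rA = np.sum(eta_A * rates * np.cos(th_A - psi)) / N
    rB = np.sum(eta_B * rates * np.cos(th_B - psi)) / N
    # Decoded bump locations (population-vector style estimate)
    psi_A = np.angle(np.sum(eta_A * rates * np.exp(1j * th_A)))
    psi_B = np.angle(np.sum(eta_B * rates * np.exp(1j * th_B)))
    return r0, rA, rB, psi_A, psi_B
```

For a synthetic population whose activity is modulated only in map A, this estimator returns a positive $r_{A}$, an $r_{B}$ near zero, and a decoded $\psi_{A}$ at the bump location.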
We assumed that the observed dynamics of the order parameters ${r}_{0}(t),{r}_{A}(t),{r}_{B}(t)$ obeys Equations 15; 16 with time-dependent external inputs of the form (6). We inferred the values of the parameters ${\{{C}_{0}(t),{C}_{A}(t),{C}_{B}(t),{\epsilon}_{A}(t),{\epsilon}_{B}(t)\}}_{t}$ of the external currents and ${j}_{0},{j}_{s}^{A},{j}_{s}^{B},{j}_{a}$ of the coupling matrix that allow us to reconstruct the dynamics of the order parameters ${r}_{0},{r}_{A},{r}_{B}$ computed from the data; since this inference problem is underdetermined, we required as a further constraint that the model reconstruct the dynamics of the following two additional order parameters:
In this way, for given coupling parameters ${j}_{0},{j}_{s}^{A},{j}_{s}^{B},{j}_{a}$, we can uniquely identify the external currents parameters that produced the observed dynamics. Still, an equally good reconstruction of ${r}_{0},{r}_{A},{r}_{B},{r}_{0A},{r}_{0B}$ can be obtained for different choices of the coupling parameters. Hence, we inferred the model parameters by minimizing a cost function composed of two terms: one proportional to the reconstruction error of the temporal evolution of the order parameters, and the other representing an energetic cost that penalizes large external inputs.
Fitting the model to the data: details
The fitting procedure was divided into the following steps:
The time interval $T$ going from target onset to the end of the movement was binned into $\mathrm{\Delta}t=5$ ms time bins: $T=\{\mathrm{\Delta}{t}_{1},\mathrm{\Delta}{t}_{2},\dots ,\mathrm{\Delta}{t}_{T}\}$.
The couplings parameters were initialized to zero: ${j}_{0}=0,{j}_{s}^{A}=0,{j}_{s}^{B}=0,{j}_{a}=0$. At the first time bin, the external currents parameters were initialized to zero:
 ${C}_{0}(\mathrm{\Delta}{t}_{1})={C}_{A}(\mathrm{\Delta}{t}_{1})={C}_{B}(\mathrm{\Delta}{t}_{1})={\epsilon}_{A}(\mathrm{\Delta}{t}_{1})={\epsilon}_{B}(\mathrm{\Delta}{t}_{1})=0$
and the reconstructed order parameters (${r}_{0},\dots $) were initialized to the order parameters estimated from the data (${r}_{0}^{\text{data}},\dots $):
 $\begin{array}{ll}& {r}_{0}(\mathrm{\Delta}{t}_{1})={r}_{0}^{\text{data}}(\mathrm{\Delta}{t}_{1}),\phantom{\rule{1em}{0ex}}{r}_{A}(\mathrm{\Delta}{t}_{1})={r}_{A}^{\text{data}}(\mathrm{\Delta}{t}_{1}),\phantom{\rule{1em}{0ex}}{r}_{B}(\mathrm{\Delta}{t}_{1})={r}_{B}^{\text{data}}(\mathrm{\Delta}{t}_{1}),\\ & {r}_{0A}(\mathrm{\Delta}{t}_{1})={r}_{0A}^{\text{data}}(\mathrm{\Delta}{t}_{1}),\phantom{\rule{1em}{0ex}}{r}_{0B}(\mathrm{\Delta}{t}_{1})={r}_{0B}^{\text{data}}(\mathrm{\Delta}{t}_{1}).\end{array}$
For each time step $\mathrm{\Delta}{t}_{i},i=2,\mathrm{\dots},T$:
We started from the reconstructed order parameters at the previous time step:
${r}_{0}(\mathrm{\Delta}{t}_{i-1}),{r}_{A}(\mathrm{\Delta}{t}_{i-1}),{r}_{B}(\mathrm{\Delta}{t}_{i-1}),{r}_{0A}(\mathrm{\Delta}{t}_{i-1}),{r}_{0B}(\mathrm{\Delta}{t}_{i-1}),$ and we let the dynamical system (15, 29) with external current parameters ${C}_{0},{C}_{A},{C}_{B},{\epsilon}_{A},{\epsilon}_{B}$ evolve for $\mathrm{\Delta}t=5$ ms to estimate the order parameters at the current time step:
${r}_{0}(\mathrm{\Delta}{t}_{i}),{r}_{A}(\mathrm{\Delta}{t}_{i}),{r}_{B}(\mathrm{\Delta}{t}_{i}),{r}_{0A}(\mathrm{\Delta}{t}_{i}),{r}_{0B}(\mathrm{\Delta}{t}_{i}).$
We inferred the values of the external current parameters by minimizing the reconstruction error:
 (30) $\begin{array}{ll}& {E}_{\mathrm{\Delta}t}({C}_{0},{C}_{A},{C}_{B},{\epsilon}_{A},{\epsilon}_{B})=\frac{{[{r}_{0}(\mathrm{\Delta}t)-{r}_{0}^{\text{data}}(\mathrm{\Delta}t)]}^{2}}{\frac{1}{T}\sum _{i}{r}_{0}^{\text{data}}(\mathrm{\Delta}{t}_{i})}+\frac{{[{r}_{A}(\mathrm{\Delta}t)-{r}_{A}^{\text{data}}(\mathrm{\Delta}t)]}^{2}}{\frac{1}{T}\sum _{i}{r}_{A}^{\text{data}}(\mathrm{\Delta}{t}_{i})}\\ & +\frac{{[{r}_{B}(\mathrm{\Delta}t)-{r}_{B}^{\text{data}}(\mathrm{\Delta}t)]}^{2}}{\frac{1}{T}\sum _{i}{r}_{B}^{\text{data}}(\mathrm{\Delta}{t}_{i})}+\frac{{[{r}_{0A}(\mathrm{\Delta}t)-{r}_{0A}^{\text{data}}(\mathrm{\Delta}t)]}^{2}}{\frac{1}{T}\sum _{i}{r}_{0A}^{\text{data}}(\mathrm{\Delta}{t}_{i})}+\frac{{[{r}_{0B}(\mathrm{\Delta}t)-{r}_{0B}^{\text{data}}(\mathrm{\Delta}t)]}^{2}}{\frac{1}{T}\sum _{i}{r}_{0B}^{\text{data}}(\mathrm{\Delta}{t}_{i})},\end{array}$
that quantifies the difference between the order parameters estimated from the data and the reconstructed ones; note that the dependence of the cost function $E$ on ${C}_{0},{C}_{A},{C}_{B},{\u03f5}_{A},{\u03f5}_{B}$ is implicitly contained in the reconstructed order parameters. We minimized the cost function (30) by using an interior point method algorithm [Byrd et al., 1999] starting from the initial condition
 ${C}_{0}(\mathrm{\Delta}{t}_{i-1}),{C}_{A}(\mathrm{\Delta}{t}_{i-1}),{C}_{B}(\mathrm{\Delta}{t}_{i-1}),{\epsilon}_{A}(\mathrm{\Delta}{t}_{i-1}),{\epsilon}_{B}(\mathrm{\Delta}{t}_{i-1});$
we imposed that ${\epsilon}_{A}>0,{\epsilon}_{B}>0$ and added an ${L}_{1}$ regularization term on the external inputs, to avoid inferring pathologically large positive and negative inputs that balance each other. This favors solutions with smaller external inputs and nonzero reconstruction error over solutions with larger inputs and almost-zero reconstruction error.
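The per-bin minimization of step 3 can be sketched as follows. This is a minimal illustration under stated assumptions: a linear relaxation stands in for the mean-field dynamics of Equations 15; 29, the helper names `evolve`, `bin_cost`, and `fit_bin` are ours, and the per-parameter normalizations of (30) are reduced to a generic `norm` vector; only the structure (one-bin integration, normalized squared error, $L_1$ penalty, positivity of $\epsilon_A,\epsilon_B$) mirrors the text.

```python
import numpy as np
from scipy.optimize import minimize

TAU, DT = 0.025, 0.005  # time constant (25 ms) and bin width (5 ms), as in the text

def evolve(r_prev, currents):
    """One 5-ms Euler step of a stand-in relaxation dynamics dr/dt = (-r + I)/tau.
    The actual model integrates the mean-field Equations 15; 29; this linear
    surrogate only illustrates the structure of the fitting loop."""
    return r_prev + (DT / TAU) * (-r_prev + currents)

def bin_cost(currents, r_prev, r_data, norm, lam=1e-3):
    """Normalized reconstruction error, as in (30), plus an L1 penalty on the
    external currents to discourage large inputs that balance each other."""
    r_pred = evolve(r_prev, currents)
    err = np.sum((r_pred - r_data) ** 2 / norm)
    return err + lam * np.sum(np.abs(currents))

def fit_bin(r_prev, r_data, norm, x0):
    # the last two entries play the role of epsilon_A, epsilon_B: constrained positive
    bounds = [(None, None)] * 3 + [(0, None)] * 2
    res = minimize(bin_cost, x0, args=(r_prev, r_data, norm), bounds=bounds)
    return res.x

# toy usage: five order parameters, one bin, unit normalizations
r_prev = np.zeros(5)
r_data = np.array([0.1, 0.05, 0.0, 0.02, 0.01])
norm = np.ones(5)
currents = fit_bin(r_prev, r_data, norm, x0=np.zeros(5))
```

In practice, this inner fit is run once per 5-ms bin, warm-started from the currents of the previous bin as described in the text.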
The external currents inferred in step 3 depend on our initial choice of the coupling parameters ${j}_{0},{j}_{s}^{A},{j}_{s}^{B},{j}_{a}$.
Using step 3, the values of the coupling parameters ${j}_{0},{j}_{s}^{A},{j}_{s}^{B},{j}_{a}$ are inferred by minimizing the cost function
 ${E}_{\text{tot}}=\alpha {E}_{\text{rec}}+{E}_{\text{ext}}$
composed of two terms: the reconstruction error and a term that favors small external currents:
 (31) $\begin{array}{ll}{E}_{\text{rec}}=& \frac{1}{T}\sum _{i=2}^{T}\left\{{E}_{\mathrm{\Delta}{t}_{i}}\left[{C}_{0}(\mathrm{\Delta}{t}_{i}),{C}_{A}(\mathrm{\Delta}{t}_{i}),{C}_{B}(\mathrm{\Delta}{t}_{i}),{\u03f5}_{A}(\mathrm{\Delta}{t}_{i}),{\u03f5}_{B}(\mathrm{\Delta}{t}_{i})\right]\right\},\\ {E}_{\text{ext}}=& \frac{1}{T}\sum _{i=2}^{T}\left[{C}_{0}(\mathrm{\Delta}{t}_{i})+{C}_{A}(\mathrm{\Delta}{t}_{i})+{C}_{B}(\mathrm{\Delta}{t}_{i})+{\u03f5}_{A}(\mathrm{\Delta}{t}_{i})+{\u03f5}_{B}(\mathrm{\Delta}{t}_{i})\right].\end{array}$
The minimization is done using a surrogate optimization algorithm [Gutmann, 2001].
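The nested structure of step 4 can be sketched as follows; a coarse deterministic grid search stands in here for the surrogate optimization of Gutmann, 2001, and `inner_fit` is a hypothetical smooth stand-in for the actual per-bin current fits.

```python
import itertools
import numpy as np

def inner_fit(couplings):
    """Stand-in for step 3: for a candidate coupling vector (j0, jsA, jsB, ja),
    return the average reconstruction error E_rec and input cost E_ext.
    A smooth toy function replaces the actual per-bin fits, purely to
    illustrate the outer loop."""
    j = np.asarray(couplings, dtype=float)
    e_rec = np.sum((j - np.array([0.5, 1.0, 2.0, 0.1])) ** 2)  # toy E_rec
    e_ext = np.sum(np.abs(j))                                  # toy E_ext
    return e_rec, e_ext

def total_cost(couplings, alpha=1.0):
    """E_tot = alpha * E_rec + E_ext, as in the text."""
    e_rec, e_ext = inner_fit(couplings)
    return alpha * e_rec + e_ext

# coarse grid search standing in for the surrogate optimization;
# a global optimizer (e.g. MATLAB's surrogateopt) would replace this in practice
grid = np.linspace(-3.0, 3.0, 7)
best_j, best_e = None, np.inf
for j in itertools.product(grid, repeat=4):
    e = total_cost(j)
    if e < best_e:
        best_j, best_e = np.array(j), e
```

Each evaluation of `total_cost` implicitly re-runs the whole inner per-bin inference, which is why a sample-efficient surrogate optimizer is used in the actual fits.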
The result does not depend on the choice of the time bin $\mathrm{\Delta}t$. Moreover, the result depends only weakly on the time constant $\tau$ in the mean-field equations (e.g. 15), if $\tau$ varies in the range 10–100 ms. We set $\tau =25$ ms.
Simulations of the model
We simulated the dynamics of a finite network of $N$ neurons. To each neuron $i$, we assigned the variables ${\theta}_{A}^{(i)},{\theta}_{B}^{(i)},{\eta}_{A}^{(i)},{\eta}_{B}^{(i)}$ as follows.
In the ${\theta}_{A/B}$-space, we sampled ${N}_{\theta}$ points ${\{{\theta}_{A}^{(i)},{\theta}_{B}^{(i)}\}}_{i=1}^{{N}_{\theta}}$ equally spaced along the lines ${\theta}_{A}-{\theta}_{B}=\text{const}$, in such a way that their joint distribution matches the distribution of the data (4), as shown in Figure 5—figure supplement 2a.
In the ${\eta}_{A/B}$-space, we drew ${N}_{\eta}$ points ${\{{\eta}_{A}^{(i)},{\eta}_{B}^{(i)}\}}_{i=1}^{{N}_{\eta}}$ at random from the empirical distribution ${\rho}_{pA}({\eta}_{A}){\rho}_{pB}({\eta}_{B})$.
For $i=1,2,\mathrm{\dots},{N}_{\theta}$, we assigned to a block of ${N}_{\eta}$ neurons the same coordinates ${\theta}_{A}^{(i)},{\theta}_{B}^{(i)}$ in the ${\theta}_{A/B}$-space, and all possible coordinates ${\{{\eta}_{A}^{(j)},{\eta}_{B}^{(j)}\}}_{j=1}^{{N}_{\eta}}$ in the ${\eta}_{A/B}$-space, so that the overall number of neurons is $N={N}_{\theta}{N}_{\eta}$.
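The block-wise assignment of coordinates can be sketched as follows; the sizes and the sampling distributions below are toy stand-ins (the paper matches the empirical distributions of (4) and ${\rho}_{pA}{\rho}_{pB}$, and uses $N=16000$).

```python
import numpy as np

rng = np.random.default_rng(1)

N_theta, N_eta = 40, 25  # toy sizes; the paper uses N = N_theta * N_eta = 16000

# theta-coordinates: here uniform draws stand in for points placed along the
# lines theta_A - theta_B = const to match the empirical joint distribution
theta_A = rng.uniform(0, 2 * np.pi, N_theta)
theta_B = rng.uniform(0, 2 * np.pi, N_theta)

# eta-coordinates: here a toy factorized distribution rho(eta_A) * rho(eta_B)
eta_A = rng.exponential(1.0, N_eta)
eta_B = rng.exponential(1.0, N_eta)

# each block of N_eta neurons shares one (theta_A, theta_B) pair
# and spans all (eta_A, eta_B) pairs
ThA = np.repeat(theta_A, N_eta)   # shape (N_theta * N_eta,)
ThB = np.repeat(theta_B, N_eta)
EA = np.tile(eta_A, N_theta)
EB = np.tile(eta_B, N_theta)
N = N_theta * N_eta
```

The `repeat`/`tile` pattern yields exactly the block structure described above: neurons `i*N_eta : (i+1)*N_eta` share the $i$-th angular coordinates.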
The network dynamics we simulated is defined by the following stochastic differential equation:
where
$W$ in (33) is a Wiener process, and the parameters $\gamma =75/\text{s},{\sigma}_{n}=0.35\,\text{Hz}/\sqrt{\text{s}}$ set the magnitude of the noise fluctuations. The level of noise is chosen so that the results of the PCA analysis (see next section) match the data. Note that by setting the noise to zero and taking the limit $N\to \mathrm{\infty}$ we recover the mean-field Equation 1. The results of Figure 3 are obtained from a network of $N=16000$ neurons; we simulate the network dynamics for 8 locations of the external input $\mathrm{\Phi}$ and 10 trials for each of the 8 conditions, that is, 10 instances of the noisy dynamics. The order parameters shown in Figure 3 are computed from single-trial activity and then averaged over trials. The correlation-based analysis, instead, is obtained from trial-averaged activity.
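The integration scheme can be illustrated with a generic Euler–Maruyama step for a noisy rate network. This is a sketch under stated assumptions: the transfer function, the coupling matrix, and the exact point where the temporally filtered (Ornstein–Uhlenbeck-style) noise enters are placeholders, not the actual Equations 32–33; only $\gamma$ and ${\sigma}_{n}$ take the quoted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy noisy rate network:
#   tau * dr_i/dt = -r_i + phi(sum_j J_ij r_j + I_i) + tau * n_i(t),
#   dn_i = -gamma * n_i * dt + sigma_n * dW_i   (OU-filtered noise; an assumption)
N, T, dt = 50, 2000, 1e-3              # neurons, steps, step size (s)
tau, gamma, sigma_n = 0.025, 75.0, 0.35
phi = lambda x: np.maximum(x, 0.0)     # threshold-linear transfer (placeholder)
J = rng.normal(0, 0.5 / np.sqrt(N), (N, N))  # toy coupling matrix, stable regime
I_ext = 0.5 * np.ones(N)

r = np.zeros(N)
n = np.zeros(N)
for _ in range(T):
    # Euler-Maruyama: deterministic drift plus sqrt(dt)-scaled Wiener increments
    n += -gamma * n * dt + sigma_n * np.sqrt(dt) * rng.normal(size=N)
    r += dt * ((-r + phi(J @ r + I_ext)) / tau + n)
```

Setting `sigma_n = 0` and increasing `N` recovers the deterministic mean-field behavior, mirroring the limit discussed above.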
Correlation-based analysis
The correlation-based analysis explained in this section was performed both on the smoothed and trial-averaged spike trains from recordings, and on the trial-averaged activity rates from simulations. Out of the 16,000 units in the simulated network model, we considered a subset of the same size ($N=141$) as the recorded population. In particular, for each neuron in the data, we chose the corresponding unit from simulations with the closest value of the parameters ${\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}$, that is, the one with the smallest Euclidean distance in the space defined by (${\theta}_{A},{\theta}_{B},{\eta}_{A},{\eta}_{B}$). To compute signal correlations, we preprocessed the data as follows, both for the recordings and for the simulations. For each neuron, we normalized the activity by its standard deviation (computed across all times and all conditions); then, we mean-centered the activity across conditions. The $T=300$ ms long preparatory activity for all $C=8$ conditions was concatenated into a $N\times TC$ matrix denoted by $P$, and the movement-related activity was grouped into an analogous matrix $M$.
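The preprocessing pipeline can be sketched as follows; the helper name `preprocess` and the synthetic rates are ours, while the steps (per-neuron normalization, cross-condition centering, concatenation into an $N\times TC$ matrix) follow the text.

```python
import numpy as np

def preprocess(rates):
    """rates: array of shape (N, T, C) = neurons x time bins x conditions.
    Normalize each neuron by its std over all times and conditions,
    mean-center across conditions, then concatenate the C condition
    blocks along the time axis into an (N, C*T) matrix."""
    N, T, C = rates.shape
    sd = rates.std(axis=(1, 2), keepdims=True)
    x = rates / np.where(sd > 0, sd, 1.0)       # guard against silent neurons
    x = x - x.mean(axis=2, keepdims=True)        # mean-center across conditions
    return x.transpose(0, 2, 1).reshape(N, C * T)

# toy preparatory activity: 141 neurons, 60 five-ms bins (300 ms), 8 conditions
rng = np.random.default_rng(3)
P = preprocess(rng.gamma(2.0, 1.0, (141, 60, 8)))
```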
CCA: We used a canonical correlation analysis (CCA) to compare the population activity from data and simulations. First, we projected the activity onto the $K$ PCA dimensions that captured 90% of the activity variance across conditions; for the preparatory epoch, $K=20$ for both the data and the simulations; for the movement-related epoch, $K=14$ for the data, and $K=10$ for the simulations. We then applied CCA to look for common patterns in the activity matrices from data and simulations. In brief, CCA finds a set of linear transformations of the activity matrices, in such a way that the transformed variables (called canonical variables) are maximally correlated. Figure 5a shows a high correlation between the sets of canonical variables from data and simulations. We repeated the procedure for the simulations of the purely feedforward network. In this case, the movement-related activity is slightly higher dimensional than for the recurrent network ($K=12$), and the average canonical correlation is slightly lower. Note that both activities are trial-averaged, and the same realization of the noise was used in both sets of simulations (feedforward and recurrent).
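Canonical correlations can be computed with the standard QR/SVD formulation; in the sketch below, synthetic matrices sharing a three-dimensional latent signal stand in for the PCA-projected activity from data and simulations.

```python
import numpy as np

def canonical_correlations(X, Y):
    """X: (T, p), Y: (T, q), rows are samples. Returns the canonical
    correlations, computed as the singular values of Qx^T Qy, where Qx, Qy
    are orthonormal bases of the centered column spaces (QR/SVD method)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# synthetic stand-ins: two observed datasets driven by a shared 3D latent signal
rng = np.random.default_rng(4)
Z = rng.normal(size=(200, 3))                                 # shared latents
X = Z @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(200, 20))
Y = Z @ rng.normal(size=(3, 14)) + 0.1 * rng.normal(size=(200, 14))
rho = canonical_correlations(X, Y)   # first 3 values high, rest near chance
```

The number of canonical pairs equals the smaller of the two projected dimensionalities, matching the $K$ values quoted above.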
PCA: We obtained correlation matrices relative to preparatory and movement-related activity by computing the correlations between the rows of the respective matrices. We then identified the prepPCs and movePCs by performing PCA separately on the matrices $P$ and $M$. The degree of orthogonality between the prep and move subspaces was quantified by the Alignment Index $A$ (Elsayed et al., 2016), measuring the amount of variance of the preparatory activity explained by the first $K$ movePCs:
 $A=\frac{\mathrm{Tr}\left({E}_{\text{mov}}^{\top }{C}_{\text{prep}}{E}_{\text{mov}}\right)}{\sum _{i=1}^{K}{\sigma }_{\text{prep}}(i)},$
where ${E}_{\text{mov}}$ is the matrix defined by the top $K$ movePCs, ${C}_{\text{prep}}$ is the covariance matrix of the preparatory activity, and ${\sigma}_{\text{prep}}(i)$ is the $i$th eigenvalue of ${C}_{\text{prep}}$. $K$ was set to the number of principal components needed to explain 90% of the execution activity variance. Hence, the Alignment Index ranges from 0 (orthogonal subspaces) to 1 (aligned subspaces). As a random test, we computed the Random Alignment Index between two sets of $K$ dimensions drawn at random within the space occupied by neural activity, using the Monte Carlo procedure described in Elsayed et al., 2016. We performed the same analysis on both the data ($K=14$) and the model ($K=10$) trial-averaged activity.
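The Alignment Index computation can be sketched as a direct implementation of the definition above (function names are ours; the Monte Carlo correction for the Random Alignment Index is not included):

```python
import numpy as np

def alignment_index(P, M, var_frac=0.9):
    """P, M: (N, samples) preparatory / movement activity (preprocessed).
    A = tr(E_mov^T C_prep E_mov) / sum of top-K eigenvalues of C_prep,
    with E_mov the top-K movement PCs and K the number of components
    explaining var_frac of the movement-epoch variance (Elsayed et al., 2016)."""
    C_prep = np.cov(P)
    C_mov = np.cov(M)
    w_mov, V_mov = np.linalg.eigh(C_mov)            # ascending order
    order = np.argsort(w_mov)[::-1]
    w_mov, V_mov = w_mov[order], V_mov[:, order]
    K = int(np.searchsorted(np.cumsum(w_mov) / w_mov.sum(), var_frac) + 1)
    E_mov = V_mov[:, :K]
    w_prep = np.sort(np.linalg.eigvalsh(C_prep))[::-1]
    return np.trace(E_mov.T @ C_prep @ E_mov) / w_prep[:K].sum()

# sanity checks on synthetic activity
rng = np.random.default_rng(5)
X = rng.normal(size=(10, 500))
P_orth = np.zeros((10, 500)); P_orth[:5] = rng.normal(size=(5, 500))
M_orth = np.zeros((10, 500)); M_orth[5:] = rng.normal(size=(5, 500))
A_same = alignment_index(X, X)            # identical activity -> fully aligned
A_orth = alignment_index(P_orth, M_orth)  # disjoint supports -> orthogonal
```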
The dimensionality of the preparatory and movement-related subspaces can be understood as follows. The activity of the ring model encoding the value of a single angular variable is two-dimensional in Euclidean space. Similarly, the activity of the double-ring model encoding two distinct circular maps is four-dimensional. Our model is an extension of the double-ring model, where both the connectivity matrix and the external fields (Equations 5; 6) are sums of several terms, each composed of an $\eta$-dependent term multiplying a $\theta$-dependent term. The connectivity matrix is still rank four, but its eigenvectors are modulated by the $\eta$ variables. Although the dynamics is four-dimensional, we have shown that during movement preparation the activity is localized only in map A, while during movement execution it is predominantly localized in map B: in either epoch, we expect only two eigenvalues to explain most of the activity variance, as we indeed see from simulations in the absence of additive noise (Figure 6—figure supplement 1). The noise term introduces extra random dimensions; as the dynamics becomes higher-dimensional, both the alignment index and the random alignment index decrease.
jPCA: We quantified the rotational structure in the data by applying the jPCA dimensionality reduction technique (Churchland et al., 2012) to both simulated activity and recordings during movement execution. jPCA identifies the dimensions that capture rotational dynamics in the data. Given the matrix of activity $M$ defined above, jPCA finds the best fit for the real skew-symmetric matrix $R$ that transforms the activity $M$ into its derivative $\dot{M}$: $\dot{M}=RM$.
The plane capturing the strongest rotations is spanned by the eigenvectors of the matrix $R$ associated with the two eigenvalues of largest magnitude. Figure 5—figure supplement 4 shows that the trajectories from simulations qualitatively resemble the ones from data when projected onto the jPCA subspace that captures most of the variance.
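The skew-symmetric least-squares fit at the core of jPCA can be sketched by parametrizing the $k(k-1)/2$ free entries of $R$; this is a minimal illustration (the full jPCA procedure also includes PCA preprocessing and plane selection from the complex eigenvectors of $R$).

```python
import numpy as np

def fit_skew(M, dM):
    """Least-squares fit of a skew-symmetric R with dM ~ R @ M.
    M, dM: (k, T) state and its time-derivative. Each skew basis element
    B_ij (with B[i,j]=1, B[j,i]=-1) contributes one column to an ordinary
    least-squares design matrix."""
    k, T = M.shape
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cols = []
    for (i, j) in pairs:
        B = np.zeros((k, k))
        B[i, j], B[j, i] = 1.0, -1.0
        cols.append((B @ M).ravel())
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, dM.ravel(), rcond=None)
    R = np.zeros((k, k))
    for c, (i, j) in zip(coef, pairs):
        R[i, j], R[j, i] = c, -c
    return R

# sanity check: recover a known rotation generator from exact derivatives
rng = np.random.default_rng(6)
R0 = rng.normal(size=(4, 4))
R0 = R0 - R0.T                      # ground-truth skew-symmetric matrix
M_state = rng.normal(size=(4, 50))
R_hat = fit_skew(M_state, R0 @ M_state)
```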
EMG signals
The EMG signals were recorded from macaque monkey Bx during the execution of a center-out reaching movement, as described in Balasubramanian et al., 2020. Bipolar electromyographic electrodes were implanted in 13 individual muscles (Anterior Deltoid, Posterior Deltoid, Pectoralis Major, Biceps Lateral, Biceps Medial, Triceps Long head, Triceps Lateral, Brachioradialis, Flexor Carpi Ulnaris, Flexor Digitorum Superficialis, Extensor Carpi Radialis, Extensor Digitorum Communis, Extensor Carpi Ulnaris). The signals were amplified individually, bandpass filtered between 0.3 and 1 kHz prior to digitization, and sampled at 10 kHz. The EMG signals shown in Figure 5b were rectified and smoothed; signals from different trials were rescaled to match the average length of movement execution, and then averaged over trials. The linear readout of the activity from simulations was defined as
 $z=Wr,$ where $r$ is the trial-averaged activity from simulations, with the $C=8$ conditions concatenated into a matrix of dimensions $N\times TC$, where $T$ is the average duration of movement execution; $z$ is a matrix of dimensions $m\times TC$, with $m=13$ the number of recorded muscles. For each muscle $k$, the EMG signal ${z}_{k}(t)$ was regressed onto the activity rates ${\{{r}_{j}(t)\}}_{j}$ to find the readout weight vector ${\{{W}_{k,j}\}}_{j=0}^{N}$. For the reconstructed EMG signals shown in Figure 5b, we used as inputs the activity of $N=1000$ units drawn at random from the 16,000 units in the network model. For each muscle $k$, we cross-validated the regression by randomly partitioning the $TC$ time steps into 10 folds of non-consecutive time points. The decoder was trained on 9 folds using LASSO regression and tested on the left-out fold. We repeated the procedure 10 times and reported a normalized mean squared error of 0.0066.
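The cross-validated decoding can be sketched as follows. This is an illustration under stated assumptions: a proximal-gradient (ISTA) solver stands in for the LASSO implementation actually used, the helper names are ours, and the synthetic regression problem only illustrates the random, non-consecutive fold structure.

```python
import numpy as np

def lasso_ista(X, y, lam=0.01, n_iter=500):
    """LASSO via proximal gradient (ISTA):
    min_w ||y - Xw||^2 / (2n) + lam * ||w||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - grad / L
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return w

def cv_nmse(X, y, n_folds=10, seed=0):
    """10-fold CV over randomly partitioned (non-consecutive) time points;
    returns the mean squared error normalized by the variance of y."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        w = lasso_ista(X[train], y[train])
        errs.append(np.mean((y[test] - X[test] @ w) ** 2) / np.var(y))
    return float(np.mean(errs))

# synthetic decoding problem: sparse readout of 20 "units" driving one "muscle"
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.05 * rng.normal(size=200)
err = cv_nmse(X, y)
```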
Data availability
Source data and all the code used for data analysis will be made publicly available at https://github.com/lbachromano/M1_Preparatory_Movement_Representation (copy archived at BachschmidRomano et al., 2023).
References
Kinematic coordinates in which motor cortical cells encode movement direction. Journal of Neurophysiology 84:2191–2203. https://doi.org/10.1152/jn.2000.84.5.2191
Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics 27:77–87. https://doi.org/10.1007/BF00337259
Software: M1_Preparatory_Movement_Representation, version swh:1:rev:1ab4d05d900a55f4f44a7d02b39db72620fae02a. Software Heritage.
CircStat: a MATLAB toolbox for circular statistics. Journal of Statistical Software 31:1–21. https://doi.org/10.18637/jss.v031.i10
Motor cortex is functionally organized as a set of spatially distinct representations for complex movements. The Journal of Neuroscience 34:13574–13585. https://doi.org/10.1523/JNEUROSCI.2500-14.2014
Accurate path integration in continuous attractor network models of grid cells. PLOS Computational Biology 5:e1000291. https://doi.org/10.1371/journal.pcbi.1000291
An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization 9:877–900.
A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability. Journal of Computational Neuroscience 5:383–405. https://doi.org/10.1023/a:1008837311948
Functional classes of primate corticomotoneuronal cells and their relation to active force. Journal of Neurophysiology 44:773–791. https://doi.org/10.1152/jn.1980.44.4.773
Preparatory activity in premotor and motor cortex reflects the speed of the upcoming reach. Journal of Neurophysiology 96:3130–3146. https://doi.org/10.1152/jn.00307.2006
Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. Journal of Neurophysiology 97:4235–4257. https://doi.org/10.1152/jn.00095.2007
Neural implementation of Bayesian inference in a sensorimotor behavior. Nature Neuroscience 21:1442–1451. https://doi.org/10.1038/s41593-018-0233-y
Neuronal activity in monkey superior colliculus related to the initiation of saccadic eye movements. The Journal of Neuroscience 17:8566–8579. https://doi.org/10.1523/JNEUROSCI.17-21-08566.1997
Relation of pyramidal tract activity to force exerted during voluntary movement. Journal of Neurophysiology 31:14–27. https://doi.org/10.1152/jn.1968.31.1.14
Static spatial effects in motor cortex and area 5: quantitative relations in a two-dimensional space. Experimental Brain Research 54:446–454. https://doi.org/10.1007/BF00235470
A radial basis function method for global optimization. Journal of Global Optimization 19:201–227. https://doi.org/10.1023/A:1011255519438
Book: Modeling Feature Selectivity in Local Cortical Circuits (Chapter 13). University of California San Diego.
Encoding of movement fragments in the motor cortex. The Journal of Neuroscience 27:5105–5114. https://doi.org/10.1523/JNEUROSCI.3570-06.2007
Neuronal coding of static force in the primate motor cortex. Journal de Physiologie 74:237–239.
Context-dependent force coding in motor and premotor cortical areas. Experimental Brain Research 128:123–133. https://doi.org/10.1007/s002210050827
Primary motor cortical neurons encode functional muscle synergies. Experimental Brain Research 146:233–243. https://doi.org/10.1007/s00221-002-1166-x
Decoding arm speed during reaching. Nature Communications 9:1–14. https://doi.org/10.1038/s41467-018-07647-3
Short-term facilitation may stabilize parametric working memory trace. Frontiers in Computational Neuroscience 5:40. https://doi.org/10.3389/fncom.2011.00040
Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience 17:440–448. https://doi.org/10.1038/nn.3643
Tuning curves for arm posture control in motor cortex are consistent with random connectivity. PLOS Computational Biology 12:e1004910. https://doi.org/10.1371/journal.pcbi.1004910
A dynamical neural network model for motor cortical activity during movement: population coding of movement trajectories. Biological Cybernetics 69:517–524.
Deciphering the hippocampal polyglot: the hippocampus as a path integration system. The Journal of Experimental Biology 199:173–185. https://doi.org/10.1242/jeb.199.1.173
Transitions between spatial attractors in place-cell models. Physical Review Letters 115:098101. https://doi.org/10.1103/PhysRevLett.115.098101
Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology 82:2676–2692. https://doi.org/10.1152/jn.1999.82.5.2676
Corticospinal neurons with a special role in precision grip. Brain Research 261:312–316. https://doi.org/10.1016/0006-8993(83)90635-2
Spatiotemporal tuning of motor cortical neurons for hand position and velocity. Journal of Neurophysiology 91:515–532. https://doi.org/10.1152/jn.00587.2002
Preparatory activity in motor cortex reflects learning of local visuomotor skills. Nature Neuroscience 6:882–890. https://doi.org/10.1038/nn1097
Dynamic encoding of movement direction in motor cortical neurons. The Journal of Neuroscience 29:13870–13882. https://doi.org/10.1523/JNEUROSCI.5441-08.2009
Continuous attractors with morphed/correlated maps. PLOS Computational Biology 6:e1000869. https://doi.org/10.1371/journal.pcbi.1000869
Propagating waves mediate information transfer in the motor cortex. Nature Neuroscience 9:1549–1557. https://doi.org/10.1038/nn1802
Path integration and cognitive mapping in a continuous attractor neural network model. The Journal of Neuroscience 17:5900–5920. https://doi.org/10.1523/JNEUROSCI.17-15-05900.1997
The role of primary motor cortex in goal-directed movements: insights from neurophysiological studies on non-human primates. Current Opinion in Neurobiology 13:671–677. https://doi.org/10.1016/j.conb.2003.10.012
The computational and neural basis of voluntary motor control and planning. Trends in Cognitive Sciences 16:541–549. https://doi.org/10.1016/j.tics.2012.09.008
Stability of working memory in continuous attractor networks under the control of short-term plasticity. PLOS Computational Biology 15:e1006928. https://doi.org/10.1371/journal.pcbi.1006928
Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks. Journal of Neurophysiology 94:2353–2378. https://doi.org/10.1152/jn.00989.2004
Cortical control of arm movements: a dynamical systems perspective. Annual Review of Neuroscience 36:337–359. https://doi.org/10.1146/annurev-neuro-062111-150509
An emergent model of orientation selectivity in cat visual cortical simple cells. The Journal of Neuroscience 15:5448–5465. https://doi.org/10.1523/JNEUROSCI.15-08-05448.1995
Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells. Network 13:217–242.
A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience 18:1025–1033. https://doi.org/10.1038/nn.4042
Anticipatory activity of motor cortex neurons in relation to direction of an intended movement. Journal of Neurophysiology 39:1062–1068. https://doi.org/10.1152/jn.1976.39.5.1062
Direct cortical control of muscle activation in voluntary arm movements: a model. Nature Neuroscience 3:391–398. https://doi.org/10.1038/73964
Cosine tuning minimizes motor errors. Neural Computation 14:1233–1260. https://doi.org/10.1162/089976602753712918
Associative memory and hippocampal place cells. International Journal of Neural Systems 6:81–86.
Computation through neural population dynamics. Annual Review of Neuroscience 43:249–275. https://doi.org/10.1146/annurev-neuro-092619-094115
Changes in motor cortical activity during visuomotor adaptation. Experimental Brain Research 121:285–299. https://doi.org/10.1007/s002210050462
Activity of superior colliculus in behaving monkey. III. Cells discharging before eye movements. Journal of Neurophysiology 35:575–586. https://doi.org/10.1152/jn.1972.35.4.575
Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. The Journal of Neuroscience 16:2112–2126. https://doi.org/10.1523/JNEUROSCI.16-06-02112.1996
Independent generation of sequence elements by motor cortex. Nature Neuroscience 24:412–424. https://doi.org/10.1038/s41593-021-00798-5
Article and author information
Author details
Funding
National Institutes of Health (R01NS104898)
 Nicolas Brunel
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Wei Liang from the Hatsopoulos lab for providing the EMG signals; Jason MacLean, Alex P Vaz, Subhadra Mokashe, and Alessandro Sanzeni for helpful discussions; and Stephen H Scott for bringing the work of Nashef et al., 2021 to our attention. This work has been supported by NIH R01NS104898.
Ethics
The data analyzed in this paper have been previously published in Rubino, Robbins, Hatsopoulos, Nature Neuroscience, 2006 Dec;9(12):1549–57, where the ethics approval is described. No ethical approval was necessary for this article because no new experiments were performed.
Copyright
© 2023, BachschmidRomano et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.