Interplay between external inputs and recurrent dynamics during movement preparation and execution in a network model of motor cortex
Abstract
The primary motor cortex has been shown to coordinate movement preparation and execution through computations in approximately orthogonal subspaces. The underlying network mechanisms, and the roles played by external and recurrent connectivity, are central open questions that need to be answered to understand the neural substrates of motor control. We develop a recurrent neural network model that recapitulates the temporal evolution of neuronal activity recorded from the primary motor cortex of a macaque monkey during an instructed delayed-reach task. In particular, it reproduces the observed dynamic patterns of covariation between neural activity and the direction of motion. We explore the hypothesis that the observed dynamics emerges from a synaptic connectivity structure that depends on the preferred directions of neurons in both preparatory and movement-related epochs, and we constrain the strength of both synaptic connectivity and external input parameters from data. While the model can reproduce neural activity for multiple combinations of the feedforward and recurrent connections, the solution that requires minimum external inputs is one where the observed patterns of covariance are shaped by external inputs during movement preparation, while they are dominated by strong direction-specific recurrent connectivity during movement execution. Our model also demonstrates that the way in which single-neuron tuning properties change over time can explain the level of orthogonality of preparatory and movement-related subspaces.
Editor's evaluation
The study develops a recurrent network model of M1 for center-out reaches, starting from a conventional tuning (or representational) perspective. Through recurrent connectivity, the model shows uncorrelated tuning for movement direction during preparation and execution with the dynamic transition between the two states. The continuous attractor model provides an important example of flexible switching between neural representations and is supported by convincing simulations and analysis.
https://doi.org/10.7554/eLife.77690.sa0

Introduction
The activity of the primary motor cortex (M1) during movement preparation and execution plays a key role in the control of voluntary limb movement (Evarts, 1968; Whishaw et al., 1993; Whishaw, 2000; Graziano et al., 2002; Harrison et al., 2012; Scott, 2012; Brown and Teskey, 2014). Classic studies of motor preparation were performed in a delayed-reaching task setting, showing that firing rates correlate with task-relevant parameters during the delay period, despite no movement occurring (Hanes and Schall, 1996; Tanji and Evarts, 1976; Churchland et al., 2006a; Churchland et al., 2006b; Messier and Kalaska, 2000; Dorris et al., 1997; Glimcher and Sparks, 1992; Wurtz and Goldberg, 1972; Darlington et al., 2018; Darlington and Lisberger, 2020). More recent works have shown that preparatory activity is also displayed before non-delayed movements (Lara et al., 2018), that it is involved in reach correction (Ames et al., 2019), and that when multiple reaches are executed rapidly and continuously, each upcoming reach is prepared by motor cortical activity while the current reach is in action (Zimnik and Churchland, 2021). Preparation and execution of different reaches are thought to be processed simultaneously without interference in the motor cortex through computation along orthogonal dimensions (Zimnik and Churchland, 2021). Indeed, the preparatory and movement-related subspaces identified by linear dimensionality reduction methods are almost orthogonal (Elsayed et al., 2016), so that simple linear readouts that transform motor cortical activity into movement commands will not produce premature movement during the planning stage (Kaufman et al., 2014). However, response patterns in these two epochs are nevertheless linked, as demonstrated by the fact that a linear transformation can explain the flow of activity from the preparatory subspace to the movement subspace (Elsayed et al., 2016).
How this population-level strategy is implemented at the circuit level is still under investigation (Kao et al., 2021). A related open question (Malonis et al., 2021) is whether inputs from areas upstream of the primary motor cortex (such as from the thalamus and other cortical regions, here referred to as external inputs) that have been shown to be necessary to sustain movement generation (Sauerbrei et al., 2020) are specific to the type of movement being generated throughout the whole course of the motor action, or if they serve to set the initial conditions for the dynamics of the motor cortical network to evolve as shaped by recurrent connections (Churchland et al., 2012; Shenoy et al., 2013; Hennequin et al., 2014; Sussillo et al., 2015; Kaufman et al., 2016; Vyas et al., 2020).
In this work, we use a network modeling approach to explain the relationship between network connectivity, external inputs, and computations in orthogonal dimensions. Our analysis is based on electrophysiological recordings from M1 of a macaque monkey performing a delayed center-out reaching task. The dynamics of motor cortical neurons during reaching limb movements has been shown to be low-dimensional (Gallego et al., 2018). Here, we develop a low-dimensional description of the dynamics using order parameters that quantify the covariation between neural activity and the direction of motion (see Georgopoulos et al., 1986; Schwartz et al., 1988; Georgopoulos et al., 1989; Georgopoulos et al., 1993 but also Scott and Kalaska, 1997; Scott et al., 2001). Recorded neurons are tuned to the direction of motion both during movement preparation and execution, but their preferred direction and the amplitude of their tuning function change over time (Hatsopoulos et al., 2007; Churchland and Shenoy, 2007; Rickert et al., 2009). Interestingly, major changes happen when the activity flows from the preparatory to the movement-related subspaces. We describe neuronal selectivity during the task by four parameters: two angular variables corresponding to the preferred direction during movement preparation and execution, respectively; and two parameters that represent the strength of tuning in the two epochs. We characterized the empirical distribution of these parameters, and investigated potential network mechanisms that can generate the observed tuning properties by building a recurrent neural network model whose synaptic weights depend on the tuning properties of pre- and post-synaptic neurons, and whose external inputs can carry information about movement direction.
First, we analytically derived a low-dimensional description of the dynamics in terms of a few observables denoted as order parameters, which recapitulate the temporal evolution of the population-level patterns of tuning to the direction of motion. Then, we inferred the strength of recurrent connections and external inputs from data, by imposing that the model reproduce the observed dynamics of the order parameters. There are multiple combinations of feedforward and recurrent connections that allow the model to generate neural activity that strongly resembles the one from recordings both at the single-neuron and population level, and that can be transformed into realistic patterns of muscle activity by a linear readout. To break the model degeneracy, we imposed an extra cost associated with large external inputs – which likely require more metabolic energy consumption compared to local recurrent inputs. The resulting solution suggests that different network mechanisms operate during movement preparation and execution. During the delay period, the population activity is shaped by external inputs that are tuned to the preferred directions of the neurons. During movement execution, the localized pattern of activity is maintained via strong direction-specific recurrent connections. Finally, we show how the specific way in which neurons' tuning properties rearrange over time produces the observed level of orthogonality between the preparatory and movement-related subspaces.
Results
Subjects and task
We analyzed multi-electrode recordings from the primary motor cortex (M1) of two macaque monkeys performing a previously reported instructed-delay, center-out reaching task (Rubino et al., 2006). The monkey’s arm was on a two-link exoskeletal robotic arm, so that the position of the monkey’s hand controlled the location of a cursor projected onto a horizontal screen. The task consisted of three periods (Figure 1a): a hold period, during which the monkey was trained to hold the cursor on a center target and wait 500ms for the instruction cue; an instruction period, during which the monkey was presented with one of eight evenly spaced peripheral targets and continued to hold at the center for an additional 1,000–1,500ms; a movement period, signalled by a go cue, when the monkey initiated the reach to the peripheral target. Successful trials for which the monkeys reached the target were rewarded with a juice or water reward. The peripheral target was present on the screen throughout the whole instruction and movement periods. In line with previous studies (e.g. Elsayed et al., 2016), the preparatory and movement-related epochs were defined as two 300ms time intervals beginning, respectively, 100ms after target onset and 50ms before the start of the movement.
Patterns of correlation between neural activity and movement direction
Studies of motor cortex have shown that tuning to movement direction is not a time-invariant property of motor cortical neurons, but rather varies in time throughout the course of the motor action; single-neuron encoding of entire movement trajectories has also been reported (e.g. Hatsopoulos et al., 2007; Churchland and Shenoy, 2007). We measured temporal variations in neurons' preferred directions by dividing the task into time bins and fitting the binned trial-averaged spike counts as a function of movement direction with a cosine function. As the variability in preferred direction during the delay period alone and during movement execution alone was significantly smaller than the variability during the entire duration of the task (Figure 1—figure supplement 1), we characterized neurons' tuning properties only in terms of these two epochs. This simplifying assumption is discussed further in Methods and Discussion. We will show later that such a simplification is enough for the model to recapitulate neuronal activity both at the level of single units and at the population level.
Figure 1b–c show two examples of tuning curves, where the trial-averaged and time-averaged activity during movement preparation and execution is plotted as a function of the location of the target on the screen, together with a cosine fitting curve. In the two epochs, neurons change their preferred direction (denoted, respectively, as θ_A and θ_B) and their degree of participation, which is proportional to the amplitude of the cosine tuning function (denoted as η_A and η_B). A scatter plot of the preferred directions in the preparatory (θ_A) and execution (θ_B) epochs for all neurons is shown in Figure 1d, while a similar scatter plot for the degrees of tuning in the preparatory (η_A) and execution (η_B) epochs is shown in Figure 1e. Neurons' preferred directions in the two epochs are moderately correlated (as measured by the circular correlation coefficient), and so are their degrees of participation, though to a lesser degree. Neuronal tuning to movement direction is reflected in a population activity profile that is spatially localized, as shown in Figure 1f–g, where we plot the normalized firing rate for all neurons, as a function of their preferred direction, separately for the two epochs. An example of the activity of a single neuron in the two epochs is highlighted in red. It illustrates how the activity of single neurons is drastically rearranged across time, while the population activity remains localized around the same angular location, which corresponds to the location of the target on the screen.
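The cosine fits described above can be sketched with an ordinary least-squares regression on the basis (1, cos θ, sin θ): the degree of participation is the amplitude of the fitted cosine, and the preferred direction is its phase. The sketch below uses synthetic rates; all numerical values are illustrative and not taken from the recordings:

```python
import numpy as np

def fit_cosine_tuning(directions, rates):
    """Least-squares cosine fit: rate ~ b0 + eta * cos(theta - theta_pref).

    Returns the baseline b0, the degree of participation eta (amplitude
    of the cosine), and the preferred direction theta_pref (radians).
    """
    X = np.column_stack([np.ones_like(directions),
                         np.cos(directions), np.sin(directions)])
    b0, c, s = np.linalg.lstsq(X, rates, rcond=None)[0]
    return b0, np.hypot(c, s), np.arctan2(s, c)

# Synthetic neuron tuned to 45 degrees, amplitude 5 Hz, baseline 10 Hz,
# probed with 8 evenly spaced targets as in the task
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
rates = 10 + 5 * np.cos(theta - np.pi / 4)
b0, eta, theta_pref = fit_cosine_tuning(theta, rates)
```

Fitting the same regression in two separate task epochs yields the two pairs (θ_A, η_A) and (θ_B, η_B) per neuron.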
Recurrent neural network model
The experimental observations described above motivated us to build a network model in which neuronal selectivity properties match the ones observed in the data. In particular, we built a network model in which neurons are characterized by the same four parameters we used to fit the preferred directions and degrees of tuning, (θ_A, θ_B, η_A, η_B), leading to a 4-dimensional selectivity space. The subspace defined by the coordinates (θ_A, η_A) is denoted as map A, while the subspace defined by (θ_B, η_B) is denoted as map B. We will denote the coordinate vector by x = (θ_A, θ_B, η_A, η_B), and refer to the units in the network as neurons, even though each unit in the model could instead represent a group of M1 neurons with similar functional properties; likewise, the connection between two units in our model could represent the effective connection between two functionally similar groups of neurons in M1.
In this model, neurons with coordinates (selectivity parameters) x are described by their firing rate r(x, t), whose temporal evolution is given by:

τ ∂r(x, t)/∂t = -r(x, t) + φ(I(x, t))    (1)

where τ is the time constant of the firing rate dynamics, φ is the threshold-linear (a.k.a. ReLU) transfer function that converts synaptic inputs into firing rates, and I(x, t) is the total synaptic input to neurons with coordinates x. We set the time constant τ to a value of the same order of magnitude as the membrane time constant, and we checked that for values of τ within a plausible range our results did not quantitatively change. The total input to a neuron is

I(x, t) = ∫ dx′ p(x′) J(x, x′) r(x′, t) + I_ext(x, t)    (2)

The first term on the r.h.s. of (2) is the recurrent input, which depends on the firing rates r(x′, t) of presynaptic neurons and on J(x, x′), the strength of recurrent connections from neurons with coordinates x′ to neurons with coordinates x. I_ext(x, t) is the external input, and p(x) is the probability density of the coordinates x.
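To make the rate dynamics concrete, the sketch below integrates the single-unit equation with a forward-Euler scheme; the time constant and input value are illustrative placeholders, not fitted quantities. For a constant total input I, the rate simply relaxes to φ(I):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Forward-Euler integration of tau * dr/dt = -r + relu(I) for one unit
# with a constant input (placeholder parameter values)
tau, dt, T = 25.0, 0.5, 500.0   # ms
I = 12.0                        # constant total synaptic input
r = 0.0
for _ in range(int(T / dt)):
    r += dt / tau * (-r + relu(I))
# after many time constants, r has relaxed to relu(I) = 12
```

In the full model the input I is itself time-dependent, since it includes the recurrent term of Equation 2.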
The recurrent term in Equation 2 is the sum of single-neuron contributions: here, we assumed that the cortical network is sufficiently large that, instead of summing over the contributions of single neurons, each at a given coordinate x, we can integrate over a continuous distribution of coordinates. The probability density p(x) of the coordinates is set to match the empirical distribution of the preferred directions and degrees of participation. The preferred directions are not significantly correlated with the degrees of tuning (Figure 1—figure supplement 2), and we therefore took them to be independent:

p(θ_A, θ_B, η_A, η_B) = p(θ_A, θ_B) p(η_A, η_B)    (3)
The distribution of preferred directions p(θ_A, θ_B) was well fitted by a simple parametric form, as shown in Figure 1—figure supplement 2, while for the sake of simplicity we estimated the distribution of the degrees of participation p(η_A, η_B) non-parametrically using kernel density estimation. The last two ingredients that we need to specify to define the network dynamics are the recurrent couplings and the external inputs. The strength of synaptic connections from a pre-synaptic neuron with preferred directions (θ′_A, θ′_B) and participation strengths (η′_A, η′_B) to a post-synaptic neuron with preferred directions (θ_A, θ_B) and participation strengths (η_A, η_B) is given by

J(x, x′) = J_0 + J_A η_A η′_A cos(θ_A - θ′_A) + J_B η_B η′_B cos(θ_B - θ′_B) + J_{A→B} η_B η′_A cos(θ_B - θ′_A)    (5)
where J_0 < 0 represents a uniform inhibitory term; J_A and J_B measure the amplitude of the symmetric direction-specific connections in map A and map B, respectively; and J_{A→B} measures the amplitude of asymmetric connections from map A to map B. In the Methods, we explain how the dynamics of a network with such recurrent connectivity can reproduce the spatially localized population activity that we observed in the data, and we provide an intuition for the role of the different coupling parameters. A schematic depiction of the network is shown in Figure 2a. We parameterized the external input in analogy with the recurrent input:

I_ext(x, t) = I_0(t) + I_A^0(t) η_A + I_B^0(t) η_B + I_A^1(t) η_A cos(θ_A - θ_0) + I_B^1(t) η_B cos(θ_B - θ_0)    (6)

where I_0(t) represents an untuned homogeneous external input, I_A^0(t) and I_B^0(t) represent untuned map-specific inputs to maps A and B, respectively, I_A^1(t) and I_B^1(t) represent directionally tuned inputs to maps A and B, respectively, and θ_0 is the direction encoded by the external inputs. An example of the spatially localized population activity at a given time, visualized in different 2-D subspaces of the 4-D space with coordinates x, is shown in Figure 2b (blue), and an example of tuning curves from the network's activity is shown in Figure 2c.
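To illustrate how cosine-tuned couplings and tuned inputs interact, the sketch below reduces the model to a single map (one ring of neurons with uniform degrees of participation). With the recurrent coupling below its critical value, a weakly tuned external input is amplified into a larger rate modulation peaked at the encoded direction. All parameter values are illustrative, not the fitted ones:

```python
import numpy as np

N = 256
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
# One-map reduction of the coupling rule: uniform inhibition plus
# cosine-tuned excitation (illustrative values; JA below critical)
J0, JA = 1.0, 1.0
J = (-J0 + JA * np.cos(theta[:, None] - theta[None, :])) / N

tau, dt = 25.0, 0.5                          # ms (placeholder values)
I_ext = 5.0 + 0.5 * np.cos(theta - np.pi)    # input weakly tuned to pi
r = np.zeros(N)
for _ in range(4000):
    r += dt / tau * (-r + np.maximum(J @ r + I_ext, 0.0))

bump_loc = theta[np.argmax(r)]               # peak of the population bump
modulation = (r.max() - r.min()) / 2         # rate modulation amplitude
```

The input modulation of 0.5 is doubled to a rate modulation of 1.0, reflecting the recurrent amplification factor 1/(1 - JA/2) of the linearized dynamics in this reduced model.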
Order parameters
The simplicity of the model described by Equations 1–6 allowed us to derive a low-dimensional description of the model dynamics in terms of a few population-level signals denoted as order parameters: the average population activity r_0(t), and the degrees r_A(t) and r_B(t) to which the population activity is localized in map A and map B, respectively. These parameters are defined by the following equations:

r_0(t) = ∫ dx p(x) r(x, t)
r_A(t) = |∫ dx p(x) r(x, t) e^{iθ_A}|
r_B(t) = |∫ dx p(x) r(x, t) e^{iθ_B}|

Note that r_A and r_B are the moduli of the first Fourier coefficients of the population rate over the angular variables θ_A and θ_B, respectively.
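Given any snapshot of population activity, these order parameters reduce to simple population averages. The sketch below estimates them for a synthetic population that is tuned only in map A, mimicking the preparatory epoch; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
theta_A = rng.uniform(0, 2 * np.pi, N)   # preferred directions, map A
theta_B = rng.uniform(0, 2 * np.pi, N)   # preferred directions, map B

# Preparatory-like snapshot: rates tuned in map A, untuned in map B
target = np.pi / 4
rates = 5.0 + 3.0 * np.cos(theta_A - target)

r0 = rates.mean()                           # average population activity
zA = np.mean(rates * np.exp(1j * theta_A))  # 1st Fourier coeff., map A
zB = np.mean(rates * np.exp(1j * theta_B))  # 1st Fourier coeff., map B
rA, rB = np.abs(zA), np.abs(zB)
bump_location = np.angle(zA)                # recovers the encoded direction
```

Here rA is large while rB is near zero, and the phase of the map-A Fourier coefficient recovers the target direction, mirroring what is measured from the recordings during the delay period.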
These order parameters can be computed both in the model and from the data. Therefore, they provide us with a simple tool to compare model and data, and to infer network parameters from data. Figure 3a shows the dynamics of the order parameters computed from the data during the delayed reaching task. As expected, we see that during the preparatory period the activity is localized in map A (large r_A) but not in map B (r_B close to zero). Around the time of the go cue, network activity dynamically reorganizes, such that it becomes strongly localized in map B (large r_B), while the degree of modulation in map A slowly decreases towards zero. We can think of map A and map B as two different coordinate systems that the network can use to produce distinct patterns of population activity, both localized around the location of the target on the screen, but in different ways. A spatially localized activity profile associated with either map is denoted as a bump; a definition of this term is provided in the Methods. At different times, the bump of activity has different shapes (localized in map A, or in map B, or in both maps) but its location remains constant. Next, we studied possible mechanisms underlying these observations.
Tuned states in the model: External inputs vs recurrent connectivity
We next investigated the network model to understand the mechanisms underlying the observed dynamics. We first derived equations governing the temporal evolution of the order parameters (see Methods). We then analyzed the stationary solutions of these equations in the absence of tuned inputs (I_A^1 = I_B^1 = 0), in the space of the three parameters defining the coupling strengths: J_A, J_B, and J_{A→B}. We found that, similarly to the ring model (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998), there exist two qualitatively distinct regions in the space of parameters characterizing recurrent connectivity (Figure 3b). For weak recurrent connectivity (gray area in Figure 3b), the activity in the network is uniform in the absence of tuned inputs. Thus, in this region, tuned network activity must rely on external inputs. For strong recurrent connectivity (white area in Figure 3b), network activity is tuned even in the absence of tuned inputs. In this case, the location of the bump of network activity is determined by initial conditions. These two regimes are separated by a bifurcation line, where the uniform solution becomes unstable due to a Turing instability (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998).
This analysis shows that a network activity profile that is localized in a given map, say map A (r_A > 0), can be sustained either by strong recurrent connectivity (i.e., a strong enough parameter J_A in Equation 5), or by the external input term proportional to I_A^1 (Equation 6). Hence, our model can potentially generate the observed dynamics of the order parameters (Figure 3a) for different choices of recurrent connection and external input parameters, ranging between the following two opposite scenarios:
Recurrent connections are absent: J_A = J_B = J_{A→B} = 0. In this case, the system is simply driven by feedforward inputs. The localized activity is the result of external inputs that selectively excite or inhibit specific neurons during motor preparation and execution, thanks to sufficiently large values of the tuned input I_A^1 during preparation and I_B^1 during execution. This scenario is depicted schematically in Figure 3c.
The network is strongly recurrent, with strongly direction-specific connections, and external inputs are untuned (I_A^1 = I_B^1 = 0). Analytical study of the dynamics (Methods) shows that, in order for such a system to exhibit tuning, the strength of the couplings has to exceed a critical value shown in Figure 3b. Moreover, when the synaptic strength exceeds this value, the activity can be localized in map A during preparation and in map B during execution simply as the result of homogeneous external inputs changing their strength, without the need for tuned external inputs to selectively excite or inhibit specific neurons. This scenario is depicted in Figure 3d.
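The two regimes can be contrasted in a minimal single-map (ring) reduction of the model with an untuned input: below the critical coupling an initially tuned state decays to a uniform one, while above it a self-sustained bump survives. For the kernel and normalization used below, the critical coupling is JA = 2; all parameter values are illustrative, not the paper's fitted ones:

```python
import numpy as np

def final_modulation(JA, steps=8000):
    """Ring network with UNTUNED input and a tuned initial condition;
    returns the residual spatial modulation of the firing rates."""
    N, J0, tau, dt = 256, 1.0, 25.0, 0.5
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
    J = (-J0 + JA * np.cos(theta[:, None] - theta[None, :])) / N
    r = 5.0 + 2.0 * np.cos(theta)            # start from a tuned state
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(J @ r + 5.0, 0.0))
    return (r.max() - r.min()) / 2

weak = final_modulation(1.0)    # below critical: the bump decays away
strong = final_modulation(3.0)  # above critical: the bump self-sustains
```

In the subcritical run the modulation vanishes, so tuning must come from tuned inputs; in the supercritical run a bump persists under purely homogeneous input, with its location inherited from the initial condition.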
We next turn to the question of which of these two scenarios best describes the data.
Fitting the model to the data
Our next goal is to infer the strengths of the recurrent connections (J_0, J_A, J_B, J_{A→B}) and feedforward inputs that best describe the data. To do so, we imposed that our network reproduce the dynamics of the population-level order parameters r_0, r_A, and r_B. Specifically, for a given set of recurrent connections and feedforward inputs, we can compute analytically the dynamics of the order parameters (Methods). Based on these calculations, we built an iterative procedure that minimizes the reconstruction error E, i.e., the squared difference between the predicted and observed dynamics of the order parameters. The results show that there is a large region of the coupling-parameter space where the reconstruction error has essentially the same value (Figure 4—figure supplement 2). Solutions range from a network with zero recurrent connections (i.e., a purely feedforward scenario, shown in Figure 4a-c) to a network with strong direction-specific connections.
To break the degeneracy of solutions, we added an energetic cost to the reconstruction error. This reflects the idea that a biological network in which the computation is only driven by long-range connections from upstream areas is likely to consume more metabolic energy than a network where the computation is processed through shorter-range recurrent connections. Our inference procedure minimizes the cost function:

C = λ E + E_input

where E is the reconstruction error, E_input is the total magnitude of external inputs (see Methods, Equation 31), and the hyperparameter λ describes the relative strength of these two terms. For very small values of λ, the algorithm mainly minimizes external inputs, and the resulting reconstruction of the dynamics is poor. For very large values of λ, multiple solutions are found that yield similar dynamics (see Figure 4c and Figure 4i). Figure 4—figure supplement 1 shows the results we obtained by using intermediate values of λ, which are small enough to yield a unique solution, but large enough to guarantee a good reconstruction of the dynamics. Interestingly, all solutions cluster in a small region of the parameter space, close to the bifurcation surface, such that the parameter J_B is close to its critical value (either slightly larger or slightly smaller), J_A is much smaller than its critical value, and J_{A→B} is zero.
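The degeneracy, and how the input penalty breaks it, can be illustrated with a toy version of the inference problem. In the linearized (subcritical) regime of a single ring, the rate modulation produced by a tuned input I1 and coupling J is r1 = I1 / (1 - J/2), so any J below the critical value 2 can reproduce a target modulation exactly; a cost on input magnitude then selects the solution with the strongest recurrence. All numbers are illustrative and unrelated to the fitted parameters:

```python
import numpy as np

# Toy degeneracy: target modulation r1 = 1 is reached by any pair
# (J, I1) with I1 = 1 - J/2.  Adding an input cost selects large J.
target_r1 = 1.0
lam = 10.0                    # weight of the reconstruction error

J_grid = np.linspace(0.0, 1.9, 96)     # couplings below critical (J < 2)
I_grid = np.linspace(0.0, 1.5, 151)    # candidate tuned-input magnitudes
best, best_cost = None, np.inf
for J in J_grid:
    for I1 in I_grid:
        r1 = I1 / (1 - J / 2)                       # linear-regime response
        cost = lam * (r1 - target_r1) ** 2 + I1     # reconstruction + input
        if cost < best_cost:
            best_cost, best = cost, (J, I1)

J_star, I_star = best
```

The selected solution sits at the strongest coupling in the grid with a correspondingly small tuned input, qualitatively mirroring the clustering of inferred solutions near the bifurcation surface.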
We show the results obtained with two distinct values of λ in Figure 4d and Figure 4g; the corresponding dynamics of the inferred external inputs in Figure 4e and Figure 4h; and the resulting dynamics of the order parameters from the mean field equations in Figure 4f and Figure 4i. In particular, Figure 4d corresponds to a smaller value of λ, that is, a stronger penalty on large external inputs: the resulting coupling parameters are stronger than their critical value. Figure 4g corresponds to a larger value of λ: the strength of the couplings is below the critical line, yielding larger external inputs (Figure 4h) and a smaller reconstruction error (Figure 4i). The purely feedforward case is added in Figure 4a-c for comparison. While a purely feedforward network requires a strong input tuned to map B right before movement onset (Figure 4b), in our solution such an input is either absent (Figure 4e) or much weaker than the corresponding untuned input (Figure 4h). Our analysis therefore suggests that recurrent connectivity plays a major role in maintaining the degree of spatial modulation observed in the data.
Importantly, the same analysis applied to a second dataset recorded from a different macaque monkey performing the same task yielded qualitatively similar results (see Figure 4—figure supplement 3).
The model generates realistic neural and muscle activity
We have shown that the model is able to reproduce the dynamics of the order parameters computed from data. Here, we ask how the activity of single neurons in our model compares to data, and whether a readout of neural activity can generate realistic patterns of muscle activity. We simulated the dynamics of a network of 16,000 neurons. To each neuron i, we assigned coordinates x_i = (θ_A, θ_B, η_A, η_B) so as to match the empirical distribution of coordinates (Equation 3 and Figure 5—figure supplement 3a). We added a noise term to the total current that each neuron is subject to, modelled as an Ornstein-Uhlenbeck process (see Methods, Equation 33).
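The noise current can be generated with a standard Euler-Maruyama discretization of an Ornstein-Uhlenbeck process; the time constant and amplitude below are placeholders, not the values used in the simulations:

```python
import numpy as np

# Ornstein-Uhlenbeck noise current (illustrative parameters):
#   tau_n * dxi/dt = -xi + sqrt(2 * tau_n) * sigma * white noise
rng = np.random.default_rng(1)
tau_n, sigma, dt, steps = 50.0, 1.0, 0.5, 200_000   # ms, a.u.
xi = np.empty(steps)
xi[0] = 0.0
for t in range(1, steps):
    xi[t] = (xi[t - 1]
             + dt / tau_n * (-xi[t - 1])
             + np.sqrt(2 * dt / tau_n) * sigma * rng.standard_normal())
# stationary statistics: zero mean, standard deviation ~ sigma,
# exponential autocorrelation with time constant tau_n
```

One such independent trace would be added to the total input current of each neuron at every integration step.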
First, we checked that the dynamics of the order parameters – previously computed by numerically integrating the analytical equations – is also correctly reconstructed by simulations (Figure 5—figure supplement 1b). Figure 5—figure supplement 1 shows the case of a network with coupling parameters stronger than their critical value, where the dynamics is dominated by recurrent currents and is very sensitive to noise. The location of the bump undergoes a small diffusion around the value predicted by the mean field analysis (Figure 5—figure supplement 1c and Figure 5—figure supplement 1d).
Then, we computed tuning curves during the preparatory and execution epochs from simulations, and estimated the values of (θ_A, θ_B, η_A, η_B) from the locations and amplitudes of the tuning functions. The reconstructed values of the neurons' tuning parameters are consistent with the values initially assigned to them when we built the network (Figure 5—figure supplement 2b). We noticed (Figure 5—figure supplement 2c) that the quality of the cosine fit strongly correlates with the magnitude of the degree of participation η, as we see in the data (Figure 1—figure supplement 2e-f): for neurons with a low value of η, the tuning functions poorly resemble a cosine. Overall, tuning curves from simulations strongly resemble the ones from data (see Figure 5—figure supplement 3, and Discussion).
Next, we showed that the network activity closely resembles the one from recordings. At the level of the population, we used canonical correlation analysis to identify common patterns of activity between simulations and recordings, and to show that they strongly correlate (see Figure 5a and Methods). At the level of single units, for each neuron in the data we selected the neuron in the model network with the closest values of the tuning parameters. A side-by-side comparison of the time course of responses shows a good qualitative agreement (Figure 5c). We also noted that both the activity from recordings and from simulations present sequential activity and rotational dynamics (Figure 5—figure supplement 4). Finally, a linear readout of the activity from simulations can generate realistic patterns of muscle activity, which closely match electromyographic (EMG) signals recorded from Monkey Bx during a center-out reaching movement (Figure 5b).
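Canonical correlation analysis between two population data matrices can be sketched via the singular values of the product of their orthonormalized (whitened) versions. The synthetic example below, which is a generic illustration rather than the paper's exact pipeline, checks that two "populations" driven by shared latent signals yield canonical correlations near one:

```python
import numpy as np

def canonical_correlations(X, Y, k=3):
    """Top-k canonical correlations between two (time x neurons)
    data matrices, via the SVD of the whitened cross-product."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(X)          # orthonormal basis of X's columns
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[:k]

# Synthetic check: two populations driven by the same 2-D latent signal
rng = np.random.default_rng(2)
T = 300
latent = rng.standard_normal((T, 2))
X = latent @ rng.standard_normal((2, 20)) + 0.1 * rng.standard_normal((T, 20))
Y = latent @ rng.standard_normal((2, 20)) + 0.1 * rng.standard_normal((T, 20))
corrs = canonical_correlations(X, Y, k=2)
```

Applied to simulated and recorded rate matrices, the same computation quantifies how well the two populations share common temporal patterns.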
So far in this section, we discussed results from a network with strong couplings; Figure 5—figure supplement 5 shows that the dynamics of a purely feedforward network is almost indistinguishable from the strongly recurrent case – although canonical correlations between the activity of a purely feedforward network and the data are slightly lower than the ones for networks with strong recurrent couplings during movement execution (average canonical correlation of 0.74 for the purely feedforward network, and 0.82 for the strongly recurrent one). We also noticed that networks with coupling parameters below the bifurcation surface are robust to noise in the tuning parameters. Conversely, the solutions above the bifurcation surface, which do not require tuned external inputs, are very sensitive to such noise, and the location of the bump of activity drifts towards a few discrete attractors. When the noise is weak, the drift happens on a time scale that is much larger than the time of movement execution; the larger the level of the noise, the faster the drift.
PCA subspaces dedicated to movement preparation and execution
In the previous sections, we considered the low-dimensional description of the dynamics given by the order parameters. Here, we use the widely used principal component analysis (PCA) to compare the dimensionality of the data and of the network model. In particular, we ask whether our formalism can explain the orthogonality of the preparatory and movement-related linear subspaces. As previously reported in the literature (Kaufman et al., 2014; Elsayed et al., 2016), the top panel in Figure 6a shows that the top principal components of the preparatory activity explain most of the variance of the preparatory activity, but very little variance of the movement-related activity; vice versa, the top movement-related principal components explain very little variance of the preparatory activity (Figure 6a, bottom panel). The activity of our network model also displays this property (Figure 6b). We also quantified the level of orthogonality of the two subspaces using the alignment index as defined in Elsayed et al., 2016, which measures the percentage of variance of each epoch's activity explained by both sets of PCs. Figure 6c shows that the alignment index is much smaller than the one computed between two subspaces drawn at random (random alignment index, explained in the Methods), in both data and network simulations. Note that a model with uniform degrees of participation (η identical across neurons) displays a higher overlap between the preparatory and movement-related subspaces, as shown in Figure 6—figure supplement 2c. In the Methods section, we explain how the dimensionality of the activity depends on the connectivity of the network.
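A simplified variant of the alignment computation projects the movement-epoch covariance onto the top preparatory PCs and normalizes by the total movement-epoch variance (the index of Elsayed et al., 2016 additionally normalizes by the variance captured by the epoch's own top PCs). The synthetic check below uses activity confined to orthogonal subspaces:

```python
import numpy as np

def alignment_index(X_prep, X_move, k=4):
    """Fraction of movement-epoch variance captured by the top-k
    preparatory PCs (simplified alignment index)."""
    Cp = np.cov(X_prep.T)
    Cm = np.cov(X_move.T)
    eigvals, eigvecs = np.linalg.eigh(Cp)   # ascending eigenvalues
    V = eigvecs[:, -k:]                     # top-k preparatory PCs
    return np.trace(V.T @ Cm @ V) / np.trace(Cm)

# Synthetic check: two epochs confined to orthogonal 4-D subspaces
rng = np.random.default_rng(3)
n, T = 40, 500
basis = np.linalg.qr(rng.standard_normal((n, n)))[0]    # orthonormal
prep = rng.standard_normal((T, 4)) @ basis[:, :4].T     # spans dims 0-3
move = rng.standard_normal((T, 4)) @ basis[:, 4:8].T    # spans dims 4-7
idx = alignment_index(prep, move, k=4)          # near 0: orthogonal
idx_same = alignment_index(prep, prep, k=4)     # near 1: fully aligned
```

In the data and the model, the measured index falls between these two extremes, but well below the value expected for randomly drawn subspaces.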
Discussion
Orthogonal spaces dedicated to movement preparation and execution
Studies on the dynamics of motor cortical activity during delayed reaching tasks have shown that the primary motor cortex employs an ‘orthogonal but linked strategy’ (Kaufman et al., 2014; Elsayed et al., 2016) to coordinate planning and execution of movements.
In this work, we explored the hypothesis that this strategy emerges as result of a specific recurrent functional architecture, in which synaptic connections store information about two distinct patterns of activity that underlie movement preparation and movement execution. We built a network model in which neurons are characterized by their selectivity properties in both preparatory and movement execution epochs (preferred direction and degree of participation), and in which synaptic connectivity is shaped by these selectivity properties in a Hebbian-like fashion.
A strong correlation between the selectivity properties of the preparatory and movement-related epochs will produce strongly correlated patterns of activity in these two intervals and a strong overlap between the respective PCA subspaces. We inferred the distribution of these tuning features from data and showed that the correlation between the preparatory and movement-related patterns of activity is small enough to allow for almost orthogonal subspaces, which is thought to be important for the preparatory activity not to cause premature movement. At the same time, the correlation is non-zero, and that allows the activity to flow from the preparatory to the movement-related subspaces with minimal external inputs, as summarized in the next section.
Interplay between external and recurrent currents
We analytically described the temporal evolution of the population activity in the low-dimensional space defined by maps A and B in terms of a few order parameters, which can be easily computed from data. Different combinations of the strength of direction-specific recurrent connections and of tuned external inputs allow the model to accurately reproduce the dynamics of the order parameters from data. We argue that solutions that require fewer inputs from areas upstream of the motor cortex are favorable in terms of metabolic energy consumption. With the addition of a cost proportional to the magnitude of external inputs, we find solutions where recurrent connections are strong and direction specific. In the resulting scenario, during movement preparation, an external input tuned to map A sustains a population-level activity localized in map A, and pins the location of the peak of activity. During movement execution, the localized activity is sustained mostly by recurrent connectivity; the correlation between the preferred directions θ_A and θ_B allows the activity in map B during movement execution to be localized around the same location as it was in map A during movement preparation. The inferred strength of recurrent connectivity is close to the critical value above which the recurrent network can generate localized patterns of activity in the absence of tuned inputs. Solutions well beyond the bifurcation line require an implausible fine tuning of the recurrent connections, as heterogeneity in the connectivity causes a systematic drift of the encoded direction of motion on a typical time scale of seconds – the larger the structural noise in the couplings, the faster the drift, as has been extensively studied in continuous attractor models (Tsodyks and Sejnowski, 1995; Zhang, 1996; Renart et al., 2003; Itskov et al., 2011; Seeholzer et al., 2019; Compte et al., 2000).
It has been shown that homeostatic mechanisms could compensate for the heterogeneity in cellular excitability and synaptic inputs to reduce systematic drifts of the activity (Renart et al., 2003), and that short-term synaptic facilitation in recurrent connections could also significantly improve the robustness of the model (Itskov et al., 2011) – even when combined with short-term depression (Seeholzer et al., 2019). While a full characterization of our model in the presence of structural heterogeneity is beyond the scope of this work, we note that solutions close to, but below, the bifurcation line are stable with respect to perturbations in the couplings. In this case, tuned inputs are also present during movement execution, but their magnitude is much weaker than that of the untuned ones: direction-specific couplings are strong and amplify the weak external inputs tuned to map B, therefore playing a major role in shaping the observed dynamics during movement execution.
Our prediction that external inputs are direction-specific during movement preparation but mostly non-specific during movement execution needs to be tested experimentally. Interestingly, it agrees with several previous studies on the activity of the primary motor cortex during limb movement. In particular, Kaufman et al., 2016 showed that changes in neural activity that characterize the transition from movement preparation to execution reflect when movement is made but are invariant to movement direction and type, in monkeys performing a reaching task; Inagaki et al., 2022 detected a large, movement non-specific thalamic input to the cortex just before movement onset, in mice performing a licking task. The authors of Nashef et al., 2019 used high-frequency stimulation to interfere with the normal flow of information through the cerebellar-thalamo-cortical (CTC) pathway in monkeys performing a center-out reaching task. This perturbation produced reversible motor deficits, preceded by substantial changes in the activity of motor cortical neurons around movement execution. Interestingly, the spatial tuning of motor cortical cells was unaffected, and their early preparatory activity was mostly intact. These results are in line with our prediction, if we interpret the condition-invariant inputs that we inferred during movement execution as thalamic inputs that are part of the CTC pathway. We speculate that the direction-specific inputs that we inferred during movement preparation have a different origin. Further simultaneous recordings of M1 and upstream regions, as well as measures of synaptic strength between motor cortical neurons, will be necessary to test our predictions.
Cosine tuning
While cosine tuning functions have been extensively used to describe the firing properties of motor cortical neurons (e.g. Georgopoulos et al., 1982), and they were also hypothesized to be the optimal tuning profile to minimize the expected errors in force production (Todorov, 2002), more recent work has emphasized that tuning functions in the motor cortex present heterogeneous shapes. Specifically, the authors of Lalazar et al., 2016 showed that tuning functions are well fitted by a sum of a cosine-modulated component and an unstructured component, often including terms with higher spatial frequency. They also argued that the unstructured component is key for a readout of motor cortical activity to reproduce EMG activity. In this work, we showed that a noisy recurrent network model that is based on cosine tuning reproduces well the coefficient of the cosine fit of tuning curves from data (see a comparison between Figure 1—figure supplement 2e-f and Figure 5—figure supplement 2c). Moreover, tuning curves from simulations present heterogeneous shapes (Figure 5—figure supplement 3), including bimodal profiles, especially for neurons with a low degree of participation. Indeed, we showed that the coefficient of the cosine fit strongly correlates with the participation variables, both in the data and in the model. Neurons with a low degree of participation have a low coefficient of the cosine fit and present heterogeneous tuning curves. We also showed that a linear readout of the activity of our model can very accurately reconstruct EMG activity. Finally, our model could easily be extended to incorporate other tuning functions. While we leave the analysis of a model with an arbitrary shape of the tuning function to future work, we don’t expect our results to strongly depend on the specific shape of the tuning function.
Comparison with the ring model
The idea that the tuning properties of motor cortical neurons could emerge from direction-specific synaptic connections goes back to the work of Lukashin and Georgopoulos, 1993. However, it was with the theoretical analysis of the so called ring model (Amari, 1977; Ben-Yishai et al., 1995; Somers et al., 1995; Hansel and Sompolinsky, 1998) that localized patterns of activity were formalized as attractor states of the dynamics in networks with strongly specific recurrent connections. Related models were later used to describe maintenance of internal representations of continuous variables in various brain regions (Zhang, 1996; Redish et al., 1996; McNaughton et al., 1996; Seung et al., 2000; Tsodyks, 1999; Camperi and Wang, 1998; Compte et al., 2000; Samsonovich and McNaughton, 1997; Stringer et al., 2002; Burak and Fiete, 2009) and were extended to allow for storage of multiple continuous manifolds (Battaglia and Treves, 1998; Romani and Tsodyks, 2010; Monasson and Rosay, 2015) to model the firing patterns of place cells in the hippocampus of rodents exploring multiple environments. While our formalism builds on the same theoretical framework as these previous works, we would like to stress two main differences between our model and the ones previously considered in the literature. First, we studied the dynamic interplay between fluctuating external inputs and recurrent currents, which causes the activity to flow from the preparatory map to the movement-related one and, consequently, neurons’ tuning curves and order parameters to change over time, while at the population level the pattern of activity remains localized around the same location.
Then, we introduced an extra dimension representing the degree of participation of single neurons to the population pattern of activity; this is an effective way to introduce neuron-to-neuron variability in the responses, which decreases the level of orthogonality between the preparatory and movement-related subspaces, and yields tuning curves whose shape resembles the one computed from data – in contrast with the classic ring model, where all tuning curves have the same shape.
Representational vs dynamical system approaches
There has been a debate about whether neuronal activity in motor cortex is better described by so-called ‘representational models’ or dynamical systems models (Churchland and Shenoy, 2007; Michaels et al., 2016). Representational models (Michaels et al., 2016; Inoue et al., 2018) are models in which neuronal firing rates are related to movement parameters (Evarts, 1968; Georgopoulos et al., 1982; Georgopoulos et al., 1984; Paninski et al., 2004; Moran and Schwartz, 1999; Smith et al., 1975; Hepp-Reymond et al., 1978; Cheney and Fetz, 1980; Kalaska et al., 1989; Taira et al., 1996; Cabel et al., 2001). Dynamical systems models (Sussillo et al., 2015) are instead recurrent neural networks whose synaptic connectivity is trained to produce a given pattern of muscle activity. Such models have been argued to better reproduce the dynamical patterns of population activity in motor cortex (Michaels et al., 2016).
In our model, firing rates are described by a system of coupled ODEs, and the synaptic connectivity is built from the neuronal tuning properties, in the spirit of the classical ring model discussed above, and of the model introduced by Lukashin and Georgopoulos, 1993. Our model is thus a dynamical system where kinematic parameters can be decoded from the population activity.
Importantly, our model is constrained to reproduce the dynamics of a few order parameters that are a low-dimensional representation of the activity of recorded neurons. In contrast to kinematic-encoding models, our model can recapitulate the heterogeneity of single-unit responses. Moreover, as in trained recurrent network models, a linear readout of the network activity can reproduce realistic muscle signals.
The advantage of our model with respect to a trained RNN is that it yields a low-rank connectivity matrix which is simple enough to allow for analytical tractability of the dynamics. The model can be used to test specific hypotheses on the relationship between network connectivity, external inputs and neural dynamics, and on the learning mechanisms that may lead to the emergence of a given connectivity structure. The model is also helpful to illustrate the problem of degeneracy of network models. An interesting future direction would be to compare the connectivity matrices of trained RNNs and of our model.
Extension to modeling activity underlying more complex tasks
Neurons’ directional tuning properties have been shown to be influenced by many contextual factors that we neglected in our analysis (Hepp-Reymond et al., 1999; Muir and Lemon, 1983), to depend on the acquisition of new motor skills (Paz et al., 2003; Li et al., 2001; Wise et al., 1998) and on other features of movement, such as the shoulder abduction/adduction angle even for similar hand kinematic profiles (Scott and Kalaska, 1997; Scott et al., 2001), or the speed of movement for similar trajectories (Churchland and Shenoy, 2007). A large body of work (Scott et al., 2001; Ajemian et al., 2000; Gribble and Scott, 2002; Hepp-Reymond et al., 1999; Todorov, 2000; Holdefer and Miller, 2002; Sergio et al., 2005) has shown that the activity in the primary motor cortex covaries with many parameters of movement other than the hand kinematics – for a review, see Scott, 2003. More recent studies (Michaels et al., 2016; Russo et al., 2018; Sergio et al., 2005; Schroeder et al., 2021) have also suggested that the largest signals in motor cortex may not correlate with task-relevant variables at all.
Our model can be extended to more realistic scenarios in several ways. A simplifying assumption we made is that the task can be clearly separated into a preparatory phase and a movement-related phase. A possible extension is one where the motor action is composed of a sequence of epochs, corresponding to a sequence of maps in our model. It will be interesting to study the role of asymmetric connections for storing a sequence of maps. Such a network model could be used to study the storage of motor motifs in the motor cortex (Logiaco et al., 2021); external inputs could then combine these building blocks to compose complex actions. Moreover, incorporating variability in the degree of symmetry of the connections could make it possible to model features that are not currently included, such as the speed of movement.
In summary, we proposed a simple model that can explain recordings during a reaching task. It provides a scaffold upon which more sophisticated models could be built, to explain neural activity underlying more complex tasks.
Methods
Animals
Figures 1—6 are based on electrophysiological recordings from the primary motor cortex of Macaque monkey Rk (141 units, 391 trials), while in Figure 4—figure supplement 3 we analyzed recordings from Macaque monkey Rj (57 units, 191 trials). For details on the electrophysiology and on the multi-electrode array implants, see Russo et al., 2018.
Circular statistics
To define the statistical measures of angular variables (Fisher, 1995; Berens, 2009), let us consider a set of angles $\theta_1, \dots, \theta_N$, and visualize them as vectors on the plane:
$$\mathbf{x}_i = (\cos\theta_i, \sin\theta_i).$$
The mean vector is then defined as
$$\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i = (\bar{C}, \bar{S}),$$
from which we can get the mean angular direction $\bar{\theta} = \operatorname{atan2}(\bar{S}, \bar{C})$ using the four quadrant inverse tangent function. The length of the mean resultant vector,
$$R = \|\bar{\mathbf{x}}\| = \sqrt{\bar{C}^2 + \bar{S}^2},$$
is a measure of how concentrated the data sample is around the mean direction: the closer $R$ is to 1, the more concentrated the data is. We quantify the spread in a data set through the circular variance given by
$$V = 1 - R,$$
which ranges from 0 to 1. To compute the correlation between two sets of angular variables, $\{\theta_i\}$ and $\{\phi_i\}$, we used the correlation coefficient (Jammalamadaka and SenGupta, 2001) defined as:
$$\rho = \frac{\sum_{i=1}^{N} \sin(\theta_i - \bar{\theta})\,\sin(\phi_i - \bar{\phi})}{\sqrt{\sum_{i=1}^{N} \sin^2(\theta_i - \bar{\theta})\;\sum_{i=1}^{N} \sin^2(\phi_i - \bar{\phi})}},$$
where $\bar{\theta}$ and $\bar{\phi}$ are the mean angular directions of the two sets.
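These standard circular statistics can be computed directly; the following sketch (plain NumPy, function names are ours) implements the mean angular direction, the mean resultant length, the circular variance, and the Jammalamadaka–SenGupta correlation coefficient used above:

```python
import numpy as np

def circular_mean_direction(theta):
    """Mean angular direction via the four-quadrant inverse tangent."""
    return np.arctan2(np.mean(np.sin(theta)), np.mean(np.cos(theta)))

def resultant_length(theta):
    """Length R of the mean resultant vector (1 = fully concentrated)."""
    return np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))

def circular_variance(theta):
    """Circular variance V = 1 - R, ranging from 0 to 1."""
    return 1.0 - resultant_length(theta)

def circular_correlation(theta, phi):
    """Jammalamadaka-SenGupta circular correlation coefficient."""
    t = theta - circular_mean_direction(theta)
    p = phi - circular_mean_direction(phi)
    num = np.sum(np.sin(t) * np.sin(p))
    den = np.sqrt(np.sum(np.sin(t) ** 2) * np.sum(np.sin(p) ** 2))
    return num / den
```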
Preparatory and movement-related epochs
In our analysis, we characterized neurons’ tuning properties during movement preparation and execution. The choice of considering only two epochs is an approximation based on the observation that neurons’ preferred directions are more conserved within the delay period alone and within movement execution alone than across periods (Figure 1—figure supplement 1). In Figure 1—figure supplement 1, we considered three temporal epochs: the delay period (blue lines), movement execution (red) and the whole duration of the task (grey). We binned each epoch into time bins of length 160 ms: the preparatory and execution epochs consist of bins each, while the whole task consists of bins. For each neuron, we first computed its preferred direction in each time bin, by fitting the trial-averaged and time-averaged firing rate with a cosine function. Then, for each epoch and for each neuron, we computed the circular variance of preferred directions across the bins. The cumulative distribution of circular variances is shown in Figure 1—figure supplement 1a: the variability of preferred direction within the delay epoch and within movement execution is non-zero, although its distribution is skewed towards zero.
For each epoch, we computed the median variability in preferred direction and used a bootstrap analysis to check its statistical significance. Let us consider the matrix of single trial activity of a given neuron, for a specific condition, and during a given epoch; the size of the matrix is , where is the number of trials per condition (in the data, ranges between 41 and 47). We repeated the same procedure for all of the 8 conditions, and z-scored the 8 activity matrices across conditions (this is done because the average activity changes across time bins, and in the following we shuffle time bins). To check whether tuning curves are exactly conserved across time bins, we generated bootstrap samples of the activity matrix for a given condition, where each entry is chosen at random (with repetition) among the possible entries of the original matrix. For each bootstrap sample matrix, we computed the variance of preferred direction as we did for the original matrix. This was repeated for all neurons, all epochs, and all conditions. The cumulative distribution of all variances of preferred direction is shown in Figure 1—figure supplement 1b. Then, for each bootstrap sample, we computed the corresponding median variance of preferred direction; the histogram is shown in Figure 1—figure supplement 1c. From Figure 1—figure supplement 1c, it is clear that the median variance of preferred directions from the data is significantly larger than the one from the bootstrap samples. This suggests that neurons do change their preferred direction within epochs, even though major changes happen between epochs.
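The bootstrap procedure can be sketched as follows. This is an illustrative reading of it for a single neuron and epoch: the array shapes and function names are ours, and the entrywise resampling with replacement stands in for the exact shuffling applied to the z-scored activity matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def circ_var(angles):
    """Circular variance: 1 minus the mean resultant length."""
    return 1.0 - np.hypot(np.mean(np.cos(angles)), np.mean(np.sin(angles)))

def pd_per_bin(acts, targets):
    """Preferred direction in each time bin from a cosine fit.
    acts: (conditions, trials, bins) firing rates; targets: (conditions,)
    target angles, assumed equally spaced around the circle."""
    mean_rates = acts.mean(axis=1)                 # (conditions, bins)
    s = targets[:, None]
    a = np.sum(mean_rates * np.sin(s), axis=0)
    b = np.sum(mean_rates * np.cos(s), axis=0)
    return np.arctan2(a, b)                        # (bins,)

def bootstrap_pd_variance(acts, targets, n_boot=200):
    """Null distribution of preferred-direction variance under exactly
    conserved tuning: resample each (trials x bins) matrix entrywise with
    replacement, destroying any systematic bin-to-bin change."""
    null = np.empty(n_boot)
    for k in range(n_boot):
        boot = np.empty_like(acts)
        for c in range(acts.shape[0]):
            flat = acts[c].ravel()
            boot[c] = rng.choice(flat, size=acts[c].shape, replace=True)
        null[k] = circ_var(pd_per_bin(boot, targets))
    return null
```

Comparing the median data variance against this null distribution reproduces the logic of Figure 1—figure supplement 1c.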
Analysis of the model
We studied the rate model with couplings defined by (5) with two complementary approaches. First, an analytic approximation based on mean-field arguments and valid in the limit of large network size allowed us to derive a low-dimensional description of the network dynamics in terms of a few latent variables, and to fit the model parameters to the data. Next, we checked that simulations of the dynamics of a finite network of neurons reproduce the results that we derived in the limit of infinite network size.
The mean-field equations are derived following the methods introduced in Ben-Yishai et al., 1995; Romani and Tsodyks, 2010. In the limit where the number of neurons is large, the average activity of a neuron with coordinates is described by Equations 1; 2, where we have defined . In order to derive a lower dimensional description of the dynamics, we rewrite (1) in terms of the average activity rate (14), and of the second Fourier components of the activity rate modulated by :
The phase is defined so that the parameter is a real nonnegative number, yielding Equation 14 for , and the following equation for the phase , representing the position of the peak of the activity profile in map :
From (1), we see that the order parameters evolve in time according to the following set of equations,
where the total input (2) is rewritten as:
Stationary states for homogeneous external inputs
We first characterize the model by studying the fixed-points of the network dynamics when subject to a constant and homogeneous external input, and focusing on the scenario where the joint distribution (3) of the is of the form
which fits the empirical distribution of the data well for (Figure 1—figure supplement 2). For now, we leave the distributions and unspecified. We first consider the case where the external input to the network is a constant that is independent of :
The stationary solutions of (1,16) are of the form:
where we have defined
Here, are solutions of the system (15) with the left hand side set to zero. The second term on the r.h.s in the last equation of (20) is obtained from (16) by observing that in the stationary state if either or if and are correlated (as we are assuming here). As in Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998, we can distinguish broad from narrow activity profiles. The term broad activity profile refers to the scenario where the activity of all the neurons is above threshold, the dynamics is linear and the stationary state reduces to:
By inserting the above equation in (14), we find that the only solution is homogeneous over the maps and :
where the notation represents an average over the measure , i.e.
First, we notice that a nonzero homogeneous state is present if
Then, the stability of this state with respect to a small perturbation can be studied by linearizing (15) around the stationary solution, at fixed . The resulting Jacobian matrix is
where
The homogeneous solution (22) is stable if and if the couplings parameters satisfy the following system of inequalities:
If , the system undergoes an amplitude instability. For values of that exceed the threshold implicitly defined by (23), there is a region of the four-dimensional space where the total input (19) is negative. The dynamics is no longer linear and the activity profile at the fixed point is localized around a particular direction, and characterized by positive stationary values of the order parameters . This ‘bump’ of activity can be localized either more strongly in map A (), in map B () or at the same level in both maps (). Since maps A and B are correlated according to (3), the location of the stationary bump of activity is constrained to be the same in the two maps: , with arbitrary . The asymmetric term in the connectivity, modulated by the parameter , further contributes to aligning the location of the bump in map B to the one in map A. The system can relax to a continuous manifold of fixed points parameterized by that are marginally stable. In this continuous attractor regime, the system can store in memory any direction of motion , as the activity is localized in the absence of tuned inputs.
Examples of two-dimensional cross-sections of the four-dimensional activity profile in the marginal phase are shown in Figure 2b. Figure 2c shows one-dimensional cross-sections of the activity as a function of or , for different values of and . These correspond to the tuning curves of the model for stationary external inputs. The tuning profile can have either a full-cosine modulation or a rectified cosine modulation, depending on the values of . The plots of Figure 1f-g correspond to the activity as a function of in the case of a discrete number of neurons – each neuron with a different value of .
Time-dependent external input
The activity of motor cortical neurons shows no signature of being in a stationary state but instead displays complex transients. To model the data, we assumed that the neurons are responding both to recurrent inputs and to fluctuating external inputs that can be either homogeneous or tuned to , with peak at constant location (Equation 6). The total input (2) that neurons are subject to at a given time is:
where
In order to draw a correspondence between the model and the data, we note that the activity of each neuron in the data corresponds to the activity rate at a specific coordinate in the model. From Equations 1 and 24 we see that, if we assume that changes in the external inputs happen on a time scale larger than , the modulation of the activity profile in map A at each time is and has amplitude proportional to , while the modulation of the activity profile in map B is and has amplitude proportional to . Accordingly, for each recorded neuron and represent the location of the peak of the neuron’s tuning curve computed during the preparatory and movement-related epoch, respectively, while and are proportional to the amplitude of the tuning curves.
Fitting the model to the data
Neurons’ activity rates were computed by smoothing the spike trains with a Gaussian kernel with s.d. of 25ms and averaging them across all trials with the same condition; denotes the rate of neuron at time for condition , each condition corresponding to one of the 8 angular locations of the target on the screen. Since trials had highly variable lengths, we normalized the responses along the temporal dimension before averaging them over trials, as follows. We divided the activity into three temporal intervals: from the target onset to the go cue; from the go cue to the start of the movement; from the start of the movement to the end of the movement. For each interval, we normalized the response times to the average length of the interval across trials. We then aggregated the three intervals together. We defined the preparatory and execution epochs – denoted by A and B – as two 300ms time intervals beginning, respectively, 100ms after target onset and 50ms before the start of the movement, in line with (Elsayed et al., 2016). Figure 3—figure supplement 1 shows that our results do not change qualitatively when the lengths of the preparatory and execution intervals are increased. For each neuron , we fitted the activity rate averaged across time within each epoch as a function of the angular position of the target with a cosine function:
where the parameters and represent the neuron’s preferred direction during the preparatory and execution epochs. In our model, the direction modulation of the rates, see (25), is proportional to , which measures how strongly the neuron participates in the two epochs of movement; hence, we defined to be proportional to the amplitude of the tuning curve:
The scatter plot of (Figure 1e) shows an outlier with . We checked that our results hold true if we discard that point. Figure 1—figure supplement 2 shows that both and strongly correlate with the -coefficient of the cosine fit. In other words, neurons with a higher value of are the ones whose tuning curves more strongly resemble a cosine (this holds true in our simulations too, see Figure 5—figure supplement 2c, and Discussion). The order parameters (14) are computed at each time by approximating the integrals with the sums:
where is the number of neurons and is the number of conditions. The angular location of the localized activity can be computed from (14) as
However, this estimate is strongly affected by the heterogeneity in the distribution of - deviating from the rotational symmetry of the model. That is why in Figure 2 we approximated by the angular location of the target on the screen. We checked that computing by using either method does not affect the dynamics of the order parameters .
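The cosine fit used to extract preferred directions and tuning-curve amplitudes reduces to a linear least-squares problem; a minimal NumPy sketch (function name is ours) is:

```python
import numpy as np

def cosine_fit(rates, targets):
    """Least-squares fit r(theta) = b0 + bc*cos(theta) + bs*sin(theta),
    equivalent to b0 + b1*cos(theta - pd) with b1 = hypot(bc, bs)."""
    X = np.column_stack([np.ones_like(targets), np.cos(targets), np.sin(targets)])
    b0, bc, bs = np.linalg.lstsq(X, rates, rcond=None)[0]
    amplitude = np.hypot(bc, bs)       # tuning-curve amplitude, used for eta
    pd = np.arctan2(bs, bc)            # preferred direction
    return b0, amplitude, pd
```

Applying it to the epoch-averaged rate of each neuron yields the preferred directions and the amplitudes from which the participation variables are defined.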
We assumed that the observed dynamics of the order parameters obeys Equations 15; 16 with time-dependent external inputs of the form (6). We inferred the values of the parameters of the external currents and of the coupling matrix that allow us to reconstruct the dynamics of the order parameters computed from data; since this inference problem is undetermined, we required as a further constraint that the model reconstruct the dynamics of the following two additional order parameters:
In this way, for given coupling parameters , we can uniquely identify the external currents parameters that produced the observed dynamics. Still, an equally good reconstruction of can be obtained for different choices of coupling parameters (2). Hence, we inferred the model parameters by minimizing a cost function composed of two terms: one that is proportional to the reconstruction error of the temporal evolution of the order parameters and the other that represents an energetic cost penalizing large external inputs.
Fitting the model to the data: details
The fitting procedure was divided into the following steps:
The time interval going from the target onset till the end of the movement was binned into time bins: .
The couplings parameters were initialized to zero: . At the first time bin, the external currents parameters were initialized to zero:
and the reconstructed order parameters (r0, …) were initialized to the order parameters estimated from the data ():
For each time step :
We started from the reconstructed order parameters at the previous time step:
and we let the dynamical system (15, 29) with external currents parameters evolve for to estimate the order parameters at the current time step:
We inferred the value of the external currents parameters
by minimizing the reconstruction error:
- (30)
that quantifies the difference between the order parameters estimated from the data and the reconstructed ones; note that the dependence of the cost function on is implicitly contained in the reconstructed order parameters. We minimized the cost function (30) by using an interior point method algorithm (Byrd et al., 1999) starting from the initial condition
we imposed that and added a regularization term to the external inputs, so as not to infer pathologically large positive and negative inputs that balance each other. This favors solutions with smaller external inputs and a non-zero reconstruction error over solutions with larger inputs and almost-zero reconstruction error.
The external currents inferred with step 3 depend on our initial choice of the couplings parameters .
Using step 3, the value of the couplings parameters is inferred by minimizing the cost function
composed of two terms: the reconstruction error and a term that favors small external currents:
- (31)
The minimization is done using a surrogate optimization algorithm (Gutmann, 2001).
The result does not depend on the choice of the time bin . Also, the result depends only weakly on the time constant in the mean field equations (e.g. 15), if it varies in the range 10–100ms. We set ms.
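The two-level structure of the fit can be sketched as follows. This is a toy sketch only: `step_order_params` is a generic stand-in for one Euler step of the mean-field equations (the actual Equations 15; 16 have threshold-linear gain and map-specific structure), and SciPy's bounded quasi-Newton minimizer stands in for the interior-point and surrogate-optimization algorithms used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def step_order_params(r, J, u, dt):
    """Toy stand-in for one Euler step of the mean-field dynamics of the
    order parameters, with couplings J and external-input parameters u."""
    return r + dt * (-r + J @ np.maximum(r + u, 0.0))

def infer_inputs(J, data_op, dt=0.01, lam=1e-6):
    """Inner loop (step 3): per time bin, find the external-input parameters
    minimizing the reconstruction error (Eq. 30) plus a small ridge penalty
    that discourages large, mutually cancelling inputs."""
    n_bins, n_op = data_op.shape
    recon = np.empty_like(data_op)
    recon[0] = data_op[0]
    inputs = np.zeros((n_bins, n_op))
    for t in range(1, n_bins):
        def cost(u):
            pred = step_order_params(recon[t - 1], J, u, dt)
            return np.sum((pred - data_op[t]) ** 2) + lam * np.sum(u ** 2)
        inputs[t] = minimize(cost, inputs[t - 1],
                             bounds=[(0.0, None)] * n_op).x
        recon[t] = step_order_params(recon[t - 1], J, inputs[t], dt)
    return inputs, recon

def outer_cost(J, data_op, mu=1e-3):
    """Outer loop objective (Eq. 31): reconstruction error plus an
    energetic cost proportional to the inferred input magnitude."""
    inputs, recon = infer_inputs(J, data_op)
    return np.sum((recon - data_op) ** 2) + mu * np.sum(inputs ** 2)
```

The outer cost is then minimized over the coupling parameters, which is where a global method such as surrogate optimization is needed.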
Simulations of the model
We simulated the dynamics of a finite network of neurons. To each neuron , we assigned the variables as follows.
In the -space, we sampled points equally spaced along the lines
in such a way that their joint distribution matches the distribution of the data (4), as shown in Figure 5—figure supplement 2a.
In the -space, we drew points at random from the empirical distribution .
For , we assigned to a block of neurons the same coordinates in the -space, and all possible coordinates in the -space, so that the overall number of neurons is .
The network dynamics we simulated is defined by the following stochastic differential equation:
where
in (33) is a Wiener process, and the parameters set the magnitude of the noise fluctuations. The level of noise is chosen so that the results of the PCA analysis (see next section) match the data. Note that by setting the noise to zero and taking the limit we recover the mean-field Equation 1. The results of Figure 3 are obtained from a network of neurons; we simulated the network dynamics for 8 locations of the external input and 10 trials for each of the 8 conditions, that is, 10 instances of the noisy dynamics. The order parameters shown in Figure 3 are computed from single-trial activity, and then averaged over trials. The correlation-based analysis, instead, is obtained from trial-averaged activity.
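A minimal version of such a noisy double-ring simulation, with Euler–Maruyama integration and illustrative (not fitted) parameter values, could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only - not the values inferred in the paper.
N, tau, dt, sigma = 400, 0.02, 0.001, 0.05
phi_A = np.linspace(0, 2 * np.pi, N, endpoint=False)            # map-A angles
phi_B = (phi_A + 0.3 * rng.standard_normal(N)) % (2 * np.pi)    # correlated map B

j0, jA, jB = -0.5, 1.8, 1.8        # uniform and direction-specific couplings
J = (j0 + jA * np.cos(phi_A[:, None] - phi_A[None, :])
        + jB * np.cos(phi_B[:, None] - phi_B[None, :])) / N

def simulate(T, input_A=1.0, theta=np.pi):
    """Euler-Maruyama integration of the noisy rate dynamics
    tau dm/dt = -m + f(h), plus additive noise of scale sigma; the
    transfer function f is rectified below 0 and, for numerical safety
    in this sketch, capped at 20. The external input is tuned to map A."""
    m = np.zeros(N)
    ext = 0.5 + input_A * np.cos(phi_A - theta)
    for _ in range(int(T / dt)):
        h = J @ m + ext
        m += (dt / tau) * (-m + np.clip(h, 0.0, 20.0)) \
             + sigma * np.sqrt(dt / tau) * rng.standard_normal(N)
    return m
```

With a tuned input to map A, the activity settles into a bump centered at the input location, as in the preparatory epoch of the model.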
Correlation-based analysis
The correlation-based analysis explained in this section was performed both on the smoothed and trial-averaged spike trains from recordings, and on the trial-averaged activity rates from simulations. Out of the 16,000 units in the simulated network model, we considered a subset of the same size as the recorded neurons. In particular, for each neuron in the data, we chose the corresponding one from simulations with the closest value of parameters - the one with the smallest Euclidean distance in the space defined by (). To compute signal correlations, we preprocessed the data as follows, both for the recordings and for the simulations. For each neuron, we normalized the activity by its standard deviation (computed across all times and all conditions); then, we mean-centered the activity across conditions. The ms long preparatory activity for all conditions was concatenated into a matrix denoted by , and the movement-related activity was grouped into an analogous matrix .
CCA: We used a canonical correlation analysis (CCA) to compare the population activity from data and simulations. First, we projected the activity onto the PCA dimensions that captured 90% of the activity variance across conditions; for the preparatory epoch, for both the data and the simulations; for the movement-related epoch, for the data, and for the simulations. We then applied CCA to look for common patterns in the activity matrices from data and simulations. In brief, CCA finds a set of linear transformations of the activity matrices, in such a way that the transformed variables (called canonical variables) are maximally correlated. Figure 5a shows a high correlation between the sets of canonical variables from data and simulations. We repeated the procedure for the simulations of the purely feedforward network. In this case, the movement-related activity is slightly higher dimensional than for the recurrent network (), and the average canonical correlation is a bit lower. Note that both activities are trial-averaged, and the same realization of the noise was used in both sets of simulations (feedforward and recurrent).
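A compact way to compute canonical correlations between two population-activity matrices, after projecting each onto its top principal components, is via the principal angles between the projected subspaces (a plain NumPy sketch, with our names; in the paper the number of PCs is set by the 90% variance criterion):

```python
import numpy as np

def canonical_correlations(X, Y, n_pc=10, n_cc=4, eps=1e-10):
    """Classical CCA via principal angles: center each (units x samples)
    matrix, project onto its top principal components (orthonormal rows),
    and take the singular values of the product of the projections."""
    def pcs(A):
        A = A - A.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        r = min(n_pc, int(np.sum(s > eps * s[0])))   # numerical rank cap
        return (U[:, :r].T @ A) / s[:r, None]        # rows are unit PCs scores
    s = np.linalg.svd(pcs(X) @ pcs(Y).T, compute_uv=False)
    return s[:n_cc]                                  # sorted, in [0, 1]
```

Applied to the data and simulation activity matrices, the leading values correspond to the canonical correlations plotted in Figure 5a.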
PCA: We obtained correlation matrices relative to preparatory and movement related activity by computing the correlations between the rows of the respective matrices. We then identified the prep-PCs and move-PCs by performing PCA separately on the matrices and . The degree of orthogonality between the prep- and move- subspaces was quantified by the Alignment Index (Elsayed et al., 2016), measuring the amount of variance of the preparatory activity explained by the first move-PCs:
$$A = \frac{\mathrm{Tr}\left(D^\top C_{\text{prep}} D\right)}{\sum_{i=1}^{K} \sigma_i\left(C_{\text{prep}}\right)},$$
where $D$ is the matrix defined by the top-$K$ move-PCs, $C_{\text{prep}}$ is the covariance matrix of the preparatory activity, and $\sigma_i(C_{\text{prep}})$ is the $i$-th eigenvalue of $C_{\text{prep}}$. $K$ was set to the number of principal components needed to explain 90% of the execution activity variance. Hence, the Alignment Index ranges from 0 (orthogonal subspaces) to 1 (aligned subspaces). As a random test, we computed the Random Alignment Index between two sets of dimensions drawn at random within the space occupied by neural activity, using the Monte Carlo procedure described in Elsayed et al., 2016. We performed the same analysis on both the data () and the model () trial averaged activity.
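The Alignment Index computation can be sketched as follows (a NumPy version following the definition of Elsayed et al., 2016; variable names are ours):

```python
import numpy as np

def alignment_index(X_prep, X_move, var_frac=0.90):
    """Fraction of preparatory variance captured by the top-K movement PCs,
    where K is the number of move-PCs explaining `var_frac` of the
    movement variance. X_prep, X_move: (units x samples) matrices."""
    C_prep = np.cov(X_prep)
    C_move = np.cov(X_move)
    evals, evecs = np.linalg.eigh(C_move)
    order = np.argsort(evals)[::-1]                  # descending eigenvalues
    evals, evecs = evals[order], evecs[:, order]
    K = int(np.searchsorted(np.cumsum(evals) / evals.sum(), var_frac) + 1)
    D = evecs[:, :K]                                 # top-K move-PCs
    prep_evals = np.sort(np.linalg.eigvalsh(C_prep))[::-1]
    return np.trace(D.T @ C_prep @ D) / prep_evals[:K].sum()
```

An index near 0 indicates orthogonal preparatory and movement subspaces; identical activity in the two epochs gives an index of 1.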
The dimensionality of the preparatory and movement-related subspaces can be understood as follows. The activity of the ring model encoding the value of a single angular variable is two-dimensional in the Euclidean space. Similarly, the activity of the double-ring model encoding two distinct circular maps is four-dimensional. Our model is an extension of the double-ring model, where both the connectivity matrix and the external fields (Equations 5; 6) are a sum of several terms, each one composed of an -dependent term multiplying a -dependent term. The connectivity matrix is still rank-four, but its eigenvectors are modulated by the variables. Although the dynamics is four-dimensional, we have shown that during movement preparation the activity is localized only in map A, while during movement execution it is predominantly localized in map B: in either epoch, we expect only two eigenvalues to explain most of the activity variance, as we indeed see from simulations in absence of additive noise (Figure 6—figure supplement 1). The noise term introduces extra random dimensions; as the dynamics gets higher dimensional, both the alignment index and the random alignment index get smaller.
jPCA: We quantified the rotational structure in the data by applying the jPCA dimensionality reduction technique (Churchland et al., 2012) to both simulated activity and recordings during movement execution. jPCA identifies the dimensions that capture rotational dynamics in the data. Given the matrix of activity $X$ defined above, jPCA finds the best-fitting real skew-symmetric matrix $M_{skew}$ that maps the activity onto its time derivative:

$$\dot{X} = X\, M_{skew}$$

The plane capturing the strongest rotations is spanned by the eigenvectors of $M_{skew}$ associated with the pair of imaginary eigenvalues of largest magnitude. Figure 5—figure supplement 4 shows that the trajectories from simulations qualitatively resemble those from the data when projected onto the jPCA subspace that captures most of the variance.
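The skew-symmetric fit at the heart of jPCA is linear in the $n(n-1)/2$ free parameters of $M_{skew}$, so it can be solved by ordinary least squares. The sketch below (names are hypothetical, not the authors' code) assumes the activity has already been projected onto a few PCs:

```python
import numpy as np

def fit_skew(X, dX):
    """Least-squares fit of dX ~ X @ M with M constrained to be
    skew-symmetric (M.T == -M).  X, dX: (time x dims) arrays holding
    PC-projected activity and its time derivative."""
    n = X.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def basis(i, j):
        # Skew-symmetric basis matrix: +1 at (i, j), -1 at (j, i)
        B = np.zeros((n, n))
        B[i, j], B[j, i] = 1.0, -1.0
        return B

    # Linear system over the n(n-1)/2 free parameters of M
    A = np.stack([(X @ basis(i, j)).ravel() for i, j in pairs], axis=1)
    coef, *_ = np.linalg.lstsq(A, dX.ravel(), rcond=None)

    M = np.zeros((n, n))
    for c, (i, j) in zip(coef, pairs):
        M[i, j], M[j, i] = c, -c
    return M

def rotation_plane(M):
    """Plane of strongest rotation: real and imaginary parts of the
    eigenvector paired with the largest-magnitude imaginary eigenvalue."""
    eigval, eigvec = np.linalg.eig(M)
    v = eigvec[:, np.argmax(np.abs(eigval.imag))]
    plane = np.stack([v.real, v.imag], axis=1)
    q, _ = np.linalg.qr(plane)   # orthonormalize the spanning directions
    return q
```

Fitting a pure two-dimensional rotation recovers the generating skew-symmetric matrix exactly, which is a convenient sanity check for the implementation.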
EMG signals
The EMG signals were recorded from macaque monkey Bx during the execution of a center-out reaching movement, as described in Balasubramanian et al., 2020. Bipolar electromyographic electrodes were implanted in 13 individual muscles (Anterior Deltoid, Posterior Deltoid, Pectoralis Major, Biceps Lateral, Biceps Medial, Triceps Long Head, Triceps Lateral, Brachioradialis, Flexor Carpi Ulnaris, Flexor Digitorum Superficialis, Extensor Carpi Radialis, Extensor Digitorum Communis, Extensor Carpi Ulnaris). The signals were individually amplified, bandpass filtered between 0.3 and 1 kHz prior to digitization, and sampled at 10 kHz. The EMG signals shown in Figure 5b were rectified and smoothed; signals from different trials were rescaled to match the average length of movement execution, and then averaged over trials. The linear readout of the activity from simulations was defined as

$$\hat{Y} = W^{\top} X$$

where $X$ is the trial-averaged activity from simulations, with the $C$ conditions concatenated into a matrix of dimensions $N \times TC$, $T$ being the average duration of movement execution; $W$ is a matrix of dimension $N \times M$, with $M$ the number of recorded muscles. For each muscle $m$, the EMG signal was regressed onto the activity to find the readout weight vector $w_m$. For the reconstructed EMG signals shown in Figure 5b, we used as inputs the activity of N = 1000 units drawn at random from the 16,000 units in the network model. For each muscle $m$, we cross-validated the regression by randomly partitioning the $TC$ time-steps into 10 folds of non-consecutive time points. The decoder was trained on 9 folds using LASSO regression and tested on the left-out fold. We repeated the procedure 10 times and obtained a normalized mean squared error of 0.0066.
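The cross-validation scheme can be sketched as follows. For brevity this illustration substitutes plain least squares for the LASSO regression used in the paper, and all names are ours rather than the published analysis code:

```python
import numpy as np

def crossval_readout(X, y, n_folds=10, seed=0):
    """Cross-validated linear readout of one muscle's EMG from network
    activity.  X: (N units x T time-steps), y: (T,) EMG trace.
    Time-steps are partitioned at random into folds of non-consecutive
    points; plain least squares stands in here for LASSO."""
    T = X.shape[1]
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(T), n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Readout weights w solving y ~ w @ X on the training time-steps
        w, *_ = np.linalg.lstsq(X[:, train].T, y[train], rcond=None)
        pred = w @ X[:, test]
        errs.append(np.mean((pred - y[test]) ** 2))
    # Mean squared error across folds, normalized by the signal variance
    return np.mean(errs) / np.var(y)
```

When the target is an exact linear function of the activity, the normalized error is numerically zero, so nonzero values reflect genuine mismatch rather than the folding procedure.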
Data availability
Source data and all the code used for data analysis will be made publicly available at https://github.com/lbachromano/M1_Preparatory_Movement_Representation (copy archived at Bachschmid-Romano et al., 2023).
References
- Kinematic coordinates in which motor cortical cells encode movement direction. Journal of Neurophysiology 84:2191–2203. https://doi.org/10.1152/jn.2000.84.5.2191
- Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics 27:77–87. https://doi.org/10.1007/BF00337259
- Software: M1_Preparatory_Movement_Representation, version swh:1:rev:1ab4d05d900a55f4f44a7d02b39db72620fae02a. Software Heritage.
- CircStat: a MATLAB toolbox for circular statistics. Journal of Statistical Software 31:1–21. https://doi.org/10.18637/jss.v031.i10
- Motor cortex is functionally organized as a set of spatially distinct representations for complex movements. The Journal of Neuroscience 34:13574–13585. https://doi.org/10.1523/JNEUROSCI.2500-14.2014
- Accurate path integration in continuous attractor network models of grid cells. PLOS Computational Biology 5:e1000291. https://doi.org/10.1371/journal.pcbi.1000291
- An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization 9:877–900.
- A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability. Journal of Computational Neuroscience 5:383–405. https://doi.org/10.1023/a:1008837311948
- Functional classes of primate corticomotoneuronal cells and their relation to active force. Journal of Neurophysiology 44:773–791. https://doi.org/10.1152/jn.1980.44.4.773
- Preparatory activity in premotor and motor cortex reflects the speed of the upcoming reach. Journal of Neurophysiology 96:3130–3146. https://doi.org/10.1152/jn.00307.2006
- Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. Journal of Neurophysiology 97:4235–4257. https://doi.org/10.1152/jn.00095.2007
- Neural implementation of Bayesian inference in a sensorimotor behavior. Nature Neuroscience 21:1442–1451. https://doi.org/10.1038/s41593-018-0233-y
- Neuronal activity in monkey superior colliculus related to the initiation of saccadic eye movements. The Journal of Neuroscience 17:8566–8579. https://doi.org/10.1523/JNEUROSCI.17-21-08566.1997
- Relation of pyramidal tract activity to force exerted during voluntary movement. Journal of Neurophysiology 31:14–27. https://doi.org/10.1152/jn.1968.31.1.14
- Static spatial effects in motor cortex and area 5: quantitative relations in a two-dimensional space. Experimental Brain Research 54:446–454. https://doi.org/10.1007/BF00235470
- A radial basis function method for global optimization. Journal of Global Optimization 19:201–227. https://doi.org/10.1023/A:1011255519438
- Modeling Feature Selectivity in Local Cortical Circuits. University of California San Diego.
- Encoding of movement fragments in the motor cortex. The Journal of Neuroscience 27:5105–5114. https://doi.org/10.1523/JNEUROSCI.3570-06.2007
- Neuronal coding of static force in the primate motor cortex. Journal de Physiologie 74:237–239.
- Context-dependent force coding in motor and premotor cortical areas. Experimental Brain Research 128:123–133. https://doi.org/10.1007/s002210050827
- Primary motor cortical neurons encode functional muscle synergies. Experimental Brain Research 146:233–243. https://doi.org/10.1007/s00221-002-1166-x
- Decoding arm speed during reaching. Nature Communications 9:1–14. https://doi.org/10.1038/s41467-018-07647-3
- Short-term facilitation may stabilize parametric working memory trace. Frontiers in Computational Neuroscience 5:40. https://doi.org/10.3389/fncom.2011.00040
- Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience 17:440–448. https://doi.org/10.1038/nn.3643
- Tuning curves for arm posture control in motor cortex are consistent with random connectivity. PLOS Computational Biology 12:e1004910. https://doi.org/10.1371/journal.pcbi.1004910
- A dynamical neural network model for motor cortical activity during movement: population coding of movement trajectories. Biological Cybernetics 69:517–524.
- Deciphering the hippocampal polyglot: the hippocampus as a path integration system. The Journal of Experimental Biology 199:173–185. https://doi.org/10.1242/jeb.199.1.173
- Transitions between spatial attractors in place-cell models. Physical Review Letters 115:098101. https://doi.org/10.1103/PhysRevLett.115.098101
- Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology 82:2676–2692. https://doi.org/10.1152/jn.1999.82.5.2676
- Corticospinal neurons with a special role in precision grip. Brain Research 261:312–316. https://doi.org/10.1016/0006-8993(83)90635-2
- Spatiotemporal tuning of motor cortical neurons for hand position and velocity. Journal of Neurophysiology 91:515–532. https://doi.org/10.1152/jn.00587.2002
- Preparatory activity in motor cortex reflects learning of local visuomotor skills. Nature Neuroscience 6:882–890. https://doi.org/10.1038/nn1097
- Dynamic encoding of movement direction in motor cortical neurons. The Journal of Neuroscience 29:13870–13882. https://doi.org/10.1523/JNEUROSCI.5441-08.2009
- Continuous attractors with morphed/correlated maps. PLOS Computational Biology 6:e1000869. https://doi.org/10.1371/journal.pcbi.1000869
- Propagating waves mediate information transfer in the motor cortex. Nature Neuroscience 9:1549–1557. https://doi.org/10.1038/nn1802
- Path integration and cognitive mapping in a continuous attractor neural network model. The Journal of Neuroscience 17:5900–5920. https://doi.org/10.1523/JNEUROSCI.17-15-05900.1997
- The role of primary motor cortex in goal-directed movements: insights from neurophysiological studies on non-human primates. Current Opinion in Neurobiology 13:671–677. https://doi.org/10.1016/j.conb.2003.10.012
- The computational and neural basis of voluntary motor control and planning. Trends in Cognitive Sciences 16:541–549. https://doi.org/10.1016/j.tics.2012.09.008
- Stability of working memory in continuous attractor networks under the control of short-term plasticity. PLOS Computational Biology 15:e1006928. https://doi.org/10.1371/journal.pcbi.1006928
- Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks. Journal of Neurophysiology 94:2353–2378. https://doi.org/10.1152/jn.00989.2004
- Cortical control of arm movements: a dynamical systems perspective. Annual Review of Neuroscience 36:337–359. https://doi.org/10.1146/annurev-neuro-062111-150509
- An emergent model of orientation selectivity in cat visual cortical simple cells. The Journal of Neuroscience 15:5448–5465. https://doi.org/10.1523/JNEUROSCI.15-08-05448.1995
- Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells. Network 13:217–242.
- A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience 18:1025–1033. https://doi.org/10.1038/nn.4042
- Anticipatory activity of motor cortex neurons in relation to direction of an intended movement. Journal of Neurophysiology 39:1062–1068. https://doi.org/10.1152/jn.1976.39.5.1062
- Direct cortical control of muscle activation in voluntary arm movements: a model. Nature Neuroscience 3:391–398. https://doi.org/10.1038/73964
- Cosine tuning minimizes motor errors. Neural Computation 14:1233–1260. https://doi.org/10.1162/089976602753712918
- Associative memory and hippocampal place cells. International Journal of Neural Systems 6:81–86.
- Computation through neural population dynamics. Annual Review of Neuroscience 43:249–275. https://doi.org/10.1146/annurev-neuro-092619-094115
- Changes in motor cortical activity during visuomotor adaptation. Experimental Brain Research 121:285–299. https://doi.org/10.1007/s002210050462
- Activity of superior colliculus in behaving monkey. III. Cells discharging before eye movements. Journal of Neurophysiology 35:575–586. https://doi.org/10.1152/jn.1972.35.4.575
- Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. The Journal of Neuroscience 16:2112–2126. https://doi.org/10.1523/JNEUROSCI.16-06-02112.1996
- Independent generation of sequence elements by motor cortex. Nature Neuroscience 24:412–424. https://doi.org/10.1038/s41593-021-00798-5
Article and author information
Author details
Funding
National Institutes of Health (R01NS104898)
- Nicolas Brunel
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Wei Liang from the Hatsopoulos lab for providing the EMG signals; Jason MacLean, Alex P Vaz, Subhadra Mokashe, and Alessandro Sanzeni for helpful discussions; and Stephen H Scott for bringing the work of Nashef et al., 2021 to our attention. This work has been supported by NIH R01NS104898.
Ethics
The data analyzed in this paper were previously published in Rubino, Robbins, and Hatsopoulos, Nature Neuroscience, 2006 Dec;9(12):1549–57, where the ethics approval is described. No ethical approval was necessary for this article because no new experiments were performed.
Copyright
© 2023, Bachschmid-Romano et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.