Neural population dynamics in motor cortex are different for reach and grasp
Abstract
Low-dimensional linear dynamics are observed in neuronal population activity in primary motor cortex (M1) when monkeys make reaching movements. This population-level behavior is consistent with a role for M1 as an autonomous pattern generator that drives muscles to give rise to movement. In the present study, we examine whether similar dynamics are also observed during grasping movements, which involve fundamentally different patterns of kinematics and muscle activations. Using a variety of analytical approaches, we show that M1 does not exhibit such dynamics during grasping movements. Rather, the grasp-related neuronal dynamics in M1 are similar to their counterparts in somatosensory cortex, whose activity is driven primarily by afferent inputs rather than by intrinsic dynamics. The basic structure of the neuronal activity underlying hand control is thus fundamentally different from that underlying arm control.
Introduction
The responses of populations of neurons in primary motor cortex (M1) exhibit rotational dynamics – reflecting a neural oscillation at the population level – when animals make arm movements, including reaching and cycling (Churchland et al., 2012; Lara et al., 2018a; Russo et al., 2018; Shenoy et al., 2013). One interpretation of this population-level behavior is that M1 acts as a pattern generator that drives muscles to give rise to movement. A major question is whether such population dynamics reflect a general principle of M1 function, or whether they underlie some behaviors and effectors but not others. To address this question, we examined the dynamics in the neuronal population activity during grasping movements, which involve a plant (the hand) that serves a different function, comprises more joints, and is characterized by different mechanical properties (Rathelot and Strick, 2009). While the hand is endowed with many degrees of freedom, hand kinematics can be largely accounted for within a small subspace (Ingram et al., 2008; Overduin et al., 2015; Santello et al., 1998; Tresch and Jarc, 2009), so we might expect to observe low-dimensional neural dynamics during hand movements, not unlike those observed during arm movements.
To test this, we recorded the neural activity in M1 using chronically implanted electrode arrays as monkeys performed a grasping task, restricting our analyses to responses before object contact (Figure 1—figure supplement 1). Animals were required to hold their arms still at the elbow and shoulder joints as a robotic arm presented each object to their contralateral hand. This task – which can be likened to catching a tossed object or grasping an offered one – limits proximal limb movements and isolates grasping movements. For comparison, we also examined the responses of M1 neurons during a center-out reaching task (Hatsopoulos et al., 2007). In addition, we compared grasping responses in M1 to their counterparts in somatosensory cortex (SCx), which is primarily driven by afferent input and therefore should not exhibit autonomous dynamics (Russo et al., 2018).
Results
First, we used jPCA to search for rotational dynamics in a low-dimensional manifold of M1 population activity (Figure 1; Churchland et al., 2012). Replicating previous findings, reaching was associated with a variety of different activity patterns at the single-neuron level (Figure 1A) that were collectively governed by rotational dynamics at the population level (Figure 1C,E). During grasp, individual M1 neurons similarly exhibited a variety of different response profiles (Figure 1B), but rotational dynamics were weak or absent at the population level (Figure 1D,E).

M1 rotational dynamics during reaching and grasping.
(A) Normalized peri-event histograms aligned to movement onset (black square) for four representative neurons during the reaching task (Monkey 4, Dataset 5). Each shade of gray indicates a different reach direction, trial-averaged for each reaching condition (eight total). (B) Normalized peri-event histograms aligned to maximum aperture (black square) for four representative neurons during the grasping task (Monkey 2, Dataset 2). Each shade of blue indicates a neuron’s response, trial-averaged for different object groups. (C) Rotational dynamics in the population response during reaching for Monkey 4 (Dataset 5) projected onto the first jPCA plane. Different shades of gray denote different reach directions. (D) Lack of similar M1 rotational dynamics during grasping. Different shades of blue indicate different object groups, for Monkey 2 (Dataset 2). (E) FVE (fraction of variance explained) in the rate of change of neural PCs (dx/dt) explained by the best fitting rotational dynamical system. The difference in FVE for reach and grasp is significant (two-sample two-sided equal-variance t-test, t(16) = −19.44, p=4.67e-13). Error bars denote standard error of the mean and data points represent the outcomes of cross-validation folds (across conditions – see Materials and methods) for each of two monkeys. (F) FVE in the rate of change of neural PCs (dx/dt) explained by the best fitting linear dynamical system, not constrained to be rotational. The difference in FVE is highly significant (two-sample two-sided equal-variance t-test, t(16) = −21.37 p=1.57e-14). Error bars denote standard error of the mean and data points represent the outcomes of cross-validation folds for each of two monkeys (fourfold for reaching data, and 5-fold for grasping data). The lack of dynamical structure during grasping relative to reach is further established in a series of control analyses (Figure 1—figure supplement 1).
Given the poor fit of rotational dynamics to neural activity during grasp, we next assessed whether activity could be described by a linear dynamical system of any kind. To test for linear dynamics, we fit a regression model using the first 10 principal components of the M1 population activity (x(t)) to predict their rates of change (dx/dt). We found x(t) to be far less predictive of dx/dt in grasp than in reach, suggesting much weaker linear dynamics in M1 during grasp (Figure 1F). We verified that these results were not an artifact of data alignment, movement epoch, peak firing rate, smoothing, population size, or number of behavioral conditions (Figure 1—figure supplement 2).
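The regression test just described can be sketched as follows: the low-dimensional neural state x(t) is numerically differentiated, a linear model dx/dt = xM is fit by least squares, and the fraction of variance of dx/dt explained by that model is computed. This is an illustrative sketch (function and variable names are ours, not the authors'):

```python
import numpy as np

def fit_linear_dynamics(X, dt=0.01):
    """Fit dX/dt = X @ M by least squares and return (M, FVE).

    X : (T, d) array of neural state over time
        (e.g. the first 10 principal components).
    """
    dX = np.gradient(X, dt, axis=0)              # numerical derivative, (T, d)
    M, *_ = np.linalg.lstsq(X, dX, rcond=None)   # best linear dynamics matrix
    resid = dX - X @ M
    fve = 1.0 - (np.linalg.norm(resid) ** 2
                 / np.linalg.norm(dX - dX.mean(axis=0)) ** 2)
    return M, fve
```

A state following strong linear dynamics (e.g. a pure rotation) yields FVE near 1, whereas input-driven activity with no consistent flow field yields a much lower value.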
The possibility remains that dynamics are present in M1 during grasp, but that they are higher-dimensional or more nonlinear than during reach. Indeed, M1 population activity during a reach-grasp-manipulate task is higher dimensional than is M1 activity during reach alone (Rouse and Schieber, 2018). In light of this, we used Latent Factor Analysis via Dynamical Systems (LFADS) to infer and exploit latent dynamics and thereby improve estimation of single-trial firing rates, then applied a decoder to evaluate the level of improvement. Naturally, the benefit of LFADS is only realized if the neural population acts like a dynamical system. Importantly, such dynamics are minimally constrained and can, in principle, be arbitrarily high dimensional and/or highly nonlinear. First, as expected, we found that in both datasets, neural reconstruction of single trials improved with LFADS (Figure 2—figure supplement 1A,B). However, LFADS yielded a significantly greater improvement in reconstruction accuracy for reach than for grasp (t(311) = 7.07, p=5.11e-12; Figure 2—figure supplement 1B). Second, a standard Kalman filter was used to decode joint angle kinematics from the inferred latent factors (Figure 2). If latent dynamics in M1 play a key role in the generation of temporal sequences of muscle activations, which in turn give rise to movement, LFADS should substantially improve kinematic decoding. Replicating previous results, we found decoding accuracy to be substantially improved for reaching when processing firing rates using LFADS (Figure 2A,C) (R2 = 0.93 and 0.57 with and without LFADS, respectively). In contrast, LFADS offered minimal improvement in accuracy when decoding grasping kinematics in two monkeys (Figure 2B,C) (R2 = 0.46 and 0.37), regardless of the latent dimensionality of the model (Figure 2—figure supplement 1C) or whether external inputs were included (Figure 2—figure supplement 1D).
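A minimal version of such a Kalman filter decoder, with kinematics as the latent state and neural factors as the observations, might look like the following. This is a sketch under standard neural-decoding conventions, not the exact implementation used here; all names are illustrative:

```python
import numpy as np

def fit_kalman(Z, Y):
    """Least-squares fit of z_t = z_{t-1} A + w and y_t = z_t C + q.

    Z : (T, k) kinematic states; Y : (T, n) neural factors (observations).
    Returns transition matrix A, observation matrix C, noise covariances W, Q.
    """
    A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
    W = np.atleast_2d(np.cov((Z[1:] - Z[:-1] @ A).T))
    C, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    Q = np.atleast_2d(np.cov((Y - Z @ C).T))
    return A, C, W, Q

def kalman_decode(Y, A, C, W, Q, z0):
    """Run a standard Kalman filter, returning decoded states (T, k)."""
    k = len(z0)
    z, P = np.asarray(z0, float), np.eye(k)
    out = []
    for y in Y:
        # predict step (row-vector convention, so covariance uses A.T ... A)
        z = z @ A
        P = A.T @ P @ A + W
        # update step
        S = C.T @ P @ C + Q
        K = P @ C @ np.linalg.inv(S)          # Kalman gain
        z = z + (y - z @ C) @ K.T
        P = (np.eye(k) - K @ C.T) @ P
        out.append(z)
    return np.array(out)
```

The filter is fit on training trials and then run on held-out trials; decoding accuracy (R2) between decoded and actual joint angles quantifies performance.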
These decoding results demonstrate that the strong dynamical structure seen in the M1 population activity during reach is not observed during grasp, even when dimensionality and linearity constraints are relaxed.

Decoding of kinematics based on population activity pre-processed with Gaussian smoothing or with LFADS.
(A) End-point coordinates of center-out reaching with actual kinematics (top) or kinematics reconstructed with neural data preprocessed with Gaussian smoothing (middle) or LFADS (bottom). Coordinates are color-coded according to the eight directions of movement. While conditions are visually separable in both Gaussian and LFADS reconstructions, the latter provides a smoother and more reliable estimate. (B) Single-trial time-varying angles of five hand joints (black, dashed) from Monkey 3 as it grasped five objects along with their decoded counterparts (Gaussian-smoothed in green, LFADS-inferred in red). Both Gaussian-smoothed and LFADS-inferred firing rates yield similar decoding errors. Here, ‘4mcp flexion’ refers to flexion/extension of the fourth metacarpophalangeal joint; ‘5pip flexion’ - flexion/extension of the fifth proximal interphalangeal joint; and ‘1cmc flexion’ - flexion/extension of the first carpo-metacarpal joint. (C) Difference in performance gauged by the coefficient of determination between decoders with LFADS and Gaussian smoothing for reach (gray) and grasp (blue). Each point denotes the mean performance increase across 10-fold cross-validation of all degrees of freedom pooled across monkeys for reach (2 monkeys with 2 DoFs each) and grasp (2 monkeys with 22 and 29 DoFs, respectively). All decoders were fit using a population of 37 M1 neurons. LFADS leads to significantly larger decoder performance improvement for reach than for grasp. Stars indicate significance of a Mann-Whitney-Wilcoxon test for unmatched samples: *** - alpha of 0.001 for one-sided alternative hypothesis.
As a separate way to examine the neural dynamics in grasping responses, we computed a neural ‘tangling’ metric, which assesses the degree to which network dynamics are governed by a smooth and consistent flow field (Russo et al., 2018). In a smooth, autonomous dynamical system, neural trajectories passing through nearby points in state space should have similar derivatives. The tangling metric (Q) assesses the degree to which this is the case over a specified (reduced) number of dimensions. During reaching, muscle activity and movement kinematics have been shown to exhibit more tangling than does M1 activity, presumably because the cortical circuits act as a dynamical pattern generator whereas muscles are input-driven (Russo et al., 2018). We replicated these results for reaching: neural activity was much less tangled than the corresponding arm kinematics (position, velocity, and acceleration of joint angles) (Figure 3A), as long as the subspace was large enough (>2D) (Figure 3—figure supplement 1). For grasp, however, M1 activity was as tangled as the corresponding hand kinematics, or even more so (Figure 3B), over all subspaces (Figure 3—figure supplement 1). Next, we compared tangling in the grasp-related activity in M1 to its counterpart in SCx, which, as a sensory area, is expected to exhibit tangled activity (as shown during reaching movements [Russo et al., 2018]). Surprisingly, population activity patterns in both M1 and SCx were similarly tangled during grasp (Figure 3C). In summary, M1 responses during grasp do not exhibit the properties of an autonomous dynamical system, but rather are tangled to a similar degree as are sensory cortical responses (Figure 3D).
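The tangling metric can be sketched as follows, following the definition in Russo et al. (2018): for each time point, Q is the largest ratio of the squared difference in derivatives to the squared difference in states across all other time points, with a small constant in the denominator for robustness. The scaling of that constant is an assumption here:

```python
import numpy as np

def tangling(X, dt=0.01, eps_scale=0.1):
    """Tangling metric Q(t) in the spirit of Russo et al. (2018).

    X : (T, d) trajectory (neural state or kinematics).
    Q(t) = max_t' ||dx_t - dx_t'||^2 / (||x_t - x_t'||^2 + eps),
    with eps a small constant scaled to the data variance (an assumption).
    """
    dX = np.gradient(X, dt, axis=0)
    eps = eps_scale * X.var(axis=0).sum()
    # pairwise squared distances between states and between derivatives
    dist_x = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dist_dx = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (dist_dx / (dist_x + eps)).max(axis=1)
```

Intuitively, a trajectory that revisits the same state with very different derivatives (as an input-driven system can) produces large Q, whereas a smooth autonomous flow keeps Q low.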

Tangling in reach and grasp.
(A) Tangling metric (Q) for population responses in motor cortex vs. Q for kinematics during reaching. Kinematic tangling is higher than neural tangling, consistent with motor cortex acting as a pattern generator during reach. (B) Q-M1 population vs. Q-kinematics for grasping. Neural tangling is higher than kinematic tangling, which argues against pattern generation as the dominant mode during grasp. (C) Q-M1 population vs. Q-SCx population. Neural tangling is similar in M1 and SCx. For plots A-C, each point represents the max Q value for a (trial-averaged) neural state at a single time point and single task condition for one monkey (Monkey 1, Dataset 1). (D) Log of Q-motor/Q-kinematics of the arm during reach (KA), Q-motor/Q-kinematics of the hand during grasp (KH), and Q-motor/Q-sensory during grasp (Ns). Each point represents the log-ratio for a single condition and time point (pooled across two monkeys each). Black bars denote the mean log-ratio. The differences between reaching-derived and grasping-derived log-ratios are significant and substantial (two-sample two-sided equal-variance t-test: KH | t(2978)=-43, p=1.03e-130; Ns | t(2978)=-39, p=1.87e-121). Tangling is insensitive to the precise dimensionality, provided it exceeds a minimum dimensionality (Figure 3—figure supplement 1).
Discussion
We find that M1 does not exhibit low-dimensional dynamics during grasp as it does during reach (Churchland et al., 2012), reach-to-grasp (Rouse and Schieber, 2018), or reach-like center-out pointing (Pandarinath et al., 2015). The difference between reach- and grasp-related neuronal dynamics seems to stem from the fundamentally different kinematics and functions of these movements, rather than from effector-specific differences, since dynamics are observed for reach-like finger movements. That rotational dynamics are observed in reach-to-grasp likely reflects the reaching component of the behavior, consistent with the observation that movement signals are broadcast widely throughout motor cortex (Musall et al., 2019; Stavisky et al., 2019; Willett et al., 2020).
Other factors might also explain the different dynamical profiles in M1 between reach and grasp. One might conjecture that M1 population dynamics are much higher dimensional and/or more nonlinear for grasp than for reach, which might explain our failure to detect dynamics in grasp-related M1 activity. However, both LFADS (Pandarinath et al., 2018; Figure 2—figure supplement 1) and the tangling metric (Figure 3—figure supplement 1) can accommodate high-dimensional systems and some degree of nonlinearity in the dynamics. We verified that our failure to observe dynamics did not stem from a failure to adequately characterize a high-dimensional grasp-related response in M1 commensurate with the dimensionality of the movement (See ‘Dimensionality of the neuronal response’ in the Materials and methods, Figure 3—figure supplement 2). We cannot exclude the possibility that dynamics may be observed in a much higher dimensional space than we can resolve with our sample, one whose dimensionality far exceeds that of the movement itself. To test this hypothesis will require large-scale neural recordings obtained during grasping.
Another possibility is that M1 dynamics are under greater influence from extrinsic inputs for grasp than for reach: inputs can push neuronal activity away from the trajectories dictated by the intrinsic dynamics, thereby giving rise to tangling. M1 receives input from large swaths of the brain that each exhibit their own dynamics, including the supplementary motor area (Lara et al., 2018b; Russo et al., 2020), premotor and posterior parietal cortices (Michaels et al., 2018), and motor thalamus (Sauerbrei et al., 2020), in addition to responding to somatosensory and visual inputs (Suminski et al., 2010). Our findings are consistent with the hypothesis that grasp involves more inputs to M1 than does reach, or that grasp-related inputs are more disruptive to the intrinsic dynamics in M1 than are their reach-related counterparts (Figure 2—figure supplement 1).
Whatever the case may be, the low-dimensional linear dynamics observed in M1 during reaching are not present during grasping, consistent with an emerging view that the cortical circuits that track and control the hand differ from those that track and control the proximal limb (Goodman et al., 2019; Rathelot and Strick, 2009).
Materials and methods
Behavior and neurophysiological recordings for grasping task
We recorded single-unit responses in the primary motor and somatosensory cortices (M1 and SCx) of two monkeys (Macaca mulatta) (M1: N1 = 58, N2 = 53 | SCx: N1 = 26, N2 = 28) as they grasped each of 35 objects an average of 10 times per session. We refer to these recordings as Dataset 1 and Dataset 2, which were recorded from Monkey 1 and Monkey 2, respectively. Neuronal recordings were obtained across 6 and 9 sessions, respectively, and are used in the jPCA and tangling analyses. We also recorded simultaneously from populations of neurons in M1 in two monkeys (N3 = 44, N4 = 37) during a single session of this same task. These are called, respectively, Dataset 3 and Dataset 4. The first of these (N3) was recorded from a third monkey, Monkey 3; the second population of simultaneously recorded neurons (N4) was obtained from the same monkey (Monkey 1) as the first set of sequentially recorded neurons (N1). The recordings in Monkey 1 were achieved with different arrays and separated by 3 years. Simultaneously recorded populations were used for the decoding analyses.
On each trial (Figure 1—figure supplement 1), one of 25 objects was manually placed on the end of an industrial robotic arm (MELFA RV-1A, Mitsubishi Electric, Tokyo, Japan). After a 1–3 s delay, randomly drawn on a trial-by-trial basis, the robot translated the object toward the animal’s stationary hand. The animal was required to maintain its arms in the primate chair for the trial to proceed: if light sensors on the arm rest became unobstructed before the robot began to move, the trial was aborted. We also confirmed that the animal produced minimal proximal limb movement by inspecting videos of the experiments and from the reconstructed kinematics. The object began 12.8 cm from the animal’s hand and followed a linear trajectory toward the hand at a constant speed of 16 cm/s for a duration of 800 ms. As the object approached, the animal shaped its hand to grasp it. Some of the shapes were presented at different orientations, requiring a different grasping strategy, yielding 35 unique ‘objects’. Each object was presented 8–11 times in a given session.
The timing of start of movement, maximum aperture, and grasp events was inferred on the basis of the recorded kinematics. A subset of trials from each session was manually scored for each of these three events. On the basis of these training data, joint angular kinematic trajectories spanning 200 ms before and after each frame were used as features to train a multi-class linear discriminant classifier to discriminate among these four classes: all three events of interest and ‘no event’. Log likelihood ratio was used to determine which ‘start of movement’, ‘maximum aperture’, and ‘grasp’ times were most probable relative to ‘no event’. Events were sequentially labeled for each trial to enforce the constraint that start of movement precedes maximum aperture, and maximum aperture precedes grasp. The median interval between the start of movement and maximum aperture was 450 ± 85 ms (median ± interquartile range) for Monkey 1 (across both sets of recordings), 240.0 ± 10.0 ms for Monkey 2, and 456 ± 216 ms for Monkey 3. The interval between maximum aperture and grasp was 356 ± 230 ms for Monkey 1, 410 ± 160 ms for Monkey 2, and 274 ± 145 ms for Monkey 3. Total grasp times from start of movement to grasp were 825 ± 280 ms for Monkey 1, 650 ± 170 ms for Monkey 2, and 755 ± 303 ms for Monkey 3.
Neural recordings were obtained from two monkeys (N1 and N2) using semi-chronic electrode arrays (SC96 arrays, Gray Matter Research, Bozeman, MT) (Dotson et al., 2017; Figure 1—figure supplement 1). Electrodes, which were individually depth-adjustable, were moved to different depths on different sessions to capture new units. Units spanning both M1 and SCx were recorded using these arrays, and SCx data comprise populations from both proprioceptive subdivisions of SCx, namely, Brodmann’s areas 3a and 2. Simultaneous neural recordings were obtained from one monkey (N3) using a combination of Utah electrode arrays (UEAs, Blackrock Microsystems, Inc, Salt Lake City, UT) and floating microelectrode arrays (FMAs, Microprobes for Life Science, Gaithersburg, MD) targeting rostral and caudal subdivisions of the hand representation of M1, respectively. In the other monkey (N4), simultaneous population recordings were obtained using a single 64-channel Utah array targeting the hand representation of (rostral) M1. Single units from all sessions (treated as distinct units) were extracted using Offline Sorter (Plexon Inc, Dallas, TX). Units were identified based on inter-spike interval distribution and waveform shape and size.
Hand joint kinematics, namely the angles and angular velocities about all motile axes of rotation in the joints of the wrist and digits, were tracked at a rate of 100 Hz by means of a 14-camera motion tracking system (MX-T series, VICON, Los Angeles, CA). The VICON system tracked the three-dimensional positions of the markers, and time-varying joint angles were computed using inverse kinematics based on a musculoskeletal model of the human arm (https://simtk.org/projects/ulb_project) (Anderson and Pandy, 2001; Anderson and Pandy, 1999; de Leva, 1996; Delp et al., 1990; Dempster and Gaughran, 1967; Holzbaur et al., 2005; Yamaguchi and Zajac, 1989) implemented in Opensim (https://simtk.org/frs/index.php?group_id=91) (Delp et al., 2007) with segments scaled to the sizes of those in a monkey limb using Opensim’s built-in scaling function. Task and kinematic recording methods are reported in an earlier publication (Goodman et al., 2019). We used a linear discriminant classifier as detailed in this previous publication to determine whether objects indeed evoked distinct kinematics (Figure 1—figure supplement 1).
All surgical, behavioral, and experimental procedures conformed to the guidelines of the National Institutes of Health and were approved by the University of Chicago Institutional Animal Care and Use Committee.
Behavior and neurophysiological recordings for reaching task
To compare grasp to reach, we analyzed previously published single- and multi-unit responses from two additional M1 populations (M1: N5 = 76, N6 = 107) recorded from two additional monkeys (Monkeys 4 and 5, respectively) operantly trained to move a cursor in a variable-delay center-out reaching task (Hatsopoulos et al., 2007). These recordings are called Dataset 5 and Dataset 6, respectively. The monkey’s arm rested on cushioned arm troughs secured to links of a two-joint exoskeletal robotic arm (KINARM system; BKIN Technologies, Kingston, Ontario, Canada) underneath a projection surface. The shoulder and elbow joint angles were sampled at 500 Hz by the motor encoders of the robotic arm, and the x and y positions of the hand were computed using the forward kinematic equations. The center-out task involved movements from a center target to one of eight peripherally positioned targets (5 to 7 cm away). Targets were radially defined, spanning a full 360° rotation about the central target in 45° increments. Each trial comprised two epochs: first, an instruction period lasting 1 to 1.5 s, during which the monkey held its hand over the center target to make the peripheral target appear; second, a ‘go’ period, cued by blinking of the peripheral target, which indicated to the monkey that it could begin to move toward the target. Following the ‘go’ cue, movement onset occurred after 324 ± 106 ms (median ± interquartile range) for Monkey 4 in Dataset 5, and 580 ± 482 ms for Monkey 5 in Dataset 6. Total movement duration was 516 ± 336 ms for Monkey 4 in Dataset 5 and 736 ± 545 ms for Monkey 5 in Dataset 6. Single- and multi-unit activities were recorded from each monkey during the course of a single session using a UEA implanted in the upper limb representation of contralateral M1.
Information about all grasping and reaching datasets and their associated analyses is provided in Table 1 of Supplementary file 1.
Differences between reach and grasp and their potential implications for population dynamics
In this section, we discuss differences between the reach and grasp tasks that might have had an impact on the neuronal dynamics.
First, movements were cued differently in the two tasks. For reaching, targets blinked to cue movement. For grasping, there was no explicit movement cue; rather, the animals could begin preshaping their hand as soon as the robot began to move, although they had to wait for the object to reach the hand to complete their grasp and obtain a reward. Nonetheless, we found that the delay between the beginning of the robot’s approach and hand movement onset (median ± interquartile range: Monkey 1 – 271 ± 100 ms; Monkey 2 – 419 ± 101 ms; numbers not available for Monkey 3) was similar to the delay in the reaching task between the go cue and start of movement. Note, moreover, that the nature of the ‘delay’ period should have little effect on neuronal dynamics. Indeed, self-initiated reaches and those that are executed rapidly with little to no preparation are nonetheless associated with rotational M1 dynamics (Lara et al., 2018a).
Second, the kinematics of reaching and grasping are quite different, and differences in the respective ranges of motion or speeds could mediate the observed differences in neuronal dynamics. However, the ranges of motion and distribution of speeds were similar for reach and grasp (Figure 1—figure supplement 1C–D,G), suggesting that the observed differences in neuronal dynamics are not trivial consequences of differences in the kinematics. On a related note, grasping movements with no reach (lasting roughly 700 ms) were generally slower than those reported in the context of reach (lasting roughly 300 ms) (Bonini et al., 2014; Chen et al., 2009; Lehmann and Scherberger, 2013; Rouse and Schieber, 2015; Roy et al., 2000; Theverapperuma et al., 2006), as the animals had to wait for the robot to transport the object to their hand. Note, however, that similar constraints on movement duration and speed during reaching do not affect the presence or nature of M1 rotational dynamics during those movements (Churchland et al., 2012). As such, speed differences should not lead to qualitatively different M1 population dynamics.
Third, we considered whether grasping without reaching might simply be too ‘unnatural’ to be controlled by stereotyped M1 dynamics. However, we observed two hallmarks of grasping behavior: a clearly defined maximum aperture phase and the presence of hand pre-shaping (Jeannerod, 1984; Jeannerod, 1981; Santello et al., 2002; Santello and Soechting, 1998). The latter is evidenced by a gradual improvement in our ability to classify objects based on the kinematics they evoke as the trial proceeded (Figure 1—figure supplement 1E): upon start of movement, the hand is in a generic configuration that is independent of the presented object. However, as the trial proceeds, hand kinematics become increasingly object-specific, culminating in a high classification performance just before object contact. Furthermore, grasping kinematics have been previously shown to be robust to different types of reaches (Wang and Stelmach, 1998).
Data analysis
Data pre-processing
For both reach and grasp, neuronal responses were aligned to the start of movement, resampled at 100 Hz so that they would be at the same temporal resolution, averaged across trials, then smoothed by convolution with a Gaussian (25 ms S.D.). For jPCA, we then followed the data pre-processing steps described in Churchland et al., 2012: normalization of individual neuronal firing rates, subtraction of the cross-condition mean peri-event time histogram (PETH) from each neuron’s response in each condition, and applying principal component analysis (PCA) to reduce the dimensionality of the population response. For LFADS and the tangling analyses, only the normalization of neurons’ firing rates was performed. Although the condition-invariant response varies in a meaningful way (Kaufman et al., 2016), its inclusion obstructs our ability to use jPCA to visualize neural trajectories whose initial conditions vary, and thus our ability to use jPCA to evaluate claims of dynamical structure. Even when this component is especially large, dynamical structure in the remaining condition-dependent neural activity has been observed (Rouse and Schieber, 2018), thus subtraction of even a large condition-independent response should permit the inference of neural dynamics. We used 10 dimensions instead of six (Churchland et al., 2012) as a compromise between the lower dimensional reach data and the higher dimensional grasp data.
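These pre-processing steps might be sketched as follows. The soft-normalization constant (+5) and the exact conventions are assumptions based on Churchland et al. (2012), and all names are illustrative:

```python
import numpy as np

def preprocess(rates, n_pcs=10):
    """jPCA-style pre-processing sketch.

    rates : (C, T, N) trial-averaged firing rates
            (conditions x time x neurons).
    Returns (C, T, n_pcs) low-dimensional projections.
    """
    C, T, N = rates.shape
    # 1. soft-normalize each neuron by its firing-rate range
    rng_ = rates.max(axis=(0, 1)) - rates.min(axis=(0, 1))
    X = rates / (rng_ + 5.0)                      # "+5" assumed per Churchland et al.
    # 2. subtract the cross-condition mean PETH from every condition
    X = X - X.mean(axis=0, keepdims=True)
    # 3. PCA on the stacked (C*T, N) matrix
    flat = X.reshape(C * T, N)
    flat = flat - flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ Vt[:n_pcs].T).reshape(C, T, n_pcs)
```

The output is the de-noised neural state used in the jPCA and linear-dynamics fits.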
jPCA
We applied to the population data (reduced to 10 dimensions by PCA) a published dimensionality reduction method, jPCA (Churchland et al., 2012), which finds orthonormal basis projections that capture rotational structure in the data. We used a similar number of dimensions for both reach and grasp, as PCA revealed no stark differences in the effective dimensionality of the neural population between the two tasks (Figure 1—figure supplement 1F). With jPCA, the neural state is compared with its derivative, and the strictly rotational dynamical system that explains the largest fraction of variance in that derivative is identified. The delay periods between presentation and go cue varied across monkeys, as did the reaction times, so we selected a single time interval (averaging 700 ms) that maximized rotational variance across all datasets. For the reach data, data were aligned to the start of movement and the analysis window was centered on this event, whereas for the grasp data, data were aligned to maximum hand aperture, and we analyzed the interval centered on this event. In some cases, the center of this 700 ms window was shifted between −350 ms and +350 ms relative to the alignment event to obtain an estimate of how rotational dynamics change over the course of the trial (e.g. Figure 1—figure supplement 2). These events were chosen for alignment as they were associated with both the largest peak firing rates and the strongest rotational dynamics. Other alignment events were also tested to assess robustness (Figure 1—figure supplement 2B).
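Once a skew-symmetric dynamics matrix has been fit, its rotational planes can be recovered from its eigenvectors: eigenvalues of a skew-symmetric matrix come in purely imaginary conjugate pairs ±iω, and the real and imaginary parts of each eigenvector span one rotation plane. A sketch, assuming an even number of dimensions with distinct rotation frequencies (names are illustrative):

```python
import numpy as np

def jpca_planes(M_skew):
    """Extract ordered rotation planes from a skew-symmetric dynamics matrix.

    Returns a list of (d, 2) orthonormal bases, sorted by rotation
    frequency |omega| (fastest plane first, as in jPCA).
    """
    evals, evecs = np.linalg.eig(M_skew)   # purely imaginary, conjugate pairs
    order = np.argsort(-np.abs(evals.imag))
    planes = []
    for i in order[::2]:                   # one eigenvector per conjugate pair
        v = evecs[:, i]
        b1, b2 = v.real, v.imag
        # orthonormalize the real/imaginary parts spanning the plane
        b1 = b1 / np.linalg.norm(b1)
        b2 = b2 - (b2 @ b1) * b1
        b2 = b2 / np.linalg.norm(b2)
        planes.append(np.stack([b1, b2], axis=1))
    return planes
```

Projecting the neural state onto the first such plane gives the kind of trajectory plotted in Figure 1C,D.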
Object clustering
Each of the 35 objects was presented 10 times per session, which yields a smaller number of trials per condition than were used to assess jPCA during reaching (at least 40). To permit pooling across a larger number of trials when visualizing and quantifying population dynamics with jPCA (Figure 1), objects in the grasp task were grouped into eight object clusters on the basis of the trial-averaged similarity of hand posture across all 30 joint degrees of freedom 10 ms prior to grasp (i.e. object contact). Objects were hierarchically clustered into eight clusters on the basis of the Ward linkage function (MATLAB clusterdata). Eight clusters were chosen to match the number of conditions in the reaching task. Cluster sizes were not uniform; the smallest comprised two and the largest nine different objects, with the median cluster comprising four objects.
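This clustering step can be sketched with standard tools; the original analysis used MATLAB's clusterdata, so the following SciPy version is an illustrative equivalent, not the authors' code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_objects(postures, n_clusters=8):
    """Group objects by similarity of evoked hand posture (Ward linkage).

    postures : (n_objects, n_joints) trial-averaged joint angles
               shortly before object contact.
    Returns integer cluster labels, one per object.
    """
    Z = linkage(postures, method='ward')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```

Objects with similar pre-contact hand postures receive the same label, and trials are then pooled within each label for the jPCA analyses.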
As the clustering method just described yielded different cluster sizes, we assessed an alternative clustering procedure (Figure 1—figure supplement 2F) that guaranteed objects were divided into seven equally-sized clusters (five objects per cluster). Rather than determining cluster membership on the basis of a linkage threshold, cluster linkages were instead used to sort the objects on the basis of their dendrogram placements (MATLAB dendrogram). Clusters were obtained by grouping the first five objects in this sorted list into a common cluster, then the next five, and so on. This resulted in slightly poorer performance of jPCA (see Quantification).
For completeness, we also assessed jPCA without clustering (Figure 1—figure supplement 2E), which also resulted in slightly poorer performance of jPCA and was considerably more difficult to visualize given the large number of conditions.
Quantification
In a linear dynamical system, the derivative of the state is a linear function of the state. We wished to assess whether a linear dynamical system could account for the neural activity. To this end, we first produced a de-noised low-dimensional neural state ($X$) by reducing the dimensionality of the neuronal responses to 10 using PCA. Second, we numerically differentiated $X$ to estimate the derivative, $\dot{X}$. Next, we used regression to fit a linear model, predicting the derivative of the neuronal state from the current state: $\dot{X} = XM$. Finally, we computed the fraction of variance explained (FVE) by this model:

$$\mathrm{FVE} = 1 - \frac{\|\dot{X} - XM\|_F^2}{\|\dot{X} - \overline{\dot{X}}\|_F^2}$$

$M$ was constrained to be skew-symmetric ($M = -M^T$) unless otherwise specified; $\overline{\dot{X}}$ indicates the mean of a matrix across samples, but not across dimensions; and $\|\cdot\|_F$ indicates the Frobenius norm of a matrix. Unless otherwise specified, analysis of reaching data from each monkey was four-fold cross-validated, whereas analysis of grasp data was five-fold cross-validated.
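The constrained regression and FVE computation can be illustrated with a short sketch (a minimal Python implementation, not the authors' code; the fit parametrizes $M$ over a basis of skew-symmetric matrices and solves the resulting least-squares problem):

```python
import numpy as np

def fit_skew_dynamics(X, Xdot):
    """Least-squares fit of Xdot ~ X @ M with M constrained to be
    skew-symmetric (M = -M.T). Returns M and the fraction of variance
    explained (FVE), with the mean taken across samples per dimension."""
    n = X.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cols = []
    for i, j in pairs:
        # X @ B for the basis matrix B with B[i, j] = 1, B[j, i] = -1.
        XB = np.zeros_like(X)
        XB[:, j] = X[:, i]
        XB[:, i] = -X[:, j]
        cols.append(XB.ravel())
    A = np.stack(cols, axis=1)
    beta, *_ = np.linalg.lstsq(A, Xdot.ravel(), rcond=None)
    M = np.zeros((n, n))
    for b, (i, j) in zip(beta, pairs):
        M[i, j], M[j, i] = b, -b
    resid = Xdot - X @ M
    fve = 1 - np.sum(resid**2) / np.sum((Xdot - Xdot.mean(axis=0))**2)
    return M, fve

# Sanity check on a pure rotation, where the fit should be near-perfect.
t = np.linspace(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t)])
Xdot = np.column_stack([-np.sin(t), np.cos(t)])   # equals X @ [[0, 1], [-1, 0]]
M, fve = fit_skew_dynamics(X, Xdot)
```

For a strictly rotational trajectory like this one, FVE approaches 1; for trajectories poorly captured by rotational dynamics, FVE drops toward zero.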
Control comparisons between arm and hand data
We performed several controls comparing arm and hand data to ensure that our results were not an artifact of trivial differences in the data or pre-processing steps.
First, we considered whether alignment of the data to different events might impact results. For the arm data, we aligned each trial to target onset and movement onset (Figure 1—figure supplement 2A). For the hand data, we aligned each trial to presentation of the object, movement onset, and the time at which the hand reached maximum aperture during grasp (Figure 1—figure supplement 2B). Linear dynamics were strongest (although still very weak) when neuronal responses were aligned to maximum aperture, so this alignment is reported throughout the main text.
Second, we assessed whether rotations might be obscured due to differences in firing rates in the hand vs. arm responses. To this end, we compared peak firing rates for trial-averaged data from the arm and hand after pre-processing (excluding normalization) to directly contrast the inputs to the jPCA analysis given the two effectors/tasks (Figure 1—figure supplement 2C). Peak firing rates were actually higher for the hand than the arm, eliminating the possibility that our failure to observe dynamics during grasp was a consequence of weak responses. We also verified that differences in dynamics could not be attributed to differences in the degree to which neurons were modulated in the two tasks. To this end, we computed the modulation range (90th percentile firing – 10th percentile firing) and found that modulation was similar in reach and grasp (Figure 1—figure supplement 2D).
Third, we assessed whether differences in the sample size might contribute to differences in variance explained (Figure 1—figure supplement 2E). To this end, we took five random samples of 55 neurons from the reaching data set – chosen to match the minimum number of neurons in the grasping data – and computed the cross-validated fraction of variance explained by the rotational dynamics. The smaller samples yielded fits identical to those of the full sample.
Fourth, we assessed whether the low variance explained by linear dynamics in the hand might be due to poor sampling of the joint motion space (Figure 1—figure supplement 2G). To this end, we computed FVE for only rightward reaches, and found that the variance explained for all directions versus only rightward reaches were comparable. Therefore, we expect that our sampling of hand motions would not affect our ability to observe linear dynamics.
Fifth, we considered whether our smoothing kernel might impact results (Figure 1—figure supplement 2H). We compared the FVE for the optimal linear dynamical system across various smoothing kernels – from 5 to 40 ms – and found that the difference between hand and arm dynamics remained substantial regardless of kernel width.
Finally, since our analyses involve averaging across trials, we assessed whether trial-to-trial variability was different for reach and grasp. To this end, we computed for each neuron the coefficient of variation (CV) of spike counts over 100 ms bins around movement onset. We found the trial-to-trial variability to be stable over the trial and similar for reach and grasp (Figure 1—figure supplement 2I).
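The variability measure amounts to a per-neuron coefficient of variation of spike counts across trials; a minimal sketch on Poisson surrogate data (the bin counts and firing rate here are hypothetical):

```python
import numpy as np

# Poisson surrogate for spike counts in a 100 ms bin around movement
# onset: n_trials x n_neurons (sizes and rate are hypothetical).
rng = np.random.default_rng(1)
spike_counts = rng.poisson(lam=5.0, size=(100, 60)).astype(float)

# Coefficient of variation across trials, one value per neuron.
mean = spike_counts.mean(axis=0)
sd = spike_counts.std(axis=0, ddof=1)
cv = sd / mean
```

For Poisson spiking the CV of counts is roughly one over the square root of the mean count, which gives a baseline against which reach and grasp variability can be compared.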
Decoding
Preprocessing
For decoding, we preprocessed the neural data using one of two methods: smoothing with a Gaussian kernel (σ = 20 ms) or latent factor analysis via dynamical systems (LFADS) (Pandarinath et al., 2018). LFADS is a generative model that assumes that observed spiking responses arise from an underlying dynamical system and estimates that system using deep learning. We used the same number of neurons in the reaching and grasping analyses to train the LFADS models and fixed the number of factors in all models to 30, at which performance of both reach and grasp models had levelled off (Figure 2—figure supplement 1C). We allowed two continuous controllers while training the model, which could potentially capture the influence of external inputs on dynamics (Pandarinath et al., 2018), since these had significant positive influence on decoding performance (Figure 2—figure supplement 1D). Hyper-parameter tuning was performed as previously described (Keshtkaran and Pandarinath, 2019).
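The Gaussian-smoothing arm of this preprocessing can be sketched as follows (Python/SciPy, operating on surrogate 1 ms-binned spike trains; array sizes and firing rates are hypothetical, and LFADS itself is not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Surrogate spike trains: n_neurons x n_timebins at 1 ms resolution,
# ~20 spikes/s (all sizes and rates here are hypothetical).
rng = np.random.default_rng(4)
spikes = (rng.random((60, 1000)) < 0.02).astype(float)

# Gaussian kernel with sigma = 20 ms; at 1 ms bins, sigma = 20 bins.
# Multiplying by 1000 converts spikes/bin to spikes/s.
rates = gaussian_filter1d(spikes, sigma=20.0, axis=1) * 1000.0
```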
Neural reconstruction
To compare our ability to reconstruct single-trial responses using Gaussian smoothing and LFADS, we first computed peri-event time histograms (PETHs) within condition using all training trials (excluding one test trial). We then computed the correlation between the firing rates of each test trial (smoothed with a Gaussian kernel or reconstructed with LFADS) and the PETH of the corresponding condition averaged across the training trials (Figure 2—figure supplement 1A). We repeated this procedure with a different trial left out for each condition. We report the difference in correlation coefficient obtained after LFADS processing and Gaussian smoothing (Figure 2—figure supplement 1B).
Kalman filter
To predict hand and arm kinematics, we applied the Kalman filter (Kalman, 1960), commonly used for kinematic decoding (Menz et al., 2015; Okorokova et al., 2020; Wu et al., 2004). In this approach, kinematic dynamics can be described by a linear relationship between past and future states:

$$x_t = Ax_{t-1} + v_t \quad (3)$$

where $x_t$ is a vector of joint angles at time $t$, $A$ is a state transition matrix, and $v_t$ is a vector of random numbers drawn from a Gaussian distribution with zero mean and covariance matrix $V$. The kinematics can also be explained in terms of the observed neural activity $z_{t-\Delta}$:

$$x_t = Bz_{t-\Delta} + w_t \quad (4)$$

Here, $z_{t-\Delta}$ is a vector of instantaneous firing rates across a population of M1 neurons at time $t-\Delta$, preprocessed either with a Gaussian kernel or LFADS, $B$ is an observation model matrix, and $w_t$ is a random vector drawn from a Gaussian distribution with zero mean and covariance matrix $W$. We tested multiple values of the latency, $\Delta$, and report decoders using the latency that maximized decoder accuracy (150 ms).
We estimated the matrices using linear regression on each training set, and then used those estimates in the Kalman filter update algorithm to infer the kinematics of each corresponding test set (see Faragher, 2012; Okorokova et al., 2015 for details). Briefly, at each time step $t$, kinematics were first predicted using the state transition equation (3), then updated with observation information from equation (4). The update of the kinematic prediction was achieved by a weighted average of the two estimates from (3) and (4): the weight of each estimate was inversely proportional to its uncertainty (determined in part by $V$ and $W$ for the estimates based on $x_{t-1}$ and $z_{t-\Delta}$, respectively), which changed as a function of time and was thus recomputed for every time step.
To assess decoding performance, we performed 10-fold cross-validation in which we trained the parameters of the filter on a randomly selected 90% of the trials and tested the model using the remaining 10% of trials. Importantly, we trained separate Kalman filters for the two types of neural preprocessing techniques (Gaussian smoothing and LFADS) and then compared their performance on the same trials. Performance was quantified using the coefficient of determination ($R^2$) for the held-out trials across test sets.
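The predict/update cycle described above can be sketched with a generic Kalman filter (a minimal Python illustration in the standard observation-model parameterization, run on synthetic one-dimensional data; this is not the authors' implementation):

```python
import numpy as np

def kalman_filter(Z, A, H, V, W, x0, P0):
    """Generic Kalman filter: state x_t = A x_{t-1} + v_t (cov V),
    observation z_t = H x_t + w_t (cov W). Returns filtered states."""
    x, P = x0, P0
    states = []
    for z in Z:
        # Predict from the state-transition model (cf. equation 3).
        x = A @ x
        P = A @ P @ A.T + V
        # Update with the observation (cf. equation 4): the Kalman gain
        # weights prediction vs. observation by their uncertainties.
        S = H @ P @ H.T + W
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)

# Synthetic 1-D example: noisy observations of a constant state.
rng = np.random.default_rng(2)
Z = 3.0 + 0.5 * rng.standard_normal((50, 1))
xs = kalman_filter(Z, A=np.eye(1), H=np.eye(1),
                   V=1e-4 * np.eye(1), W=0.25 * np.eye(1),
                   x0=np.zeros(1), P0=np.eye(1))
```

The gain K implements the weighted average described in the text: when the observation noise W is large relative to the prediction uncertainty, the filter leans on the state-transition model, and vice versa.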
Tangling
We computed tangling of the neural population data (reduced to 20 dimensions by PCA) using a published method (Russo et al., 2018). In brief, the tangling metric estimates the extent to which neural population trajectories are inconsistent with what would be expected if they were governed by an autonomous dynamical system, with smaller values indicating consistency with such dynamical structure. Specifically, tangling measures the degree to which similar neural states, either during different movements or at different times for the same movement, are associated with different derivatives. This is done by finding, for each neural state (indexed by $t$), the maximum value of the tangling metric across all other neural states (indexed by $t'$):

$$Q(t) = \max_{t'} \frac{\|\dot{x}_t - \dot{x}_{t'}\|^2}{\|x_t - x_{t'}\|^2 + \varepsilon}$$

Here, $x_t$ is the neural state at time $t$ (a 20-dimensional vector containing the neural responses at that time), $\dot{x}_t$ is the temporal derivative of the neural state (estimated numerically), $\|\cdot\|$ is the Euclidean norm, and $\varepsilon$ is a small constant added for robustness to noise (Russo et al., 2018). This analysis is not constrained to work solely for neural data; indeed, we also applied it to trajectories of joint angular kinematics to compare their tangling to that of neural trajectories.
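A direct implementation of the tangling metric is short; the sketch below (Python, not the authors' code) uses one plausible choice for the constant ε, namely a fraction of the mean squared state norm:

```python
import numpy as np

def tangling(X, dt):
    """Q(t) = max_t' ||xdot_t - xdot_t'||^2 / (||x_t - x_t'||^2 + eps)
    for a trajectory X of shape (n_times, n_dims)."""
    Xdot = np.gradient(X, dt, axis=0)             # numerical derivative
    eps = 0.1 * np.mean(np.sum(X**2, axis=1))     # one choice of constant
    # Pairwise squared distances between states and between derivatives.
    dX = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    dXdot = np.sum((Xdot[:, None, :] - Xdot[None, :, :])**2, axis=-1)
    Q = dXdot / (dX + eps)
    np.fill_diagonal(Q, 0.0)                      # exclude t' == t
    return Q.max(axis=1)

# A circular trajectory: similar states always have similar derivatives,
# so tangling stays low.
t = np.linspace(0, 2 * np.pi, 100)
Q = tangling(np.column_stack([np.cos(t), np.sin(t)]), dt=t[1] - t[0])
```

Trajectories that cross themselves — nearby states with very different derivatives — would instead produce large Q values, the signature of non-autonomous (input-driven) dynamics.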
The neural data were pre-processed using the same alignment, trial averaging, smoothing, and normalization methods described above. Joint angles were collected for both hand and arm data. For this analysis, joint angle velocity and acceleration were computed (six total dimensions for arm, 90 dimensions for hand). For reaching, we analyzed the epoch from 200 ms before to 100 ms after movement onset. For grasping, we analyzed the epoch from 200 ms before to 100 ms after maximum aperture. Neuronal responses were binned in 10 ms bins to match the sampling rate of the kinematics.
The tangling metric is partially dependent on the dimensionality of the underlying data. To eliminate the possibility that our results were a trivial consequence of selecting a particular number of principal components, we tested tangling at different dimensionalities and selected the dimensionality at which Q had largely leveled off for both the population neural activity and kinematics (Figure 3—figure supplement 1). Namely, we report results using six principal components (the maximum) for reach kinematics and their associated neural responses, and using 20 for kinematics and neuronal responses during grasp.
Dimensionality of the neuronal response
One possibility is that our failure to observe autonomous dynamics during grasp stems from a failure to properly characterize the neural manifold, which in principle could be much higher dimensional for grasp than it is for reach. However, the first D dimensions of a manifold can be reliably estimated from fewer than 2 × D projections if two conditions hold: the eigenvalue spectrum is not flat, and the samples approximate random projections of the underlying manifold (Halko et al., 2011). The scree plot shows that the first condition is met (Figure 1—figure supplement 1F). To evaluate the second condition and determine whether neurons are random projections of the low-dimensional manifold, we applied a Gine-Ajne test (Prentice, 1978) to the first 5, 10, and 20 PCs. We found that the null hypothesis of spherical uniformity was not rejected (p>0.5 for all dimensionalities and data sets). While we cannot rule out the possibility that a small, unrecorded fraction of neurons spans a manifold subspace disjoint from the one we measured, the failure to reject spherical uniformity provides evidence that the recorded neurons approximate random projections. To further examine the possibility that dynamics occupy a space that we were unable to resolve with our neuronal sample, we implemented LFADS with different numbers of latent factors. We found that, to the extent that decoding performance improved with additional latent factors, it levelled off at ~10 factors (Figure 2—figure supplement 1). If the dynamics were distributed over a high-dimensional manifold, we might expect performance to increase slowly with the number of latent factors over the entire range afforded by the sample size. This was not the case.
Yet another possibility we considered is that the neuronal manifold beyond the first few dimensions reflects noise, which would preclude the identification of dynamics embedded in higher order dimensions. To examine this possibility, we assessed our ability to relate the monkeys' behavior during the grasp task to the neural data over subsets of dimensions. First, we found that the ability to classify objects based on the population response projected on progressively smaller subspaces – removing high-variance principal components first – remained above chance even after dozens of PCs were removed. This suggests that behaviorally relevant neuronal activity was distributed over many dimensions, and that this signal clearly rose above the noise (Figure 3—figure supplement 2A). For this analysis, we used multiclass linear discriminant analysis based on population responses evoked over a 150 ms window before object contact. Second, we found that the ability to decode kinematics based on the population response projected on progressively smaller subspaces remained above chance after removal of many PCs, consistent with the classification analysis (Figure 3—figure supplement 2B). For this analysis, we used population responses over an 800 ms window centered on movement onset for reaching and maximum aperture for grasping. Thus, high-order PCs do not simply reflect noise but rather comprise behaviorally relevant signals.
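The logic of this control — remove the highest-variance PCs, then test whether class information survives — can be sketched on synthetic data (Python; a nearest-class-centroid classifier stands in for multiclass LDA, and all data dimensions here are hypothetical):

```python
import numpy as np

def accuracy_without_top_pcs(X, y, n_remove):
    """Leave-one-out classification accuracy after projecting out the
    n_remove highest-variance PCs (nearest-centroid stand-in for LDA)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCs = rows of Vt
    X_low = Xc @ Vt[n_remove:].T        # keep only the low-variance PCs
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = {c: X_low[mask & (y == c)].mean(axis=0)
                 for c in np.unique(y)}
        pred = min(cents, key=lambda c: np.linalg.norm(X_low[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

# Synthetic data: class information lives in low-variance dimensions,
# while the first 10 dimensions carry high-variance nuisance activity.
rng = np.random.default_rng(3)
y = np.repeat(np.arange(4), 30)
means = rng.standard_normal((4, 40))
X = means[y] + 0.5 * rng.standard_normal((120, 40))
X[:, :10] += 5.0 * rng.standard_normal((120, 10))
acc = accuracy_without_top_pcs(X, y, n_remove=10)   # chance = 0.25
```

When class-separating signal is distributed across low-variance dimensions, as in this construction, accuracy stays well above chance after the top PCs are discarded — the same signature reported for the grasp data.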
In summary, then, our sample size is sufficient, in principle, to recover dynamics embedded in a high-dimensional manifold. The weak dynamics in the grasping response that we did recover occupy a low-dimensional manifold, and we were able to resolve the population response for the grasping behavior across a large number of dimensions (40+ principal components).
Statistics
For most analyses, sample sizes were large and data were distributed approximately normally, so we used two-sided t-tests. However, for some analyses, the data were right-skewed and the sample size was small, so we used non-parametric tests, either the Wilcoxon signed rank test or the Mann-Whitney-Wilcoxon test, depending on whether the samples were matched (for example, comparison of the same kinematic DoFs reconstructed with either Gaussian smoothing or LFADS) or not (for example, comparison of kinematic DoF reconstruction from different datasets).
Data availability
The data that support the findings of this study have been deposited in Dryad, accessible at https://doi.org/10.5061/dryad.xsj3tx9cm.
- Dryad Digital Repository: Neural population dynamics in motor cortex are different for reach and grasp. https://doi.org/10.5061/dryad.xsj3tx9cm
References
- A dynamic optimization solution for vertical jumping in three dimensions. Computer Methods in Biomechanics and Biomedical Engineering 2:201–231. https://doi.org/10.1080/10255849908907988
- Dynamic optimization of human walking. Journal of Biomechanical Engineering 123:381–390. https://doi.org/10.1115/1.1392310
- Neural representation of hand kinematics during prehension in posterior parietal cortex of the macaque monkey. Journal of Neurophysiology 102:3310–3328. https://doi.org/10.1152/jn.90942.2008
- Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters. Journal of Biomechanics 29:1223–1230. https://doi.org/10.1016/0021-9290(95)00178-6
- An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE Transactions on Biomedical Engineering 37:757–767. https://doi.org/10.1109/10.102791
- OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Transactions on Bio-Medical Engineering 54:1940–1950. https://doi.org/10.1109/TBME.2007.901024
- Properties of body segments based on size and weight. American Journal of Anatomy 120:33–54. https://doi.org/10.1002/aja.1001200104
- Understanding the basis of the Kalman filter via a simple and intuitive derivation [Lecture notes]. IEEE Signal Processing Magazine 29:128–132. https://doi.org/10.1109/MSP.2012.2203621
- Encoding of movement fragments in the motor cortex. Journal of Neuroscience 27:5105–5114. https://doi.org/10.1523/JNEUROSCI.3570-06.2007
- A model of the upper extremity for simulating musculoskeletal surgery and analyzing neuromuscular control. Annals of Biomedical Engineering 33:829–840. https://doi.org/10.1007/s10439-005-3320-7
- The statistics of natural hand movements. Experimental Brain Research 188:223–236. https://doi.org/10.1007/s00221-008-1355-3
- The timing of natural prehension movements. Journal of Motor Behavior 16:235–254. https://doi.org/10.1080/00222895.1984.10735319
- A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82:35–45. https://doi.org/10.1115/1.3662552
- Book: Enabling hyperparameter optimization in sequential autoencoders for spiking neural data. In: Wallach H, Larochelle H, Beygelzimer A, d'Alché-Buc F, Fox E, Garnett R, editors. Advances in Neural Information Processing Systems, 32. Curran Associates, Inc. pp. 15937–15947.
- Reach and gaze representations in macaque parietal and premotor grasp areas. Journal of Neuroscience 33:7038–7049. https://doi.org/10.1523/JNEUROSCI.5568-12.2013
- Neural dynamics of variable grasp-movement preparation in the macaque frontoparietal network. The Journal of Neuroscience 38:5759–5773. https://doi.org/10.1523/JNEUROSCI.2557-17.2018
- Single-trial neural dynamics are dominated by richly varied movements. Nature Neuroscience 22:1677–1686. https://doi.org/10.1038/s41593-019-0502-4
- Decoding hand kinematics from population responses in sensorimotor cortex during grasping. Journal of Neural Engineering 17:046035. https://doi.org/10.1088/1741-2552/ab95ea
- Representation of muscle synergies in the primate brain. Journal of Neuroscience 35:12615–12624. https://doi.org/10.1523/JNEUROSCI.4302-14.2015
- On invariant tests of uniformity for directions and orientations. The Annals of Statistics 6:169–176. https://doi.org/10.1214/aos/1176344075
- Spatiotemporal distribution of location and object effects in reach-to-grasp kinematics. Journal of Neurophysiology 114:3268–3282. https://doi.org/10.1152/jn.00686.2015
- Hand kinematics during reaching and grasping in the macaque monkey. Behavioural Brain Research 117:75–82. https://doi.org/10.1016/S0166-4328(00)00284-9
- Postural hand synergies for tool use. The Journal of Neuroscience 18:10105–10115. https://doi.org/10.1523/JNEUROSCI.18-23-10105.1998
- Patterns of hand motion during grasping and the influence of sensory guidance. The Journal of Neuroscience 22:1426–1435. https://doi.org/10.1523/JNEUROSCI.22-04-01426.2002
- Gradual molding of the hand to object contours. Journal of Neurophysiology 79:1307–1320. https://doi.org/10.1152/jn.1998.79.3.1307
- Cortical control of arm movements: a dynamical systems perspective. Annual Review of Neuroscience 36:337–359. https://doi.org/10.1146/annurev-neuro-062111-150509
- Incorporating feedback from multiple sensory modalities enhances brain-machine interface control. Journal of Neuroscience 30:16777–16787. https://doi.org/10.1523/JNEUROSCI.3967-10.2010
- Finger movements during reach-to-grasp in the monkey: amplitude scaling of a temporal synergy. Experimental Brain Research 169:433–448. https://doi.org/10.1007/s00221-005-0167-y
- Coordination among the body segments during reach-to-grasp action involving the trunk. Experimental Brain Research 123:346–350. https://doi.org/10.1007/s002210050578
- Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Transactions on Biomedical Engineering 51:933–942. https://doi.org/10.1109/TBME.2004.826666
- A planar model of the knee joint to characterize the knee extensor mechanism. Journal of Biomechanics 22:1–10. https://doi.org/10.1016/0021-9290(89)90179-6
Decision letter
- Samantha R Santacruz, Reviewing Editor; The University of Texas at Austin, United States
- Richard B Ivry, Senior Editor; University of California, Berkeley, United States
- Samantha R Santacruz, Reviewer; The University of Texas at Austin, United States
- Marco Capogrosso, Reviewer; École polytechnique fédérale de Lausanne, Switzerland
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
Acceptance summary:
The authors present a short report demonstrating the difference in neural dynamics between grasping and reaching behaviors. This work is broadly interesting to those in the field of motor control and leverages cutting-edge techniques to elucidate neural dynamics associated with the two aforementioned motor behaviors. We are enthusiastic about the suitability of this publication in eLife.
Decision letter after peer review:
Thank you for submitting your article "Neural population dynamics in motor cortex are different for reach and grasp" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Samantha R Santacruz as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Richard Ivry as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Marco Capogrosso (Reviewer #2).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.
Summary:
The authors present a cohesive and elegant short report demonstrating the difference in neural dynamics between grasping and reaching behaviors. The reviewers are enthusiastic about this work and agree that this study is of great interest to the field since increasingly the motor cortex is modelled as a system with strong dynamical properties. However, they find that the manuscript would be greatly strengthened by clarifications in the analyses, statistics, and animals utilized. The manuscript is suitable for publication in eLife subject to the revisions detailed below.
Essential revisions:
1) Analysis to convince the reader that motor cortical neural activity analyzed is as grasp-modulated as much as it is reach-modulated, which would control for the possibility that neural activity is just more reach-modulated so looks like reach conditions have stronger dynamics. Are the percentage of neurons modulated by the task same in grasp vs. reach (in the PSTH that you analyze)? Since these dynamics questions reflect how well changes (modulation) in firing rate are predicted, it would be important to know that the amount of modulation is comparable. Further, please clarify the point articulated in paragraph three of subsection “Control comparisons between arm and hand data”. Since firing rates are normalized before jPCA, why is analyzing the peak firing rates without normalization a valid way to "directly contrast the inputs to the jPCA analysis"?
2) Analysis to show that reach and grasp PSTHs are equally representative of individual trials, which would control for the possibility that grasp activity is just more variable trial-to-trial so analyzing the PSTH isn't representative of true dynamics. How reliable is the trial-to-trial neural activity for reach vs. grasp? Ensuring that the PSTH is equally reflective of trial activity is important for fairly comparing these two conditions.
3) Please report R2 for the neural reconstruction with LFADS for reach vs. grasp in Figure 2. This value would indicate whether using a non-linear dynamics model (LFADS) can accurately predict neural activity even in the case of grasp, which is important to do prior to any of the kinematic decoding.
4) Clarify the number of animals used for each analysis. It is difficult to understand from the results and reported figures how many animals were used and for which analysis. We suggest using a table to report this information in an accessible format. When performing statistics with data combined across animals, we also suggest using a linear mixed effect model with "animal" as a random effect.
[Editors' note: further revisions were suggested prior to acceptance, as described below.]
Thank you for resubmitting your article "Neural population dynamics in motor cortex are different for reach and grasp" for consideration by eLife. Your revised article has been reviewed by two of the original peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Richard Ivry as the Senior Editor.
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, when editors judge that a submitted work as a whole belongs in eLife but that some conclusions require a modest amount of additional new data, as they do with your paper, we are asking that the manuscript be revised to either limit claims to those supported by data in hand, or to explicitly state that the relevant conclusions require additional supporting data.
Our expectation is that the authors will eventually carry out the additional experiments and report on how they affect the relevant conclusions either in a preprint on bioRxiv or medRxiv, or if appropriate, as a Research Advance in eLife, either of which would be linked to the original paper.
Summary:
The authors present a short report demonstrating the difference in neural dynamics between grasping and reaching behaviors. The reviewers remain enthusiastic about this work and the overall interest that it will have to the field, but there remain outstanding concerns regarding the statistics and interpretation of results. Below you will find more detailed comments. The manuscript is suitable for publication in eLife if the points detailed below can be addressed.
Revisions for this paper:
– Pertaining to Essential Revisions #3: The authors now report an R2 for the NEURAL reconstruction using LFADS for reaching and grasping. The authors have placed this result in the Materials and methods section, rather than in the Results section. Secondly and more importantly, this result is: "The average correlation between measured and reconstructed firing rates was 0.44 +/- 0.022 and 0.48 +/- 0.021 for single trials and 0.73 +/- 0.03 and 0.76 +/- 0.011 when averaged within condition, for reach and grasp respectively". This suggests that both reach and grasp NEURAL activity are equally explained by LFADS. This result appears to go against the main message of their paper (which to this point has been that there are no discernible dynamics in grasping, but there are in reaching). We would like to see this result reported in the Results section before Figure 2, and would like to see the message of the paper reflect this result (maybe something along the lines of "Grasping dynamics are high dimensional, non-linear, and can't be used for decoding with a linear decoder whereas reaching dynamics are low dimensional, linear, and can be used for decoding").
– The R2 result from above also seems to contradict the tangling results in Figure 3 (that Q-M1/Q-kinematics is higher for grasping than reaching). However, upon further inspection of Figure 3, it seems like the reaching and grasping Q-kinematics are quite different (mean Q-kinematics seems to be about ~1×10⁴ for reaching, ~0.3×10⁴ for grasping), whereas the Q-motor cortex may be similar for both reaching and grasping. Perhaps the kinematics themselves may be driving the significant differences in the Q-ratio while the Q-motor cortex values may be comparable (which would be more consistent with the above result for approx. equal R2 from LFADS)? This should be addressed in the revision.
– Pertaining to essential revisions #2: We appreciate the inclusion of panel I in Figure 1—figure supplement 2 to address this point. The main point of this question was to assess whether trial-to-trial variability affected the estimate of the PSTH and thus the ability of a linear model to capture dynamics from the PSTH. Displaying the coefficient of variation as a bar graph collapses over all temporal differences in trial-to-trial variability. For example, it is consistent within this bar plot that trial-to-trial activity is approx. uniform across the reaching behavior epoch, but for grasp is low at the beginning of the trial then high at the end of the trial for example. This hypothetical difference would make it so that the grasping PSTH is consistent at the beginning and noisy at the end, and could explain why it is harder to estimate grasping PSTH with linear dynamics. If this is the case, it may be that reach and grasp neural dynamics are not very different, just that grasp behavior tends to be more variable so the PSTH is not reflective of the true dynamics that may be ongoing during grasp. Another way to address this concern would be to report R2 of neural activity estimated from fitting dynamics on single trials and showing the same differences as in Figure 1. This gets around the issue of trial-averaging and potential trial-to-trial variability differences. We ask that the authors report this R2 value.
– There remain some overall concerns with the statistics performed. When performing statistics, data points from different subjects cannot be pooled together, because performing tests on pooled data violates the assumption of iid samples: part of the variance in the samples is explained by the fact that some of the samples are from one animal and some from the other (intra-animal vs. inter-animal variance). In this manuscript, the authors are comparing 2 monkeys against 2 different monkeys, and everything is pooled together. We ask the authors to clarify and justify their methodology.
https://doi.org/10.7554/eLife.58848.sa1

Author response
Essential revisions:
1) Analysis to convince the reader that motor cortical neural activity analyzed is as grasp-modulated as much as it is reach-modulated, which would control for the possibility that neural activity is just more reach-modulated so looks like reach conditions have stronger dynamics. Are the percentage of neurons modulated by the task same in grasp vs. reach (in the PSTH that you analyze)? Since these dynamics questions reflect how well changes (modulation) in firing rate are predicted, it would be important to know that the amount of modulation is comparable. Further, please clarify the point articulated in paragraph three of subsection “Control comparisons between arm and hand data”. Since firing rates are normalized before jPCA, why is analyzing the peak firing rates without normalization a valid way to "directly contrast the inputs to the jPCA analysis"?
Thank you for this comment; we have addressed this point in Figure 1—figure supplement 2, Panel D, and in the relevant passage of the text. The goal was to directly address the reviewer’s question, namely whether firing rates are similar across neuronal populations and tasks. The modulation depths were similar for reach and grasp responses.
2) Analysis to show that reach and grasp PSTHs are equally representative of individual trials, which would control for the possibility that grasp activity is just more variable trial-to-trial so analyzing the PSTH isn't representative of true dynamics. How reliable is the trial-to-trial neural activity for reach vs. grasp? Ensuring that the PSTH is equally reflective of trial activity is important for fairly comparing these two conditions.
We assessed trial-to-trial variability by computing the coefficient of variation of spike counts over a 500-ms window centred on movement onset. The results of this analysis are shown in Figure 1—figure supplement 2, Panel I. Trial-to-trial variability was similar for reach and grasp.
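For concreteness, the coefficient-of-variation computation over a 500-ms window centred on movement onset can be sketched as follows; this is a minimal illustration with hypothetical variable and function names, not the authors' analysis code:

```python
import numpy as np

def spike_count_cv(spike_times, move_onset, window=0.5):
    """Coefficient of variation of spike counts across trials.

    spike_times: list of 1-D arrays of spike times, one per trial
    (seconds, aligned to trial start). move_onset: array of
    movement-onset times, one per trial. Names are illustrative.
    """
    half = window / 2.0
    # Count spikes in the window centred on movement onset, per trial.
    counts = np.array([
        np.sum((st >= on - half) & (st < on + half))
        for st, on in zip(spike_times, move_onset)
    ])
    mean = counts.mean()
    # CV = SD / mean; undefined for silent neurons.
    return counts.std(ddof=1) / mean if mean > 0 else np.nan
```

The same function applied to counts from shorter sub-windows at different epochs gives the within-trial breakdown discussed later in this response.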
3) Please report R2 for the neural reconstruction with LFADS for reach vs. grasp in Figure 2. This value would indicate whether using a non-linear dynamics model (LFADS) can accurately predict neural activity even in the case of grasp, which is important to do prior to any of the kinematic decoding.
We now report average correlations between firing rates estimated with Gaussian smoothing and those estimated with LFADS. We find that the correlations between the two are similar for reach and grasp, both in condition-averaged responses and on a trial-by-trial basis (now reported in the Materials and methods section). This result indicates that the model captured a similar amount of variance in the reach and grasp datasets, yet this variance was more informative about kinematics for reach than for grasp, as evidenced by the decoding analysis.
4) Clarify the number of animals used for each analysis. It is difficult to understand from the results and reported figures how many animals were used and for which analysis. We suggest using a table to report this information in an accessible format. When performing statistics with data combined across animals, we also suggest using a linear mixed effect model with "animal" as a random effect.
We have added a table to provide the requested information. We used different monkeys in the reach and grasp tasks, so animal identity is confounded with task and, unfortunately, our data do not admit the suggested mixed-effects design.
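For reference, had the same animals performed both tasks, the suggested linear mixed-effects analysis (fixed effect of task, random intercept per animal) might be sketched with statsmodels as below. The data, animal labels, and per-neuron R2 values are entirely synthetic; this illustrates the model form only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic per-neuron R2 values from four hypothetical animals,
# two per task (all names and values are illustrative).
rng = np.random.default_rng(0)
rows = []
for animal, task in [("M1", "reach"), ("M2", "reach"),
                     ("M3", "grasp"), ("M4", "grasp")]:
    effect = 0.8 if task == "reach" else 0.5
    for r2 in effect + 0.05 * rng.standard_normal(20):
        rows.append({"animal": animal, "task": task, "r2": r2})
df = pd.DataFrame(rows)

# Fixed effect of task, random intercept per animal.
model = smf.mixedlm("r2 ~ task", df, groups=df["animal"])
result = model.fit()
print(result.params["task[T.reach]"])  # estimated reach-vs-grasp effect
```

Note that when task is perfectly nested within animal, as in the present datasets, the animal random effect cannot be separated from the task effect, which is why this design is not applicable here.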
[Editors' note: further revisions were suggested prior to acceptance, as described below.]
Revisions for this paper:
– Pertaining to Essential Revisions #3: The authors now report an R2 for the NEURAL reconstruction using LFADS for reaching and grasping. The authors have placed this result in the Materials and methods section, rather than in the Results section. Secondly and more importantly, this result is: "The average correlation between measured and reconstructed firing rates was 0.44 +/- 0.022 and 0.48 +/- 0.021 for single trials and 0.73 +/- 0.03 and 0.76 +/- 0.011 when averaged within condition, for reach and grasp respectively". This suggests that both reach and grasp NEURAL activity are EQUALLY explained by LFADS. This result appears to go against the main message of their paper (which to this point has been that there are no discernible dynamics in grasping, but there are in reaching). We would like to see this result reported in the Results section before Figure 2, and would like to see the message of the paper reflect this result (maybe something along the lines of "Grasping dynamics are high dimensional, non-linear, and can't be used for decoding with a linear decoder whereas reaching dynamics are low dimensional, linear, and can be used for decoding").
We thank the reviewers for this comment. In the first revision, we reported mean correlations between the trial-averaged responses and their smoothed and LFADS-processed counterparts. This was a mistake. What we should have done instead is to compute the correlation between the response obtained on one trial and the smoothed or LFADS-processed response averaged over the other trials. We would then predict that, for reaching, LFADS should yield responses that generalize better because it leverages the latent dynamics to reconstruct the single trial response. This is indeed what we found for the reaching responses and significantly less so for the grasping responses:
“First, as expected, we found that in both datasets, neural reconstruction of single trials improved with LFADS (0.34 and 0.23 correlation improvement for reach and grasp, respectively; Figure 2—figure supplement 1 (A, B)). However, neural reconstruction improvement was on average significantly higher for reach than for grasp (t(311) = 7.07, p = 5.11e-12; Figure 2—figure supplement 1).”
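The leave-one-out comparison described above, correlating each trial's response with the average over the remaining trials, can be sketched as follows; the function and variable names are ours, not the authors' code:

```python
import numpy as np

def loo_generalization(trials):
    """Leave-one-out correlation between each single trial and the
    mean of the remaining trials.

    trials: (n_trials, T) array of one neuron's smoothed (or
    LFADS-inferred) firing rates. Returns the mean correlation.
    """
    n = trials.shape[0]
    total = trials.sum(axis=0)
    corrs = []
    for i in range(n):
        # Mean over all trials except trial i.
        loo_mean = (total - trials[i]) / (n - 1)
        corrs.append(np.corrcoef(trials[i], loo_mean)[0, 1])
    return float(np.mean(corrs))
```

Running this once on smoothed rates and once on LFADS-inferred rates, and taking the difference, gives the per-neuron reconstruction improvement reported in the quoted passage.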
– The R2 result from above also seems to contradict the tangling results in Figure 3 (that Q-M1/Q-kinematics is higher for grasping than reaching). However, upon further inspection of Figure 3, it seems that the reaching and grasping Q-kinematics are quite different (mean Q-kinematics appears to be about ~1×10⁴ for reaching and ~0.3×10⁴ for grasping), whereas Q-motor cortex may be similar for both reaching and grasping. Perhaps the kinematics themselves may be driving the significant differences in the Q-ratio while the Q-motor cortex values are comparable (which would be more consistent with the above result of approximately equal R2 from LFADS)? This should be addressed in the revision.
Now that we have done the analysis properly, the contradiction between the LFADS reconstruction and the tangling analysis is resolved. Regarding the raw tangling values, these depend on the task, the number of conditions, time binning, smoothing, and other factors, and thus fundamentally constitute a relative measure (Russo et al., 2018). The kinematics are matched for the aforementioned factors and thus afford a fair comparison.
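For reference, the tangling metric of Russo et al. (2018), Q(t) = max over t′ of ‖ẋ(t) − ẋ(t′)‖² / (‖x(t) − x(t′)‖² + ε), can be sketched as follows. The scaling convention for ε and all naming are our assumptions; this is an illustration, not the analysis code used in the paper:

```python
import numpy as np

def tangling(X, dt=0.01, eps=None):
    """Trajectory tangling Q(t) in the sense of Russo et al. (2018).

    X: (T, N) array, the neural (or kinematic) state over time.
    Returns a length-T array of tangling values.
    """
    # Numerical time derivative of the state trajectory.
    dX = np.gradient(X, dt, axis=0)
    if eps is None:
        # Small constant scaled to the data, to avoid division by
        # zero for nearby states (scaling convention assumed here).
        eps = 0.1 * X.var(axis=0).sum()
    # Pairwise squared distances between states and between derivatives.
    d_state = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_deriv = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    # For each time t, the worst-case ratio over all other times t'.
    return (d_deriv / (d_state + eps)).max(axis=1)
```

Because Q depends on sampling, smoothing, and the number of conditions, only ratios or comparisons computed under matched conditions (as for the kinematics here) are meaningful.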
– Pertaining to essential revisions #2: We appreciate the inclusion of panel I in Figure 1—figure supplement 2 to address this point. The main point of this question was to assess whether trial-to-trial variability affected the estimate of the PSTH and thus the ability of a linear model to capture dynamics from the PSTH. Displaying the coefficient of variation as a bar graph collapses over all temporal differences in trial-to-trial variability. For example, it would be consistent with this bar plot for trial-to-trial variability to be approximately uniform across the reaching epoch but, for grasp, low at the beginning of the trial and high at the end. This hypothetical difference would make the grasping PSTH consistent at the beginning and noisy at the end, and could explain why it is harder to estimate the grasping PSTH with linear dynamics. If this is the case, it may be that reach and grasp neural dynamics are not very different, but rather that grasp behavior tends to be more variable, so the PSTH is not reflective of the true dynamics that may be ongoing during grasp. Another way to address this concern would be to report the R2 of neural activity estimated by fitting dynamics on single trials and to show the same differences as in Figure 1. This gets around the issue of trial-averaging and potential differences in trial-to-trial variability. We ask that the authors report this R2 value.
The linear dynamical analysis cannot be applied to single trials because these are too noisy. Indeed, even the much more constrained jPCA cannot be computed from single trials (see Figures 3B,D in Pandarinath et al., 2018). LFADS, on the other hand, is well suited for this purpose. Accordingly, the analysis described above is in the spirit of what is requested here. To address the specific possibility, raised by the reviewer, that the variability may be distributed differently within the trial for reaching and grasping, we recomputed the CV at different epochs during each trial and found these to be homogeneous over the trial and similar for reaching and grasping (see Figure 1—figure supplement 2).
– There remain some overall concerns with the statistics performed. When performing statistics, data points from different subjects cannot be pooled together: performing tests on pooled data violates the assumption of iid samples, because part of the variance in the samples is explained by the fact that some of the samples are from one animal and some from the other (intra-animal vs. inter-animal variance). In this manuscript, the authors are comparing 2 monkeys against 2 different monkeys, and everything is pooled together. We ask the authors to clarify and justify their methodology.
We agree that it would have been preferable to obtain reaching and grasping data from the same animals. However, the two tasks were used in two different studies with different goals and, unfortunately, different animals. Note, however, that the differences between reach and grasp are very strong and persist even if we compare the least favorable pair of animals.
https://doi.org/10.7554/eLife.58848.sa2
Article and author information
Author details
Funding
National Institute of Neurological Disorders and Stroke (NS082865)
- Nicholas G Hatsopoulos
- Sliman J Bensmaia
National Institute of Neurological Disorders and Stroke (NS096952)
- Aneesha K Suresh
National Institute of Neurological Disorders and Stroke (NS045853)
- Nicholas G Hatsopoulos
National Institute of Neurological Disorders and Stroke (NS111982)
- Nicholas G Hatsopoulos
National Institute of Neurological Disorders and Stroke (NS101325)
- Sliman J Bensmaia
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Sangwook A Lee, Gregg A Tabot and Alexander T Rajan for help with data collection, as well as Mohammad Reza Keshtkaran and Chethan Pandarinath for help with the LFADS implementation. This work was supported by NINDS grants NS082865, NS101325, NS096952, NS045853, and NS111982.
Ethics
Animal experimentation: All surgical, behavioral, and experimental procedures conformed to the guidelines of the National Institutes of Health and were approved by the University of Chicago Institutional Animal Care and Use Committee (#72042).
Senior Editor
- Richard B Ivry, University of California, Berkeley, United States
Reviewing Editor
- Samantha R Santacruz, The University of Texas at Austin, United States
Reviewers
- Samantha R Santacruz, The University of Texas at Austin, United States
- Marco Capogrosso, École polytechnique fédérale de Lausanne, Switzerland
Publication history
- Received: May 12, 2020
- Accepted: October 27, 2020
- Accepted Manuscript published: November 17, 2020 (version 1)
- Version of Record published: November 25, 2020 (version 2)
Copyright
© 2020, Suresh et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.