Neuroscience

Mesoscopic-scale functional networks in the primate amygdala

  1. Jeremiah K Morrow
  2. Michael X Cohen
  3. Katalin M Gothard (corresponding author)
  1. Department of Physiology, University of Arizona, United States
  2. Department of Behavioral Neuroscience, Oregon Health & Science University, United States
  3. Radboud University Medical Center, Netherlands
  4. Donders Center for Neuroscience, Netherlands
Research Article
Cite this article as: eLife 2020;9:e57341 doi: 10.7554/eLife.57341

Abstract

The primate amygdala performs multiple functions that may be related to the anatomical heterogeneity of its nuclei. Individual neurons with stimulus- and task-specific responses are not clustered in any of the nuclei, suggesting that single units may be too fine-grained to shed light on the mesoscale organization of the amygdala. From local field potentials recorded simultaneously at multiple locations within the primate (Macaca mulatta) amygdala, we extracted spatially defined and statistically separable responses to visual, tactile, and auditory stimuli. A generalized eigendecomposition-based method of source separation isolated coactivity patterns, or components, that in neurophysiological terms correspond to putative subnetworks. Some component spatial patterns mapped onto the anatomical organization of the amygdala, while other components reflected integration across nuclei. These components differentiated between visual, tactile, and auditory stimuli, suggesting the presence of functionally distinct parallel subnetworks.

Introduction

The division of the amygdala into nuclei reflects the developmental origins and the input-output connections of each nucleus with other structures (Amaral et al., 1992; Pessoa et al., 2019; Swanson and Petrovich, 1998). Cell-type-specific circuit dissection of the rodent amygdala revealed further compartmentalization of the amygdala (reviewed by Duvarci and Pare, 2014; Fadok et al., 2018; Gafford and Ressler, 2016; Janak and Tye, 2015). However, the anatomical compartmentalization is rarely reproduced by the response properties of single neurons (e.g., Beyeler et al., 2018; Kyriazi et al., 2018; Morrow et al., 2019; Putnam and Gothard, 2019). This is not surprising given the difficulty in capturing the activity of functional networks by subsampling the constituent neurons, especially when the network includes neurons with multidimensional response properties (Gothard, 2020). At the other end of the spectrum of scale, neuroimaging techniques that monitor the activity of brain-wide networks often lack sufficient resolution to capture nuclear or subnuclear activation in the amygdala (Bickart et al., 2012; Roy et al., 2009).

The local field potentials (LFPs), reflecting the aggregate activity of hundreds to tens of thousands of neurons (Buzsáki et al., 2012; Einevoll et al., 2013; Pesaran et al., 2018), may provide a more adequate mesoscopic-scale view of intra-amygdala activity. Although task-relevant LFP signals have been successfully extracted from the amygdala of non-primate species in the context of fear conditioning (Courtin et al., 2014; Paré et al., 2002; Seidenbecher et al., 2003; Stujenske et al., 2014), anticipatory anxiety (Paré et al., 2002), and reward learning (Popescu et al., 2009), remarkably little is known about the information content of the local field potentials in the primate amygdala.

We recently reported that the majority of multisensory neurons in the monkey amygdala not only respond to visual, tactile, and auditory stimuli, but also discriminate, via different spike train metrics, between sensory modalities and even between individual stimuli of the same sensory modality (Morrow et al., 2019). Multisensory neurons and neurons selective for a particular sensory modality were not clustered in any nucleus or subnuclear region, suggesting an organization scheme in spatially distributed but functionally coordinated networks.

We expected that the combined excitatory, inhibitory, and neuromodulatory effects elicited by different sensory modalities would be better captured in the dynamics of the LFPs than by the response properties of single neurons (Buzsáki et al., 2012; Mazzoni et al., 2008). Specifically, we explored the hypothesis that stimuli of different sensory modalities elicit different spatiotemporal patterns of activity in the local field potentials recorded from linear arrays of electrodes. Rather than analyze the signals on each contact independently, we used a method of non-orthogonal covariance matrix decomposition called generalized eigendecomposition (GED) to identify co-activity patterns across the set of contacts. Because neural sources project linearly and simultaneously to multiple contacts, linear multivariate decomposition methods are highly successful at separating functionally distinct but spatially overlapping sources (Parra et al., 2005). These co-activity patterns (also called ‘components’) are the product of putative functional subnetworks within the brain. GED has been shown to reliably reconstruct network-level LFP dynamics in both simulated and empirical data that are often missed by conventional methods of source-separation (Cohen, 2017a; de Cheveigné and Parra, 2014; Parra et al., 2005; Van Veen et al., 1997). Here, we adapted this method to our data and discovered multiple, statistically dissociable patterns of subnetwork activity tied to the functional and spatial structure of the amygdala that was missed by the analysis of single-unit activity (Morrow et al., 2019).

Results

We monitored LFPs simultaneously from the entire dorso-ventral expanse of the amygdala using linear electrode arrays (V-probes, Plexon Inc, Dallas, TX) that have 16 equidistant contacts (400 μm spacing, Figure 1). We systematically sampled the medio-lateral and anterior-posterior regions of the amygdala during different recording sessions. In each session the signals were referenced to the common average across the 16 contacts; this ensured that our signals were locally generated and not volume conducted from distant regions.
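The common-average referencing step can be sketched in a few lines. This is a generic illustration of the technique, not the authors' code, assuming the LFP is stored as a contacts-by-samples array; the toy data simply demonstrate that a signal shared across all 16 contacts (e.g., volume-conducted from a distant source) is subtracted out:

```python
import numpy as np

def common_average_reference(lfp):
    """Re-reference each contact to the mean across all contacts.

    lfp : array of shape (n_contacts, n_samples)
    Returns the re-referenced signal with the same shape.
    """
    return lfp - lfp.mean(axis=0, keepdims=True)

# Toy example: 16 contacts seeing local activity plus a shared,
# volume-conducted signal that the common average removes
rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)           # common signal on every contact
local = rng.standard_normal((16, 1000))      # locally generated activity
car = common_average_reference(local + shared)

# The across-contact mean is now zero, and the shared term is gone
assert np.allclose(car.mean(axis=0), 0.0)
assert np.allclose(car, common_average_reference(local))
```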

Behavioral setup and example average traces across all trials on each of the 16 contacts.

All trials were separated by a 3–4 s inter-trial interval (ITI) and preceded by a visual fixation cue presented at the center of a monitor in front of the animal. If the monkey successfully fixated on the cue, the cue was removed and a single stimulus from one of the three sensory modalities (visual, tactile, auditory) was presented. All trials were followed by a brief delay (0.7–1.4 s) before a small consistent amount of juice was delivered. The color code of the peri-event LFP activity corresponds to the estimated location of the recording electrode within the nuclei of the amygdala. The lines with alternating colors refer to contacts located within 200 µm of an anatomical boundary between nuclei. ME = medial, magenta; CE = central, green; AB = accessory basal, yellow; BA = basal, cyan; LA = lateral, orange; non-amygdala contacts are colored black.

Sets of eight visual, tactile, and auditory stimuli were presented to the monkey as static images, gentle airflow, and random sounds (Figure 1). Each stimulus was presented 12–20 times and was followed by juice reward. All stimuli were chosen to be unfamiliar and devoid of any inherent significance to the animal (assuming that images of fractals or sounds made by musical instruments that were not previously encountered by the monkeys have no intrinsic value). Given that the delivery of all stimuli was followed by the same reward, the learned significance of these stimuli was not different across sensory modalities. Stimuli with socially salient content like faces or vocalizations were avoided, as were images or sounds associated with food (e.g., pictures of fruit or the sound of the feed bin opening). Airflow nozzles were never directed toward the eyes or into the ears to avoid potentially aversive stimulation of these sensitive areas.

The LFP signal was compared between a baseline window (from −1.5 to −1.0 s relative to fixation cue onset) and a stimulus window (from 0 to +1.0 s relative to stimulus onset). Ninety-five percent of the contacts showed significant changes in LFP activity relative to baseline (623/656, 95.0% of all contacts, Wilcoxon rank-sum tests; Bonferroni corrected for 16 comparisons, ɑ = 0.01/16 = 0.000625). Rather than analyze the signals on each contact individually, we used a guided source-separation method, called generalized eigendecomposition (GED), to identify covariation patterns across the contacts that were maximally activated during stimulus delivery relative to the baseline period (Figure 2a–c). We then assessed whether these covariation patterns, also called ‘components,’ were spatially defined and modality-specific (Figure 2b–g) (for further details see Materials and methods). Note that unlike principal components analysis, GED components do not have an orthogonality constraint, which facilitates physiological interpretability (Cohen, 2017a).
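In code, the GED step amounts to solving a generalized symmetric eigenproblem between the stimulus and baseline covariance matrices. The sketch below is a minimal illustration rather than the authors' pipeline, using SciPy's `eigh` on toy data in which one hypothetical "subnetwork" of contacts gains power during the stimulus; the component map (Sw) and component time series (Xw) follow the definitions given for Figure 2:

```python
import numpy as np
from scipy.linalg import eigh

def ged(S, R):
    """Generalized eigendecomposition: solve S w = lambda R w, finding
    weight vectors that maximize the stimulus-to-baseline power ratio.

    S, R : (n_contacts, n_contacts) trial-averaged stimulus and baseline
    covariance matrices. Returns eigenvalues sorted descending and the
    matrix W whose columns are the corresponding eigenvectors.
    """
    evals, evecs = eigh(S, R)            # SciPy returns ascending order
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# Toy data: 16 contacts; the stimulus adds covariance along one spatial
# pattern u (a hypothetical "subnetwork" on the first four contacts)
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 16))
R = A @ A.T + 16 * np.eye(16)            # well-conditioned baseline covariance
u = np.zeros(16)
u[:4] = 1.0
S = R + 50 * np.outer(u, u)              # stimulus boosts power along u

evals, W = ged(S, R)
w1 = W[:, 0]                             # top component weights
component_map = S @ w1                   # component map (Sw, cf. Figure 2d)
# For a time-by-contacts data matrix X, the component time series is X @ w1
```

Because S differs from R only along u, the top eigenvalue stands out while the remaining eigenvalues sit at 1, and the component map concentrates on the four contacts carrying the added covariance.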

Visualization of GED.

(a) The elements of the equation for generalized eigendecomposition (RWΛ = SW), where R and S are 16 × 16 covariance matrices (corresponding to the 16 contacts) derived from baseline and stimulus activity, respectively. These contact covariance matrices were generated for each trial and then averaged over trials (see Materials and methods). The columns in the W matrix (highlighted by the arrowheads below the plot) contain the eigenvectors. Each eigenvector from the matrix can be written as wn (indicated by w1 … w16 above the plots). The diagonal matrix of eigenvalues is represented by Λ. The individual eigenvalues are denoted by λ1 … λ16 shown along the diagonal. A general color scale for the heatmap images is shown under the left side of the equation. (b) MRI-based reconstruction of recording sites in this session. (c) The average peri-stimulus LFP for each contact (color convention is the same as in Figure 1). Light gray shading represents the baseline period and the gray dotted line denotes fixspot onset (note that these data are aligned to stimulus onset; the baseline and fixspot indicators shown here are for illustrative purposes only). The purple shading marks stimulus delivery. Note that the data in (c) are contained in a time-by-contacts matrix (represented by X in subsequent equations). (d) Example of the component map associated with the largest eigenvalue (created by multiplying the stimulus covariance matrix with an eigenvector and termed Sw1). (e) The component time series associated with this eigenvalue. This time series (represented as Xw1) is created by multiplying the LFP data matrix (i.e., X) with the eigenvector in the first column of the W matrix (w1). (f, g) The same as (d) and (e) but associated with the second largest eigenvalue. (h) Histogram of the number of significant components per session when including all contacts regardless of location (i.e., within or outside of the amygdala). 
(i) Scree plots of the eigenvalues derived from GED from four example sessions when all contacts were included in analysis regardless of location. In the scree plots, the points above the dotted line correspond to the significant eigenvalues. (j) Histogram of the number of significant components per session when only amygdala contacts were included in the analysis. (k) Scree plots of the eigenvalues from the same sessions in (i) but only using amygdala contacts. Note that decreasing the number of contacts decreases the number of total components that GED is able to extract. This does not always result in decreases in the number of significant components (left two panels); however, small (right middle) and occasionally large (far right) decreases in the number of significant components were observed when removing non-amygdala contacts from the analysis.

We found between 1 and 5 simultaneously active and statistically significant components during each recording session (Figure 2h–i), leading to a total of 116 components obtained from 41 sessions (significance was computed via permutation testing with corrections for multiple comparisons; see Methods). In some of the recording sessions a subset of contacts on the linear array were estimated to be located outside of the amygdala. Removing these contacts from the analysis eliminated 24 components (i.e., 92 components were attributable solely to activity recorded in the amygdala), emphasizing the importance of the spatial location of each contact (Figure 2j–k). Only results from significant components are discussed for the remainder of this manuscript (p<0.05 corrected for multiple comparisons via permutation testing; details in Materials and methods).
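The permutation approach to eigenvalue significance can be sketched as follows. This hypothetical implementation (the paper's exact procedure is in Materials and methods) shuffles the stimulus/baseline labels of the per-trial covariance matrices and uses a max-statistic null distribution, which corrects for multiple comparisons across components:

```python
import numpy as np
from scipy.linalg import eigh

def significant_components(S_trials, R_trials, n_perm=500, alpha=0.05, seed=0):
    """Permutation test for GED eigenvalues (a hypothetical sketch).

    S_trials, R_trials : (n_trials, n, n) per-trial stimulus and baseline
    covariance matrices. Each permutation swaps the stimulus/baseline
    labels on a random subset of trials; the largest eigenvalue of each
    permuted GED builds a max-statistic null distribution.
    """
    rng = np.random.default_rng(seed)

    def evals_desc(S, R):
        return np.sort(eigh(S, R, eigvals_only=True))[::-1]

    observed = evals_desc(S_trials.mean(axis=0), R_trials.mean(axis=0))
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        flip = rng.random(len(S_trials)) < 0.5       # trials to swap
        S_perm = np.where(flip[:, None, None], R_trials, S_trials).mean(axis=0)
        R_perm = np.where(flip[:, None, None], S_trials, R_trials).mean(axis=0)
        null_max[p] = evals_desc(S_perm, R_perm)[0]
    threshold = np.quantile(null_max, 1.0 - alpha)
    return observed > threshold, observed, threshold

# Toy data: 40 trials, 8 contacts; the stimulus triples the amplitude on
# contact 1, so one component should clearly survive the permutation test
rng = np.random.default_rng(2)
n_contacts, n_samples, n_trials = 8, 200, 40
S_trials = np.empty((n_trials, n_contacts, n_contacts))
R_trials = np.empty_like(S_trials)
for t in range(n_trials):
    base = rng.standard_normal((n_contacts, n_samples))
    stim = rng.standard_normal((n_contacts, n_samples))
    stim[0] *= 3.0                                   # stimulus-driven contact
    R_trials[t] = base @ base.T / n_samples
    S_trials[t] = stim @ stim.T / n_samples

sig, observed, threshold = significant_components(S_trials, R_trials, n_perm=200)
```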

The relative contribution of the signal from each contact to a particular component can be visualized in a ‘component map’ (Figure 2d and f). This map can then be co-registered with the anatomical map of the amygdala to reveal whether contacts in different nuclei contribute to the components in distinct ways. Each component also has its own time series (Figure 2e and g) that results from passing the raw LFP signals through the associated eigenvector to obtain a reconstructed signal that corresponds to the estimated output of the subnetwork captured in the component (Haufe et al., 2014b).

GED components map onto anatomical boundaries

The component maps show the extent to which the signals recorded from each anatomically localized contact contribute to the putative subnetwork captured by the component (Figure 3). The further a contact weight value is from 0, the more the signal on that particular contact contributes to the component/subnetwork. Adjacent contacts that fall on one side of the zero line co-vary, and thus contribute similarly to the same component. Values of opposing signs (positive vs negative) make opposing contributions to the components (i.e., signals on contacts associated with positive weights inversely covary with the signals on contacts associated with negative weights). We applied a change-point detection algorithm (Killick et al., 2012; Lavielle, 2005) to cluster the contacts according to shifts in the rolling average of adjacent values in the component maps (see Materials and methods). The boundaries of the statistical clusters matched both internal and external anatomical boundaries of the amygdala (Figure 3), which were estimated through a combination of high-resolution MRI and histology (Figure 3—figure supplement 1). This is important because GED is a purely statistical decomposition that is not constrained by anatomical information, spatial arrangement, or relative distances of the electrode positions. Moreover, this method discovered subnuclear divisions that neuroimaging would not be able to detect but are well known from histological analyses. For example, in Figure 3e and f the horizontal dotted lines intersecting the basal nucleus correspond to known boundaries between the magnocellular, intermediate, and parvocellular subdivisions (Amaral et al., 1992).
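As a rough stand-in for the penalized change-point methods the paper cites (Killick et al., 2012; Lavielle, 2005), a binary-segmentation sketch on the contact-weight profile captures the idea; this toy implementation is not the authors' code, and the penalty value is an arbitrary illustration:

```python
import numpy as np

def change_points(weights, min_seg=2, penalty=1.0):
    """Greedy binary segmentation of a 1-D contact-weight profile.

    A simplified stand-in for penalized change-point detection (e.g., PELT;
    Killick et al., 2012): recursively split wherever splitting reduces the
    total within-segment squared error by more than `penalty`.
    """
    w = np.asarray(weights, dtype=float)

    def cost(seg):
        return float(((seg - seg.mean()) ** 2).sum()) if seg.size else 0.0

    def split(lo, hi):
        base = cost(w[lo:hi])
        best_gain, best_k = 0.0, None
        for k in range(lo + min_seg, hi - min_seg + 1):
            gain = base - cost(w[lo:k]) - cost(w[k:hi])
            if gain > best_gain:
                best_gain, best_k = gain, k
        if best_k is None or best_gain <= penalty:
            return []
        return split(lo, best_k) + [best_k] + split(best_k, hi)

    return split(0, len(w))

# Toy component map: 16 contacts with a sharp weight transition at contact 10,
# mimicking a boundary between two nuclei
rng = np.random.default_rng(3)
w = np.r_[np.full(10, 0.8), np.full(6, -0.5)] + 0.05 * rng.standard_normal(16)
print(change_points(w))   # → [10]
```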

Figure 3 with 1 supplement.
Components map onto anatomical boundaries.

Six examples of how component maps (left) match the MRI-based reconstruction of the electrode array positions (right). On each component map the x axis corresponds to the weight calculated for each contact (i.e., see Sw in Figure 2) of the V-probe and the y axis lists the contacts. In each panel, the colors of the contact weights match their estimated nuclear locations following the same convention as in previous figures. Gray dotted lines in the component map plots denote change points in the contact weights that statistically separate groups of contacts based on their coactivity patterns (see Methods). Ventricles are contoured in light gray dotted lines. Med = medial nucleus; Acc. basal = accessory basal nucleus; ER = entorhinal cortex. (a–b) Examples of large transitions in contact weights at boundaries between the amygdala and surrounding tissue, particularly on the ventral contacts. (c–d) Examples of component maps in which the statistical clustering (dotted horizontal lines) match the geometry of the nuclear boundaries. (e–f) Two examples of component maps in which the statistical boundaries correspond to known subnuclear divisions of the basal nucleus.

We quantified the match between the statistical grouping of contacts and the anatomical grouping of contacts as a percent overlap (see Methods). The overall number of matching contacts was 723/1072 (67.4%). We then determined which of the components (rank-ordered first, second, etc., see histogram in Figure 2h–j) showed the best match between the statistical grouping and the anatomical grouping of contacts. Only the first components showed statistically better than chance overlap (249/349, 71.3%, paired t-test, t = 2.76, df = 40, p=0.009, 1000 permutations, Bonferroni corrected for five comparisons, ɑ = 0.05/5 = 0.01).
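The percent-overlap metric can be illustrated with a simple matching scheme, in which each statistical cluster is credited with the contacts that share its majority anatomical label. This is one plausible choice for illustration only; the paper's exact matching rule is described in its Materials and methods:

```python
import numpy as np

def percent_overlap(stat_labels, anat_labels):
    """Fraction of contacts whose statistical cluster matches the anatomy.

    Each statistical cluster is credited with the contacts that carry the
    cluster's majority anatomical label (a hypothetical matching scheme).
    stat_labels, anat_labels : per-contact cluster and nucleus labels.
    """
    stat = np.asarray(stat_labels)
    anat = np.asarray(anat_labels)
    matched = 0
    for c in np.unique(stat):
        # count how many contacts in this cluster share its dominant nucleus
        _, counts = np.unique(anat[stat == c], return_counts=True)
        matched += counts.max()
    return matched / anat.size

# Toy example: the statistical boundary misses the anatomical one by a
# single contact, so 15 of 16 contacts agree
stat = [0] * 6 + [1] * 10
anat = ["LA"] * 5 + ["BA"] * 11
print(percent_overlap(stat, anat))   # → 0.9375
```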

GED components discriminate between sensory modalities

We next asked whether the selectivity for sensory modality seen at the single-unit level in the amygdala (Morrow et al., 2019) is also evident in the components obtained by GED. Time-frequency analyses of the component time series showed that each subnetwork responded selectively to visual, tactile, or auditory stimuli, or to a combination thereof (Figure 4). Of the 116 identified components, 102 discriminated between sensory modalities in at least one frequency range for at least one sensory modality (cluster-mass t-tests, ɑ = 0.01, see Methods), as shown by the change in power elicited by stimuli of each sensory modality (Figure 4a–f). Only 13 of these 102 components showed responses restricted to a single sensory modality. These results replicate, at a larger spatial scale, our findings from the single-unit analyses (Morrow et al., 2019), that is, that the majority of sensory signals processed in the amygdala are multisensory.

Figure 4 with 1 supplement.
GED-based components show modality-selective but not modality-specific changes in multiple frequency bands.

(a–f) Example time-frequency plots created from the GED-based components separated by stimulus modality (i.e., visual, tactile, and auditory). Each time-frequency plot shows the relative difference in power between the baseline and stimulus periods for a GED-based component time series (scale at far right, 1–100 Hz, logarithmic scale). Black contoured lines denote clusters of significant changes in power between baseline and stimulus delivery (see Materials and methods). Solid lines at 0 and 1 s denote the start and end of stimulus delivery. (a) Increase in power centered around 4.5 Hz elicited by visual stimuli. (b) Increase in power across low (1–3 Hz) frequencies elicited by auditory stimuli with minor power changes for visual and tactile stimuli. (c) Moderate increase in power from 1 to 2 and 5–8 Hz for visual stimuli and 4–8 Hz for tactile stimuli. (d) Moderate decreases in power for all stimuli, centered around 4 Hz. (e) Disparate responses to visual, tactile, and auditory stimuli. Note how multiple frequency bands show time-varying increases in power for all three stimuli; however, the time-frequency power around 1 Hz is similar between the sensory modalities. (f) A component with fairly similar responses across sensory modalities but note the lack of high-frequency activity for auditory stimuli. (g) Maximum separation of the selectivity for sensory modality occurs below 10 Hz. Solid lines represent the mean z-value within significant clusters at each frequency for each modality (visual = red, tactile = green, auditory = blue). Dotted lines denote mean +/- 5 standard errors of the mean. Gold asterisks denote significant differences between the visual and tactile responses, magenta asterisks denote significant differences between visual and auditory, and cyan asterisks denote differences between tactile and auditory (Wilcoxon rank-sum tests, p<0.01). 
(h–j) Pairwise comparisons of selectivity between modalities for components that responded to multiple sensory modalities. Solid lines represent mean and dotted lines represent the mean +/- 5 standard errors of mean. Black asterisks denote significant differences (Wilcoxon rank-sum tests, p<0.01). (k) Spectral profiles were generated from the significant components from all sessions. PCA was then used to extract the prominent features of these profiles (right). Corresponding scree plots shown on the left (see Materials and methods). These plots were created using all available data regardless of estimated location of the contact. The spectral profile and scree plots were color coded according to the rank of the associated component (1st = blue; 2nd = orange; 3rd = yellow; 4th = purple). The power at each frequency is plotted in arbitrary units of energy. Arrows highlight peaks in the spectra that correspond to frequencies that have been extensively studied in other brain regions (i.e., delta, theta, and low and high gamma). (l) Same as in (k) but only using data from contacts that were estimated to be within the amygdala. Figure 4—figure supplement 1 shows how removing non-amygdala contacts can – but does not necessarily – impact spectral profiles in two individual recording sessions.

The power profiles of the component time series were modality-selective but not modality-specific; that is, the processing of visual, tactile, or auditory stimuli was not restricted to a particular frequency domain or a particular latency relative to stimulus onset. Sensory modalities were better differentiated at lower frequencies (Wilcoxon rank-sum tests, ɑ = 0.01; Figure 4g). Typically, the largest changes in low-frequency power (<10 Hz) were elicited by visual stimuli, followed by tactile stimuli, with the lowest values for auditory stimuli (Figure 4g), confirming in physiological terms the richer anatomical connectivity of the monkey amygdala with visual areas compared to auditory or tactile areas (Amaral et al., 1992). The same observation holds when only assessing components with multisensory responses (72/89 components) (Wilcoxon rank-sum tests, ɑ = 0.01; Figure 4h–j); visual stimuli elicited larger power changes than tactile stimuli, which in turn elicited larger changes than auditory stimuli.

We created a spectral profile based on the significant component signals across all datasets (Figure 4k). We then used principal components analysis to extract the features of the component activity that were most prominent across recording sessions. While the spectral profile from the first component roughly followed the 1/f function typically observed in LFP signals, the subsequent spectra showed peaks around the delta (3.5–4.5 Hz), theta (6–8 Hz), low-gamma (~38 Hz), and high-gamma (63–75 Hz) frequency bands. To determine whether these spectral profiles were truly generated within the amygdala or were influenced by non-amygdala sources, we re-created the spectra using only the contacts estimated to be within amygdala boundaries (based on MRI co-registration). While some changes in the spectra were expected from the removal of non-amygdala contacts, it is noteworthy that the prominent spectral peaks (for example, at 4, 7, 38, and 70 Hz) were preserved (Figure 4l). The preservation of these spectral peaks after removing the possible influence of non-amygdala electrodes shows that the primate amygdala exhibits prominent rhythms at peaks often observed in the cortex. Examples of how removing non-amygdala contacts from the analyses impacts the components in individual recording sessions are shown in Figure 4—figure supplement 1.
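The profile-extraction step for Figure 4k can be sketched as a PCA (here via SVD) over the component power spectra. The toy data below, a variable theta-band bump riding on a 1/f background, are purely illustrative and not derived from the recordings:

```python
import numpy as np

def spectral_profile_pca(spectra, n_keep=4):
    """PCA (via SVD) over component power spectra.

    spectra : (n_components, n_freqs) array, one row per GED component.
    Returns the top principal spectral profiles and the explained-variance
    fractions that would populate a scree plot.
    """
    X = spectra - spectra.mean(axis=0, keepdims=True)   # center per frequency
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:n_keep], explained[:n_keep]

# Toy spectra: a 1/f background plus a theta-band (~7 Hz) bump whose size
# varies across 30 hypothetical components
rng = np.random.default_rng(4)
freqs = np.arange(1, 101)
background = 1.0 / freqs
theta_bump = np.exp(-0.5 * ((freqs - 7) / 1.5) ** 2)
spectra = np.array([background + g * theta_bump + 0.01 * rng.standard_normal(100)
                    for g in rng.uniform(0.0, 1.0, 30)])

profiles, explained = spectral_profile_pca(spectra)
peak_freq = freqs[np.argmax(np.abs(profiles[0]))]   # should land near 7 Hz
```

Because the across-component variance is dominated by the size of the theta bump, the first principal spectral profile peaks at the bump's frequency, mirroring how the paper's PCA isolates band-limited features from the 1/f background.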

Modality-specific GED analyses

Given the activity modulation by sensory modality (Figure 4g–j), we re-computed the GED separately for each modality to investigate whether the components could have been driven by a single sensory modality (see Methods). No significant differences were observed between the numbers of components extracted via modality-specific GED (visual = 81, tactile = 79, auditory = 71; χ2 = 0.37, p=0.83) (Figure 5a–c). Furthermore, the sensory-modality-specific component maps were significantly correlated with the component maps that combined all three sensory modalities (Figure 5c). We found, however, that the component maps based solely on visual trials matched the nuclear boundaries better than the component maps based only on tactile or auditory trials (Table 1). Specifically, the changepoint positions in the first and second visual component maps coincided with the estimated anatomical boundaries of the nuclei (paired t-tests, p=0.0006 and p=0.001, respectively; same parameters as previous analyses, see Table 1 for full statistics). Overall, these results suggest that the main GED findings reflect a mixture of sensory-independent and sensory-dependent functional organization.

Comparison of GED components across sensory modalities.

(a) Histograms showing the distribution of significant components from visual, tactile, and auditory trials. (b) MRI-based reconstruction of recording sites from an example session (left) and component maps (right) generated from data collected in the same example session using all trials (left) and only visual, tactile, or auditory trials (last three plots in the row). All component maps shown in panel b are based on the first component resulting from each analysis. (c) Pairwise correlations between component maps generated using all trials (x-axis) vs. component maps generated from modality-specific components (y-axis). Each dot shows the paired map projections onto one electrode from one component. Arrowheads in (b) show how the value from a single contact in each of the component maps projected into the scatter plots in (c).

Table 1
Only visual components significantly map onto anatomical boundaries.

Only the first and second visual component maps matched the anatomical boundaries when correcting for multiple comparisons (paired t-tests, Bonferroni corrected for 12 comparisons, ɑ = 0.05/12 ≈ 0.004). C.I. indicates the 95% confidence interval.

Visual
  Component #1: t(38) = 3.75, p = 0.00059, C.I. = [0.40, 1.56]
  Component #2: t(22) = 3.74, p = 0.0011, C.I. = [0.57, 1.98]
  Component #3: t(12) = 0.25, p = 0.81, C.I. = [−0.54, 0.68]
  Component #4: t(4) = −0.054, p = 0.62, C.I. = [−1.80, 1.22]
Tactile
  Component #1: t(35) = 0.63, p = 0.53, C.I. = [−0.34, 0.65]
  Component #2: t(25) = 1.97, p = 0.06, C.I. = [−0.02, 0.97]
  Component #3: t(12) = 1.42, p = 0.18, C.I. = [−0.27, 1.29]
  Component #4: t(3) = 1.87, p = 0.16, C.I. = [−1.03, 3.97]
Auditory
  Component #1: t(34) = −2.46, p = 0.019, C.I. = [−0.76, −0.07]
  Component #2: t(23) = −0.74, p = 0.47, C.I. = [−0.58, 0.27]
  Component #3: t(9) = −0.02, p = 0.99, C.I. = [−0.96, 0.94]
  Component #4: t(1) = −0.97, p = 0.51, C.I. = [−14.15, 12.14]

Discussion

Neither the fine-grained single-unit literature nor the coarser neuroimaging literature has provided conclusive evidence for or against the functional compartmentalization of the primate amygdala. Only a few single-unit studies reported an uneven distribution of neurons with particular response properties across the nuclei (e.g., Grabenhorst et al., 2019; Grabenhorst et al., 2016; Mosher et al., 2010; Zhang et al., 2013). To complicate things further, an increasing number of single-unit studies reported that neurons in the amygdala respond to multiple stimulus dimensions and task parameters (Kyriazi et al., 2018; Morrow et al., 2019; Munuera et al., 2018; Putnam and Gothard, 2019; Saez et al., 2015; Gothard, 2020). These multidimensional neurons are not clustered in any nucleus or subnuclear region, which seems to be at odds with the compartmentalized view of the amygdala. Local field potentials, multi-unit activity, or higher-yield single-unit recordings could provide better approximations of network-level activity and may therefore help to resolve this apparent contradiction (for a critique of neuron-based concepts see Buzsáki, 2010; Yuste, 2015).

Remarkably little is known about the neurophysiology of the primate amygdala at the network level, due in part to its layerless architecture that is not expected to generate the predictable LFP patterns observed in layered structures like the neocortex or the hippocampus. In the primate amygdala, each nucleus and nuclear subregion has a different cytoarchitecture and distinct input-output connections with both cortical and subcortical structures (Amaral et al., 1992; Ghashghaei and Barbas, 2002; Sah et al., 2003; Pessoa et al., 2019). For example, projections from the amygdala to dopaminergic neurons in the substantia nigra mainly arise from the central nucleus. These central nucleus projections are further distinguished into medial and lateral subdivisions that terminate in separate subregions of the substantia nigra (Fudge and Haber, 2000). Likewise, the subregions of the dorsolateral prefrontal cortex are reciprocally connected to distinct nuclear subdivisions of the basolateral complex (Amaral and Price, 1984; Ghashghaei and Barbas, 2002). This organization could give rise to parallel, nucleus-anchored processing loops that induce nuclear-specific patterns of neural activity. It is also possible that the rich intra-amygdala connectivity (Bonda, 2000; Pitkänen and Amaral, 1991) distributes signals arriving to or originating from a particular area of the amygdala across multiple nuclei, evening out functional differences expected based on structural considerations alone. The human neuroimaging literature has already alluded to this type of organization, based on anatomical connectivity (Bickart et al., 2012) or resting state fMRI data (Roy et al., 2009).

Here, we provide evidence for a mesoscopic organization of the amygdala into putative subnetworks, identified by guided source separation of the local field potentials recorded from multi-contact electrodes, that often respect nuclear boundaries but in some cases reflect synchronization of neural populations across multiple subnuclear regions. The presence of any identifiable subnetworks demonstrates that a layered architecture is not a prerequisite for meaningful organization of LFP activity. The presence of multiple subnetworks suggests functional compartmentalization. Importantly, the strongest of the subnetworks were contoured by the internal and external boundaries of the nuclei of the amygdala. This is remarkable because GED is ‘blind’ to the spatial arrangement of the contacts on the V-probe and to the location of the V-probe in the amygdala. This is evidence that a mesoscale physiological feature of the amygdala (the LFP), unlike the single units, is bound by anatomical constraints. The implications of this finding are twofold: (1) expanding GED to three dimensions (using multiple linear electrode arrays at different medio-lateral and rostral-caudal positions) will likely generate a more complete functional map of the primate amygdala, and (2) these new physiological features revealed by GED can then be set in register with the known neuroanatomy encompassing all nuclei.

Indeed, addressing these implications through multi-array recordings would ameliorate several of the limitations of the experiments presented here. For example, the use of a single electrode array limited us to assessing network structure along a single axis (i.e., dorsal-ventral) that was dependent on the angle of insertion of the probe. By collecting LFP from contacts on additional arrays placed at different medial-lateral and anterior-posterior positions, component maps could be constructed in multiple planes that intersect the nuclei of the amygdala. These experiments will better localize the sources of the LFP signals and provide a platform for testing anatomy-based hypotheses regarding non-contiguous (i.e., spatially distributed) inter-nuclear subnetworks (Pitkänen et al., 1995; Pitkänen and Amaral, 1991; Sah et al., 2003).

Recent studies have demonstrated inter-nuclear differences in decoding accuracy obtained from pseudo-populations of neurons for various task-related parameters using nearest-neighbor and support-vector machine classifiers (Grabenhorst et al., 2019; Grabenhorst et al., 2016). In future experiments, GED analyses could be used to generate components that maximally differentiate signals based on similar task parameters. Comparisons of component mapping to the results obtained from these decoding methodologies would provide complementary metrics for examining the structure-function relationships of single-unit and LFP activity.

The subnetworks identified by GED are functionally relevant because the coactivity patterns elicited by visual, tactile, and auditory stimuli discriminated between sensory modalities (as shown in Figure 4). While no activity pattern was specific for a particular sensory modality, the majority of activity patterns carried information about one or multiple sensory modalities. For example, the activity of the subnetwork shown in Figure 4e conveys information about all sensory modalities albeit with different power in each frequency band, without any one frequency band being assigned to a single sensory modality. Modality-specific analyses further support the hypothesis that functional subnetworks in the amygdala process multisensory information (Figure 5). These results complement the selectivity and specificity pattern of the single-unit responses recorded simultaneously from the same contacts (Morrow et al., 2019).

There were, however, two notable areas where the components were sensitive to sensory modality: (1) visual stimuli elicited larger increases in low-frequency (<10 Hz) power (Figure 4g–i); and (2) only the visual components significantly mapped onto MRI-defined nuclear boundaries. This is consistent with the outcome of anatomical tract-tracing studies that compared inputs to the primate amygdala from visual, auditory, tactile, and multisensory areas of the temporal lobe and found a preponderance of projections to the amygdala from visual areas (Amaral and Price, 1984; Amaral et al., 1992; Stefanacci and Amaral, 2002). Thus, the mesoscopic intranuclear spatial structure of the multisensory subnetworks is driven predominantly by visual processing. Interestingly, the single-unit data collected in these experiments (Morrow et al., 2019) did not show this bias towards the visual modality. This shows that the LFP contains information that was not readily accessible from the spiking data alone. Because we did not monitor eye movements or facial movements, we cannot rule out alternative explanations, such as stimulus-related motor responses, that could account, at least in part, for the observed pattern of activity. Indeed, neurons in the amygdala respond with phasic bursts of activity to fixations on certain components of images such as faces (Minxha et al., 2017) and eyes (Mosher et al., 2014), and during production of facial expressions (Livneh et al., 2012; Mosher et al., 2016).

Further progress in understanding the mesoscale organization of the primate amygdala will come from expanding the network analyses presented here to cover field-field or spike-field interactions between the amygdala and connected structures. The first few attempts in this direction have been successful in revealing the directionality of interactions between the amygdala and the anterior cingulate cortex during aversive learning (Taub et al., 2018). Brain-wide circuits are indeed the domain where LFP analyses might be most revealing (Pesaran et al., 2018). Brain states, like affect or attention, can be characterized by spatiotemporal interactions between a hub and the cortical areas functionally linked to the hub. For example, the pulvinar and amygdala are hubs placed at the intersection of multiple brain-wide circuits and they both are expected to coordinate the activity of multiple cortical and subcortical areas during behavior (Bridge et al., 2016; Pessoa et al., 2019). The coordination of spatial attention across brain-wide networks has been recently attributed to directionally selective, theta-band interactions between the pulvinar, the frontal eye fields, and the parietal cortex (Fiebelkorn et al., 2019). The multivariate signal decomposition techniques used here, expanded to datasets recorded simultaneously from multiple nodes of amygdala-centered circuits, have great potential to determine how the amygdala coordinates the activity of other structures during social and affective behaviors.

Materials and methods

Surgical procedures

Two adult male rhesus macaques, F and B (weight 9 and 14 kg; age 9 and 8 years respectively), were prepared for neurophysiological recordings from the amygdala. The stereotaxic coordinates of the right amygdala in each animal were determined based on high-resolution 3T structural magnetic resonance imaging (MRI) scans (isotropic voxel size = 0.5 mm for monkey F and 0.55 mm for monkey B). A square (26 × 26 mm inner dimensions) polyether ether ketone (a.k.a. PEEK), MRI compatible recording chamber was surgically attached to the skull and a craniotomy was made within the chamber. The implant also included three titanium posts, used to attach the implant to a ring that was locked into a head fixation system. Between recording sessions, the craniotomy was sealed with a silicone elastomer that can prevent growth and scarring of the dura (Spitler and Gothard, 2008). All procedures comply with the NIH guidelines for the use of non-human primates in research and have been approved by The University of Arizona’s Institutional Animal Care and Use Committee.

Experimental design

Electrophysiological procedures

Local field potential activity was recorded with linear electrode arrays (V-probes, Plexon Inc, Dallas, TX) that have 16 equidistant contacts along a 236 μm diameter shaft. Data were collected using Plexon OmniPlex data acquisition hardware and software (RRID:SCR_014803). A single electrode array was acutely lowered into the right amygdala for each recording session using a Thomas Recording Motorized Electrode Manipulator (Thomas Recording GmbH, Giessen, Germany). The first contact of the array was located 300 μm from the tip of the probe and each subsequent contact was spaced 400 μm apart; this arrangement allowed us to monitor simultaneously the entire dorso-ventral expanse of the amygdala. Impedance for each contact typically ranged from 0.2 to 1.2 MΩ. The anatomical location of each electrode was calculated by drawing the chamber to scale on a series of coronal MR images and aligning the chamber to fiducial markers (co-axial columns of high-contrast material). Histological verification of these recording-site estimates was done in monkey B (see supplemental Materials and methods, and Figure 3—figure supplement 1). During recordings, slip-fitting grids with 1 mm distance between cannula guide holes were placed in the chamber; this allowed a systematic sampling of most medio-lateral and anterior-posterior locations in the amygdala. A twenty-three-gauge cannula was inserted through the guide holes and placed 4–6 mm into the cortex. V-probes were driven through the cannula and down to the amygdala at a rate of 70–100 µm per second, slowing to 5–30 µm per second after the tip of the V-probe crossed into the estimated location of the central nucleus. Data from a total of 41 recording sessions (monkey F = 25, monkey B = 16) were analyzed. The raw LFP data used in these analyses are available at https://doi.org/10.5281/zenodo.3752137.

The analog signal from each channel on the V-probe was digitized at the headstage (Plexon Inc, HST/16D Gen2) before being sent through a Plexon pre-amplifier, which filtered from 0.1 to 300 Hz and sampled continuously at 40 kHz. LFP was extracted from each contact and downsampled to 1 kHz for analysis. Signals were initially referenced to the shaft of the electrode and were re-referenced offline to the average signal across all electrodes on the probe (i.e., common average reference). This referencing scheme minimized the contribution of volume conduction from distant sources that spread to all contacts simultaneously and ensured that the recorded LFPs reflected local mesoscale brain dynamics.
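The analyses described here were performed in MATLAB; as an illustration, common average referencing amounts to subtracting, at each timepoint, the mean across contacts. A minimal NumPy sketch (the array shapes are assumptions for the example):

```python
import numpy as np

def common_average_reference(lfp):
    """Re-reference LFP by subtracting, at each timepoint, the mean
    signal across all contacts on the probe.

    lfp : (n_contacts, n_timepoints) array
    """
    return lfp - lfp.mean(axis=0, keepdims=True)

# Example: 16 contacts, 2 s of data at 1 kHz (synthetic)
rng = np.random.default_rng(0)
lfp = rng.standard_normal((16, 2000))
car = common_average_reference(lfp)
# After CAR, the across-contact mean is ~0 at every timepoint
assert np.allclose(car.mean(axis=0), 0.0)
```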

Stimulus delivery

The monkey was seated in a primate chair and placed in a recording booth featuring a 1280 × 720 resolution monitor (ASUSTek Computer Inc, Beitou, Taiwan), two Audix PH5-VS powered speakers (Audix Corporation, Wilsonville, OR) to either side of the monitor, a custom made airflow delivery apparatus (Crist Instruments Company Inc, Damascus, MD), and a juice spout. Juice delivery was controlled by a peristaltic pump (New Era Pump Systems, Inc, Farmingdale, NY, model: NE-9000). The airflow system was designed to deliver gentle, non-aversive airflow stimuli to various locations on the face and head (i.e., the pressure of the air flow was set to be perceptible but not aversive). The system, based on the designs of Huang and Sereno and Goldring et al. (Goldring et al., 2014; Huang and Sereno, 2007), consists of a solenoid manifold and an airflow regulator (Crist Instruments Company, Inc), which controlled the intensity of the airflow directed toward the monkey. Low pressure vinyl tubing lines (ID 1/8 inch) were attached to ten individual computer-controlled solenoid valves and fed through a series of Loc-line hoses (Lockwood Products Inc, Lake Oswego, OR). The Loc-line hoses were placed such that they did not move during stimulus delivery and were out of the monkey’s line of sight. All airflow nozzles were placed ~2 cm from the monkey’s fur and outflow was regulated to 20 psi. At this pressure and distance, the air flow caused a visible deflection of the monkey’s fur.

Stimulus delivery was controlled using custom written code in Presentation Software (Neurobehavioral Systems, Inc, Berkeley, CA). The monkey’s eye movements were tracked by an infrared eye tracker (ISCAN Inc, Burlington, MA, camera type: RK826PCI-01) with a sampling rate of 120 Hz. Eye position was calibrated prior to every session using a 5-point test. During the task the animal was required to fixate for 125–150 ms on a central cue (‘fixspot’) that subtended 0.35 dva. After successful fixation, the fixspot was removed and the monkeys were free to move their eyes around the screen. Removal of the fixspot was followed by the delivery of a stimulus randomly drawn from a pool of neutral visual, tactile, and auditory stimuli. In monkey F, there was no delay between the fixspot removal and stimulus onset, while in monkey B a 200 ms delay was used. Stimulus delivery lasted for 1 s and was followed (after a delay of 700–1200 ms) by juice reward. Each stimulus was presented 12–20 times and was followed by the same amount of juice (~1 mL). Trials were separated by a 3–4 s inter-trial interval (ITI).

For each recording session, a set of eight novel images were selected at random from a large pool of pictures of fractals and objects. Images were displayed centrally on the monitor and covered ~10.5×10.5 dva area. During trials with visual stimuli, the monkey was required to keep his eye within the boundary of the image. If the monkey looked outside of the image boundary, the trial was terminated without reward and repeated following an ITI.

Tactile stimulation was delivered to eight sites on the face and head: the lower muzzle, upper muzzle, brow, and above the ears on both sides of the head (see Morrow et al., 2019 for further details). The face was chosen because in a previous study a large proportion of neurons in the amygdala respond to tactile stimulation of the face (Mosher et al., 2016). Two ‘sham’ nozzles were directed away from the monkey on either side of the head to control for the noise made by the solenoid opening and/or by the movement of air through the nozzle. Pre-experiment checks ensured that the airflow was perceptible (caused visible deflection of hair) but not aversive. The monkeys displayed slight behavioral responses (e.g., minor startle responses) to the stimuli during the first habituation session, but they did not overtly respond to these stimuli during the experimental sessions.

For each recording session, a set of eight novel auditory stimuli were taken from freesound.org, edited to be 1 s in duration, and amplified to have the same maximal volume using Audacity sound editing software (Audacity version 2.1.2, RRID:SCR_007198). Sounds included musical notes from a variety of instruments, synthesized sounds, and real-world sounds (e.g., tearing paper). The auditory stimuli for each session were drawn at random from a stimulus pool using a MATLAB script (The MathWorks Inc, Natick, MA, version 2016b, RRID:SCR_001622).

All stimuli were specifically chosen to be unfamiliar and devoid of any inherent or learned significance for the animal. Stimuli with socially salient content like faces or vocalizations were avoided as were images or sounds associated with food (e.g., pictures of fruit or the sound of the feed bin opening). Airflow nozzles were never directed toward the eyes or into the ears to avoid potentially aversive stimulation of these sensitive areas.

Data analysis

LFP signals were extracted for analysis in MATLAB using scripts from the Plexon MATLAB software development kit and downsampled to 1000 Hz in MATLAB. Artifacts (e.g., signals from broken contacts, sharp spikes in the LFP caused by movement of the animal, or 60 cycle line noise) were removed from the signal prior to further analyses. These data were originally collected for a study designed to assess single-unit processing in the amygdala (Morrow et al., 2019). While the number of sessions and the number of trials per session were originally selected with single-unit processing in mind, these experiments provided ample LFP data for analysis as all functioning contacts provided usable LFP data regardless of whether well-isolated single units were present.

All analyses were performed using custom made MATLAB scripts.

Peri-event LFP

The LFP signal from every trial was taken for each contact for two time windows: a baseline window from −1.5 to −1.0 s relative to fixspot onset and a stimulus delivery window from 0 to +1.0 s relative to stimulus onset. The medians of these two distributions of values were compared using a Wilcoxon rank-sum test to determine the number of contacts with significant event-related changes in LFP signals. These tests were Bonferroni-corrected for multiple comparisons within each session (i.e., tests for differences on each of 16 contacts result in an adjusted alpha level of 0.01/16 = 0.000625).
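As an illustration of this per-contact comparison, a Python sketch using SciPy’s rank-sum test (assuming SciPy is available; the sample sizes and the 0.5 SD shift are synthetic stand-ins for real baseline and stimulus samples):

```python
import numpy as np
from scipy.stats import ranksums

# Bonferroni-corrected rank-sum comparison of baseline vs stimulus LFP
# values on each of 16 contacts (alpha = 0.01 corrected across contacts)
alpha = 0.01 / 16                                  # = 0.000625
rng = np.random.default_rng(4)
baseline = rng.standard_normal((16, 500))          # 16 contacts x samples
stimulus = rng.standard_normal((16, 1000)) + 0.5   # shifted on every contact

significant = [ranksums(b, s).pvalue < alpha
               for b, s in zip(baseline, stimulus)]
# A 0.5 SD shift with this many samples is detected on all contacts
assert all(significant)
```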

Covariance matrices

Covariance matrices were generated by taking the LFP signal for each contact during a time window (baseline or stimulus delivery) to get a timepoints-by-contacts matrix for a given trial. The mean LFP across time on each trial was subtracted from the LFP at each timepoint. The timepoints-by-contacts matrix was then multiplied by its transpose to create a square, symmetric covariance matrix for a single trial (i.e., a contacts-by-contacts covariance matrix). These trial-wise covariance matrices were generated for both the baseline and stimulus delivery time periods. The average of the stimulus covariance matrices made the S matrix and the average of the baseline matrices made the R matrix.
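A NumPy sketch of this construction (synthetic data; the division by the number of timepoints is a standard normalization not spelled out in the text, which keeps windows of different durations comparable):

```python
import numpy as np

def trial_covariance(lfp_window):
    """Contacts-by-contacts covariance of one trial.

    lfp_window : (n_timepoints, n_contacts) slice of one trial.
    """
    centered = lfp_window - lfp_window.mean(axis=0, keepdims=True)
    # Transpose-times-matrix product, normalized by window length
    return centered.T @ centered / (len(centered) - 1)

# Average trial-wise covariances into the S (stimulus) and R (baseline)
# matrices used by the GED
rng = np.random.default_rng(2)
stim_trials = rng.standard_normal((30, 1000, 16))   # 30 trials, 1 s at 1 kHz
base_trials = rng.standard_normal((30, 500, 16))    # 0.5 s baseline windows
S = np.mean([trial_covariance(t) for t in stim_trials], axis=0)
R = np.mean([trial_covariance(t) for t in base_trials], axis=0)
assert S.shape == (16, 16) and np.allclose(S, S.T)
```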

Generalized eigendecomposition

Generalized eigendecomposition (GED) was used to generate components that maximized stimulus related changes in activity. GED is a guided source-separation technique based on decades of statistics and engineering work (de Cheveigné and Parra, 2014; Parra et al., 2005; Tomé, 2006; Van Veen et al., 1997). Eigendecomposition can be used to decompose a multivariate signal to generate ‘components’ that capture patterns of covariance across recording contacts. We use generalized eigendecomposition as an optimization algorithm to design a spatial filter (a set of weights across all contacts) that maximizes the ratio of the stimulus covariance matrix to the pre-stimulus covariance matrix (also called the baseline or reference matrix). This can be expressed through the Rayleigh quotient, which identifies a vector w that maximizes the ‘multivariate signal-to-noise ratio’ between two covariance matrices:

wmax = argmax_w (wTSw) / (wTRw)

where S is the covariance matrix generated from data collected during the stimulus delivery time window, R is the covariance matrix generated from data collected during the baseline window, and w is an eigenvector (wT is the transpose of w). When w = wmax, the value of the ratio is an eigenvalue, λ. The full solution to this equation is obtained from a generalized eigendecomposition (SW = RWΛ, Figure 2), where W is a matrix with eigenvectors in the columns, and Λ is a diagonal matrix containing the eigenvalues.

The upshot of the generalized eigendecomposition is that the eigenvectors are the directions of covariance patterns that maximally separate the S and R matrices (i.e., contact-by-contact contributions to the component) and the eigenvalues contain the magnitude of the ratio between S and R along direction w. The goal of this maximization function is therefore to find multichannel covariance patterns that are prominent during stimulus delivery but not during baseline. Should the co-activity patterns be similar during the baseline and stimulus windows, the ratio S/R will be close to one (λ ≈ 1); however, large differences in multichannel activity that arise during stimulus delivery will manifest as relatively larger eigenvalues (λ >> 1).
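The generalized eigenproblem S w = λ R w can be solved numerically in several ways; one NumPy-only sketch (the paper's analyses were done in MATLAB) whitens by R before an ordinary symmetric eigendecomposition:

```python
import numpy as np

def ged(S, R):
    """Solve S w = lambda R w for symmetric S and positive-definite R
    by whitening with R^(-1/2). Returns eigenvalues and eigenvectors
    sorted from largest to smallest eigenvalue."""
    d, V = np.linalg.eigh(R)
    R_isqrt = V @ np.diag(1.0 / np.sqrt(d)) @ V.T
    evals, U = np.linalg.eigh(R_isqrt @ S @ R_isqrt)
    W = R_isqrt @ U                      # map back to the original space
    order = np.argsort(evals)[::-1]
    return evals[order], W[:, order]

# Verify on random symmetric matrices: the top eigenpair satisfies
# S w = lambda R w
rng = np.random.default_rng(5)
A, B = rng.standard_normal((2, 16, 64))
R = A @ A.T / 64 + 0.1 * np.eye(16)     # baseline covariance (PD)
S = B @ B.T / 64 + 0.1 * np.eye(16)     # stimulus covariance
evals, W = ged(S, R)
w, lam = W[:, 0], evals[0]
assert np.allclose(S @ w, lam * (R @ w))
```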

Note that there are no anatomical or spatial constraints on the decomposition, nor are there any spatial smoothness or peakedness constraints. This means that any interpretable spatial structure that arises from the components results from the nature of the correlations in the data, and not from biases imposed on the analysis method.

To determine a statistical threshold for λ, we shuffled the labels for the S and R matrices for each trial and performed GED 500 times to create a null distribution of eigenvalues that are associated with activity that was not time-locked to stimulus onset. The observed eigenvalues were compared to this null distribution and components associated with eigenvalues above the 99th percentile of this distribution were considered to be statistically significant (similar in concept to a 1-tailed t-test with an alpha level of 0.01).
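The label-shuffling threshold can be sketched as follows (a Python/NumPy illustration with synthetic, idealized trial covariances; per-trial swapping of the stimulus/baseline labels follows the procedure described above):

```python
import numpy as np

def top_eigenvalue(S, R):
    # Largest generalized eigenvalue of S w = lambda R w (NumPy-only)
    d, V = np.linalg.eigh(R)
    R_isqrt = V @ np.diag(1.0 / np.sqrt(d)) @ V.T
    return np.linalg.eigvalsh(R_isqrt @ S @ R_isqrt)[-1]

def eigenvalue_threshold(stim_covs, base_covs, n_perm=500, pct=99, seed=0):
    """Null distribution of the top eigenvalue: on each permutation the
    stimulus/baseline labels of each trial's covariance matrices are
    randomly swapped before averaging into S and R."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for i in range(n_perm):
        swap = rng.random(len(stim_covs)) < 0.5
        s = np.where(swap[:, None, None], base_covs, stim_covs)
        r = np.where(swap[:, None, None], stim_covs, base_covs)
        null[i] = top_eigenvalue(s.mean(axis=0), r.mean(axis=0))
    return np.percentile(null, pct)

# Toy example: stimulus covariances carry extra variance along one axis
u = np.array([1.0, 0.0, 0.0, 0.0])
base_covs = np.tile(np.eye(4), (20, 1, 1))
stim_covs = base_covs + 3.0 * np.outer(u, u)
observed = top_eigenvalue(stim_covs.mean(axis=0), base_covs.mean(axis=0))
thresh = eigenvalue_threshold(stim_covs, base_covs)
assert observed > thresh    # genuine stimulus structure exceeds the null
```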

GED is one of several multivariate decomposition methods that have been explored in neuroscience (others include principal components analysis, independent components analysis, Tucker decomposition, and non-negative matrix factorization). Different methods have different maximization criteria and thus can produce different results. GED has several advantages, including that it is amenable to inferential statistical thresholding, whereas other decompositions are descriptive and thus selecting components for subsequent interrogation may be subjective or biased. Furthermore, validation studies have shown that GED has higher accuracy for recovering ground-truth simulations compared to PCA or ICA (Cohen, 2017b; Haufe et al., 2014a). Nonetheless, it is possible that different analysis methods can reveal patterns in the data that are not captured by GED.

Component maps and time series

While eigenvectors are difficult to interpret on their own because they both boost signal and suppress noise (i.e., any irrelevant activity), multiplication of an eigenvector by a covariance matrix creates a forward model of the spatial filter that is easier to visually inspect (Haufe et al., 2014b). We refer to these forward models as ‘component maps’ because they convey the relative contribution of each contact to a component signal (i.e., each element of the eigenvector is related to how the LFP on a specific contact contributed to the extracted component). For these data, component maps are generated by multiplying the stimulus covariance matrix, S, with the eigenvector, w, corresponding to the nth component (Swn).

Component time series were created by multiplying the transpose of the eigenvector matrix by the contacts-by-timepoints LFP data matrix. For example, in Figure 2e, we show the product of multiplying the first column of the eigenvector matrix (i.e., the eigenvector, w1, associated with the largest eigenvalue) with the matrix of LFP voltage values (labelled as X). The first dimension of X is dependent on the number of contacts used (e.g., 16) and the second dimension is dependent on the number of timepoints. In this example, the transpose of w1 is a 1-by-16 matrix containing the elements of the eigenvector along the second dimension and X is a 16-by-number-of-timepoints matrix. The product of this matrix multiplication is a 1-by-number-of-timepoints matrix that is the component time-series. These time-series data are the weighted average of the activity across contacts that is captured by the component.
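Both products reduce to two matrix multiplications; a NumPy sketch (the orthonormal stand-in for the eigenvector matrix and the random X are illustrative placeholders):

```python
import numpy as np

# Suppose W holds GED eigenvectors in its columns, X is the
# contacts-by-timepoints LFP matrix (16 x T), and S is the stimulus
# covariance matrix from the same session.
rng = np.random.default_rng(3)
X = rng.standard_normal((16, 5000))
S = X @ X.T / X.shape[1]
W = np.linalg.qr(rng.standard_normal((16, 16)))[0]  # stand-in eigenvectors

w1 = W[:, 0]                 # eigenvector of the largest eigenvalue
component_map = S @ w1       # forward model: per-contact contributions
component_ts = w1 @ X        # w1' X: the 1-by-timepoints time series
assert component_map.shape == (16,) and component_ts.shape == (5000,)
```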

Anatomical grouping versus statistical clustering of contacts

If there is an influence of the spatial location of the contacts on component activity, this should manifest as abrupt shifts in sequential values in the component maps. If these shifts correlate with anatomically defined nuclear boundaries, this would suggest that the cytoarchitectural heterogeneity of the nuclei manifests as a (potentially) functional signal. To assess this possibility, we used the MATLAB function ‘findchangepts’ to identify transitions in the channel weight values. This function finds the points at which sequential values deviate from some statistical parameter by exhaustively grouping values in all possible sequential configurations and determining the residual error given by a test statistic. We set this function to detect an unspecified number of transitions in the mean values of sequential points (i.e., the ‘Statistic’ input set to ‘mean’ with no specified minimum or maximum number of transitions). To prevent overfitting, a proportionality constant of 0.05 was used (‘MinThreshold’ input set to 0.05). The proportionality constant is a fixed penalty for adding subsequent changepoints such that new changepoints that do not reduce the residual error by at least 0.05 are rejected (see the MATLAB documentation for further details on the findchangepts function).
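The original analysis used MATLAB’s findchangepts; a simplified NumPy analogue of penalized mean-changepoint detection (binary segmentation with a fixed per-changepoint penalty; the function name and the 16-element toy map are illustrative, not the paper’s implementation):

```python
import numpy as np

def mean_changepoints(x, min_improve=0.05):
    """Greedy binary segmentation for shifts in the mean: a split is
    kept only if it reduces the residual sum of squares by at least
    `min_improve` (the fixed penalty per added changepoint)."""
    def sse(seg):
        return np.sum((seg - seg.mean()) ** 2) if seg.size else 0.0

    def split(lo, hi, cps):
        seg = x[lo:hi]
        if seg.size < 2:
            return
        base = sse(seg)
        best_gain, best_k = -np.inf, None
        for k in range(lo + 1, hi):
            gain = base - (sse(x[lo:k]) + sse(x[k:hi]))
            if gain > best_gain:
                best_gain, best_k = gain, k
        if best_gain >= min_improve:
            cps.append(best_k)
            split(lo, best_k, cps)
            split(best_k, hi, cps)

    cps = []
    split(0, len(x), cps)
    return sorted(cps)

# Component-map-like vector with an abrupt mean shift at index 8
weights = np.r_[np.full(8, 0.1), np.full(8, 0.9)]
cps = mean_changepoints(weights)   # detects the single transition at 8
```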

To determine if these statistically defined changepoints matched anatomical boundaries estimated via high-resolution MRI, we grouped all contacts on a recording session according to their estimated anatomical location. If all contacts located within a single nucleus were statistically grouped together (i.e., no within-nucleus changepoint was detected), each contact was considered to be matching between the two grouping methods. If contacts from multiple nuclei were grouped together, each additional contact from a non-matching nucleus was excluded from the total matching count. Likewise, when contacts were grouped anatomically but separated in the statistical clustering, only the grouping that captured the most contacts was counted as matching (e.g., if seven contacts were grouped anatomically but this was split by changepoints into one group of 4 contacts and another of 3 contacts, only the 4-contact group was counted as matching). Contacts estimated to be within 200 μm of a nuclear boundary were excluded from this assessment (i.e., not considered matching or non-matching) due to the slight uncertainty in their anatomical grouping. As we were only interested in the spatial information within the amygdala, non-amygdala contacts were also excluded from this analysis.

To determine if the percentage of matching contacts was statistically better than chance, we compared the number of matching contacts obtained from the methods described above to values obtained using cut-and-shift based permutation testing for each significant component. In this method, component maps were cut at a random point along the 16-element vector and the values before this cut point were shifted to the end of the vector. This new map was then compared to the anatomy-based map (i.e., the anatomy maps are kept as is while the component maps were shuffled). We repeated this 1000 times for each component to get a null distribution of ‘matching’ values. We then compared the observed distributions of matching values for each component group based on the relative strength of the components (i.e., 1 to 5) to this null distribution using paired t-tests, Bonferroni corrected for the five comparisons (ɑ=0.05/5 = 0.01). This allowed us to determine whether the statistical mapping of the average 1st to 5th component (in rank-order) matched the anatomical mapping better than expected by chance.
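The cut-and-shift null can be sketched with a circular shift in NumPy; the toy matching score below (sign agreement with a binary label) is a hypothetical stand-in for the changepoint-vs-nucleus matching described above:

```python
import numpy as np

def cut_and_shift_null(component_map, labels, match_fn, n_perm=1000, seed=0):
    """Null distribution of a map-to-anatomy matching score: cut the
    component map at a random point and shift the values before the cut
    to the end (a circular shift); the anatomical labels stay fixed."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for i in range(n_perm):
        cut = rng.integers(1, len(component_map))   # random cut point
        null[i] = match_fn(np.roll(component_map, -cut), labels)
    return null

# Hypothetical toy matching score: fraction of contacts whose weight
# sign agrees with a binary anatomical label
labels = np.r_[np.zeros(8), np.ones(8)]
cmap = np.r_[-np.ones(8), np.ones(8)]
match = lambda m, l: np.mean((m > 0) == l)
observed = match(cmap, labels)                     # perfect match: 1.0
null = cut_and_shift_null(cmap, labels, match)
p = np.mean(null >= observed)                      # permutation p-value
assert p < 0.05    # no circular shift reproduces the perfect match
```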

Time-frequency decompositions

Component time series were created by multiplying the LFP data with an eigenvector. Time-frequency (TF) analysis on the component time series was implemented via convolution with a series of complex Morlet wavelets (logarithmically spaced from 1 to 100 Hz in 80 steps) following standard procedures (Cohen, 2014). To determine whether any sections of the time-frequency power differed significantly from baseline we used a cluster-mass test similar in design to the method described in Maris and Oostenveld, 2007. TF power matrices were generated for a baseline time window from −1.5 s to −1.0 s relative to fixspot onset and a stimulus time window from 0 to +1.0 s relative to stimulus onset. The difference between these two matrices was then taken to find how TF power changed from baseline to stimulus onset. We then repeated this process shuffling both the labels for the baseline and stimulus time windows and the exact start time of the windows (randomly within 500 ms of the actual start time). This generated a set of difference maps that were not time-locked to any specific event during the trials. We repeated this 1000 times and used the mean and standard deviation from these shuffles to z-score normalize the values in the observed difference matrix. Each of the individual difference maps was z-scored using the same parameters as for the observed data. Lastly, clusters of values in these matrices in which the z-scores on adjacent positions were greater than 2.33 (corresponding to ɑ = 0.01) were created. The z-scores within these clusters were then summed to create a ‘cluster-mass value’ for each cluster in the shuffled permutations and the actual data. The cluster-mass values from the difference matrix generated from the observed data were compared to the distribution of maximum cluster-mass values generated by the shuffled permutations (i.e., we compared only the largest cluster in each shuffle to the observed data). Observed cluster-mass values greater than 99% of the values obtained via the shuffles were considered to be statistically significant and are outlined in black in the plots in Figure 4a–f.
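The Morlet wavelet convolution step can be sketched in NumPy (the paper's analyses used MATLAB and 80 log-spaced frequencies; here three frequencies and a synthetic 10 Hz signal are used for illustration, and the normalization is a crude amplitude scaling):

```python
import numpy as np

def morlet_power(signal, srate, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets
    (a minimal sketch of the standard procedure; see Cohen, 2014)."""
    tf = np.empty((len(freqs), len(signal)))
    t = np.arange(-1, 1, 1 / srate)             # wavelet support, +/- 1 s
    for fi, f in enumerate(freqs):
        s = n_cycles / (2 * np.pi * f)          # Gaussian width in seconds
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))
        wavelet /= np.abs(wavelet).sum()        # crude amplitude scaling
        tf[fi] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return tf

# A pure 10 Hz oscillation should show maximal power at the 10 Hz wavelet
srate = 1000
time = np.arange(0, 3, 1 / srate)
sig = np.sin(2 * np.pi * 10 * time)
tf = morlet_power(sig, srate, np.array([4.0, 10.0, 40.0]))
mid_power = tf[:, 1000:2000].mean(axis=1)       # ignore edge artifacts
assert np.argmax(mid_power) == 1
```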

Spectral profiles

Power spectra were created from each of the component time series. Principal components analysis was used to extract the prominent features of these spectral profiles across all components. The signals associated with the four principal components accounting for the most explained variance were extracted and plotted alongside scree plots (Figure 4i–j).
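PCA on a stack of spectral profiles reduces to an SVD on the mean-centered matrix; a NumPy sketch (the 50-by-80 synthetic matrix stands in for the real components-by-frequencies spectra):

```python
import numpy as np

def pca(X, k):
    """PCA via SVD on mean-centered rows (observations x features).
    Returns the top-k component vectors, their scores, and the fraction
    of variance each explains (the quantities shown in a scree plot)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    U, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_explained = svals**2 / np.sum(svals**2)
    return Vt[:k], Xc @ Vt[:k].T, var_explained[:k]

# e.g., 50 component power spectra, 80 frequency bins (synthetic)
rng = np.random.default_rng(1)
spectra = rng.standard_normal((50, 80))
pcs, scores, ve = pca(spectra, 4)
assert pcs.shape == (4, 80) and scores.shape == (50, 4)
```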

Modality-specific analyses

Modality-specific analyses were conducted by grouping all trials from only one particular sensory domain and re-running the generalized eigendecompositions using the same parameters detailed in the Generalized eigendecomposition section above. Likewise, generation of component maps and comparison of the component maps to the MRI-defined nuclear boundaries followed the methods described in the preceding sections.

MRI-based estimations and histological verification of recording sites

In order to verify the accuracy of our recording system, we used a combination of high-resolution MRI and histology. Initially, in vivo 3T MRI scans were performed as described in the main Methods section to guide placement of the V-probe on each recording session. After all data collection from monkey B had finished, we selected a site in the center of the amygdala as the target of an injection of a cell staining dye (Blue Tissue Marking Dye, Triangle Biomedical Sciences, Inc, Durham, NC) to determine the accuracy of estimates of the electrode positions (Figure 3—figure supplement 1a). Lastly, ex vivo 7T MRIs and histology were used to verify injection site location.

Injection procedure

The same methods described for guiding the placement of the electrode arrays during electrophysiological recordings were used to guide the insertion of a 30-gauge cannula into the amygdala of monkey B. A piece of metal tubing (outer diameter 640 µm) was attached near the top of the injection cannula using Gorilla Glue (Maine Wood Concept Inc, Cincinnati, OH) so that the cannula could be placed into the Motorized Electrode Manipulator in the same way as the V-probes. The cannula was thin enough to fit through the same guide tubes used to deliver the V-probes and was lowered at the same rate into the amygdala. A Hamilton syringe, attached to thin rubber tubing, was used to deliver 2 µl of dye over the course of 12 min to ensure ample staining of the target area. We allowed the cannula to sit for 30 min post injection before removing it from the amygdala. Approximately two hours following dye injection, the animal was prepped for euthanasia, perfused with 4% PFA, and decapitated. The head was placed in 4% PFA for two weeks.

7T ex vivo MRI

Fifteen days after euthanasia, the head was scanned in a 7T MRI scanner at 250 µm isotropic resolution. Scan parameters were as follows: 86 mm ID quad volume coil; 3D gradient-echo with 250 µm isotropic resolution and matrix size of 320 × 288 × 256; TR = 100 ms; TE = 6 ms; FA = 30; and NA = 6.

Assessment of accuracy using MRIs and histology

Estimates of the location of the injection site were made independently for each of the three images (3T, 7T, and histology). The images from the 3T, 7T, and histology were then co-registered using major anatomical landmarks. The distances between all injection site estimates were measured and the largest difference in these values was used to ensure the most conservative assessment of the accuracy.

References

  1. Amaral DG, Price JL, Pitkänen A, Carmichael T. (1992) The Amygdala: Neurobiological Aspects of Emotion, Memory, and Mental Dysfunction. New York: Wiley-Liss.
  2. Cohen MX. (2014) Analyzing Neural Time Series Data: Theory and Practice. Cambridge: MIT Press.

Decision letter

  1. Daeyeol Lee
    Reviewing Editor; Johns Hopkins University, United States
  2. Kate M Wassum
    Senior Editor; University of California, Los Angeles, United States
  3. Daeyeol Lee
    Reviewer; Johns Hopkins University, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

Local field potential activity recorded from the primate amygdala in response to visual, somatosensory, and auditory stimuli was analyzed using the generalized eigendecomposition to identify the change points in the component maps. The change points identified with this novel method corresponded to the known anatomical boundaries between the subdivisions of the amygdala.

Decision letter after peer review:

Thank you for submitting your article "Mesoscopic-scale functional networks in the primate amygdala" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Daeyeol as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by a Reviewing Editor and Kate Wassum as the Senior Editor.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

As the editors have judged that your manuscript is of interest, but as described below that additional experiments are required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

The authors analyzed the local field potential activity recorded from the primate amygdala in response to visual, somatosensory, and auditory stimuli. They used a method called the generalized eigendecomposition (GED) to identify the channels in the linear array of electrodes that showed co-variation in their response to sensory stimuli different from the baseline activity, and identified the "change points" in the component maps (i.e., covariance matrix multiplied by an eigenvector). The main finding in this manuscript is that these change points in the component maps roughly and significantly correspond to the known anatomical boundaries between the subdivisions of the amygdala, suggesting that the GED can successfully identify different functional subnetworks within the amygdala. The authors also analyzed the component time series, which is the average LFP weighted by the component, and showed that these component time series can distinguish among different sensory modalities. Although a novel application of the GED can reveal a potentially important principle in the anatomical organization of the amygdala, this was not yet demonstrated convincingly in this study. This may be possible with some additional analyses.
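For readers unfamiliar with the method, the GED pipeline summarized above (stimulus versus baseline covariance matrices, component maps computed as covariance times eigenvector, and component time series as weighted channel averages) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' analysis code; the channel and sample counts are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Hypothetical LFP data: channels x samples for the stimulus window ("S")
# and the pre-stimulus baseline window ("R"); sizes are illustrative only.
n_channels, n_samples = 16, 2000
stim = rng.standard_normal((n_channels, n_samples))
base = rng.standard_normal((n_channels, n_samples))

S = np.cov(stim)  # stimulus-period channel covariance
R = np.cov(base)  # baseline-period channel covariance

# GED: find weights w that maximize the Rayleigh quotient (w'Sw)/(w'Rw),
# i.e. the generalized eigenvectors of the matrix pair (S, R)
evals, evecs = eigh(S, R)
order = np.argsort(evals)[::-1]  # largest eigenvalue first
evals, evecs = evals[order], evecs[:, order]

w = evecs[:, 0]          # weights of the top component
component_map = S @ w    # "component map": covariance matrix times eigenvector
component_ts = w @ stim  # component time series: weighted average of the LFP
```

Sorting by eigenvalue puts the component that best separates stimulus from baseline activity first; a permutation test (as in the paper) would then decide which components are significant.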

Essential revisions:

1) Although the authors have applied an interesting method to identify LFP activity recorded from multiple channels that covaries and carries information about the modality of sensory stimuli, the manuscript does not provide any new insight about the anatomical or functional organization of the amygdala. The way in which GED component weights map onto anatomical boundaries of amygdala nuclei (Figure 3) is impressive, and these results are strengthened by the careful histological and MRI-based reconstruction of recording sites. Nevertheless, this seems merely a validation of the method. Would it be possible to discover a novel principle, for example, by applying GED separately to different sensory modalities? Similarly, if GED is applied to maximally distinguish between the LFP recorded in response to different sensory modalities, will the resulting components differ from those identified in the current study?

More specifically, each session had 8 visual, 8 tactile, and 8 auditory stimuli, and the anatomically-related functional network analysis was done by averaging across the covariance matrices corresponding to all these types of stimuli. How do the functional maps look if the effect of each stimulus type is investigated separately? It would be interesting to see how the component map separates among the amygdala nuclei when different sensory modalities are not averaged in. When we talk about functional networks, the 'function' itself is quite important in understanding the contribution of the network. It will be important to test whether the function-anatomy boundaries are the same or different when the functional separations for the visual, tactile, and auditory stimuli are shown separately. This is largely because different sub-regions might contribute differently depending on the content of the information. For this reason, if the boundaries remain similar, that would strengthen the finding.

2) The authors state that LFPs allow for a novel way of clustering functional networks within the amygdala sub-regions in a manner that cannot be resolved from single-unit activity. However, it would be even more convincing and more direct if the authors could pit these results (i.e., the ability to identify functional networks) directly against single-unit / multi-unit data collected from the same contact sites in this same paper. In particular, it would be useful to show the multi-unit data as a comparison, since the resolution is lower than that of single units but still in the spike domain. Since the authors have both of these data (Morrow, Mosher and Gothard, 2019), it might be powerful if they could directly compare the LFP-based metrics to spike-based metrics to inspect whether the functional separability reported here is only present in (or is better captured by) the LFP. This will be very helpful even though their previous work showed poor or no functional separability of the amygdala nuclei based on single-unit activity.

3) Another weakness in this study is that the animal's behavior was not well controlled during the baseline period. This 1-s baseline period includes the onset of the fixation target and the eye movements to fixate that target. Therefore, any changes in LFP associated with these two events could be mistaken for activity related to stimulus presentation. The fact that the component time series are different for different sensory modalities mitigates this problem to some degree, but the eye movement data should be analyzed more carefully. For example, were there differences between stimulus conditions in the rate at which the animals broke fixation? Such effects could indicate differences in motivation between the conditions, which would complicate an interpretation of the findings in Figure 4 purely in terms of sensory modalities.

4) The mapping of GED component weights to anatomical divisions relies on the detection of change points between consecutive weights. Would the approach be sensitive to detect similarities in weights between non-contiguous subregions of the amygdala? For example, Pitkanen and Amaral, (1998) showed with tracer injections that specific focal areas in the dorsal lateral nucleus label focal areas both in the ventral (non-contiguous) lateral nucleus and in the distant accessory basal nucleus. Such anatomical networks between non-contiguous amygdala areas may well form functional networks, and this could be reflected in co-activity patterns in the present data.

5) It would be helpful to discuss how the present results, and GED results generally, depend on the penetration angle of the probe. For example, there may well be functional networks between lateral and accessory basal nucleus, but these may not be detectable with a vertical penetration that does not sample these nuclei simultaneously. This is not a criticism of the present recordings, but it would be good to discuss what recordings would be required to map amygdala networks comprehensively.

6) When analyzing the time-frequency spectra in the LFP signals corresponding to various stimuli, the authors report that the spectral profiles remain similar even after they have removed the contacts outside the amygdala. However, in some places, there seem to be visible differences and some alterations. For example, the scree plot summary showing the number of significant components in Figure 2H and Figure 2J seem rather different; the component-to-frequency mappings do not seem to be clearly maintained between Figure 4K and 4L; and there are some visible changes in Figure 4—figure supplement 1 as well. It is unclear how to interpret these differences in such cases.

7) The authors may want to acknowledge some human neuroimaging data that have identified, or attempted to identify, sub-region-based amygdala sub-networks. Because such studies also argue for mesoscale organizations of the amygdala, discussing this will increase the impact of the finding – it would be informative for the authors to discuss how the current findings inform such fMRI findings in humans in the Discussion section. Given that fMRI signals are "more like" LFP signals, it might be worthwhile to discuss this, although the human networks identified still do not have the same anatomical resolution as in the current study. Some potentially relevant references include:

Bickart et al., (2012).

Roy et al., (2009).

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Mesoscopic-scale functional networks in the primate amygdala" for further consideration by eLife. Your revised article has been evaluated by Kate Wassum (Senior Editor) and a Reviewing Editor.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

1) With respect to the question of whether the GED approach could uncover a novel functional principle in the amygdala, the new analyses in Figure 5 and the related Results text go in that direction, but this is not very clear yet. The results suggest that GED components from visual trials matched nuclear boundaries better than components from auditory and tactile trials (subsection “Modality-specific GED analyses”). The authors derive the attractive conclusion (Discussion section) that "the mesoscopic intranuclear spatial structure of the multisensory subnetworks is driven predominantly by visual processing", consistent with known visual input patterns to amygdala. This could be a key point of the paper but the data that support this conclusion are not clearly shown and merely stated in subsection “Modality-specific GED analyses”. It would be very helpful if the authors actually quantify which modality is better across all the data. They can also include a panel in Figure 5 to clearly document this effect, and perhaps visualize the similarity to the amygdala visual inputs known from tracing studies. This would help the paper to derive new insight about the amygdala's functional organization beyond validating the GED method.

2) Although the authors have made some clarifications, it seems still possible whether differences between sensory conditions in LFP signals could be partly related to different motor responses to the different cues. For example, visual stimuli could provoke different gaze patterns as the animal explores the stimulus (focusing on the screen) compared to auditory stimuli (the animal might look at the sound source). Similarly, the tactile stimulus might evoke specific muscle activations (e.g. face expressions). It is worth mentioning in the discussion that, in addition to the sensory stimuli themselves, some of the reported neural effects could be influenced by stimulus-related motor responses.

3) The authors argue in the introduction that an LFP approach may be needed to uncover functional networks, and that single-neuron approaches "may not carry sufficient information about the state of the network to accurately describe the neural computations taking place therein". However, the present findings do not fully deliver on this promise or justify such statements. For example, in the reviewers' comments, stimulus discrimination was suggested as a test but this did not show very strong results. This contrasts markedly with the often exquisite discriminations afforded by single neurons (indeed, as documented in the authors' own previous papers). Accordingly, it might be better to tone down the statements about functional networks and limitations of single-neuron approaches in Introduction and Discussion section.

4) Subsection "Modality-specific GED analyses": "Furthermore, the sensory-modality-specific component maps were significantly correlated with the component maps that combined all three sensory modalities (Figure 5D-K)." Should this refer to Figure 5C? There is no Figure 5D-K.

5) Figure 4L: the effect of removing non-amygdala sources on spectral profiles are still not very clear. Please include an interpretation of why the highlighted peaks are associated with different components in panels K and L (4, 7, 70 Hz).

https://doi.org/10.7554/eLife.57341.sa1

Author response

Summary:

The authors analyzed the local field potential activity recorded from the primate amygdala in response to visual, somatosensory, and auditory stimuli. […] This may be possible with some additional analyses.

We thank the reviewers for suggesting additional analyses to elevate the findings of this study above a mere validation of the method (which was indeed one of our goals). We have addressed each criticism and revised the manuscript based on the reviewers' suggestions, as follows:

Essential revisions:

1) Although the authors have applied an interesting method to identify LFP activity recorded from multiple channels that covaries and carries information about the modality of sensory stimuli, the manuscript does not provide any new insight about the anatomical or functional organization of the amygdala. The way in which GED component weights map onto anatomical boundaries of amygdala nuclei (Figure 3) is impressive, and these results are strengthened by the careful histological and MRI-based reconstruction of recording sites. Nevertheless, this seems merely a validation of the method. Would it be possible to discover a novel principle, for example, by applying GED separately to different sensory modalities? Similarly, if GED is applied to maximally distinguish between the LFP recorded in response to different sensory modalities, will the resulting components differ from those identified in the current study?

More specifically, each session had 8 visual, 8 tactile, and 8 auditory stimuli, and the anatomically-related functional network analysis was done by averaging across the covariance matrices corresponding to all these types of stimuli. How do the functional maps look if the effect of each stimulus type is investigated separately? It would be interesting to see how the component map separates among the amygdala nuclei when different sensory modalities are not averaged in. When we talk about functional networks, the 'function' itself is quite important in understanding the contribution of the network. It will be important to test whether the function-anatomy boundaries are the same or different when the functional separations for the visual, tactile, and auditory stimuli are shown separately. This is largely because different sub-regions might contribute differently depending on the content of the information. For this reason, if the boundaries remain similar, that would strengthen the finding.

In our initial analyses, we pooled trials across modalities for three reasons: (1) pooling data maximized the signal-to-noise ratio, which improved the statistical reliability of the matrix decompositions; (2) modality-independent decompositions ensured that our findings were not biased by data selection; (3) our previous single-unit results show that networks of multisensory neurons are distributed throughout the amygdala.

However, the reviewers bring up an excellent point. The GED analysis can be specifically tailored to maximize modality-specific differences. We therefore ran a series of modality-specific analyses. As reported in detail in the revised manuscript, these findings show that many features of our components analysis remained modality-independent, while some features (e.g., component mapping onto nuclear boundaries) showed some modality-specific patterns. These new analyses enhance the impact of our paper, and we appreciate the suggestion. We added a new Figure 5 to illustrate these results as well as expanded the Results section and Discussion section.

2) The authors state that LFPs allow for a novel way of clustering functional networks within the amygdala sub-regions in a manner that cannot be resolved from single-unit activity. However, it would be even more convincing and more direct if the authors could pit these results (i.e., the ability to identify functional networks) directly against single-unit / multi-unit data collected from the same contact sites in this same paper. In particular, it would be useful to show the multi-unit data as a comparison, since the resolution is lower than that of single units but still in the spike domain. Since the authors have both of these data (Morrow, Mosher and Gothard, 2019), it might be powerful if they could directly compare the LFP-based metrics to spike-based metrics to inspect whether the functional separability reported here is only present in (or is better captured by) the LFP. This will be very helpful even though their previous work showed poor or no functional separability of the amygdala nuclei based on single-unit activity.

This is an important point, and we appreciate the reviewers bringing this up. In our 2019 paper, we showed a lack of regional clustering of neurons with unisensory and various combinations of multisensory responses. It was clear that the state of the network was not adequately captured by the sparsely sampled single units along a single V-probe in the amygdala (typically less than 10 simultaneously monitored neurons).

In fact, we have tried some of the analyses suggested by the reviewers (using the same dataset; presented at SFN last year). To determine whether the coupling between single units and LFPs is modality-specific, we computed the spike-field coherence and found that the pairwise phase consistency frequently differed between the three sensory modalities, suggesting that stimuli of different sensory modalities may engage independent (though potentially overlapping) networks of neurons. While pairwise phase consistency was prominent at low (~1-10 Hz) and some higher (~40 Hz) frequencies in the single channel data, high frequency pairwise phase consistency was generally attenuated in GED-based components (Morrow et al., 2019). This by itself suggested that GED-based assessments of spike-field coherence may be more resistant to high frequency artifacts from multiunit and single unit activity.
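As a point of reference, pairwise phase consistency has a closed-form estimator (assuming the Vinck et al., 2010 definition, which we take to be the metric intended here); the sketch below applies it to synthetic spike phases and is not the authors' analysis code.

```python
import numpy as np

def ppc(phases):
    """Pairwise phase consistency: the average cosine of the angular
    distance between all pairs of spike phases. Computed via the identity
    sum over pairs cos(a - b) = (|sum exp(i*a)|^2 - n) / 2, so no explicit
    pair loop is needed. Near 0 for uniform phases, near 1 for tight locking."""
    n = len(phases)
    z = np.exp(1j * np.asarray(phases)).sum()
    return (np.abs(z) ** 2 - n) / (n * (n - 1))

rng = np.random.default_rng(2)
locked = rng.vonmises(mu=0.0, kappa=4.0, size=200)  # spikes phase-locked to the LFP
uniform = rng.uniform(-np.pi, np.pi, size=200)      # no phase locking
```

Unlike the raw resultant-vector length, this estimator is not biased by spike count, which matters when comparing conditions with different numbers of spikes.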

Indeed, Belitski et al., (2010) showed that the extraction of stimulus information from frequency bands higher than 50 Hz required averaging hundreds of milliseconds of data. This is not possible in our case because, unlike in the motor cortex, where neurons show prolonged modulations of firing rate in relation to movement, a large proportion of neurons in the amygdala respond to stimuli with a sharp phasic change of firing rate. This phasic response rarely lasts longer than 150 ms. Typically, significant changes of firing are observed 110-140 ms after stimulus onset and the firing rate returns to baseline 250-300 ms from stimulus onset (Gothard et al., 2007; Mosher et al., 2010; Morrow et al., 2019), leaving only about 100 ms of useful multiunit activity for analysis. After this 100 ms window, a large number of phasic neurons drop out from the neural ensemble activated by each stimulus. For these reasons it is unlikely that the approach suggested by the reviewers would add to our report.

We chose not to include these findings here because (1) we realized while working on the poster that a sufficiently rigorous treatment of this question requires more methodological and analysis work, which would detract from this paper, and (2) we felt that we had an insufficient number of simultaneously recorded neurons to address this question satisfactorily. In fact, we plan to record, in the immediate future, from 2x32 contacts/amygdala (two V-probes with 32 contacts each placed at different anterior-posterior and medio-lateral locations in the amygdala) that will provide a higher yield of single units and also a better spatial resolution for the anatomical topography of the hypothesized subnetworks detailed in the manuscript. The larger number of neurons may compensate for the short duration each neuron is active.

3) Another weakness in this study is that the animal's behavior was not well controlled during the baseline period. This 1-s baseline period includes the onset of fixation target and eye movements to fixate that target. Therefore, any changes in LFP associated with these two events could be mistaken for the activity related to stimulus presentation. The fact that the component time series are different for different sensory modalities mitigate this problem to some degree, but the eye movement data should be analyzed more carefully. For example, were there differences between stimulus conditions in the rate with which the animals broke fixation? Such effects could indicate differences in motivation between the conditions which would complicate an interpretation of the findings in Figure 4 purely in terms of sensory modalities.

The reviewers have caught an important omission in our description of the experiment. When we began analyzing these data we used a baseline that was relative to stimulus onset; however, we changed our methods so that the baseline would be generated relative to the time of fixspot onset for each trial in order to avoid the issues that the reviewers flagged here (but we did not adjust the text to reflect this change). We have updated the text to better explain that the behavioral task was designed to separate activity during a window before the pre-stimulus fixation cue from stimulus-related activity.

Further, we required the monkeys to fixate before the delivery of stimuli, partly to have them initiate a trial (motivation, vigilance) and partly to ensure that they attend to the monitor for the presentation of the visual stimuli. We did not maintain the requirement to fixate on a central point once the stimulus was present. The updated text shows that the baseline window does not include the fixation window (which was very short, 125-150 ms; see Results section, Materials and methods section and Figure 1) and that during the presentation of the stimuli the monkey was no longer required to fixate a central point (Materials and methods section). Therefore, the animals were free to move their eyes during both the baseline (pre-fixation) and stimulus delivery windows. Lastly, we shifted the baseline window forward and backward by 500 ms to further ensure that our results were not dependent on the data in the baseline window. This resulted in very minor changes in the total number of significant components (111 and 119, respectively, both less than 5% change), suggesting that the results we obtained are not attributable to the data used for baseline.

4) The mapping of GED component weights to anatomical divisions relies on the detection of change points between consecutive weights. Would the approach be sensitive to detect similarities in weights between non-contiguous subregions of the amygdala? For example, Pitkanen and Amaral, (1998) showed with tracer injections that specific focal areas in the dorsal lateral nucleus label focal areas both in the ventral (non-contiguous) lateral nucleus and in the distant accessory basal nucleus. Such anatomical networks between non-contiguous amygdala areas may well form functional networks, and this could be reflected in co-activity patterns in the present data.

Yes, the GED method can detect non-continuous networks. In fact, one advantage of GED is that it is completely blind to spatial and anatomical information. It is a purely statistical decomposition that has no spatial/anatomical constraints. This is part of the reason why it’s remarkable that we see a strong convergence with the estimated anatomical nuclear subdivisions — these concordances naturally arise from the data and are not imposed onto the method.

Inspection of Figure 3 shows that several components were driven by trans-nuclear networks (in particular, panels C, E, and F).

We have included additional text in the Discussion section and Materials and methods section to highlight this feature for readers.
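The change-point idea discussed in this exchange, scanning consecutive GED component weights for abrupt jumps that may mark nuclear boundaries, can be sketched as follows. This is a simplified stand-in (z-scored first differences with an arbitrary threshold), not the change-point statistic used in the paper.

```python
import numpy as np

def change_points(weights, z_thresh=2.0):
    """Flag contact indices where consecutive component weights jump sharply.

    Z-scores the absolute first differences of the component map and marks
    large jumps; with a linear probe, such jumps may correspond to anatomical
    (nuclear) boundaries. Returns the index of the first contact after each jump.
    """
    d = np.abs(np.diff(np.asarray(weights)))
    z = (d - d.mean()) / d.std()
    return np.flatnonzero(z > z_thresh) + 1

# Toy 16-contact component map: two flat regimes with a boundary after contact 8
w = np.r_[np.full(8, 0.2), np.full(8, 1.0)]
w = w + 0.01 * np.random.default_rng(1).standard_normal(16)
cps = change_points(w)  # expect a single boundary near contact 8
```

Because the detection operates only on consecutive weights, it would not by itself reveal similarity between non-contiguous regions; that information lives in the full component map, as noted above.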

5) It would be helpful to discuss how the present results, and GED results generally, depend on the penetration angle of the probe. For example, there may well be functional networks between lateral and accessory basal nucleus, but these may not be detectable with a vertical penetration that does not sample these nuclei simultaneously. This is not a criticism of the present recordings, but it would be good to discuss what recordings would be required to map amygdala networks comprehensively.

Again, an excellent point and something we did not emphasize enough in the discussion. Indeed, expanding GED to three dimensions (using multiple linear probes placed at different medio-lateral and rostral-caudal positions) will likely generate a more complete functional map of the primate amygdala, one that could potentially track the processing flow from the lateral to the basal and accessory basal nuclei. These types of experiments are exactly what we hope to execute in the near future. We have expanded the Discussion section to address this point.

6) When analyzing the time-frequency spectra in the LFP signals corresponding to various stimuli, the authors report that the spectral profiles remain similar even after they have removed the contacts outside the amygdala. However, in some places, there seem to be visible differences and some alterations. For example, the scree plot summary showing the number of significant components in Figure 2H and Figure 2J seem rather different; the component-to-frequency mappings do not seem to be clearly maintained between Figure 4K and 4L; and there are some visible changes in Figure 4—figure supplement 1 as well. It is unclear how to interpret these differences in such cases.

Thank you for pointing this out. This was confusingly written in the original manuscript. Figure 2 shows that the prominent spectral peaks are preserved when excluding the components that contained non-amygdala sources. In particular, we highlight the peaks at 4, 7, 38, and 70 Hz. On the other hand, the spectral profiles are clearly not identical, which is to be expected — the spectra in panel K include contributions from the hippocampus and rhinal cortex. We have now clarified this text in the Discussion section.

7) The authors may want to acknowledge some human neuroimaging data that have identified, or attempted to identify, sub-region-based amygdala sub-networks. Because such studies also argue for mesoscale organizations of the amygdala, discussing this will increase the impact of the finding – it would be informative for the authors to discuss how the current findings inform such fMRI findings in humans in the Discussion section. Given that fMRI signals are "more like" LFP signals, it might be worthwhile to discuss this, although the human networks identified still do not have the same anatomical resolution as in the current study. Some potentially relevant references include:

Bickart et al., (2012).

Roy et al., (2009).

These are indeed pioneering studies that already alluded to an organizational scheme that transcends the nuclear boundaries, and thus they belong in the Discussion section.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

1) With respect to the question of whether the GED approach could uncover a novel functional principle in the amygdala, the new analyses in Figure 5 and the related Results text go in that direction, but this is not very clear yet. The results suggest that GED components from visual trials matched nuclear boundaries better than components from auditory and tactile trials (subsection “Modality-specific GED analyses”). The authors derive the attractive conclusion (Discussion section) that "the mesoscopic intranuclear spatial structure of the multisensory subnetworks is driven predominantly by visual processing", consistent with known visual input patterns to amygdala. This could be a key point of the paper but the data that support this conclusion are not clearly shown and merely stated in subsection “Modality-specific GED analyses”. It would be very helpful if the authors actually quantify which modality is better across all the data. They can also include a panel in Figure 5 to clearly document this effect, and perhaps visualize the similarity to the amygdala visual inputs known from tracing studies. This would help the paper to derive new insight about the amygdala's functional organization beyond validating the GED method.

We have quantified and statistically compared the differences between sensory modalities in terms of how well the computed components predict the estimated nuclear boundaries in the amygdala. Instead of an additional panel in Figure 5, we have added a new table that contains more detailed statistical information than what we could provide in a subplot or a graphical representation. This table shows that the first two components calculated for visual stimuli reliably predict the nuclear boundaries (even after conservative Bonferroni corrections). The first component calculated for responses to auditory stimuli approaches significance but was trending in the opposite direction (i.e., showed worse matching than the null distribution generated from randomly shuffled component maps). The component maps calculated for tactile stimuli showed no statistically significant relationship to nuclear boundaries. These results, together with those presented in Figure 4, suggest that the visual inputs carry the lion's share of modality-dependent activity patterns in the amygdala. This outcome confirms the predictions of multiple anatomical tract-tracing studies, namely that the amygdala receives a disproportionately larger volume of anatomical inputs from visual areas. We have described this in the text (subsection "GED components discriminate between sensory modalities").
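The shuffled-map null distribution described here can be sketched as a permutation test. The match score below is a hypothetical stand-in (mean absolute weight jump across putative boundary contacts), not the authors' exact statistic.

```python
import numpy as np

def boundary_match(cmap, boundaries):
    """Hypothetical match score: mean absolute weight jump across the
    contacts flanking each putative nuclear boundary."""
    cmap = np.asarray(cmap)
    return float(np.mean([abs(cmap[b] - cmap[b - 1]) for b in boundaries]))

def permutation_p(cmap, boundaries, n_perm=2000, seed=3):
    """One-sided permutation p-value: the fraction of randomly shuffled
    component maps that match the boundaries at least as well as the
    observed map (the null-distribution logic described in the response)."""
    rng = np.random.default_rng(seed)
    observed = boundary_match(cmap, boundaries)
    null = np.array([boundary_match(rng.permutation(cmap), boundaries)
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# Toy 16-contact component map with one sharp weight change after contact 8
cmap = np.r_[np.full(8, 0.2), np.full(8, 1.0)]
cmap = cmap + 0.01 * np.random.default_rng(4).standard_normal(16)
p = permutation_p(cmap, boundaries=[8])
```

With multiple components and modalities tested, the resulting p-values would then be Bonferroni-corrected, as described above.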

2) Although the authors have made some clarifications, it seems still possible whether differences between sensory conditions in LFP signals could be partly related to different motor responses to the different cues. For example, visual stimuli could provoke different gaze patterns as the animal explores the stimulus (focusing on the screen) compared to auditory stimuli (the animal might look at the sound source). Similarly, the tactile stimulus might evoke specific muscle activations (e.g. face expressions). It is worth mentioning in the Discussion section that, in addition to the sensory stimuli themselves, some of the reported neural effects could be influenced by stimulus-related motor responses.

We agree that without monitoring eye movements, movements of the pinna, or the facial musculature during the presentation of sensory stimuli, we can neither confirm nor deny that stimulus-related motor responses could account for the observed pattern of activity in the amygdala. We have added text to the Discussion section addressing this possibility and supported it with citations from the literature.

3) The authors argue in the Introduction that an LFP approach may be needed to uncover functional networks, and that single-neuron approaches "may not carry sufficient information about the state of the network to accurately describe the neural computations taking place therein". However, the present findings do not fully deliver on this promise or justify such statements. For example, in the reviewers' comments, stimulus discrimination was suggested as a test but this did not show very strong results. This contrasts markedly with the often exquisite discriminations afforded by single neurons (indeed, as documented in the authors' own previous papers). Accordingly, it might be better to tone down the statements about functional networks and limitations of single-neuron approaches in Introduction and Discussion section.

At the request of the reviewers, we have removed this controversial statement from the Introduction and toned down our rhetoric in the Discussion section.

4) Subsection "Modality-specific GED analyses": "Furthermore, the sensory-modality-specific component maps were significantly correlated with the component maps that combined all three sensory modalities (Figure 5D-K)." Should this refer to Figure 5C? There is no Figure 5D-K.

Thank you for catching this error. Yes, the text should have read 5C and has been corrected.

5) Figure 4L: the effect of removing non-amygdala sources on the spectral profiles is still not very clear. Please include an interpretation of why the highlighted peaks are associated with different components in panels K and L (4, 7, 70 Hz).

We agree that this point required a more detailed explanation. We have added a paragraph to subsection “GED components discriminate between sensory modalities” that explains the differences in the spectral profiles for both cases: including or excluding the electrodes estimated to be outside the amygdala. Given the importance of these observations for comparing the spectral profiles in the amygdala to those in the cortex, we have updated Figure 4—figure supplement 1 to make this point clearer.
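The comparison of component spectral profiles with and without the non-amygdala contacts can be sketched as follows. This is a simulated illustration, not the paper's pipeline: the sampling rate, channel counts, the random spatial filter, and all variable names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                    # sampling rate in Hz (assumed)
n_chan, n_samp = 16, 4000

# Simulated multichannel LFP and a GED-style spatial filter
# (random weights here, standing in for a computed eigenvector)
lfp = rng.normal(size=(n_chan, n_samp))
spatial_filter = rng.normal(size=n_chan)

# Pretend the last 4 contacts were estimated to lie outside the amygdala
inside_amygdala = np.ones(n_chan, dtype=bool)
inside_amygdala[-4:] = False

def component_spectrum(data, w):
    """Project the data onto the component and return its power spectrum."""
    ts = w @ data                                  # component time series
    power = np.abs(np.fft.rfft(ts)) ** 2 / len(ts)
    freqs = np.fft.rfftfreq(len(ts), d=1 / fs)
    return freqs, power

# Spectrum using all electrodes vs. amygdala-only electrodes;
# peaks (e.g., near 4, 7, or 70 Hz) can then be compared between the two
freqs, full_spec = component_spectrum(lfp, spatial_filter)
_, amy_spec = component_spectrum(
    lfp[inside_amygdala], spatial_filter[inside_amygdala]
)
```

With real data, spectral peaks that survive the exclusion of extra-amygdalar contacts are more plausibly generated within the amygdala, which is the logic behind the comparison in panels K and L.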

https://doi.org/10.7554/eLife.57341.sa2

Article and author information

Author details

  1. Jeremiah K Morrow

    1. Department of Physiology, University of Arizona, Tucson, United States
    2. Department of Behavioral Neuroscience, Oregon Health and Sciences University, Portland, United States
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-0712-6733
  2. Michael X Cohen

    1. Radboud University Medical Center, Nijmegen, Netherlands
    2. Donders Center for Neuroscience, Nijmegen, Netherlands
    Contribution
    Conceptualization, Resources, Data curation, Software, Formal analysis, Supervision, Validation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-1879-3593
  3. Katalin M Gothard

    Department of Physiology, University of Arizona, Tucson, United States
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    For correspondence
    kgothard@email.arizona.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-9642-2985

Funding

National Institute of Mental Health (P50MH100023)

  • Katalin M Gothard

National Institute of Mental Health (R01MH121009)

  • Katalin M Gothard

European Research Council (StG 638589)

  • Michael X Cohen

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

Supported by P50MH100023 and 1R01MH121009 (KMG).

Ethics

Animal experimentation: All procedures comply with the NIH guidelines for the use of non-human primates in research as outlined in the Guide for the Care and Use of Laboratory Animals and have been approved by the Institutional Animal Care and Use Committee of the University of Arizona (protocol #08‐101).

Senior Editor

  1. Kate M Wassum, University of California, Los Angeles, United States

Reviewing Editor

  1. Daeyeol Lee, Johns Hopkins University, United States

Reviewer

  1. Daeyeol Lee, Johns Hopkins University, United States

Publication history

  1. Received: March 27, 2020
  2. Accepted: August 24, 2020
  3. Accepted Manuscript published: September 2, 2020 (version 1)
  4. Version of Record published: September 14, 2020 (version 2)

Copyright

© 2020, Morrow et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


