The modulation of neural gain facilitates a transition between functional segregation and integration in the brain
Abstract
Cognitive function relies on a dynamic, context-sensitive balance between functional integration and segregation in the brain. Previous work has proposed that this balance is mediated by global fluctuations in neural gain by projections from ascending neuromodulatory nuclei. To test this hypothesis in silico, we studied the effects of neural gain on network dynamics in a model of large-scale neuronal dynamics. We found that increases in neural gain directed the network through an abrupt dynamical transition, leading to an integrated network topology that was maximal in frontoparietal ‘rich club’ regions. This gain-mediated transition was also associated with increased topological complexity, as well as increased variability in time-resolved topological structure, further highlighting the potential computational benefits of the gain-mediated network transition. These results support the hypothesis that neural gain modulation has the computational capacity to mediate the balance between integration and segregation in the brain.
https://doi.org/10.7554/eLife.31130.001

Introduction
The function of complex networks such as the human brain requires a tradeoff between functional specialization and global communication (Deco et al., 2015a; Park and Friston, 2013; Tononi et al., 1994). Contemporary models of brain function suggest that this balance is manifest through dynamically changing patterns of correlated activity, constrained by the brain’s structural backbone (Deco et al., 2013; Honey et al., 2007; Varela et al., 2001). This in turn allows exploration of a repertoire of cortical states that balance the opposing topological properties of segregation (i.e. modular architectures with high functional specialization) and integration (i.e. interconnection between specialist regions [Deco et al., 2015b; Ghosh et al., 2008]).
Recent work has demonstrated that the extent of integration in the brain is important for a range of cognitive functions, including effective task performance (Bassett et al., 2015; Shine et al., 2016a), episodic memory retrieval (Westphal et al., 2017) and conscious awareness (Barttfeld et al., 2015; Godwin et al., 2015). Furthermore, the topological properties of functional brain networks have been shown to fluctuate over time (Chang and Glover, 2010; Hutchison et al., 2013), both within individual neuroimaging sessions (Shine et al., 2016a; Zalesky et al., 2014) and over the course of weeks to months (Shine et al., 2016b). While the extent of integration in the brain may relate to more effective interregional communication, perhaps via synchronous oscillatory activity (Fries, 2015; Lisman and Jensen, 2013; Varela et al., 2001), there are also benefits related to a relatively segregated network architecture, including lower metabolic costs (Bullmore and Sporns, 2012; Zalesky et al., 2014) and effective performance as a function of learning (Bassett et al., 2015). However, despite these insights, the biological mechanisms responsible for driving fluctuations between integration and segregation remain unclear.
A candidate mechanism underlying flexible brain network dynamics is the global alteration in neural gain mediated by ascending neuromodulatory nuclei such as the locus coeruleus (Aston-Jones and Cohen, 2005; Sara, 2009). This small pontine nucleus projects diffusely throughout the brain and releases noradrenaline, a potent modulatory neurotransmitter that alters the precision and responsivity of targeted neurons (Waterhouse et al., 1988). Alterations in this system are known to play a crucial role in cognition, as there is evidence for a nonlinear (inverted-U-shaped) relationship between noradrenaline concentration and cognitive performance (Robbins and Arnsten, 2009; Figure 1a).
Mechanistically, the noradrenergic system has been shown to alter neural gain (Servan-Schreiber et al., 1990; Figure 1b), increasing the signal-to-noise ratio of afferent input onto regions targeted by projections from the locus coeruleus. A crucial question is how these local changes in neural gain influence the configuration of the brain at the network level. Recent work has linked fluctuations in network topology to changes in pupil diameter (Eldar et al., 2013; Shine et al., 2016a; Shine et al., 2018), an indirect measure of locus coeruleus activity (Joshi et al., 2016; Murphy et al., 2014; Reimer et al., 2014, 2016), providing evidence for a link between the noradrenergic system and network-level topology. However, despite these insights, the mechanisms through which alterations in neural gain mediate fluctuations in global network topology are poorly understood.
Biophysical models of large-scale neuronal activity have yielded numerous insights into the dynamics of brain function, both during the resting state as well as in the context of task-driven brain function (Deco et al., 2009; Honey et al., 2007; for review, see Breakspear, 2017). Whereas prior research in this area has examined the influence of local dynamics, coupling strength, structural network topology and stochastic fluctuations on functional network topology (Deco et al., 2015b; Deco and Jirsa, 2012; Deco et al., 2017; Gollo et al., 2015; Woolrich and Stephan, 2013), the direct influence of neural gain has not been studied. Here, we used a combination of biophysical modeling and graph theoretical analyses (Sporns, 2013) to characterize the effect of neural gain on emergent network topology. Based on previous work (Shine et al., 2016a; Shine et al., 2018), we hypothesized that manipulations of neural gain would modulate the extent of integration in time-averaged patterns of functional connectivity.
Results
To test this hypothesis, we implemented a generic two-dimensional neuronal oscillator model (FitzHugh, 1961; Stefanescu and Jirsa, 2011) within the Virtual Brain toolbox (Jirsa et al., 2010; Sanz Leon et al., 2013) to generate regional time series that were constrained by a directed white matter connectome derived from the CoCoMac database (Kötter, 2004; Figure 1d). The simulated neuronal time series were passed through a Balloon-Windkessel model to simulate realistic BOLD data. Graph theoretical analyses were then applied to time-averaged correlations of regional BOLD data to estimate the functional topological signatures of network fluctuations (see Materials and methods for further details).
To simulate the effect of ascending neuromodulatory effects on interregional dynamics, we systematically manipulated neural gain (σ; Figure 1b) and excitability (γ; Figure 1c). These two parameters alter different aspects of a sigmoidal transfer function, which models the nonlinear relationship between presynaptic afferent inputs and local firing rates (Freeman, 1979). When the σ and γ parameters are both low, fluctuations in regional activity arise mainly due to noise and local feedback. As the σ and γ parameters increase, the influence of activity communicated from connected input regions also increases, leading to nonlinear crosstalk and hence, changes in global brain topology and dynamics. Here, we investigated the topological signature of simulated BOLD time series across a parameter space spanned by σ and γ in order to understand the combined effect of neural gain and excitability on global brain network dynamics.
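To make the roles of the two parameters concrete, the sketch below implements a sigmoidal transfer function of the kind described in the Materials and methods, with gain σ steepening the slope around the centre point m = 1.5; the function name and probe values are illustrative, not taken from the study's code:

```python
import numpy as np

def sigmoid_rate(v, sigma, m=1.5):
    """Convert membrane potential v to a firing rate.
    sigma (gain) steepens the slope; m centres the sensitive region."""
    return 1.0 / (1.0 + np.exp(-sigma * (v - m)))

# Increasing gain sharpens the response around m, so inputs just above
# the centre are amplified relative to a low-gain regime:
v = np.linspace(-2.0, 5.0, 8)
low_gain = sigmoid_rate(v, sigma=0.2)
high_gain = sigmoid_rate(v, sigma=1.0)
```

In this picture, γ scales the strength of the summed synaptic input before it is transmitted between regions, whereas σ changes the slope of the sigmoid itself.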
Neural gain and excitability modulate network-level topological integration
We simulated BOLD time series data across a range of σ (0–1) and γ (0–1) and then subjected the time series from our simulation to graph theoretical analyses (Rubinov and Sporns, 2010). This allowed us to estimate the amount of integration in the time-averaged functional connectivity matrix across the parameter space (Figure 2a). Specifically, we used the mean participation coefficient (B_{A}) of the time-averaged connectivity matrix at each combination of σ and γ. High values of mean B_{A} suggest a relative increase in intermodular connectivity, thus promoting the diversity of connections between modules (Bertolero et al., 2017) and increasing the integrative signature of the network (Shine et al., 2016a). The converse situation (i.e., segregation) can thus be indexed by low mean B_{A} scores, or alternatively by the modularity statistic, Q. We observed a complex relationship between σ, γ and B_{A}, such that maximal integration occurred at high levels of σ but with intermediate values of γ. Outside of this zone, the time-averaged connectome was markedly less integrated. Similar patterns were observed for other topological measures of integration, such as the inverse modularity (Q^{−1}) and global efficiency (Figure 2—figure supplement 1).
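For reference, the participation coefficient of node i is B_i = 1 − Σ_s (κ_is / k_i)², where κ_is is the strength of node i's connections into module s and k_i is its total strength. A minimal sketch on positive weights (the toy graph and function are illustrative, not the study's pipeline):

```python
import numpy as np

def participation_coefficient(W, modules):
    """Participation coefficient of each node in a weighted graph W,
    given a module assignment vector (one integer label per node)."""
    k = W.sum(axis=1)                        # total node strength
    B = np.ones(len(W))
    for s in np.unique(modules):
        k_s = W[:, modules == s].sum(axis=1)  # strength into module s
        B -= (k_s / k) ** 2
    return B

# Toy graph: two 2-node modules joined by one weak bridge edge (0-2).
W = np.array([[0.0, 1.0, 0.2, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.2, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
modules = np.array([0, 0, 1, 1])
B = participation_coefficient(W, modules)  # bridge nodes score highest
```

Nodes whose edges all stay within one module score 0; nodes whose strength is spread evenly across modules approach the maximum.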
Neural gain transitions the network across a critical boundary
The relative simplicity of our local neural model allows formal quantification of the interregional phase relationships that characterize the underlying neuronal dynamics. These fast neuronal phase dynamics complement the view given by the slow BOLD amplitude fluctuations and give insight into their fundamental dynamic causes. We employed a phase order parameter that quantifies the extent to which regions within the network align their oscillatory phase – high values on this scale reflect highly ordered synchronous oscillations across the network, whereas low values reflect a relatively asynchronous system (Breakspear et al., 2010; Kuramoto, 1984).
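Concretely, for instantaneous regional phases θ_j(t), the order parameter is ρ(t) = |N⁻¹ Σ_j exp(iθ_j(t))| (Kuramoto, 1984). A minimal sketch, using synthetic phases for illustration (in practice the phases would be extracted from the simulated neuronal time series):

```python
import numpy as np

def order_parameter(phases):
    """Kuramoto order parameter for an (n_regions, n_times) phase array.
    Returns rho(t) in [0, 1]: 1 = fully phase-locked, ~0 = asynchronous."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Identical phases give rho = 1; phases spread uniformly on the circle
# cancel and give rho ~ 0.
t = np.linspace(0.0, 10.0, 500)
sync = np.tile(2 * np.pi * t, (8, 1))
spread = sync + np.linspace(0, 2 * np.pi, 8, endpoint=False)[:, None]
rho_sync = order_parameter(sync)
rho_spread = order_parameter(spread)
```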
Across the parameter space, we observed two clear states (Figure 2b): one associated with high (ρ ≥ 0.5; yellow) and one with low (ρ < 0.5; blue) mean synchrony, with a clear critical boundary demarcating the two states (dotted white line in Figure 2a/b) that was associated with a relative increase in the standard deviation of the order parameter (Figure 2—figure supplement 2a). This strong demarcation between states is a known signature of critical behavior (Chialvo, 2010), which can occur at both the regional and network level. We observed evidence for both regional and network criticality in our simulation, whereby small changes in parameters (here, σ and γ) facilitated an abrupt transition between qualitatively distinct states. At the regional level, this pattern is observed as a transition from input-driven fluctuations about a stable equilibrium to self-sustained oscillations (Figure 2—figure supplement 3). At the network level, the combined influence of increased gain and structural connections manifests as a transition to high-amplitude, interregional phase synchrony (Figure 2—figure supplement 2b).
To further disambiguate the system-level dynamics, we studied the probability distribution of the fluctuations in the order parameter. Close to the boundary, we observed a truncated Pareto (i.e., power-law) scaling regime, spanning up to two orders of magnitude (Figure 2—figure supplement 2b). This pattern is consistent with a critical bifurcation within a complex system consisting of many components (see Cocchi et al., 2017 and Heitmann and Breakspear, 2017 for further discussion). After crossing the boundary, this relationship develops a ‘knee’ above the power-law scaling (Figure 2—figure supplement 2b), consistent with the emergence of a characteristic temporal scale in a supercritical system (Roberts et al., 2015). These observations suggest that the system undergoes a bifurcation across a critical boundary as the synchronization manifold loses stability.
A host of contemporary neuroscientific theories hypothesize that temporal phase synchrony underlies effective communication between neural regions (Fries, 2015; Lisman and Jensen, 2013; Varela et al., 2001), which would otherwise remain isolated if not brought into temporal lockstep with one another. As such, we might expect that the changes in neural gain that integrate the brain do so through the modulation of interregional phase synchrony. Our results were consistent with this hypothesis. By aligning changes in the topological signature of the network to the critical point delineating the two states, we were able to demonstrate a significant increase in integration (mean B_{A}; T_{798} = 2.57; p=0.01) and decrease in segregation (Q; T_{798} = −17.44; p<0.001) of network-level BOLD fluctuations in the highly phase-synchronous state. Specifically, global integration demonstrated a sharp increase in the zone associated with high-amplitude synchronous oscillations, particularly for intermediate values of γ (Figure 2c). In contrast, the transitions associated with manipulating γ (particularly at high values of σ) led to an inverted-U-shaped relationship: the network was relatively segregated at high and low levels of γ, but integrated at intermediate values of γ, albeit with a monotonic relationship when increasing σ for low levels of γ (Figure 2d). In addition, increases in between-hemisphere connectivity were more pronounced than within-hemisphere connectivity in the ordered state (within: 0.010 ± 0.017; between: 0.014 ± 0.013; T_{2,848} = 7.104; p=10^{−12}; see Figure 2—figure supplement 4). Together, these results suggest that neural gain and excitability act together to traverse a transition in network dynamics, maximizing interregional phase synchrony and integrating the functional connectome.
Neural gain increases topological complexity and temporal variability
Having identified a relationship between neural gain and network architecture, we next investigated the putative topological benefit of this tradeoff. A measure that characterizes the topological balance between integration and segregation is communicability (Estrada and Hatano, 2008), which quantifies the number of short paths that can be traversed between two regions of a network (Mišić et al., 2015). In networks with high communicability, individual regions are able to interact with a large proportion of the network through relatively short paths, which in turn may facilitate effective communication between otherwise segregated regions. In contrast to the relationship observed between neural gain and network integration, communicability was maximal at the critical boundaries between synchronous and asynchronous behavior (Figure 3a–c). Thus, the topological signature of the network was most effectively balanced between integration and segregation as the system transitioned between disorder and order through the modulation of interregional synchrony by subtle changes in neural gain.
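Communicability can be computed as a matrix exponential of the (normalized) adjacency matrix, which sums walks of all lengths while down-weighting a length-k walk by 1/k!. The sketch below uses the strength-normalized weighted variant of Crofts and Higham; whether the study used this exact normalization is an assumption:

```python
import numpy as np
from scipy.linalg import expm

def communicability(A):
    """Weighted communicability: G = expm(D^{-1/2} A D^{-1/2}), where D is
    the diagonal matrix of node strengths. G[i, j] aggregates all walks
    between i and j, with shorter walks weighted more heavily."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return expm(D_inv_sqrt @ A @ D_inv_sqrt)

# Toy 3-node path graph: node 0 bridges nodes 1 and 2, so even the
# unconnected pair (1, 2) has non-zero communicability via node 0.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
G = communicability(A)
```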
Another important signature of complex systems is their flexibility over time. In previous work, we showed that the ‘resting state’ is characterized by significant fluctuations in network topology, in which the brain traverses between states that maximize either integration or segregation (Shine et al., 2016a). This variability was diminished during a cognitively challenging task, and the extent of integration was positively associated with improved task performance (Shine et al., 2016a). To determine whether these alterations in topological variability may have been related to changes in neural gain, we estimated the time-resolved mean participation coefficient (B_{T}) of the simulated BOLD time series and then determined whether the variability of this measure over time changed as a function of σ and γ. We found that the variability of time-resolved integration within each trial was maximized across the critical boundary, as the network switched between disordered and ordered phase synchrony (Figure 3d–f). These results support the hypothesis that changes in neural gain may control the temporal variability of network topology as a function of behavioral state.
Gain-mediated integration is maximal in frontoparietal hub regions
To determine whether the influence of neural gain on network dynamics was related to the underlying structural connectivity of the brain, we estimated the ‘rich club’ architecture of the structural connectome (Figure 4a). Compared to low-degree nodes, rich-club regions demonstrated an increase in ‘realized’ mean gain adjacent to the critical boundary (Figure 4b). In short, this means that activity within frontoparietal ‘hub’ regions (red in Figure 4a) was more strongly affected by the interaction between neural gain and network topology than in non-hub regions (blue/green in Figure 4a). Indeed, this result demonstrates that the ‘realized’ gain of individual regions is not simply related to the applied gain (i.e. input from the ascending noradrenergic system; Aston-Jones and Cohen, 2005), but also depends nonlinearly on afferent activity from topologically connected regions (Figure 4c/d). The observed effect was particularly evident for intermediate values of γ, suggesting that the hub regions were differentially impacted by neural gain at the critical boundary between the asynchronous and synchronous states. Interestingly, similar dissociations were observed when comparing regions with high and low diversity (Figure 4—figure supplement 1), suggesting a role for future experiments to disambiguate the importance of degree and diversity in the mediation of global network topology (Bertolero et al., 2017). However, given the substantial overlap between regions in the ‘rich’ and ‘diverse’ clubs (73% of regions were found in both groups), our results confirm a crucial role for frontoparietal regions in the control of network-level integration as a function of ascending neuromodulatory gain.
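As an illustration of the rich-club construct, the unnormalized rich-club coefficient φ(k) is simply the edge density among nodes of degree greater than k. The sketch below is a simplified binary, undirected version; the study used the directed CoCoMac connectome, and in practice φ(k) is typically normalized against degree-preserving random networks before a rich club is declared:

```python
import numpy as np

def rich_club_coefficient(A, k):
    """Binary undirected rich-club coefficient phi(k): density of the
    subgraph spanned by nodes whose degree exceeds k."""
    degree = A.sum(axis=1)
    rich = degree > k
    n = int(rich.sum())
    if n < 2:
        return np.nan
    e = A[np.ix_(rich, rich)].sum() / 2.0    # edges among rich nodes
    return 2.0 * e / (n * (n - 1))

# Toy graph: hubs 0-2 form a triangle and each also links to nodes 3, 4,
# so the k > 3 subgraph (the hubs) is fully connected.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2),
             (3, 0), (3, 1), (3, 2),
             (4, 0), (4, 1), (4, 2)]:
    A[i, j] = A[j, i] = 1.0
phi = rich_club_coefficient(A, k=3)
```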
Discussion
We used a combination of computational modeling and graph theoretical analyses, quantifying the relationship between ascending neuromodulation and network-level integration in order to test a direct prediction from a previous neuroimaging study (Shine et al., 2016a). We found that increasing neural gain transitioned network dynamics across a bifurcation from disordered to ordered phase synchrony (Figure 2b), with a shift from a segregated to an integrated neural architecture (Figure 2e and Figure 2—figure supplement 1). The critical boundary between these two states was associated with maximal communicability and temporal topological variability (Figure 3). Finally, the effect of neural gain was felt most prominently in high-degree frontoparietal network hubs (Figure 4 and Figure 4—figure supplement 2). Together, these results confirm our prior hypothesis and complement an emerging view of the brain that highlights a mechanistic bridge between ascending arousal systems and cognition (Shine et al., 2016a), providing a potential mechanistic explanation for the longstanding notion that noradrenergic activity demonstrates an inverted-U-shaped relationship with cognitive performance (Robbins and Arnsten, 2009; Figure 1a).
The major result from our study is that network-level fluctuations between segregation and integration in functional (BOLD) networks reflect an underlying transition in the synchrony of faster neuronal oscillations, thus providing a previously unknown link between temporal scales in the brain (Figure 2b). At low levels of γ and σ, the governing equations are strongly stable (damped), so that all excursions from equilibrium must be driven by local noise – that is, regions are relatively insensitive to incoming inputs (Figure 1b/c). As γ and σ increase, local activity approaches an instability, and consequently incoming activity is able to substantially influence activity in target regions. This causes changes in the emergent whole-brain dynamics evident at both the short time scale of brain oscillations and the long time scale of BOLD correlation. A stark transition occurs at a critical point in the parameter space (denoted by the boundary between blue and yellow in Figure 2b), whereby small increases in σ lead to substantial alterations in the phase relationships between regions. Specifically, the network abruptly shifts from stable equilibrium to high-amplitude synchronized oscillation, facilitating an increase in effective communication between otherwise topologically distant regions (Fries, 2005; Varela et al., 2001). This same transition point is associated with a peak in informational complexity (Figure 3), further suggesting the importance of criticality in maximizing the information processing capacity of global network topology. Notably, the transition is also accompanied by a peak in topological variability over time: hence, a dynamic instability amongst fast neuronal oscillations yields increased network fluctuations at very slow time scales, again highlighting the crucial role of criticality in multiscale neural phenomena (Cocchi et al., 2017).
The effect of neural gain on topology was greatest in a bilateral network of high-degree frontoparietal cortical regions (Figure 4). This suggests that the recruitment of these hub regions at intermediate levels of excitability and neural gain shifts collective network dynamics across a bifurcation, increasing effective interactions between otherwise segregated regions. This result underlines the effective influence of the structural ‘rich club’ (Figure 4), which in addition to providing topological support to the structural connectome (van den Heuvel and Sporns, 2013), may also facilitate the transition between distinct topological states. This relationship has been demonstrated previously in other studies, either by manipulating the excitability parameter alone (Deco et al., 2017; Zamora-López et al., 2016), or through the alteration of the intrinsic dynamics of the two-dimensional oscillator model (Curto et al., 2009; Safaai et al., 2015), thus providing a strong conceptual link between structural topology and emergent dynamics. Crucially, the integrated states facilitated by gain-mediated hub recruitment have been shown to underlie effective cognitive performance (Shine et al., 2016a), episodic memory retrieval (Westphal et al., 2017) and conscious awareness (Barttfeld et al., 2015; Godwin et al., 2015), confirming the importance of ascending neuromodulatory systems for a suite of higher-level behavioral capacities.
Overall, our findings broadly support the predictions of the neural gain hypothesis of noradrenergic function (Aston-Jones and Cohen, 2005). For instance, manipulating neural gain, a plausible instantiation of the effects of ascending noradrenergic tone in the brain (Servan-Schreiber et al., 1990), led to marked alterations in network topology. Given the demonstrated links between network topology and cognitive function (Cohen and D'Esposito, 2016; Hearne et al., 2017; Shine et al., 2016a; Shine and Poldrack, 2017), our work thus provides a plausible mechanistic account of the longstanding notion of a nonlinear relationship between catecholamine levels and effective cognitive performance (Robbins and Arnsten, 2009; Shine et al., 2016a; Figure 1a). However, it bears mention that our model highlighted a relationship between neural gain, excitability and network topology, in which there was an inverted-U-shaped relationship between excitability and integration that was related to two separate bifurcations (Figure 2—figure supplement 2). In contrast, the effect of neural gain on topology was demonstrably more linear, particularly at intermediate levels of γ (Figure 2). Importantly, although noradrenaline has been directly linked to alterations in gain (Servan-Schreiber et al., 1990), there is also reason to believe that noradrenergic tone should have a demonstrable effect on excitability (Curto et al., 2009; Safaai et al., 2015; Stringer et al., 2016). Combined with our observation of the importance of the interaction between neural gain and high-degree (Figure 4) and diverse (Figure 4—figure supplement 1) hub regions, our results thus represent an extension of the neural gain hypothesis that integrates the ascending arousal system with the constraints imposed by multiple order parameters and structural network topology.
In addition, our results also align with previous hypotheses that highlighted the importance of α2-adrenoreceptor-mediated hub recruitment with increasing concentrations of noradrenaline, particularly in the frontal cortex (Robbins and Arnsten, 2009; Sara, 2009). However, our findings are inconsistent with the hypothesis that neural gain mediates an increase in tightly clustered patterns of neural interactions (Eldar et al., 2013). In contrast to this prediction, our simulations showed that measures that reflect an increase in local clustering, such as modularity and the mean clustering coefficient (Figure 4—figure supplement 2), did not increase as a function of neural gain in the same manner as other measures, such as the mean participation coefficient. Therefore, our results suggest that an increase in functional integration (and hence, a concomitant decrease in local clustering) is a more effective indicator of the topological influence of increasing neural gain. However, it bears mention that the hypothesized relationship between clustering and neural gain was presented in the context of a focused learning paradigm (Eldar et al., 2013), whereas our data were not modeled in an explicit behavioral context. As such, future studies are required to disambiguate the relative relationship between neural gain and network topology as a function of task performance.
Prior computational studies have demonstrated a link between the structural and functional connectome, with the broad repertoire of functional network dynamics bounded by structural constraints imposed by the white-matter backbone of the brain (Deco and Jirsa, 2012; Honey et al., 2007, 2009). While the targeted role of gain modulation on local neuronal dynamics has been studied (Freeman, 1979), the impact of gain on functional network organization has not been pursued. Here, we have demonstrated a putative mechanism by which a known biological system (namely, the ascending noradrenergic system) can mediate structural-functional changes, essentially by navigating the functional connectome across a topological landscape characterized by alterations in oscillatory synchrony. However, the direct relationship between neural gain manipulation and the ascending noradrenergic system is likely to represent an oversimplification. Indeed, given the complexity and hierarchical organization of the brain, it is almost certain that other functional systems, such as the thalamus (Hwang et al., 2017) and fast-spiking interneurons (Stringer et al., 2016), play significant roles in mediating neural gain and hence, the balance between integration and segregation. Further studies are required to interrogate these mechanisms more directly.
A somewhat surprising result of our simulation is the link between phase- and amplitude-related measures of neuronal coupling. It has been known for some time that the BOLD signal is insensitive to the relative phase of underlying neural dynamics (Foster et al., 2016), relating more closely to changes in the local oscillator frequency and fluctuations in the relative amplitude of neural firing. Indeed, each of the model parameters used in our experiment (i.e., gain and coupling) exerts a complex influence on both the oscillator frequencies (and hence, the BOLD activity) and the global synchrony (and hence, the BOLD correlations). Moreover, in coupled oscillator systems such as this, the order parameter acts as a ‘mean field’ that feeds back and influences local dynamics (see e.g. Breakspear et al., 2010). Based on this knowledge, we can infer that estimates of connectivity using BOLD time series relate to covariance in amplitude fluctuations among pairs of regions, rather than alterations in phase synchrony. This clarification is important for modern theories of functional neuroscience, as synchronous relationships between regions in the phase domain have been used to explain effective communication between neural regions (Fries, 2015; Lisman and Jensen, 2013; Siegel et al., 2009), in which the precise timing between spiking populations determines the efficacy of information processing. Our results suggest a surprisingly robust link between these two measures, such that an integrated network with increased intermodular amplitude correlation coincides with a peak in ordered phase synchrony between regions. In our model, the peak of network variability occurs at the critical transition between disordered and ordered phases, where the local dynamic states show the most variability and where fast stochastic perturbations are most able to influence slow amplitude fluctuations.
However, while our model provides evidence linking neural gain to functional integration, advanced models that display a broader variety of nonlinear dynamics (Breakspear, 2017) are required to test these hypotheses more directly.
Together, our results suggest that the balance between integration and segregation relates to alterations in neural gain that exist within a ‘zone’ of maximal communicability and temporal variability. Our findings thus highlight important constraints on contemporary models of brain function, while also providing crucial implications for understanding effective brain function during task performance or as a function of neurodegenerative or psychiatric disease.
Materials and methods
Dynamical network modeling
The Virtual Brain software (Sanz Leon et al., 2013) was used to simulate neural activity across a lattice of parameter points in which we manipulated interregional coupling using both a gain parameter and an excitability parameter. Specifically, we used a generic two-dimensional oscillator model (Equations 1 and 2) to create time series data that represent neural activity via two variables (the membrane potential and a slow recovery variable). This model is based upon a modal approximation (Stefanescu and Jirsa, 2008) of a population of FitzHugh-Nagumo neurons (Izhikevich and FitzHugh, 2006). The neuronal dynamics are given by,
where V_{i} represents the local mean membrane potential and W_{i} represents the corresponding slow recovery variable at node i. Stochastic fluctuations are introduced additively through the white noise processes ${\eta}_{i}$ and ${\xi}_{i}$, drawn independently from Gaussian distributions with zero mean and unit variance. The synaptic current I_{i} arises from time-delayed input from other regions, modulated in strength by the global excitability parameter γ. This input arises after the mean membrane potential V in distant nodes is converted into a firing rate via a sigmoid-shaped activation function S, and then transmitted with axonal time delays through the connectivity matrix. Hence the synaptic current at node i is given by,
where A_{ij} is the directed connectivity matrix derived from the 76-region CoCoMac connectome (Kötter, 2004), and ${\tau}_{ij}$ is the corresponding time delay computed from the length of fiber tracts estimated by diffusion spectrum imaging (Sanz Leon et al., 2013). The conversion from regional membrane potential to firing rate is given by a sigmoid-shaped activation function,
where $\sigma $ is the (global) gain parameter and the sigmoid activation function is shifted to center at m. These equations were integrated using a stochastic Heun method (Rümelin, 1982).
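The stochastic Heun scheme is a predictor-corrector method: an Euler predictor step is followed by an average of the drift evaluated at the start point and at the prediction, with the same noise increment reused in both stages. A minimal sketch for a two-variable system follows; the drift shown is the classic FitzHugh-Nagumo form with textbook coefficients, used purely for illustration (the paper's exact polynomial, coupling and delay terms are not reproduced here):

```python
import numpy as np

def heun_step(v, w, drift, dt, noise_std, rng):
    """One stochastic Heun step for dv = f_v dt + dW, dw = f_w dt + dW'.
    The additive noise increments are reused in predictor and corrector."""
    eta = rng.normal(0.0, noise_std, size=v.shape) * np.sqrt(dt)
    xi = rng.normal(0.0, noise_std, size=w.shape) * np.sqrt(dt)
    fv, fw = drift(v, w)
    v_pred = v + fv * dt + eta                 # Euler predictor
    w_pred = w + fw * dt + xi
    fv2, fw2 = drift(v_pred, w_pred)
    return (v + 0.5 * (fv + fv2) * dt + eta,   # trapezoidal corrector
            w + 0.5 * (fw + fw2) * dt + xi)

def fhn_drift(v, w, I=0.5):
    """Illustrative FitzHugh-Nagumo drift (not the study's coefficients)."""
    return v - v**3 / 3.0 - w + I, 0.08 * (v + 0.7 - 0.8 * w)

rng = np.random.default_rng(0)
v, w = np.zeros(76), np.zeros(76)   # one state pair per CoCoMac region
for _ in range(1000):
    v, w = heun_step(v, w, fhn_drift, dt=0.1, noise_std=0.01, rng=rng)
```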
The simulated neuronal data were fed through a Balloon-Windkessel model to simulate realistic Blood Oxygen Level Dependent (BOLD) signals (Friston et al., 2000). The simulated BOLD time series were band-pass filtered (0.01–0.1 Hz) and the Pearson’s correlation was then computed (and normalized using Fisher’s r-to-Z transformation).
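A minimal sketch of this post-processing stage is given below; the Butterworth filter order and the repetition time are illustrative assumptions, and the Balloon-Windkessel stage itself is not reproduced:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bold_fc(bold, tr=2.0, band=(0.01, 0.1)):
    """Band-pass filter a (regions x time) BOLD array and return the
    Fisher r-to-Z transformed correlation matrix (diagonal zeroed)."""
    nyq = 0.5 / tr
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, bold, axis=1)    # zero-phase filtering
    r = np.corrcoef(filtered)
    np.fill_diagonal(r, 0.0)                   # ignore self-connections
    return np.arctanh(r)                       # Fisher r-to-Z

rng = np.random.default_rng(1)
bold = rng.standard_normal((76, 500))          # placeholder BOLD data
Z = bold_fc(bold)
```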
We manipulated the interregional neural gain parameter σ and the regional excitability γ through a range of values (0–1), after aligning the sensitive region of the sigmoid function with its mean input (m = 1.5). Consistent with the effects of the relatively diffuse projections from the locus coeruleus to cortex, all regions were given the same values of the σ and γ parameters for each trial. All code is freely available at https://github.com/macshine/gain_topology (Shine, 2018). A copy is archived at https://github.com/elifesciences-publications/gain_topology.
Integration and segregation
The Louvain modularity algorithm from the Brain Connectivity Toolbox (Rubinov and Sporns, 2010) was used to estimate time-averaged community structure. The Louvain algorithm iteratively maximizes the modularity statistic, Q, for different community assignments until the maximum possible score of Q has been obtained (Equation 5). The modularity estimate for a given network therefore quantifies the extent to which the network may be subdivided into communities with stronger within-module than between-module connections. Here, we used the Q parameter to estimate the extent of segregation within each graph,
where v is the total weight of the network (sum of all negative and positive connection weights), w_{ij} is the weighted and signed connection between regions i and j, e_{ij} is the strength of a connection divided by the total weight of the network, and δ_{M_iM_j} is set to 1 when regions are in the same community and 0 otherwise. '+' and '−' superscripts denote all positive and negative connections, respectively. Consistent with previous work (Eldar et al., 2013), the mean clustering coefficient, which reflects the proportion of closed 'triangles' in the binarized graph, was also used as a measure of segregation (Rubinov and Sporns, 2010).
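The modularity statistic for a given partition can be sketched as below. For brevity this handles only positive weights; the signed variant described in the text evaluates the positive and negative terms separately and combines them:

```python
import numpy as np

def modularity_q(W, labels):
    """Modularity Q of a weighted (positive) adjacency matrix W for a
    given community assignment `labels`: the within-module weight minus
    that expected from node strengths alone, normalized by total weight."""
    v = W.sum()                                  # total weight of the network
    k = W.sum(axis=1)                            # node strengths
    same = labels[:, None] == labels[None, :]    # delta_{M_i M_j}
    return ((W - np.outer(k, k) / v) * same).sum() / v

# Two clear modules: the correct partition scores higher than a mixed one
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = W[2, 3] = W[3, 2] = 1.0
q_good = modularity_q(W, np.array([0, 0, 1, 1]))
q_bad = modularity_q(W, np.array([0, 1, 0, 1]))
```

The Louvain algorithm searches over assignments to maximize this quantity; here only the evaluation of Q for a fixed partition is shown.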
For each level of neural gain, the community assignment for each region was assessed 100 times and a consensus partition was identified using a fine-tuning algorithm from the Brain Connectivity Toolbox (http://www.brainconnectivitytoolbox.net/). All graph-theoretical measures were calculated on weighted and signed connectivity matrices (Rubinov and Sporns, 2010); weak, yet consistent, connections were retained using a consistency-thresholding technique that identifies edges with minimal variance across multiple iterations (Roberts et al., 2017). In order to assess global, large-scale communities, the resolution parameter was set to 1.0 (higher values tune the algorithm to detect smaller communities, which reflect local, rather than global, clustering). This value was chosen by identifying the resolution that maximized the Surprise (Aldecoa and Marín, 2013) between the community structure of the network at each level of gain and resolution and a random network defined using a cumulative hypergeometric distribution (see [Aldecoa and Marín, 2013]).
The participation coefficient, B_{A} (Equation 6), quantifies the extent to which a region connects across all modules (i.e. its between-module strength). As such, the mean participation coefficient can be used to estimate the extent of integration within a graph. The participation coefficient, B_{Ai}, for a given region i is,
where κ_{is} is the strength of the positive connections between region i and the regions in module s, and κ_{i} is the sum of the strengths of all positive connections of region i. The participation coefficient of a region is therefore close to 1 if its connections are uniformly distributed among all modules, and 0 if all of its links fall within its own module. Finally, the global efficiency (mean inverse characteristic path length) and the inverse modularity (Q^{−1}) were estimated for each element of the parameter space as adjunct measures of integration.
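The participation coefficient of Equation 6, B_{Ai} = 1 − Σ_s (κ_{is}/κ_{i})², can be sketched as follows (positive weights only, as in the text; nodes are assumed to have at least one positive connection):

```python
import numpy as np

def participation_coefficient(W, labels):
    """Participation coefficient B_A for each node:
    1 - sum_s (kappa_is / kappa_i)^2, on positive weights only."""
    Wp = np.clip(W, 0, None)        # keep positive connections
    k = Wp.sum(axis=1)              # total positive strength per node
    frac_sq = np.zeros_like(k)
    for s in np.unique(labels):
        frac_sq += (Wp[:, labels == s].sum(axis=1) / k) ** 2
    return 1.0 - frac_sq

# Node 0 bridges both modules (B = 0.5); node 1 connects only within
# its own module (B = 0)
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
B = participation_coefficient(W, np.array([0, 0, 1, 1]))
```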
Phase synchrony order parameter
To estimate the degree of phase synchrony at different points in the parameter space, we extracted the raw signal (V_{i}) from each region in the simulation and subtracted the least-squares linear trend from each channel. We then computed the phase of the analytic signal for each channel using the Hilbert transform and estimated the phase synchrony order parameter across all channels, ρ, which is given by,

$\rho(t) = \left|\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}(t)}\right|$
where i = $\sqrt{-1}$ and θ_{j} represents the oscillation phase of the j^{th} region. Large values of ρ denote phase alignment between regions (Breakspear et al., 2010; Kuramoto, 1984). The value of ρ for each parameter combination was subsequently averaged over time and across sessions. By designating each parameter combination as resulting in either a synchronized (ρ ≥ 0.5) or unsynchronized (ρ < 0.5) regime, we were able to determine whether the network topology changes estimated from BOLD data as a function of neural gain and excitability coincided with changes in underlying phase synchrony. Specifically, we separately grouped topological variables and within- and between-hemisphere connectivity according to their underlying ρ value and then computed an independent-samples t-test between the two groups. The standard deviation of the order parameter, ρ, was also calculated and averaged across sessions. Finally, the dwell times of regional fluctuations were estimated for a number of characteristic parameter choices and analyzed for evidence of Pareto (i.e. power-law) scaling.
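The detrend → Hilbert phase → order parameter pipeline can be sketched as below. The function name and the toy signals are illustrative; only the sequence of operations follows the text:

```python
import numpy as np
from scipy.signal import detrend, hilbert

def order_parameter(V):
    """Time-resolved Kuramoto order parameter rho(t) from raw signals V
    (regions x time): remove the least-squares linear trend, take the
    Hilbert phase, then |mean_j exp(i * theta_j(t))|."""
    theta = np.angle(hilbert(detrend(V, axis=1), axis=1))
    return np.abs(np.exp(1j * theta).mean(axis=0))

t = np.linspace(0, 10, 500)
# Identical phases across 5 channels -> rho ~ 1
synced = np.vstack([np.sin(2 * np.pi * t)] * 5)
# Phases spread evenly around the circle -> rho ~ 0
phases = 2 * np.pi * np.arange(5) / 5
desynced = np.vstack([np.sin(2 * np.pi * t + p) for p in phases])
rho_sync = order_parameter(synced).mean()
rho_desync = order_parameter(desynced).mean()
```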
Communicability
The communicability, C, between a pair of nodes i and j is defined as a weighted sum over all walks connecting the pair of nodes (within the weighted connectivity matrix, A); for a binarized graph, it is equivalent to the matrix exponential, e^{A} (Estrada and Hatano, 2008). For ease of interpretation, we calculated the log_{10}-transformed mean communicability of each graph across iterations and values of neural gain.
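For a binarized graph this reduces to a single matrix exponential, since walks of length n between i and j are counted by (A^n)_{ij} and weighted by 1/n!. A minimal sketch on a 3-node path graph:

```python
import numpy as np
from scipy.linalg import expm

# 3-node path graph: 0 - 1 - 2 (no direct 0-2 edge)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

C = expm(A)                      # communicability matrix e^A
mean_comm = np.log10(C.mean())   # summary used in the text
```

Even without a direct edge, nodes 0 and 2 have nonzero communicability through walks via node 1, though less than the directly connected pair.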
Topological variability
To estimate time-resolved functional connectivity between the 76 nodes, we used a recently described statistical technique (Multiplication of Temporal Derivatives; Shine et al., 2015; http://github.com/macshine/coupling), which is computed by taking the point-wise product of the temporal derivatives of each pair of regional time series (Equation 7). To reduce the contamination of the time-resolved connectivity data by high-frequency noise, M_{ij} was averaged over a temporal window (w = 15 time points). Individual functional connectivity matrices were calculated within each temporal window, thus generating an unthresholded (signed and weighted) 3D adjacency matrix (region $\times $ region $\times $ time) for each simulation. These matrices were then subjected to time-resolved topological analyses, which allowed us to estimate the participation coefficient of each region over time (B_{T}). We used the mean regional standard deviation of this measure to estimate time-resolved topological variability in the simulated data.
$M_{ijt} = \frac{1}{w}\sum_{t}^{t+w}\frac{dt_{it}\times dt_{jt}}{\sigma_{dt_{i}}\,\sigma_{dt_{j}}}$

For each time point, t, M_{ij} is defined according to Equation 7, where dt is the first temporal derivative of the i^{th} or j^{th} time series at time t, σ is the standard deviation of the temporal derivative time series for region i or j, and w is the window length of the simple moving average. This equation can then be calculated over the course of a time series to obtain an estimate of the time-resolved connectivity between pairs of regions.
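A minimal sketch of the MTD computation, assuming a time × regions input array; the authors' released implementation at http://github.com/macshine/coupling is the reference version:

```python
import numpy as np

def mtd(ts, w=15):
    """Multiplication of Temporal Derivatives: point-wise products of
    normalized first derivatives for every pair of regions, smoothed
    with a simple moving average of length w. `ts` is time x regions."""
    dt = np.diff(ts, axis=0)
    dt = dt / dt.std(axis=0)                # divide by sigma of each derivative
    M = dt[:, :, None] * dt[:, None, :]     # (T-1) x N x N point-wise products
    kernel = np.ones(w) / w                 # simple moving average
    smooth = lambda x: np.convolve(x, kernel, mode="valid")
    return np.apply_along_axis(smooth, 0, M)

# Two coupled regions plus one independent region
rng = np.random.default_rng(1)
shared = rng.standard_normal(200)
ts = np.column_stack([shared + 0.1 * rng.standard_normal(200),
                      shared + 0.1 * rng.standard_normal(200),
                      rng.standard_normal(200)])
M = mtd(ts)   # time-resolved connectivity, (windows) x 3 x 3
```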
Structural rich club
To test whether the changes associated with neural gain were mediated by highly interconnected, high-degree hubs, we identified a set of 'rich club' regions using the structural white-matter connectome from the CoCoMac database (Kötter, 2004). Briefly, the degree of each node i in the network was determined by calculating the number of links that node i shared with the k other nodes in the network. All nodes with ≤ k connections were removed from the network. For the remaining network, the rich-club coefficient (Φ_{k}) was computed as the ratio of the connections present between the remaining nodes to the total number of connections that would be present if the set were fully connected. We then normalized Φ_{k} relative to a set of random networks with similar density and connectivity distributions, yielding Φ_{Z}. When Φ_{Z} is greater than 1, the network can be said to display a 'rich club' architecture. Individual regions that are interconnected at the value of k at which the network demonstrates a 'rich club' architecture are thus designated as 'rich club' nodes (n = 22). Nodes outside this group that still share a connection with it are labeled 'feeder' nodes (n = 44), and regions disconnected from the rich club are designated 'local' nodes (n = 10). The results were projected onto a standard surface representation of the macaque cortex (Figure 4). After segmenting the network in this fashion, we were able to estimate the realized mean gain and B_{A} across the parameter space for regions according to their structural topology.
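The raw rich-club coefficient Φ_{k} can be sketched as below for a binary undirected graph; the normalization against degree-matched random networks described in the text is omitted from this sketch:

```python
import numpy as np

def rich_club_coefficient(A, k):
    """Phi_k for a binary undirected adjacency matrix A: the density of
    the subgraph restricted to nodes with degree > k (nodes with <= k
    connections removed), i.e. present edges / possible edges."""
    deg = A.sum(axis=0)
    rich = deg > k
    n = rich.sum()
    if n < 2:
        return np.nan                    # no subgraph to evaluate
    sub = A[np.ix_(rich, rich)]
    return sub.sum() / (n * (n - 1))     # counts both edge directions

# Two interconnected hubs plus four periphery nodes
A = np.zeros((6, 6))
A[0, 1] = A[1, 0] = 1                    # hub-hub link
for p in range(2, 6):
    A[0, p] = A[p, 0] = 1                # hub 0 - all periphery
    if p < 4:
        A[1, p] = A[p, 1] = 1            # hub 1 - some periphery
phi = rich_club_coefficient(A, k=2)      # hubs survive; they are fully linked
```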
Realized neural gain
While the neural gain parameter σ controls the maximum gain of each region in the simulation by setting the maximum slope of the sigmoid, the realized gain (the mean ratio of sigmoid output to input) of each brain region depends upon the distribution of its input, and is greater when the input is concentrated near the center of the sigmoid. We estimated the regional variation in effective, or 'realized', neural gain by calculating the integral of the instantaneous sigmoid slope over its complete input range, weighted by the probability of each input level. We then compared these values as a function of nodal class (rich-club vs. other nodes) at each point of the parameter space.
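The slope-weighted integral is equivalent to averaging the instantaneous sigmoid slope over the empirical distribution of a region's input, which the following sketch approximates with a sample mean. The logistic sigmoid form and its parameters are assumptions carried over from the earlier sketch, not the paper's exact implementation:

```python
import numpy as np

def realized_gain(inputs, sigma, m=1.5):
    """Effective ('realized') gain of a region: the instantaneous slope
    of the sigmoid, sigma * s * (1 - s), averaged over the empirical
    distribution of that region's input samples."""
    s = 1.0 / (1.0 + np.exp(-sigma * (inputs - m)))
    return (sigma * s * (1.0 - s)).mean()

rng = np.random.default_rng(2)
centred = rng.normal(1.5, 0.1, 10_000)   # input concentrated near the centre m
offset = rng.normal(3.0, 0.1, 10_000)    # input far from the sensitive region
g_centred = realized_gain(centred, sigma=1.0)
g_offset = realized_gain(offset, sigma=1.0)
```

Inputs concentrated near m sit on the steepest part of the curve and so realize nearly the full gain (σ/4 for this logistic form), while off-centre inputs realize much less — the mechanism by which rich-club regions can realize higher effective gain than peripheral regions receiving less well-centred input.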
Reliability
We ran a number of subsequent tests to ensure that any observed changes in network topology were robust to the processing steps used in the analysis. First, we re-analyzed the data across a range of network thresholds (1–20%) and observed robust results (i.e. r > 0.75) for Q, mean B_{A}, mean communicability and the standard deviation of B_{T} on graphs estimated within the 9–20% threshold range. Second, as the number of modules estimated from graphs can change as a function of network topology, we re-examined the topological characteristics of networks matched for the number of modules (N = 4) and found no significant differences from the topological signatures estimated on the whole group.
References
An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annual Review of Neuroscience 28:403–450. https://doi.org/10.1146/annurev.neuro.28.061604.135709
Learning-induced autonomy of sensorimotor systems. Nature Neuroscience 18:744–751. https://doi.org/10.1038/nn.3993
Generative models of cortical oscillations: neurobiological implications of the Kuramoto model. Frontiers in Human Neuroscience 4:190. https://doi.org/10.3389/fnhum.2010.00190
Dynamic models of large-scale brain activity. Nature Neuroscience 20:340–352. https://doi.org/10.1038/nn.4497
The economy of brain network organization. Nature Reviews Neuroscience 13:336–349. https://doi.org/10.1038/nrn3214
Criticality in the brain: a synthesis of neurobiology, models and cognition. Progress in Neurobiology 158:132–152. https://doi.org/10.1016/j.pneurobio.2017.07.002
The segregation and integration of distinct brain networks and their relationship to cognition. Journal of Neuroscience 36:12083–12094. https://doi.org/10.1523/JNEUROSCI.2965-15.2016
Resting brains never rest: computational insights into potential cognitive architectures. Trends in Neurosciences 36:268–274. https://doi.org/10.1016/j.tins.2013.03.001
Ongoing cortical activity at rest: criticality, multistability, and ghost attractors. Journal of Neuroscience 32:3366–3375. https://doi.org/10.1523/JNEUROSCI.2523-11.2012
Rethinking segregation and integration: contributions of whole-brain modelling. Nature Reviews Neuroscience 16:430–439. https://doi.org/10.1038/nrn3963
The effects of neural gain on attention and learning. Nature Neuroscience 16:1146–1153. https://doi.org/10.1038/nn.3428
Communicability in complex networks. Physical Review E 77:036111. https://doi.org/10.1103/PhysRevE.77.036111
Spontaneous neural dynamics and multiscale network organization. Frontiers in Systems Neuroscience 10:7. https://doi.org/10.3389/fnsys.2016.00007
Nonlinear gain mediating cortical stimulus-response relations. Biological Cybernetics 33:237–247. https://doi.org/10.1007/BF00337412
A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences 9:474–480. https://doi.org/10.1016/j.tics.2005.08.011
Noise during rest enables the exploration of the brain's dynamic repertoire. PLoS Computational Biology 4:e1000196. https://doi.org/10.1371/journal.pcbi.1000196
Dwelling quietly in the rich club: brain network determinants of slow cortical fluctuations. Philosophical Transactions of the Royal Society B: Biological Sciences 370:20140165. https://doi.org/10.1098/rstb.2014.0165
Reconfiguration of brain network architectures between resting-state and complexity-dependent cognitive reasoning. The Journal of Neuroscience 37. https://doi.org/10.1523/JNEUROSCI.0485-17.2017
Handbook for the Brain Dynamics Toolbox: Version 2017c. QIMR Berghofer Medical Research Institute. ISBN 9781549720703.
The human thalamus is an integrative hub for functional brain networks. The Journal of Neuroscience 37:5594–5607. https://doi.org/10.1523/JNEUROSCI.0067-17.2017
Towards the virtual brain: network modeling of the intact and the damaged brain. Archives Italiennes de Biologie 148:189–205.
Chemical Oscillations, Waves, and Turbulence. https://doi.org/10.1007/978-3-642-69689-3
Pupil diameter covaries with BOLD activity in human locus coeruleus. Human Brain Mapping 35:4140–4154. https://doi.org/10.1002/hbm.22466
The neuropsychopharmacology of fronto-executive function: monoaminergic modulation. Annual Review of Neuroscience 32:267–287. https://doi.org/10.1146/annurev.neuro.051508.135535
The heavy tail of the human brain. Current Opinion in Neurobiology 31:164–172. https://doi.org/10.1016/j.conb.2014.10.014
Numerical treatment of stochastic differential equations. SIAM Journal on Numerical Analysis 19:604–613. https://doi.org/10.1137/0719041
The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics 7:10. https://doi.org/10.3389/fninf.2013.00010
The locus coeruleus and noradrenergic modulation of cognition. Nature Reviews Neuroscience 10:211–223. https://doi.org/10.1038/nrn2573
An anatomical substrate for integration among functional networks in human cortex. Journal of Neuroscience 33:14489–14500. https://doi.org/10.1523/JNEUROSCI.2128-13.2013
The brainweb: phase synchronization and large-scale integration. Nature Reviews Neuroscience 2:229–239. https://doi.org/10.1038/35067550
Episodic memory retrieval benefits from a less modular brain network organization. The Journal of Neuroscience 37:3523–3531. https://doi.org/10.1523/JNEUROSCI.2509-16.2017
Article and author information
Author details
Funding
National Health and Medical Research Council (GNT1072403)
 James M Shine
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank the creators of the Virtual Brain for their opensource software, Peter Bell for helpful comments, Bratislav Misic for sharing code and Joke Durnez for statistical advice.
Version history
 Received: August 9, 2017
 Accepted: January 26, 2018
 Accepted Manuscript published: January 29, 2018 (version 1)
 Version of Record published: February 19, 2018 (version 2)
Copyright
© 2018, Shine et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.