A diversity of localized timescales in network activity
Abstract
Neurons show diverse timescales, so that different parts of a network respond with disparate temporal dynamics. Such diversity is observed both when comparing timescales across brain areas and among cells within local populations; the underlying circuit mechanism remains unknown. We examine conditions under which spatially local connectivity can produce such diverse temporal behavior.
In a linear network, timescales are segregated if the eigenvectors of the connectivity matrix are localized to different parts of the network. We develop a framework to predict the shapes of localized eigenvectors. Notably, local connectivity alone is insufficient for separate timescales. However, localization of timescales can be realized by heterogeneity in the connectivity profile, and we demonstrate two classes of network architecture that allow such localization. Our results suggest a framework to relate structural heterogeneity to functional diversity and, beyond neural dynamics, are generally applicable to the relationship between structure and dynamics in biological networks.
https://doi.org/10.7554/eLife.01239.001
eLife digest
Many biological systems can be thought of as networks in which a large number of elements, called ‘nodes’, are connected to each other. The brain, for example, is a network of interconnected neurons, and the changing activity patterns of this network underlie our experience of the world around us. Within the brain, different parts can process information at different speeds: sensory areas of the brain respond rapidly to the current environment, while the cognitive areas of the brain, involved in complex thought processes, are able to gather information over longer periods of time. However, it has been largely unknown what properties of a network allow different regions to process information over different timescales, and how variations in structural properties translate into differences in the timescales over which parts of a network can operate.
Now Chaudhuri et al. have addressed these issues using a simple but ubiquitous class of networks called linear networks. The activity of a linear network can be broken down into simpler patterns called eigenvectors that can be combined to predict the responses of the whole network. If these eigenvectors ‘map’ to different parts of the network, this could explain how distinct regions process information on different timescales.
Chaudhuri et al. developed a mathematical theory to predict what properties would cause such eigenvectors to be separated from each other and applied it to networks with architectures that resemble the wiring of the brain. This revealed that gradients in the connectivity across the network, such that nodes share more properties with neighboring nodes than distant nodes, combined with random differences in the strength of internode connections, are general motifs that give rise to such separated activity patterns. Intriguingly, such gradients and randomness are both common features of biological systems.
https://doi.org/10.7554/eLife.01239.002
Introduction
A major challenge in the study of neural circuits, and complex networks more generally, is understanding the relationship between network structure and patterns of activity or possible functions this structure can subserve (Strogatz, 2001; Newman, 2003; Honey et al., 2010; Sporns, 2011). A number of neural networks show a diversity of time constants, namely different nodes (single neurons or local neural groups) in the network display dynamical activity that changes on different timescales. For instance, in the mammalian brain, long integrative timescales of neurons in the frontal cortex (Romo et al., 1999; Wang, 2001; Wang, 2010) are in striking contrast with rapid transient responses of neurons in a primary sensory area (Benucci et al., 2009). Furthermore, even within a local circuit, a diversity of timescales may coexist across a heterogeneous neural population. Notable recent examples include the timescales of reward integration in the macaque cortex (Bernacchia et al., 2011), and the decay of neural firing rates in the zebrafish (Miri et al., 2011) and macaque oculomotor integrators (Joshua et al., 2013). While several models have been proposed, general structural principles that enable a network to show a diversity of timescales are lacking.
Studies of the cortex have revealed that neural connectivity decays rapidly with distance (Holmgren et al., 2003; Markov et al., 2011; Perin et al., 2011; Levy and Reyes, 2012; Markov et al., 2014; Ercsey-Ravasz et al., 2013) as does the magnitude of correlations in neural activity (Constantinidis and Goldman-Rakic, 2002; Smith and Kohn, 2008; Komiyama et al., 2010). This characteristic is apparent on multiple scales: in the cerebral cortex of the macaque monkey, both the number of connections between neurons in a given area and those between neurons across different brain areas decay rapidly with distance (Markov et al., 2011, 2014). Intuitively, local connectivity may suggest that the timescales of network activity are localized, by which we mean that nodes that respond with a certain timescale are contained within a particular region of the network. Such a network would show patterns of activity with different temporal dynamics in disparate regions. Surprisingly, this is not always true and, as we show, additional conditions are required for localized structure to translate into localized temporal dynamics.
We study this structure–function relationship for linear networks of interacting nodes. Linear networks are used to model a variety of physical and biological networks, especially those where internode interactions are weighted (Newman, 2010). Most dynamical systems can be linearized around a point of interest, and so linear networks generically emerge when studying the response of nonlinear networks to small perturbations (Strogatz, 1994; Newman, 2010). Moreover, for many neurons the dependence of firing rate on input is approximately threshold-linear over a wide range (Ahmed et al., 1998; Ermentrout, 1998; Wang, 1998; Chance et al., 2002), and linear networks are common models for the dynamics of neural circuits (Dayan and Abbott, 2001; Shriki et al., 2003; Vogels et al., 2005; Rajan and Abbott, 2006; Ganguli et al., 2008; Ganguli et al., 2008; Murphy and Miller, 2009; Miri et al., 2011).
The activity of a linear network is determined by a set of characteristic patterns, called eigenvectors (Rugh, 1995). Each eigenvector specifies the relative activation of the various nodes. For example, in one eigenvector the first node could show twice as much activity as the second node and four times as much activity as the third node, and so on. The activity of the network is the weighted sum of contributions from the eigenvectors. The weight (or amplitude) of each eigenvector changes over time with a timescale determined by the eigenvalue corresponding to the eigenvector. The network architecture determines the eigenvectors and eigenvalues, while the input sets the amplitudes with which the various eigenvectors are activated. In Figure 1, we illustrate this decomposition in a simple schematic network with three eigenvectors whose amplitudes change on a fast, intermediate and slow timescale respectively.
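As a concrete illustration of this decomposition (a minimal numerical sketch with an arbitrary three-node matrix, not the network of Figure 1), the activity can be reconstructed by weighting each eigenvector by an amplitude that decays with its own time constant:

```python
# Minimal sketch: decompose the linear dynamics d(phi)/dt = W*phi into
# eigenvector contributions, each decaying with time constant 1/|Re(lambda)|.
# The 3-node matrix and all values here are illustrative, not from the paper.
import numpy as np

W = np.array([[-1.0,  0.2,  0.0],
              [ 0.3, -0.5,  0.1],
              [ 0.0,  0.2, -0.1]])         # toy stable connection matrix

eigvals, eigvecs = np.linalg.eig(W)         # columns of eigvecs are the eigenvectors
phi0 = np.array([1.0, 1.0, 1.0])            # initial activity of the three nodes

a = np.linalg.solve(eigvecs, phi0)          # amplitude of each eigenvector in phi0

t = 2.0
# Network activity = weighted sum of eigenvectors, each with its own decay
phi_t = (eigvecs * (a * np.exp(eigvals * t))).sum(axis=1).real
print("time constants:", 1.0 / np.abs(eigvals.real))
print("activity at t = 2:", phi_t)
```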
In general, the eigenvectors are poorly segregated from each other: each node participates significantly in multiple eigenvectors and each eigenvector is spread out across multiple nodes (Trefethen and Embree, 2005). Consequently, timescales are not segregated, and a large number of timescales are shared across nodes. Furthermore, if the timescales have widely different values, certain eigenvectors are more persistent than others and dominate the nodes at which they are present. If these slow timescales are spread across multiple nodes, they dominate the network activity and the nodes will show very similar temporal dynamics. This further limits the diversity of network computation.
In this paper, we begin by observing that rapidly-decaying connectivity by itself is insufficient to give rise to localized eigenvectors. We then examine conditions on the network-coupling matrix that allow localized eigenvectors to emerge and build a framework to calculate their shapes. We illustrate our methods with simple examples of neural dynamics. Our examples are drawn from neuroscience, but our results should be more broadly applicable for understanding network dynamics and the relationship between the structure and function of complex systems.
Results
We study linear neural networks endowed with a connection matrix W (j,k) (‘Methods’, Equation 9), which denotes the weight of connection from node k to node j. For a network with N nodes, the matrix W has N eigenvectors and N corresponding eigenvalues. The time constant associated with the eigenvector v_{λ} is $1/|\mathrm{Re}(\lambda)|$, where λ is the corresponding eigenvalue (‘Methods’, Equation 11). This time constant is present at all nodes where the eigenvector has nonzero magnitude. We say an eigenvector is delocalized if its components are significantly different from 0 for most nodes. In this case, the corresponding timescale is spread across the entire network. On the other hand, if an eigenvector is localized then v_{λ} (j) ≈ 0 except for a restricted subset of spatially contiguous nodes, and the timescale $1/|\mathrm{Re}(\lambda)|$ is confined to a region of the network. If most or all of the eigenvectors are localized, then different nodes show separated timescales in their dynamical response to external stimulation.
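One common way to quantify this distinction numerically (a diagnostic of our own choosing, not one defined in the paper) is the participation ratio, which estimates the effective number of nodes over which an eigenvector is spread:

```python
# Sketch of a localization diagnostic: the participation ratio is ~N for a
# delocalized eigenvector and ~m if only about m nodes participate.
import numpy as np

def participation_ratio(v):
    """Effective number of nodes over which eigenvector v is spread."""
    p = np.abs(v) ** 2
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

N = 100
v_delocalized = np.ones(N) / np.sqrt(N)                         # flat across the network
v_localized = np.exp(-0.5 * ((np.arange(N) - 50) / 2.0) ** 2)   # Gaussian bump
print(participation_ratio(v_delocalized))   # ~100 nodes
print(participation_ratio(v_localized))     # ~a handful of nodes
```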
Note that even if the eigenvectors are localized, a large proportion of network nodes could respond to a given input, but they would do so with disparate temporal dynamics. Conversely, even if the eigenvectors are delocalized, a given input could still drive some nodes much more strongly than others. However, the temporal dynamics of the response will be very similar at the various nodes even if the magnitudes are different.
Consider a network with nodes arranged in a ring, as shown in the top panel of Figure 2A. The connection strength between nodes decays with distance according to
$W(j,k) \propto e^{-d(j,k)/l_c},$
where $d(j,k)$ is the distance between nodes j and k along the ring and l_{c} is set to be 1 node so that the connectivity is sharply localized spatially. In Figure 2B we plot the absolute values and real parts of three sample eigenvectors. The behavior is typical of all eigenvectors: despite the local connectivity they are maximally delocalized and each node contributes with the same relative weight to each eigenvector (its absolute value is constant, while its real and imaginary parts oscillate across the network). As shown in Figure 2C, the timescales of decay are very similar across nodes.
As known from the theory of discrete Fourier transforms, such delocalized eigenvectors are generically seen if the connectivity is translationally invariant, meaning that the connectivity profile is the same around each node (see mathematical appendix [Supplementary file 1], Section 1 or standard references on linear algebra or solid-state physics [Ashcroft and Mermin, 1976]). In this case the jth component of the eigenvector v_{λ} is, up to normalization,
$v_{\lambda}(j) = e^{i\omega j},$
where ω/2π is the oscillation frequency (which depends on λ) and i is the imaginary unit (i^{2} = −1). Thus local connectivity is insufficient to produce localized eigenvectors.
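The ring example can be checked directly: for any translation-invariant coupling, a Fourier mode is an eigenvector with constant magnitude at every node. The sketch below uses illustrative parameter values (the leak on the diagonal is assumed, not taken from the paper):

```python
# Sketch: on a ring with translation-invariant, exponentially decaying coupling,
# the Fourier mode v(j) = exp(i*omega*j) is an eigenvector, so |v(j)| is the
# same at every node -- the corresponding timescale is fully delocalized.
import numpy as np

N, l_c = 100, 1.0
j = np.arange(N)
p = (j[:, None] - j[None, :]) % N                 # displacement (j - k) mod N
ring_dist = np.minimum(p, N - p)                  # distance along the ring
W = np.exp(-ring_dist / l_c) - 3.0 * np.eye(N)    # local coupling plus a leak (assumed value)

omega = 2 * np.pi * 3 / N                         # any allowed Fourier frequency
v = np.exp(1j * omega * j)                        # |v(j)| = 1 for every node
lam = (W[:, 0] * np.exp(-1j * omega * j)).sum()   # predicted eigenvalue: sum_p c(p) exp(-i*omega*p)
print(np.allclose(W @ v, lam * v))                # True: a delocalized eigenvector
```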
We developed a theoretical approach that enables us to determine whether a given network architecture yields localized eigenvectors. Although in general it is not possible to analytically calculate all timescales (eigenvalues) of a generic matrix, the theory allows us to predict which timescales would be localized and which would be shared. For the localized timescales, it yields a functional form for the shape of the corresponding localized eigenvectors. Finally, the theory shows how changing network parameters promotes or hinders localization. For a further discussion of these issues, see Section 2 of the mathematical appendix (Supplementary file 1).
For a given local connectivity, W (j,k), we postulate the existence of an eigenvector v_{λ} that is well localized around some position, j_{0}, defined as its center. We then solve for the detailed shape (functional form) of our putative eigenvector and test whether this shape is consistent with our prior assumption on v_{λ}. If so, this is a valid solution for a localized eigenvector.
Specifically, if v_{λ} is localized around j_{0} then v_{λ} (k) is small when $|k - j_0|$ is large. We combine this with the requirement of local connectivity, which implies that W (j,k) is small when $|j - k|$ is large, and expand W and v_{λ} to first-order in $|k - j_0|$ and $|j - k|$ respectively. With this approximation, we solve for v_{λ} across all nodes and find (‘Methods’ and mathematical appendix [Supplementary file 1], Section 2)
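In schematic form, with the overall normalization and the precise width convention deferred to the appendix (Supplementary file 1), the resulting localized eigenvector can be written as
$v_{\lambda}(j) \;\approx\; C\, e^{i\omega j}\, e^{-(j - j_0)^2/\alpha^2},$
where C is a constant.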
The eigenvector is a modulated Gaussian function, centered at j_{0}. The characteristic width is α, such that a small α corresponds to a sharply localized eigenvector. Note that j_{0} and ω depend on the particular timescale (or eigenvalue, λ) being considered and hence, in general, α^{2} will depend on the timescale under consideration. For v_{λ} to be localized, the real part of α^{2} must be positive when evaluated at the corresponding timescale. In this case, v_{λ} is consistent with our prior assumption, and we accept it as a meaningful solution.
Our theory gives the dependence of the eigenvector width on network parameters and on the corresponding timescale. In particular, α depends inversely on the degree of local heterogeneity in the network, so that greater heterogeneity leads to more tightly localized eigenvectors (see appendix [Supplementary file 1], Section 2). ω is a frequency term that allows v_{λ} to oscillate across nodes, as in Equation 1. As shown later, the method is general and a second-order expansion can be used when the first-order expansion breaks down. In that case the eigenvector shape is no longer Gaussian.
We now apply this theory to models of neural dynamics in the mammalian cerebral cortex. We use connectivity that decays exponentially with distance (Markov et al., 2011, 2014; Ercsey-Ravasz et al., 2013) but our analysis applies to other forms of local connectivity.
Localization in a network with a gradient of local connectivity
Our first model architecture is motivated by observations that as one progresses from sensory to prefrontal areas in the primate brain, neurons receive an increasing number of excitatory connections from their neighbors (Wang, 2001; Elston, 2007; Wang, 2008). We model a chain of nodes (i.e., neurons, networks of neurons or cortical areas) with connectivity that decays exponentially with distance. In addition, we introduce a gradient of excitatory self-couplings along the chain to account for the increase in local excitation.
The network is shown in Figure 3A and the coupling matrix W is given by
The self-coupling includes a leakage term (μ_{0} < 0) and a recurrent excitation term that increases along the chain with a slope Δ_{r}. Nodes higher in the network thus have stronger self-coupling. Connection strengths have a decay length l_{c}. μ_{f} scales the overall strength of feedforward connections (i.e., connections from early to late nodes in the chain) while μ_{b} scales the strength of feedback connections. In general we set μ_{f} > μ_{b}.
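A numerical sketch of this architecture is given below; the functional form and all parameter values are plausible assumptions rather than the paper's Equation 3, but they reproduce the qualitative ingredients (exponential decay with distance, feedforward stronger than feedback, and self-coupling that grows along the chain):

```python
# Sketch of a chain with exponentially decaying coupling, asymmetric
# feedforward/feedback strengths, and a gradient of self-coupling.
# All parameter values are assumed for illustration.
import numpy as np

N, l_c = 100, 2.0
mu_0, delta_r = -1.0, 0.005          # leak and gradient of recurrent excitation
mu_f, mu_b = 0.2, 0.05               # feedforward stronger than feedback

j = np.arange(N)
d = j[:, None] - j[None, :]          # d > 0: connection from an earlier node (feedforward)
W = np.where(d > 0,
             mu_f * np.exp(-np.abs(d) / l_c),
             mu_b * np.exp(-np.abs(d) / l_c))
np.fill_diagonal(W, mu_0 + delta_r * j)       # self-coupling increases along the chain

eigvals, eigvecs = np.linalg.eig(W)
# With a steep enough gradient, the slowest eigenvectors should concentrate
# near the top of the chain, where the local time constant is longest.
slowest = eigvecs[:, np.argmax(eigvals.real)]
print("peak of the slowest eigenvector:", int(np.argmax(np.abs(slowest))))
print("its time constant:", 1.0 / abs(eigvals.real.max()))
```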
If the gradient of self-coupling (Δ_{r}) is strong enough, some of the eigenvectors of the network will be localized. As the gradient becomes steeper this region of localization expands. Our theory predicts which eigenvectors will be localized and how this region expands as the gradient becomes steeper (Figure 3—figure supplement 1).
By applying the theory sketched in the previous section (and developed in detail in the appendix [Supplementary file 1]), we find that the value of the eigenvector width for the localized eigenvectors (α in Equation 2) is equal to (see Section 3 of Supplementary file 1)
This equation asserts that α^{2} is inversely proportional to the gradient of local connectivity, Δ_{r}, so that a steeper gradient leads to sharper localization, and α^{2} increases with increasing connectivity decay length, l_{c}. Note that in this case the eigenvector width is independent of the location of the eigenvector (or the particular timescale).
In Figure 3B, we plot sample eigenvectors for a network with a weak gradient, where localized and delocalized eigenvectors coexist. We also plot the analytical prediction for the localized eigenvectors, which fits well with the numerical simulation results. For more details on this network see Figure 3—figure supplement 1. In Figure 3C, we plot sample eigenvectors for a network with a strong enough gradient that all eigenvectors are localized. As shown in Figure 3D, all the remaining eigenvectors of this network are localized. In Figure 3E, we plot the decay of this network’s activity from a uniform initial condition; as predicted from the structure of the eigenvectors, decay time constants increase up the chain.
With a strong gradient of self-coupling, Equation 4 holds for all eigenvectors except those at the end of the chain, where edge effects change the shape of the eigenvectors. These eigenvectors are still localized, at the boundary, but are no longer Gaussian and appear to be better described as modulated exponentials. Equation 4 also predicts that eigenvectors become more localized as feedforward and feedback connection strengths approach each other. This is counterintuitive, since increasing feedback strength should couple nodes more tightly. Numerically, this prediction is confirmed only when μ_{f} − μ_{b} is not close to 0. As seen in Figure 4, when μ_{f} − μ_{b} is small, the eigenvector is no longer Gaussian and instead shows multiple peaks. Strengthening the feedback connections leads to the emergence of ripples in the slower modes that modulate the activity of the earlier, faster nodes. While the first-order approximation of the shape of v_{λ} breaks down in this regime, Equation 4 is locally valid in that the largest peak sharpens with increasing symmetry, as seen in Figure 4B.
We extend our expansion to second-order in v_{λ} (appendix [Supplementary file 1], Sections 5 & 6) to predict that the eigenvector is given by
with
where Ai is the first Airy function (Olver, 2010). The eigenvector is the product of an exponential and an Airy function and this product is localized when the exponential is steep (Figure 4A). The steepness of the exponential depends on μ_{f} − μ_{b}. When this difference is small the exponential is shallow and the trailing edge of the product is poorly localized. Figure 4B shows that this functional form accurately predicts the results from numerical simulations, except when the eigenvector is almost completely delocalized.
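The qualitative behavior of such a profile can be sketched numerically; the argument scaling and the values standing in for μ_f − μ_b below are assumptions for illustration only:

```python
# Sketch: an exponential multiplied by the Airy function Ai. A steep exponential
# (large stand-in for mu_f - mu_b) suppresses the oscillating Airy tail, giving a
# sharply localized profile; a shallow exponential leaves large ripples far from
# the peak, i.e., a poorly localized trailing edge.
import numpy as np
from scipy.special import airy

x = np.linspace(-30, 10, 400)                 # position relative to the peak (illustrative units)
for steepness in (0.2, 1.0):                  # stand-in for the mu_f - mu_b dependence
    profile = np.exp(steepness * x) * airy(x)[0]    # airy(x)[0] is Ai(x)
    profile = np.abs(profile) / np.abs(profile).max()
    ripple = profile[x < -10].max()           # largest ripple well before the peak
    print(f"steepness {steepness}: ripple amplitude far from the peak = {ripple:.3f}")
```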
These results reveal that an asymmetry in the strength of feedforward and feedback projections can play an important role in segregation of timescales in biological systems.
The second-order expansion demonstrates that the approach is general and can be extended as needed. While the first-order expansion in v_{λ} generically gives rise to modulated Gaussians, the functional form of the eigenvectors from a second-order expansion depends on the connectivity (appendix [Supplementary file 1], Section 5) and, in general, the asymptotic decay is slower than that of a Gaussian.
Localization in a network with a gradient of connectivity range
The previous architecture was a chain of nodes with identical internode connectivity but varying local connectivity. We now consider a contrasting architecture: a chain with no self-coupling but with a location-dependent bias in internode connectivity. We build this model motivated by the intuitive notion that nodes near the input end of a network send mostly feedforward projections, while nodes near the output send mostly feedback projections. The network architecture is shown in Figure 5A.
Connectivity decays exponentially, as in the previous example, but the decay length depends on position. Moving along the chain, feedforward decay length decreases while feedback decay length increases:
The parameters f_{0}, f_{1}, b_{0}, and b_{1} control the location dependence of the decay lengths, μ_{0} is the leakage term, and μ_{f} and μ_{b} set the maximum strength of feedforward and feedback projections. We also add a small amount of randomness to the connection strengths.
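The sketch below implements one plausible version of this architecture; the linear form of the decay-length gradients, the convention that the range is set by the sending node, and all numerical values are assumptions rather than the paper's Equation 7:

```python
# Sketch of a chain in which the feedforward decay length shrinks and the
# feedback decay length grows along the chain, with a small random jitter
# added to the connection strengths. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 100
mu_0, mu_f, mu_b = -1.0, 0.3, 0.3

j = np.arange(N)
l_f = np.maximum(8.0 - 0.07 * j, 0.5)           # feedforward range, long for early nodes
l_b = 1.0 + 0.07 * j                            # feedback range, long for late nodes
d = j[:, None] - j[None, :]                     # d > 0: projection from earlier node k to later node j
W = np.where(d > 0,
             mu_f * np.exp(-np.abs(d) / l_f[None, :]),   # range set by the sending node k
             mu_b * np.exp(-np.abs(d) / l_b[None, :]))
W *= 1.0 + 0.05 * rng.standard_normal((N, N))   # small randomness in strengths
np.fill_diagonal(W, mu_0)                       # leak, with no gradient of self-coupling

eigvals, eigvecs = np.linalg.eig(W)
centers = np.argmax(np.abs(eigvecs), axis=0)    # node at which each eigenvector peaks
print("eigenvector centers span nodes", centers.min(), "to", centers.max())
```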
As before we calculate the eigenvector width, α. In this case, for a wide range of the parameters in Equation 7, α^{2} is positive and approximately constant for all eigenvectors. Therefore, all eigenvectors are localized and have approximately the same width (appendix [Supplementary file 1], Section 4). Four eigenvectors are plotted in Figure 5B along with theoretical predictions. Figure 5C shows all of the eigenvectors on a heat map and demonstrates that all are localized. The fastest and slowest timescales are localized to the earlier nodes while the intermediate timescales are localized towards the end of the chain. The earlier nodes thus show a combination of very fast and very slow time courses, whereas the later nodes display dynamics with an intermediate range of timescales. Such dynamics present a salient feature of networks with opposing gradients in their connectivity profile. In Figure 5D, we plot the decay of network activity from a uniform initial condition; note the contrast between nodes early and late in the chain.
While the eigenvectors are all localized, different eigenvectors tend to cluster their centers near similar locations. Near those locations, nodes may participate in multiple eigenvectors, implying that time constants are not well segregated. This is a consequence of the architecture: nodes towards the edges of the chain project most strongly towards the center, so that small perturbations at either end of the chain are strongly propagated inward. The narrow spread of centers (the overlap of multiple eigenvectors) reduces the segregation of timescales that is one benefit of localization. We find that adding a small amount of randomness to the system spreads out the eigenvector centers without significantly changing the shape. This approach is more robust than fine-tuning parameters to maximally spread the centers, and seems reasonable in light of the heterogeneity intrinsic to biological systems (Raser and O’Shea, 2005; Barbour et al., 2007). Upon adding randomness, most eigenvectors remain Gaussian while a minority are localized but lose their Gaussian shape.
The significant overlap of the eigenvectors means that the eigenvectors are far from orthogonal to each other. Such matrices, called non-normal matrices, can show a number of interesting transient effects (Trefethen and Embree, 2005; Goldman, 2009; Murphy and Miller, 2009). In particular we note that the dynamics of our example network show significant initial growth before decaying, as visible in the scale of Figure 5D.
Randomness and diversity
As observed in the last section, the heterogeneity intrinsic to biological systems can play a beneficial role in computation. Indeed, sufficient randomness in local node properties has been shown to give localized eigenvectors in models of physical systems with nearest-neighbor connectivity, and the transition from delocalized to localized eigenvectors has been suggested as a model of the transition from a conducting to an insulating medium (Anderson, 1958; Abou-Chacra et al., 1973; Lee, 1985). A similar mechanism should apply in biological systems. We numerically explore eigenvector localization in a network with exponentially-decaying connectivity and randomly distributed self-couplings.
The network connection matrix is given by
where $\mathcal{N}(0,\sigma^{2})$ is drawn from a normal distribution with mean zero and variance σ^{2}.
As σ^{2} increases, the network shows a transition to localization. This transition is increasingly sharp and occurs at lower values of σ as the network gets larger. Figure 6 shows a network with sufficient randomness for the eigenvectors to localize, with sample eigenvectors shown in Figure 6B. These show a variety of shapes and are no longer well described by Gaussians. Importantly, there is no longer a relationship between the location of an eigenvector and the timescale it corresponds to (Figure 6C). Thus while each timescale is localized, a variety of timescales are present in each region of the network, and each node will show a random mixture of timescales. This is in contrast to our previous examples, which have a spatially continuous distribution of time constants. The random distribution of time constants is also observed in the decay from a uniform initial condition, as shown in Figure 6D.
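A minimal numerical version of this architecture (illustrative values; the coupling scale and leak are assumptions) shows the transition directly, with the average participation ratio of the eigenvectors collapsing as σ grows:

```python
# Sketch: exponentially decaying coupling on a chain plus random self-couplings
# drawn from N(0, sigma^2). Increasing sigma localizes the eigenvectors, which
# shows up as a drop in their average participation ratio.
import numpy as np

rng = np.random.default_rng(1)
N, l_c, mu_0 = 200, 2.0, -1.0
j = np.arange(N)
coupling = 0.1 * np.exp(-np.abs(j[:, None] - j[None, :]) / l_c)    # assumed coupling scale

def mean_participation_ratio(sigma):
    W = coupling.copy()
    np.fill_diagonal(W, mu_0 + sigma * rng.standard_normal(N))     # random self-couplings
    _, vecs = np.linalg.eig(W)
    p = np.abs(vecs) ** 2
    p /= p.sum(axis=0)
    return float(np.mean(1.0 / np.sum(p ** 2, axis=0)))

for sigma in (0.01, 0.1, 1.0):
    print(f"sigma = {sigma}: mean participation ratio = {mean_participation_ratio(sigma):.1f}")
```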
Discussion
Local connectivity is insufficient to create localized temporal patterns of activity in linear networks. A network with sharply localized but translationally invariant connectivity has delocalized eigenvectors. This implies that distant nodes in the network have similar temporal activity, since they share the timescales of their dynamics. Breaking the invariance can give rise to localized eigenvectors, and we study conditions that allow this. We develop a theory to predict the shapes of localized eigenvectors and our theory generalizes to describe eigenvectors that are only partially localized and show multiple peaks. A major finding of this study is the identification of two network architectures, with either a gradient of local connectivity or a gradient of long-distance connection length, that give rise to activity patterns with localized timescales.
Our approach to eigenvector localization is partly based on Trefethen and Embree (2005) and Trefethen and Chapman (2004). The authors study perturbations of translationally invariant matrices and determine conditions under which eigenvectors are localized in the large-N limit. We additionally assume that the connectivity is local, since we are interested in matrices that describe connectivity of biological networks. This allows us to calculate explicit functional forms for the eigenvectors.
We stress that the temporal aspect of the network dynamics should not be confused with selectivity across space in a neural network. Even if temporal patterns are localized, a large proportion of network nodes may be active in response to a given input, albeit with distinct temporal dynamics. Conversely, even if temporal patterns are delocalized, nodes show similar dynamics yet may still be highly selective to different inputs and any stimulus could primarily activate only a small fraction of nodes in the network.
Our results are particularly relevant to understanding networks that need to perform computations requiring a wide spread of timescales. In general, input along a fast eigenvector decays exponentially faster than input along a slow eigenvector. To see this, consider a network with a fast and a slow timescale ($1/|\lambda_{fast}|$ and $1/|\lambda_{slow}|$), and having initial condition with components a_{fast} and a_{slow} along the fast and the slow eigenvectors respectively. As shown in Equation 11, the network activity will evolve as $a_{fast}e^{-|\lambda_{fast}|t}+a_{slow}e^{-|\lambda_{slow}|t}$. For a node to show a significant fast timescale in the presence of a slower, more persistent timescale, the contribution of this slow timescale to the node must be small. This can happen in two ways, corresponding to the terms of Equation 10. If the input contributes little to the slower eigenvectors then their amplitudes will be small at all nodes. This requires fine-tuned input (exponentially smaller along the slow eigenvectors) and means that the slow timescales do not contribute significantly to any node. Alternately, as in the architectures we propose, the slow eigenvectors could be exponentially smaller at certain nodes; these nodes will then show fast timescales for most inputs, with a small slow component.
The architecture with a gradient of local connectivity (Figure 3) may explain some observations in the larval zebrafish oculomotor system (Miri et al., 2011). The authors observed a wide variation in the time constants of decay of firing activity across neurons, with more distant neurons showing a greater difference in time constants. They proposed a model characterized by a chain of nodes with linearly-decaying connectivity and a gradient of connection strengths, and found that different nodes in the model showed different timescales. Furthermore, the introduction of asymmetry to connectivity (with feedback connections weaker than feedforward connections) enhanced the diversity of timescales. This effect of asymmetry was also seen in an extension of the model to the macaque monkey oculomotor integrator (Joshua et al., 2013). Our work explains why such architectures allow for a diversity of timescales, and we predict that such gradients and asymmetry should be seen experimentally.
With a gradient of local connections, time constants increase monotonically along the network chain. By contrast, with a gradient of connectivity length (Figure 5), the relationship between timescales and eigenvector position is lawful but nonmonotonic, as a consequence of the existence of two gradients (feedforward connectivity decreases while feedback increases along the chain). The small amount of randomness added to this system helps segregate the timescales across the network, while only mildly affecting the continuous dependence of eigenvector position on timescale. This suggests that randomness may contribute to a diversity of timescales.
The connection between structural randomness and localization is well known in physical systems (Anderson, 1958; Abou-Chacra et al., 1973; Lee, 1985). We applied this idea to a biological context (Figure 6), and showed that localization can indeed emerge from sufficiently random node properties. However, in this case nearby eigenvectors do not correspond to similar timescales. A given timescale is localized to a particular region of the network but a similar timescale could be localized at a distant region and, conversely, a much shorter or longer timescale could be localized in the same part of the network. Thus, the timescales shown by a particular node are a random sample of the timescales of the network.
Chemical gradients are common in biological systems, especially during development (Wolpert, 2011), and structural randomness and local heterogeneity are ubiquitous. We predict that biological systems could show localized activity patterns due to either of these mechanisms or a combination of the two. Furthermore, local randomness can enhance localization that emerges from gradients or long-range spatial fluctuations in local properties. We have focused on localization that yields a smooth relationship between timescale and eigenvector position; such networks are well-placed to integrate information at different timescales. However, it seems plausible that biological networks have evolved to take advantage of randomness-induced localization, and it would be interesting to explore the computational implications of such localization. It could also be fruitful to explore localization from spatially correlated randomness.
An influential view of complexity is that a complex network combines segregation and integration: individual nodes and clusters of nodes show different behaviors and subserve different functions; these behaviors, however, emerge from network interactions and the computations depend on the flow of information through the network (Tononi and Edelman, 1998). The localized activity patterns we find are one way to construct such a network. Each node participates strongly in a few timescales and weakly in the others, but the shape and timescales of the activity patterns emerge from the network topology as a whole and information can flow from one node to another. Moreover, as shown in Figure 7, adding a small number of long-range strong links to local connectivity, as in small-world networks (Watts and Strogatz, 1998), causes a few eigenvectors to delocalize while leaving most localized. This is a possible mechanism to integrate computations while preserving segregated activity, and is an interesting direction for future research.
Methods
We study the activity of a linear network of coupled units, which will be called ‘nodes’. These represent neurons or populations of neurons. The activity of the jth node, ϕ_{j} (t), is determined by interactions with the other nodes in the network and by external inputs. It obeys the following equation:
$\frac{d\phi_j(t)}{dt} = \sum_{k} W(j,k)\,\phi_k(t) + I_j,$
where W (j,k) is the connection strength from node k to node j of the network and I_{j} is the external input to the jth node. W (j,j) is the selfcoupling of the jth node and typically includes a leakage term. Note that the intrinsic timescale of node j is absorbed into the matrix W.
By solving Equation 9, ϕ_{j} (t) can be expressed in terms of the eigenvectors of the connection matrix W, yielding
$\phi_j(t) = \sum_{\lambda} A_{\lambda}(t)\, v_{\lambda}(j)$
(Rugh, 1995). Here, λ indexes the eigenvalues of W, and v_{λ} (j) is the jth component of the eigenvector corresponding to λ. These are independent of the input. A_{λ} (t) is the time-dependent amplitude of the eigenvector v_{λ} and depends on the input, which determines to what extent different eigenvectors are activated. If the real parts of the eigenvalues are negative then the network is stable and, in the absence of input, A_{λ} (t) decays exponentially with a characteristic time of $1/|\mathrm{Re}(\lambda)|$.
A_{λ} (t) consists of the sum of contributions from the initial condition and the input, so that Equation 10 can be written as
$\phi_j(t) = \sum_{\lambda} \left( \tilde{a}_{\lambda}\, e^{\lambda t} + \int_0^t e^{\lambda (t-s)}\, \tilde{I}_{\lambda}(s)\, ds \right) v_{\lambda}(j).$
${\tilde{a}}_{\lambda}$ and ${\tilde{I}}_{\lambda}$ are the coefficients for the initial condition and the input, respectively, represented in the coordinate system of the eigenvectors. In a stable network, each node forgets its initial condition and simultaneously integrates input with the same set of time constants.
In this work, we examine different classes of the connection matrix W, with the constraint that connectivity is primarily local, and we identify conditions under which its eigenvectors are localized in the network in such a way that different nodes (or different parts of the network) exhibit disparate timescales.
The functional form of localized eigenvectors from a first-order expansion
We rewrite the connectivity matrix in terms of a relative coordinate, p = j−k, as c(j,p) = W(j, j − p).
Thus, c (j,2) = W (j,j − 2) indexes feedforward projections that span two nodes, and c (5,p) = W (5,5 − p) indexes projections to node 5. Note that in the translation-invariant case, c (j,p) would be independent of j (appendix [Supplementary file 1], Section 1), while the requirement of local connectivity means that c (j,p) is small away from p = 0. For any fixed j, c (j,p) is defined from p = j − N to p = j − 1. We extend the definition of c (j,p) to values outside this range by defining c (j,p) to be periodic in p, with the period equal to the size of the network. This is purely a formal convenience to simplify the limits in certain sums and does not constrain the connectivity between the nodes of the network.
Consider the candidate eigenvector v_{λ} (j) = g_{λ} (j) e^{iωj}. The dependence of g_{λ} on j allows the magnitude of the eigenvector to depend on position; setting this function equal to a constant returns us to the translation-independent case (see appendix [Supplementary file 1], Section 1). Moreover, note that g_{λ} (j) depends on λ, meaning that eigenvectors corresponding to different eigenvalues (timescales) can have different shapes. For example, different eigenvectors can be localized to different degrees, and localized and delocalized eigenvectors can coexist (see Figure 3—figure supplement 1 for an illustration). ω allows the eigenvector to oscillate across nodes; it varies between eigenvectors and so depends on λ.
Applying W to v_{λ} yields
here, the term in brackets is no longer independent of j.
So far we have made no use of the requirement of local connectivity and, given that g_{λ} is an arbitrary function of position and can be different for different timescales, we have placed no constraints on the shape of the eigenvectors. By including an oscillatory term (e^{iωj}) in our ansatz, we ensure that g_{λ} (j) is constant when connectivity is translation-invariant; this will simplify the analysis.
We now approximate both c (j,p) and g_{λ} (j − p) to first-order (i.e., linearly):
where j_{0} is a putative center of the eigenvector.
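Written out, one natural form of this linearization (the appendix fixes the exact conventions) is
$c(j,p) \approx c(j_0,p) + (j - j_0)\, \partial_j c(j,p)\big|_{j = j_0}, \qquad g_{\lambda}(j - p) \approx g_{\lambda}(j) - p\, g_{\lambda}'(j).$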
Substituting Equation 15 into Equation 14 we get
We expect these approximations to be valid only locally. However, if connectivity is local then the major contribution to the sum comes from small values of p. For large values of p, g_{λ} (j − p) is multiplied by connectivity strengths close to 0 and so we only need to approximate g_{λ} for p close to 0. Similarly, in approximating c (j,p) around j = j_{0}, we expect our approximation to be good in the vicinity of j = j_{0}. However, if our eigenvector is indeed localized around j_{0}, then g_{λ} (k) is small when $|k - j_0|$ is large. For small p, large values of $|k - j_0|$ approximately correspond to large values of $|j - j_0|$, and so c (j,p) makes a contribution to the sum only when j ≈ j_{0}.
The zeroth-order term in Equation 16 is
The function in parentheses is periodic in p with period N (recall that c (j,p) was extended to be periodic in p). Thus to zeroth-order v_{λ} is an eigenvector with eigenvalue
$\lambda = \sum_{p} c(j_0, p)\, e^{-i\omega p}.$
For λ to be an exact eigenvalue in Equation 16, the higher-order terms should vanish. By setting the first-order term in this equation to 0, we obtain a differential equation for g_{λ} (j):
where,
Thus α^{2} is a ratio of discrete Fourier transforms at the frequency ω. Note that the denominator is a weighted measure of network heterogeneity at the location j_{0}. Also note that α^{2} can be written in terms of λ as (compare the twist condition of Trefethen and Embree, 2005):
Solving for g_{λ} in Equation 18 yields
where C_{1} is a constant. Thus, to first-order, the eigenvector is given by the modulated Gaussian function
In general, α can be complex. In order for v_{λ} to be localized, $\mathrm{Re}(\alpha^{2})$ must be positive for the corresponding values of j_{0} and ω, and we only accept an eigenvector as a valid solution if this is the case. Thus the approach is self-consistent: we assumed that there existed a localized eigenvector, combined this with the requirement of local connectivity to solve for its putative shape, and then restricted ourselves to solutions that did indeed conform to our initial assumption.
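As an illustration of how the width reflects heterogeneity, carrying the first-order matching through gives, up to a sign and normalization convention that are fixed in the appendix,
$\alpha^{2} \;\propto\; \frac{\sum_{p} p\, c(j_0,p)\, e^{-i\omega p}}{\sum_{p} \partial_j c(j,p)\big|_{j=j_0}\, e^{-i\omega p}},$
so that for translation-invariant connectivity the denominator vanishes, the width diverges, and the delocalized Fourier modes of Equation 1 are recovered.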
For an expanded version of this analysis along with further discussion of what the analysis provides, see the appendix (Supplementary file 1), Section 2.
References

A self-consistent theory of localization. Journal of Physics C: Solid State Physics 6:1734–1752. https://doi.org/10.1088/0022-3719/6/10/009
Absence of diffusion in certain random lattices. Physical Review 109:1492–1505. https://doi.org/10.1103/PhysRev.109.1492
What can we learn from synaptic weight distributions? Trends in Neurosciences 30:622–629. https://doi.org/10.1016/j.tins.2007.09.005
Coding of stimulus sequences by population responses in visual cortex. Nature Neuroscience 12:1317–1324. https://doi.org/10.1038/nn.2398
A reservoir of time constants for memory traces in cortical neurons. Nature Neuroscience 14:366–372. https://doi.org/10.1038/nn.2752
Correlated discharges among putative pyramidal neurons and interneurons in the primate prefrontal cortex. Journal of Neurophysiology 88:3487–3497. https://doi.org/10.1152/jn.00188.2002
Specialization of the neocortical pyramidal cell during primate evolution. In: JH Kass, TM Preuss, editors. Evolution of nervous systems: a comprehensive reference. New York: Elsevier. pp. 191–242.
Linearization of F-I curves by adaptation. Neural Computation 10:1721–1729. https://doi.org/10.1162/089976698300017106
Memory traces in dynamical systems. Proceedings of the National Academy of Sciences of the United States of America 105:18970–18975. https://doi.org/10.1073/pnas.0804451105
Pyramidal cell communication within local networks in layer 2/3 of rat neocortex. Journal of Physiology (London) 551:139–153. https://doi.org/10.1113/jphysiol.2003.044784
Disordered electronic systems. Reviews of Modern Physics 57:287–337. https://doi.org/10.1103/RevModPhys.57.287
Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex. Journal of Neuroscience 32:5609–5619. https://doi.org/10.1523/JNEUROSCI.5158-11.2012
Weight consistency specifies regularities of macaque cortical networks. Cerebral Cortex 21:1254–1272. https://doi.org/10.1093/cercor/bhq201
Spatial gradients and multidimensional dynamics in a neural integrator circuit. Nature Neuroscience 14:1150–1159. https://doi.org/10.1038/nn.2888
The structure and function of complex networks. SIAM Review 45:167–256. https://doi.org/10.1137/S003614450342480
Chapter 9: Airy and related functions. In: FWJ Olver, DW Lozier, RF Boisvert, CW Clark, editors. NIST handbook of mathematical functions. New York: Cambridge University Press. pp. 193–214.
A synaptic organizing principle for cortical neuronal groups. Proceedings of the National Academy of Sciences of the United States of America 108:5419–5424. https://doi.org/10.1073/pnas.1016051108
Eigenvalue spectra of random matrices for neural networks. Physical Review Letters 97:188104. https://doi.org/10.1103/PhysRevLett.97.188104
Rate models for conductance-based cortical neuronal networks. Neural Computation 15:1809–1841. https://doi.org/10.1162/08997660360675053
Spatial and temporal scales of neuronal correlation in primary visual cortex. Journal of Neuroscience 28:12591–12603. https://doi.org/10.1523/JNEUROSCI.2929-08.2008
The non-random brain: efficiency, economy, and complex dynamics. Frontiers in Computational Neuroscience 5:5. https://doi.org/10.3389/fncom.2011.00005
Wave packet pseudomodes of twisted Toeplitz matrices. Communications on Pure and Applied Mathematics 57:1233–1264. https://doi.org/10.1002/cpa.20034
Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton: Princeton University Press.
Neural network dynamics. Annual Review of Neuroscience 28:357–376. https://doi.org/10.1146/annurev.neuro.28.061604.135637
Calcium coding and adaptive temporal computation in cortical pyramidal neurons. Journal of Neurophysiology 79:1549–1566.
Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosciences 24:455–463. https://doi.org/10.1016/S0166-2236(00)01868-3
Prefrontal cortex. In: GM Shepherd, S Grillner, editors. Handbook of brain microcircuits. New York: Oxford University Press. pp. 46–56.
Positional information and patterning revisited. Journal of Theoretical Biology 269:359–365. https://doi.org/10.1016/j.jtbi.2010.10.034
Decision letter

Misha Tsodyks, Reviewing Editor; Weizmann Institute of Science, Israel
eLife posts the editorial decision letter and author response on a selection of the published articles (subject to the approval of the authors). An edited version of the letter sent to the authors after peer review is shown, indicating the substantive concerns or comments; minor concerns are not usually shown. Reviewers have the opportunity to discuss the decision before the letter is sent (see review process). Similarly, the author response typically shows only responses to the major concerns raised by the reviewers.
Thank you for sending your work entitled “A diversity of localized timescales in network activity” for consideration at eLife. Your article has been favorably evaluated by a Senior editor and 2 reviewers, one of whom is a member of our Board of Reviewing Editors.
The Reviewing editor and the other reviewers discussed their comments before we reached this decision, and the Reviewing editor has assembled the following comments to help you prepare a revised submission.
This is a very interesting article from a theoretical perspective. It shows how a combination of non-Hermiticity and broken translation invariance can lead generically to surprisingly localized eigenfunctions. Biological implications of this result are that neuronal networks in the brain could have localized modes of activation characterized by different time scales.
The main issue that should be addressed in the revision is that, while on one hand, the analytical method for estimation of eigenvectors is the major contribution, the method is presented in an incomprehensible manner. As a result, one cannot appreciate what advantage it offers over simply diagonalizing the connectivity matrix numerically (this issue is not discussed by the authors). Here is the list of points that have to be clarified.
1) Equation (7) – it has to be solved for j_{0} and w to find the shape of the corresponding eigenvector. How do we know the eigenvalues of the matrix? This is never explained. Moreover, in the first example considered, of Equation (13), the authors simply say that they ‘match the eigenvalues to j_{0} and w, to find that w=pi’. Is there an analytical expression for the eigenvalues of the matrix of Eq. (13)? If yes, the authors should provide it. It would make this example very special though. What if there is no such expression, would they have to diagonalize the matrix numerically? This would also provide the eigenvectors, so the whole procedure would seem to be redundant.
2) In the next example, of Equation (18), apparently there is no analytical expression for the eigenvalues, and the final solution for the width of the eigenvector, Equation (21) still depends on j_{0} and w. The authors don’t explain how they find those.
3) In the presentation of the second-order expansion approach, the authors seem to ignore that both first-order and second-order corrections to eigenvalues have to vanish. They only consider the second order. Why does it make sense to ignore the first-order correction?
https://doi.org/10.7554/eLife.01239.012
Author response
1) Equation (7) – it has to be solved for j_{0} and w to find the shape of the corresponding eigenvector. How do we know the eigenvalues of the matrix? This is never explained. Moreover, in the first example considered, of Equation (13), the authors simply say that they ‘match the eigenvalues to j_{0} and w, to find that w=pi’. Is there an analytical expression for the eigenvalues of the matrix of Equation (13)? If yes, the authors should provide it. It would make this example very special though. What if there is no such expression, would they have to diagonalize the matrix numerically? This would also provide the eigenvectors, so the whole procedure would seem to be redundant. And
2) In the next example, of Equation (18), apparently there is no analytical expression for the eigenvalues, and the final solution for the width of the eigenvector, Equation (21) still depends on j_{0} and w. The authors don’t explain how they find those.
We thank the reviewers for this point—the discussion of the relationship to the eigenvalues was unclear and has now been clarified. We start by stressing that the benefit of our approach is not primarily computational. The reviewers are right that, in general, our method doesn't provide a way to analytically compute the eigenvalues and, given that we consider a large class of matrices, it would be surprising if we could. Instead, our approach yields theoretical insight into the conditions that allow for eigenvector localization and how the shape of localized eigenvectors depends on network parameters.
In Supplementary file 1 (mathematical appendix), we have added an extensive discussion of what the theory yields and why it is useful. We have also clarified this in the main text and, to avoid any confusion, highlighted that our theory does not in general predict the eigenvalues of an arbitrary matrix with local connectivity. Finally, we have added a figure (Figure 3–figure supplement 1) that demonstrates how the theory picks out a region of the complex plane within which localized eigenvectors lie. We summarize these points below.
Given a network specification (i.e., connectivity profile), our analysis reveals the functional form of the eigenvector and, to first order, localized eigenvectors are Gaussians. However, the parameters of this functional form (in this case the center and width) depend on the particular connectivity profile (known) and on the eigenvalues, which are unknown. In general these eigenvalues must be separately calculated. Given a particular eigenvalue the theory tells us whether the corresponding eigenvector is localized. In this case it also yields the shape along with an analytic formula for the dependence of the shape on network parameters; this formula can be used to understand how changing network parameters promotes or hinders localization.
Our theory also identifies a region of the complex plane within which the eigenvalues lie, and tells us which of these putative timescales will correspond to localized eigenvectors. It tells us which nodes can host localized eigenvectors and how changing the parameters of the network changes the region of localized timescales. It also provides qualitative insight into factors (like translation-dependence and asymmetry) that promote localization.
In certain cases we can draw general conclusions about the shapes of all localized eigenvectors without computing the eigenvalues. For example, in the network of Figure 3, the eigenvector width is the same for all localized eigenvectors. This is a special example but not an unnatural one; a gradient of local properties is among the simplest deterministic ways to break translation-invariance. We also note that our theory allows us to translate constraints on the eigenvalue spectrum of the network (for example, low-rank or sharply-decaying connectivity, real eigenvalues, etc) into constraints on the shape of eigenvectors.
We elaborate on these points in Section 2 of the mathematical appendix.
3) In the presentation of the second-order expansion approach, the authors seem to ignore that both first-order and second-order corrections to eigenvalues have to vanish. They only consider the second order. Why does it make sense to ignore the first-order correction?
In all of our expansions, we require that the sum of the higher-order terms vanishes. For the second-order expansion, this means that the sum of the first- and second-order terms should vanish. Note that this does not necessarily mean that the terms vanish separately.
https://doi.org/10.7554/eLife.01239.013
Article and author information
Author details
Funding
Office of Naval Research (N000141310297)
 Xiao-Jing Wang
John Simon Guggenheim Memorial Foundation Fellowship
 Xiao-Jing Wang
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
 Misha Tsodyks, Weizmann Institute of Science, Israel
Publication history
 Received: July 16, 2013
 Accepted: December 4, 2013
 Version of Record published: January 21, 2014 (version 1)
Copyright
© 2014, Chaudhuri et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.