Introduction

Traveling waves have emerged as a central topic among the different large-scale spatiotemporal dynamics observed in neuronal systems Muller et al. (2018). These waves are observed at different frequencies during various brain activities, ranging from slow-wave activity Massimini (2004); Liang et al. (2021) and sleep spindles Muller et al. (2016) to faster oscillations in the alpha Zhang et al. (2018); van Kerkoerle et al. (2014), beta Bhattacharya et al. (2022); Rubino et al. (2006); Davis et al. (2020), and gamma van Kerkoerle et al. (2014); Aggarwal et al. (2022) frequency bands. These waves also appear in various brain regions across species, including the hippocampus Lubenov and Siapas (2009); Patel et al. (2012); Zhang and Jacobs (2015), visual cortex Sato et al. (2012), prefrontal cortex Bhattacharya et al. (2022), and motor cortex Rubino et al. (2006), and can span multiple brain regions Liang et al. (2023). More importantly, these waves have long been conjectured to regulate brain functions through mechanisms such as synaptic plasticity Ermentrout and Kleinfeld (2001); Muller et al. (2018). Indeed, researchers have recently found that traveling waves can modulate important brain functions such as memory, visual processing, and sleep Mohan et al. (2024); Aggarwal et al. (2022); Miyamoto et al. (2017).

The prevalence and important implications of traveling waves have prompted researchers to investigate their causes and organization. In these studies, computational modeling has been useful due to its flexibility. Researchers have developed both microscopic models Davis et al. (2020); Naze et al. (2015); Ermentrout and Kleinfeld (2001) and mesoscopic models such as neural mass models Roberts et al. (2019); Liang et al. (2023) to explore the mechanistic properties and computational implications of traveling waves. However, most of these studies focus on microscopic or mesoscopic waves that appear in a single brain region; large-scale, macroscopic waves across multiple brain regions have received less attention. Moreover, these models are usually constructed with local or uniform connectivity and lack the realistic geometry and connectivity of the brain. It has been shown that in a network of coupled non-linear oscillators, certain dynamics only arise when the number of oscillators is high enough, where local and global activities can coexist in the network Budzinski et al. (2022). At the same time, it has been argued that modeling neuronal networks at single-cell resolution will yield more precise discoveries than mean-field models Deco et al. (2008). Therefore, it is ideal to study traveling waves in a large domain such as the cortex at single-cell resolution.

However, there are two significant challenges. First, a microscopic wiring diagram of the mouse brain at the whole-brain level is not available; we do, however, have mesoscopic connectivity data provided by the Allen Brain Atlas Oh et al. (2014); Knox et al. (2018); Harris et al. (2019), which we use in this work. Second, simulating and measuring wave activity in a large 3D computational domain with realistic brain geometry is computationally expensive. Simulations of this scale are usually conducted on supercomputers Yamaura et al. (2020); Markram et al. (2015), and there are no established methods to quantify wave patterns on a large 3D domain.

In this work, we overcome these two challenges in order to study the emergence and propagation of macroscopic traveling waves in the mouse cortex. First, we propose an algorithm that integrates a spatial transcriptomic dataset, which provides the detailed spatial and molecular profile of neurons Zhang et al. (2023), with voxelized connectivity data Knox et al. (2018) of the mouse brain. We use this algorithm to create neuron-to-neuron connectivity for one hemisphere of the mouse cortex, which we call the Allen connectivity. We then construct a 3D computational model with both excitatory and inhibitory neurons on top of the Allen connectivity. To overcome the second challenge, we build our own parallel computational framework that can simulate the constructed model on a desktop GPU. We also extend a previously established method designed for 2-D data to our 3-D data to quantify the wave activities observed in simulations.

In our simulations, we find that the realistic Allen connectivity is capable of generating macroscopic waves across the mouse cortex with realistic propagation directions. Further quantification shows that the Allen connectivity generates a higher level of macroscopic wave activity than local and uniform connectivity. Moreover, we find that the coupling strength between neurons non-monotonically affects the level of macroscopic wave activity that emerges from each connectivity. Finally, we show that the realistic Allen connectivity supports a higher level of macroscopic traveling waves across different frequencies, especially in the Delta band (0.5 - 4 Hz), than uniform and local connectivity. Our work shows how realistic connectivity can enable flexible macroscopic waves across the mouse cortex. We also provide a computational framework to further study such traveling waves in the mouse brain at single-cell resolution.

Results

Construction of the cortical connectivity

A recent spatial transcriptomic dataset (Zhuang-ABCA1) Zhang et al. (2023) provides detailed positions of neurons in the mouse brain along with their molecular properties, including neurotransmitter information (Fig. 1a, Use of the spatial transcriptomic data). Our goal is to connect the neurons in Zhuang-ABCA1 based on experimentally measured connectivity data of the mouse brain. Previously, Oh et al. (2014) and Harris et al. (2019) provided a mesoscopic connectivity matrix, known as the "projectome", of the mouse brain. The projectome is derived from thousands of viral tracing experiments carried out within the mouse brain, and it measures the strength of connections between different brain regions at a regional level. This projectome was later refined to a finer level using a statistical model introduced by Knox et al. (2018), where the strength of connections is provided at a voxel (100 µm) resolution. Because both the neuronal data (Zhuang-ABCA1) and the voxelized connectivity data are registered to the Allen Brain Atlas Common Coordinate Framework (ABA-CCF) Wang et al. (2020), we now introduce an algorithm that integrates the two datasets to construct a neuron-to-neuron connectivity.

Consider constructing a connectivity for N neurons contained in M voxels, and let Xi represent the physical position of the ith neuron. We have access to the voxelized projection matrix W, where Wab denotes the projection strength from voxel a to voxel b. Our connectivity algorithm requires only a single density parameter, ρ, which determines the average number of connections per neuron, thereby controlling the network's sparsity (Use of the voxelized connectivity data). Using this information, we propose the following algorithm to construct the connectivity for the target neurons (Fig. 1b and Methods):

Construction of the cortical connectivity.

(a) Visualization of all neurons in the hemispherical dataset we use to build the cortical connectivity. The hemisphere contains about 1 million neurons in total, including about 300,000 cortical neurons. (b) Algorithm for building the connectome: i) First, we identify neuronal excitatory/inhibitory information based on the spatial transcriptomic data. ii) We then identify the voxelized position of all the neurons. iii) Based on the voxelized projection data, we calculate the number of connections between every pair of voxels (see Use of the voxelized connectivity data). iv) Connections are created randomly between the neurons in each pair of voxels. v) Repeat for all voxels in regions of interest. vi) Complete the connectivity. (c) The connectivity matrix shows the number of connections between different cortical regions (log10 scale). A full version of the connectivity matrix with annotated regions is shown in Figure 1—figure supplement 1. (d) Visualization of outgoing connections from sampled neurons in the primary visual cortex (VISp, left), primary auditory cortex (AUDp, middle), and the ventral anterior cingulate areas (ACAv, right).

Algorithm 1

Construction of cortical connectivity
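Since the body of Algorithm 1 is not reproduced here, the following is a minimal Python sketch of steps ii)–vi) of the procedure described above. It assumes the voxel index of every neuron and the voxel-to-voxel connection counts (see Use of the voxelized connectivity data) have already been computed; all names are illustrative and not taken from the released code.

```python
import numpy as np

def build_connectivity(neuron_voxel, n_connections, seed=None):
    """Draw random neuron-to-neuron connections from voxel-level counts.

    neuron_voxel  : 1-D array mapping each neuron index to its voxel id
    n_connections : 2-D array whose entry (a, b) is the number of connections
                    to create from voxel a to voxel b
    """
    rng = np.random.default_rng(seed)
    # Group neurons by voxel so the main loop runs over voxels, not neurons.
    neurons_in_voxel = {}
    for idx, v in enumerate(neuron_voxel):
        neurons_in_voxel.setdefault(v, []).append(idx)

    edges = []
    voxels = list(neurons_in_voxel)
    for a in voxels:
        pre_pool = neurons_in_voxel[a]
        for b in voxels:
            k = int(round(n_connections[a, b]))
            if k == 0:
                continue
            post_pool = neurons_in_voxel[b]
            # Randomly pair pre- and post-synaptic neurons between the two
            # voxels, rejecting self-connections.
            pre = rng.choice(pre_pool, size=k)
            post = rng.choice(post_pool, size=k)
            edges.extend((i, j) for i, j in zip(pre, post) if i != j)
    return edges
```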

Applying this algorithm to all cortical neurons identified in Zhuang-ABCA1, we obtain a cortical connectivity that consists of 297,812 neurons, with each neuron having 300 connections on average, resulting in a network sparsity of around 0.1% (Construction and storage of the connectivity). The cortical connectivity we constructed is highly heterogeneous: the number of connections between different regions varies across four orders of magnitude (Fig. 1c). At the same time, both the in-degree and out-degree distributions of the connectome follow a realistic log-normal distribution Buzsáki and Mizuseki (2014) (Figure 1—figure supplement 1). Since the complete connectome is difficult to visualize, we show a sample of connections from different cortical regions, including the primary visual cortex, the primary auditory cortex, and the ventral anterior cingulate area (Fig. 1d).

Network model and simulation

To simulate the Allen connectivity we just constructed, we now develop a computational model of the mouse cortex for computer simulation. The model incorporates both excitatory glutamatergic (AMPA) and inhibitory GABAergic neurons, with excitatory and inhibitory identities provided by the original dataset (Zhuang-ABCA1). Both neuron types are modeled as single-compartment, Hodgkin-Huxley-type neurons, whose firing behavior has been extensively described in Pospischil et al. (2008). Consequently, our network model contains two kinds of synapses (GABA and AMPA), both described by a first-order kinetic model (Neuronal and synaptic model).

Now, we need a mechanism to generate traveling waves in the simulation. According to previous studies, traveling waves can emerge in the mouse cortex both after sensory stimuli Mohajerani et al. (2013) and during spontaneous activity Liang et al. (2023). Here, we electrically stimulate all layer 4 neurons with an applied current as a way to simulate subcortical input to the cortex, since layer 4 is known to be the input layer Rockland (2019). Different levels of stimuli can be given by adjusting the magnitude of the applied current. Each simulation lasts for 3 seconds, during which the neurons are constantly under stimulation. To make such large-scale simulations possible within realistic time frames, we construct our own computational engine utilizing GPUs for simulation and visualization (Simulation).

Macroscopic traveling waves arise from realistic connectivity

Simulating the model with the Allen connectivity, we immediately observe macroscopic traveling waves emerging in the network in response to the electrical stimulation of layer 4 neurons. In Figure 2 and Videos, we present a simulation in which a rhythmic oscillation occurs. The oscillation itself is well synchronized across the cortex, as shown by the average voltage across the cortex and the raster plots of 40,000 sampled neurons (Fig. 2a). Next, we choose an appropriate time window to visualize the voltage activities of all the neurons in the network. We first visualize the intracellular voltage of all individual neurons in the network (Figure 2—figure supplement 1), but since the number of neurons is large, we proceed to visualize the average intracellular voltage of the 300 nearest neurons at each location (Fig. 2b and d). This reveals an apparent macroscopic traveling wave in the anterior-posterior direction through the cortex, which has been previously reported Massimini (2004); Liang et al. (2021); Aggarwal et al. (2024). We observe the anterior-posterior macroscopic wave in both directions (Fig. 2c and 2e). In fact, the two waves can propagate in opposite directions right after each other, indicating a possible feedforward-feedback mechanism.

Anterior-posterior macroscopic traveling waves emerge from Allen connectivity

(a) Top: the average intracellular voltage of all neurons; bottom: raster plots of 40,000 neurons evenly sampled from different cortical regions. The red and blue dashed lines indicate the time windows that are visualized in (b-e). (b) Visualization of average intracellular voltage across different locations in the cortex during the time window indicated by the red dashed lines in (a). (c) Visualization of the macroscopic traveling wave by coloring the neurons based on the first time they fire during the same time window visualized in (b). (d-e) Same plots as (b-c), but visualizing a different time window of the same simulation, indicated by the blue dashed lines in (a). The direction of the traveling wave is opposite to the one shown in the previous panels. See Videos for the movie of the simulation.

Figure 2—figure supplement 1. Visualization of the macroscopic traveling waves at single-neuron resolution. (a-b) Visualization of the same simulation presented in Fig. 2, but showing intracellular voltage at single-neuron resolution.

Higher level of macroscopic wave emerges from realistic connectivity

In past modeling studies of traveling waves, it is common to adopt uniform-random connectivity, local connectivity, or a combination of the two Liang et al. (2023). We therefore simulate the same model with different connectivity to compare the results and to investigate whether the macroscopic waves presented in Figure 2 emerge only from the Allen connectivity. For comparison, we constructed two additional types of connectivity that are common in past studies of traveling waves:

  1. Uniform connectivity: Neurons are connected across the entire cortex randomly (Fig. 3b).

  2. Local connectivity: Neurons are connected to a fixed number of their closest neighbors (Fig. 3c).

Top: schematics of Allen (a), local (b), and uniform connectivity (c); bottom: visualization of outgoing connections from the same 50 randomly selected neurons in the primary visual cortex (VISp) for the three types of connectivity. Figure 3—figure supplement 1. Comparison of the three connectivity matrices.

Here we visualize the outgoing connections from 50 randomly selected neurons in the primary visual cortex to show the difference between the three types of connectivity. Under Allen connectivity, the outgoing connections project to different regions heterogeneously (Fig. 3a); with local connectivity, the outgoing connections only reach the areas adjacent to the sampled neurons (Fig. 3b); and with uniform connectivity, the outgoing connections tend to cover the whole cortex evenly (Fig. 3c). Although the topologies of the three types of connectivity differ, they share the same out-degree distribution by construction (see Construction and storage of the connectivity).

To properly quantify the traveling waves observed in simulations, we applied a phase-based method from image processing Fleet and Jepson (1990), which has been adopted to analyze neuronal dynamics Rubino et al. (2006), to our simulation data. First, we performed the Hilbert transform on the time series of the intracellular voltage for all neurons (Fig. 4a) to convert the voltage signal into a phase signal. Since neurons are spatially distributed according to a realistic dataset rather than arranged on the regular grid required for further calculations, we constructed a Cartesian grid defined by the Allen Brain Atlas at a resolution of 100 µm and interpolated the phase signal of the neurons onto the grid points (Fig. 4b) to obtain the phase field. We then compute the gradient field of the phase field on the 3D grid (Fig. 4c). This allows us to calculate the phase gradient directionality (PGD), an indicator of planar wave activity Rubino et al. (2006), which we use to measure the level of macroscopic wave activity in our simulations (see Quantitative measurement of neuronal activity).

Schematics of analyzing 3D spatiotemporal data using phase gradient.

(a) A single frame from a simulation with Allen connectivity, where neurons are colored based on their intracellular voltage. (b) The same frame as in (a), but with voltage data transformed into a generalized phase and interpolated onto a regular 3D grid. (c) Calculation of phase gradient field based on the phase field presented in (b). The arrows indicate the gradient directions at different locations. Please see Quantitative measurement of neuronal activity for details of the calculation presented in this figure.

In Figure 5 and Videos, we compare the PGD produced by three representative simulations with different connectivity but otherwise identical parameters. We compare three time windows of the same length (70 ms) during which PGD reaches its maximum in the corresponding simulations. Although both Allen and local connectivity produce visible wave activity, Allen connectivity produces waves that are much more macroscopic, while local connectivity produces locally organized planar waves, which are the most frequently observed form of waves in previous experimental studies Lubenov and Siapas (2009); Bhattacharya et al. (2022) and in computational models with local connectivity Keane and Gong (2015). Uniform connectivity, however, did not generate any apparent structure. This qualitative observation was confirmed by the PGD calculations: uniform connectivity exhibited the lowest PGD, whereas Allen connectivity resulted in significantly higher phase gradient directionality (Fig. 5d).

Allen connectivity produces a higher level of macroscopic wave activity than local and uniform connectivity.

(a-c) Time sequences of three simulations with Allen (a), local (b) and uniform (c) connectivity under the same stimulus. Colors represent the generalized phase of each neuron, which is further normalized for each simulation between 0 and 1. (d) Phase gradient directionality (PGD) of the corresponding simulations with respect to time. The three time windows are chosen such that the maximum PGD is reached. Please see Video 2 for animation of the three simulations.

Dynamic regime of the network affects macroscopic traveling waves

Traveling waves are known to propagate differently in a network under different states, depending on the excitability of the neurons and the strength of connections between them, even if the connectivity remains the same Muller et al. (2018). Having compared the wave activities that emerge from the three types of connectivity under one parameter setting, we now examine whether the Allen connectivity always generates more macroscopic wave activity under different conditions. To do so, we systematically vary the coupling strength of the network and the level of stimuli, while fixing the other parameters for each connectivity in each simulation (Simulation).

As a result, we observe that various kinds of oscillations arise from different connectivity under different conditions. As presented in Figure 6 and Videos, when the network is weakly coupled, all three types of connectivity produce a certain level of macroscopic wave activity, as indicated by the PGD calculation (Fig. 7b). Still, Allen connectivity produces the highest level of wave activity in this coupling regime. We also found the network to be well synchronized for both Allen and uniform connectivity, but not for local connectivity (Fig. 6a-c). However, when the coupling strength doubles from this level, the network goes into an asynchronous state for both Allen and uniform connectivity (Fig. 6d-f). This asynchronous firing behavior also disrupts the macroscopic wave activity that arose in the weakly coupled network. Interestingly, local connectivity retains the same level of macroscopic wave activity, which is higher than that of the Allen and uniform connectivity at this coupling strength (Fig. 7b). Furthermore, if we keep increasing the coupling strength of the network, synchrony returns to a high level for Allen and uniform connectivity, consequently increasing the level of macroscopic wave activity in the network (Fig. 6g-i).

Different oscillations and spatiotemporal patterns emerge from the cortical network with different connectivity and coupling strength.

For each panel, the raster plot (top) shows the firing behavior of the population and the visualization (bottom) shows the intracellular voltage activity across the cortical surface during the period marked by red dashed lines on the corresponding raster plot. There are 9 panels in total, corresponding to the three types of connectivity and three coupling strengths. Left to right: Allen, local, and uniform connectivity; top to bottom: weak (50% baseline), medium (baseline), and strong (150% baseline) coupling strength.

Macroscopic wave activity is strongly affected by the coupling strength of the network and the frequency of the oscillation.

(a-b) The average phase gradient directionality (PGD) calculated from simulations of different connectivity under different applied currents (a) or coupling strengths (b). The error bar represents the values from other simulations with different coupling strengths (a) or applied currents (b). (c) The average PGD calculated from simulations of different connectivity under different kinds of oscillations categorized by the dominant oscillation frequency (see Quantitative measurement of neuronal activity).

From these simulations, we found that the level of macroscopic wave activity is relatively insensitive to the magnitude of the stimuli: under the same coupling strength, changing the level of the stimuli does not strongly change the level of macroscopic wave activity. However, the coupling strength of the network can change the level of macroscopic wave activity drastically and non-monotonically (Figure 7 and Figure 7—figure supplement 1). We found that the PGD produced with local connectivity is largely consistent across different dynamic regimes, whereas Allen and uniform connectivity are much more sensitive to changes in coupling strength. For Allen connectivity, a high level of macroscopic wave activity emerges at both low and high coupling strengths, but not at medium coupling strength, where local connectivity instead produces the highest wave activity, as can be seen in Figure 6. At the same time, uniform connectivity consistently produces a lower level of wave activity across coupling strengths, except at the weakest coupling strengths.

Slow oscillations are more likely to form macroscopic traveling waves

In the study of neural oscillations, researchers have found that different frequencies usually indicate different brain functions and behaviors Buzsáki and Draguhn (2004). As mentioned in the introduction, traveling waves are observed across a wide range of frequencies, but it is unclear whether there is a relationship between the kind of oscillation and the spatial scope of the traveling wave that can arise from it. Because we are able to induce oscillations across a range of frequencies, as shown in Figure 6, these results allow us to investigate the relationship between the frequency of an oscillation and the scope of the corresponding wave. For each simulation, we conduct spectral analysis on the average voltage of the whole population and identify the dominant frequency of the voltage oscillation based on the power density. For each frequency band (Delta: 0.5 - 4 Hz, Theta: 4 - 8 Hz, Alpha: 8 - 12 Hz, Beta: 12 - 30 Hz, and Gamma: 30 - 100 Hz), we gather the simulations whose oscillations fall in that band and calculate the average PGD across these simulations (Quantitative measurement of neuronal activity).

Surprisingly, from Fig. 7c, we found that the level of PGD is highly correlated with the frequency band of the oscillation. For slow oscillations in the Delta frequency band, the Allen connectivity supports a much higher level of macroscopic wave activity on average than uniform and local connectivity. However, for Theta oscillations, the PGD for Allen connectivity drops significantly to a level comparable to local connectivity. For Alpha oscillations, the level of macroscopic wave activity rises again, and it drops for faster oscillations in the Beta and Gamma bands. Comparing across connectivity, local connectivity again supports a similar level of PGD across all oscillation frequencies, while Allen connectivity particularly favors slow frequencies. Uniform connectivity again produces the lowest level of macroscopic wave activity of the three. These findings suggest that slow oscillations are more likely to propagate as macroscopic traveling waves within the mouse cortex.

Discussion

Construction of the Allen connectivity

In this work, we present an algorithm that integrates a spatial transcriptomic dataset, which provides the spatial distribution and molecular properties of neurons across the mouse brain, with a mesoscopic connectivity dataset, which offers connectivity information between brain regions at a voxelized level. The algorithm is efficient and can be easily adapted to similar datasets of the mouse brain or even to other species. In essence, our algorithm belongs to the family of community-generating algorithms, in particular the stochastic block model Holland et al. (1983). The projectome Knox et al. (2018) we use to build the Allen connectivity is derived from a series of viral-tracing experiments that potentially reflect the connection strength between different regions Oh et al. (2014). However, the relationship between the actual connectome and the mesoscopic projectome remains unclear. For instance, we did not account for variations in synaptic strength across different connections. Moreover, the projection data used is not cell-type specific. A more recent thalamocortical dataset Harris et al. (2019) exists but is only available at the regional level and therefore was not used in this work. An important future direction will be to improve the algorithm to integrate more comprehensive data on different types of connectivity in the cortex.

Cortical model and simulation

Compared to previous large-scale models Capone et al. (2023); Roberts et al. (2019), our model advances by incorporating the detailed spatial distribution of neurons in the 3-D computational domain. Without this information, spatiotemporal dynamics cannot be accurately analyzed and visualized. Recent research Pang et al. (2023); Gulbinaite et al. (2024) has highlighted the significant role that brain geometry plays in neuronal functions. In this work, we did not further use the spatial information of neurons to model aspects such as the layered structure of the cortex, which would be an interesting direction for future work.

At the same time, although we are using one of the largest spatial transcriptomic datasets to date, 300,000 neurons are still only a small portion of the total neuronal population in the mouse cortex. The number of neurons in a network has important effects on aspects such as synaptic scaling. Within this population, we only modeled two kinds of neurons, which is also a simplification. Therefore, enriching our model with a larger number of neuron types and studying how they affect the conclusions of this paper would be highly meaningful.

For simulations, we chose to stimulate the entire population of layer 4 neurons as a way to generate traveling waves, a form of input that is rare under normal biological conditions. We also did not include any subcortical structure, such as the thalamus, which is vital for cortical dynamics like slow-wave activity Steriade et al. (1993). In this work, we chose two parameters that affect the dynamic regime of the network: the level of the stimuli, controlled by the applied current, and the coupling strength between neurons. Many other factors affect the dynamical regime of a network; for example, the balance between excitation and inhibition is vital for different cortical functions Ahmadian and Miller (2021). Incorporating these perspectives is also important for future investigations.

Synaptic and conduction delays are essential to cortical dynamics such as traveling waves Petkoski and Jirsa (2022); Deco et al. (2009); Golomb and Ermentrout (2000). In our model, because the synapses are modeled by a first-order kinetic model (Neuronal and synaptic model), both excitatory and inhibitory synapses have a rise time that mimics the synaptic delay. However, we did not include conduction delay in our study. Conduction delay is thought to be proportional to the white matter path length of the connection between two neurons Lemaréchal et al. (2022). In our model, although we have the 3D positions of neurons, we do not have geometric information about the cortical manifold: two neurons can be very close in Cartesian coordinates measured by Euclidean distance, but very far in terms of the length of the actual connection in the brain. Therefore, incorporating accurate conduction delays in the model is an important future direction.

Quantitative measurements of 3-D traveling waves

There are many techniques available for identifying and measuring large-scale neuronal spatiotemporal patterns Townsend and Gong (2018); Gutzen et al. (2024). However, most of these algorithms are designed to analyze 2-D data. Our simulation data is intrinsically 3-D, which presents new challenges for measurement. For instance, we need to identify the boundary of the neuron-containing domain to exclude unrealistic boundary effects when calculating the phase gradient field (Quantitative measurement of neuronal activity). Additionally, working with high-resolution 3-D data is computationally expensive. Despite these challenges, we extended and verified the previous phase-based method to quantify the level of macroscopic wave activity in our simulations. However, as noted in the original work Rubino et al. (2006), phase gradient directionality only indicates planar wave activity and may fail to identify other wave types such as spiral waves, which are observed in the cortex Muller et al. (2016). We also did not analyze other cortical dynamics, such as sinks and sources, which have appeared in our simulations. Extending more sophisticated methods from 2-D to 3-D to identify diverse spatiotemporal patterns in experimental or simulated data will be a meaningful direction for future research.

Comparing with experimental data

Throughout the paper, we have chosen the intracellular voltage of neurons as the primary data for analysis. The voltage signals, though they do not directly resemble experimental measures such as the local field potential (LFP), provide a quick way to analyze the overall dynamics of the whole network. There is strong experimental evidence of cortical spatiotemporal waves from measurements of LFP and multiunit activity reflecting extracellular electrical potentials in regions across the cortex Destexhe et al. (1999). Recent experimental studies of cortical wave patterns have achieved significant improvements in spatiotemporal resolution via methods such as wide-field calcium imaging Manley et al. (2024) and optical voltage imaging Liang et al. (2021). Our simulation framework offers many opportunities for comparison with these experimental data.

However, one must be careful when comparing these simulations to experimental data, especially LFPs or other methods measuring extracellular potential rather than intracellular activity. Further work is needed to connect high-resolution mathematical models with experimental data based on the imaging techniques used. For example, Davis et al. (2020) simulated LFP from a high-resolution leaky integrate-and-fire (LIF) network model to compare with real LFP recordings. In future work, we plan to develop new methods to compare model voltage traces with electrophysiological recordings.

Neural oscillation and traveling wave

Different neural oscillations are known to have different generating mechanisms. For example, gamma oscillations are conjectured to arise from the coordinated interaction of excitatory and inhibitory neurons in the cortex Buzsáki and Wang (2012), while slow-wave activity is conjectured to be caused by thalamocortical interaction Steriade et al. (1993). In our work, we did not fully investigate how the different kinds of oscillations emerge in the network, but only studied, a posteriori, how these oscillations produce different traveling waves in the cortex. Through statistical observation, we found that slow oscillations are more likely to form macroscopic traveling waves. In the future, it will be essential to combine the established understanding of the mechanisms behind different oscillations with experiments to further verify the conclusions drawn in this work.

Methods and Materials

Use of the spatial transcriptomic data

The original spatial transcriptomic study Zhang et al. (2023) provides four datasets of cell information identified from the mouse brain. Among these, we selected the largest dataset, Zhuang-ABCA-1, which includes 2.8 million cells from one hemisphere, with approximately 2.6 million (N = 2,616,328) cells registered to the Allen Brain Atlas. Of these 2.6 million cells, about half are non-neuronal and were removed before constructing the connectivity. Since we model only glutamatergic and GABAergic neurons, we kept only these two neuronal populations, so our final dataset consists of N = 1,050,440 neurons in one hemisphere (Fig. 1a). Within this population, the isocortex contains 297,812 neurons. The cortical connectivity is constructed by applying our algorithm to this cortical population.

Use of the voxelized connectivity data

The projection data we use in this study comes from Knox et al. (2018), which provides the connection strength between every pair of voxels in the isocortex. The most important step in using the projection data is determining the number of connections between neurons in any pair of voxels based on the connection strength. Let W represent the voxelized connectivity matrix, where Wab represents the connection strength from voxel a to voxel b. In this work, we assume the number of connections between two voxels is proportional to the connection strength, meaning the number of connections from voxel a to voxel b is Nab = λWab, where λ is a factor that is calculated from the density parameter.

Before building the connectivity, we first choose the density parameter, ρ, which is the average number of connections per neuron. Let the vector x represent the number of neurons in each voxel. If λ = 1, then the total number of connections in the whole connectivity is N_total = x^T W x, and the average number of connections per neuron is x^T W x / M, where M is the total number of neurons. Therefore, if we choose λ such that λ x^T W x / M = ρ, we obtain the desired average number of connections per neuron. Consequently, λ = ρM / (x^T W x). In the code, after calculating λ beforehand, we multiply the whole projection matrix W by λ to derive a connection matrix, each entry of which now represents the number of connections between the corresponding two voxels.
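As a minimal illustration of this normalization (a sketch with our own variable names, not the released code), the scaling factor and connection-count matrix can be computed as follows:

```python
import numpy as np

def connection_counts(W, neurons_per_voxel, rho):
    """Scale the projection matrix W so the mean number of connections
    per neuron equals the density parameter rho."""
    x = np.asarray(neurons_per_voxel, dtype=float)   # neurons in each voxel
    M = x.sum()                                      # total number of neurons
    n_total_unit = x @ W @ x                         # N_total when lambda = 1
    lam = rho * M / n_total_unit                     # lambda = rho * M / (x^T W x)
    return lam * W                                   # N_ab = lambda * W_ab
```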

Construction and storage of the connectivity

Three types of connectivity are used in our simulations: Allen, local, and uniform connectivity. Allen connectivity is constructed by applying our algorithm to the cortical neurons in Zhuang-ABCA1 based on the method described above. Uniform and local connectivity are constructed in a similar fashion: for each neuron, we first read out the number of outgoing connections in the corresponding Allen connectivity; we then randomly choose the same number of neurons from the total population without repetition for the uniform connectivity, or the same number of nearest neighbors for the local connectivity. In this way, the out-degree distributions of all three types of connectivity are identical, making them more comparable. In particular, local connectivity is constructed through nearest-neighbor coupling, where we use the K-nearest neighbors (KNN) algorithm Cover and Hart (1967) to choose a fixed number of outgoing connections to the nearest neighbors of each neuron based on spatial position. For all three types of connectivity, self-connections are not allowed.
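A minimal sketch of the nearest-neighbor construction is shown below, assuming neuron positions and the per-neuron out-degrees from the Allen connectivity are available (scipy's k-d tree stands in for the KNN search; names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_connectivity(positions, out_degree):
    """Wire each neuron to its nearest neighbors, copying its out-degree
    from the Allen connectivity so the degree distributions match."""
    tree = cKDTree(positions)
    edges = []
    for i, k in enumerate(out_degree):
        k = int(k)
        # Query k+1 neighbors because the closest point is the neuron itself.
        _, idx = tree.query(positions[i], k=k + 1)
        targets = [j for j in np.atleast_1d(idx) if j != i][:k]
        edges.extend((i, j) for j in targets)
    return edges
```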

When building the Allen connectivity, the large number of neurons poses a challenge for our connectivity algorithm. To tackle this challenge, we loop through the voxels instead of the neurons, which yields a significant speed-up. As a result, our algorithm takes O(MN) time, where M is the number of neurons and N is the number of voxels that contain neurons. N is bounded because the computational domain is finite, and in practice this algorithm is about two orders of magnitude faster than iterating through every neuron.

In our model, we choose ρ = 300, which means there are about 90 million synapses in the model, since we have 297,812 neurons. To store such a large connectivity, we use a compressed sparse row representation of the connectivity matrix, which is also used in similar packages Dai et al. (2020).
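As a brief illustration (using scipy as a stand-in for the engine's own storage; names are ours), the edge list can be packed into compressed sparse row arrays as follows:

```python
import numpy as np
from scipy.sparse import csr_matrix

def to_csr(edges, n_neurons):
    """Pack an edge list into CSR arrays: the downstream targets of neuron i
    are indices[indptr[i]:indptr[i + 1]]."""
    pre, post = np.asarray(edges, dtype=np.int64).T
    data = np.ones(len(pre), dtype=np.float32)       # uniform synaptic weights
    A = csr_matrix((data, (pre, post)), shape=(n_neurons, n_neurons))
    return A.indptr, A.indices
```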

Neuronal and synaptic model

We use a Hodgkin-Huxley type neuron model that can reproduce the different firing behaviors of cortical neurons Pospischil et al. (2008). Each neuron has four ionic channels: voltage-gated sodium, voltage-gated potassium, slow non-inactivating potassium, and leak channels. Pyramidal neurons have all four ionic channels, while inhibitory neurons lack the slow non-inactivating potassium channel because they are fast-spiking and do not usually exhibit firing adaptation. Detailed equations for each current and the evolution of the gating variables are provided in the appendix. The parameters of the neuronal model are also drawn from the original paper and presented in Table S1 in the appendix.

For synapses, we model both glutamatergic (AMPA) and GABAergic synapses in our cortical network. Both synapses are modeled by a conductance-based, first-order kinetic model. Please see the supplement for more details.

Simulation

In our system, we have to solve two sets of equations at every time step: the Hodgkin-Huxley equations for the electrophysiology model and the conductance-coupled synaptic equations. We solve the Hodgkin-Huxley equations using an established implicit leapfrog method DeWoskin et al. (2015), which allows large timesteps due to its increased stability. Both the electrophysiology and synaptic equations are solved in parallel on the GPU in our framework. The parallelism is organized per neuron: the set of Hodgkin-Huxley equations for each neuron is solved on an individual GPU thread at every timestep. Similarly, at every time step, we first detect all firing neurons and then assign a thread to process each firing neuron's downstream connections. Please see the supplement for a detailed description of the numerical methods.
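The per-timestep structure can be sketched as follows (NumPy arrays stand in for the GPU state; the placeholder membrane update is not the implicit-leapfrog Hodgkin-Huxley solve used in the actual engine, and all names are illustrative):

```python
import numpy as np

def network_step(V, s_syn, I_app, indptr, indices, dt,
                 g_syn=1.0, E_syn=0.0, V_spike=0.0, hh_update=None):
    """Advance the whole network by one timestep dt.

    All state is stored in arrays indexed by neuron; in the real engine each
    array element is handled by one GPU thread.
    """
    # 1) Conductance-based synaptic current for every neuron.
    I_syn = g_syn * s_syn * (V - E_syn)
    # 2) Per-neuron membrane update. hh_update stands for the implicit-leapfrog
    #    Hodgkin-Huxley solve; a crude leaky relaxation is used if none is given.
    if hh_update is None:
        V = V + dt * (-(V + 65.0) + I_app - I_syn)
    else:
        V = hh_update(V, I_app - I_syn, dt)
    # 3) Detect firing neurons and drive the synapses of their downstream
    #    targets, read from the CSR connectivity (one thread per firing neuron).
    fired = np.flatnonzero(V > V_spike)
    for i in fired:
        targets = indices[indptr[i]:indptr[i + 1]]
        s_syn[targets] = np.minimum(s_syn[targets] + 0.1, 1.0)
    # 4) Passive decay of the synaptic gating variables (5 ms decay, illustrative).
    s_syn *= np.exp(-dt / 5.0)
    return V, s_syn
```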

Each simulation lasts for 3 seconds, and we only record the last 1 second of the simulation to avoid the transient at the beginning of the simulation. The initial conditions of all neurons are randomly chosen at the beginning of each simulation. To be more specific, the initial voltage of all neurons is drawn from a uniform distribution between -65 mV and -5 mV, and all gate variables are drawn from a uniform distribution between 0.02 and 0.05. For each simulation, we collect the voltage traces of all neurons with a sampling frequency of 2000 Hz.

In all simulations, the entire excitatory population of layer 4 neurons is electrically stimulated with an applied current. All layer 4 neurons receive the same level of applied current, and all synapses have the same weight throughout each simulation. To adjust the level of stimuli, we vary the applied current from 0.9 nA to 1.9 nA in steps of 0.1 nA. In the simulations, we also modify the coupling strength of the network by scaling the conductance of both excitatory and inhibitory synapses at the same time. The baseline conductance of an excitatory synapse is 30 nS, and we vary the conductance from 15 to 50 nS in steps of 5 nS. The inhibitory conductance is always 10% of the excitatory conductance.
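For reference, the parameter grid described above can be written out as follows (run_simulation is a placeholder, not a function from the released code):

```python
import numpy as np
from itertools import product

applied_currents_nA = np.linspace(0.9, 1.9, 11)   # 0.9 ... 1.9 nA in 0.1 nA steps
g_exc_nS = np.arange(15, 51, 5)                   # 15 ... 50 nS in 5 nS steps

# One entry per simulation: applied current, excitatory and inhibitory conductance,
# with the inhibitory conductance fixed at 10% of the excitatory one.
sweep = [(I_app, g_exc, 0.1 * g_exc)
         for I_app, g_exc in product(applied_currents_nA, g_exc_nS)]
# for I_app, g_exc, g_inh in sweep:
#     run_simulation(I_app=I_app, g_exc=g_exc, g_inh=g_inh, duration_s=3.0)
```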

Quantitative measurement of neuronal activity

To better quantify spatiotemporal dynamics, we extend a phase-based method Rubino et al. (2006) that has primarily been used for 2-D data but naturally generalizes to 3-D. First, we apply the Hilbert transform to the voltage traces of all neurons and extract their phases, treating each neuron as a nonlinear oscillator. Next, we construct a regular 3-D Cartesian grid following the discretization of the Allen Brain Atlas Common Coordinate Framework (ABA-CCF), resulting in a grid of size 132 × 80 × 114 with a spatial resolution of 100 µm in each direction. We then linearly interpolate the generalized phase of all neurons to their adjacent grid points, effectively mapping the neuronal phase information onto the 3-D grid.
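The first two steps can be sketched as follows (scipy's hilbert and griddata stand in for the engine's own Hilbert transform and the VTK-based interpolation; names are illustrative):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.interpolate import griddata

def phase_field_frame(voltages, positions, grid_points, frame):
    """Instantaneous phase per neuron (Hilbert transform) interpolated onto a
    regular 3-D grid for one time frame.

    voltages    : (n_neurons, n_timepoints) voltage traces
    positions   : (n_neurons, 3) neuron coordinates in the ABA-CCF
    grid_points : (n_grid, 3) coordinates of the regular 100-um grid
    """
    phase = np.angle(hilbert(voltages, axis=1))       # phase of the analytic signal
    return griddata(positions, phase[:, frame], grid_points, method='linear')
```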

Let ϕ(x, t) represent the interpolated phase field on the 3-D grid. To compute the phase gradient directionality (PGD), we first calculate the phase gradient field, ∇ϕ, using a finite difference method. We then apply the following formula to calculate P, the PGD:

$$P(t) = \frac{\left\lVert \overline{\nabla \phi} \right\rVert}{\overline{\left\lVert \nabla \phi \right\rVert}}$$

Note that the horizontal bar denotes the average across the whole spatial domain, so P is a function of time. A higher PGD means the gradient field at different spatial locations is aligned, and a lower PGD means the opposite. When calculating the gradient field, boundary cells that contain no neurons are excluded. This is done with the PyVista package Sullivan and Kaszynski (2019) by masking off grid points that have no neurons to interpolate from. Both the phase field interpolation and the gradient computation use the intrinsic functions of the 3-D Visualization Toolkit (VTK) Schroeder et al. (1998).
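A compact NumPy sketch of the PGD calculation on the interpolated field (with a boolean mask marking the grid cells that contain neurons; this is a simplified stand-in for the VTK-based computation) is:

```python
import numpy as np

def pgd(phi, mask, spacing=100.0):
    """Phase gradient directionality of a 3-D phase field phi.

    mask is True for grid cells that contain neurons; boundary cells without
    neurons are excluded from the spatial averages.
    """
    gx, gy, gz = np.gradient(phi, spacing)            # finite-difference gradient
    grad = np.stack([gx, gy, gz], axis=-1)[mask]      # (n_valid_cells, 3)
    return np.linalg.norm(grad.mean(axis=0)) / np.linalg.norm(grad, axis=1).mean()
```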

For spectral analysis of the oscillation, we first calculate the global average voltage by averaging the intracellular voltages of all neurons. We then apply a discrete Fourier transform to the average voltage trace with the signal.periodogram function from the SciPy package Virtanen et al. (2020). Given a sampling frequency of 2000 Hz, we compute the one-sided power spectral density for frequencies ranging from 0 to 1000 Hz and identify the frequency with the highest density, which we define as the dominant frequency. This value is used in plotting Figures 7 and 2.
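A sketch of this step and of the band assignment used in Fig. 7c (scipy's periodogram; band edges as defined in the Results) is:

```python
import numpy as np
from scipy import signal

BANDS = {"Delta": (0.5, 4), "Theta": (4, 8), "Alpha": (8, 12),
         "Beta": (12, 30), "Gamma": (30, 100)}

def dominant_band(mean_voltage, fs=2000.0):
    """Dominant oscillation frequency of the population-average voltage and
    the frequency band it falls into."""
    f, pxx = signal.periodogram(mean_voltage, fs=fs)
    f_dom = f[1:][np.argmax(pxx[1:])]                 # skip the DC component
    band = next((name for name, (lo, hi) in BANDS.items() if lo <= f_dom < hi), None)
    return f_dom, band
```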

Visualization

Throughout the paper, we visualize neurons by coloring them based on their intracellular voltage. The visualization is straightforward for a single neuron, as in Figure 1, but in Figures 2, 5, and 6 we also need to calculate the average voltage of the 300 nearest neurons. This is done by identifying the closest 300 neighbors of each neuron based on their spatial positions and is computed in post-processing. Additionally, Fig. 2c and e are visualized based on the time each neuron first fires within the chosen time window. For Figure 6, most of the time windows were chosen manually to fully show the spatiotemporal patterns of the corresponding simulations. All visualizations are made with the PyVista package Sullivan and Kaszynski (2019).
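The neighborhood averaging can be sketched as follows (scipy's k-d tree as a stand-in; names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def smoothed_voltage(positions, voltages, k=300):
    """Color value for each neuron: the mean voltage of its k nearest neighbors."""
    tree = cKDTree(positions)
    _, idx = tree.query(positions, k=k)               # (n_neurons, k) neighbor indices
    return voltages[idx].mean(axis=1)
```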

Connectivity matrix with detailed region legend and degree distributions.

a The same connectivity matrix shown in Fig. 1c, but with a detailed region legend. b The out-degree and in-degree distributions of all neurons on a log2 scale.


Visualization of the macroscopic traveling waves at single-neuron resolution.

(a-b) Visualization of the same simulation presented in Fig. 2, but showing intracellular voltage at single-neuron resolution.

Comparison of the three connectivity matrices: a Allen, b local and c uniform connectivity. For local connectivity, the diagonal terms represent the connections within the same region. And the off-diagonal terms exist because these regions are physically adjacent to each other.

Detailed plots of phase gradient directionality (PGD) measured in simulations conducted in different conditions.

a-c The average PGD versus applied current when the coupling is weak (15 nS, a), medium (30 nS, b), or strong (45 nS, c). d-f The average PGD versus coupling strength when the applied current is weak (1.2 nA, d), medium (1.5 nA, e), or strong (1.8 nA, f).

Spectrum analysis and correlation with phase gradient directionality (PGD):

(a-c) In each plot, the top panel shows the average voltage of all neurons versus time, and the bottom panel shows the power spectral density of the corresponding voltage trace. For all three plots, the simulations are conducted with an applied current of 0.9 nA and coupling strengths of 15 nS (a), 30 nS (b), and 45 nS (c). (d) Each point represents a simulation, where the x-axis represents the dominant frequency of the oscillation calculated from the spectral analysis, y-axis represents the phase gradient directionality (PGD) measured in the same simulation.

Acknowledgements

We thank Ningyuan Wang for helpful discussions about this work and coding. This work is dedicated to Steven Brown, who provided many helpful suggestions.

We acknowledge the following funding: ARO MURI W911NF-22-1-0223, NSF DMS 2052499, and HFSP RGP0019/2018.

Appendix for manuscript: Realistic coupling enables flexible macroscopic traveling waves in the mouse cortex

Neuronal and synaptic model

As described in the main manuscript, each neuron is modeled as a single-compartment Hodgkin-Huxley neuron whose intracellular voltage is governed by the following equation:

$$C \frac{dV}{dt} = -I_L - I_{Na} - I_K - I_M + \frac{I_{app} + I_{syn}}{A},$$

where C and A represent the specific membrane capacitance and the membrane area, V is the membrane potential, I_L is the leak current, I_Na and I_K are the voltage-dependent sodium and potassium currents, and I_M is a slow non-inactivating potassium current responsible for firing-frequency adaptation. I_app and I_syn are the applied and synaptic currents. The ionic currents are based on the descriptions in [1]. The equations describing each ionic current are given below:

Ionic Currents

Synaptic Currents

In our model, the synaptic current I_syn can be an excitatory AMPA current or an inhibitory GABA current. For the synaptic current, we follow the conductance-based model described in [2]. Each synaptic current is calculated as

$$I_x = g_x s_x (V - E_x), \qquad \frac{ds_x}{dt} = \alpha_x T_{in} (1 - s_x) - \beta_x s_x,$$

where x indicates the synaptic type (AMPA or GABA), T_in is the total synaptic input, and s_x is the gating variable. To calculate the synaptic input for the ith neuron,

$$T_{in}^{(i)} = \sum_j W_{ji}\, T_{out}(V_j), \qquad T_{out}(V) = \frac{T_{max}}{1 + \exp\left(-(V - V_{Ts})/K_T\right)},$$

where W is the connectivity matrix, T_out is modeled as a sigmoidal function of the presynaptic voltage, and V_Ts and K_T are parameters that control the synaptic output of each neuron.

Based on this model, when a pre-synaptic neuron fires, it will generate a synaptic input that excites the post-synaptic neuron’s synaptic channel and further generates a post-synaptic current. All parameters used in the simulation are presented in Table S1.
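For concreteness, a minimal sketch of one explicit-Euler update of such a first-order kinetic synapse is given below; the parameter values are placeholders rather than those in Table S1:

```python
import numpy as np

def synapse_step(s, V_pre, V_post, dt, g=1.0, E_rev=0.0,
                 alpha=1.0, beta=0.2, T_max=1.0, V_T=2.0, K_T=5.0):
    """One explicit-Euler update of a first-order kinetic synapse.

    Parameter values are illustrative placeholders, not those in Table S1.
    """
    # Sigmoidal transmitter output of the presynaptic neuron.
    T_out = T_max / (1.0 + np.exp(-(V_pre - V_T) / K_T))
    # First-order kinetics of the gating variable s.
    s = s + dt * (alpha * T_out * (1.0 - s) - beta * s)
    # Conductance-based post-synaptic current.
    I_syn = g * s * (V_post - E_rev)
    return s, I_syn
```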

Numerical method

Both electrophysiology and synaptic equations are solved with an implicit second-order method, which is based on the trapezoidal rule. We refer to the original manuscript [3] for the details of the numerical method.

Parameters used in the simulations

Data availability

Raw data related to this study, including the neuronal and connectivity data, has been uploaded to the following repository: https://figshare.com/s/9d68a15dbf33a6f94880. The related code and simulation data will be uploaded upon acceptance for publication.