Connectome-based Hopfield networks as models of macro-scale brain dynamics.

(A) Hopfield artificial neural networks (HNNs) are a form of recurrent artificial neural networks that serve as content-addressable (“associative”) memory systems. Hopfield networks can be trained to store a finite number of patterns (e.g. via Hebbian learning, a.k.a. “fire together, wire together”). During training, the weights of the HNN are adjusted so that the stored patterns become stable attractor states of the network. Thus, when the trained network is presented with partial, noisy, or corrupted variations of the stored patterns, it can effectively reconstruct the original pattern via an iterative relaxation procedure that converges to the attractor states. (B) We consider regions of the brain as nodes of a Hopfield network. Instead of initializing the network with the structural wiring of the brain or training it to solve specific tasks, we set its weights empirically, using information about the interregional activity flow across regions, as estimated via functional brain connectivity. Capitalizing on strong analogies between the relaxation rule of Hopfield networks and the activity flow principle that links activity to connectivity in brain networks, we propose the resulting functional connectome-based Hopfield neural network (fcHNN) as a phenomenological model for macro-scale brain dynamics. (C) The proposed computational framework assigns an energy level, an attractor state and a position in a low-dimensional embedding to brain activation patterns. Additionally, it models how the entire state-space of viable activation patterns is restricted by the dynamics of the system and how alterations in activity and/or connectivity modify these dynamics.
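To make the construction concrete, the following minimal sketch (Python/NumPy) illustrates how an fcHNN can be set up from a connectivity matrix and relaxed to an attractor state. The tanh activation, the temperature value β, the convergence criterion and all variable names are illustrative assumptions, not the exact implementation used in the study.

```python
import numpy as np

def fchnn_relax(weights, activation, beta=0.04, tol=1e-8, max_iter=10_000):
    """Deterministic fcHNN relaxation: node activities are iteratively updated
    from the activity of all other regions, weighted by the (functional)
    connectivity, until the state stops changing, i.e. an attractor is reached."""
    state = np.asarray(activation, dtype=float).copy()
    for _ in range(max_iter):
        new_state = np.tanh(beta * weights @ state)  # activity-flow-like update
        if np.allclose(new_state, state, atol=tol):
            break
        state = new_state
    return state

def energy(weights, state):
    """Hopfield energy of an activation pattern; relaxation decreases it."""
    return -0.5 * state @ weights @ state

# Illustrative usage with a placeholder "connectome" (random symmetric matrix);
# in the fcHNN framework, `fc` would be the empirical functional connectivity.
rng = np.random.default_rng(0)
n_regions = 122
fc = np.corrcoef(rng.standard_normal((n_regions, 200)))
np.fill_diagonal(fc, 0)
attractor = fchnn_relax(fc, rng.standard_normal(n_regions))
```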

Attractor states and state-space dynamics of connectome-based Hopfield networks

(A) Top: During the so-called relaxation procedure, activities in the nodes of an fcHNN model are iteratively updated based on the activity of all other regions and the connectivity between them. The energy of a connectome-based Hopfield network decreases during the relaxation procedure until it reaches an equilibrium state with minimal energy, i.e. an attractor state. Bottom: Four attractor states of the fcHNN derived from the group-level functional connectivity matrix from study 1 (n=44). (B) Top: In the presence of weak noise (stochastic update), the system no longer converges to equilibrium. Instead, activity traverses the state landscape in a way restricted by the topology of the connectome and the “gravitational pull” of the attractor states. Bottom: We sample the “state landscape” by running the stochastic relaxation procedure for an extended amount of time (e.g. 100,000 consecutive stochastic updates), each point representing an activation configuration or state. To construct a low-dimensional representation of the state space, we take the first two principal components of the simulated activity patterns. The first two principal components explain approximately 58-85% of the variance of state energy (depending on the noise parameter σ, see Supplementary Figure 1). (C) We map all states of the state space sample to their corresponding attractor state with the conventional Hopfield relaxation procedure (A). The four attractor states are also visualized in their corresponding position on the PCA-based projection. The first two principal components yield a clear separation of the attractor state basins (cross-validated classification accuracy: 95.5%, Supplementary Figure 2). We refer to the resulting visualization as the fcHNN projection and use it to visualize fcHNN-derived and empirical brain dynamics throughout the rest of the manuscript. (D) The fcHNN of study 1 seeded with real activation maps (gray dots) of an example participant. All activation maps converge to one of the four attractor states during the relaxation procedure (without noise) and the system reaches equilibrium. Trajectories are colored by attractor state. (E) Illustration of the stochastic relaxation procedure in the same fcHNN model, seeded from a single starting point (activation pattern). The system does not converge to an attractor state but instead traverses the state space in a way restricted by the topology of the connectome and the “gravitational pull” of the attractor states. The shade of the trajectory changes with an increasing number of iterations. The trajectory is smoothed with a moving average over 10 iterations for visualization purposes. (F) Real resting state fMRI data of an example participant from study 1, plotted on the fcHNN projection. The shade of the trajectory changes with an increasing number of iterations. The trajectory is smoothed with a moving average over 10 iterations for visualization purposes. (G) Consistent with theoretical expectations, we observed that increasing the temperature parameter β led to an increasing number of attractor states, emerging in a nested fashion (i.e. the basin of a new attractor state is fully contained within the basin of a previous one).
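A minimal sketch of the stochastic relaxation and the PCA-based fcHNN projection follows (it assumes the same tanh update as the sketch above; the values of σ, β, the number of iterations and the use of scikit-learn's PCA are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

def fchnn_stochastic_sample(weights, beta=0.04, sigma=0.37, n_steps=100_000, seed=0):
    """Stochastic relaxation: Gaussian noise is added at every update, so the
    system keeps traversing the state landscape instead of settling into an
    attractor; returns one sampled activation pattern per iteration."""
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    state = rng.standard_normal(n)
    states = np.empty((n_steps, n))
    for t in range(n_steps):
        state = np.tanh(beta * weights @ state + rng.normal(scale=sigma, size=n))
        states[t] = state
    return states

# fcHNN projection: the first two principal components of the sampled states.
# `fc` is a region-by-region functional connectivity matrix (see sketch above).
# states = fchnn_stochastic_sample(fc)
# projection = PCA(n_components=2).fit(states)
# coords = projection.transform(states)  # 2D embedding used for visualization
```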
When contrasting the functional connectome-based HNN with a null model based on symmetry-retaining permuted variations of the connectome, we found that the topology of the original (unpermuted) functional brain connectome makes it significantly better suited to function as an attractor network than the permuted null model. The table contains the median number of iterations until convergence for the original and permuted connectomes for different temperature parameters β, together with the corresponding p-values. (H) We optimized the noise parameter σ of the stochastic relaxation procedure by evaluating 8 values spaced logarithmically between σ = 0.1 and 1, maximizing the similarity (in terms of the distribution of timeframes over the attractor basins) between the empirical data and the fcHNN-generated data. We used two null models to assess the significance of this similarity: one based on multivariate normal data, with the covariance matrix set to the functional connectome’s covariance matrix, and one based on spatial phase-randomization. P-values are given in the table at the bottom of the panel. The fcHNN only reached multistability with σ > 0.19, and it provided the most accurate reconstruction of the real data with σ = 0.37 (p=0.007 and p<0.001 for the two null models, respectively).
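The null-model comparison can be sketched as follows. The permutation scheme shown here simply shuffles the edge weights while retaining symmetry; whether this matches the exact permutation scheme and convergence statistic of the study is an assumption.

```python
import numpy as np

def permute_connectome(fc, rng):
    """Symmetry-retaining null model: shuffle the upper-triangular edge weights
    and mirror them, destroying network topology but keeping the weight
    distribution (and symmetry) intact."""
    n = fc.shape[0]
    iu = np.triu_indices(n, k=1)
    null = np.zeros_like(fc)
    null[iu] = rng.permutation(fc[iu])
    return null + null.T

def iterations_to_convergence(weights, start, beta=0.04, tol=1e-8, max_iter=10_000):
    """Number of deterministic relaxation steps needed to reach an attractor."""
    state = start.copy()
    for it in range(1, max_iter + 1):
        new_state = np.tanh(beta * weights @ state)
        if np.allclose(new_state, state, atol=tol):
            return it
        state = new_state
    return max_iter

# Compare the median number of iterations for the original connectome against
# the distribution obtained from many permuted connectomes (empirical p-value).
```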

fcHNNs reconstruct characteristics of real resting state brain activity.

(A) The four attractor states of the fcHNN model from study 1 reflect brain activation patterns with high neuroscientific relevance, representing sub-systems previously associated with “internal context” (blue), “external context” (yellow), “action” (red) and “perception” (green) (Golland et al., 2008; Cioli et al., 2014; Chen et al., 2018; Fuster, 2004; Margulies et al., 2016). (B) The attractor states show excellent replicability in two external datasets (study 2 and 3, mean correlation 0.93). (C) The fcHNN projection (first two PCs of the fcHNN state space) explains significantly more variance (p<0.0001) in the real resting state fMRI data than principal components derived from the real resting state data itself and generalizes better (p<0.0001) to out-of-sample data (study 2). Error bars denote 99% bootstrapped confidence intervals. (D) The fcHNN analysis reliably predicts various characteristics of real resting state fMRI data, such as the fraction of time spent in the basins of the four attractors (first column, p=0.007, contrasted to the multivariate normal null model), the distribution of the data on the fcHNN-projection (second column, p<0.001, contrasted to the multivariate normal null model) and the temporal autocorrelation structure of the real data (third column, p<0.001, contrasted to a null model based on temporally permuted data). This analysis was based on flow maps of the mean trajectories (i.e. the characteristic timeframe-to-timeframe transition direction) in fcHNN-generated data, as compared to a shuffled null model representing zero temporal autocorrelation. For more details, see Methods. Furthermore, (rightmost column), stochastic fcHNNs are capable of self-reconstruction: the timeseries resulting from the stochastic relaxation procedure mirror the covariance structure of the functional connectome the fcHNN model was initialized with. While the self-reconstruction property in itself does not strengthen the face validity of the approach (no unknown information is reconstructed), it is a strong indicator of the model’s construct validity; i.e. that systems that behave like the proposed model inevitably “leak” their weights into the activity time series.
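The self-reconstruction property mentioned in panel (D) can be checked with a few lines. This is a sketch only; the exact similarity metric used in the study may differ, and the function name is illustrative.

```python
import numpy as np

def self_reconstruction_similarity(simulated_states, fc):
    """Correlate the covariance (here: correlation) structure of fcHNN-generated
    timeseries with the functional connectome the model was initialized with."""
    sim_fc = np.corrcoef(simulated_states.T)   # region-by-region matrix
    iu = np.triu_indices_from(fc, k=1)         # unique region pairs
    return np.corrcoef(sim_fc[iu], fc[iu])[0, 1]
```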

Empirical Hopfield networks reconstruct real task-based brain activity.

(A) Functional MRI time-frames during pain stimulation from study 4 (second fcHNN projection plot) and self-regulation (third and fourth) are distributed differently on the fcHNN projection than brain states during rest (first projection, permutation test, p<0.001 for all). Energies, as defined by the Hopfield model, are also significantly different between rest and the pain conditions (permutation test, p<0.001), with higher energies during pain stimulation. Triangles denote participant-level mean activations in the various blocks (corrected for hemodynamics). Small circle plots show the directions of the change for each individual (points) as well as the mean direction across participants (arrow), as compared to the reference state (downregulation for the last circle plot, rest for all other circle plots). (B) Flow analysis (difference in the average timeframe-to-timeframe transition direction) reveals a non-linear difference in brain dynamics during pain and rest (left). When a weak pain-related signal is introduced in the fcHNN model during stochastic relaxation, it accurately reproduces these non-linear flow differences (right). (C) Simulating activity in the nucleus accumbens (NAc) (the region showing significant activity differences in Woo et al., 2015) reconstructs the observed non-linear flow difference between up- and downregulation (left). (D) Schematic representation of brain dynamics during pain and its up- and downregulation, visualized on the fcHNN projection. In the proposed framework, pain does not simply elicit a direct response in certain regions but instead shifts spontaneous brain dynamics towards the “action” attractor, converging to a characteristic “ghost attractor” of pain. Downregulation by NAc activation exerts force towards the attractor of internal context, leading the brain to “visit” pain-associated states less frequently. (E) Visualizing meta-analytic activation maps (see Supplementary Table 1 for details) on the fcHNN projection captures close relationships between the corresponding tasks and (F) serves as a basis for an fcHNN-based theoretical interpretative framework for spontaneous and task-based brain dynamics. In the proposed framework, task-based activity is not a mere response to external stimuli in certain brain locations but a perturbation of the brain’s characteristic dynamic trajectories, constrained by the underlying functional connectivity. From this perspective, “activity maps” from conventional task-based fMRI analyses capture time-averaged differences in these whole-brain dynamics.
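The flow analysis in panel (B) can be approximated with the following sketch. The grid size, the use of a shared extent, and the scale at which a task-related signal is injected during stochastic relaxation are illustrative assumptions.

```python
import numpy as np

def flow_map(coords, bins=20, extent=None):
    """Average timeframe-to-timeframe transition direction ("flow") on the
    2D fcHNN projection, computed on a regular grid of bins."""
    steps = np.diff(coords, axis=0)          # transition vectors between frames
    origin = coords[:-1]
    if extent is None:
        extent = [(origin[:, d].min(), origin[:, d].max()) for d in (0, 1)]
    edges = [np.linspace(lo, hi, bins + 1) for lo, hi in extent]
    ix = np.clip(np.digitize(origin[:, 0], edges[0]) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(origin[:, 1], edges[1]) - 1, 0, bins - 1)
    flow = np.zeros((bins, bins, 2))
    counts = np.zeros((bins, bins, 1))
    np.add.at(flow, (ix, iy), steps)
    np.add.at(counts, (ix, iy), 1)
    return flow / np.maximum(counts, 1)

# Flow difference between conditions, e.g. pain vs. rest (use a shared extent so
# that both maps are defined on the same grid):
# flow_diff = flow_map(coords_pain, extent=ext) - flow_map(coords_rest, extent=ext)
# A weak task-related signal can be injected during stochastic relaxation by
# adding a scaled activation map to every update, e.g.:
# state = np.tanh(beta * weights @ state + noise + signal_strength * pain_map)
```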

Connectome-based Hopfield analysis of autism spectrum disorder.

(A) The distribution of time-frames on the fcHNN-projection separately for ASD patients and typically developing control (TDC) participants. (B) We quantified attractor state activations in the Autism Brain Imaging Data Exchange datasets (study 7) as the individual-level mean activation of all time-frames belonging to the same attractor state. This analysis captured alterations similar to those previously associated with ASD-related perceptual atypicalities (visual, auditory and somatosensory cortices) as well as atypical integration of information about the “self” and the “other” (default mode network regions). All results are corrected for multiple comparisons across brain regions and attractor states (122*4 comparisons) with Bonferroni correction. See Table 1 and Supplementary Figure 9 for detailed results. (C) The comparison of data generated by fcHNNs initialized with ASD and TDC connectomes, respectively, revealed a characteristic pattern of differences in the system’s dynamics, with increased pull towards (and potentially a higher separation between) the action and perception attractors and a lower tendency of trajectories going towards the internal and external attractors. Abbreviations: MCC: middle cingulate cortex, ACC: anterior cingulate cortex, pg: perigenual, PFC: prefrontal cortex, dm: dorsomedial, dl: dorsolateral, STG: superior temporal gyrus, ITG: inferior temporal gyrus, Caud/Acc: caudate-accumbens, SM: sensorimotor, V1: primary visual, A1: primary auditory, SMA: supplementary motor cortex, ASD: autism spectrum disorder, TDC: typically developing control.
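A sketch of how the individual-level attractor-state activations in panel (B) can be computed, assuming each timeframe has already been assigned to an attractor basin via deterministic relaxation (function and variable names are illustrative):

```python
import numpy as np

def attractor_state_activation(timeframes, basin_labels, n_attractors=4):
    """Individual-level attractor-state activation: the mean activation map of
    all timeframes assigned to the same attractor basin (NaN if a basin is
    never visited by this participant)."""
    n_regions = timeframes.shape[1]
    return np.stack([
        timeframes[basin_labels == k].mean(axis=0)
        if np.any(basin_labels == k) else np.full(n_regions, np.nan)
        for k in range(n_attractors)
    ])
```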

The top ten largest changes in average attractor-state activity between autistic and control individuals.

Mean attractor-state activity changes are presented in order of their absolute effect size. All p-values are based on permutation tests (shuffling the group assignment) and corrected for multiple comparisons (Bonferroni correction). For a comprehensive list of significant findings, see Supplementary Figure 9.
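A minimal sketch of the group-label permutation test with Bonferroni correction (the number of permutations and the test statistic, a difference of group means, are illustrative assumptions):

```python
import numpy as np

def permutation_test(values_asd, values_tdc, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means,
    obtained by shuffling the group assignment."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([values_asd, values_tdc])
    n_a = len(values_asd)
    observed = values_asd.mean() - values_tdc.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[:n_a].mean() - perm[n_a:].mean()
    return float(np.mean(np.abs(null) >= np.abs(observed)))

# Bonferroni correction across 122 regions x 4 attractor states:
# p_corrected = min(p_uncorrected * 122 * 4, 1.0)
```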

Datasets and studies.

The table includes details about the study modality, analysis aims, sample size used for analyses, mean age, gender ratio, and references.