Author response:
eLife Assessment
This study uses a Bayesian framework to characterize latent brain state dynamics associated with memory encoding and performance in children, as measured with functional magnetic resonance imaging. The novelty of the approach offers valuable insights into memory-related brain activity, but the consideration of developmental changes in memory and brain dynamics, and the evidence to support the proposed mapping between specific states and distinct aspects of memory, are incomplete. This work will be of interest to researchers interested in cognitive neuroscience and the development of memory.
We are grateful to the editor and reviewers for their positive feedback and constructive evaluation. Their comments have identified important areas where the manuscript can be strengthened. Below, we outline our planned revisions.
Reviewer #1 (Public review):
Zeng et al. characterized the dynamic brain states that emerged during episodic encoding and the reactivation of these states during the offline rest period in children aged 8-13. In the study, participants encoded scene images during fMRI and later performed a memory recognition test. The authors adopted the BSDS approach and identified four states during encoding, including an "active-encoding" state. The occupancy rate of, and the state transition rates towards, this active-encoding state positively predicted memory accuracy across participants. The authors then decoded the brain states during pre- and post-encoding rests with the model trained on the encoding data to examine state reactivation. They found that the state temporal profile and transition structure shifted from encoding to post-encoding rest. They also showed that the mean lifetime and stability (measured with self-transition probability) of the "default-mode" state during post-encoding rest predict memory performance. How brain dynamics during encoding and offline rest support long-term memory remains understudied, particularly in children. Thus, this study addresses an important question in the field. The authors implemented an advanced computational framework to identify latent brain states during encoding and carefully characterized their spatiotemporal features. The study also showed evidence for the behavioral relevance of these states, providing valuable insights into the link between state dynamics and successful encoding and consolidation.
We thank Reviewer #1 for the positive and constructive feedback on our study. In response, we plan to incorporate detailed methodological justifications and a thorough analysis of limitations. We also plan to enhance the overall logical coherence of the manuscript, ensuring a more robust and scientifically sound presentation.
Weaknesses:
(1) If applicable, please provide information on the decoding performance of states during pre- and post-encoding rests. The Methods noted that the authors applied a threshold of 0.1 z-scored likelihood, and based on Figure S2, it seems like most TRs were assigned a reinstated state during post-encoding rest. It would be useful to know, for the decodable TRs, how strong the evidence was in favor of one state over others. Further, was decoding performance better during post- vs. pre- encoding rest? This is critical for establishing that these states were indeed "reinstated" during rest. The authors showed individual-specific correlations between encoding and post-encoding state distribution, which is an important validation of the method, but this result alone is not sufficient to suggest that the states during encoding were the ones that occurred during rest. The authors found that the state dynamics vary substantially between encoding and rest, and it would be helpful to clarify whether these differences might be related to decoding performance. I am also curious whether, if the authors apply the BSDS approach to independently identify brain states during rest periods (instead of using the trained model from encoding), they find similar states during rest as those that emerged during encoding?
We plan three additional analyses to strengthen the evidence for state reinstatement during rest. First, we will report quantitative decoding confidence metrics for each decoded time point, including the log-likelihood difference between the winning state and the next-best state. We will compare these distributions between pre- and post-encoding rest to test whether decoding quality differs between conditions, as the reviewer suggests. Second, we will provide a more detailed characterization of the decoding process, including the proportion of TRs that survive the log-likelihood threshold of 0.1 during pre- vs. post-encoding rest and whether this proportion relates to memory performance. Third, we will train an independent BSDS model directly on the rest data (rather than using the encoding-trained model) and assess the degree of correspondence between the independently discovered rest states and the encoding states in terms of amplitude profiles and covariance structures. Convergence between the two approaches would provide strong validation that the encoding-defined states genuinely re-emerge at rest. Together with our existing analyses, these additions will strengthen the evidence for our claims.
(2) During post-encoding rest, the intermediate activation state (S1) became the dominant state. Overall, the paper did not focus too much on this state. For example, when examining the relationship between state transitions and memory performance, the authors also did not include this state as a part of the analyses presented in the paper (lines 203-211). Could the author report more information about this state and/or discuss how this state might be relevant to memory formation and consolidation?
We thank the reviewer for this suggestion. During encoding, S1 had the lowest occupancy (~10%) and showed no significant relationship with memory performance, which led us to interpret it as a non-essential transient configuration. In the revision, we will provide a more thorough characterization of S1, and conduct correlation analyses to probe whether its dynamic properties during post-encoding rest correlate with individual memory performance.
(3) Two outcome measures from the BSDS model were the occupancy rate and the mean lifetime. The authors found a significant association with behavior and occupancy rate in some analyses, and mean lifetime in others. The paper would benefit from a stronger theoretical framing explaining how and why these two different measures provide distinct information about the brain dynamics, which will help clarify the interpretation of results when association with behavior was specific to one measure.
We thank the reviewer for this suggestion. Occupancy rate and mean lifetime, while related, capture fundamentally different aspects of brain state dynamics. Occupancy rate reflects the total proportion of time the brain spends in a given state, capturing the overall prevalence of that configuration across the scanning session. Mean lifetime, by contrast, measures the average uninterrupted duration of each state visit, indexing the temporal stability or persistence of a given network configuration once it is entered. Critically, two states could have identical occupancy rates but very different mean lifetimes (e.g., a state visited frequently but briefly versus one entered rarely but sustained for long periods), implying distinct underlying neural dynamics. In the context of memory, high occupancy of the active-encoding state may reflect repeated engagement of encoding-optimal circuits, while a long mean lifetime of the default-mode state during rest may reflect sustained consolidation-related processing. We will expand the theoretical framework in the revised manuscript to articulate these distinctions and connect them to extant findings suggesting that temporal stability versus frequency of state visits may have dissociable behavioral correlates in working memory and episodic memory (He et al., 2023; Stevner et al., 2019).
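To make the distinction concrete, both measures can be computed from a per-TR state sequence, as in the minimal sketch below (illustrative toy sequences and helper functions, not the BSDS implementation or our data):

```python
import numpy as np

def occupancy_rate(states, state):
    """Fraction of TRs assigned to `state`."""
    return float(np.mean(np.asarray(states) == state))

def mean_lifetime(states, state):
    """Average length (in TRs) of uninterrupted visits to `state`."""
    visits, run = [], 0
    for s in states:
        if s == state:
            run += 1
        elif run > 0:
            visits.append(run)
            run = 0
    if run > 0:
        visits.append(run)
    return float(np.mean(visits)) if visits else 0.0

# Two toy sequences with identical occupancy (50%) of state 1,
# but very different temporal structure:
frequent_brief = [1, 0, 1, 0, 1, 0, 1, 0]   # four visits of 1 TR each
rare_sustained = [1, 1, 1, 1, 0, 0, 0, 0]   # one visit of 4 TRs

print(occupancy_rate(frequent_brief, 1))  # 0.5
print(occupancy_rate(rare_sustained, 1))  # 0.5
print(mean_lifetime(frequent_brief, 1))   # 1.0
print(mean_lifetime(rare_sustained, 1))   # 4.0
```

The two toy sequences make the dissociation explicit: equal prevalence, fourfold difference in persistence.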
(4) For performance on a memory recognition test, d' is a more common metric in the literature as it isolates the memory signal for the old items from response bias. According to Methods (line 451), the authors have computed a different metric as their primary behavioral measure (hits + correct rejections - misses - false alarms). Please provide a rationale for choosing this measure instead. Have the authors considered computing d' as well and examining brain-behavior relationships using d'?
Our primary memory recognition metric, computed as (hits + correct rejections − misses − false alarms) / total trials, provides an unbiased linear estimate of discrimination ability that is directionally consistent with d'. We selected this measure because it is particularly robust with limited trial counts per condition (Verde et al., 2006; Wickens, 2001). Nonetheless, we agree that reporting d' is important for comparability with the broader literature. In the revision, we will compute d' for each participant and conduct parallel brain–behavior correlation analyses to demonstrate that our findings are robust across both metrics.
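The two metrics are straightforward to compute from the four response counts. The sketch below illustrates the planned parallel analysis; the log-linear correction for extreme hit/false-alarm rates (Hautus, 1995) and the example counts are illustrative assumptions, not values from our study:

```python
from scipy.stats import norm

def corrected_recognition(hits, crs, misses, fas):
    """Linear accuracy metric: (H + CR - M - FA) / total trials."""
    total = hits + crs + misses + fas
    return (hits + crs - misses - fas) / total

def d_prime(hits, misses, fas, crs):
    """Signal-detection d' with a log-linear correction so that
    hit/false-alarm rates of exactly 0 or 1 remain finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical participant: 100 old and 100 new trials
acc = corrected_recognition(hits=80, crs=85, misses=20, fas=15)
dp = d_prime(hits=80, misses=20, fas=15, crs=85)
print(acc)  # 0.65
```

Because both measures are monotonic in hit and correct-rejection rates, brain–behavior correlations are expected to be directionally consistent across the two, which the planned analyses will verify.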
(5) While this study examined brain state dynamics in children, there was no adult sample to compare with. Therefore, it is hard to conclude whether the findings are specific to children (or developing brains). It would be helpful to discuss this point in the paper.
We thank the reviewer for raising this point. While several studies have documented memory-related replay and reinstatement in adults at both the regional and systems levels (Tambini et al., 2017; Wimmer et al., 2020), few have examined whether analogous state-level reinstatement occurs in children. Our study was motivated by this gap: we sought to test whether children show dynamic brain state reinstatement mechanisms similar to those described in adults. However, we acknowledge that without a direct adult comparison, we cannot determine whether the observed patterns are unique to children or reflect general principles of episodic memory organization. In the revised manuscript, we will: (a) frame the study more carefully as examining whether established state-level consolidation mechanisms also operate during childhood, (b) discuss findings in relation to adult studies, and (c) include exploratory analyses of age-related variability in both memory performance and BSDS dynamics within our sample, while acknowledging that the narrow age range (8–13) and small sample size limit the power of such developmental analyses. We will clearly identify the absence of an adult comparison as a limitation.
Reviewer #2 (Public review):
This paper investigates the latent dynamic brain states that emerge during memory encoding and predict later memory performance in children (N = 24, ages: 8 -13 years). A novel computational approach (Bayesian Switching Dynamic Systems, BSDS) discovers latent brain states from fMRI data in an unsupervised and parameter-free manner that is agnostic to external stimuli, resulting in 4 states: an active-encoding state, a default-mode state, an inactive state, and an intermediate state. The key finding is that the percentage of time occupied in the active-encoding state (characterized by greater activity in hippocampal, visual, and frontoparietal regions), as well as greater transitions to this state, predicts memory accuracy. Memory accuracy was also predicted by the mean lifetime and transitions to the default-mode state (characterized by greater activity in medial prefrontal cortex and posterior cingulate cortex) during post-encoding rest. Together, the results provide insights into dynamic interactions between brain regions that may be optimal for encoding novel information and consolidating memories for long-term retention.
We thank Reviewer #2 for recognizing the novelty and broader utility of our methodology and for noting that the manuscript is well-written and concise.
Weaknesses:
(1) The study focuses on middle childhood, but there is a lack of engagement in the Introduction or Discussion about what is known about memory development and the brain during this period. Many of the brain regions examined in this study, particularly frontoparietal regions, undergo developmental changes that could influence their involvement in memory encoding and consolidation. The paper would be strengthened by more directly linking the findings to what is already known about episodic memory development and the brain.
We thank the reviewer for this suggestion. In response, we will substantially expand the Introduction and Discussion to situate our findings within the developmental cognitive neuroscience literature on episodic memory. In particular, we will address the protracted developmental trajectory of frontoparietal regions, the well-documented maturation of hippocampal–cortical connectivity during middle childhood, and how these developmental changes may influence the brain state configurations we observed (He et al., 2023; Ryali et al., 2016). This will provide the necessary developmental context for interpreting our state dynamics results.
(2) A more thorough overview of the BSDS algorithm is needed, since this is likely a novel method for most readers. Although many of the nitty-gritty details can be referenced in prior work, it was unclear from the main text if the BSDS algorithm discovered latent states based on activation patterns, functional connectivity, or both. Figure 1F is not very informative (and is missing labels).
We thank the reviewer for this suggestion. We agree that a more accessible overview of the BSDS algorithm (Lee et al., 2025; Taghia et al., 2018) is needed. In the revision, we will expand the Methods and provide a concise algorithmic overview in the main text that clarifies the following key points: (a) BSDS operates on multivariate time series from the ROIs and infers latent brain states defined jointly by their mean activation patterns (amplitude vectors) and inter-regional covariance matrices (functional connectivity); (b) it employs a hidden Markov model framework with Bayesian inference and automatic relevance determination to identify the number of states without manual specification; and (c) state assignments are made at each TR, yielding a temporal sequence that enables computation of occupancy rates, mean lifetimes, and transition probabilities. We will also revise Figure 1F to include appropriate labels and a clearer schematic of the model's inputs, latent structure, and outputs.
(3) A further confusion about the BSDS algorithm was whether it necessarily had to work on the rest data. Figure 4A suggests that each TR was assigned one of the four states based on the maximum win from the log-likelihood estimation. Without more details about how this algorithm was applied to the rest data, it is difficult to evaluate the claim on page 14 about the spontaneous emergence of the states at rest.
The key methodological point is that the BSDS model, once trained on encoding data, can be applied to new (rest) time series via log-likelihood estimation: for each TR during rest, the model computes the log-likelihood of each state given the observed multivariate signal, and the state with the maximum log-likelihood is assigned to that TR. This "decoding" approach tests whether the spatial configurations learned during encoding are present during rest, rather than fitting new states de novo. We applied a threshold to the log-likelihood values to exclude TRs where the evidence for any single state was weak, thus controlling for potential misassignment. We will substantially clarify this process in the revised Methods and main text, and as described in our response to Reviewer #1 point 1, we will also conduct additional analyses to address the concerns raised.
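The decoding step described above can be sketched as follows. This is a deliberately simplified illustration that assumes each state is summarized by a Gaussian (mean activation vector plus covariance matrix) and that weak evidence is screened out by z-scoring the winning log-likelihoods; it is not the BSDS decoder itself, whose exact inference procedure is given in Taghia et al. (2018):

```python
import numpy as np
from scipy.stats import multivariate_normal

def decode_rest(X, means, covs, z_thresh=0.1):
    """Assign each rest TR to the encoding-derived state with the
    maximal log-likelihood; mark TRs with weak evidence as unassigned.

    X: (n_TRs, n_ROIs) rest time series.
    means, covs: per-state mean vectors and covariance matrices
                 learned from the encoding data.
    """
    # Log-likelihood of every TR under every state: (n_TRs, n_states)
    ll = np.stack([multivariate_normal.logpdf(X, m, c)
                   for m, c in zip(means, covs)], axis=1)
    winner = ll.argmax(axis=1)          # best-fitting state per TR
    win_ll = ll.max(axis=1)             # its log-likelihood
    z = (win_ll - win_ll.mean()) / win_ll.std()
    # TRs whose z-scored winning likelihood falls below threshold
    # are left unassigned (-1) to control for misassignment.
    return np.where(z >= z_thresh, winner, -1), ll

# Synthetic check: two well-separated 3-ROI "states"
rng = np.random.default_rng(0)
means = [np.zeros(3), np.full(3, 5.0)]
covs = [np.eye(3), np.eye(3)]
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
               rng.normal(5.0, 1.0, (50, 3))])
labels, _ = decode_rest(X, means, covs)
```

On such well-separated synthetic data, nearly all assigned TRs recover the generating state, which is the logic behind the reinstatement test: if the encoding states are absent from rest, decoded labels should carry no such structure.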
(4) Although the BSDS algorithm was validated in prior simulations and task-based fMRI using sustained block designs in adults, it is unclear whether it is appropriate for the kind of event-related design used in the current study. Figure 1G shows very rapid state changes, which is quantified in the low mean lifetime of the states (between 1-3 TRs on average) in Figure 4C. On the one hand, it is a strength of the algorithm that it is not necessarily tied to external stimuli. On the other hand, it would be helpful to see simulations validating that rapid transitions between states in fMRI data are meaningful and not due to noise.
This is an important methodological question. The rapid state changes observed in our event-related design (mean lifetimes of 1–3 TRs) differ from the longer state durations typically observed with block designs (He et al., 2023; Zeng et al., 2024), where sustained cognitive demands stabilize brain configurations. We believe these rapid transitions are consistent with the inherent dynamics of event-related encoding, where each trial involves rapid shifts between sensory processing, memory binding, and attentional engagement. Several considerations support the meaningfulness of these transitions: (a) the identified states have interpretable amplitude profiles consistent with well-established memory-related brain systems; (b) state dynamics show statistically significant, directionally consistent correlations with subsequent memory performance; and (c) the transition structure during encoding is distinct from that observed during rest, indicating sensitivity to task demands. Nonetheless, we acknowledge the concern about noise and will conduct additional analyses in the revision to address it.
(5) The Methods section mentions that participants actively imagined themselves within the encoded scenes and were instructed to memorize the images for a later test during the post-encoding rest scan. This detail needs to be included in the main text and incorporated into the interpretation of the findings, as there are likely mechanistic differences between spontaneous memory replay/reinstatement vs. active rehearsal.
We thank the reviewer for this suggestion. We will include these experimental details in the main text and incorporate them into the interpretation of our findings in the context of spontaneous memory replay/reinstatement vs. active rehearsal (Liu et al., 2019; Wimmer et al., 2020).
(6) Information about the general linear model used to discover the 16 ROIs that showed a subsequent memory effect is missing, such as: covariates in the model (motion, etc.), group analysis approach (parametric or nonparametric), whether and how multiple-comparisons correction was performed, if clusters were overlapping at all or distinct, if the total number of clusters was 16 or if this was only a subset of regions that showed the effect.
We apologize for the missing methodological details. In the revised manuscript, we will provide complete information on the general linear model used to identify the 16 ROIs, including: the event regressors and parametric modulators included in the model, nuisance covariates (motion parameters, white matter and CSF regressors), the group-level analysis approach and statistical thresholding, the method for multiple-comparisons correction, whether the 16 ROIs represent all significant clusters or a subset, and whether any clusters were spatially overlapping. We will also clarify how peak voxels were selected for ROI definition.
Reviewer #3 (Public review):
This paper uses a novel method to look at how stable brain states and the transitions between them promote memory formation during encoding and post-encoding rest in children. I think the paper has some weaknesses (detailed below) that mean that the authors fall short of achieving their aims. Although the paper has an interesting methodological approach, the authors need better logic, and are potentially "double dipping" in their results - meaning their logic is circular. I think the method that they are using could be useful to the broader neuroimaging community, although they need to make this argument clearer in the paper.
We thank Reviewer #3 for recognizing the novelty of our approach and its potential utility for the broader neuroimaging community.
(1) The authors use children as their study subjects but fail to reconcile why children are used, if the same phenomena are expected to be seen in adults (or only children), and if and how their findings change with age across an age range that ranges from middle childhood into early adolescence. They need to include more consideration for the development of their subject population. The authors should make it clear why and how memory was tested in children and not adults. Are adults and children expected to encode and consolidate in a similar manner to children? Do the findings here also apply to adults? How was the age range of 8-13-year-old children selected? Why didn't the authors look at change with age? Does memory performance change with age? Do the BSDS dynamics change with age in the authors' sample?
Our study was motivated by the observation that while adult studies have documented memory replay and reinstatement, very little is known about whether these dynamic state-level mechanisms operate during middle childhood, a period characterized by substantial improvements in episodic memory ability and ongoing maturation of frontoparietal and hippocampal–cortical circuits. The age range of 8–13 was defined a priori based on typical developmental classifications of middle childhood through early adolescence, representing a period when episodic memory abilities are developing rapidly.
In response to the reviewer's specific questions: (a) we will conduct exploratory analyses testing whether memory accuracy, BSDS state dynamics (occupancy, mean lifetime, transitions), and brain–behavior correlations vary as a function of age within our sample; (b) we will clearly discuss whether adults are expected to show similar patterns, drawing on the extant adult literature; and (c) we will acknowledge as a limitation that our sample size (N = 24) and narrow age range provide limited statistical power for detecting continuous age-related changes, and that a dedicated cross-sectional or longitudinal developmental design would be needed to draw firm conclusions about developmental trajectories. Please also see responses to Reviewer #1 point 5 and Reviewer #2 point 1.
(2) The authors look for brain state dynamics within a preselected set of ROIs that are selected because they display a subsequent memory effect. This is problematic because the state that is most associated with subsequent memory (S3, or State 3) is also the one that shows most activity in these regions (that have already been a priori selected due to displaying a subsequent memory effect). This logic is circular. It would be helpful if they could look at brain state dynamics in a more ROI agnostic whole brain approach so that we can learn something beyond what a subsequent memory analysis tells us. I think the authors are "double dipping" in that they selected regions for further analysis based on a subsequent memory association (remembered > forgotten contrast) and then found states within those regions showing a subsequent memory effect to further analyze for being associated with subsequent memory. Would it be possible instead to do a whole-brain analysis (something a bit more agnostic to findings) using the BSDS framework, and then, from a whole-brain perspective, look for particular brain states associated with subsequent memory? As it stands, it looks like S3 (state 3) has greater overall activation in all brain regions associated with subsequent memory, so it makes sense that this brain state is also most associated with subsequent memory. The BSDS analysis is therefore not adding anything new beyond what the authors find with the simple subsequent memory contrast that they show in Figure 1C. 
This particularly affects the following findings: (a) active-encoding state occupancy rate correlated positively with memory accuracy; (b) transitions to the active-encoding state were beneficial, whereas transitions toward the inactive state (S4) were detrimental, with incoming transitions showing negative correlations with memory accuracy; and (c) the active-encoding state serves as a "hub" configuration that facilitates memory formation, while pathways leading to this state enhance performance and transitions away from it impair encoding.
We appreciate this critique, which raises an important concern about analytical circularity.
a) Why BSDS adds information beyond the static subsequent memory contrast. The reviewer notes that S3 (the active-encoding state) shows high activation in the same regions selected by the subsequent memory contrast, and therefore questions whether BSDS provides new information. We respectfully argue that BSDS captures dimensions of neural organization that a static contrast cannot. Specifically: (a) the subsequent memory contrast identifies which regions are differentially active for remembered vs. forgotten items, averaged across the entire encoding session; it provides no temporal information about when or for how long these regions are co-active; (b) BSDS reveals the moment-to-moment temporal evolution of brain states, including the duration and stability of each configuration (mean lifetime), which independently predicts behavior; (c) BSDS uniquely captures transition dynamics (the rates and patterns of switching between states), which we show are predictive of memory in ways not derivable from the contrast map (e.g., transitions from S2→S3 positively predict memory, whereas transitions toward S4 negatively predict memory); and (d) BSDS characterizes the full covariance structure among regions within each state, revealing distinct connectivity patterns (e.g., the high clustering coefficient and global efficiency of S3) that are not captured by univariate activation contrasts. Thus, while the ROI selection is informed by the subsequent memory effect, the information BSDS extracts from those regions (temporal dynamics, transition patterns, and multivariate covariance) is orthogonal to the information used for selection.
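For concreteness, the transition statistics referenced above (switching rates between states and self-transition probabilities) can be estimated from a decoded per-TR state sequence as in this minimal sketch (illustrative helper code, not the BSDS implementation):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic estimate of P(next state = j | current state = i)
    from a per-TR state sequence. The diagonal gives self-transition
    probabilities (state stability); off-diagonal entries give
    switching rates between distinct states."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for states never visited
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy sequence over two states
tm = transition_matrix([0, 0, 1, 1, 0], n_states=2)
print(tm)  # [[0.5 0.5]
           #  [0.5 0.5]]
```

Note that this matrix is a function of the temporal ordering of state visits, not of which voxels are active, which is why transition-based predictors are not derivable from a static activation contrast over the same ROIs.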
b) Additional validation. To directly address the circularity concern empirically, we will conduct additional analyses using ROIs defined independently of the subsequent memory contrast, such as network templates from previous studies and meta-analytic (Neurosynth) ROIs (He et al., 2023; Meer et al., 2020; Taghia et al., 2018).
(3) The task used to test memory in children seems strange. Why should children remember arbitrary scenes? How this was chosen for encoding needs to be made clear. There needs to be more description of the memory task and why it was chosen. Why was scene encoding chosen? What does scene encoding have to do with the stated goal of (a) "Understanding how children's brains form lasting memories", (b) "optimizing education" and (c) "identifying learning disabilities"? What was the design of the recognition memory test? How many novel scenes were included in the test, and how were they chosen? How close were the "new" images to previously seen "old" images? Was this varied parametrically (i.e., was the similarity between new and old images assessed and quantified?)
Scene encoding was chosen for several reasons: (a) scenes are rich, complex stimuli that engage the hippocampal–parahippocampal memory system, eliciting robust subsequent memory effects suitable for BSDS modeling; (b) scene encoding recruits distributed networks spanning visual cortex, MTL, and frontoparietal regions, enabling detection of multi-region brain states; and (c) scene encoding paradigms have been widely used in both adult and developmental studies of episodic memory and replay (Tambini et al., 2017; Tompary et al., 2017), facilitating comparison with prior work.
Regarding the recognition test: participants viewed 200 images (100 old, 100 new), with novel scenes drawn from the same categories (buildings and natural scenes) but chosen to be perceptually distinct from studied images. Similarity between old and new images was not parametrically manipulated or quantified; we will note this limitation. We will also expand the main text to include full task details and have removed claims about implications for educational optimization and learning disability identification (see also Reviewer #3 point 7).
(4) They ultimately found four brain states during encoding. It would be helpful if they could make the logic and foundation for arriving at this number clear.
The number of brain states is not predetermined by the user but is automatically determined by the BSDS algorithm through Bayesian automatic relevance determination (ARD). The model is initialized with a maximum number of possible states, and during inference, states that contribute minimally to explaining the data are effectively pruned, their associated parameters are driven to near-zero by the ARD prior. In our data, the model converged on four states. This is a key advantage of BSDS over conventional HMM approaches, which require the user to specify the state number a priori. We will clarify this process in the revised Methods and Results, referencing the original BSDS methodology paper (Taghia et al., 2018) for full mathematical details.
(5) There is already extant work on whether brain states during post-encoding rest predict memory outcomes. This work needs to be cited and referred to. The present manuscript needs to be better situated within prior work. The authors should look at the work by Alexa Tompary and Lila Davachi. They have already addressed many of the questions that the authors seek to answer. The authors should read their papers (and the papers they cite and that cite them) and then situate their work within the prior literature.
We agree that the manuscript must be better situated within the existing literature on post-encoding rest and memory consolidation. We will revise the Introduction and Discussion to engage more fully with the foundational work in adults by Tompary & Davachi (2017, Neuron; 2024, eLife) on consolidation-related hippocampal–mPFC representational overlap, as well as Tambini & Davachi (2013, PNAS; 2019, Trends in Cognitive Sciences) on hippocampal persistence during post-encoding rest and awake reactivation (Tambini et al., 2019; Tambini et al., 2017; Tompary et al., 2017). We will explicitly discuss how our BSDS-based approach to state-level reinstatement complements and extends these earlier findings, which largely focused on region-specific pattern similarity or hippocampal–cortical connectivity, by characterizing reinstatement at the level of dynamic, whole-network configurations.
(6) The authors should back up the claim that "successful episodic memory formation critically depends on the temporal coordination between these systems. Brain regions must coordinate their activity through dynamic functional interactions, rapidly reconfiguring their activity and connectivity patterns in response to changing cognitive demands and stimulus characteristics." Do they have any specific evidence supporting this claim?
The claim that episodic memory depends on temporal coordination and dynamic functional interactions is supported by several lines of evidence: (a) within our study, the significant correlations between state transition rates and memory performance directly demonstrate that dynamic inter-state communication predicts memory outcomes; (b) studies showing that hippocampal–prefrontal theta coherence during encoding predicts subsequent memory (Zielinski et al., 2020); and (c) recent work demonstrating that rapid reconfiguration of large-scale brain networks supports cognitive functions including working memory (Braun et al., 2015; Shine et al., 2018) and episodic encoding (Phan et al., 2024). We will revise this passage to include these specific citations and to make clear that our own transition–behavior correlations constitute direct evidence for this claim.
(7) These claims seem overstated: "this work has broad implications for understanding memory function in children, for developing educational interventions that enhance memory formation, and enabling early identification of children at risk for learning disabilities." Can the authors add citations that would support these claims, or if not, remove them?
We thank the reviewer for raising this point. We agree that the original framing overstated the practical implications. We have removed these claims and instead note the future studies that would be needed to support such applications.
References
(1) Braun, U., Schafer, A., Walter, H., Erk, S., Romanczuk-Seiferth, N., Haddad, L., . . . Bassett, D. S. (2015). Dynamic reconfiguration of frontal brain networks during executive cognition in humans. Proc Natl Acad Sci U S A, 112(37), 11678-11683.
(2) He, Y., Liang, X., Chen, M., Tian, T., Zeng, Y., Liu, J., . . . Qin, S. (2023). Development of brain-state dynamics involved in working memory. Cerebral Cortex.
(3) Lee, B., Young, C. B., Cai, W., Yuan, R., Ryman, S., Kim, J., . . . Menon, V. (2025). Dopaminergic modulation and dosage effects on brain state dynamics and working memory component processes in Parkinson’s disease. Nature Communications, 16(1), 2433.
(4) Liu, Y., Dolan, R. J., Kurth-Nelson, Z., & Behrens, T. E. J. (2019). Human Replay Spontaneously Reorganizes Experience. Cell, 178(3), 640-652.e14.
(5) van der Meer, J. N., Breakspear, M., Chang, L. J., Sonkusare, S., & Cocchi, L. (2020). Movie viewing elicits rich and reliable brain state dynamics. Nature Communications, 11(1), 5004.
(6) Phan, A. T., Xie, W., Chapeton, J. I., Inati, S. K., & Zaghloul, K. A. (2024). Dynamic patterns of functional connectivity in the human brain underlie individual memory formation. Nature Communications, 15(1), 8969.
(7) Ryali, S., Supekar, K., Chen, T., Kochalka, J., Cai, W., Nicholas, J., . . . Menon, V. (2016). Temporal Dynamics and Developmental Maturation of Salience, Default and Central-Executive Network Interactions Revealed by Variational Bayes Hidden Markov Modeling. PLoS Comput Biol, 12(12), e1005138.
(8) Shine, J. M., & Poldrack, R. A. (2018). Principles of dynamic network reconfiguration across diverse brain states. Neuroimage, 180, 396-405.
(9) Stevner, A. B. A., Vidaurre, D., Cabral, J., Rapuano, K., Nielsen, S. F. V., Tagliazucchi, E., . . . Kringelbach, M. L. (2019). Discovery of key whole-brain transitions and dynamics during human wakefulness and non-REM sleep. Nature Communications, 10(1), 1035.
(10) Taghia, J., Cai, W., Ryali, S., Kochalka, J., Nicholas, J., Chen, T., & Menon, V. (2018). Uncovering hidden brain state dynamics that regulate performance and decision-making during cognition. Nature Communications, 9(1), 2505.
(11) Tambini, A., & Davachi, L. (2019). Awake Reactivation of Prior Experiences Consolidates Memories and Biases Cognition. Trends in Cognitive Sciences, 23(10), 876-890.
(12) Tambini, A., Rimmele, U., Phelps, E. A., & Davachi, L. (2017). Emotional brain states carry over and enhance future memory formation. Nature Neuroscience, 20(2), 271-278.
(13) Tompary, A., & Davachi, L. (2017). Consolidation Promotes the Emergence of Representational Overlap in the Hippocampus and Medial Prefrontal Cortex. Neuron, 96(1), 228-241.e5.
(14) Verde, M. F., Macmillan, N. A., & Rotello, C. M. (2006). Measures of sensitivity based on a single hit rate and false alarm rate: The accuracy, precision, and robustness of d′, Az, and A′. Perception & Psychophysics, 68(4), 643-654.
(15) Wickens, T. D. (2001). Elementary Signal Detection Theory. Oxford University Press.
(16) Wimmer, G. E., Liu, Y., Vehar, N., Behrens, T. E. J., & Dolan, R. J. (2020). Episodic memory retrieval success is associated with rapid replay of episode content. Nature Neuroscience, 23(8), 1025-1033.
(17) Zeng, Y., Xiong, B., Gao, H., Liu, C., Chen, C., Wu, J., & Qin, S. (2024). Cortisol awakening response prompts dynamic reconfiguration of brain networks in emotional and executive functioning. Proceedings of the National Academy of Sciences, 121(52), e2405850121.
(18) Zielinski, M. C., Tang, W., & Jadhav, S. P. (2020). The role of replay and theta sequences in mediating hippocampal-prefrontal interactions for memory and cognition. Hippocampus, 30(1), 60-72.