Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.
Read more about eLife’s peer review process.

Editors
- Reviewing Editor: Supratim Ray, Indian Institute of Science Bangalore, Bengaluru, India
- Senior Editor: Floris de Lange, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
Reviewer #1 (Public Review):
Summary:
These types of analyses rely on many underlying assumptions about the data, which are not easy to verify. Hence, one way to test how the algorithm performs in a task is to study its performance on synthetic data in which the properties of the variable of interest can be fixed a priori. For example, for burst detection, synthetic data can be generated by injecting bursts of known durations and checking whether the algorithm is able to pick them up. Burst detection is difficult in the spectral domain since direct spectral estimators have high variance (see Subhash Chandran et al., 2018, J Neurophysiol). Therefore, detected burst lengths are typically much shorter than injected burst lengths (see Figure 3). This problem can be solved by doing burst estimation in the time domain itself, for example, using Matching Pursuit (MP). I think the approach presented in this paper would also work since this model is also trained on data in the time domain. Indeed, the synthetic data can be made more "challenging" by injecting multiple oscillatory bursts that overlap in time, for which a greedy approach like MP may fail. It would be very interesting to test whether this method can "keep up" as the data is made more challenging. While showing results from brain signals directly (e.g., Figure 7) is nice, it would be even more impactful if it were backed up with results obtained from synthetic data with known properties.
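To make this concrete, a minimal sketch of the kind of synthetic test described above is given below; the sampling rate, burst frequency, burst durations, and amplitudes are arbitrary assumptions for illustration, not values taken from the paper:

```python
import numpy as np

def make_burst_signal(fs=250, duration=60.0, burst_freq=20.0,
                      burst_durations=(0.1, 0.3, 0.5), n_bursts=30,
                      burst_amp=2.0, noise_sd=1.0, seed=0):
    """White noise with oscillatory bursts of known onset and duration injected.

    Returns the signal and a boolean ground-truth mask marking burst samples,
    against which a detector's output can be scored. Bursts are allowed to
    overlap, which makes the data harder for greedy methods such as MP.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    x = rng.normal(0.0, noise_sd, n)
    truth = np.zeros(n, dtype=bool)
    for _ in range(n_bursts):
        dur = rng.choice(burst_durations)       # known burst length (s)
        n_dur = int(dur * fs)
        start = rng.integers(0, n - n_dur)      # random onset
        t = np.arange(n_dur) / fs
        x[start:start + n_dur] += burst_amp * np.sin(2 * np.pi * burst_freq * t)
        truth[start:start + n_dur] = True
    return x, truth

signal, ground_truth = make_burst_signal()
```

Detected burst durations can then be compared directly with the known injected durations.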
I was wondering what kind of "synthetic data" could be used for the results shown in Figures 8-12, but could not come up with a good answer. Perhaps data in which different sensory systems are activated (visual versus auditory), or in which sensory and movement epochs are compared, could be used to see if the activation maps change as expected. We see similarities between states across multiple runs (reproducibility analysis), across tasks (e.g., Figure 8 vs 9), and even across methods (Figure 8 vs 10), which is great. However, we should also expect the emergence of new modes specific to sensory activation (say, the auditory cortex for an auditory task). This would allow us to independently check the performance of this method.
The authors should explain the reproducibility results (variational free energy and best run analysis) in the Results section itself, to better orient the reader on what to look for.
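For orientation, the logic of the best-run analysis can be sketched as follows; `train_fn` is a hypothetical stand-in for the toolbox's training routine, not its actual API:

```python
def pick_best_run(data, train_fn, n_runs=10):
    """Train the same model n_runs times from different random seeds and keep
    the run with the lowest variational free energy (lower = better fit).

    train_fn(data, seed) is a placeholder assumed to return a
    (free_energy, model) pair; the real toolbox interface may differ.
    """
    results = [train_fn(data, seed) for seed in range(n_runs)]
    best_free_energy, best_model = min(results, key=lambda r: r[0])
    return best_model, best_free_energy
```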
Page 15: the comparison across subjects is interesting, but it is not clear why sensory-motor areas show a difference and the mean lifetime of the visual network decreases. Can you please explain this better? The promised discussion in section 3.5 can be expanded as well.
Reviewer #2 (Public Review):
Summary:
The authors have developed a comprehensive set of tools to describe dynamics within a single time-series or across multiple time-series. The motivation is to better understand interacting networks within the human brain. The time-series used here are from direct estimates of the brain's electrical activity; however, the tools have been used with other metrics of brain function and would be applicable to many other fields.
Strengths:
The methods described are principled and based on generative probabilistic models.
This makes them compact descriptors of the complex time-frequency data.
Few initial assumptions are necessary in order to reveal this compact description.
The methods are well described and demonstrated within multiple peer-reviewed articles.
This toolbox will be a great asset to the brain imaging community.
Weaknesses:
The only question I had was how to objectively/quantitatively compare different network models. This could probably be addressed easily by the authors.