Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.
Read more about eLife's peer review process.

Editors
- Reviewing Editor: Alex Fornito, Monash University, Clayton, Australia
- Senior Editor: Jonathan Roiser, University College London, London, United Kingdom
Reviewer 1 (Public Review):
This is a reasonably good paper and the use of a commonality analysis is a nice contribution to understanding variance partitioning across different covariates. I have some comments that I believe the authors ought to address which mostly relate to clarity and interpretation.
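For readers unfamiliar with commonality analysis, the variance partitioning the review refers to can be sketched in a few lines. The data and variable names below are illustrative toy stand-ins, not the authors' pipeline; with two predictors, the unique and common effects are simple differences of R² values:

```python
import numpy as np

def r2(X, y):
    """In-sample R^2 of an OLS fit of y on the columns of X (plus intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def commonality_two_predictors(a, b, y):
    """Partition R^2 of y ~ a + b into unique(a), unique(b), and common effects."""
    r2_a = r2(a[:, None], y)
    r2_b = r2(b[:, None], y)
    r2_ab = r2(np.column_stack([a, b]), y)
    unique_a = r2_ab - r2_b          # variance only a explains
    unique_b = r2_ab - r2_a          # variance only b explains
    common = r2_a + r2_b - r2_ab     # variance a and b share
    return unique_a, unique_b, common

# Toy data: cognition is driven by age; "brain age" is a noisy copy of age.
rng = np.random.default_rng(0)
age = rng.normal(size=500)
brain_age = age + 0.5 * rng.normal(size=500)
cognition = -0.8 * age + rng.normal(size=500)

ua, ub, c = commonality_two_predictors(age, brain_age, cognition)
```

The three components sum to the R² of the full two-predictor model by construction, which is what makes the partition interpretable.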
First, from a conceptual point of view, the authors focus exclusively on cognition as a downstream outcome. I would suggest the authors nuance their discussion to provide broader considerations of the utility of their method and of the limits of interpretation of brain-age models more generally. Further, since brain-age models by construction confound relevant biological variation with the accuracy of the regression models used to estimate them, there may be limits to interpreting (e.g.) the brain-age gap as a dimensionless biomarker. This has also been discussed elsewhere (see e.g. https://academic.oup.com/brain/article/143/7/2312/5863667). I would suggest that the authors consider and comment on these issues.
Second, from a methods perspective, the current manuscript does not explain the methodological procedures in sufficient detail to understand how the stacked regression models were constructed. Stacked models can be prone to overfitting when combined with cross-validation, because the predictions from the first-level models (i.e. the features that are provided to the second-level 'stacked' models) contain information about the training set *and* the test set. If cross-validation is not done very carefully (e.g. using multiple hold-out sets), information leakage can easily occur at the second level. Please provide more information to enable the reader to better understand the stacked regression models. If the authors are not using an approach that fully preserves training and test separability, they need to do so.
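One standard leakage-safe construction (a sketch on assumed toy data, not necessarily what the authors did) builds the second-level features exclusively from out-of-fold first-level predictions, so no sample's stacked feature comes from a model that saw that sample during training:

```python
import numpy as np

def fit_ols(X, y):
    """Fit ordinary least squares with an intercept; return coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict_ols(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def out_of_fold_predictions(X, y, n_folds=5):
    """Each sample's first-level prediction comes from a model that never
    saw that sample, preserving train/test separation at the second level."""
    n = len(y)
    folds = np.array_split(np.random.default_rng(0).permutation(n), n_folds)
    oof = np.empty(n)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        beta = fit_ols(X[train_idx], y[train_idx])
        oof[test_idx] = predict_ols(beta, X[test_idx])
    return oof

# Toy example: two first-level "modalities", stacked at the second level.
rng = np.random.default_rng(1)
n = 300
X_struct = rng.normal(size=(n, 10))   # e.g. structural features (illustrative)
X_func = rng.normal(size=(n, 10))     # e.g. functional features (illustrative)
age = X_struct[:, 0] + X_func[:, 0] + rng.normal(size=n)

# Second-level features are out-of-fold predictions, not in-sample fits.
Z = np.column_stack([out_of_fold_predictions(X_struct, age),
                     out_of_fold_predictions(X_func, age)])
beta_stack = fit_ols(Z, age)
```

Whether a scheme like this (or, e.g., fully nested cross-validation with separate hold-out sets) was used is exactly what the manuscript needs to spell out.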
Please also provide an indication of the different regularisation strengths that were estimated across the different models and cross-validation splits. Also, how stable were the weights across splits?
Please provide more details about the task designs, the MRI processing procedures that were employed on this sample, the regression methods, and the bias-correction methods used. For example, there are several different parameterisations of the elastic net; please provide equations to describe the method used here so that readers can easily determine how the regularisation parameters should be interpreted.
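To illustrate why equations matter here: the two most common elastic-net parameterisations assign different meanings to similarly named parameters. Both standard forms are sketched below (the manuscript should state which applies):

```latex
% glmnet-style parameterisation: \lambda sets overall strength,
% \alpha mixes the L1 and L2 penalties.
\min_{\beta}\; \frac{1}{2n}\lVert y - X\beta\rVert_2^2
  + \lambda \Big( \alpha \lVert \beta \rVert_1
  + \frac{1-\alpha}{2} \lVert \beta \rVert_2^2 \Big)

% scikit-learn-style parameterisation: \alpha sets overall strength,
% \rho (the "l1_ratio") mixes the penalties.
\min_{\beta}\; \frac{1}{2n}\lVert y - X\beta\rVert_2^2
  + \alpha \rho \lVert \beta \rVert_1
  + \frac{\alpha (1-\rho)}{2} \lVert \beta \rVert_2^2
```

Under the first form, α = 1 gives the lasso and α = 0 gives ridge; under the second, the same roles are played by the mixing parameter ρ, while α means something entirely different. Without the equation, a reported "alpha" is ambiguous.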
Reviewer 2 (Public Review):
In this study, the authors aimed to evaluate the contribution of brain-age indices in capturing variance in cognitive decline and proposed an alternative index, brain-cognition, for consideration. The study employs suitable data and methods, albeit with some limitations, to address the research questions. A more detailed discussion of methodological limitations in relation to the study's aims is required. For instance, the current commonality analysis may not sufficiently address potential multicollinearity issues, which could confound the findings. Importantly, given that the study did not provide external validation for the indices, it is unclear how well the models would perform and generalize to other samples. This is particularly relevant to their novel index, brain-cognition, given that brain-age has been validated extensively elsewhere. In addition, the paper's rationale for using elastic net, which references previous fMRI studies, seemed somewhat unclear. The discussion could be more nuanced and certain conclusions appear speculative.
The authors aimed to evaluate how brain-age and brain-cognition indices capture cognitive decline (as stated in their title) but did not employ longitudinal data, which is essential for calculating 'decline'. As a result, using 'cognition-fluid' interchangeably with 'cognitive decline' is inappropriate in this context.
In their first aim, the authors compared the contributions of brain-age and chronological age in explaining variance in cognition-fluid. Results revealed much smaller effect sizes for brain-age indices compared to the large effects for chronological age. While this comparison is noteworthy, it highlights a well-known fact: chronological age is a strong predictor of disease and mortality. Has the brain-age literature systematically overlooked this effect? If so, please provide relevant examples. They conclude that due to the smaller effect size, brain-age may lack clinical significance, for instance, in associations with neurodegenerative disorders. However, caution is required when speculating on what brain-age may fail to predict in the absence of direct empirical testing. This conclusion also overlooks extant brain-age literature: although effect sizes vary across psychiatric and neurological disorders, brain-age has demonstrated significant effects beyond those driven by chronological age, supporting its utility.
The second aim's results reveal a discrepancy between the accuracy of their brain-age models in estimating age and the brain-age's capacity to explain variance in cognition-fluid. The authors suggest that if the ultimate goal is to capture cognitive variance, brain-age predictive models should be optimized to predict this target variable rather than age. While this finding is important and noteworthy, additional analyses are needed to eliminate potential confounding factors, such as correlated noise between the data and cognitive outcome, overfitting, or the inclusion of non-healthy participants in the sample. Optimizing brain-age models to predict the target variable instead of age could ultimately shift the focus away from the brain-age paradigm, as it might optimize for a factor differing from age.
While a primary goal in biomarker research is to obtain indices that effectively explain variance in the outcome variable of interest, thus favouring models optimized for this purpose, the authors' conclusion overlooks the potential value of 'generic/indirect' models, despite sacrificing some additional explained variance provided by ad-hoc or 'specific/direct' models. In this context, we could consider brain-age as a 'generic' index due to its robust out-of-sample validity and significant associations across various health outcome variables reported in the literature. In contrast, the brain-cognition index proposed in this study is presumed to be 'specific' as, without out-of-sample performance metrics and testing with different outcome variables (e.g., neurodegenerative disease), it remains uncertain whether the reported effect would generalize beyond predicting cognition-fluid, the same variable used to condition the brain-cognition model in this study. A 'generic' index like brain-age enables comparability across different applications based on a common benchmark (rather than numerous specific models) and can support explanatory hypotheses (e.g., "accelerated ageing") since it is grounded in its own biological hypothesis. Generic and specific indices are not mutually exclusive; instead, they may offer complementary information. Their respective utility may depend heavily on the context and research or clinical question.
The study's third aim was to evaluate the authors' new index, brain-cognition. The results and conclusions drawn appear similar: compared to brain-age, brain-cognition captures more variance in the outcome variable, cognition-fluid. However, greater context and discussion of limitations is required here. Given the nature of the input variables (a large proportion of models in the study were based on fMRI data using cognitive tasks), it is perhaps unsurprising that optimizing these features for cognition-fluid generates an index better at explaining variance in cognition-fluid than the same features used to predict age. In other words, it is expected that brain-cognition would outperform brain-age in explaining variance in cognition-fluid since the former was optimized for the same variable in the same sample, while brain-age was optimized for age. Consequently, it is unclear if potential overfitting issues may inflate the brain-cognition's performance. This may be more evident when the model's input features are the ones closely related to cognition, e.g., fMRI tasks. When features were less directly related to cognitive tasks, e.g., structural MRI, the effect sizes for brain-cognition were notably smaller (see 'Total Brain Volume' and 'Subcortical Volume' models in Figure 6). This observation raises an important feasibility issue that the authors do not consider. Given the low likelihood of having task-based fMRI data available in clinical settings (such as hospitals), estimating a brain-cognition index that yields the large effects discussed in the study may be challenged by data scarcity.
This study is valuable and likely to be useful in two main ways. First, it can spur further research aimed at disentangling the lack of correspondence reported between the accuracy of the brain-age model and the brain-age's capacity to explain variance in fluid cognitive ability. Second, the study may serve, at least in part, as an illustration of the potential pros and cons of using indices that are specific and directly related to the outcome variable versus those that are generic and only indirectly related.
Overall, the authors effectively present a clear design and well-structured procedure; however, their work could have been enhanced by providing more context for both the brain-age and brain-cognition indices, including a discussion of key concepts in the brain-age paradigm. That paradigm acknowledges that chronological age strongly predicts negative health outcomes but, crucially, recognizes that ageing does not affect everyone uniformly; capturing this deviation from a healthy norm of ageing is the key aim of the brain-age index. This lack of context was mirrored in the presentation of the four brain-age indices, which does not refer to how these indices are used in practice. In fact, there is no mention of the more common way in which brain-age is implemented in statistical analyses, namely using the brain-age delta as the variable of interest along with linear and non-linear terms of age as covariates, where the latter account for the regression-to-the-mean effect. The 'corrected brain-age delta' the authors use does not include a non-linear term, which perhaps is an additional reason (besides the one provided by the authors) why there may be small, but non-zero, common effects of both age and brain-age in the 'corrected brain-age delta' commonality analysis. The context for brain-cognition was even more limited, with no reference to any existing literature that has explored direct brain-cognition markers.
While this paper delivers intriguing and thought-provoking results, it would benefit from recognizing the value that both approaches--brain-age indices and more direct, specific markers like brain-cognition--can contribute to the field.
Reviewer 3 (Public Review):
The main question of this article is as follows: "To what extent does having information on brain-age improve our ability to capture declines in fluid cognition beyond knowing a person's chronological age?" While this question is worthwhile, considering that there is considerable confusion in the field about the nature of brain-age, the authors are currently missing an opportunity to convey the inevitability of their results, given how brain-age and the brain-age gap are calculated. They also argue that brain-cognition is somehow superior to brain-age, but insufficient evidence is provided in support of this claim.
Specific comments follow:
- "There are many adjustments proposed to correct for this estimation bias" (p3). Regression to the mean is not a sign of bias. Any decent loss function will result in over-predicting the age of younger individuals and under-predicting the age of older individuals. This is a direct result of minimizing an error term (e.g., mean squared error). Therefore, it is inappropriate to refer to regression to the mean as a sign of bias. This misconception has led to a great deal of inappropriate analyses, including "correcting" the brain age gap by regressing out age.
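This point can be made concrete with a toy simulation (illustrative only, not the authors' pipeline): for an in-sample OLS fit, var(ŷ) = R² · var(y), so predictions are always compressed toward the mean, over-predicting the young and under-predicting the old even though nothing is "biased" in the statistical sense:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
age = rng.uniform(20, 80, size=n)
# Hypothetical brain features: age signal plus noise (illustrative).
X = age[:, None] + rng.normal(scale=15.0, size=(n, 5))

# Ordinary least squares fit of age on the features.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, age, rcond=None)
pred = X1 @ beta

# Predictions are shrunk toward the mean (var(pred) < var(age)),
# so the gap (pred - age) is negatively correlated with age by construction.
gap = pred - age
```

Running this shows the gap correlating negatively with age purely as a consequence of minimizing squared error, which is the reviewer's point: this is regression to the mean, not an estimation bias to be "corrected".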
- "Corrected Brain Age Gap in particular is viewed as being able to control for both age dependency and estimation biases (Butler et al., 2021)" (p3). This summary is not accurate, as Butler and colleagues did not use the words "corrected" and "biases" in this context. All that the authors say in that paper is that regressing out age from the brain age gap - which is referred to as the modified brain age gap (MBAG) - makes it so that the modified brain age gap is not dependent on age, which is true. This metric is meaningless, though, because it is the variance left over after regressing out age from residuals from a model that was predicting age. If it were not for the fact that regression on residuals is not equivalent to multiple regression (and out-of-sample estimates), MBAG would be a vector of zeros. Upon reading the Methods, I noticed that the authors use a metric from Le et al. (2018) for the "Corrected Brain Age Gap". If they cite the Butler et al. (2021) paper, I highly recommend sticking with the same notation, metrics and terminology throughout. That would greatly help with the interpretability of the present manuscript, and cross-comparisons between the two.
- "However, the improvement in predicting chronological age may not necessarily make Brain Age to be better at capturing Cognitionfluid. If, for instance, the age-prediction model had perfect performance, Brain Age Gap would be exactly zero and would have no utility in capturing Cognitionfluid beyond chronological age" (p3). I largely agree with this statement. I would be really careful to distinguish between brain-age and the brain-age gap here, as the former is a predicted value, and the latter is the residual times -1 (i.e., predicted age - age). Therefore, together they explain all of the variance in age. Changing the first sentence to refer to the brain-age gap would be more accurate in this context. The brain-age gap will never be exactly zero, though, even with perfect prediction on the training set, because subjects in the testing set are different from the subjects in the training set.
- "Can we further improve our ability to capture the decline in cognitionfluid by using, not only Brain Age and chronological age, but also another biomarker, Brain Cognition?". This question is fundamentally getting at whether a predicted value of cognition can predict cognition. Assuming the brain parameters can predict cognition decently, and the original cognitive measure that you were predicting is related to your measure of fluid cognition, the answer should be yes. Upon reading the Methods, it became clear that the cognitive variable in the model predicting cognition using brain features (to get predicted cognition, or as the authors refer to it, brain-cognition) is the same as the measure of fluid cognition that you are trying to assess how well brain-cognition can predict. Assuming the brain parameters can predict fluid cognition at all, it is then inevitable that brain-cognition will predict fluid cognition. Therefore, it is inappropriate to use predicted values of a variable to predict the same variable.
- "However, Brain Age Gap created from the lower-performing age-prediction models explained a higher amount of variation in Cognitionfluid. For instance, the top performing age-prediction model, "Stacked: All excluding Task Contrast", generated Brain Age and Corrected Brain Age that explained the highest amount of variation in Cognitionfluid, but, at the same time, produced Brain Age Gap that explained the least amount of variation in Cognitionfluid" (p7). This is an inevitable consequence of the following relationship between predicted values and residuals (or residuals times -1): y = (y − ŷ) + ŷ. Let's say that age explains 60% of the variance in fluid cognition, and predicted age (ŷ) explains 40% of the variance in fluid cognition. Then the brain age gap (−(y − ŷ)) should explain 20% of the variance in fluid cognition. If by "Corrected Brain Age" you mean the modified predicted age from Butler et al. (2021), the "Corrected Brain Age" result is inevitable because the modified predicted age is essentially just age with a tiny bit of noise added to it. From Figure 4, though, this does not seem to be the case, because the lower left quadrant in panel (a) should be flat and high (about as high as the predictive value of age for fluid cognition). So it is unclear how "Corrected Brain Age" is calculated. It looks like you might be regressing age out of brain-age, though from your description in the Methods section, it is not totally clear. Again, I highly recommend using the terminology and metrics of Butler et al. (2021) throughout to reduce confusion. Please also clarify how you used the slope and intercept. In general, given how brain-age metrics tend to be calculated, the following conclusion is inevitable: "As before, the unique effects of Brain Age indices were all relatively small across the four Brain Age indices and across different prediction models" (p10).
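The inevitability this decomposition implies can be checked in a toy simulation (illustrative, not the authors' data): as a simulated age-prediction model improves, the predicted age explains more variance in a cognition-like outcome while the gap explains less, reproducing the trade-off the paper reports:

```python
import numpy as np

def r2_simple(x, y):
    """R^2 of y on a single predictor x (the squared correlation)."""
    return np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(3)
n = 5000
age = rng.normal(size=n)
cognition = -age + 0.5 * rng.normal(size=n)   # toy: cognition tracks age

pred_r2, gap_r2 = [], []
for feat_noise in (2.0, 1.0, 0.25):           # decreasing noise = better model
    # Hypothetical brain features carrying age signal plus noise.
    X = age[:, None] + feat_noise * rng.normal(size=(n, 5))
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, age, rcond=None)
    pred = X1 @ beta                           # in-sample "brain age"
    gap = pred - age                           # "brain age gap"
    pred_r2.append(r2_simple(pred, cognition))
    gap_r2.append(r2_simple(gap, cognition))
```

As model accuracy rises across the three settings, `pred_r2` increases while `gap_r2` falls toward zero: with y = (y − ŷ) + ŷ, whatever age-related variance the prediction captures is removed from the gap.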
- "On the contrary, the unique effects of Brain Cognition appeared much larger" (p10). This is not a fair comparison if you do not look at the unique effects above and beyond the cognitive variable you predicted in your brain-cognition model. If your outcome measure had been another metric of cognition other than fluid cognition, you would see that brain-cognition does not explain any additional variance in this outcome when you include fluid cognition in the model, just as brain-age would not when including age in the model (minus small amounts due to penalization and out-of-sample estimates). This highlights the fact that using a predicted value to predict anything is worse than using the value itself.
- "First, how much does Brain Age add to what is already captured by chronological age? The short answer is very little" (p12). This is a really important point, but the paper requires an in-depth discussion of the inevitability of this result, as discussed above.
- "Third, do we have a solution that can improve our ability to capture Cognitionfluid from brain MRI? The answer is, fortunately, yes. Using Brain Cognition as a biomarker, along with chronological age, seemed to capture a higher amount of variation in Cognitionfluid than only using Brain Age" (p12). I suggest controlling for the cognitive measure you predicted in your brain-cognition model. This will show that brain-cognition is not useful above and beyond cognition, highlighting the fact that it is not a useful endeavor to be using predicted values.
- "Accordingly, a race to improve the performance of age-prediction models (Baecker et al., 2021) does not necessarily enhance the utility of Brain Age indices as a biomarker for Cognitionfluid. This calls for a new paradigm. Future research should aim to build prediction models for Brain Age indices that are not necessarily good at predicting age, but at capturing phenotypes of interest, such as Cognitionfluid and beyond" (p13). I whole-heartedly agree with the first two sentences, but strongly disagree with the last. Certainly your results, and the underlying reason as to why you found these results, call for a new paradigm (or, one might argue, a pre-brain-age paradigm). As of now, your results do not suggest that researchers should keep going down the brain-age path. While it is difficult to prove that there is no transformation of brain-age or the brain-age gap that will be useful, I am nearly sure this is true from the research I have done. If you would like to suggest that the field should continue down this path, I suggest presenting a very good case to support this view.