The (Limited?) Utility of Brain Age as a Biomarker for Capturing Fluid Cognition in Older Individuals

  1. Department of Psychology, University of Otago, New Zealand, 9016

Peer review process

Revised: This Reviewed Preprint has been revised by the authors in response to the previous round of peer review; the eLife assessment and the public reviews have been updated where necessary by the editors and peer reviewers.


Editors

  • Reviewing Editor
    Alex Fornito
    Monash University, Clayton, Australia
  • Senior Editor
    Jonathan Roiser
    University College London, London, United Kingdom

Reviewer #1 (Public Review):

In this paper, the authors evaluate the utility of brain-age-derived metrics for predicting cognitive decline by performing a 'commonality' analysis in a downstream regression, which enables the distinct contributions of different predictors to be assessed. The main conclusion is that brain-age-derived metrics do not explain much additional variation in cognition over and above what is already explained by chronological age. The authors propose a regression model trained to predict cognition ("brain-cognition") as an alternative better suited to applications involving cognitive decline. While this model is less accurate overall than brain age, it explains more unique variance in the downstream regression.

Comments on revised version:

I thank the authors for addressing many of my concerns in this revision. However, I do not feel they have addressed them all. In particular, I think the authors could do more to address my concerns about the instability of the regression coefficients and about providing enough detail to determine that the stacked regression models do not overfit.

In considering my responses to the authors' revision, I must also say that I agree with Reviewer 3 about the conceptual limitations of the brain-age and brain-cognition methods. In particular, the regression model used to predict fluid cognition will, by construction, explain more variance in cognition than a brain-age model trained to predict age. To be fair, these conceptual problems are more widespread than this paper alone, so I do not believe the authors should be penalised for them. However, I would recommend making these concerns more explicit in the manuscript.

Reviewer #2 (Public Review):

In this study, the authors aimed to evaluate the contribution of brain-age indices in capturing variance in cognitive decline and proposed an alternative index, brain-cognition, for consideration.

The study employs suitable methods and data to address the research questions, and the methods and results sections are generally clear and easy to follow.

Comments on revised submission:

I appreciate the authors' efforts in significantly improving the paper, including some considerable changes from the original submission. While not all reviewer points were tackled, the majority of them were adequately addressed. These include additional analyses, more clarity in the methods, and a much richer and more nuanced discussion. While recognising the merits of the revised paper, I have a few additional comments.

Perhaps it would help the reader to note that it might be expected for brain-cognition to account for a significantly larger variance (11%) in fluid cognition, in contrast to brain-age. This stems from the fact that the authors specifically trained brain-cognition to predict fluid cognition, the very variable under consideration. In line with this, the authors later recommend that researchers considering the use of brain-age should evaluate its utility using a regression approach. The latter involves including a brain index (e.g. brain-cognition) previously trained to predict the regression's target variable (e.g. fluid cognition) alongside a brain-age index (e.g., corrected brain-age gap). If the target-trained brain index outperforms the brain-age metric, it suggests that relying solely on brain-age might not be the optimal choice. Although not necessarily the case, is it surprising for the target-trained brain index to demonstrate better performance than brain-age? This harks back to the broader point raised in the initial review: while brain-age may prove useful (though sometimes with modest effect sizes) across diverse outcomes as a generally applicable metric, a brain index tailored for predicting a specific outcome, such as brain-cognition in this case, might capture a considerably larger share of variance in that specific context but could lack broader applicability. The latter aspect needs to be empirically assessed.

Furthermore, the discussion of training brain-age models on healthy populations for subsequent testing on individuals with neurological or psychological disorders seems somewhat one-sided within the broader debate, which might confuse readers. It is worth noting that the choice to employ healthy participants in the training model is likely deliberate, serving as a norm against which atypical populations are compared. To provide a more comprehensive understanding, referencing Tim Hahn's counterargument to Bashyam's perspective could offer a more complete view (https://academic.oup.com/brain/article/144/3/e31/6214475?login=false).

Overall, this paper makes a significant contribution to the field of brain-age and related brain indices and their utility.

Reviewer #3 (Public Review):

The main question of this article is as follows: "To what extent does having information on brain-age improve our ability to capture declines in fluid cognition beyond knowing a person's chronological age?" This question is worthwhile, considering that there is considerable confusion in the field about the nature of brain-age.

Comments on revised version:

Thank you to the authors for addressing so many of my concerns with this revision. There are a few points that I feel still need addressing/clarifying related to 1) calculating brain cognition, 2) the inevitability of their results, and 3) their continued recommendation to use brain-age metrics.

Author Response

The following is the authors’ response to the original reviews.

eLife assessment

This useful manuscript challenges the utility of current paradigms for estimating brain-age with magnetic resonance imaging measures, but presents inadequate evidence to support the suggestion that an alternative approach focused on predicting cognition is more useful. The paper would benefit from a clearer explication of the methods and a more critical evaluation of the conceptual basis of the different models. This work will be of interest to researchers working on brain-age and related models.

Response: Thank you so much for providing high-quality reviews on our manuscript. We revised the manuscript to address all of the reviewers’ comments and provided full responses to each of the comments below.

Briefly, regarding clearer explanations of the methods, we added additional analyses (e.g., commonality analyses on ridge regression and on multiple regressions with a quadratic term for chronological age) to address some of the concerns and additional details in text and figures to ensure that the reader can fully understand our methodological procedures. Regarding the critical evaluation of the conceptual basis of the different models, we added discussions to help with interpretations and the scope of the generalisability of our findings. For instance, as opposed to treating Brain Cognition and Brain Age as separate biomarkers and comparing them in the ability to explain fluid cognition, we now treated the capability of Brain Cognition in capturing fluid cognition as the upper limit of Brain Age’s capability in capturing fluid cognition. In other words, we now examined the extent to which Brain Age missed the variation in the brain MRI that could explain fluid cognition (for this particular issue, please see our response to Reviewer 3 Public Review #4).

Reviewer 1:

This is a reasonably good paper, and the use of a commonality analysis is a nice contribution to understanding variance partitioning across different covariates. I have some comments that I believe the authors ought to address, mostly relating to clarity and interpretation.

Reviewer 1 Public Review #1:

First, from a conceptual point of view, the authors focus exclusively on cognition as a downstream outcome. I would suggest the authors nuance their discussion to provide broader considerations of the utility of their method and of the limits of interpretation of brain-age models more generally. Further, I think that since brain-age models by construction confound relevant biological variation with the accuracy of the regression models used to estimate them, there may be limits to the interpretation of (e.g.) the brain-age gap as a dimensionless biomarker. This has also been discussed elsewhere (see e.g. https://academic.oup.com/brain/article/143/7/2312/5863667). I would suggest that the authors consider and comment on these issues.

Response: Thank you Reviewer 1 for pointing out these important issues. We addressed them in our response to Reviewer 1 Recommendations For The Authors #1 (see below).

Reviewer 1 Public Review #2

Second, from a methods perspective, there is not a sufficient explanation of the methodological procedures in the current manuscript to fully understand how the stacked regression models were constructed. Stacked models can be prone to overfitting when combined with cross-validation. This is because the predictions from the first-level models (i.e. the features that are provided to the second level 'stacked' models) contain information about the training set and the test set. If cross-validation is not done very carefully (e.g. using multiple hold-out sets), information leakage can easily occur at the second level. Unfortunately, there is not a sufficient explanation of the methodological procedures in the current manuscript to fully understand what was actually done. Please provide more information to enable the reader to better understand the stacked regression models. If the authors are not using an approach that fully preserves training and test separability, they need to do so.

Response: Thank you Reviewer 1. We addressed this issue in our response to Reviewer 1 Recommendations For The Authors #2 (see below). Briefly, we have now made it clearer that model training, for both non-stacked and stacked models, did not involve the test set, ensuring that there was no data leakage between the training and test sets.
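To illustrate the kind of leakage-free stacking described in this response, below is a minimal sketch in Python with scikit-learn. It is an assumed illustration, not the authors' actual pipeline: the feature arrays stand in for per-modality MRI features, and the second-level training features are out-of-fold predictions, so no first-level model ever predicts a training observation it was fit on, and the test set is only scored, never used for fitting.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import KFold, cross_val_predict

def stacked_predict(train_sets, y_train, test_sets):
    """Leakage-free stacking sketch (hypothetical inputs).

    train_sets / test_sets: lists of (n_train, p_i) / (n_test, p_i)
    arrays, one per MRI modality.
    """
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    level1_train, level1_test = [], []
    for X_tr, X_te in zip(train_sets, test_sets):
        model = ElasticNetCV(cv=cv)
        # Second-level training features are out-of-fold predictions:
        # each training observation is predicted by a clone of the model
        # fit only on folds that exclude it.
        level1_train.append(cross_val_predict(model, X_tr, y_train, cv=cv))
        # Refit on the full training set, then score the held-out test set.
        model.fit(X_tr, y_train)
        level1_test.append(model.predict(X_te))
    stacker = ElasticNetCV(cv=cv).fit(np.column_stack(level1_train), y_train)
    return stacker.predict(np.column_stack(level1_test))
```

The key design point is that the stacked model never sees any quantity derived from the test set during training, which is the separability property the reviewer asks the authors to demonstrate.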

Reviewer 1 Public Review #3

Please also provide an indication of the different regularisation strengths that were estimated across the different models and cross-validation splits. Also, how stable were the weights across splits?

Response: Thank you Reviewer 1. We addressed this issue in our response to Reviewer 1 Recommendations For The Authors #3 (see below).

Reviewer 1 Public Review #4:

Please provide more details about the task designs, the MRI processing procedures employed on this sample, the regression methods, and the bias-correction methods used. For example, there are several different parameterisations of the elastic net; please provide equations to describe the method used here so that readers can easily determine how the regularisation parameters should be interpreted.

Response: Thank you Reviewer 1. We addressed this issue in our response to Reviewer 1 Recommendations For The Authors #5-#6. Briefly, we followed your advice and added all of the suggested details.
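For reference, one common parameterisation of the elastic net (the convention used by glmnet and scikit-learn, which may or may not match the authors' implementation) penalises a weighted mix of the L1 and L2 norms:

```latex
\hat{\beta} \;=\; \underset{\beta}{\arg\min}\;
\frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
\;+\; \lambda \left( \alpha\,\lVert \beta \rVert_1
\;+\; \frac{1-\alpha}{2}\,\lVert \beta \rVert_2^2 \right)
```

Here \(\lambda\) controls the overall regularisation strength and \(\alpha\) the L1/L2 mixing ratio; other software instead reports separate \((\lambda_1, \lambda_2)\) penalties, which is why the reviewer's request to state the exact objective matters for interpreting the tuned parameters.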

Reviewer 2 (Public Review):

Reviewer 2 Public Review Overall:

In this study, the authors aimed to evaluate the contribution of brain-age indices in capturing variance in cognitive decline and proposed an alternative index, brain-cognition, for consideration. The study employs suitable data and methods, albeit with some limitations, to address the research questions. A more detailed discussion of methodological limitations in relation to the study's aims is required. For instance, the current commonality analysis may not sufficiently address potential multicollinearity issues, which could confound the findings. Importantly, given that the study did not provide external validation for the indices, it is unclear how well the models would perform and generalize to other samples. This is particularly relevant to their novel index, brain-cognition, given that brain-age has been validated extensively elsewhere. In addition, the paper's rationale for using elastic net, which references previous fMRI studies, seemed somewhat unclear. The discussion could be more nuanced and certain conclusions appear speculative.

Response: Thank you for your encouragement. We have now added a discussion of methodological limitations (see below). Regarding potential multicollinearity issues, we addressed this comment using ridge regressions (see our response to Reviewer 2 Recommendations For The Authors #2). Regarding external validation, we added a discussion of the consistency between our results and several recent studies that investigated similar issues with Brain Age in different populations (see Reviewer 2 Recommendations For The Authors #1). Regarding Brain Cognition, we also cited previous studies showing similarly high predictive performance for cognitive functioning (Dubois et al., 2018; Pat, Wang, Anney, et al., 2022; Rasero et al., 2021; Sripada et al., 2020; Tetereva et al., 2022; for review, see Vieira et al., 2022). Finally, we added a discussion of the elastic net (see Reviewer 1 Recommendations For The Authors #6).

Discussion

“There are several potential limitations of this study. First, we conducted an investigation relying on only one dataset, the Human Connectome Project in Aging (HCP-A) (Bookheimer et al., 2019). While HCP-A used state-of-the-art MRI methodologies, covered a wide age range from 36 to 100 years old and included task-based fMRI from several tasks that are harder to find in other, bigger databases (e.g., UK Biobank from Sudlow et al., 2015), several characteristics of HCP-A might limit the generalisability of our findings. For instance, the tasks used in task-based fMRI in HCP-A are not widely used in clinical settings (Horien et al., 2020). This might make it challenging to translate the approaches used here. Similarly, HCP-A excluded participants with neurological conditions, possibly making its participants unrepresentative of the general population. Next, while HCP-A’s sample size is not small (n=725 and 504 people, before and after exclusion, respectively), other datasets provide much larger sample sizes (Horien et al., 2020). Similarly, HCP-A does not include younger populations. But as mentioned above, a study with a larger sample of older adults (Cole, 2020) and studies in younger populations (8-22 years old) (Butler et al., 2021; Jirsaraie, Kaufmann, et al., 2023) also found small effects of the adjusted Brain Age Gap in explaining cognitive functioning. And the disagreement between the predictive performance of age-prediction models and the utility of Brain Age found here is largely in line with the findings across different phenotypes seen in a recent systematic review (Jirsaraie, Gorelik, et al., 2023).”

Reviewer 2 Public Review #1:

The authors aimed to evaluate how brain-age and brain-cognition indices capture cognitive decline (as mentioned in their title) but did not employ longitudinal data, essential for calculating 'decline'. As a result, 'cognition-fluid' should not be used interchangeably with 'cognitive decline,' which is inappropriate in this context.

Response: Thank you for raising this issue. We no longer use the term ‘cognitive decline’.

Reviewer 2 Public Review #2:

In their first aim, the authors compared the contributions of brain-age and chronological age in explaining variance in cognition-fluid. Results revealed much smaller effect sizes for brain-age indices compared to the large effects for chronological age. While this comparison is noteworthy, it highlights a well-known fact: chronological age is a strong predictor of disease and mortality. Has the brain-age literature systematically overlooked this effect? If so, please provide relevant examples. They conclude that due to the smaller effect size, brain-age may lack clinical significance, for instance, in associations with neurodegenerative disorders. However, caution is required when speculating on what brain-age may fail to predict in the absence of direct empirical testing. This conclusion also overlooks extant brain-age literature: although effect sizes vary across psychiatric and neurological disorders, brain-age has demonstrated significant effects beyond those driven by chronological age, supporting its utility.

Response: For aim 1, we focused our claims on cognitive functioning and not on any clinical significance for neurodegenerative disorders. We now made it clearer that the small effects of the Corrected Brain Age Gap in explaining fluid cognition of aging individuals found here are consistent with a study with a larger sample in older adults (Cole, 2020) and studies in younger populations (8-22 years old) (Butler et al., 2021; Jirsaraie, Kaufmann, et al., 2023).

We believe this issue of the utility of brain age on cognitive functioning vs neurological/psychological disorders requires another consideration, namely the discrepancy in the training and test samples typically used for studies focusing on neurological/psychological disorders. We made this point in the discussion now (see below).

Discussion

“There is a notable difference between studies investigating the utility of Brain Age in explaining cognitive functioning, including ours and others (e.g., Butler et al., 2021; Cole, 2020; Jirsaraie, Kaufmann, et al., 2023) and those explaining neurological/psychological disorders (e.g., Bashyam et al., 2020; Rokicki et al., 2021). That is, those Brain Age studies focusing on neurological/psychological disorders often build age-prediction models from MRI data of largely healthy participants (e.g., controls in a case-control design or large samples in a population-based design), apply the built age-prediction models to participants without vs. with neurological/psychological disorders and compare Brain Age indices between the two groups. This means that age-prediction models from Brain Age studies focusing on neurological/psychological disorders might be under-fitted when applied to participants with neurological/psychological disorders because they were built from largely healthy participants. And thus the difference in Brain Age indices between participants without vs. with neurological/psychological disorders might be confounded by the under-fitted age-prediction models (i.e., Brain Age may predict chronological age well for the controls, but not for those with a disorder). On the contrary, our study and other Brain Age studies focusing on cognitive functioning often build age-prediction models from MRI data of largely healthy participants and apply the built age-prediction models to participants who are also largely healthy. Accordingly, the age-prediction models for explaining cognitive functioning do not suffer from being under-fitted. We consider this a strength, not a weakness, of our study.”

Reviewer 2 Public Review #3:

The second aim's results reveal a discrepancy between the accuracy of their brain-age models in estimating age and the brain-age's capacity to explain variance in cognition-fluid. The authors suggest that if the ultimate goal is to capture cognitive variance, brain-age predictive models should be optimized to predict this target variable rather than age. While this finding is important and noteworthy, additional analyses are needed to eliminate potential confounding factors, such as correlated noise between the data and cognitive outcome, overfitting, or the inclusion of non-healthy participants in the sample. Optimizing brain-age models to predict the target variable instead of age could ultimately shift the focus away from the brain-age paradigm, as it might optimize for a factor differing from age.

Response: We discussed the discrepancy between the accuracy of age-prediction models in estimating age and Brain Age's capacity to explain variance in fluid cognition in our response to Reviewer 3 Public Review #9 (see below). A recent systematic review found this issue to be widespread (Jirsaraie, Gorelik, et al., 2023). We now provide several strategies to mitigate this issue and improve the utility of Brain Age in explaining other phenotypes, drawing on our current work and others, using different MRI modalities as well as modelling techniques (Bashyam et al., 2020; Jirsaraie, Kaufmann, et al., 2023; Rokicki et al., 2021).

Regarding potential confounding factors, we are not sure what the reviewer meant by “correlated noise between the data and cognitive outcome”. The current study, for instance, used ICA-FIX (Glasser et al., 2016) to remove noise in functional MRI. It is unclear how much ‘noise’ is still left and might confound our findings. More importantly, we are not sure how to define ‘noise’ as referred to by Reviewer 2 here. As for overfitting, we used nested cross-validation to ensure that training and test sets were separate from each other (see Reviewer 1 Recommendations For The Authors #2). If overfitting happened as suggested, we should see a ‘lower’ predictive performance of age-prediction and cognitive-prediction models, since the models would fit well with the training set but would not generalise well to the test set. This is not what we found. The predictive performance of our age-prediction and cognitive-prediction models was high and consistent with the literature. Regarding the inclusion of non-healthy participants in the sample, we discussed this above in our response to Reviewer 2 Public Review #2.

Reviewer 2 Public Review #4:

While a primary goal in biomarker research is to obtain indices that effectively explain variance in the outcome variable of interest, thus favouring models optimized for this purpose, the authors' conclusion overlooks the potential value of 'generic/indirect' models, despite sacrificing some additional explained variance provided by ad-hoc or 'specific/direct' models. In this context, we could consider brain-age as a 'generic' index due to its robust out-of-sample validity and significant associations across various health outcome variables reported in the literature. In contrast, the brain-cognition index proposed in this study is presumed to be 'specific' as, without out-of-sample performance metrics and testing with different outcome variables (e.g., neurodegenerative disease), it remains uncertain whether the reported effect would generalize beyond predicting cognition-fluid, the same variable used to condition the brain-cognition model in this study. A 'generic' index like brain-age enables comparability across different applications based on a common benchmark (rather than numerous specific models) and can support explanatory hypotheses (e.g., "accelerated ageing") since it is grounded in its own biological hypothesis. Generic and specific indices are not mutually exclusive; instead, they may offer complementary information. Their respective utility may depend heavily on the context and research or clinical question.

Response: Thank you Reviewer 2 for pointing out this important issue. Reviewer 1 (Recommendations For The Authors #4) and Reviewer 3 (Public Review #4) brought up a similar issue. We agree with Reviewer 2 that both a 'specific/direct' index and Brain Age as a 'generic/indirect' index have merit in their own right. We discuss this issue in our response to Reviewer 3 Public Review #4 (please see this response below).

Briefly, in the revision, as opposed to treating Brain Cognition and Brain Age as separate biomarkers and comparing them, we treated the capability of Brain Cognition in capturing fluid cognition as the upper limit of Brain Age’s capability in capturing fluid cognition. In other words, we now examined the extent to which Brain Age missed the variation in the brain MRI that could explain fluid cognition. We also added a discussion of how our commonality approach can test for this missing variation in future work:

Discussion

“Finally, researchers should test how much Brain Age misses the variation in the brain MRI that could explain fluid cognition or other phenotypes of interest. As demonstrated here, one straightforward method is to build a prediction model using a phenotype of interest as the target (e.g., fluid cognition) and incorporate the predicted value of this model (e.g., Brain Cognition), along with Brain Age and chronological age, into a multiple regression for commonality analyses. The unique effect of this predicted value will inform the missing variation in the brain MRI from Brain Age. If this unique effect is large, then researchers might need to reconsider whether using Brain Age is appropriate for a particular phenotype of interest.”
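The recipe quoted above can be implemented directly. The sketch below is an assumed minimal OLS version, not the authors' code: it computes R² for every subset of predictors and derives each predictor's unique effect; common effects follow from the same table by inclusion-exclusion (e.g., with two predictors, common = full R² minus the two unique effects).

```python
import numpy as np
from itertools import combinations

def r_squared(X, y):
    """In-sample R-squared of an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def commonality(predictors, y):
    """All-subsets R² table and unique effects.

    predictors: dict of name -> (n,) array, e.g. chronological age,
    Brain Age and Brain Cognition; y: a phenotype such as fluid cognition.
    """
    names = list(predictors)
    subset_r2 = {frozenset(): 0.0}
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            X = np.column_stack([predictors[n] for n in combo])
            subset_r2[frozenset(combo)] = r_squared(X, y)
    full = frozenset(names)
    # Unique effect: R² lost when a predictor is dropped from the full model.
    unique = {n: subset_r2[full] - subset_r2[full - {n}] for n in names}
    return subset_r2, unique
```

A large unique effect for the cognition-trained predicted value, relative to Brain Age's unique effect, would indicate variation in the brain MRI relevant to the phenotype that Brain Age fails to capture.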

Reviewer 2 Public Review #5:

The study's third aim was to evaluate the authors' new index, brain-cognition. The results and conclusions drawn appear similar: compared to brain-age, brain-cognition captures more variance in the outcome variable, cognition-fluid. However, greater context and discussion of limitations is required here. Given the nature of the input variables (a large proportion of models in the study were based on fMRI data using cognitive tasks), it is perhaps unsurprising that optimizing these features for cognition-fluid generates an index better at explaining variance in cognition-fluid than the same features used to predict age. In other words, it is expected that brain-cognition would outperform brain-age in explaining variance in cognition-fluid since the former was optimized for the same variable in the same sample, while brain-age was optimized for age. Consequently, it is unclear if potential overfitting issues may inflate the brain-cognition's performance. This may be more evident when the model's input features are the ones closely related to cognition, e.g., fMRI tasks. When features were less directly related to cognitive tasks, e.g., structural MRI, the effect sizes for brain-cognition were notably smaller (see 'Total Brain Volume' and 'Subcortical Volume' models in Figure 6). This observation raises an important feasibility issue that the authors do not consider. Given the low likelihood of having task-based fMRI data available in clinical settings (such as hospitals), estimating a brain-cognition index that yields the large effects discussed in the study may be challenged by data scarcity.

Response: Given the use of nested cross-validation, we do not consider the good predictive performance of Brain Cognition found here as overfitting. In fact, we found a similar level of predictive performance of Brain Cognition on another database with younger participants in the past (Tetereva et al., 2022). However, we agreed with Reviewer 2 that the prediction of fluid cognition might be driven by MRI modalities that are different from those that drive the prediction of chronological age. In our own work with other age groups, including young adults (Tetereva et al., 2022) and children (Pat, Wang, Anney, et al., 2022), cognitive functioning seems to be predicted well from task-based functional MRI. And Reviewer 2 is right that task-based fMRI is not commonly used in clinics, making it harder to translate our results. However, given our results, clinicians should be encouraged to use task-based fMRI if their goal is to predict cognitive functioning. Nevertheless, as suggested, we listed data scarcity as one of the limitations of our approach.

Discussion

“For instance, the tasks used in task-based fMRI in HCP-A are not used widely in clinical settings (Horien et al., 2020). This might make it challenging to translate the approaches used here.”

Reviewer 2 Public Review #6:

This study is valuable and likely to be useful in two main ways. First, it can spur further research aimed at disentangling the lack of correspondence reported between the accuracy of the brain-age model and the brain-age's capacity to explain variance in fluid cognitive ability. Second, the study may serve, at least in part, as an illustration of the potential pros and cons of using indices that are specific and directly related to the outcome variable versus those that are generic and only indirectly related.

Response: We are thankful for the encouragement. For the discrepancy between the predictive performance of age-prediction models and the utility of Brain Age indices as a biomarker for fluid cognition, we made a detailed discussion in our response to Reviewer 3 Public Review #9. More specifically, to ensure that readers can benefit from our findings, we made suggestions on how to ensure the utility of Brain Age indices as a biomarker for other phenotypes by drawing from our own strategy, as well as strategies used by Rokicki and colleagues (2021), Jirsaraie and colleagues (2023) and Bashyam and colleagues (2020).

As for the pros and cons between generic vs specific biomarkers, we made a detailed discussion in our response to Reviewer 3 Public Review #4. We also made some suggestions on how to make use of the difference in the ability between generic vs specific biomarkers (see Reviewer 2 Public Review #4, above).

Reviewer 2 Public Review #7:

Overall, the authors effectively present a clear design and well-structured procedure; however, their work could have been enhanced by providing more context for both the brain-age and brain-cognition indices, including a discussion of key concepts in the brain-age paradigm, which acknowledges that chronological age strongly predicts negative health outcomes but, crucially, recognizes that ageing does not affect everyone uniformly. Capturing this deviation from a healthy norm of ageing is the key purpose of a brain-age index. This lack of context was mirrored in the presentation of the four brain-age indices provided, as the text does not refer to how these indices are used in practice. In fact, there is no mention of a more common way in which brain-age is implemented in statistical analyses, which involves the use of brain-age delta as the variable of interest, along with linear and non-linear terms of age as covariates. The latter is used to account for the regression-to-the-mean effect. The 'corrected brain-age delta' the authors use does not include a non-linear term, which is perhaps an additional reason (besides the one provided by the authors) as to why there may be small, but non-zero, common effects of both age and brain-age in the 'corrected brain-age delta' index commonality analysis. The context for brain-cognition was even more limited, with no reference to any existing literature that has explored direct brain-cognitive markers, such as brain-cognition.

Response: Regarding Brain Age and negative health outcomes, we addressed this in our response to Reviewer 1 Recommendations For The Authors #1 (see below). Briefly, we now discussed (1) the consistency between our findings on fluid cognition and other recent works on negative health outcomes, (2) the differences between Brain Age studies focusing on negative health outcomes vs. cognitive functioning and (3) suggested solutions to optimise the utility of brain age for both cognitive functioning and negative health outcomes.

Regarding how Brain Age is used in practice, we addressed this in our response to Reviewer 3 Public Review #2 (see below). Our argument resonates with Butler and colleagues’ (2021) suggestion that the common practice for Brain Age analysis should be re-evaluated: “The MBAG and performance on the complex cognition tasks were not associated (r = .01, p = 0.71). These results indicate that the association between cognition and the BAG are driven by the association between age and cognitive performance. As such, it is critical that readers of past literature note whether or not age was controlled for when testing for effects on the BAG, as this has not always been common practice (e.g., Beheshti et al., 2018; Cole, Underwood, et al., 2017; Franke et al., 2015; Gaser et al., 2013; Liem et al., 2017; Nenadić et al., 2017; Steffener et al., 2016). (p. 4097).”

Importantly, we also implemented “brain-age delta as the variable of interest, along with linear and non-linear terms of age as covariates” in our additional analyses along with other implementations (see Reviewer 2 Recommendations For The Authors #3). Of particular note, we found that adding a non-linear term (i.e., a quadratic term for chronological age) barely changed the results of commonality analyses.

We now wrote this paragraph to recommend how future research should implement Brain Age:

Discussion

“First, they have to be aware of the overlap in variation between Brain Age and chronological age and should focus on the contribution of Brain Age over and above chronological age. Using Brain Age Gap will not fix this. Butler and colleagues (2021) recently highlighted this point, “These results indicate that the association between cognition and the BAG are driven by the association between age and cognitive performance. As such, it is critical that readers of past literature note whether or not age was controlled for when testing for effects on the BAG, as this has not always been common practice (p. 4097).” Similar to their recommendation (Butler et al., 2021), we suggest future work focus on Corrected Brain Age Gap or, better, unique effects of Brain Age indices after controlling for chronological age in multiple regressions. In the case of fluid cognition, the unique effects might be too small to be clinically meaningful as shown here and previously (Butler et al., 2021; Jirsaraie, Kaufmann, et al., 2023).”

Regarding brain cognition, we now expanded our explanation of how Brain Cognition might be relevant to Brain Age and of its predictive performance found in previous studies.

Introduction

“Third and finally, certain variation in the brain MRI is related to fluid cognition, but to what extent does Brain Age not capture this variation? To estimate the variation in the brain MRI that is related to fluid cognition, we could build prediction models that directly predict fluid cognition (i.e., as opposed to chronological age) from brain MRI data. Previous studies found reasonable predictive performances of these cognition-prediction models, built from certain MRI modalities (Dubois et al., 2018; Pat, Wang, Anney, et al., 2022; Rasero et al., 2021; Sripada et al., 2020; Tetereva et al., 2022; for review, see Vieira et al., 2022). Analogous to Brain Age, we called the predicted values from these cognition-prediction models, Brain Cognition. The strength of an out-of-sample relationship between Brain Cognition and fluid cognition reflects variation in the brain MRI that is related to fluid cognition and, therefore, indicates the upper limit of Brain Age’s capability in capturing fluid cognition. Consequently, the unique effects of Brain Cognition that explain fluid cognition beyond Brain Age and chronological age indicate what is missing from Brain Age -- the amount of co-variation between brain MRI and fluid cognition that cannot be captured by Brain Age.”

Discussion

“Third, by introducing Brain Cognition, we showed the extent to which Brain Age indices were not able to capture the variation of brain MRI that is related to fluid cognition. Brain Cognition, from certain cognition-prediction models such as the stacked models, has relatively good predictive performance, consistent with previous studies (Dubois et al., 2018; Pat, Wang, Anney, et al., 2022; Rasero et al., 2021; Sripada et al., 2020; Tetereva et al., 2022; for review, see Vieira et al., 2022).”

Reviewer 2 Public Review #8:

While this paper delivers intriguing and thought-provoking results, it would benefit from recognizing the value that both approaches--brain-age indices and more direct, specific markers like brain-cognition--can contribute to the field.

Response Thank you so much for recognising the value of our work. As we mentioned above in our response to Reviewer 2 Public Review #4 and #6, we made some suggestions on how to make use of the difference in the ability between generic vs specific biomarkers.

Reviewer 3 (Public Review):

Reviewer 3 Public Review Overall:

The main question of this article is as follows: "To what extent does having information on brain-age improve our ability to capture declines in fluid cognition beyond knowing a person's chronological age?" While this question is worthwhile, considering that there is considerable confusion in the field about the nature of brain-age, the authors are currently missing an opportunity to convey the inevitability of their results, given how brain-age and the brain-age gap are calculated. They also argue that brain-cognition is somehow superior to brain-age, but insufficient evidence is provided in support of this claim.

Response We addressed the concerns below. The inevitability of our results is not obvious to many researchers who might be interested in Brain Age. We hope our findings might make many issues surrounding Brain Age more obvious, and we now make many suggestions on how to address some of these issues. We no longer argue that Brain Cognition is superior to Brain Age (Reviewer 3 Public Review #4). Rather, we treated the capability of Brain Cognition in capturing fluid cognition as the upper limit of Brain Age’s capability in capturing fluid cognition. We used the unique effects of Brain Cognition that explain fluid cognition beyond Brain Age and chronological age to indicate how much Brain Age misses the variation in the brain MRI that could explain fluid cognition.

Specific comments follow:

Reviewer 3 Public Review #1:

  • "There are many adjustments proposed to correct for this estimation bias" (p3). Regression to the mean is not a sign of bias. Any decent loss function will result in over-predicting the age of younger individuals and under-predicting the age of older individuals. This is a direct result of minimizing an error term (e.g., mean squared error). Therefore, it is inappropriate to refer to regression to the mean as a sign of bias. This misconception has led to a great deal of inappropriate analyses, including "correcting" the brain age gap by regressing out age.

Response: Thank you so much for raising this issue. We used the word ‘bias’ following many articles in the field. For instance,

de Lange and Cole (2020) wrote: “brain-age estimation also involves a frequently observed bias: brain age is overestimated in younger subjects and underestimated in older subjects, while brain age for participants with an age closer to the mean age (of the training dataset) are predicted more accurately (Cole, Le, Kuplicki, McKinney, Yeh, Thompson, Paulus, Investigators, et al., 2018, Liang, Zhang, Niu, 2019, Niu, Zhang, Kounios, Liang, 2019, Smith, Vidaurre, Alfaro-Almagro, Nichols, Miller, 2019).”

Cole (2020) wrote: “As recent research has highlighted a proportional bias in brain-age calculation, whereby the difference between chronological age and brain-predicted age is negatively correlated with chronological age (Le et al., 2018, Liang et al., 2019, Smith et al., 2019), an age-bias correction procedure was used. This entailed calculating the regression line between age (predictor) and brain-predicted age (outcome) in the training set, then using the slope (i.e., coefficient) and intercept of that line to adjust brain-predicted age values in the testing set (by subtracting the intercept and then dividing by the slope). After applying the age-bias correction the brain-predicted age difference (brain-PAD) was calculated; chronological age subtracted from brain-predicted age.”

Beheshti and colleagues (2019) used “bias” in their title: “Bias-adjustment in neuroimaging-based brain age frameworks: a robust scheme”

More recently, Cumplido-Mayoral and colleagues (2023) wrote: “As recent research has shown that brain-age estimation involves a proportional bias (de Lange et al., 2020a; Le et al., 2018; Liang et al., 2019; Smith et al., 2019), we applied a well-established age-bias correction procedure to our data (de Lange et al., 2020a; Le et al., 2018).”

Still, we agree with Reviewer 3 that using ‘bias’ might lead to misinterpretation. As Butler and colleagues (Butler et al., 2021) pointed out, ”It is important to note that regression toward the mean is not a failure, but a feature, of regression and related methods.“ We rewrote the paragraph and clarified the “regression towards the mean” issue. We no longer used the word “bias” here:

Introduction

“Note researchers often subtract chronological age from Brain Age, creating an index known as Brain Age Gap (Franke & Gaser, 2019). A higher value of Brain Age Gap is thought to reflect accelerated/premature aging. Yet, given that Brain Age Gap is calculated based on both Brain Age and chronological age, Brain Age Gap still depends on chronological age (Butler et al., 2021). If, for instance, Brain Age was based on prediction models with poor performance and made a prediction that everyone was 50 years old, individual differences in Brain Age Gap would then depend solely on chronological age (i.e., 50 minus chronological age). Moreover, Brain Age is known to demonstrate the “regression towards the mean” phenomenon (Stigler, 1997). More specifically, because Brain Age is a predicted value of a regression model that predicts chronological age, Brain Age is usually shrunk towards the mean age of samples used for training the model (Butler et al., 2021; de Lange & Cole, 2020; Le et al., 2018). Accordingly, Brain Age predicts chronological age more accurately for individuals who are closer to the mean age while overestimating younger individuals’ chronological age and underestimating older individuals’ chronological age. There are many adjustments proposed to correct for the age dependency, but the outcomes tend to be similar to each other (Beheshti et al., 2019; de Lange & Cole, 2020; Liang et al., 2019; Smith et al., 2019). These adjustments can be applied to Brain Age and Brain Age Gap, creating Corrected Brain Age and Corrected Brain Age Gap, respectively. Corrected Brain Age Gap in particular is viewed as being able to control for age dependency (Butler et al., 2021). Here, we tested the utility of different Brain Age calculations in capturing fluid cognition, over and above chronological age.”
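The degenerate-model thought experiment in the paragraph above is straightforward to verify numerically. The following sketch is ours (illustrative code with simulated ages, not from the manuscript):

```python
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(40.0, 80.0, size=300)  # hypothetical chronological ages

# A degenerate age-prediction model that predicts everyone is 50 years old.
brain_age = np.full_like(age, 50.0)
brain_age_gap = brain_age - age  # Brain Age Gap = 50 - chronological age

# The gap is a perfect (negative) linear function of chronological age,
# so individual differences in the gap depend solely on age.
r = np.corrcoef(age, brain_age_gap)[0, 1]  # exactly -1 (up to floating point)
```

This makes concrete why a Brain Age Gap from a poorly performing model can still correlate strongly with outcomes: the correlation is carried entirely by chronological age.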

Reviewer 3 Public Review #2:

  • "Corrected Brain Age Gap in particular is viewed as being able to control for both age dependency and estimation biases (Butler et al., 2021)" (p3). This summary is not accurate as Butler and colleagues did not use the words "corrected" and "biases" in this context. All that authors say in that paper is that regressing out age from the brain age gap - which is referred to as the modified brain age gap (MBAG) - makes it so that the modified brain age gap is not dependent on age, which is true. This metric is meaningless, though, because it is the variance left over after regressing out age from residuals from a model that was predicting age. If it were not for the fact that regression on residuals is not equivalent to multiple regression (and out of sample estimates), MBAG would be a vector of zeros. Upon reading the Methods, I noticed that the authors use a metric from Le et al. (2018) for the "Corrected Brain Age Gap". If they cite the Butler et al. (2021) paper, I highly recommend sticking with the same notation, metrics and terminology throughout. That would greatly help with the interpretability of the present manuscript, and cross-comparisons between the two.

Response: We thank Reviewer 3 for pointing out the issues surrounding our choices of wording: "corrected" and "biases". We share the same frustration with Reviewer 3 in that different brain-age articles use different terminologies, and we tried to make sure our readers understand our calculations of Brain Age indices in order to compare our results with previous work.

We commented on the word “bias” in our response to Reviewer 3 Public Review #1 above and refrained from using this word in the revised manuscript. Here we commented on the use of the word “Corrected Brain Age Gap". And by doing so, we clarified how we calculated it.

Reviewer 3 is right that we cited the work of Butler and colleagues (2021), but it was not accurate to say that we used “a metric from Le et al. (2018) for the "Corrected Brain Age Gap". We, instead, used a method described in de Lange and Cole’s (2020) work. We now added equations to explain this method in our Materials and Method section (see below).

It is important to note that Butler and colleagues (2021) did not come up with any adjustment methods. Instead, Butler and colleagues (2021) discussed three adjustment methods:

  1. A method proposed by Beheshti and colleagues (2019). Butler and colleagues (2021) called the result of this method, Modified Brain Age Gap (MBAG). Importantly, Butler and colleagues (2021) discouraged the use of this method due to “researchers misinterpreting the reduced variability of the MBAG as an improvement in prediction accuracy.” Accordingly in our article, we performed methods (2) and (3) below.

  2. A method proposed by de Lange and Cole (2020). We used this method in our article (see below for the equations). Briefly, we first fit a regression line predicting Brain Age from chronological age in each training set. We then used the slope and intercept of this regression line to adjust Brain Age in the corresponding test set, resulting in an adjusted index of Brain Age. Butler and colleagues (2021) called the result of this method “Revised Predicted Age”, while de Lange and Cole (2020) originally called it “Corrected Predicted Age”. Butler and colleagues (2021) then subtracted the chronological age from this index and called the result “Revised Brain Age Gap (RBAG)”. We would like to follow the original terminology, but we do not want to use the word “Predicted Age” since chronological age can be predicted by other variables beyond the brain. We therefore settled on the terms “Corrected Brain Age” and “Corrected Brain Age Gap”. We listed the terminologies used in the past in our article (see below).

  3. A method proposed by Le and colleagues (2018). Here, Butler and colleagues (2021) referred to one of the approaches used by Le and colleagues: “include age as a regressor when doing follow-up analyses.” Essentially this is what we did for the commonality analysis. Le and colleagues’ (2018) approach is the same as examining the unique effects of Brain Age in a multiple regression analysis with Chronological Age and Brain Age as regressors.
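For readers unfamiliar with approach (3), a minimal sketch of the two-predictor unique/common-effects decomposition follows. This is our own illustrative code with simulated data (all variable names and parameter values are ours, not the study's):

```python
import numpy as np

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept; predictors is a list of 1-D arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Toy data: fluid cognition driven mostly by age; Brain Age tracks age closely.
rng = np.random.default_rng(1)
age = rng.uniform(40.0, 80.0, 500)
brain_age = age + rng.normal(0.0, 5.0, 500)        # tightly coupled to age
cognition = -0.6 * age + rng.normal(0.0, 5.0, 500)

r2_age = r_squared([age], cognition)
r2_brain = r_squared([brain_age], cognition)
r2_both = r_squared([age, brain_age], cognition)

unique_brain = r2_both - r2_age        # Brain Age beyond chronological age
unique_age = r2_both - r2_brain        # chronological age beyond Brain Age
common = r2_age + r2_brain - r2_both   # variance shared by the two predictors
```

In this toy setup, Brain Age adds almost no unique variance beyond chronological age while the common effect is large, mirroring the qualitative pattern the commonality analyses report for the empirical data.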

While indices from de Lange and Cole’s (2020) and Le and colleagues’ (2018) methods showed poor performance in capturing fluid cognition in the current work, we need to stress that many research groups do not believe that these methods are meaningless. In fact, de Lange and Cole’s (2020) method is one of the most commonly implemented and can be seen elsewhere (e.g., Cole et al., 2020; Cumplido-Mayoral et al., 2023; Denissen et al., 2022). This index just does not seem to work well in the case of fluid cognition.

Here is how we described the calculation of the Brain Age indices in the revised manuscript:

Methods

“Brain Age calculations: Brain Age, Brain Age Gap, Corrected Brain Age and Corrected Brain Age Gap

In addition to Brain Age, which is the predicted value from the models predicting chronological age in the test sets, we calculated three other indices to reflect the estimation of brain aging. First, Brain Age Gap reflects the difference between the age predicted by brain MRI and the actual, chronological age. Here we simply subtracted the chronological age from Brain Age:

Brain Age Gap_i = Brain Age_i − chronological age_i , (2)

where i is the individual. Next, to reduce the dependency on chronological age (Butler et al., 2021; de Lange & Cole, 2020; Le et al., 2018), we applied a method described in de Lange and Cole (2020), which was implemented elsewhere (Cole et al., 2020; Cumplido-Mayoral et al., 2023; Denissen et al., 2022):

In each outer-fold training set: Brain Age_i = β0 + β1 × chronological age_i + ε_i , (3)

Then in the corresponding outer-fold test set: Corrected Brain Age_i = (Brain Age_i − β0) / β1 , (4)

That is, we first fit a regression line predicting Brain Age from chronological age in each outer-fold training set. We then used the slope (β1) and intercept (β0) of this regression line to adjust Brain Age in the corresponding outer-fold test set, resulting in Corrected Brain Age. Note de Lange and Cole (2020) called this Corrected Brain Age “Corrected Predicted Age”, while Butler and colleagues (2021) called it “Revised Predicted Age.”

Lastly, we computed Corrected Brain Age Gap by subtracting the chronological age from the Corrected Brain Age (Butler et al., 2021; Cole et al., 2020; de Lange & Cole, 2020; Denissen et al., 2022):

Corrected Brain Age Gap_i = Corrected Brain Age_i − chronological age_i , (5)

Note Cole and colleagues (2020) called Corrected Brain Age Gap “brain-predicted age difference (brain-PAD)”, while Butler and colleagues (2021) called this index “Revised Brain Age Gap”.”
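As an illustration of equations (3)–(5), the correction can be sketched in a few lines of Python. This is our own sketch with simulated data, not the code used in the study:

```python
import numpy as np

def correct_brain_age(train_age, train_brain_age, test_brain_age):
    """Age-dependency correction of de Lange and Cole (2020), eqs. (3)-(4).

    Fit Brain Age = beta0 + beta1 * age in the (outer-fold) training set,
    then use that slope and intercept to adjust Brain Age in the test set.
    """
    beta1, beta0 = np.polyfit(train_age, train_brain_age, deg=1)  # slope, intercept
    return (test_brain_age - beta0) / beta1  # eq. (4): Corrected Brain Age

# Simulated example: predictions shrunk towards the training-set mean age,
# i.e., the "regression towards the mean" pattern described above.
rng = np.random.default_rng(0)
age = rng.uniform(40.0, 80.0, size=200)
brain_age = 0.5 * age + 30.0 + rng.normal(0.0, 2.0, size=200)

corrected = correct_brain_age(age, brain_age, brain_age)
corrected_gap = corrected - age  # eq. (5): Corrected Brain Age Gap
```

For simplicity, the same sample plays the role of both training and test fold here; in the study, the slope and intercept come from the outer-fold training set and are applied to the held-out test set. After correction, the regression slope of Corrected Brain Age on age is 1, so the Corrected Brain Age Gap is no longer systematically related to age.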

Reviewer 3 Public Review #3:

  • "However, the improvement in predicting chronological age may not necessarily make Brain Age to be better at capturing Cognition_fluid. If, for instance, the age-prediction model had the perfect performance, Brain Age Gap would be exactly zero and would have no utility in capturing Cognition_fluid beyond chronological age" (p3). I largely agree with this statement. I would be really careful to distinguish between brain-age and the brain-age gap here, as the former is a predicted value, and the latter is the residual times -1 (i.e., predicted age - age). Therefore, together they explain all of the variance in age. Changing the first sentence to refer to the brain-age gap would be more accurate in this context. The brain-age gap will never be exactly zero, though, even with perfect prediction on the training set, because subjects in the testing set are different from the subjects in the training set.

Response: Thank you so much for pointing this out. We agree to change “Brain Age” to “Brain Age Gap” in the mentioned sentence.

Reviewer 3 Public Review #4:

  • "Can we further improve our ability to capture the decline in cognitionfluid by using, not only Brain Age and chronological age, but also another biomarker, Brain Cognition?". This question is fundamentally getting at whether a predicted value of cognition can predict cognition. Assuming the brain parameters can predict cognition decently, and the original cognitive measure that you were predicting is related to your measure of fluid cognition, the answer should be yes. Upon reading the Methods, it became clear that the cognitive variable in the model predicting cognition using brain features (to get predicted cognition, or as the authors refer to it, brain-cognition) is the same as the measure of fluid cognition that you are trying to assess how well brain-cognition can predict. Assuming the brain parameters can predict fluid cognition at all, it is then inevitable that brain-cognition will predict fluid cognition. Therefore, it is inappropriate to use predicted values of a variable to predict the same variable.

Response: Thank you Reviewer 3 for pointing out this important issue. Reviewer 1 (Recommendations For The Authors #4) and Reviewer 2 (Public Review #4) brought up a similar issue. While Reviewer 3 felt that “it is inappropriate to use predicted values of a variable to predict the same variable,” Reviewer 2 viewed Brain Cognition as a 'specific/direct' index and Brain Age as a 'generic/indirect' index. And both have merit in their own right.

Similar to Reviewer 2, we believe that the specific index is as important and has commonly been used elsewhere in the context of biomarkers. For instance, to obtain neuroimaging biomarkers for Alzheimer’s, neuroimaging researchers often build a predictive model to predict Alzheimer's diagnosis (Khojaste-Sarakhsi et al., 2022). In fact, outside of neuroimaging, polygenic risk scores (PRSs) in genomics often follow the same logic of using “predicted values of a variable to predict the same variable” (Choi et al., 2020). For instance, a PRS of ADHD that indicates the genetic liability to develop ADHD is based on genome-wide association studies of ADHD (Demontis et al., 2019).

Still, we now agreed that it may not be fair to compare the performance of a specific index (Brain Cognition) and a generic index (Brain Age) directly (as pointed out by Reviewer 3 Public Review #6 below). Accordingly, in the revision, as opposed to treating Brain Cognition and Brain Age as separate biomarkers and comparing them, we treated the capability of Brain Cognition in capturing fluid cognition as the upper limit of Brain Age’s capability in capturing fluid cognition. In other words, the strength of an out-of-sample relationship between Brain Cognition and fluid cognition reflects variation in the brain MRI that is related to fluid cognition. And consequently, the unique effects of Brain Cognition that explain fluid cognition beyond Brain Age and chronological age indicate what is missing from Brain Age -- the amount of co-variation between brain MRI and fluid cognition that cannot be captured by Brain Age. According to Reviewer 2, a generic index (Brain Age) “sacrificed some additional explained variance provided” compared to a specific index (Brain Cognition). Here, we used the commonality analyses to quantify how much was sacrificed by Brain Age. See below for the re-conceptualisation of Brain Age vs. Brain Cognition in the revision:

Abstract

“Lastly, we tested how much Brain Age missed the variation in the brain MRI that could explain fluid cognition. To capture this variation in the brain MRI that explained fluid cognition, we computed Brain Cognition, or a predicted value based on prediction models built to directly predict fluid cognition (as opposed to chronological age) from brain MRI data. We found that Brain Cognition captured up to an additional 11% of the total variation in fluid cognition that was missing from the model with only Brain Age and chronological age, leading to around a one-third improvement in the total variation explained.”

Introduction:

“Third and finally, certain variation in the brain MRI is related to fluid cognition, but to what extent does Brain Age not capture this variation? To estimate the variation in the brain MRI that is related to fluid cognition, we could build prediction models that directly predict fluid cognition (i.e., as opposed to chronological age) from brain MRI data. Previous studies found reasonable predictive performances of these cognition-prediction models, built from certain MRI modalities (Dubois et al., 2018; Pat, Wang, Anney, et al., 2022; Rasero et al., 2021; Sripada et al., 2020; Tetereva et al., 2022; for review, see Vieira et al., 2022). Analogous to Brain Age, we called the predicted values from these cognition-prediction models, Brain Cognition. The strength of an out-of-sample relationship between Brain Cognition and fluid cognition reflects variation in the brain MRI that is related to fluid cognition and, therefore, indicates the upper limit of Brain Age’s capability in capturing fluid cognition. Consequently, the unique effects of Brain Cognition that explain fluid cognition beyond Brain Age and chronological age indicate what is missing from Brain Age -- the amount of co-variation between brain MRI and fluid cognition that cannot be captured by Brain Age.”

“Finally, we investigated the extent to which Brain Age indices missed the variation in the brain MRI that could explain fluid cognition. Here, we tested Brain Cognition’s unique effects in multiple regression models with a Brain Age index, chronological age and Brain Cognition as regressors to explain fluid cognition.“

Discussion

“Third, how much does Brain Age miss the variation in the brain MRI that could explain fluid cognition? Brain Age and chronological age by themselves captured around 32% of the total variation in fluid cognition. But, around an additional 11% of the variation in fluid cognition could have been captured if we used the prediction models that directly predicted fluid cognition from brain MRI.”

“Third, by introducing Brain Cognition, we showed the extent to which Brain Age indices were not able to capture the variation of brain MRI that is related to fluid cognition. Brain Cognition, from certain cognition-prediction models such as the stacked models, has relatively good predictive performance, consistent with previous studies (Dubois et al., 2018; Pat, Wang, Anney, et al., 2022; Rasero et al., 2021; Sripada et al., 2020; Tetereva et al., 2022; for review, see Vieira et al., 2022). We then examined Brain Cognition using commonality analyses (Nimon et al., 2008) in multiple regression models having a Brain Age index, chronological age and Brain Cognition as regressors to explain fluid cognition. Similar to Brain Age indices, Brain Cognition exhibited large common effects with chronological age. But more importantly, unlike Brain Age indices, Brain Cognition showed large unique effects, up to around 11%. The unique effects of Brain Cognition indicated the amount of co-variation between brain MRI and fluid cognition that was missed by a Brain Age index and chronological age. This missing amount was relatively high, considering that Brain Age and chronological age together explained around 32% of the total variation in fluid cognition. Accordingly, if a Brain Age index was used as a biomarker along with chronological age, we would have missed an opportunity to improve the performance of the model by around one-third of the variation explained.”

Reviewer 3 Public Review #5:

  • "However, Brain Age Gap created from the lower-performing age-prediction models explained a higher amount of variation in Cognition_fluid. For instance, the top performing age-prediction model, "Stacked: All excluding Task Contrast", generated Brain Age and Corrected Brain Age that explained the highest amount of variation in Cognition_fluid, but, at the same time, produced Brain Age Gap that explained the least amount of variation in Cognition_fluid" (p7). This is an inevitable consequence of the following relationship between predicted values and residuals (or residuals times -1): y = (y − ŷ) + ŷ. Let's say that age explains 60% of the variance in fluid cognition, and predicted age (ŷ) explains 40% of the variance in fluid cognition. Then the brain age gap (−(y − ŷ)) should explain 20% of the variance in fluid cognition. If by "Corrected Brain Age" you mean the modified predicted age from Butler et al (2021), the "Corrected Brain Age" result is inevitable because the modified predicted age is essentially just age with a tiny bit of noise added to it. From Figure 4, though, this does not seem to be the case, because the lower left quadrant in panel (a) should be flat and high (about as high as the predictive value of age for fluid cognition). So it is unclear how "Corrected Brain Age" is calculated. It looks like you might be regressing age out of brain-age, though from your description in the Methods section, it is not totally clear. Again, I highly recommend using the terminology and metrics of Butler et al (2021) throughout to reduce confusion. Please also clarify how you used the slope and intercept. In general, given how brain-age metrics tend to be calculated, the following conclusion is inevitable: "As before, the unique effects of Brain Age indices were all relatively small across the four Brain Age indices and across different prediction models" (p10).

Response: We agreed that the results are ‘inevitable’ due to the transformations from Brain Age to other Brain Age indices. However, the consequences of these transformations may not be very clear to readers who are not very familiar with Brain Age literature and to the community at large who think about the implications of Brain Age. This is appreciated by Reviewer 1, who mentioned “While the main message will not come as a surprise to anyone with hands-on experience of using brain-age models, I think it is nonetheless an important message to convey to the community.”

Note we made clarifications on how we calculated each of the Brain Age indices above (see Reviewer 3 Public Review #2), including how we used the slope and intercept. We chose the terminology closer to the one originally used by de Lange and Cole (2020) and now listed many terminologies others have used to refer to this transformation.

Reviewer 3 Public Review #6:

"On the contrary, the unique effects of Brain Cognition appeared much larger" (p10). This is not a fair comparison if you do not look at the unique effects above and beyond the cognitive variable you predicted in your brain-cognition model. If your outcome measure had been another metric of cognition other than fluid cognition, you would see that brain-cognition does not explain any additional variance in this outcome when you include fluid cognition in the model, just as brain-age would not when including age in the model (minus small amounts due to penalization and out-of-sample estimates). This highlights the fact that using a predicted value to predict anything is worse than using the value itself.

Response Please see our response to Reviewer 3 Public Review #4 above. Briefly, we no longer make this comparison. Instead, we now view the unique effects of Brain Cognition as a way to test how much Brain Age missed the variation in the brain MRI that could explain fluid cognition.

Reviewer 3 Public Review #7:

"First, how much does Brain Age add to what is already captured by chronological age? The short answer is very little" (p12). This is a really important point, but the paper requires an in-depth discussion of the inevitability of this result, as discussed above.

Response We agree that the tight relationship between Brain Age and chronological age is inevitable. We mentioned this from the get-go in the introduction:

Introduction “Accordingly, by design, Brain Age is tightly coupled with chronological age. Because chronological age usually has a strong relationship with fluid cognition, to begin with, it is unclear how much Brain Age adds to what is already captured by chronological age.”

To make this point obvious, we quantified the overlap between Brain Age and chronological age using the commonality analysis. We hope that our effort to show the inevitability of this overlap can make people more careful when designing studies involving Brain Age.

Reviewer 3 Public Review #8:

"Third, do we have a solution that can improve our ability to capture Cognition_fluid from brain MRI? The answer is, fortunately, yes. Using Brain Cognition as a biomarker, along with chronological age, seemed to capture a higher amount of variation in Cognition_fluid than only using Brain Age" (p12). I suggest controlling for the cognitive measure you predicted in your brain-cognition model. This will show that brain-cognition is not useful above and beyond cognition, highlighting the fact that it is not a useful endeavor to be using predicted values.

Response This point is similar to Reviewer 3 Public Review #6. Again please see our response to Reviewer 3 Public Review #4 above. Briefly, we no longer make this comparison or claim that Brain Cognition is ‘better’ than Brain Age. Instead, we now view the unique effects of Brain Cognition as a way to test how much Brain Age missed the variation in the brain MRI that could explain fluid cognition.

Reviewer 3 Public Review #9:

"Accordingly, a race to improve the performance of age-prediction models (Baecker et al., 2021) does not necessarily enhance the utility of Brain Age indices as a biomarker for Cognition_fluid. This calls for a new paradigm. Future research should aim to build prediction models for Brain Age indices that are not necessarily good at predicting age, but at capturing phenotypes of interest, such as Cognition_fluid and beyond" (p13). I whole-heartedly agree with the first two sentences, but strongly disagree with the last. Certainly your results, and the underlying reason as to why you found these results, calls for a new paradigm (or, one might argue, a pre-brain-age paradigm). As of now, your results do not suggest that researchers should keep going down the brain-age path. While it is difficult to prove that there is no transformation of brain-age or the brain-age gap that will be useful, I am nearly sure this is true from the research I have done. If you would like to suggest that the field should continue down this path, I suggest presenting a very good case to support this view.

Response Thank you for your comments on this issue.

Since the submission of our manuscript, other researchers also made a similar observation regarding the disagreement between the predictive performance of age-prediction models and the utility of Brain Age. For instance, in their systematic review, Jirsaraie and colleagues (2023, p7) wrote, “Despite mounting evidence, there is a persisting assumption across several studies that the most accurate brain age models will have the most potential for detecting differences in a given phenotype of interest. As a point of illustration, seven of the twenty studies in this review only evaluated the utility of their most accurate model, which in all cases was trained using multimodal features. This approach has also led to researchers to exclusively use T1-weighted and diffusion-weighted MRI scans when developing brain age models36 since such modalities have been shown to have the largest contribution to a model’s predictive power.2,67 However, our review suggests that model accuracy does not necessarily provide meaningful insight about clinical utility (e.g., detection of age-related pathology). Taken with prior studies,16,17 it appears that the most accurate models tend to not be the most useful.”

We now discussed the disagreement between the predictive performance of age-prediction models and the utility of Brain Age, not only in the context of cognitive functioning (Jirsaraie, Kaufmann, et al., 2023) but also in the context of neurological/psychological disorders (Bashyam et al., 2020; Rokicki et al., 2021). Following Reviewer 3’s suggestion, we also added several possible strategies, used by us and other groups, to mitigate this problem with Brain Age. Please see below.

Discussion:

“This discrepancy between the predictive performance of age-prediction models and the utility of Brain Age indices as a biomarker is consistent with recent findings (for review, see Jirsaraie, Gorelik, et al., 2023), both in the context of cognitive functioning (Jirsaraie, Kaufmann, et al., 2023) and neurological/psychological disorders (Bashyam et al., 2020; Rokicki et al., 2021). For instance, combining different MRI modalities into the prediction models, similar to our stacked models, often leads to the highest performance of age-prediction models, but is unlikely to explain the most variance across different phenotypes, including cognitive functioning and beyond (Jirsaraie, Gorelik, et al., 2023).”

“Next, researchers should not select age-prediction models based solely on age-prediction performance. Instead, researchers could select age-prediction models that explain phenotypes of interest best. Here we selected age-prediction models based on a set of features (i.e., modalities) of brain MRI. This strategy was found effective not only for fluid cognition as we demonstrated here, but also for neurological and psychological disorders as shown elsewhere (Jirsaraie, Gorelik, et al., 2023; Rokicki et al., 2021). Rokicki and colleagues (2021), for instance, found that, while integrating across MRI modalities led to age-prediction models with the highest age-prediction performance, using only T1 structural MRI gave age-prediction models that were better at classifying Alzheimer’s disease. Similarly, using only cerebral blood flow gave age-prediction models that were better at classifying mild/subjective cognitive impairment, schizophrenia and bipolar disorder.

As opposed to selecting age-prediction models based on a set of features, researchers could also select age-prediction models based on modelling methods. For instance, Jirsaraie and colleagues (2023) compared gradient tree boosting (GTB) and deep-learning brain network (DBN) algorithms in building age-prediction models. They found GTB to have higher age-prediction performance but DBN to have better utility in explaining cognitive functioning. In this case, an algorithm with better utility (e.g., DBN) should be used for explaining a phenotype of interest. Similarly, Bashyam and colleagues (2020) built different DBN-based age-prediction models, varying in age-prediction performance. The DBN models with a higher number of epochs corresponded to higher age-prediction performance. However, DBN-based age-prediction models with a moderate (as opposed to higher or lower) number of epochs were better at classifying Alzheimer’s disease, mild cognitive impairment and schizophrenia. In this case, a model from the same algorithm with better utility (e.g., those DBN with a moderate epoch number) should be used for explaining a phenotype of interest. Accordingly, this calls for a change in research practice, as recently pointed out by Jirsaraie and colleagues (2023, p7), “Despite mounting evidence, there is a persisting assumption across several studies that the most accurate brain age models will have the most potential for detecting differences in a given phenotype of interest”. Future neuroimaging research should aim to build age-prediction models that are not necessarily good at predicting age, but at capturing phenotypes of interest.”

Reviewer #1 (Recommendations For The Authors):

In this paper, the authors evaluate the utility of brain-age-derived metrics for predicting cognitive decline using the HCP aging dataset by performing a commonality analysis in a downstream regression. The main conclusion is that brain-age-derived metrics do not explain much additional variation in cognition over and above what is already explained by age. The authors propose to use a regression model trained to predict cognition ('brain-cognition') as an alternative that explains more unique variance in the downstream regression.

This is a reasonably good paper and the use of a commonality analysis is a nice contribution to understanding variance partitioning across different covariates. While the main message will not come as a surprise to anyone with hands-on experience of using brain-age models, I think it is nonetheless an important message to convey to the community. With that said, I have some comments that I believe the authors ought to address before publication.

Reviewer 1 Recommendations For The Authors #1:

First, from a conceptual point of view, the authors focus exclusively on cognition as a downstream outcome. This is undeniably important, but is only one application area for brain age models. They are also used for example to provide biomarkers for many brain disorders. What would the results presented here have to say about these application areas? Further, I think that since brain-age models by construction confound relevant biological variation with the accuracy of the regression models used to estimate them, my own opinion about the limits of interpretation of (e.g.) the brain-age gap is as a dimensionless biomarker. This has also been discussed elsewhere (see e.g. https://academic.oup.com/brain/article/143/7/2312/5863667). I would suggest the authors nuance their discussion to provide considerations on these issues.

Response Thank you Reviewer 1 for pointing out two important issues.

The first issue was about applications for brain disorders. We now made a detailed discussion about this, which also addressed Reviewer 3 Public Review #9. Briefly, we brought up:

  1. the consistency between our findings on fluid cognition and other recent work on brain disorders,

  2. the possibility that age-prediction models in Brain Age studies focusing on neurological/psychological disorders are under-fitted when applied to participants with those disorders, because the models were built from largely healthy participants, and

  3. the solutions we and others have suggested to optimise the utility of Brain Age for both cognitive functioning and brain disorders.

Discussion:

“This discrepancy between the predictive performance of age-prediction models and the utility of Brain Age indices as a biomarker is consistent with recent findings (for review, see Jirsaraie, Gorelik, et al., 2023), both in the context of cognitive functioning (Jirsaraie, Kaufmann, et al., 2023) and neurological/psychological disorders (Bashyam et al., 2020; Rokicki et al., 2021). For instance, combining different MRI modalities into the prediction models, similar to our stacked models, often leads to the highest performance of age-prediction models, but is unlikely to explain the most variance across different phenotypes, including cognitive functioning and beyond (Jirsaraie, Gorelik, et al., 2023).”

“There is a notable difference between studies investigating the utility of Brain Age in explaining cognitive functioning, including ours and others (e.g., Butler et al., 2021; Cole, 2020; Jirsaraie, Kaufmann, et al., 2023) and those explaining neurological/psychological disorders (e.g., Bashyam et al., 2020; Rokicki et al., 2021). That is, those Brain Age studies focusing on neurological/psychological disorders often build age-prediction models from MRI data of largely healthy participants (e.g., controls in a case-control design or large samples in a population-based design), apply the built age-prediction models to participants without vs. with neurological/psychological disorders and compare Brain Age indices between the two groups. This means that age-prediction models from Brain Age studies focusing on neurological/psychological disorders might be under-fitted when applied to participants with neurological/psychological disorders because they were built from largely healthy participants. Thus, the difference in Brain Age indices between participants without vs. with neurological/psychological disorders might be confounded by the under-fitted age-prediction models (i.e., Brain Age may predict chronological age well for the controls, but not for those with a disorder). On the contrary, our study and other Brain Age studies focusing on cognitive functioning often build age-prediction models from MRI data of largely healthy participants and apply the built age-prediction models to participants who are also largely healthy. Accordingly, the age-prediction models for explaining cognitive functioning do not suffer from being under-fitted. We consider this a strength, not a weakness, of our study.”

“Next, researchers should not select age-prediction models based solely on age-prediction performance. Instead, researchers could select age-prediction models that explain phenotypes of interest best. Here we selected age-prediction models based on a set of features (i.e., modalities) of brain MRI. This strategy was found effective not only for fluid cognition as we demonstrated here, but also for neurological and psychological disorders as shown elsewhere (Jirsaraie, Gorelik, et al., 2023; Rokicki et al., 2021). Rokicki and colleagues (2021), for instance, found that, while integrating across MRI modalities led to age-prediction models with the highest age-prediction performance, using only T1 structural MRI gave age-prediction models that were better at classifying Alzheimer’s disease. Similarly, using only cerebral blood flow gave age-prediction models that were better at classifying mild/subjective cognitive impairment, schizophrenia and bipolar disorder.

As opposed to selecting age-prediction models based on a set of features, researchers could also select age-prediction models based on modelling methods. For instance, Jirsaraie and colleagues (2023) compared gradient tree boosting (GTB) and deep-learning brain network (DBN) algorithms in building age-prediction models. They found GTB to have higher age-prediction performance but DBN to have better utility in explaining cognitive functioning. In this case, an algorithm with better utility (e.g., DBN) should be used for explaining a phenotype of interest. Similarly, Bashyam and colleagues (2020) built different DBN-based age-prediction models, varying in age-prediction performance. The DBN models with a higher number of epochs corresponded to higher age-prediction performance. However, DBN-based age-prediction models with a moderate (as opposed to higher or lower) number of epochs were better at classifying Alzheimer’s disease, mild cognitive impairment and schizophrenia. In this case, a model from the same algorithm with better utility (e.g., those DBN with a moderate epoch number) should be used for explaining a phenotype of interest. Accordingly, this calls for a change in research practice, as recently pointed out by Jirsaraie and colleagues (2023, p7), “Despite mounting evidence, there is a persisting assumption across several studies that the most accurate brain age models will have the most potential for detecting differences in a given phenotype of interest”. Future neuroimaging research should aim to build age-prediction models that are not necessarily good at predicting age, but at capturing phenotypes of interest.”

The second issue was about “the brain-age gap as a dimensionless biomarker.” We are not so clear on what the reviewer meant by “the dimensionless biomarker.” One possible meaning of “dimensionless biomarker” is that Brain Age indices computed from the same algorithm and the same modality can vary in how tightly or loosely they fit chronological age. This is what Bashyam and colleagues (2020) did in the article Reviewer 1 referred to. We now wrote about this strategy in the above paragraph in the Discussion.

Alternatively, “the dimensionless biomarker” might be something closer to what Reviewer 2 viewed Brain Age as a “generic/indirect” index (as opposed to a 'specific/direct' index in the case of Brain Cognition) (see Reviewer 2 Public Review #4). We discussed this in our response to Reviewer 3 Public Review #4.

Reviewer 1 Recommendations For The Authors #2:

Second, from a methods perspective, I am quite suspicious of the stacked regression models the authors are using to combine regression models and I suspect they may be overfit. In my experience, stacked models are very prone to overfitting when combined with cross-validation. This is because the predictions from the first level models (i.e. the features that are provided to the second-level 'stacked' models) contain information about the training set and the test set. If cross-validation is not done very carefully (e.g. using multiple hold-out sets), information leakage can easily occur at the second level. Unfortunately, there is not sufficient explanation of the methodological procedures in the current manuscript to fully understand what was done. First, please provide more information to enable the reader to better understand the stacked regression models and if the authors are not using an approach that fully preserves training and test separability, please do so.

Response: We would like to thank Reviewer 1 for the suggestion. We now made it clearer in the text and a new figure (see below) that we used nested cross-validation to ensure no information leakage between training and test sets. Regarding the stacked models more specifically, the hyperparameters of the stacked models were tuned in the same inner-fold CV as the non-stacked models (see Figure 7 below). That is, training both non-stacked and stacked models did not involve the test set, ensuring that there was no data leakage between training and test sets.

Methods:

“To compute Brain Age and Brain Cognition, we ran two separate prediction models. These prediction models either had chronological age or fluid cognition as the target and standardised brain MRI as the features (Denissen et al., 2022). We used nested cross-validation (CV) to build these models (see Figure 7). We first split the data into five outer folds. We used five outer folds so that each outer fold had around 100 participants. This is to ensure the stability of the test performance across folds. In each outer-fold CV, one of the outer folds was treated as a test set, and the rest was treated as a training set, which was further divided into five inner folds. In each inner-fold CV, one of the inner folds was treated as a validation set and the rest was treated as a training set. We used the inner-fold CV to tune for hyperparameters of the models and the outer-fold CV to evaluate the predictive performance of the models.

In addition to using each of the 18 sets of features in separate prediction models, we drew information across these sets via stacking. Specifically, we computed predicted values from each of the 18 sets of features in the training sets. We then treated different combinations of these predicted values as features to predict the targets in separate “stacked” models. The hyperparameters of the stacked models were tuned in the same inner-fold CV as the non-stacked model (see Figure 7). That is, training models for both non-stacked and stacked models did not involve the test set, ensuring that there was no data leakage between training and test sets. We specified eight stacked models: “All” (i.e., including all 18 sets of features), “All excluding Task FC”, “All excluding Task Contrast”, “Non-Task” (i.e., including only Rest FC and sMRI), “Resting and Task FC”, “Task Contrast and FC”, “Task Contrast” and “Task FC”. Accordingly, in total, there were 26 prediction models for Brain Age and Brain Cognition.”
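The leakage-safe nested CV and stacking scheme described above can be sketched as follows. This is a minimal illustration with synthetic data and scikit-learn (two toy "feature sets" instead of the 18 modalities in the paper), not the authors' actual code:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-in for two MRI "feature sets" (modalities)
X, y = make_regression(n_samples=250, n_features=40, noise=10.0, random_state=0)
feature_sets = [X[:, :20], X[:, 20:]]

outer = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train, test in outer.split(X):
    train_preds, test_preds = [], []
    for Xs in feature_sets:
        base = ElasticNetCV(cv=5, l1_ratio=[0.1, 0.5, 0.9])
        # Out-of-fold predictions within the training set become the
        # stacked features -- the test set is never touched here
        train_preds.append(cross_val_predict(base, Xs[train], y[train], cv=5))
        base.fit(Xs[train], y[train])
        test_preds.append(base.predict(Xs[test]))
    # Second level: the "stacked" model, tuned on the training set only
    stacked = ElasticNetCV(cv=5, l1_ratio=[0.1, 0.5, 0.9])
    stacked.fit(np.column_stack(train_preds), y[train])
    pred = stacked.predict(np.column_stack(test_preds))
    scores.append(np.corrcoef(pred, y[test])[0, 1])

print(round(float(np.mean(scores)), 2))
```

Because the second-level features are out-of-fold predictions within each outer-fold training set, the stacked model never sees test-set information, which is the separability property Reviewer 1 asked about.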

Reviewer 1 Recommendations For The Authors #3:

Third, the authors standardize the elastic net regression coefficients post-hoc. Why did the authors not perform the more standard approach of standardizing the covariates and responses, prior to model estimation, which would yield standardized regression coefficients (in the classical sense) by construction? Please also provide an indication of the different regression strengths that were estimated across the different models and cross-validation splits. Also, how stable were the weights across splits?

Response For model fitting, we did not “standardize the elastic net regression coefficients post-hoc.” Instead, we did all of the standardisation steps prior to model fitting (see Methods below). For regression strengths across different models and cross-validation splits, we now provided predictive performance at each of the five outer-fold test sets in Figure 1 (below). As you may have seen, the predictive performance was quite stable across the cross-validation splits.

For visualising feature importance, we originally only standardised the elastic net regression coefficients post-hoc, so that feature importance plots were on the same scale across folds. However, as mentioned by Reviewer 3 (Recommendations for the Authors #7, below), this might make it difficult to interpret the directionality of the coefficients. In the revised manuscript, we refitted the Elastic Net model to the full dataset without splitting it into five folds and visualised the coefficients on brain images (see below).

Methods

“We controlled for the potential influences of biological sex on the brain features by first residualising biological sex from brain features in each outer-fold training set. We then applied the regression of this residualisation to the corresponding test set. We also standardised the brain features in each outer-fold training set and then used the mean and standard deviation of this outer-fold training set to standardise the test set. All of the standardisation was done prior to fitting the prediction models.”
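The residualise-then-standardise procedure quoted above (training-set statistics applied to the test set) might look roughly like this sketch on simulated data; variable names are illustrative, not the authors':

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(1)
n_train, n_test, n_feat = 80, 20, 5
sex_train = rng.binomial(1, 0.5, n_train).reshape(-1, 1)
sex_test = rng.binomial(1, 0.5, n_test).reshape(-1, 1)
X_train = rng.normal(size=(n_train, n_feat)) + 0.5 * sex_train
X_test = rng.normal(size=(n_test, n_feat)) + 0.5 * sex_test

# Residualise biological sex from the brain features using the
# training set only, then apply the same regression to the test set
resid_model = LinearRegression().fit(sex_train, X_train)
X_train_res = X_train - resid_model.predict(sex_train)
X_test_res = X_test - resid_model.predict(sex_test)

# Standardise with the training set's mean and SD, applied to both sets
scaler = StandardScaler().fit(X_train_res)
X_train_z = scaler.transform(X_train_res)
X_test_z = scaler.transform(X_test_res)
```

Fitting both the residualisation and the scaler on the training set alone is what keeps the test set untouched before model fitting.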

“To understand how Elastic Net made a prediction based on different brain features, we examined the coefficients of the tuned model. Elastic Net coefficients can be considered as feature importance, such that more positive Elastic Net coefficients lead to more positive predicted values and, similarly, more negative Elastic Net coefficients lead to more negative predicted values (Molnar, 2019; Pat, Wang, Bartonicek, et al., 2022). While the magnitude of Elastic Net coefficients is regularised (thus making it difficult for us to interpret the magnitude itself directly), we can still infer that a brain feature with a higher-magnitude coefficient contributes relatively more strongly to the prediction. Another benefit of Elastic Net as a penalised regression is that the coefficients are less susceptible to collinearity among features as they have already been regularised (Dormann et al., 2013; Pat, Wang, Bartonicek, et al., 2022).

Given that we used five-fold nested cross-validation, different outer folds may have different degrees of ‘α’ and ‘l1 ratio’, making the final coefficients differ across folds. For instance, for certain sets of features, penalisation may not play a big part (i.e., a higher or lower ‘α’ leads to similar predictive performance), resulting in a different ‘α’ for different folds. To remedy this in the visualisation of Elastic Net feature importance, we refitted the Elastic Net model to the full dataset without splitting it into five folds and visualised the coefficients on brain images using the Brainspace (Vos De Wael et al., 2020) and Nilearn (Abraham et al., 2014) packages. Note that, unlike other sets of features, Task FC and Rest FC were modelled after data reduction via PCA. Thus, for Task FC and Rest FC, we first multiplied the absolute PCA scores (extracted from the ‘components_’ attribute of ‘sklearn.decomposition.PCA’) with the Elastic Net coefficients and then summed the multiplied values across the 75 components, leaving 71,631 ROI-pair indices.”
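The back-projection of Elastic Net coefficients through the PCA loadings, as described for Task FC and Rest FC, could be sketched as follows (toy dimensions; 75 components and 71,631 ROI pairs in the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(2)
X = rng.normal(size=(100, 300))   # stand-in for ROI-pair FC values
y = X[:, 0] - X[:, 1] + rng.normal(size=100)

pca = PCA(n_components=20).fit(X)                       # 75 in the paper
enet = ElasticNet(alpha=0.1, max_iter=10000).fit(pca.transform(X), y)

# Back-project: |component loadings| weighted by the Elastic Net
# coefficient of each component, summed over components
importance = np.abs(pca.components_).T @ enet.coef_     # one value per ROI pair
```

The result is one importance value per original ROI-pair feature, which can then be visualised on brain images.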

Reviewer 1 Recommendations For The Authors #4:

I do not really find it surprising that the level of unique explained variance provided by a brain-cognition model is higher than a brain-age model, given that the latter is considerably more accurate (also, in view of the comment above). As such I would recommend to tone down the claims about the utility of this method, also because it is only really applicable to one application area for brain age.

Response Thank you for bringing this issue to our attention. We have now toned down the claims about the utility of Brain Cognition and importantly treated the capability of Brain Cognition in capturing fluid cognition as the upper limit of Brain Age’s capability in capturing fluid cognition. Please see Reviewer 3 Public Review #4 above for a detailed discussion about this issue.

Reviewer 1 Recommendations For The Authors #5:

Please provide more details about the task designs and MRI processing procedures that were employed on this sample so that the reader is not forced to dig through the publications from the consortia contributing the data samples used. For example, comments such as "Here we focused on the pre-processed task fMRI files with a suffix "_PA_Atlas_MSMAll_hp0_clean.dtseries.nii." are not particularly helpful to readers not already familiar with this dataset.

Response Thank you so much for pointing out this important point on the clarity of the description of our MRI methodology. We now added additional details about the data processing done by the HCP-A and by us. We, for instance, explained the meaning of the HCP-A suffix “_PA_Atlas_MSMAll_hp0_clean.dtseries.nii”. Please see below.

Methods

“HCP-A provides details of parameters for brain MRI elsewhere (Bookheimer et al., 2019; Harms et al., 2018). Here we used MRI data that were pre-processed by the HCP-A with recommended methods, including the MSMALL alignment (Glasser et al., 2016; Robinson et al., 2018) and ICA-FIX (Glasser et al., 2016) for functional MRI. We used multiple brain MRI modalities, covering task functional MRI (task fMRI), resting-state functional MRI (rsfMRI) and structural MRI (sMRI), and organised them into 18 sets of features.

Sets of Features 1-10: Task fMRI contrast (Task Contrast)

Task contrasts reflect fMRI activation relevant to events in each task. Bookheimer and colleagues (2019) provided detailed information about the fMRI in HCP-A. Here we focused on the pre-processed task fMRI Connectivity Informatics Technology Initiative (CIFTI) files with a suffix, “_PA_Atlas_MSMAll_hp0_clean.dtseries.nii.” These CIFTI files encompassed both the cortical mesh surface and subcortical volume (Glasser et al., 2013). Collected using the posterior-to-anterior (PA) phase, these files were aligned using MSMALL (Glasser et al., 2016; Robinson et al., 2018), linear detrended (see https://groups.google.com/a/humanconnectome.org/g/hcp-users/c/ZLJc092h980/m/GiihzQAUAwAJ) and cleaned from potential artifacts using ICA-FIX (Glasser et al., 2016).

To extract Task Contrasts, we regressed the fMRI time series on the convolved task events using a double-gamma canonical hemodynamic response function via FMRIB Software Library (FSL)’s FMRI Expert Analysis Tool (FEAT) (Woolrich et al., 2001). We kept FSL’s default high pass cutoff at 200s (i.e., .005 Hz). We then parcellated the contrast ‘cope’ files, using the Glasser atlas (Glasser et al., 2016) for cortical surface regions and Freesurfer’s automatic segmentation (aseg) (Fischl et al., 2002) for subcortical regions. This resulted in 379 regions; this number was, in turn, the number of features for each Task Contrast set of features.
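The contrast-extraction idea (a task regressor convolved with a double-gamma HRF, fit per parcellated region) can be sketched with a simplified numpy/scipy GLM; this toy version stands in for the authors' FSL FEAT pipeline, and the simulated dimensions and HRF shape are illustrative only:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.RandomState(7)
n_tp, n_roi = 150, 379

# Toy double-gamma HRF and a two-block boxcar task design
t = np.arange(0, 30)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
box = np.zeros(n_tp)
box[20:40] = 1
box[80:100] = 1
reg = np.convolve(box, hrf)[:n_tp]          # convolved task regressor

# Simulated parcellated ROI time series with task-evoked activation
ts = rng.normal(size=(n_tp, n_roi)) + np.outer(reg, rng.uniform(0, 2, n_roi))

# GLM per region: intercept + task regressor; the task beta plays the
# role of a 'cope'-like contrast value for each of the 379 regions
X = np.column_stack([np.ones(n_tp), reg])
betas = np.linalg.lstsq(X, ts, rcond=None)[0]
contrast = betas[1]                          # one contrast value per region
```

The resulting 379 values correspond to the per-region features in one Task Contrast set.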

HCP-A collected fMRI data from three tasks: Face Name (Sperling et al., 2001), Conditioned Approach Response Inhibition Task (CARIT) (Somerville et al., 2018) and VISual MOTOR (VISMOTOR) (Ances et al., 2009). First, the Face Name task (Sperling et al., 2001) taps into episodic memory. The task had three blocks. In the encoding block [Encoding], participants were asked to memorise the names of faces shown. These faces were then shown again in the recall block [Recall] when the participants were asked if they could remember the names of the previously shown faces. There was also the distractor block [Distractor] occurring between the encoding and recall blocks. Here participants were distracted by a Go/NoGo task. We computed six contrasts for this Face Name task: [Encode], [Recall], [Distractor], [Encode vs. Distractor], [Recall vs. Distractor] and [Encode vs. Recall].

Second, the CARIT task (Somerville et al., 2018) was adapted from the classic Go/NoGo task and taps into inhibitory control. Participants were asked to press a button to all [Go] but not to two [NoGo] shapes. We computed three contrasts for the CARIT task: [NoGo], [Go] and [NoGo vs. Go].

Third, the VISMOTOR task (Ances et al., 2009) was designed to test simple activation of the motor and visual cortices. Participants saw a checkerboard with a red square either on the left or right. They needed to press a corresponding key to indicate the location of the red square. We computed just one contrast for the VISMOTOR task: [Vismotor], which indicates the presence of the checkerboard vs. baseline.

Sets of Features 11-13: Task fMRI functional connectivity (Task FC)

Task FC reflects functional connectivity (FC) among the brain regions during each task, which is considered an important source of individual differences (Elliott et al., 2019; Fair et al., 2007; Gratton et al., 2018). We used the same CIFTI file, “_PA_Atlas_MSMAll_hp0_clean.dtseries.nii,” as the task contrasts. Unlike Task Contrasts, here we treated the double-gamma, convolved task events as regressors of no interest and focused on the residuals of the regression from each task (Fair et al., 2007). We computed these regressors on FSL, and regressed them out in nilearn (Abraham et al., 2014). Following previous work on task FC (Elliott et al., 2019), we applied a highpass at .008 Hz. For parcellation, we used the same atlases as Task Contrast (Fischl et al., 2002; Glasser et al., 2016). We computed Pearson’s correlations of each pair of the 379 regions, resulting in a table of 71,631 non-overlapping FC indices for each task. We then applied r-to-z transformation and principal component analysis (PCA) of 75 components (Rasero et al., 2021; Sripada et al., 2019, 2020). Note that, to avoid data leakage, we conducted the PCA on each training set and applied its definition to the corresponding test set. Accordingly, there were three sets of 75 features for Task FC, one for each task.”
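The Task FC pipeline above (regress out task events, correlate residual ROI time series, r-to-z transform, then PCA) can be sketched on toy data; 10 ROIs stand in for the 379 regions and 75 components of the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(3)
n_subj, n_tp, n_roi = 30, 120, 10
task_regressor = np.sin(np.linspace(0, 8 * np.pi, n_tp)).reshape(-1, 1)

fc_rows = []
for _ in range(n_subj):
    ts = rng.normal(size=(n_tp, n_roi)) + task_regressor   # ROI time series
    # Treat the convolved task events as regressors of no interest;
    # task FC is computed on the residuals
    fit = LinearRegression().fit(task_regressor, ts)
    resid = ts - fit.predict(task_regressor)
    r = np.corrcoef(resid.T)                  # ROI-by-ROI correlations
    iu = np.triu_indices(n_roi, k=1)          # non-overlapping ROI pairs
    fc_rows.append(np.arctanh(r[iu]))         # Fisher r-to-z transform
fc = np.vstack(fc_rows)                       # subjects x ROI pairs

# PCA reduction (75 components in the paper); in the actual pipeline
# this is fit on the training set only and applied to the test set
fc_pcs = PCA(n_components=5).fit_transform(fc)
```

With 10 ROIs there are 45 non-overlapping pairs per subject, mirroring the 71,631 pairs from 379 regions.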

Reviewer 1 Recommendations For The Authors #6:

Similarly, please be more specific about the regression methods used. There are several different parameterisations of the elastic net, please provide equations to describe the method used here so that readers can easily determine how the regularisation parameters should be interpreted. The same goes for the methods used for correcting bias, e.g. what is "de Lange and Cole's (2020) 5th equation"?

Response Thank you. We now made a detailed description of Elastic Net including its equation (see below). We also added more specific details about the methods used for correcting bias in Brain Age indices (see our response to Reviewer 3 Public Review #2 above).
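One common form of the age-dependency adjustment referenced here (in the style of de Lange and Cole, 2020) regresses predicted age on chronological age and keeps the residual as the corrected gap. This sketch is our illustrative reading on simulated data, not the authors' exact equation, and for brevity it fits the correction on the same sample rather than on a separate training set:

```python
import numpy as np

rng = np.random.RandomState(4)
age = rng.uniform(36, 100, 200)                  # HCP-A-like age range
# Simulated predictions showing the classic regression-to-the-mean bias
predicted = 0.6 * age + 0.4 * age.mean() + rng.normal(0, 5, 200)

# Fit predicted = a * age + b, then correct:
# corrected = predicted + (age - (a * age + b))
a, b = np.polyfit(age, predicted, 1)
corrected = predicted + (age - (a * age + b))
gap = corrected - age                            # Corrected Brain Age Gap
```

After this adjustment the gap is, by construction, uncorrelated with chronological age, which is why the commonality analysis can then ask what the gap explains beyond age.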

Methods:

“For the machine learning algorithm, we used Elastic Net (Zou & Hastie, 2005). Elastic Net is a general form of penalised regression (including Lasso and Ridge regression), allowing us to simultaneously draw information across different brain indices to predict one target variable. Penalised regressions are commonly used for building age-prediction models (Jirsaraie, Gorelik, et al., 2023). Previously we showed that the performance of Elastic Net in predicting cognitive abilities is on par with, if not better than, many non-linear and more complicated algorithms (Pat, Wang, Bartonicek, et al., 2022; Tetereva et al., 2022). Moreover, Elastic Net coefficients are readily explainable, allowing us to explain how our age-prediction and cognition-prediction models made predictions from each brain feature (Molnar, 2019; Pat, Wang, Bartonicek, et al., 2022) (see below).

Elastic Net minimises the sum of squared prediction errors along with a weighted penalty on the features’ coefficients. The degree of penalty to the sum of the features’ coefficients is determined by a shrinkage hyperparameter ‘α’: the greater the α, the more the coefficients shrink, and the more regularised the model becomes. Elastic Net also includes another hyperparameter, ‘l1 ratio’, which determines the degree to which the sum of either the squared (known as ‘Ridge’; l1 ratio = 0) or absolute (known as ‘Lasso’; l1 ratio = 1) coefficients is penalised (Zou & Hastie, 2005). The objective function of Elastic Net as implemented by sklearn (Pedregosa et al., 2011) is defined as:

argmin_w ( ‖y − Xw‖₂² / (2 × n_samples) + α × l1_ratio × ‖w‖₁ + 0.5 × α × (1 − l1_ratio) × ‖w‖₂² ),  (1)

where X is the features, y is the target, and w is the coefficient vector. In our grid search, we tuned two Elastic Net hyperparameters: α, using 70 numbers in log space ranging from .1 to 100, and l1 ratio, using 25 numbers in linear space ranging from 0 to 1.”
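The grid search described in the quoted Methods could be sketched as follows, with a smaller grid than the 70 × 25 values in the paper to keep the example fast; the synthetic data are illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=120, n_features=30, noise=5.0, random_state=5)

# Alpha over log space, l1_ratio over linear space, as in the paper
param_grid = {
    "alpha": np.logspace(np.log10(0.1), np.log10(100), 10),
    "l1_ratio": np.linspace(0, 1, 5),
}
search = GridSearchCV(ElasticNet(max_iter=10000), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

In the authors' pipeline this tuning happens inside the inner folds of the nested CV, so the outer-fold test set never influences the chosen hyperparameters.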

Additional minor points:

Reviewer 1 Recommendations For The Authors #7:

  • Please provide more descriptive figure legends, especially for Figs 5 and 6. For example, what do the boldface numbers reflect? What do the asterisks reflect?

Response Thank you for the suggestion. We made changes to the figure legends to make it clearer what the numbers and asterisks reflect.

Reviewer 1 Recommendations For The Authors #8:

  • Perhaps this is a personal thing, but I find the nomenclature cognition_{fluid} to be quite awkward. Why not just define FC as an acronym?

Response Thank you for the suggestion. We now used the word ‘fluid cognition’ throughout the manuscript.

Reviewer #2 (Recommendations For The Authors):

Suggestions for improved or additional experiments, data or analyses.

Reviewer 2 Recommendations For The Authors #1:

• Since the study did not provide external validation for the indices, it is unclear how well the models would perform and generalize to other samples. Therefore, it is recommended to conduct out-of-sample testing of the models.

Response Thank you for the suggestion. We now added discussion of the consistency between our results and several recent studies that investigated similar issues with Brain Age in different populations, e.g., large samples of older adults in the UK Biobank (Cole, 2020) and younger populations (Butler et al., 2021; Jirsaraie, Kaufmann, et al., 2023), and in a broader context extending to neurological and psychological disorders (for review, see Jirsaraie, Gorelik, et al., 2023). Please see below.

Please also note that all of the analyses were conducted out-of-sample. We used nested cross-validation to evaluate the predictive performance of age- and cognition-prediction models on the outer-fold test sets, which are out-of-sample from the training sets (please see Reviewer 1 Recommendations For The Authors #2). Similarly, we conducted all of the commonality analyses on the outer-fold test sets.

Discussion

“The small effects of the Corrected Brain Age Gap in explaining fluid cognition of aging individuals found here are consistent with studies in older adults (Cole, 2020) and younger populations (Butler et al., 2021; Jirsaraie, Kaufmann, et al., 2023). Cole (2020) studied the utility of Brain Age on cognitive functioning of large samples (n>17,000) of older adults, aged 45-80 years, from the UK Biobank (Sudlow et al., 2015). He constructed age-prediction models using LASSO, a penalised regression similar to ours, and applied the same age-dependency adjustment as ours. Cole (2020) then conducted a multiple regression explaining cognitive functioning from Corrected Brain Age Gap while controlling for chronological age and other potential confounds. He found Corrected Brain Age Gap to be significantly related to performance in four out of six cognitive measures, and among those significant relationships, the effect sizes were small, with a maximum partial eta-squared of .0059. Similarly, Jirsaraie and colleagues (2023) studied the utility of Brain Age on cognitive functioning of youths aged 8-22 years old from the Human Connectome Project in Development (Somerville et al., 2018) and the Preschool Depression Study (Luby, 2010). They built age-prediction models using gradient tree boosting (GTB) and a deep-learning brain network (DBN) and adjusted the age dependency of Brain Age Gap using Smith and colleagues’ (2019) method. Using multiple regressions, Jirsaraie and colleagues (2023) found weak effects of the adjusted Brain Age Gap on cognitive functioning across five cognitive tasks, five age-prediction models and the two datasets (mean standardised regression coefficient = -0.09, see their Table S7). Next, Butler and colleagues (2021) studied the utility of Brain Age on cognitive functioning of another group of youths aged 8-22 years old from the Philadelphia Neurodevelopmental Cohort (PNC) (Satterthwaite et al., 2016).
Here they used Elastic Net to build age-prediction models and applied another age-dependency adjustment method, proposed by Beheshti and colleagues (2019). Similar to the aforementioned results, Butler and colleagues (2021) found a weak, statistically non-significant correlation between the adjusted Brain Age Gap and cognitive functioning at r=-.01, p=.71. Accordingly, the utility of Brain Age in explaining cognitive functioning beyond chronological age appears to be weak across age groups, different predictive modelling algorithms and age-dependency adjustments.“

“This discrepancy between the predictive performance of age-prediction models and the utility of Brain Age indices as a biomarker is consistent with recent findings (for review, see Jirsaraie, Gorelik, et al., 2023), both in the context of cognitive functioning (Jirsaraie, Kaufmann, et al., 2023) and neurological/psychological disorders (Bashyam et al., 2020; Rokicki et al., 2021). For instance, combining different MRI modalities into the prediction models, similar to our stacked models, often leads to the highest performance of age-prediction models, but is unlikely to explain the most variance across different phenotypes, including cognitive functioning and beyond (Jirsaraie, Gorelik, et al., 2023).”

“Third, by introducing Brain Cognition, we showed the extent to which Brain Age indices were not able to capture the variation of brain MRI that is related to fluid cognition. Brain Cognition, from certain cognition-prediction models such as the stacked models, has relatively good predictive performance, consistent with previous studies (Dubois et al., 2018; Pat, Wang, Anney, et al., 2022; Rasero et al., 2021; Sripada et al., 2020; Tetereva et al., 2022; for review, see Vieira et al., 2022). We then examined Brain Cognition using commonality analyses (Nimon et al., 2008) in multiple regression models having a Brain Age index, chronological age and Brain Cognition as regressors to explain fluid cognition. Similar to Brain Age indices, Brain Cognition exhibited large common effects with chronological age. But more importantly, unlike Brain Age indices, Brain Cognition showed large unique effects, up to around 11%. The unique effects of Brain Cognition indicated the amount of co-variation between brain MRI and fluid cognition that was missed by a Brain Age index and chronological age. This missing amount was relatively high, considering that Brain Age and chronological age together explained around 32% of the total variation in fluid cognition. Accordingly, if a Brain Age index was used as a biomarker along with chronological age, we would have missed an opportunity to improve the performance of the model by around one-third of the variation explained. “

“There is a notable difference between studies investigating the utility of Brain Age in explaining cognitive functioning, including ours and others (e.g., Butler et al., 2021; Cole, 2020; Jirsaraie, Kaufmann, et al., 2023) and those explaining neurological/psychological disorders (e.g., Bashyam et al., 2020; Rokicki et al., 2021). That is, those Brain Age studies focusing on neurological/psychological disorders often build age-prediction models from MRI data of largely healthy participants (e.g., controls in a case-control design or large samples in a population-based design), apply the built age-prediction models to participants without vs. with neurological/psychological disorders and compare Brain Age indices between the two groups. This means that age-prediction models from Brain Age studies focusing on neurological/psychological disorders might be under-fitted when applied to participants with neurological/psychological disorders because they were built from largely healthy participants. And thus, the difference in Brain Age indices between participants without vs. with neurological/psychological disorders might be confounded by the under-fitted age-prediction models (i.e., Brain Age may predict chronological age well for the controls, but not for those with a disorder). On the contrary, our study and other Brain Age studies focusing on cognitive functioning often build age-prediction models from MRI data of largely healthy participants and apply the built age-prediction models to participants who are also largely healthy. Accordingly, the age-prediction models for explaining cognitive functioning do not suffer from being under-fitted. We consider this as a strength, not a weakness of our study.”

Reviewer 2 Recommendations For The Authors #2:

• Employ Variance Inflation Factor (VIF) to empirically test for multicollinearity.

Response Given the high common effects between many of the regressors in the models (e.g., between Brain Age and chronological age), VIF will be high, but this is not a concern for the commonality analysis. We now show that applying the commonality analysis to multiple regressions yields results that are robust against multicollinearity, as demonstrated elsewhere (Ray-Mukherjee et al., 2014, Using commonality analysis in multiple regressions: A tool to decompose regression effects in the face of multicollinearity). Specifically, using multiple regressions by themselves without the commonality analysis, researchers have to rely on beta estimates, which are strongly affected by multicollinearity (e.g., a phenomenon known as the Suppression Effect). However, by applying the commonality analysis on top of multiple regressions, researchers can instead rely on R2 estimates, which are less affected by multicollinearity. This can be seen in our case (Figures 5 and 6), where Brain Age indices had the same unique effects regardless of the level of common effects they had with chronological age (e.g., Brain Age vs. Corrected Brain Age Gap from stacked models).

To directly demonstrate the robustness of the current commonality analysis regarding multicollinearity, we applied the commonality analysis to Ridge regressions (see Supplementary Figures 3 and 5 below). Ridge regression is a method designed to deal with multicollinearity (Dormann et al., 2013). As seen below, the results from commonality analyses applied to Ridge regressions are closely matched with our original results.
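The two-regressor decomposition itself is simple to compute from fitted R² values. The sketch below is a minimal numeric illustration with simulated data and hypothetical variable names, not the manuscript’s code:

```python
import numpy as np

def r2(X, y):
    """In-sample OLS R^2 via least squares (for illustration only)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def commonality_two(a, b, y):
    """Decompose R^2 of y ~ a + b into unique(a), unique(b) and common(a, b)."""
    r_ab = r2(np.column_stack([a, b]), y)
    r_a = r2(a[:, None], y)
    r_b = r2(b[:, None], y)
    return {"unique_a": r_ab - r_b,   # effect of a beyond b
            "unique_b": r_ab - r_a,   # effect of b beyond a
            "common": r_a + r_b - r_ab,
            "total": r_ab}

rng = np.random.default_rng(0)
age = rng.normal(size=500)
brain_age = age + 0.3 * rng.normal(size=500)       # highly collinear with age
cognition = -0.8 * age + 0.2 * rng.normal(size=500)

parts = commonality_two(age, brain_age, cognition)
# By construction, unique_a + unique_b + common == total, no matter how
# collinear the two regressors are, which is why VIF is not a concern here.
```

The unique effects are increments in R², not beta estimates, which is the sense in which the decomposition sidesteps multicollinearity.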

Methods

“Note that, to ensure the commonality analysis results were robust against multicollinearity (Ray-Mukherjee et al., 2014), we also repeated the same commonality analyses done here using Ridge regression, as opposed to multiple regression. Ridge regression is a method designed to deal with multicollinearity (Dormann et al., 2013). See Supplementary Figure 3 for the Ridge regression with chronological age and each Brain Age index as regressors and Supplementary Figure 5 for the Ridge regression with chronological age, each Brain Age index and Brain Cognition as regressors. Briefly, the results from commonality analyses applied to Ridge regressions closely match our results using multiple regression.”

Reviewer 2 Recommendations For The Authors #3:

• Incorporate non-linearities in the correction of brain-age indices, such as separate terms in the regression or statistical analyses.

Response Thank you for the suggestion. We now added a non-linear term of chronological age in our multiple-regression models explaining fluid cognition (see Supplementary Figure 4 and 6 below). Originally we did not have the quadratic term for chronological age in our model since the relationship between chronological age and fluid cognition was relatively linear (see Figure 1 above). Accordingly, as expected, adding the quadratic term for chronological age as suggested did not change the pattern of the results of the commonality analyses.

Methods

“Similarly, to ensure that we were able to capture the non-linear pattern of chronological age in explaining fluid cognition, we added a quadratic term of chronological age to our multiple-regression models in the commonality analyses. See Supplementary Figure 4 for the multiple regression with chronological age, squared chronological age and each Brain Age index as regressors and Supplementary Figure 6 for the multiple regression with chronological age, squared chronological age, each Brain Age index and Brain Cognition as regressors. Briefly, adding the quadratic term for chronological age did not change the pattern of the results of the commonality analyses.”

Reviewer 2 Recommendations For The Authors #4:

• It would be helpful to include the complete set of results in the appendix - for instance, the statistical significance for each component for the final commonality analysis.

Response Figures 5 and 6 (see above) already have asterisks to reflect the statistical significance of the unique effects. Because of this, we do not believe we need more figures/tables in the appendix to show statistical significance.

Recommendations for improving the writing and presentation.

Reviewer 2 Recommendations For The Authors #5:

• The authors are encouraged to refrain from using terms such as 'fortunately', 'unfortunately', and 'unsettling', as they may appear inappropriate when referring to empirical findings.

Response We agree with this suggestion and no longer use those words.

Reviewer 2 Recommendations For The Authors #6:

• It would be helpful to clarify in the methods that you end up with 5 test folds.

Response We now clarified why we chose 5 test folds.

Methods

“We used nested cross-validation (CV) to build these models (see Figure 7). We first split the data into five outer folds. We used five outer folds so that each outer fold had around 100 participants. This is to ensure the stability of the test performance across folds.”
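The nested-CV logic can be sketched as follows; the data, fold counts and hyperparameter grid are illustrative stand-ins for the manuscript’s pipeline:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import r2_score
from sklearn.model_selection import GridSearchCV, KFold

# Toy data standing in for ~500 participants' brain features and a target
X, y = make_regression(n_samples=500, n_features=30, noise=5.0, random_state=0)

outer = KFold(n_splits=5, shuffle=True, random_state=0)  # ~100 per outer fold
test_scores = []
for train_idx, test_idx in outer.split(X):
    # Inner loop: hyperparameters are tuned on the training folds only
    inner = GridSearchCV(
        ElasticNet(max_iter=5000),
        {"alpha": np.logspace(-1, 2, 5), "l1_ratio": np.linspace(0.1, 1, 3)},
        cv=5,
    )
    inner.fit(X[train_idx], y[train_idx])
    # Performance is evaluated on the held-out outer-fold test set, which is
    # out-of-sample with respect to both fitting and tuning
    test_scores.append(r2_score(y[test_idx], inner.predict(X[test_idx])))
```

Each of the five test scores is an out-of-sample estimate, which is the sense in which all of the analyses above are out-of-sample.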

Minor corrections to the text and figures.

Reviewer 2 Recommendations For The Authors #7:

• Why use months, not years for chronological age? This seems inappropriate given the age range.

Response We originally used months since they were the units used in our prediction modelling. However, to make the figures easier to understand, we now use years.

Reviewer 2 Recommendations For The Authors #8:

• The formatting, especially regarding the text embedded within the figures, could benefit from significant improvements.

Response Thank you for the suggestion. We made changes to the text embedded within the figures. They should be more readable now.

Reviewer 2 Recommendations For The Authors #9:

• The legend for the neuroimaging feature labels is missing, and the captions are incomplete.

Response Please see Figure 2 above. We revised it by adding the letters L and R to indicate the laterality of the brain images. We also made changes to the captions to make sure they are complete.

Reviewer 2 Recommendations For The Authors #10:

• Figure 5's caption: SD has a missing decimal point).

Response The numbers are not SD. The numbers to the left of the figure represent the unique effects of chronological age in %, the numbers in the middle of the figure represent the common effects between chronological age and the Brain Age index in %, and the numbers to the right of the figure represent the unique effects of the Brain Age index in %. We now use one decimal point consistently for these numbers.

Reviewer #3 (Recommendations For The Authors):

The main question of this article is as follows: “To what extent does having information on Brain Age improve our ability to capture declines in fluid cognition beyond knowing a person’s chronological age?” While this question is worthwhile, considering most of the field is confused about the nature of brain age, the authors are currently missing an opportunity to convey the inevitability of their results given how Brain Age and the Brain Age Gap are calculated. They also misleadingly convey that Brain Cognition is somehow superior to Brain Age. If the authors work on conveying the inevitability of their results and redo (or remove) their section on Brain Cognition, I can see how their results would be enlightening to the general neuroimaging community that is interested in the concept of brain age. See below for specific critiques.

Response Please see our response to Reviewer 3 Public Review Overall. Note we no longer argue that Brain Cognition is superior to Brain Age (Reviewer 3 Public Review #4). Rather, we treated the capability of Brain Cognition in capturing fluid cognition as the upper limit of Brain Age’s capability in capturing fluid cognition. We used the unique effects of Brain Cognition that explain fluid cognition beyond Brain Age and chronological age to indicate how much Brain Age misses the variation in the brain MRI that could explain fluid cognition.

Reviewer 3 Recommendations For The Authors #1:

“There are many adjustments proposed to correct for this estimation bias” (p3) → Regression to the mean is not a sign of bias. Any decent loss function will result in over- predicting the age of younger individuals and under-predicting the age of older individuals. This is a direct result of minimizing an error term (e.g., mean squared error). Therefore, it is inappropriate to refer to regression to the mean as a sign of bias. This misconception has led to a great deal of inappropriate analyses, including “correcting” the brain age gap by regressing out age.

Response Please see our response to Reviewer 3 Public Review#1

Reviewer 3 Recommendations For The Authors #2:

“Corrected Brain Age Gap in particular is viewed as being able to control for both age dependency and estimation biases (Butler et al., 2021).” (p3) → This summary is not accurate as Butler and colleagues did not use the words "corrected" and "biases" in this context. All that authors say in that paper is that regressing out age from the brain age gap - which is referred to as the modified brain age gap (MBAG) - makes it so that the modified brain age gap is not dependent on age, which is true. This metric is meaningless, though, because it is the variance left over after regressing out age from residuals from a model that was predicting age. If it were not for the fact that regression on residuals is not equivalent to multiple regression (and out of sample estimates), MBAG would be a vector of zeros. Upon reading your Methods, I noticed that you are using a metric for Le et al. (2018) for your “Corrected Brain Age Gap”. If they cite the Butler et al. (2021) paper, I highly recommend sticking with the same notation, metrics and terminology throughout. That would greatly help with the interpretability of your paper, and cross-comparisons between the two.

Response Please see our response to Reviewer 3 Public Review #2.

Reviewer 3 Recommendations For The Authors #3:

“However, the improvement in predicting chronological age may not necessarily make Brain Age to be better at capturing Cognitionfluid. If, for instance, the age-prediction model had the perfect performance, Brain Age Gap would be exactly zero and would have no utility in capturing Cognitionfluid beyond chronological age.” (p3) → I largely agree with this statement. I would be really careful to distinguish between Brain Age and the Brain Age Gap here, as the former is a predicted value, and the latter is the residual times -1 (predicted age - age). Therefore, together they explain all of the variance in age. If you change the first sentence to refer to the Brain Age Gap, this statement makes more sense. The Brain Age Gap will never be exactly zero, though, even with perfect prediction on the training set, because subjects in the testing set are different from the subjects in the training set.

Response Please see our response to Reviewer 3 Public Review #3.

Reviewer 3 Recommendations For The Authors #4:

“Can we further improve our ability to capture the decline in cognitionfluid by using, not only Brain Age and chronological age, but also another biomarker, Brain Cognition?” → This question is fundamentally getting at whether a predicted value of cognition can predict cognition. Assuming the brain parameters can predict cognition decently, and the original cognitive measure that you were predicting is related to your measure of fluid cognition, the answer should be yes. This seems like an uninteresting question to me. Upon reading your Methods, it became clear that the cognitive variable in the model predicting cognition using brain features (to get predicted cognition, or as you refer to it, Brain Cognition) is the same as the measure of fluid cognition that you are trying to assess how well Brain Cognition can predict. Assuming the brain parameters can predict fluid cognition at all, of course Brain Cognition will predict fluid cognition. This is inevitable. You should never use predicted values of a variable to predict the same variable.

Response Please see our response to Reviewer 3 Public Review #4.

Reviewer 3 Recommendations For The Authors #5:

“We also examined if these better-performing age-prediction models improved the ability of Brain Age in explaining Cognitionfluid.” → Improved above and beyond what?

Response We meant whether better-performing age-prediction models improved the ability of Brain Age in explaining fluid cognition over and above lower-performing age-prediction models. We made changes to the Introduction to clarify this point.

Reviewer 3 Recommendations For The Authors #6:

Figure 1 b & c → It is a little difficult to read the text by the horizontal bars in your plots. Please make the text smaller so that there is more space between the words vertically, or even better, make the plots slightly bigger. Please also put the predicted values on the y-axis. This is standard practice for displaying regression results. To make more room, you can get rid of your rPearson or your R2 plot, considering the latter is simply the square of the former. If you want to make it clear that the association is positive between all of your variables, I would keep rPearson.

Response Thank you so much for the suggestions.

  1. We now made sure that the text by the horizontal bars in Figure 1b and c is readable.

  2. Note that, in the prediction-modelling/machine-learning literature, it is more common to plot observed/real values on the y-axis. Here is the logic of our practice: the values on the x-axis are the predicted values based on the model, and we would like to see whether changes in the predicted values correspond to changes in the observed/real values on the y-axis.

  3. Regarding Pearson correlation vs R2, please note that we wrote “for R2, we used the sum of squares definition (i.e., R2 = 1 – (sum of squares residuals/total sum of squares)) per a previous recommendation (Poldrack et al., 2020).” As such, R2 is NOT the square of the Pearson correlation. In fact, in Poldrack and colleagues’ “Establishment of Best Practices for Evidence for Prediction” paper (2020), they discourage 1) the use of the Pearson correlation by itself and 2) the use of the squared correlation coefficient as R2 (as opposed to the sum-of-squares definition):

“It is common in the literature to use the correlation between predicted and actual values as a measure of predictive performance; of the 64 studies in our literature review that performed prediction analyses on continuous outcomes, 30 reported such correlations as a measure of predictive performance. This reporting is problematic for several reasons. First, correlation is not sensitive to scaling of the data; thus, a high correlation can exist even when predicted values are discrepant from actual values. Second, correlation can sometimes be biased, particularly in the case of leave-one-out cross-validation. As demonstrated in Figure 4, the correlation between predicted and actual values can be strongly negative when no predictive information is present in the model. A further problem arises when the variance explained (R2) is incorrectly computed by squaring the correlation coefficient. Although this computation is appropriate when the model is obtained using the same data, it is not appropriate for out-of-sample testing; instead, the amount of variance explained should be computed using the sum-of-squares formulation (as implemented in software packages such as scikit-learn).”


Accordingly, we decided to keep both R2 and Pearson correlation (along with MAE) in our Figure 1.
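The difference between the two definitions is easy to demonstrate: squared correlation is insensitive to scale and offset, whereas the sum-of-squares R² penalises miscalibrated predictions, as can happen out of sample. The numbers below are simulated for illustration only:

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
y = rng.normal(size=200)     # observed values
y_pred = 2.0 * y + 5.0       # perfectly correlated but badly scaled and shifted

r = np.corrcoef(y, y_pred)[0, 1]
r2_from_corr = r ** 2        # 1.0: squared correlation ignores the miscalibration
r2_sos = r2_score(y, y_pred) # negative: sum-of-squares R^2 penalises it heavily
```

This is why the two metrics can disagree sharply for out-of-sample predictions, and why both are reported alongside MAE.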

Reviewer 3 Recommendations For The Authors #7:

Figure 2 “We calculated feature importance by, first, standardizing Elastic Net weights across brain features of each set of features from each test fold.” → What do you mean by “standardize” here? Rescale to be mean 0, variance 1? If so, this seems like a misleading transformation, because it gives the impression that the relationships are negative, when they are not necessarily. Also, why did you choose to use elastic net weights in any form as measures of effect size (or importance)? The raw values are inherently penalized, which means they are under-estimates of the true effect size. It would be more meaningful (and less biased) to plot the raw correlations.

Response For the first question regarding standardisation, we addressed this issue in our response to Reviewer 1 Recommendations For The Authors #3. Briefly, we agreed with Reviewer 3 that standardisation (with mean = 0, SD = 1) might make it difficult to interpret the directionality of the coefficients. For visualising feature importance in the revised manuscript, we refitted the Elastic Net model to the full dataset without splitting it into five folds and visualised the coefficients on brain images (see below).

For the second question regarding why we used Elastic Net coefficients as feature importance (as opposed to correlations), we need to mention the goal of feature importance: to understand how the model makes a prediction based on different brain features (Molnar, 2019). Correlations between a target and each brain feature do not achieve this. Instead, they show univariate/marginal relationships between a target and a brain feature. What we want to visualise is how the model made a prediction, which, in the case of Elastic Net, is based on the weighted sum of the features. In other words, multivariate models (including Elastic Net) capture conditional relationships that take into account all brain features within each set of features.

Elastic Net coefficients can be considered as feature importance, such that more positive Elastic Net coefficients lead to more positive predicted values and, similarly, more negative coefficients lead to more negative predicted values (Molnar, 2019; Pat, Wang, Bartonicek, et al., 2022). While the magnitude of Elastic Net coefficients is regularised (thus making it difficult for us to interpret the magnitude itself directly), we can still say that a brain feature with a higher-magnitude coefficient weighs relatively more strongly in making a prediction. Another benefit of Elastic Net as a penalised regression is that the coefficients are less susceptible to collinearity among features, as they have already been regularised (Dormann et al., 2013; Pat, Wang, Bartonicek, et al., 2022).

Reviewer 3 Recommendations For The Authors #8:

Figure 3 → Again, what exactly do you mean by “standardised” here?

Response It means mean subtraction followed by division by an SD. Though we no longer apply standardisation for feature importance. See our response to Reviewer 1 Recommendations For The Authors #3 and Reviewer 3 Recommendations For The Authors #7.

Reviewer 3 Recommendations For The Authors #9:

“However, Brain Age Gap created from the lower-performing age-prediction models explained a higher amount of variation in Cognitionfluid. For instance, the top performing age-prediction model, “Stacked: All excluding Task Contrast”, generated Brain Age and Corrected Brain Age that explained the highest amount of variation in Cognitionfluid, but, at the same time, produced Brain Age Gap that explained the least amount of variation in Cognitionfluid.” (p7) → Yes, but you did not need to run any models to show this, considering it is an inevitable consequence of the following relationship between predicted values and residuals (or residuals times -1): y = (y − ŷ) + ŷ. Let’s say that age explains 60% of the variance in fluid cognition, and predicted age (ŷ) explains 40% of the variance in fluid cognition. Then the brain age gap (−(y − ŷ)) should explain 20% of the variance in fluid cognition. If by “Corrected Brain Age” you mean the modified predicted age from the Butler paper, the “Corrected Brain Age” result is inevitable because the modified predicted age is essentially just age with a tiny bit of noise added to it. From Figure 4, though, this does not seem to be the case, because the lower left quadrant in panel a should be flat and high (about as high as the predictive value of age for fluid cognition). So how are you calculating “Corrected Brain Age”? It looks like you might be regressing age out of Brain Age, though from your description in the Methods (How exactly do you use the slope and intercept? You need an equation if you are going to stick with this terminology), it is not totally clear. I highly recommend using terminology and metrics from the Butler et al. (2021) paper throughout to reduce confusion.

Response Please see our response to Reviewer 3 Public Review #5

Reviewer 3 Recommendations For The Authors #10:

“On the contrary, an amount of variation in Cognitionfluid explained by Corrected Brain Age Gap was relatively small (maximum R2 = .041) across age-prediction models and did not relate to the predictive performance of the age-prediction models.” (p7) → If by “Corrected Brain Age Gap” you mean MBAG from The Butler paper, yes, this is also inevitable, considering MBAG would be a vector of zeros if it were not for regression on residuals (and out of sample estimates), as I mentioned earlier. Also, it is not clear why you used “on the contrary” as a transition here.

Response Please see our response to Reviewer 3 Public Review #2 for the ‘MBAG’ term. Briefly, we didn’t use Butler and colleagues’ (2021) MBAG; rather, we used the method described in de Lange and Cole (2020), which Butler and colleagues called RBAG.

de Lange and Cole’s (2020) method was commonly implemented elsewhere (Cole et al., 2020; Cumplido-Mayoral et al., 2023; Denissen et al., 2022). Accordingly, researchers who use Brain Age do not usually view this method as capturing a meaningless biomarker. Yet, the small effects of the Corrected Brain Age Gap in explaining fluid cognition of aging individuals found here are consistent with studies in older adults (Cole, 2020) and younger populations (Butler et al., 2021; Jirsaraie, Kaufmann, et al., 2023) (see our response to Reviewer 2 Recommendations For The Authors #1).

“On the contrary” refers to the fact that the other three Brain Age indices (i.e., those that did not account for the relationship between Brain Age and chronological age) showed a much higher amount of variation in fluid cognition explained. As mentioned above (our response to Reviewer 2 Public Review #7), our argument resonates with Butler and colleagues’ (2021) suggestion (p. 4097): “As such, it is critical that readers of past literature note whether or not age was controlled for when testing for effects on the BAG, as this has not always been common practice (e.g., Beheshti et al., 2018; Cole, Underwood, et al., 2017; Franke et al., 2015; Gaser et al., 2013; Liem et al., 2017; Nenadić et al., 2017; Steffener et al., 2016)”.
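For clarity, the slope-and-intercept adjustment described by de Lange and Cole (2020), as we understand it, can be sketched as follows: fit predicted age on chronological age in one sample, then subtract the fitted trend from held-out predictions. The simulated data and names are illustrative only:

```python
import numpy as np

def corrected_brain_age_gap(age_train, pred_train, age_test, pred_test):
    # Fit predicted age on chronological age in one sample ...
    slope, intercept = np.polyfit(age_train, pred_train, 1)
    # ... then remove the fitted age trend from held-out predictions
    expected = slope * age_test + intercept
    return pred_test - expected

rng = np.random.default_rng(1)
age = rng.uniform(36, 100, size=400)
# Simulated Brain Age with regression-to-the-mean (slope < 1 against age)
pred = 0.6 * age + 0.4 * age.mean() + rng.normal(0, 3, size=400)

gap = corrected_brain_age_gap(age[:200], pred[:200], age[200:], pred[200:])
# After the adjustment, the gap is (nearly) uncorrelated with chronological age
dependency = np.corrcoef(age[200:], gap)[0, 1]
```

Removing the age trend in this way is what makes the Corrected Brain Age Gap age-independent, which in turn is why its effects on fluid cognition are small once chronological age is in the model.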

Reviewer 3 Recommendations For The Authors #11:

“As before, the unique effects of Brain Age indices were all relatively small across the four Brain Age indices and across different prediction models.” (p10) → Yes, again, this is inevitable considering how they are calculated. You can show these analyses to demonstrate your results in data, if you want, but ignoring the inevitability given how these variables are calculated is misleading.

Response Accounting for the relationship between Brain Age and chronological age when examining the utility of Brain Age is not misleading. Similar to previous recommendations (Butler et al., 2021; Le et al., 2018), we believe that not doing so is misleading. That is, without accounting for the relationship between Brain Age and chronological age, Brain Age will likely explain the same variation of the phenotype of interest as chronological age. Please see our response to Reviewer 3 Recommendations For The Authors #18 below.

Reviewer 3 Recommendations For The Authors #12:

“On the contrary, the unique effects of Brain Cognition appeared much larger.” (p10) → This is not a fair comparison if you don’t look at the unique effects above and beyond the cognitive variable you predicted (fluid cognition) in your Brain Cognition model. When you do this, you will see that Brain Cognition is useless when you include fluid cognition in the model, just as Brain Age would be in predicting age when you include age in the model. This highlights the fact that using predicted values of a metric to predict that metric is a pointless path to take, and that using a predicted value to predict anything is worse than using the value itself.

Response Please see our response to Reviewer 3 Public Review #6.

Reviewer 3 Recommendations For The Authors #13:

“First, how much does Brain Age add to what is already captured by chronological age? The short answer is very little.” (p12) → This is a really important point, but your paper requires an in-depth discussion of the inevitability of this result, which I have discussed previously in this review.

Response Please see our response to Reviewer 3 Public Review #7.

Reviewer 3 Recommendations For The Authors #14:

“Second, do better-performing age-prediction models improve the ability of Brain Age to capture Cognitionfluid? Unfortunately, the answer is no.” (p12) → You need to be clear that you are talking about above and beyond age here.

Response Thank you so much for your suggestion. We have now revised this sentence accordingly.

Discussion

“Second, do better-performing age-prediction models improve the utility of Brain Age to capture fluid cognition above and beyond chronological age? The answer is also no.”

Reviewer 3 Recommendations For The Authors #15:

“Third, do we have a solution that can improve our ability to capture Cognitionfluid from brain MRI? The answer is, fortunately, yes. Using Brain Cognition as a biomarker, along with chronological age, seemed to capture a higher amount of variation in Cognitionfluid than only using Brain Age.” (p12) → Again, try controlling for the cognitive measure you predicted in your Brain Cognition model. This will show that Brain Cognition is not useful above and beyond cognition, highlighting the fact that it is not a useful endeavor to be using predicted values.

Response Please see our response to Reviewer 3 Public Review #8.

Reviewer 3 Recommendations For The Authors #16:

“Accordingly, a race to improve the performance of age-prediction models (Baecker et al., 2021) does not necessarily enhance the utility of Brain Age indices as a biomarker for Cognitionfluid. This calls for a new paradigm. Future research should aim to build prediction models for Brain Age indices that are not necessarily good at predicting age, but at capturing phenotypes of interest, such as Cognitionfluid and beyond.” (p13) → I whole-heartedly agree with the first two sentences, and strongly disagree with the last. Certainly your results, and the underlying reason as to why you found these results, call for a new paradigm (or, one might argue, a pre-brain-age paradigm). They do not, however, suggest that we should keep going down the Brain Age path. In fact, I think it should be abandoned altogether. While it is difficult to prove that there is no transformation of Brain Age or the Brain Age Gap that will be useful, I am nearly sure this is true from the research I have done. Therefore, if you would like to suggest that the field should continue down this path, you need to present a very good case to support this view.

Response Please see our response to Reviewer 3 Public Review #9.

Reviewer 3 Recommendations For The Authors #17:

“Perhaps this is because the estimation of the influences of chronological age was done in the training set.” (p13) → I believe this is the case, and it is testable. Try re-running your analyses where parameters are estimated and performance is evaluated on the same data.

Response Yes, we agree with this. Given the equations we used, this outcome is inevitable.

Reviewer 3 Recommendations For The Authors #18:

“Similar to a previous recommendation (Butler et al., 2021), we suggest focusing on Corrected Brain Age Gap.” (p13) → To be clear, the authors did not use the term “Corrected” because it is very misleading. The authors also did not suggest that we proceed with any brain age metric; rather they mentioned that the modified brain age gap is independent of age. Note the following passage: “Further, the interpretability of the modified brain age gap (MBAG) itself is limited by the fact that it is a prediction error from a regression to remove the effects of age from a residual obtained through a regression to predict age. By virtue of these limitations, we suggest that the modified version may not provide useful information about precocity or delay in brain development. In light of this, as well as the complexities associated with interpretations of the BAG and its dependence on age, we suggest that further methodological and theoretical work is warranted.” I recognize that this statement is hedged, as is often required in the publication process, but I am all but certain that MBAG/BAG/modified predicted age are useless constructs. Therefore, if you are going to suggest that people continue to use them, as opposed to suggesting that further methodological or theoretical work is warranted, you need to make a strong case, which you did not try to make here. If anything, your results support abandoning the age-prediction endeavor altogether.

Response Please see our response to Reviewer 3 Public Review #2 for the term. Briefly, we did not use Butler and colleagues’ (2021) MBAG, but rather RBAG. This index was originally described by de Lange and Cole (2020) and has since been implemented elsewhere (Cole et al., 2020; Cumplido-Mayoral et al., 2023; Denissen et al., 2022).
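For concreteness, this RBAG-style correction (de Lange & Cole, 2020) can be sketched as follows. This is a minimal illustration with hypothetical variable names, not the exact code used in our analyses:

```python
import numpy as np

def corrected_brain_age_gap(age_train, pred_train, age_test, pred_test):
    """Sketch of the age-bias correction described by de Lange & Cole (2020).

    A regression of predicted age on chronological age is estimated in the
    training set; its fitted line is then used to correct held-out
    predictions before the gap is computed.
    """
    # Slope and intercept of predicted age ~ chronological age (training set only)
    slope, intercept = np.polyfit(age_train, pred_train, deg=1)
    # Corrected Brain Age: remove the systematic, age-related prediction bias
    corrected_brain_age = pred_test + (age_test - (slope * age_test + intercept))
    # Corrected Brain Age Gap: Corrected Brain Age minus chronological age
    return corrected_brain_age - age_test
```

By construction, the resulting gap is approximately uncorrelated with chronological age, which is the property that motivates the correction.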

We do not intend to encourage people to abandon the Brain Age endeavour altogether. However, we made three main suggestions for future research on Brain Age to ensure its utility. First, researchers should account for the relationship between Brain Age and chronological age, either using Corrected Brain Age Gap (or other similar adjustments) or, better, examining the unique effects of Brain Age indices after controlling for chronological age through commonality analyses (see below). This is similar to the suggestion made by Le and colleagues (2018) and later rephrased by Butler and colleagues (2021). More specifically, Le and colleagues (2018) mentioned (p. 10): “Based on our observations in both real and simulated data, we recommend that the relationship between chronological age and BrainAGE should be accounted for. The two methods proposed in this study are either: (1) regress age on BrainAGE, producing BrainAGER, which is centered on 0 regardless of a participant's actual age or (2) include age as a regressor when doing follow-up analyses.”

Second, we suggested that researchers should not select age-prediction models based solely on age-prediction performance (see our response to Reviewer 1 Recommendations For The Authors #1).

Third, we suggested that researchers should test how much Brain Age misses the variation in the brain MRI that could explain fluid cognition or other phenotypes of interest (see our response to Reviewer 2 Public Review #4).

Discussion

“What does it mean then for researchers/clinicians who would like to use Brain Age as a biomarker? First, they have to be aware of the overlap in variation between Brain Age and chronological age and should focus on the contribution of Brain Age over and above chronological age. Using Brain Age Gap will not fix this. Butler and colleagues (2021) recently highlighted this point, “These results indicate that the association between cognition and the BAG are driven by the association between age and cognitive performance. As such, it is critical that readers of past literature note whether or not age was controlled for when testing for effects on the BAG, as this has not always been common practice (p. 4097).” Similar to previous recommendations (Butler et al., 2021; Le et al., 2018), we suggest future work should account for the relationship between Brain Age and chronological age, either using Corrected Brain Age Gap (or other similar adjustments) or, better, examining unique effects of Brain Age indices after controlling for chronological age through commonality analyses. Note we prefer using unique effects over beta estimates from multiple regressions, given that unique effects do not change as a function of collinearity among regressors (Ray-Mukherjee et al., 2014). In our case, Brain Age indices had the same unique effects regardless of the level of common effects they had with chronological age (e.g., Brain Age vs. Corrected Brain Age Gap from stacked models). In the case of fluid cognition, the unique effects might be too small to be clinically meaningful as shown here and previously (Butler et al., 2021; Cole, 2020; Jirsaraie, Kaufmann, et al., 2023).”
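As a minimal illustration of the commonality analysis recommended above, the two-predictor decomposition of R² into unique and common effects (in the spirit of Nimon et al., 2008) can be sketched as follows. Variable names are hypothetical, and our actual analyses involved cross-validation and stacked models:

```python
import numpy as np

def r2(X, y):
    """R-squared from an OLS fit of y on the columns of X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def commonality_two_predictors(x1, x2, y):
    """Decompose R^2 of y ~ x1 + x2 into unique and common effects.

    unique_x1 is the variance x1 explains beyond x2 (and vice versa);
    common is the remainder shared by both predictors.
    """
    full = r2(np.column_stack([x1, x2]), y)
    unique_x1 = full - r2(x2, y)  # e.g., Brain Age beyond chronological age
    unique_x2 = full - r2(x1, y)  # e.g., chronological age beyond Brain Age
    common = full - unique_x1 - unique_x2
    return unique_x1, unique_x2, common
```

Unlike beta estimates from a multiple regression, these unique effects are not distorted by collinearity between the predictors, which is why we prefer them here.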

Reviewer 3 Recommendations For The Authors #19:

“To compute Brain Age and Brain Cognition, we ran two separate prediction models. These prediction models either had chronological age or Cognitionfluid as the target.” (p16) → You should make it clear in the main text of your paper that the cognition variable in your Brain Cognition models is the same as what you refer to as Cognitionfluid. Some of your analyses would have been much more reasonable if you had two different measures of cognition.

Response Thank you so much for the suggestion. Given the re-conceptualisation of Brain Cognition, we have now made this clear in the main text:

Introduction

“certain variation in the brain MRI is related to fluid cognition, but to what extent does Brain Age not capture this variation? To estimate the variation in the brain MRI that is related to fluid cognition, we could build prediction models that directly predict fluid cognition (i.e., as opposed to chronological age) from brain MRI data.”

Reviewer 3 Recommendations For The Authors #20:

“We controlled for the potential influences of biological sex on the brain features by first residualizing biological sex from brain features in the training set.” (p16) → Why? Your question is about prediction, not causal inference.

Response While the question is about prediction, we would still like to be as confident as possible about the kind of information we drew on. Here we focused on brain data and controlled for other variables that might not be neuronal. For instance, we controlled for movement and physiological noise using ICA-FIX (Glasser et al., 2016). Following conventional practices in brain-based predictive modelling, we also treated biological sex as another source of noise (Vieira et al., 2022). The difference between movement/physiological noise and biological sex is that the former varies across TRs, while the latter varies across individuals. Thus, we controlled for movement and physiological noise within each participant and controlled for biological sex within each group of participants belonging to the same training set.
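The training-set residualisation described here can be sketched as follows. This is a minimal illustration with hypothetical variable names; the key point is that the sex-to-feature regression is estimated in the training set only and then applied to held-out data, avoiding test-set leakage:

```python
import numpy as np

def residualize_sex(features_train, sex_train, features_test, sex_test):
    """Remove linear effects of biological sex from brain features.

    The regression of features on sex is fit on the training set only,
    then its predictions are subtracted from both training and test features.
    """
    # Design matrices: intercept plus a sex indicator column
    X_train = np.column_stack([np.ones_like(sex_train, dtype=float), sex_train])
    X_test = np.column_stack([np.ones_like(sex_test, dtype=float), sex_test])
    # OLS coefficients estimated in the training set only
    beta, *_ = np.linalg.lstsq(X_train, features_train, rcond=None)
    # Residualised features: observed minus sex-predicted values
    return features_train - X_train @ beta, features_test - X_test @ beta
```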

Reviewer 3 Recommendations For The Authors #20:

“Lastly, we computed Corrected Brain Age Gap by subtracting the chronological age from the Corrected Brain Age (Butler et al., 2021; Le et al., 2018).” (p17) → The modified brain age gap in that paper is the residuals from regressing BAG on age (see equation 6). I highly recommend using that terminology and notation throughout to provide consistency and interpretability across papers.

Response Please see our response to Reviewer 3 Public Review #2 for the term.

Reviewer 3 Recommendations For The Authors #21: Equations (pgs 17-19) → Please use statistical notation instead of pseudo-R code.

Response We rewrote all of the equations using statistical notations.

References

Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., Gramfort, A., Thirion, B., & Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8, 14. https://doi.org/10.3389/fninf.2014.00014

Ances, B. M., Liang, C. L., Leontiev, O., Perthen, J. E., Fleisher, A. S., Lansing, A. E., & Buxton, R. B. (2009). Effects of aging on cerebral blood flow, oxygen metabolism, and blood oxygenation level dependent responses to visual stimulation. Human Brain Mapping, 30(4), 1120–1132. https://doi.org/10.1002/hbm.20574

Bashyam, V. M., Erus, G., Doshi, J., Habes, M., Nasrallah, I. M., Truelove-Hill, M., Srinivasan, D., Mamourian, L., Pomponio, R., Fan, Y., Launer, L. J., Masters, C. L., Maruff, P., Zhuo, C., Völzke, H., Johnson, S. C., Fripp, J., Koutsouleris, N., Satterthwaite, T. D., … on behalf of the ISTAGING Consortium, the Preclinical AD Consortium, ADNI, and CARDIA studies. (2020). MRI signatures of brain age and disease over the lifespan based on a deep brain network and 14 468 individuals worldwide. Brain, 143(7), 2312–2324. https://doi.org/10.1093/brain/awaa160

Beheshti, I., Nugent, S., Potvin, O., & Duchesne, S. (2019). Bias-adjustment in neuroimaging-based brain age frameworks: A robust scheme. NeuroImage: Clinical, 24, 102063. https://doi.org/10.1016/j.nicl.2019.102063

Bookheimer, S. Y., Salat, D. H., Terpstra, M., Ances, B. M., Barch, D. M., Buckner, R. L., Burgess, G. C., Curtiss, S. W., Diaz-Santos, M., Elam, J. S., Fischl, B., Greve, D. N., Hagy, H. A., Harms, M. P., Hatch, O. M., Hedden, T., Hodge, C., Japardi, K. C., Kuhn, T. P., … Yacoub, E. (2019). The Lifespan Human Connectome Project in Aging: An overview. NeuroImage, 185, 335–348. https://doi.org/10.1016/j.neuroimage.2018.10.009

Butler, E. R., Chen, A., Ramadan, R., Le, T. T., Ruparel, K., Moore, T. M., Satterthwaite, T. D., Zhang, F., Shou, H., Gur, R. C., Nichols, T. E., & Shinohara, R. T. (2021). Pitfalls in brain age analyses. Human Brain Mapping, 42(13), 4092–4101. https://doi.org/10.1002/hbm.25533

Choi, S. W., Mak, T. S.-H., & O’Reilly, P. F. (2020). Tutorial: A guide to performing polygenic risk score analyses. Nature Protocols, 15(9), Article 9. https://doi.org/10.1038/s41596-020-0353-1

Cole, J. H. (2020). Multimodality neuroimaging brain-age in UK biobank: Relationship to biomedical, lifestyle, and cognitive factors. Neurobiology of Aging, 92, 34–42. https://doi.org/10.1016/j.neurobiolaging.2020.03.014

Cole, J. H., Raffel, J., Friede, T., Eshaghi, A., Brownlee, W. J., Chard, D., De Stefano, N., Enzinger, C., Pirpamer, L., Filippi, M., Gasperini, C., Rocca, M. A., Rovira, A., Ruggieri, S., Sastre-Garriga, J., Stromillo, M. L., Uitdehaag, B. M. J., Vrenken, H., Barkhof, F., … Group, M. study. (2020). Longitudinal Assessment of Multiple Sclerosis with the Brain-Age Paradigm. Annals of Neurology, 88(1), 93–105. https://doi.org/10.1002/ana.25746

Cumplido-Mayoral, I., García-Prat, M., Operto, G., Falcon, C., Shekari, M., Cacciaglia, R., Milà-Alomà, M., Lorenzini, L., Ingala, S., Meije Wink, A., Mutsaerts, H. J., Minguillón, C., Fauria, K., Molinuevo, J. L., Haller, S., Chetelat, G., Waldman, A., Schwarz, A. J., Barkhof, F., … OASIS study. (2023). Biological brain age prediction using machine learning on structural neuroimaging data: Multi-cohort validation against biomarkers of Alzheimer’s disease and neurodegeneration stratified by sex. ELife, 12, e81067. https://doi.org/10.7554/eLife.81067

de Lange, A.-M. G., & Cole, J. H. (2020). Commentary: Correction procedures in brain-age prediction. NeuroImage: Clinical, 26, 102229. https://doi.org/10.1016/j.nicl.2020.102229

Demontis, D., Walters, R. K., Martin, J., Mattheisen, M., Als, T. D., Agerbo, E., Baldursson, G., Belliveau, R., Bybjerg-Grauholm, J., Bækvad-Hansen, M., Cerrato, F., Chambert, K., Churchhouse, C., Dumont, A., Eriksson, N., Gandal, M., Goldstein, J. I., Grasby, K. L., Grove, J., … Neale, B. M. (2019). Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nature Genetics, 51(1), Article 1. https://doi.org/10.1038/s41588-018-0269-7

Denissen, S., Engemann, D. A., De Cock, A., Costers, L., Baijot, J., Laton, J., Penner, I., Grothe, M., Kirsch, M., D’hooghe, M. B., D’Haeseleer, M., Dive, D., De Mey, J., Van Schependom, J., Sima, D. M., & Nagels, G. (2022). Brain age as a surrogate marker for cognitive performance in multiple sclerosis. European Journal of Neurology, 29(10), 3039–3049. https://doi.org/10.1111/ene.15473

Dormann, C. F., Elith, J., Bacher, S., Buchmann, C., Carl, G., Carré, G., Marquéz, J. R. G., Gruber, B., Lafourcade, B., Leitão, P. J., Münkemüller, T., McClean, C., Osborne, P. E., Reineking, B., Schröder, B., Skidmore, A. K., Zurell, D., & Lautenbach, S. (2013). Collinearity: A review of methods to deal with it and a simulation study evaluating their performance. Ecography, 36(1), 27–46. https://doi.org/10.1111/j.1600-0587.2012.07348.x

Dubois, J., Galdi, P., Paul, L. K., & Adolphs, R. (2018). A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1756), 20170284. https://doi.org/10.1098/rstb.2017.0284

Elliott, M. L., Knodt, A. R., Cooke, M., Kim, M. J., Melzer, T. R., Keenan, R., Ireland, D., Ramrakha, S., Poulton, R., Caspi, A., Moffitt, T. E., & Hariri, A. R. (2019). General functional connectivity: Shared features of resting-state and task fMRI drive reliable and heritable individual differences in functional brain networks. NeuroImage, 189, 516–532. https://doi.org/10.1016/j.neuroimage.2019.01.068

Fair, D. A., Schlaggar, B. L., Cohen, A. L., Miezin, F. M., Dosenbach, N. U. F., Wenger, K. K., Fox, M. D., Snyder, A. Z., Raichle, M. E., & Petersen, S. E. (2007). A method for using blocked and event-related fMRI data to study “resting state” functional connectivity. NeuroImage, 35(1), 396–405. https://doi.org/10.1016/j.neuroimage.2006.11.051

Fischl, B., Salat, D. H., Busa, E., Albert, M., Dieterich, M., Haselgrove, C., van der Kouwe, A., Killiany, R., Kennedy, D., Klaveness, S., Montillo, A., Makris, N., Rosen, B., & Dale, A. M. (2002). Whole Brain Segmentation. Neuron, 33(3), 341–355. https://doi.org/10.1016/S0896-6273(02)00569-X

Franke, K., & Gaser, C. (2019). Ten Years of BrainAGE as a Neuroimaging Biomarker of Brain Aging: What Insights Have We Gained? Frontiers in Neurology, 10, 789. https://doi.org/10.3389/fneur.2019.00789

Glasser, M. F., Smith, S. M., Marcus, D. S., Andersson, J. L. R., Auerbach, E. J., Behrens, T. E. J., Coalson, T. S., Harms, M. P., Jenkinson, M., Moeller, S., Robinson, E. C., Sotiropoulos, S. N., Xu, J., Yacoub, E., Ugurbil, K., & Van Essen, D. C. (2016). The Human Connectome Project’s neuroimaging approach. Nature Neuroscience, 19(9), 1175–1187. https://doi.org/10.1038/nn.4361

Glasser, M. F., Sotiropoulos, S. N., Wilson, J. A., Coalson, T. S., Fischl, B., Andersson, J. L., Xu, J., Jbabdi, S., Webster, M., Polimeni, J. R., Van Essen, D. C., & Jenkinson, M. (2013). The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage, 80, 105–124. https://doi.org/10.1016/j.neuroimage.2013.04.127

Gordon, E. M., Laumann, T. O., Adeyemo, B., Huckins, J. F., Kelley, W. M., & Petersen, S. E. (2016). Generation and Evaluation of a Cortical Area Parcellation from Resting-State Correlations. Cerebral Cortex, 26(1), 288–303. https://doi.org/10.1093/cercor/bhu239

Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., Nelson, S. M., Coalson, R. S., Snyder, A. Z., Schlaggar, B. L., Dosenbach, N. U. F., & Petersen, S. E. (2018). Functional Brain Networks Are Dominated by Stable Group and Individual Factors, Not Cognitive or Daily Variation. Neuron, 98(2), 439-452.e5. https://doi.org/10.1016/j.neuron.2018.03.035

Harms, M. P., Somerville, L. H., Ances, B. M., Andersson, J., Barch, D. M., Bastiani, M., Bookheimer, S. Y., Brown, T. B., Buckner, R. L., Burgess, G. C., Coalson, T. S., Chappell, M. A., Dapretto, M., Douaud, G., Fischl, B., Glasser, M. F., Greve, D. N., Hodge, C., Jamison, K. W., … Yacoub, E. (2018). Extending the Human Connectome Project across ages: Imaging protocols for the Lifespan Development and Aging projects. NeuroImage, 183, 972–984. https://doi.org/10.1016/j.neuroimage.2018.09.060

Horien, C., Noble, S., Greene, A. S., Lee, K., Barron, D. S., Gao, S., O’Connor, D., Salehi, M., Dadashkarimi, J., Shen, X., Lake, E. M. R., Constable, R. T., & Scheinost, D. (2020). A hitchhiker’s guide to working with large, open-source neuroimaging datasets. Nature Human Behaviour, 5(2), 185–193. https://doi.org/10.1038/s41562-020-01005-4

Jirsaraie, R. J., Gorelik, A. J., Gatavins, M. M., Engemann, D. A., Bogdan, R., Barch, D. M., & Sotiras, A. (2023). A systematic review of multimodal brain age studies: Uncovering a divergence between model accuracy and utility. Patterns, 4(4), 100712. https://doi.org/10.1016/j.patter.2023.100712

Jirsaraie, R. J., Kaufmann, T., Bashyam, V., Erus, G., Luby, J. L., Westlye, L. T., Davatzikos, C., Barch, D. M., & Sotiras, A. (2023). Benchmarking the generalizability of brain age models: Challenges posed by scanner variance and prediction bias. Human Brain Mapping, 44(3), 1118–1128. https://doi.org/10.1002/hbm.26144

Khojaste-Sarakhsi, M., Haghighi, S. S., Ghomi, S. M. T. F., & Marchiori, E. (2022). Deep learning for Alzheimer’s disease diagnosis: A survey. Artificial Intelligence in Medicine, 130, 102332. https://doi.org/10.1016/j.artmed.2022.102332

Le, T. T., Kuplicki, R. T., McKinney, B. A., Yeh, H.-W., Thompson, W. K., Paulus, M. P., Tulsa 1000 Investigators, Aupperle, R. L., Bodurka, J., Cha, Y.-H., Feinstein, J. S., Khalsa, S. S., Savitz, J., Simmons, W. K., & Victor, T. A. (2018). A Nonlinear Simulation Framework Supports Adjusting for Age When Analyzing BrainAGE. Frontiers in Aging Neuroscience, 10. https://www.frontiersin.org/articles/10.3389/fnagi.2018.00317

Liang, H., Zhang, F., & Niu, X. (2019). Investigating systematic bias in brain age estimation with application to post-traumatic stress disorders. Human Brain Mapping, 40(11), 3143–3152. https://doi.org/10.1002/hbm.24588

Luby, J. L. (2010). Preschool Depression: The Importance of Identification of Depression Early in Development. Current Directions in Psychological Science, 19(2), 91–95. https://doi.org/10.1177/0963721410364493

Molnar, C. (2019). Interpretable Machine Learning. A Guide for Making Black Box Models Explainable. https://christophm.github.io/interpretable-ml-book/

Nimon, K., Lewis, M., Kane, R., & Haynes, R. M. (2008). An R package to compute commonality coefficients in the multiple regression case: An introduction to the package and a practical example. Behavior Research Methods, 40(2), 457–466. https://doi.org/10.3758/BRM.40.2.457

Pat, N., Wang, Y., Anney, R., Riglin, L., Thapar, A., & Stringaris, A. (2022). Longitudinally stable, brain‐based predictive models mediate the relationships between childhood cognition and socio‐demographic, psychological and genetic factors. Human Brain Mapping, hbm.26027. https://doi.org/10.1002/hbm.26027

Pat, N., Wang, Y., Bartonicek, A., Candia, J., & Stringaris, A. (2022). Explainable machine learning approach to predict and explain the relationship between task-based fMRI and individual differences in cognition. Cerebral Cortex, bhac235. https://doi.org/10.1093/cercor/bhac235

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, É. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12(85), 2825–2830.

Poldrack, R. A., Huckins, G., & Varoquaux, G. (2020). Establishment of Best Practices for Evidence for Prediction: A Review. JAMA Psychiatry, 77(5), 534–540. https://doi.org/10.1001/jamapsychiatry.2019.3671

Rasero, J., Sentis, A. I., Yeh, F.-C., & Verstynen, T. (2021). Integrating across neuroimaging modalities boosts prediction accuracy of cognitive ability. PLOS Computational Biology, 17(3), e1008347. https://doi.org/10.1371/journal.pcbi.1008347

Ray-Mukherjee, J., Nimon, K., Mukherjee, S., Morris, D. W., Slotow, R., & Hamer, M. (2014). Using commonality analysis in multiple regressions: A tool to decompose regression effects in the face of multicollinearity. Methods in Ecology and Evolution, 5(4), 320–328. https://doi.org/10.1111/2041-210X.12166

Robinson, E. C., Garcia, K., Glasser, M. F., Chen, Z., Coalson, T. S., Makropoulos, A., Bozek, J., Wright, R., Schuh, A., Webster, M., Hutter, J., Price, A., Cordero Grande, L., Hughes, E., Tusor, N., Bayly, P. V., Van Essen, D. C., Smith, S. M., Edwards, A. D., … Rueckert, D. (2018). Multimodal surface matching with higher-order smoothness constraints. NeuroImage, 167, 453–465. https://doi.org/10.1016/j.neuroimage.2017.10.037

Rokicki, J., Wolfers, T., Nordhøy, W., Tesli, N., Quintana, D. S., Alnæs, D., Richard, G., de Lange, A.-M. G., Lund, M. J., Norbom, L., Agartz, I., Melle, I., Nærland, T., Selbæk, G., Persson, K., Nordvik, J. E., Schwarz, E., Andreassen, O. A., Kaufmann, T., & Westlye, L. T. (2021). Multimodal imaging improves brain age prediction and reveals distinct abnormalities in patients with psychiatric and neurological disorders. Human Brain Mapping, 42(6), 1714–1726. https://doi.org/10.1002/hbm.25323

Satterthwaite, T. D., Connolly, J. J., Ruparel, K., Calkins, M. E., Jackson, C., Elliott, M. A., Roalf, D. R., Hopson, R., Prabhakaran, K., Behr, M., Qiu, H., Mentch, F. D., Chiavacci, R., Sleiman, P. M. A., Gur, R. C., Hakonarson, H., & Gur, R. E. (2016). The Philadelphia Neurodevelopmental Cohort: A publicly available resource for the study of normal and abnormal brain development in youth. NeuroImage, 124, 1115–1119. https://doi.org/10.1016/j.neuroimage.2015.03.056

Smith, S. M., Vidaurre, D., Alfaro-Almagro, F., Nichols, T. E., & Miller, K. L. (2019). Estimation of brain age delta from brain imaging. NeuroImage, 200, 528–539. https://doi.org/10.1016/j.neuroimage.2019.06.017

Somerville, L. H., Bookheimer, S. Y., Buckner, R. L., Burgess, G. C., Curtiss, S. W., Dapretto, M., Elam, J. S., Gaffrey, M. S., Harms, M. P., Hodge, C., Kandala, S., Kastman, E. K., Nichols, T. E., Schlaggar, B. L., Smith, S. M., Thomas, K. M., Yacoub, E., Van Essen, D. C., & Barch, D. M. (2018). The Lifespan Human Connectome Project in Development: A large-scale study of brain connectivity development in 5–21 year olds. NeuroImage, 183, 456–468. https://doi.org/10.1016/j.neuroimage.2018.08.050

Sperling, R. A., Bates, J. F., Cocchiarella, A. J., Schacter, D. L., Rosen, B. R., & Albert, M. S. (2001). Encoding novel face-name associations: A functional MRI study. Human Brain Mapping, 14(3), 129–139. https://doi.org/10.1002/hbm.1047

Sripada, C., Angstadt, M., Rutherford, S., Kessler, D., Kim, Y., Yee, M., & Levina, E. (2019). Basic Units of Inter-Individual Variation in Resting State Connectomes. Scientific Reports, 9(1), Article 1. https://doi.org/10.1038/s41598-018-38406-5

Sripada, C., Angstadt, M., Rutherford, S., Taxali, A., & Shedden, K. (2020). Toward a “treadmill test” for cognition: Improved prediction of general cognitive ability from the task activated brain. Human Brain Mapping, 41(12), 3186–3197. https://doi.org/10.1002/hbm.25007

Stigler, S. M. (1997). Regression towards the mean, historically considered. Statistical Methods in Medical Research, 6(2), 103–114. https://doi.org/10.1177/096228029700600202

Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., Downey, P., Elliott, P., Green, J., Landray, M., Liu, B., Matthews, P., Ong, G., Pell, J., Silman, A., Young, A., Sprosen, T., Peakman, T., & Collins, R. (2015). UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age. PLOS Medicine, 12(3), e1001779. https://doi.org/10.1371/journal.pmed.1001779

Tetereva, A., Li, J., Deng, J. D., Stringaris, A., & Pat, N. (2022). Capturing brain‐cognition relationship: Integrating task‐based fMRI across tasks markedly boosts prediction and test‐retest reliability. NeuroImage, 263, 119588. https://doi.org/10.1016/j.neuroimage.2022.119588

Vieira, B. H., Pamplona, G. S. P., Fachinello, K., Silva, A. K., Foss, M. P., & Salmon, C. E. G. (2022). On the prediction of human intelligence from neuroimaging: A systematic review of methods and reporting. Intelligence, 93, 101654. https://doi.org/10.1016/j.intell.2022.101654

Vos De Wael, R., Benkarim, O., Paquola, C., Lariviere, S., Royer, J., Tavakol, S., Xu, T., Hong, S.-J., Langs, G., Valk, S., Misic, B., Milham, M., Margulies, D., Smallwood, J., & Bernhardt, B. C. (2020). BrainSpace: A toolbox for the analysis of macroscale gradients in neuroimaging and connectomics datasets. Communications Biology, 3(1), 103. https://doi.org/10.1038/s42003-020-0794-7

Woolrich, M. W., Ripley, B. D., Brady, M., & Smith, S. M. (2001). Temporal Autocorrelation in Univariate Linear Modeling of FMRI Data. NeuroImage, 14(6), 1370–1386. https://doi.org/10.1006/nimg.2001.0931

Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301–320. https://doi.org/10.1111/j.1467-9868.2005.00503.x
