Author response:
Reviewer #1 (Public review):
A major concern is that the model is trained in the midst of the COVID-19 pandemic and its associated restrictions and validated on 2023 data. The situation before, during, and after COVID is fluid, and one may not be representative of the other. The situation in 2023 may also not have been normal and reflective of 2024 onward, both in terms of the amount of testing (and positives) and measures taken to prevent the spread of these types of infections. A further worry is that the retrospective/prospective split occurred in October 2020, right in the first year of COVID, so it will be impossible to compare both cohorts to assess whether grouping them is sensible.
We fully concur with the reviewer that the COVID-19 pandemic represents a profound confounding factor that fundamentally impacts the interpretation and generalizability of our model. This is a critical point that deserves a more thorough treatment. In the revised manuscript, we will add a dedicated subsection in the Discussion to explicitly analyze the pandemic’s impact. We will reframe our model’s contribution not as a universally generalizable tool for a hypothetical “normal” future, but as a robust framework demonstrated to capture complex epidemiological dynamics under the extreme, non-stationary conditions of a real-world public health crisis. We will argue that its strong performance on the 2023 validation data, a unique post-NPI “rebound” year, specifically showcases its utility in modeling volatile periods.
The outcome of interest is the number of confirmed influenza cases. This is not only a function of weather, but also of the amount of testing. The amount of testing is also a function of historical patterns. This poses the real risk that the model confirms historical opinions through increased testing in those higher-risk periods. Of course, the models could also be run to see how meteorological factors affect testing and the percentage of positive tests. The results only deal with the number of positives (only the overall number of tests is noted briefly), which means there is no way to assess how reasonable and/or variable these other measures are. This is especially concerning as there was massive testing for respiratory viruses during COVID in many places, possibly including China.
The reviewer raises a crucial point regarding surveillance bias, which is inherent in studies using reported case data. We acknowledge this limitation and will address it more transparently.
(1) Clarification of Available Data: Our manuscript states that over the six-year period, a total of 20,488 ILI samples were tested, yielding 3,155 positive cases (line 471; Figure 1). We will make this denominator more prominent in the Methods section. However, the reviewer is correct that our models for Putian and the external validation for Sanming utilize the daily positive case counts as the outcome. The reality of our surveillance data source is that while we have the aggregate total of tests over six years, obtaining a reliable daily denominator of all respiratory virus tests conducted (not just for ILI patients as per the surveillance protocol) is not feasible. This is a common constraint in real-world public health surveillance systems.
(2) Justification and Discussion: We will add a detailed paragraph to the Limitations section to address this. We will justify our use of case counts as it is the most direct metric for assessing public health burden and planning resource allocation (e.g., hospital beds, antivirals). We will also explain that modeling the positivity rate presents its own challenges, as the ILI denominator is also subject to biases (e.g., shifts in healthcare-seeking behavior, co-circulation of other pathogens causing similar symptoms). We will thus frame our work as forecasting the direct surveillance signal that public health officials monitor daily.
Although the authors note a correlation between influenza and the weather factors, they do not discuss some of the high correlations between weather factors (e.g., solar radiation and UV index). Because of the many weather factors, those plots are hard to parse.
This is an excellent point. Our preliminary analysis (Supplementary Figure S2) indeed confirms a strong positive correlation between solar radiation and the UV index; this figure is part of the supplementary information document, which the reviewer may have overlooked, and we reproduce it below for convenience. Our original Discussion did explicitly address this multicollinearity, which can be summarized as follows. We acknowledge the high correlation between certain meteorological variables and explain that our two-stage modeling approach is designed to mitigate the issue. In the first stage, the DLNM models assess the impact of each variable individually, isolating its non-linear and lagged effects without confounding from correlated covariates. In the second stage, the LSTM network is by nature a powerful non-linear function approximator that is robust to multicollinearity and can learn the complex, interactive relationships among all input features, including correlated ones.
Figure S2. Scatterplot matrix illustrating correlations between Influenza cases and meteorological factors. This comprehensive scatterplot matrix visualizes the relationships between influenza-like illness (ILI) cases, influenza A and B cases, and multiple meteorological variables, including average temperature, humidity, precipitation, wind speed, wind direction, solar radiation, and ultraviolet (UV) index. The figure is composed of three distinct sections that collectively provide an in-depth analysis of these relationships:
(1) Upper-right triangle: This section presents a Pearson correlation coefficient matrix, with color intensity reflecting the strength of correlations between the variables. Red cells represent positive correlations, while green cells represent negative correlations. The closer the coefficient is to 1 or -1, the darker the cell and the stronger the correlation, with statistically significant correlations marked by asterisks. This matrix allows for a rapid identification of notable relationships between influenza cases and meteorological factors.
(2) Lower-left triangle: This section contains scatterplots of pairwise comparisons between variables. These scatterplots facilitate the visual identification of potential linear or non-linear relationships, as well as any outliers or anomalies. This visualization is essential for evaluating the nature of interactions between meteorological factors and influenza cases.
(3) Diagonal: The diagonal displays the density distribution curves for each individual variable. These curves provide an overview of the distribution characteristics of each variable, revealing central tendencies, variance, and any skewness present in the data.
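For illustration, a minimal sketch in Python of the kind of pairwise correlation check summarized in the upper triangle of Figure S2; the file name and meteorological column names are hypothetical placeholders rather than the study's actual variable names:

```python
import pandas as pd

# Illustrative multicollinearity check among the meteorological drivers,
# analogous to the upper triangle of Figure S2. The file name and column
# names below are hypothetical placeholders.
met_cols = ["avg_temperature", "humidity", "precipitation", "wind_speed",
            "wind_direction", "solar_radiation", "uv_index"]

df = pd.read_csv("putian_daily_weather.csv", parse_dates=["date"])

corr = df[met_cols].corr(method="pearson")

# Flag variable pairs whose absolute Pearson correlation exceeds 0.8,
# e.g., solar radiation versus the UV index.
for i, a in enumerate(met_cols):
    for b in met_cols[i + 1:]:
        r = corr.loc[a, b]
        if abs(r) > 0.8:
            print(f"{a} vs {b}: r = {r:.2f}")
```

Pairs flagged by such a check (for example, solar radiation and the UV index) are precisely the ones whose effects the two-stage DLNM/LSTM design is intended to disentangle.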
The authors do not actually compare the results of both methods and what the LSTM adds.
We thank the reviewer for this comment and realize we may not have signposted the comparison clearly enough. Our manuscript does present a direct comparison between the LSTM and ARIMA models in the Results section (lines 737-745) and Table 2, where performance metrics (MAE, RMSE, MAPE, SMAPE) for both models on the 2023 validation set are detailed, showing LSTM’s superior performance, particularly for Influenza A. Furthermore, Figure 6 (panels A and B) visualizes the LSTM’s predictions against observed values, and Supplementary Figure S3 does the same for the ARIMA model, allowing for a visual comparison of their fit.
To address the reviewer’s concern, in the revised manuscript, we will:
(1) Add a more explicit comparative statement in the Results section, directly contrasting the key metrics and highlighting the LSTM's advantages in capturing peak activities (see the sketch of these metric computations after this list).
(2) Consider combining the visualizations from Figure 6 and Supplementary Figure S3 into a single, more powerful comparative figure that shows the observed data, the LSTM predictions, and the ARIMA predictions on the same plot.
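As referenced in point (1), a minimal sketch of how the four error metrics reported in Table 2 (MAE, RMSE, MAPE, SMAPE) can be computed for both the LSTM and ARIMA forecasts on the 2023 hold-out series; the array names are hypothetical:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Return MAE, RMSE, MAPE (%) and SMAPE (%) for a forecast series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    eps = 1e-8  # guards against division by zero on days with no cases
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE": np.mean(np.abs(err) / (np.abs(y_true) + eps)) * 100,
        "SMAPE": np.mean(2 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred) + eps)) * 100,
    }

# y_obs_2023, y_lstm_2023 and y_arima_2023 are hypothetical arrays of daily
# counts for the 2023 hold-out year:
# for name, y_hat in [("LSTM", y_lstm_2023), ("ARIMA", y_arima_2023)]:
#     print(name, forecast_metrics(y_obs_2023, y_hat))
```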
Meandering methods; reliability of "Our World in Data"; Figure 2A is hard to parse.
We will address these points comprehensively.
(1) Methods: We will significantly streamline and restructure the Methods section. We also wish to provide context that the manuscript's current structure reflects an effort to incorporate feedback from multiple rounds of peer review across different journals, which may have led to some repetition. We will perform a thorough edit to improve its conciseness and logical flow.
(2) Data Reliability: The reviewer raises a crucial and highly insightful question regarding the validity of using a national-level index to represent local public health interventions. This is a critical aspect of our model's construction, and we are grateful for the opportunity to provide a more thorough justification.
We acknowledge that the ideal variable would be a daily, quantitative, city-level index of non-pharmaceutical interventions (NPIs). However, the practical reality of the data landscape in China is that such granular, publicly accessible databases for subnational regions do not exist. Given this constraint, our choice of the Our World in Data (OWID) national stringency index was the result of a careful consideration process, and we believe it serves as the best available proxy for our study context.
In the revised manuscript, we will significantly expand the Methods section to articulate our rationale, which is threefold:
National Policy Coherence: During the COVID-19 pandemic in mainland China, core NPIs, particularly mandatory face-covering policies in shared public spaces, were implemented with a high degree of national uniformity. While local governments had some autonomy, they operated within a centrally defined framework, ensuring a baseline level of policy consistency across the country.
Local Context Alignment: A key factor supporting the use of this national proxy is the specific epidemiological context of Putian during the study period. For the vast majority of the pandemic, Putian was classified as a low-risk area with only sporadic COVID-19 cases. Consequently, the city’s public health measures consistently aligned with the standard national guidelines. It did not experience prolonged or exceptionally strict local lockdowns that would cause a significant deviation from the national-level policy trends captured by the OWID index.
Validation by Local Public Health Experts: Most critically, and to directly address your suggestion, our co-authors from the Putian Center for Disease Control and Prevention have meticulously reviewed the OWID stringency index against their on-the-ground, institutional knowledge of the mandates that were in effect. They have confirmed that the categorical levels (0-4) and the temporal trends of the OWID index provide a faithful representation of the public health restrictions concerning face coverings as experienced by the population of Putian.
Therefore, we will revise our manuscript to make it clear that the use of the OWID index was not a choice of convenience, but a necessary and well-vetted decision. Given the unavailability of official local data, the OWID index, cross-validated by our local experts, represents the most rigorous and appropriate variable available to account for the profound impact of NPIs on influenza transmission in our model.
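To make the construction concrete, a hedged sketch of how a national-level NPI indicator of this kind can be merged onto the local daily case series as an ordered 0-4 categorical covariate; the file and column names below are illustrative placeholders, not the exact OWID export used in the study:

```python
import pandas as pd

# Hedged sketch: merge a national-level NPI indicator (e.g., an OWID/OxCGRT
# face-covering policy variable on an ordinal 0-4 scale) onto the local daily
# influenza series as a categorical covariate. File and column names are
# illustrative placeholders, not the exact export used in the study.
npi = pd.read_csv("owid_face_covering_china.csv", parse_dates=["date"])
cases = pd.read_csv("putian_daily_influenza.csv", parse_dates=["date"])

merged = cases.merge(npi[["date", "facial_coverings"]], on="date", how="left")

# Carry the last known policy level forward across reporting gaps and keep
# the 0-4 levels as an ordered integer feature for the downstream model.
merged["facial_coverings"] = merged["facial_coverings"].ffill().fillna(0).astype(int)
```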
(3) Figure 2A: We agree completely and will replace the heatmap with a multi-line plot or a stacked area chart to better visualize the temporal dynamics of influenza subtypes.
We have preliminarily completed the redrawing of Figure 2A. The new and old versions are presented for your review to determine which figure is more suitable for this manuscript in terms of scientific accuracy and visual impact.
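As one possible form for the redrawn panel, a brief matplotlib sketch of a stacked-area rendering of weekly subtype counts; the input file and the column names ("flu_a", "flu_b") are illustrative assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Stacked-area rendering of weekly influenza A and B counts, one possible
# replacement for the Figure 2A heatmap. File and column names ("flu_a",
# "flu_b") are illustrative placeholders.
df = pd.read_csv("putian_daily_influenza.csv", parse_dates=["date"]).set_index("date")
weekly = df[["flu_a", "flu_b"]].resample("W").sum()

fig, ax = plt.subplots(figsize=(10, 3))
ax.stackplot(weekly.index, weekly["flu_a"], weekly["flu_b"],
             labels=["Influenza A", "Influenza B"], alpha=0.8)
ax.set_ylabel("Weekly confirmed cases")
ax.legend(loc="upper left")
fig.tight_layout()
plt.show()
```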
Reviewer #2 (Public review):
Weakness (1):
The rationale of the study is not clearly stated.
We appreciate the reviewer's critique and acknowledge that the unique contribution of our study needs to be articulated more forcefully. Our introduction (lines 105-140) attempted to outline the limitations of existing studies, but we will revise it to be much sharper. The revised introduction will state unequivocally that our study's rationale is to address a confluence of specific, unresolved gaps in the literature: 1) the persistent challenge of forecasting influenza in subtropical regions with their erratic seasonality; 2) the lack of studies that build subtype-specific models for Influenza A and B, which we show have distinct meteorological drivers; 3) the methodological gap in integrating the explanatory power of DLNM with the predictive power of a rigorously Bayesian-optimized LSTM network; and 4) the unique opportunity to develop and test a model on data that encompasses the unprecedented disruption of the COVID-19 pandemic, a critical test of model robustness.
Weakness (2):
Several issues with methodological and data integration should be clarified.
We interpret this as a general statement, with the specific issues detailed in the reviewer’s subsequent points and the “Recommendations for the authors” section. We will meticulously address each of these specific points in our revision. For instance, as a demonstration of our commitment to clarification, we will provide a much more detailed justification for our choice of benchmark model (ARIMA), as detailed in our response to Recommendation #11.
Reviewer #2 (Recommendation for the authors):
The authors should justify why the baseline model selection was made by comparing the LSTM model only with ARIMA. How sensitive could the outcomes be to other commonly used machine learning methods, such as Random Forest or XGBoost, as benchmarks for their performance?
The reviewer raises a highly pertinent question regarding the selection of our benchmark model. A robust comparison is indeed essential for contextualizing the performance of our proposed LSTM network. Our choice to benchmark against the ARIMA model was a deliberate and principled decision, grounded in the specific literature of influenza forecasting at the intersection of climatology and epidemiology.
In the revised manuscript, we will expand our justification within the Methods section and reinforce it in the Discussion. Our rationale is as follows:
(1) ARIMA as the Established Standard: As we briefly noted in our original introduction (lines 110-113), the ARIMA model is arguably the most widely established and frequently cited statistical method for time-series forecasting of influenza incidence, including studies investigating meteorological drivers. It serves as the conventional benchmark against which novel methods in this specific domain are often evaluated. Therefore, demonstrating superiority over ARIMA is the most direct and scientifically relevant way to validate the incremental value of our deep learning approach.
(2) A Focused Scientific Hypothesis: Our primary hypothesis was that the LSTM network, with its inherent ability to capture complex non-linearities and long-term dependencies, could overcome the documented limitations of linear autoregressive models like ARIMA in the context of climate-influenza dynamics. Our study was designed specifically to test this hypothesis.
(3) Avoiding a “Bake-off” without a Clear Rationale: While other machine learning models like Random Forest or XGBoost are powerful, they are not established as the standard baseline in this particular niche of literature. Including them would shift the focus from a targeted comparison against the conventional standard to a broader, less focused “bake-off” of various algorithms. Such an exercise, while potentially interesting, would risk diluting the core message of our paper and would be undertaken without a clear, literature-driven hypothesis for why one of these specific tree-based models should be the next logical benchmark.
Therefore, we will argue in the revised manuscript that our focused comparison with ARIMA provides the clearest and most meaningful assessment of our model’s contribution to the existing body of work on climate-informed influenza forecasting. We will, however, explicitly acknowledge in the Discussion that future work could indeed benefit from a broader comparative analysis as the field continues to evolve and adopt a wider array of machine learning techniques.
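For context, a minimal sketch of an ARIMA-type benchmark of the kind described above, fitted on the 2018-2022 window and used to forecast the 2023 hold-out year with statsmodels; the synthetic series and the (2, 1, 2) order are placeholders, not the manuscript's fitted specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hedged sketch of an ARIMA-type benchmark: fit on the 2018-2022 training
# window and forecast the 2023 hold-out year. The synthetic series and the
# (2, 1, 2) order are placeholders, not the manuscript's fitted model.
dates = pd.date_range("2018-01-01", "2023-12-31", freq="D")
rng = np.random.default_rng(0)
seasonal = 4 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
y_daily = pd.Series(np.clip(5 + seasonal + rng.normal(0, 1, len(dates)), 0, None),
                    index=dates)

train, test = y_daily[:"2022-12-31"], y_daily["2023-01-01":]
arima = SARIMAX(train, order=(2, 1, 2)).fit(disp=False)
arima_forecast = arima.forecast(steps=len(test))
print(arima_forecast.head())
```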
Similarly, for some of the reviewer's recommendations that did not require significant time and effort to implement, such as Recommendation 7, we have already made the changes: we have redrawn Figure 3 based on the reviewer's feedback, and it is provided here for review.
Figure 3 presents the time series of the cases. I wonder whether the data for these factors and outcomes are daily or aggregated by week/month? I suggest representing it in a 9x1 format with a single x-axis to compare, instead of the 3x3 format. Authors can refer to a similar plot in Figure 1 of https://doi.org/10.1371/journal.pcbi.1012311.
We are deeply grateful for the reviewer’s valuable suggestion and thoughtful provision of reference illustrations. Based on their input, we have redrawn Figure 3 and have included it for their review.
Weakness (3):
Validation of the models is not presented clearly.
We were concerned by this comment and conducted a thorough self-assessment of our manuscript. We believe we have performed a multi-faceted validation, but we have evidently failed to present it with sufficient clarity and structure. Our validation strategy, detailed across the Methods and Results sections, includes:
- Internal Out-of-Time Validation: Using 2023 data as a hold-out set to test the model trained on 2018-2022 data (lines 695-696, 705-710; Figure 6A, B).
- External Validation: Testing the trained model on an independent dataset from a different city, Sanming (lines 730-736; Figure 6I, J).
- Benchmark Model Comparison: Quantitatively comparing the LSTM’s performance against the standard ARIMA model using multiple error metrics (lines 737-745; Table 2).
- Interpretability Validation (Sanity Check): Using SHAP analysis to ensure the model’s predictions are driven by epidemiologically plausible factors (lines 746-755; Figure 6E-H).
To address the reviewer’s valid critique of our presentation, we will significantly restructure the relevant parts of the Results section. We will create explicit subheadings such as “Internal Validation,” “External Validation,” and “Comparative Performance against ARIMA Benchmark” to make our comprehensive validation process unambiguous and easy to follow.
Weakness (4):
The claim for providing tools for 'early warning' was not validated by analysis and results.
We agree with this assessment entirely. This aligns with the eLife Assessment and comments from Reviewer #1. Our primary revision will be to systematically recalibrate the manuscript's language. We will replace all instances of “early warning tool” with more accurate and modest phrasing, such as “high-performance forecasting framework” or “a foundational model for future warning systems.” We will ensure that our revised title, abstract, and conclusions precisely reflect what our study has delivered: a robust predictive model, not a field-ready public health intervention tool.