Early prediction of level-of-care requirements in patients with COVID-19
Abstract
This study examined records of 2566 consecutive COVID-19 patients at five Massachusetts hospitals and sought to predict level-of-care requirements based on clinical and laboratory data. Several classification methods were applied and compared against standard pneumonia severity scores. The need for hospitalization, ICU care, and mechanical ventilation were predicted with validation accuracies of 88%, 87%, and 86%, respectively, whereas pneumonia severity scores achieved accuracies of 73% and 74% for ICU care and ventilation. When predictions were limited to patients with more complex disease, the ICU and ventilation prediction models achieved accuracies of 83% and 82%, respectively. Vital signs, age, BMI, dyspnea, and comorbidities were the most important predictors of hospitalization. Opacities on chest imaging, age, admission vital signs and symptoms, male gender, admission laboratory results, and diabetes were the most important risk factors for ICU admission and mechanical ventilation. The factors identified collectively form a signature of the novel COVID-19 disease.
eLife digest
The new coronavirus, SARS-CoV-2, which causes the pandemic disease COVID-19, has so far infected over 35 million people worldwide and killed more than 1 million. Most people with COVID-19 have no symptoms or only mild symptoms. But some become seriously ill and need hospitalization. The sickest are admitted to an Intensive Care Unit (ICU) and may need mechanical ventilation to help them breathe. Being able to predict which patients with COVID-19 will become severely ill could help hospitals around the world manage the huge influx of patients caused by the pandemic and save lives.
Now, Hao, Sotudian, Wang, Xu et al. show that computer models using artificial intelligence technology can help predict which COVID-19 patients will be hospitalized, admitted to the ICU, or need mechanical ventilation. Using data from 2,566 COVID-19 patients at five Massachusetts hospitals, Hao et al. created three separate models that can predict hospitalization, ICU admission, and the need for mechanical ventilation with more than 86% accuracy, based on patient characteristics, clinical symptoms, laboratory results and chest X-rays.
Hao et al. found that the patients' vital signs, age, obesity, difficulty breathing, and underlying diseases like diabetes were the strongest predictors of the need for hospitalization. Male sex, diabetes, cloudy chest X-rays, and certain laboratory results were the most important risk factors for intensive care treatment and mechanical ventilation. Laboratory results suggesting tissue damage, severe inflammation, or oxygen deprivation in the body's tissues were important warning signs of severe disease.
The results provide a more detailed picture of the patients who are likely to suffer from severe forms of COVID-19. Using the predictive models may help physicians identify patients who appear okay but need closer monitoring and more aggressive treatment. The models may also help policy makers decide who needs workplace accommodations such as being allowed to work from home, which individuals may benefit from more frequent testing, and who should be prioritized for vaccination when a vaccine becomes available.
Introduction
As a result of the SARS-CoV-2 pandemic, many hospitals across the world have resorted to drastic measures: canceling elective procedures, switching to remote consultations, designating most beds to COVID-19, expanding Intensive Care Unit (ICU) capacity, and re-purposing doctors and nurses to support COVID-19 care. In the U.S., the CDC estimates more than 310,000 COVID-19 hospitalizations from March 1 to June 13, 2020 (CDC, 2020).
Much of the modeling work related to the pandemic has focused on spread dynamics (Kucharski et al., 2020). Others have described patients who were hospitalized (Richardson et al., 2020) (n = 5700) and (Buckner et al., 2020) (n = 105), became critically ill (Gong et al., 2020) (n = 372), or succumbed to the disease (n = 1625 [Onder et al., 2020], n = 270 [Wu et al., 2020]). In data from New York City, 14.2% required ICU treatment and 12.2% mechanical ventilation (Richardson et al., 2020). With such rates, the logistical and ethical implications of bed allocation and potential rationing of care delivery are immense (White and Lo, 2020). To date, while state- or country-level prognostication has developed to examine resource allocation at a mass scale, there is inadequate evidence based on a large cohort for accurate prediction of disease progression at the individual patient level. A string of recent studies developed models to predict severe disease or mortality based on clinical and laboratory findings, for example (Yan et al., 2020) (n = 485), (Gong et al., 2020) (n = 372), (Bhargava et al., 2020) (n = 197), (Ji et al., 2020) (n = 208), and (Wang et al., 2020) (n = 296). In these studies, several variables, such as Lactate Dehydrogenase (LDH) (Gong et al., 2020; Ji et al., 2020; Yan et al., 2020) and C-reactive protein (CRP), have been identified as important predictors. All of these studies considered relatively small cohorts and, with the exception of Bhargava et al., 2020, examined patients in China. Although it is believed that the virus remains the same around the globe, the physiologic response to the virus and the eventual course of disease depend on multiple other factors, many of them regional (e.g. population characteristics, hospital practices, prevalence of pre-existing conditions) and not applicable universally. Triage of adult patients with COVID-19 remains challenging, with most evidence coming from expert recommendations; evidence-based methods based on larger U.S.-based cohorts have not been reported (Sprung et al., 2020).
Leveraging data from five hospitals of the largest health care system in Massachusetts, we seek to develop personalized, interpretable predictive models of (i) hospitalization, (ii) ICU treatment, and (iii) mechanical ventilation among SARS-CoV-2-positive patients. To develop these models, we built a pipeline leveraging state-of-the-art Natural Language Processing (NLP) tools to extract information from the clinical reports for each patient, employing statistical feature selection methods to retain the most predictive features for each model, and adapting a host of advanced machine learning-based classification methods to develop parsimonious (hence, easier to use and interpret) predictive models. We found that the more interpretable models can, for the most part, deliver predictive performance similar to more complex, ‘black-box’ models involving ensembles of many decision trees. Our results support our initial hypothesis that important clinical outcomes can be predicted with a high degree of accuracy upon the patient’s first presentation to the hospital using a relatively small number of features, which collectively compose a ‘signature’ of the novel COVID-19 disease.
Results
We extracted data for all patients (n = 2566) who had a positive RT-PCR SARS-CoV-2 test between March 4 and April 13, 2020 at five Massachusetts hospitals, included in the same health care system (Massachusetts General Hospital (MGH), Brigham and Women’s Hospital (BWH), Faulkner Hospital (FH), Newton-Wellesley Hospital (NWH), and North Shore Medical Center (NSM)). The study was approved by the pertinent Institutional Review Boards.
Demographics, pre-hospital medications, and comorbidities were extracted for each patient based on the electronic medical record. Patient symptoms, vital signs, radiologic findings, and laboratory results were recorded at their first hospital presentation (either clinic or emergency department) before testing positive for SARS-CoV-2. A total of 164 features were extracted for each patient. ICU admission and mechanical ventilation were determined for each patient. Complete blood count values were considered as absolute counts. Representative statistics comparing hospitalized, ICU admitted, and mechanically ventilated patients are provided in Table A1 (Appendix). Table A2 (Appendix) reports how patients were distributed among the five hospitals.
Among the 2566 patients with a positive test, 930 (36.2%) were hospitalized. Among the hospitalized, 273 (29.4% of the hospitalized) required ICU care, of whom 217 (79.5%) required mechanical ventilation. The mean age over all patients was 51.9 years (SD: 18.9 years) and 45.6% were male.
Hospitalization
The mean age of hospitalized patients was 62.3 years (SD: 18 years) and 55.3% were male. We employed linear and non-linear classification methods for predicting hospitalizations. Non-linear methods included random forests (RF) (Breiman, 2001) and XGBoost (Chen and Guestrin, 2016). Linear methods included support vector machines (SVM) (Cortes and Vapnik, 1995) and Logistic Regression (LR); each linear method used either ℓ1- or ℓ2-norm regularization and we report the best-performing flavor of each model.
Results are reported in Table 1. We report the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and the Weighted-F1 score, both computed out-of-sample (in a test set not used for training the model). As we detail under Methods, we used two validation strategies. The ‘Random’ strategy randomly split the patients into a training and a test set and was repeated five times; from these five splits we report the average and the standard deviation of the test performance. The ‘BWH’ strategy trained the models on MGH, FH, NWH, and NSM patients, and evaluated performance on BWH patients.
The hospitalization models used symptoms, pre-existing medications, comorbidities, and patient demographics. Laboratory results and radiologic findings were not considered since these were not available for most non-hospitalized patients. Full models used all (106) variables retained after several pre-processing steps described in Materials and methods. Applying the statistical variable selection procedure described in the Appendix (specifically, eliminating variables with a p-value exceeding 0.05) yields a model with 74 variables. To provide a more parsimonious, highly interpretable, and easier to implement model, we used recursive feature elimination (see Appendix) to select a model with only 11 variables. The best model using the random validation approach has an AUC of 88%, while the best parsimonious (linear) model has an AUC of 83%, though it is easier to interpret and implement. Validation on the BWH patients yields an AUC of 84% for the parsimonious model.
Table 1 also reports the 11 variables in the parsimonious LR model, including their LR coefficients, and a binarized version of this model as described in Materials and methods. The most important variables associated with hospitalization were: oxygen saturation, temperature, respiratory rate, age, pulse, blood pressure, a comorbidity of adrenal insufficiency, BMI, prior transplantation, dyspnea, and kidney disease.
Additionally, we assessed the role of pre-existing ACE inhibitor (ACEI) and angiotensin receptor blocker (ARB) medications by adding these variables into the parsimonious binarized model, while controlling for additional relevant variables (hypertension, diabetes, and arrhythmia comorbidities and other hypertension medications). We found that while ARBs are not a factor, ACEIs reduce the odds of hospitalization to roughly three quarters of baseline, on average, controlling for other important factors, such as age, hypertension, and related comorbidities associated with the use of these medications.
ICU admission
The mean age of ICU admitted patients was 63.3 years (SD: 15.1 years) and 63% were male. The ICU and ventilation prediction models used the features considered for the hospitalization model, as well as laboratory results and radiologic findings. For these models, we excluded patients who required immediate ICU admission or ventilation (defined as within 4 hr from initial presentation). This was done in order to focus on patients for whom triaging is challenging and risk prediction would be beneficial. There were 2513 and 2525 patients remaining for the ICU and the mechanical ventilation prediction models, respectively.
For the model including 2513 patients (Table 2), we first developed a model using all 130 variables retained after pre-processing, then employed statistical variable selection to retain 56 of the variables, and then applied recursive feature elimination with LR to select a parsimonious model which uses only 10 variables. The following variables were included: opacity observed in a chest scan, respiratory rate, age, fever, male gender, albumin, anion gap, oxygen saturation, LDH, and calcium. In addition, we generated a binarized version of the parsimonious model. The parsimonious model for all 2513 patients has an AUC of 86%, almost as high as the model with all 130 features.
For comparison purposes against well-established scoring systems, we implemented two commonly used pneumonia severity scores, CURB-65 (Lim et al., 2003) and the Pneumonia Severity Index (PSI) (Fine et al., 1997). Predictions based on the PSI and CURB-65 scores have AUCs of 73% and 67%, respectively.
We also developed a model for a more restrictive set of patients. Specifically, the number of missing lab values for some patients is substantial. Given the importance of LDH and CRP, as revealed by our models, the more restricted patient set contains 669 patients with non-missing LDH and CRP values. After removing patients who required intubation or ICU admission within 4 hr of hospital presentation, we included 628 patients and 635 patients for the restricted ICU admission and ventilation models, respectively.
The best restricted model for the 628 patients (Table 3) is the nonlinear XGBoost model using 29 statistically selected features, achieving an AUC of 83%, with the linear parsimonious LR model close behind (AUC 80%). An RF model using all variables yields an AUC of 77% when tested on BWH data. PSI- and CURB-65 models have AUCs below 59%.
Mechanical ventilation
The mean age of patients requiring mechanical ventilation was 63.3 years (SD: 14.7 years) and 63.6% were male. Again, we excluded patients who were intubated within 4 hr of their hospital admission.
For the model including 2525 patients (Table 4), we used statistical feature selection to select 55 variables, and recursive feature elimination with LR to select a parsimonious model with only eight variables. The following variables were included: lung opacities, albumin, fever, respiratory rate, glucose, male gender, LDH, and anion gap. In addition, we generated a binarized version of the parsimonious model. The best model for all 2525 patients was a nonlinear RF model using the 55 statistically selected variables and yielding an AUC of 86%. The best linear model was the parsimonious LR model with an AUC of 85%. PSI- and CURB-65 models yield AUCs of 74% and 67%, respectively.
The best model for the restricted case of 635 patients (Table 5) was the linear parsimonious LR model (with just five variables) achieving an AUC of 82%. PSI- and CURB-65 models do not exceed AUC of 58%.
Time period between ICU/ventilation model prediction and corresponding outcomes
Table 6 reports the mean and the median time interval (in hours) between hospital admission time and ICU/ventilation outcomes. Specifically, we report statistics for ICU admission or intubation outcomes from the correct ICU/intubation predictions made by our models trained on four hospitals (MGH, NWH, NSM, FH) and applied to BWH patients (both the models making predictions for all patients and the restricted models). As we have noted earlier, our models use the lab results closest to admission (either on admission date or the following day). We also report the time interval between the last lab result used by the model and the corresponding ICU/intubation outcome.
Discussion
We developed three models to predict need for hospitalization, ICU admission, and mechanical ventilation in patients with COVID-19. The prediction models are not meant to replace clinicians’ judgment for determining level of care. Instead, they are designed to assist clinicians in identifying patients at risk of future decompensation. Patient vital signs were the most important predictors of hospitalization. This is expected as vital signs reflect underlying disease severity, the need for cardiorespiratory resuscitation, and the risk of future decompensation without adequate medical support. Older age and BMI were also important predictors for hospitalization. Age has been recognized as an important factor associated with severe COVID-19 in previous series (Grasselli et al., 2020; Guan et al., 2020; Richardson et al., 2020). However, it is not known whether age itself or the presence of comorbidities place patients at risk for severe disease. Our results demonstrate that age is a stronger predictor of severe COVID-19 than a host of underlying comorbidities.
In terms of patient comorbidities, adrenal insufficiency, prior transplantation, and chronic kidney disease were strongly associated with need for hospitalization. Diabetes mellitus was associated with a need for ICU admission and mechanical ventilation, which might be due to its detrimental effects on immune function.
For the ICU and ventilation prediction models screening all at-risk (COVID-19-positive) patients, opacities observed in a chest scan, age, and male gender emerge as important variables. Males have been found to have worse in-hospital outcomes in other studies as well (Palaiodimos et al., 2020).
We also identified several routine laboratory values that are predictive of ICU admission and mechanical ventilation. Elevated serum LDH, CRP, anion gap, and glucose, as well as decreased serum calcium, sodium, and albumin, were strong predictors of ICU admission and mechanical ventilation. LDH is an indicator of tissue damage and has been found to be a marker of severity in P. jirovecii pneumonia (Zaman and White, 1988). Along with CRP, it was one of the two most important predictors of ICU admission and ventilation in the parsimonious model among patients who had LDH and CRP measurements on admission. This finding is consistent with previous reports identifying LDH as an important prognostic factor (Gong et al., 2020; Ji et al., 2020; Mo et al., 2020; Yan et al., 2020). In addition, lower serum calcium is associated with cell lysis and tissue destruction, as it is often seen as part of the tumor lysis syndrome. Elevated serum anion gap is a marker of metabolic acidosis and ischemia, suggesting that tissue hypoxia and hypoperfusion may be components of severe disease.
For all three prognostic models we developed (predicting hospitalization, ICU care, and mechanical ventilation), the AUC ranges within 86–88%, which indicates strong predictive power. Interestingly, we can achieve an AUC within 85–86% for ICU and ventilation prediction with a parsimonious linear model utilizing no more than 10 variables. In all cases, we can also develop a parsimonious model with binarized variables using medically suggested normal and abnormal variable thresholds. These binarized models have performance similar to their continuous counterparts. The ICU and ventilation models using all patients are very accurate but, arguably, make a number of ‘easier’ decisions, since more than 60% of the patients are never hospitalized. Many of these patients are younger, healthy, and likely present with mild-to-moderate symptoms. To test the robustness of the models on patients with potentially more ‘complex’ disease, we developed ICU and ventilation models on a restricted set of patients: the subset of patients who are hospitalized and for whom the crucial labs are available (specifically CRP and LDH, which emerged as important from our models). The best AUC for these models drops, but not below 82%, which indicates robustness of the model even when dealing with arguably harder to assess cases. LDH, CRP, calcium, lung opacity, anion gap, SpO2, sodium, and a comorbidity of insulin-controlled diabetes appear as the most significant for these patients. Interestingly, the corresponding binarized models have about 10% lower AUC; apparently, for the more severely ill, clinical variables deviate substantially from normal, and knowing the exact values is crucial.
The models have been validated with two different approaches, using random splits of the data into training and testing, as well as training in some hospitals and testing at a different hospital. Performance metrics are relatively consistent with these two approaches. We also compared the models against standard pneumonia severity scores, PSI and CURB-65, establishing that our models are significantly stronger, which highlights the different clinical profile of COVID-19.
We also examined how much in advance of the ICU or ventilation outcomes our models are able to make a prediction. Of course, this is not entirely in our control; it depends on the state in which patients are admitted and how soon their condition deteriorates to require ICU admission and/or ventilation. Table 6 reports the corresponding statistics. For example, the restricted ICU and ventilation models make a correct prediction upon admission (using the lab results closest to that time) for outcomes that on average occur 38 and 35 hr later, respectively.
To further test the accuracy of the restricted ICU and ventilation models well in advance of the corresponding event, we considered an extended BWH test set (adding 11 more patients) and computed the accuracy of the models when the test set was restricted to patients whose outcome (ICU admission or ventilation) was more than x hours after the admission lab results based on which the prediction was made, with x being 6 hr, or 12 hr, or 18 hr, or 24 hr, or even 48 hr. The ICU model reaches an AUC of 87% and a weighted F1-score of 86% at x = 18 hr. The ventilation model reaches an AUC of 64% and an F1-score of 72% at x = 48 hr. These results demonstrate that the predictive models can indeed make predictions well into the future, when physicians would be less certain about the course of the disease and when there is potentially enough time to intervene and improve outcomes.
A manual review of the predictions by the models indicates that they performed well at predicting future ICU admissions for patients who presented with mild disease several days before ICU admission was necessary. Such patients were hemodynamically stable and had minimal oxygen requirements on the floor, before clinical deterioration necessitated ICU admission. We identified several such patients. A typical case is that of a 51-year-old male with a history of hypertension, obesity, and insulin-dependent type 2 diabetes mellitus, who presented with a 3-day history of dyspnea, cough and myalgias. In the emergency department, he was hemodynamically stable, saturating at 96–97% on 2 L of nasal cannula. The patient was admitted to the floor and did well for 3 days, saturating at 93–96% on room air. On the fourth day of hospitalization, he had increasing oxygen requirements and the decision was made to transfer him to the ICU. He was intubated and ventilated for 30 days. Our prediction models accurately predicted at the time of his presentation that he would eventually require ICU admission and mechanical ventilation. This prediction was based on such variables as an elevated LDH (241 U/L) and the presence of insulin-dependent diabetes mellitus. Another such case is that of a 59-year-old male without a significant prior medical history who presented with 2 days of dyspnea, nausea, and diarrhea. At the emergency department, he was tachycardic at 110 beats per minute and saturating at 96% on room air, and the patient was admitted. For 2 days, the patient was hemodynamically stable, saturating at 94–97% on room air. On the third day of hospitalization, he had increasing oxygen requirements, eventually requiring transfer to the ICU. He was intubated and ventilated for the next 14 days. Our prediction model predicted the patient’s decompensation at his presentation, due to elevations in LDH (348 U/L) and CRP (102.3 mg/L).
We also considered the role of ACEIs and ARBs and their potential association with the outcomes. It has been speculated that ACEIs may worsen COVID-19 outcomes because they upregulate the expression of ACE2, which the virus targets for cell entry. No such evidence has been reported in earlier studies (Kuster et al., 2020; Patel and Verma, 2020). In fact, a smaller study (Zhang et al., 2020) (n = 1128 vs. 2566 in our case) reported a beneficial effect, and (Rossi et al., 2020) warn of potential harmful effects of discontinuing ACEIs or ARBs due to COVID-19. Our hospitalization model suggests that ACEIs do not increase hospitalization risk and may slightly reduce it (OR 95% CI is (0.52, 1.04) with a mean of 0.73). In the ICU and ventilation models, the signal for these two medications is too weak statistically to establish any meaningful association.
The models we derived can be used for a variety of purposes: (i) guiding patient triage to appropriate inpatient units, (ii) guiding staffing and resource planning logistics, and (iii) understanding patient risk profiles to inform future policy decisions, such as targeted risk-based stay-at-home restrictions, testing, and vaccination prioritization guidelines once a vaccine becomes available.
Calculators implementing the parsimonious models corresponding to each of the Tables 1, 2, 3, 4, 5 have been made available online (Hao et al., 2020).
Materials and methods
Data extraction
Natural Language Processing (NLP) was used to extract patient comorbidities (see Appendix for details), pre-existing medications, admission vital signs, hospitalization course, ICU admission, and mechanical intubation.
Pre-processing
The categorical features were converted to numerical by ‘one-hot’ encoding. Each categorical feature, such as gender and race, was encoded as an indicator variable for each category. Features were standardized by subtracting the mean and dividing by the standard deviation.
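As a concrete illustration of these two steps, the following sketch uses scikit-learn; the column names are hypothetical placeholders, not the study's actual schema.

```python
# Minimal sketch of the encoding/standardization steps described above.
# Column names below are illustrative, not the study's actual variables.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

categorical = ["gender", "race"]      # each category becomes an indicator column
continuous = ["age", "temperature"]   # standardized to zero mean, unit variance

preprocess = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("scale", StandardScaler(), continuous),
])
# features = preprocess.fit_transform(df)  # df: a pandas DataFrame of raw features
```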
Several pre-processing steps, including variable imputation, outlier elimination, and removal of highly correlated variables, were undertaken (see Appendix). After completing these procedures, 106 variables remained for use by the hospitalization model. For the ICU and ventilation prediction models, we added laboratory results and radiologic findings. We removed variables with more than 90% missing values out of the roughly 2500 patients retained for these models; the remaining missing values were imputed as described in the Appendix. These pre-processing steps retained 130 variables for the ICU and ventilation models.
Classification methods
We employed nonlinear ensemble methods including random forests (RF) (Breiman, 2001) and XGBoost (Chen and Guestrin, 2016). We also employed ‘custom’ linear methods which yield interpretable models; specifically, support vector machines (SVM) (Cortes and Vapnik, 1995) and Logistic Regression (LR). In both cases, the variants we computed were robust to noise and the presence of outliers (Chen and Paschalidis, 2018), using proper regularization. LR, in addition to a prediction, provides the likelihood associated with the predicted outcome, which can be used as a confidence measure in decision making. Further details on these methods are in the Appendix.
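The following sketch instantiates the four model families named above with scikit-learn and the xgboost package; the hyperparameter values shown are placeholders, not the tuned settings used in the study.

```python
# Hypothetical instantiation of the classifier suite; hyperparameters are
# illustrative defaults, and the l1/l2 regularization choice was tuned per model.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from xgboost import XGBClassifier

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=500, learning_rate=0.1),
    "SVM-L1": LinearSVC(penalty="l1", dual=False, C=1.0),
    "LR-L1": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
}
# for name, clf in models.items():
#     clf.fit(X_train, y_train)
```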
For each outcome, we used the statistical feature selection and recursive feature elimination procedures described in the Appendix to develop an LR parsimonious model. The LR coefficients are comparable since the variables are standardized. Hence, a larger absolute coefficient indicates that the corresponding variable is a more significant predictor. Positive (negative) coefficients imply positive (negative) correlation with the outcome. We also developed a version of this model by converting all continuous variables into binary variables, using medically motivated thresholds (see Appendix). We report the coefficients of the ‘binarized’ model and the implied odds ratio (OR), representing how the odds of the outcome are scaled by having a specific variable being abnormal vs. normal, while controlling for all other variables in the model.
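To make the odds-ratio interpretation concrete: in a binarized LR model, the OR for a variable is the exponential of its coefficient. A minimal worked example, with a made-up coefficient:

```python
# OR = exp(beta): how the odds scale when a binarized variable flips from
# normal (0) to abnormal (1), holding all other variables fixed.
import math

beta = 0.7                    # hypothetical coefficient, not from the tables
odds_ratio = math.exp(beta)   # ~2.01: abnormality roughly doubles the odds
```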
Outcomes and performance metrics
Model performance metrics included the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and the Weighted-F1 score. The ROC plots the true positive rate (a.k.a. recall or sensitivity) against the false positive rate (equal to one minus the specificity). We optimized algorithm parameters to maximize AUC.
The F1 score is the harmonic mean of precision and recall. Precision (or positive predictive value) is defined as the ratio of true positives over true and false positives. The Weighted-F1 score is computed by weighting the F1-score of each class by the number of patients in that class.
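Both metrics are standard; a minimal sketch of computing them with scikit-learn, assuming test labels `y_test`, predicted positive-class probabilities `y_score`, and thresholded labels `y_pred`:

```python
# Out-of-sample AUC and Weighted-F1, as reported in Tables 1-5.
from sklearn.metrics import roc_auc_score, f1_score

auc = roc_auc_score(y_test, y_score)                        # area under the ROC curve
weighted_f1 = f1_score(y_test, y_pred, average="weighted")  # per-class F1, weighted by class size
```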
Model validation
The data were split into a training (80%) and a test set (20%). Algorithm parameters were optimized on the training (derivation) set using fivefold cross-validation. Performance metrics were computed on the test set. This process was repeated five times, each time with a random split into training/testing sets. In columns labeled as Random in Tables 1, 2, 3, 4, 5, we report the average (and standard deviation) of the test performance metrics over the five random splits. We also performed a different type of validation: we trained the models on MGH, FH, NWH, and NSM patients, and evaluated performance on BWH patients. These results are reported under the columns BWH in the tables.
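A sketch of the two validation strategies, assuming pandas objects: a feature matrix `X`, labels `y`, and a hypothetical `hospital` column in `df` identifying each patient's site:

```python
# "Random" strategy: five random 80/20 train/test splits.
from sklearn.model_selection import train_test_split

for seed in range(5):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    # tune parameters via 5-fold cross-validation on (X_tr, y_tr),
    # then compute AUC / Weighted-F1 on (X_te, y_te)

# "BWH" strategy: train on four hospitals, test on the held-out fifth.
train_mask = df["hospital"].isin(["MGH", "FH", "NWH", "NSM"])
X_tr, y_tr = X[train_mask], y[train_mask]
X_te, y_te = X[~train_mask], y[~train_mask]
```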
Appendix 1
1. Representative statistics of patients and variables highly correlated with the outcomes
Characteristics of the 2566 patients who tested positive for SARS-CoV-2, with key statistics for each cohort (hospitalized vs. not, ICU admitted vs. not, and mechanically ventilated vs. not), are provided in Appendix 1—table 1. For each variable we provide the mean value of the variable (or percentage for categorical variables) in each cohort and its complement, and a p-value computed using a chi-squared test for categorical variables and a Kolmogorov-Smirnov (KS) test for continuous variables. A low p-value supports rejection of the null hypothesis, implying that the corresponding variable is statistically different in a cohort compared to its complement (e.g., hospitalized vs. not).
Appendix 1—table 2 reports how the entire patient cohort is distributed across the five different hospitals according to the various outcome groups.
2. Natural Language Processing (NLP) of clinical notes
The de-identified data consisted of demographics, lab results, history and physical examination (H and P) notes, progress notes, radiology reports, and discharge notes. We extracted all variables needed for each patient and built a profile using NLP tools. There were two main difficulties. First, many important features such as vitals and medical history (prior conditions, medications) were not in a table format and had to be extracted from the report text using different regular expression templates, post-processing the results to eliminate errors due to non-uniformity in the reports (e.g., a line break may separate a date from the field indicating its type). Second, negations in the text had to be recognized. Simply recognizing a medical term such as ‘cough’ or ‘fever’ is not sufficient, since the report may include ‘Patient denies fever or cough’. We applied multiple NLP schemes to overcome these difficulties.
Regular expression matching is the basic strategy we used to extract features such as body temperature values (with or without a decimal, followed by ‘°C’/‘°F’) and blood pressure values (‘xx(x)/xx(x)’, even when intermingled with dates of the form ‘mm/dd/yyyy’, which use similar symbols). Extracting pulse and respiratory rates is challenging since it is easy to mismatch the corresponding values; thus, we also matched the indicators ‘RR:’ (respiratory rate) or ‘P’ (pulse rate) in the vicinity of the number.
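The patterns below are simplified illustrations of this strategy, not the study's actual templates, which handled many more formats:

```python
# Illustrative regular expressions for vitals; real notes require more
# templates and post-processing than shown here.
import re

note = "T 98.6F, BP 132/84, RR: 18, P 92"

temp  = re.search(r"\b(\d{2,3}(?:\.\d)?)\s*[FC]\b", note)    # temperature value
bp    = re.search(r"\b(\d{2,3})/(\d{2,3})\b(?!/\d)", note)   # BP; lookahead rejects mm/dd/yyyy dates
rr    = re.search(r"\bRR:?\s*(\d{1,2})\b", note)             # respiratory rate via "RR" indicator
pulse = re.search(r"\bP:?\s*(\d{2,3})\b", note)              # pulse via "P" indicator
```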
To extract symptoms in H and P notes and findings in radiology reports, we used two NLP models: a Named Entity Recognition (NER) model and a Natural Language Inference (NLI) model (Zhu et al., 2018). The first model aims at finding all the symptom/disease named entities in the report. The key motivation for NER is that it is hard to list all possible disease names and search for them in each sentence; instead, NER models use the context to infer the possible targets, so even abbreviations like ‘N/V’ are recognized. We used the spaCy NER model (Kiperwasser and Goldberg, 2016) trained on the BC5CDR corpus. The NLI model is used to detect negations, by checking whether a sentence, as a premise, supports the hypothesis that the patient truly has the disease/symptom mentioned in it. We applied a fine-tuned RoBERTa model (Liu et al., 2019) to perform NLI.
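A hedged sketch of this two-stage pipeline follows. The specific checkpoints named below (scispacy's `en_ner_bc5cdr_md` and `roberta-large-mnli`) are plausible stand-ins consistent with the description, not confirmed as the exact models used:

```python
# Stage 1: NER finds disease/symptom mentions; stage 2: NLI checks whether
# the sentence (premise) entails that the patient has the finding (hypothesis).
import spacy  # with the scispacy BC5CDR model installed (assumed)
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ner = spacy.load("en_ner_bc5cdr_md")
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

sentence = "Patient denies fever or cough."
for ent in ner(sentence).ents:                      # e.g., "fever", "cough"
    enc = tok(sentence, f"The patient has {ent.text}.", return_tensors="pt")
    with torch.no_grad():
        label_id = nli(**enc).logits.argmax(-1).item()
    keep = nli.config.id2label[label_id] == "ENTAILMENT"  # drop negated mentions
```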
For medication extraction, we used the Unified Medical Language System (UMLS) (UMLS, 2019), which comprehensively contains medical terms and their relationships. We added a medication to the patient’s prior-to-admission medication list only if the medication or brand name is found in the UMLS ‘Pharmacologic Substance’ or ‘Clinical Drug’ category.
Symptoms, medical history, and prior medications from H and P notes are often described using different terminology or acronyms that imply the same condition or medication (e.g., dyspnea and SOB). We manually mapped these non-unique descriptors to distinct categories. An appropriate classification was also used for comorbidities, prior medications, radiological findings, and laboratories. The entire list of variables extracted and used in the analysis is provided in Appendix 1—table 3.
To evaluate the accuracy of the NLP models on our data, we randomly selected 35 H and P notes and manually checked the model output, evaluating the precision, recall, and F1-score for the extracted terms. For the NER+NLI deep learning model, we compared all the symptoms extracted by the models against the manually extracted ground truth. For the general regular expression matching models, we checked the extraction of vitals as a representative task, particularly since vitals have the most complicated format in the original notes. Appendix 1—table 4 provides the results of this manual evaluation.
For both types of models, the F1-score exceeds 90%. Most of the missed symptoms are due to non-obvious abbreviations. Regular expression matching has better performance, since potential errors may only come from very rare formats we did not consider.
3. Classification methods
A random forest (RF) (Breiman, 2001) is an ensemble algorithm that achieves high accuracy and generalization performance by combining multiple weak decision tree classifiers. For training, RF uses the bootstrap aggregating (bagging) technique: multiple decision trees are trained in parallel, each on a random sample set drawn from the original training set. In the test phase, RF applies the trained decision tree classifiers to a test sample and combines their outputs by majority voting.
XGBoost (Chen and Guestrin, 2016) generates a series of decision trees in sequential order; each decision tree is fitted to the residual between the prediction of the previous decision tree and the target value, and this is repeated until a predetermined number of trees or a convergence criterion is reached. All decision trees computed are combined with proper weights to produce a final decision. XGBoost uses shrinkage and column subsampling to prevent overfitting and achieves fast training using a number of parallelization approaches.
Both of these nonlinear models are expensive to train compared to the linear models we discuss next. Essentially, each of them trains an ensemble of many decision trees (often 500 or more), and a decision is made by combining information from all of these trees.
Among the linear classifiers, we used the support vector machine (SVM) (Cortes and Vapnik, 1995), which computes an optimal hyperplane separating the two classes. To render the method robust to noise and the presence of outliers (Chen and Paschalidis, 2018) we used (ℓ1- or ℓ2-norm) regularized versions of SVM.
We also used Logistic regression (LR), a common classification method that uses a linear regression model to approximate the logarithmic odds (logit) of the true classification label. LR, in addition to a prediction, also provides the likelihood of the predicted outcome, which can be used as a confidence measure in decision making. Similar to SVM, we used (ℓ1- or ℓ2-norm) regularized logistic regression to find the optimal subset of features from the initial feature space. In particular, based on the LR model, the predicted probability of the outcome, denoted by $\hat{p}$, is estimated by the formula:

$$\hat{p} = \frac{\exp\left(\beta_0 + \sum_{i=1}^{n} \beta_i x_i\right)}{1 + \exp\left(\beta_0 + \sum_{i=1}^{n} \beta_i x_i\right)},$$

where $\exp(\cdot)$ denotes the exponential function, $\beta_0$ is the intercept, $x_1, \ldots, x_n$ the variables used by the model, and $\beta_1, \ldots, \beta_n$ the corresponding coefficients. Using this formula and the LR coefficients (and intercept) provided in Tables 1, 2, 3, 4, 5, one can obtain an easily computable value for the predicted probability of the corresponding outcome. Comparing that value to a threshold (in the interval [0,1]) yields a prediction. The threshold can be set depending on the desired trade-off between sensitivity and specificity, which is typically specified by the user.
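A worked numeric sketch of this computation, with made-up coefficients (the actual fitted values appear in Tables 1, 2, 3, 4, 5):

```python
# Turning LR coefficients into a predicted probability; all numbers here are
# hypothetical, for illustration only.
import math

intercept = -1.2
coefs  = [0.8, -0.5, 1.1]   # beta_1..beta_n (for standardized variables)
values = [1.3,  0.2, 0.9]   # one patient's standardized variable values

logit = intercept + sum(b * x for b, x in zip(coefs, values))
p_hat = math.exp(logit) / (1 + math.exp(logit))  # predicted outcome probability
prediction = p_hat > 0.5    # the threshold trades off sensitivity vs. specificity
```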
4. Pre-processing, statistical feature selection and recursive feature elimination
We extracted patients’ laboratory test results at the date of hospital admission (reference date). Since some lab tests may be received several hours after the reference time, we extracted the set of lab results nearest to the reference time. Some tests have multiple Logical Observation Identifiers Names and Codes (LOINC) referring to the same quantity, and were merged. White blood cell (WBC) types (basophils, eosinophils, lymphocytes, monocytes, and neutrophils) were reported both as an absolute count and as a percentage (of WBC). We eliminated the percentages and maintained the absolute counts. We also removed all laboratory tests with results available for only a small percentage of the patients (less than 10%). This retained 65 laboratory variables.
Missing variables were imputed using the mode or, for some key lab variables, by regressing on the non-missing variables of the patient. To mitigate the effect of outliers, values of each variable higher than the 99th percentile or lower than the 1st percentile were replaced with the 99th or 1st percentile, respectively. Finally, to avoid collinearity, for each pair of highly correlated variables (absolute correlation coefficient higher than 0.8) we removed one of the two.
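A minimal sketch of the latter two steps, assuming the continuous features sit in a pandas DataFrame `df`:

```python
# Winsorize each variable at its 1st/99th percentiles, then drop one variable
# from every highly correlated pair (|r| > 0.8).
import pandas as pd

df = df.clip(lower=df.quantile(0.01), upper=df.quantile(0.99), axis=1)

corr = df.corr().abs()
to_drop = set()
cols = list(corr.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        if corr.loc[a, b] > 0.8 and a not in to_drop and b not in to_drop:
            to_drop.add(b)   # keep one variable of the pair, remove the other
df = df.drop(columns=list(to_drop))
```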
For each model, we used a variety of statistical feature selection approaches. Specifically, we first calculated a p-value for each variable as described earlier and removed all variables with a p-value exceeding 0.05. Further, we used (ℓ1-norm) regularized LR and performed recursive feature elimination as follows. We ran LR and obtained the coefficients of the model. We then eliminated the variable with the smallest absolute coefficient and re-ran LR to obtain a new model. We iterated in this fashion, selecting the model that maximizes a metric equal to the mean AUC minus its standard deviation in a validation dataset.
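In sketch form, the elimination loop looks as follows; `validation_score` is an assumed helper computing the mean AUC minus its standard deviation over validation folds:

```python
# Recursive feature elimination driven by LR coefficient magnitudes.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = list(range(X_train.shape[1]))   # indices of candidate variables
best_score, best_features = -np.inf, list(features)
while len(features) > 1:
    lr = LogisticRegression(penalty="l1", solver="liblinear")
    lr.fit(X_train[:, features], y_train)
    score = validation_score(lr, features)   # assumed helper: mean AUC - std
    if score > best_score:
        best_score, best_features = score, list(features)
    weakest = int(np.argmin(np.abs(lr.coef_[0])))   # smallest |coefficient|
    features.pop(weakest)                           # eliminate it and refit
```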
5. Thresholds for the binarized models
Thresholds used for generating binarized versions of our parsimonious models are reported in Appendix 1—table 5. In these models, a variable is set to 1 if the corresponding continuous variable is abnormal and to 0 otherwise.
6. Standard pneumonia severity scores
For comparison purposes we implemented two commonly used pneumonia severity scores, CURB-65 (Lim et al., 2003) and the Pneumonia Severity Index (PSI) (Fine et al., 1997). CURB-65 uses a mental test assessment, Blood Urea Nitrogen (BUN), respiratory rate, blood pressure, and the indicator of age being 65 or older. PSI uses similar information, a host of laboratory values, and comorbidities. From CURB-65 we did not score for mental status since we did not have such information. From PSI, we did not use mental status and whether the patient was a nursing home resident. Given that laboratory values are used, we computed these scores to predict ICU care and ventilator use. In each case, we computed the corresponding score and then optimized a threshold using cross-validation over the training set in order to make the prediction. We used these thresholds and evaluated performance of each scoring system in the test set.
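As an illustration, a sketch of the CURB-65 score as adapted here (the confusion component omitted, as described); the thresholds follow the published definition:

```python
# CURB-65 without the confusion component: one point each for elevated BUN,
# high respiratory rate, low blood pressure, and age 65 or older.
def curb65_no_confusion(bun_mg_dl, resp_rate, sys_bp, dia_bp, age):
    score = 0
    score += bun_mg_dl > 19          # BUN > 19 mg/dL (urea > 7 mmol/L)
    score += resp_rate >= 30         # breaths per minute
    score += sys_bp < 90 or dia_bp <= 60
    score += age >= 65
    return score                     # 0-4; decision threshold tuned via cross-validation

# e.g., curb65_no_confusion(bun_mg_dl=24, resp_rate=32, sys_bp=100, dia_bp=58, age=70) -> 4
```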
7. Training/Derivation Model Performance
Performance metrics for the various models on the training/derivation cohorts are reported in Appendix 1—tables 6, 7, 8, 9, 10. These are computed for both the random splitting of the data into training and testing sets (in this case, we provide the mean and standard deviation over the five random splits), as well as for the training dataset formed from patients at MGH, FH, NWH, and NSM (these results are under the column named BWH in Appendix 1—tables 6, 7, 8, 9, 10, simply to match the terminology of Tables 1, 2, 3, 4, 5).
8. Performance of the restricted ICU and ventilation models with sufficient distance to the event
Appendix 1—table 11 lists the performance of the restricted ICU and mechanical ventilation parsimonious LR-L1 models provided in Tables 3 and 5 when applied to a test set consisting of the BWH patients and 11 additional patients whose data were collected right after the original dataset was compiled. In these results, we excluded patients whose predicted outcome (ICU or intubation) occurs less than x hours from the time the admission lab results were made available, where x takes values in the set {6 hr, 12 hr, 18 hr, 24 hr, 48 hr}. Thus, the corresponding test set includes only patients with sufficient time difference from the data used to make the prediction, assessing how far into the future the predictive model could reach. We added the 11 additional patients to make sure we had a sufficient number of test patients to perform this study. As the results suggest, ICU admission prediction is fairly accurate and robust, whereas intubation prediction has moderate predictive power.
Data availability
Source code for processing patient data is provided together with the submission. Due to HIPAA restrictions and Data Use Agreements, we cannot make the original patient data publicly available. Interested parties may submit a request to the authors to obtain access to de-identified data. The authors would request pertinent IRB approval to make available a de-identified version of the data, stripped of any protected health information as specified under HIPAA rules. The IRB of the hospital system approved the study under Protocol #2020P001112, and the Boston University IRB found the study to be Not Human Subject Research under Protocol #5570X (the BU team worked with a de-identified limited dataset).
References
- Predictors for severe COVID-19 infection. Clinical Infectious Diseases, ciaa674. https://doi.org/10.1093/cid/ciaa674
- Clinical features and outcomes of 105 hospitalized patients with COVID-19 in Seattle, Washington. Clinical Infectious Diseases, ciaa632. https://doi.org/10.1093/cid/ciaa632
- XGBoost: a scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794.
- A robust learning approach for regression models based on distributionally robust optimization. Journal of Machine Learning Research 19:1–48.
- A prediction rule to identify low-risk patients with community-acquired pneumonia. New England Journal of Medicine 336:243–250. https://doi.org/10.1056/NEJM199701233360402
- Clinical characteristics of coronavirus disease 2019 in China. The New England Journal of Medicine 382:1708–1720. https://doi.org/10.1056/NEJMoa2002032
- COVID calculators [website]. Network Optimization and Control Lab, Boston University. Accessed October 10, 2020.
- Prediction for progression risk in patients with COVID-19 pneumonia: the CALL score. Clinical Infectious Diseases 71:1393–1399. https://doi.org/10.1093/cid/ciaa414
- Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics 4:313–327. https://doi.org/10.1162/tacl_a_00101
- Early dynamics of transmission and control of COVID-19: a mathematical modelling study. The Lancet Infectious Diseases 20:553–558. https://doi.org/10.1016/S1473-3099(20)30144-4
- SARS-CoV2: should inhibitors of the renin–angiotensin system be withdrawn in patients with COVID-19? European Heart Journal 41:1801–1803. https://doi.org/10.1093/eurheartj/ehaa235
- Clinical characteristics of refractory COVID-19 pneumonia in Wuhan, China. Clinical Infectious Diseases, ciaa270. https://doi.org/10.1093/cid/ciaa270
- Renin-angiotensin-aldosterone system inhibitors impact on COVID-19 mortality: what's next for ACE2? Clinical Infectious Diseases, ciaa627. https://doi.org/10.1093/cid/ciaa627
- Clinical and laboratory predictors of in-hospital mortality in patients with COVID-19: a cohort study in Wuhan, China. Clinical Infectious Diseases, ciaa538. https://doi.org/10.1093/cid/ciaa538
- Identification and validation of a novel clinical signature to predict the prognosis in confirmed COVID-19 patients. Clinical Infectious Diseases, ciaa793. https://doi.org/10.1093/cid/ciaa793
- An interpretable mortality prediction model for COVID-19 patients. Nature Machine Intelligence 2:283–288. https://doi.org/10.1038/s42256-020-0180-7
- Serum lactate dehydrogenase levels and Pneumocystis carinii pneumonia: diagnostic and prognostic significance. American Review of Respiratory Disease 137:796–800. https://doi.org/10.1164/ajrccm/137.4.796
- Clinical concept extraction with contextual word embedding. NeurIPS Machine Learning for Health Workshop.
Article and author information
Funding
National Science Foundation (IIS-1914792)
- Ioannis Ch Paschalidis
National Science Foundation (DMS-1664644)
- Ioannis Ch Paschalidis
National Science Foundation (CNS-1645681)
- Ioannis Ch Paschalidis
National Institute of General Medical Sciences (R01 GM135930)
- Ioannis Ch Paschalidis
Office of Naval Research (N00014-19-1-2571)
- Ioannis Ch Paschalidis
National Institutes of Health (UL54 TR004130)
- Ioannis Ch Paschalidis
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
Research partially supported by the NSF under grants IIS-1914792, DMS-1664644, and CNS-1645681, by the ONR under MURI grant N00014-19-1-2571, and by the NIH under grant R01 GM135930.
Ethics
Human subjects: The Institutional Review Board of Mass General Brigham reviewed and approved the study under Protocol #2020P001112. The Boston University IRB found the study as being Not Human Subject Research under Protocol #5570X (the BU team worked with a de-identified limited dataset).
Copyright
© 2020, Hao et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Further reading
-
- Medicine
- Microbiology and Infectious Disease
- Epidemiology and Global Health
- Immunology and Inflammation
eLife has published articles on a wide range of infectious diseases, including COVID-19, influenza, tuberculosis, HIV/AIDS, malaria and typhoid fever.
-
- Medicine
Gremlin-1 has been implicated in liver fibrosis in metabolic dysfunction-associated steatohepatitis (MASH) via inhibition of bone morphogenetic protein (BMP) signalling and has thereby been identified as a potential therapeutic target. Using rat in vivo and human in vitro and ex vivo model systems of MASH fibrosis, we show that neutralisation of Gremlin-1 activity with monoclonal therapeutic antibodies does not reduce liver inflammation or liver fibrosis. Still, Gremlin-1 was upregulated in human and rat MASH fibrosis, but expression was restricted to a small subpopulation of COL3A1/THY1+ myofibroblasts. Lentiviral overexpression of Gremlin-1 in LX-2 cells and primary hepatic stellate cells led to changes in BMP-related gene expression, which did not translate to increased fibrogenesis. Furthermore, we show that Gremlin-1 binds to heparin with high affinity, which prevents Gremlin-1 from entering systemic circulation, prohibiting Gremlin-1-mediated organ crosstalk. Overall, our findings suggest a redundant role for Gremlin-1 in the pathogenesis of liver fibrosis, which is unamenable to therapeutic targeting.