Early prediction of level-of-care requirements in patients with COVID-19

This study examined records of 2566 consecutive COVID-19 patients at five Massachusetts hospitals and sought to predict level-of-care requirements from clinical and laboratory data. Several classification methods were applied and compared against standard pneumonia severity scores. The need for hospitalization, ICU care, and mechanical ventilation was predicted with validation accuracies of 88%, 87%, and 86%, respectively. Pneumonia severity scores achieved accuracies of only 73% and 74% for ICU care and ventilation, respectively. When predictions were limited to patients with more complex disease, the ICU and ventilation models achieved accuracies of 83% and 82%, respectively. Vital signs, age, BMI, dyspnea, and comorbidities were the most important predictors of hospitalization. Opacities on chest imaging, age, admission vital signs and symptoms, male gender, admission laboratory results, and diabetes were the most important risk factors for ICU admission and mechanical ventilation. The factors identified collectively form a signature of the novel COVID-19 disease.


Introduction
As a result of the SARS-CoV-2 pandemic, many hospitals across the world have resorted to drastic measures: canceling elective procedures, switching to remote consultations, designating most beds to COVID-19, expanding Intensive Care Unit (ICU) capacity, and re-purposing doctors and nurses to support COVID-19 care. In the U.S., the CDC estimates more than 310,000 COVID-19 hospitalizations from March 1 to June 13, 2020 (CDC, 2020). Much of the modeling work related to the pandemic has focused on spread dynamics (Kucharski et al., 2020). Others have described patients who were hospitalized (Richardson et al., 2020) (n = 5700) and (Buckner et al., 2020) (n = 105), became critically ill (Gong et al., 2020) (n = 372), or succumbed to the disease (n = 1625 (Onder et al., 2020), n = 270 (Wu et al., 2020)). In data from New York City, 14.2% of patients required ICU treatment and 12.2% mechanical ventilation (Richardson et al., 2020). With such rates, the logistical and ethical implications of bed allocation and potential rationing of care delivery are immense (White and Lo, 2020). To date, while state- or country-level prognostication has developed to examine resource allocation at a mass scale, there is inadequate evidence from large cohorts on accurate prediction of disease progression at the individual patient level. A string of recent studies developed models to predict severe disease or mortality based on clinical and laboratory findings, for example (Yan et al., 2020) (n = 485), (Gong et al., 2020) (n = 372), (Bhargava et al., 2020) (n = 197), a further study (n = 208), and (Wang et al., 2020) (n = 296). In these studies, several variables such as Lactate Dehydrogenase (LDH) (Gong et al., 2020; Ji et al., 2020; Yan et al., 2020) and C-reactive protein (CRP) have been identified as important predictors. All of these studies considered relatively small cohorts and, with the exception of Bhargava et al., 2020, considered patients in China.
Although it is believed that the virus remains the same around the globe, the physiologic response to the virus and the eventual course of disease depend on multiple other factors, many of them regional (e.g. population characteristics, hospital practices, prevalence of pre-existing conditions) and not applicable universally. Triage of adult patients with COVID-19 remains challenging with most evidence coming from expert recommendations; evidence-based methods based on larger U.S.-based cohorts have not been reported (Sprung et al., 2020).
Leveraging data from five hospitals of the largest health care system in Massachusetts, we seek to develop personalized, interpretable predictive models of (i) hospitalization, (ii) ICU treatment, and (iii) mechanical ventilation, among SARS-CoV-2 positive patients. To develop these models, we built a pipeline leveraging state-of-the-art Natural Language Processing (NLP) tools to extract information from the clinical reports for each patient, employing statistical feature selection methods to retain the most predictive features for each model, and adapting a host of advanced machine-learning classification methods to develop parsimonious (hence easier to use and interpret) predictive models. We found that the more interpretable models can, for the most part, deliver similar predictive performance compared to more complex, 'black-box' models involving ensembles of many decision trees. Our results support our initial hypothesis that important clinical outcomes can be predicted with a high degree of accuracy upon the patient's first presentation to the hospital using a relatively small number of features, which collectively compose a 'signature' of the novel COVID-19 disease.

Results
We extracted data for all patients (n = 2566) who had a positive RT-PCR SARS-CoV-2 test between March 4 and April 13, 2020 at five Massachusetts hospitals included in the same health care system (Massachusetts General Hospital (MGH), Brigham and Women's Hospital (BWH), Faulkner Hospital (FH), Newton-Wellesley Hospital (NWH), and North Shore Medical Center (NSM)). The study was approved by the pertinent Institutional Review Boards.

eLife digest

The new coronavirus (now named SARS-CoV-2), which emerged in 2019 and causes the disease COVID-19, has so far infected over 35 million people worldwide and killed more than 1 million. Most people with COVID-19 have no symptoms or only mild symptoms. But some become seriously ill and need hospitalization. The sickest are admitted to an Intensive Care Unit (ICU) and may need mechanical ventilation to help them breathe. Being able to predict which patients with COVID-19 will become severely ill could help hospitals around the world manage the huge influx of patients caused by the pandemic and save lives. Now, Hao, Sotudian, Wang, Xu et al. show that computer models using artificial intelligence technology can help predict which COVID-19 patients will be hospitalized, admitted to the ICU, or need mechanical ventilation. Using data from 2,566 COVID-19 patients at five Massachusetts hospitals, Hao et al. created three separate models that can predict hospitalization, ICU admission, and the need for mechanical ventilation with more than 86% accuracy, based on patient characteristics, clinical symptoms, laboratory results and chest X-rays. Hao et al. found that the patients' vital signs, age, obesity, difficulty breathing, and underlying diseases like diabetes were the strongest predictors of the need for hospitalization. Being male, having diabetes, cloudy chest X-rays, and certain laboratory results were the most important risk factors for intensive care treatment and mechanical ventilation.
Laboratory results suggesting tissue damage, severe inflammation or oxygen deprivation in the body's tissues were important warning signs of severe disease.
The results provide a more detailed picture of the patients who are likely to suffer from severe forms of COVID-19. Using the predictive models may help physicians identify patients who appear okay but need closer monitoring and more aggressive treatment. The models may also help policy makers decide who needs workplace accommodations such as being allowed to work from home, which individuals may benefit from more frequent testing, and who should be prioritized for vaccination when a vaccine becomes available.
Demographics, pre-hospital medications, and comorbidities were extracted for each patient based on the electronic medical record. Patient symptoms, vital signs, radiologic findings, and laboratory results were recorded at their first hospital presentation (either clinic or emergency department) before testing positive for SARS-CoV-2. A total of 164 features were extracted for each patient. ICU admission and mechanical ventilation were determined for each patient. Complete blood count values were considered as absolute counts. Representative statistics comparing hospitalized, ICU admitted, and mechanically ventilated patients are provided in Table A1 (Appendix).  Table A2 (Appendix) reports how patients were distributed among the five hospitals.
Among the 2566 patients with a positive test, 930 (36.2%) were hospitalized. Among the hospitalized, 273 (29.4% of the hospitalized) required ICU care, of which 217 (79.5%) required mechanical ventilation. The mean age over all patients was 51.9 years (SD: 18.9 years) and 45.6% were male.

Hospitalization
The mean age of hospitalized patients was 62.3 years (SD: 18 years) and 55.3% were male. We employed linear and non-linear classification methods for predicting hospitalizations. Non-linear methods included random forests (RF) (Breiman, 2001) and XGBoost (Chen and Guestrin, 2016). Linear methods included support vector machines (SVM) (Cortes and Vapnik, 1995) and Logistic Regression (LR); each linear method used either ℓ1- or ℓ2-norm regularization and we report the best-performing flavor of each model.
Results are reported in Table 1. We report the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and the Weighted-F1 score, both computed out-of-sample (in a test set not used for training the model). As we detail under Methods, we used two validation strategies. The 'Random' strategy randomly split the patients into a training and a test set and was repeated five times; from these five splits we report the average and the standard deviation of the test performance. The 'BWH' strategy trained the models on MGH, FH, NWH, and NSM patients, and evaluated performance on BWH patients.
The hospitalization models used symptoms, pre-existing medications, comorbidities, and patient demographics. Laboratory results and radiologic findings were not considered since these were not available for most non-hospitalized patients. Full models used all (106) variables retained after several pre-processing steps described in Materials and methods. Applying the statistical variable selection procedure described in the Appendix (specifically, eliminating variables with a p-value exceeding 0.05) yields a model with 74 variables. To provide a more parsimonious, highly interpretable, and easier to implement model, we used recursive feature elimination (see Appendix) to select a model with only 11 variables. The best model using the random validation approach has an AUC of 88%, while the best parsimonious (linear) model has an AUC of 83% and is easier to interpret and implement. Validation on the BWH patients yields an AUC of 84% for the parsimonious model. Table 1 also reports the 11 variables in the parsimonious LR model, including their LR coefficients, and a binarized version of this model as described in Materials and methods. The most important variables associated with hospitalization were: oxygen saturation, temperature, respiratory rate, age, pulse, blood pressure, a comorbidity of adrenal insufficiency, BMI, prior transplantation, dyspnea, and kidney disease.
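The model-fitting workflow described above can be sketched as follows, using scikit-learn and synthetic data in place of the actual patient features (the sample and feature counts mirror the hospitalization model, but all data here are illustrative, not the study's):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 106-variable hospitalization feature matrix.
X, y = make_classification(n_samples=2000, n_features=106, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize so LR coefficients are comparable across features.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# One interpretable linear model (L1-regularized LR) and one ensemble (RF),
# compared on held-out AUC, as in the paper's model comparison.
models = {
    "LR-L1": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```

On real data, the sparse LR model additionally exposes which variables drive each prediction through its nonzero coefficients.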
Additionally, we assessed the role of pre-existing ACE inhibitor (ACEI) and angiotensin receptor blocker (ARB) medications by adding these variables into the parsimonious binarized model, while controlling for additional relevant variables (hypertension, diabetes, and arrhythmia comorbidities and other hypertension medications). We found that while ARBs are not a factor, ACEIs reduce the odds of hospitalization to roughly three-quarters, on average, controlling for other important factors, such as age, hypertension, and related comorbidities associated with the use of these medications.

ICU admission
The mean age of ICU admitted patients was 63.3 years (SD: 15.1 years) and 63% were male. The ICU and ventilation prediction models used the features considered for the hospitalization model, as well as laboratory results and radiologic findings. For these models, we excluded patients who required immediate ICU admission or ventilation (defined as within 4 hr from initial presentation). This was done in order to focus on patients for whom triaging is challenging and risk prediction would be beneficial. There were 2513 and 2525 patients remaining for the ICU and the mechanical ventilation prediction models, respectively.
For the model including 2513 patients (Table 2), we first developed a model using all 130 variables retained after pre-processing, then employed statistical variable selection to retain 56 of the variables, and then applied recursive feature elimination with LR to select a parsimonious model which uses only 10 variables: opacity observed in a chest scan, respiratory rate, age, fever, male gender, albumin, anion gap, oxygen saturation, LDH, and calcium. In addition, we generated a binarized version of the parsimonious model. The parsimonious model for all 2513 patients has an AUC of 86%, almost as high as the model with all 130 features. For comparison purposes against well-established scoring systems, we implemented two commonly used pneumonia severity scores, CURB-65 (Lim et al., 2003) and the Pneumonia Severity Index (PSI) (Fine et al., 1997). Predictions based on the PSI and CURB-65 scores have AUCs of 73% and 67%, respectively.

Note on the tables: the values inside the parentheses refer to the standard deviation of the corresponding metric. Random refers to test set results from the five random training/test splits. BWH refers to training on the four other hospitals and testing on data from BWH. SVM-L1 and LR-L1 refer to the ℓ1-norm regularized SVM and LR models. For the parsimonious model, we list the LR coefficients of each variable (Coef), the correlation of the variable with the outcome (Y-corr), the mean of the variable in the positive class (Y1-mean; hospitalized, in the case of Table 1), and the mean of the variable in the negative class (Y0-mean). Binary Coef denotes the coefficient of the variables in the binarized model. We report the corresponding odds ratio (OR) and the 95% confidence intervals (CI). Thresholds used for the binarized model are provided in the Appendix.
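For reference, CURB-65 is a simple additive score. A sketch of its standard definition (one point per criterion, urea in SI units) is below; the exact inputs fed to it in this study are described in the cited sources, so this function is illustrative:

```python
def curb65(confusion, urea_mmol_l, resp_rate, sbp, dbp, age):
    """CURB-65 pneumonia severity score (0-5): one point each for
    confusion, urea > 7 mmol/L, respiratory rate >= 30/min,
    SBP < 90 or DBP <= 60 mmHg, and age >= 65 years."""
    return (int(confusion)
            + int(urea_mmol_l > 7)
            + int(resp_rate >= 30)
            + int(sbp < 90 or dbp <= 60)
            + int(age >= 65))

# Example: 78-year-old, urea 9 mmol/L, RR 32, BP 85/55, no confusion -> score 4
print(curb65(False, 9.0, 32, 85, 55, 78))
```

A fixed additive rule like this cannot adapt to COVID-19-specific risk factors (e.g., LDH, CRP, chest opacities), which is one reason the learned models outperform it here.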
We also developed a model for a more restrictive set of patients. Specifically, the number of missing lab values for some patients is substantial. Given the importance of LDH and CRP, as revealed by our models, the more restricted patient set contains 669 patients with non-missing LDH and CRP values. After removing patients who required intubation or ICU admission within 4 hr of hospital presentation, we included 628 patients and 635 patients for the restricted ICU admission and ventilation models, respectively. The best restricted model for the 628 patients (Table 3) is the nonlinear XGBoost model using 29 statistically selected features with an AUC of 83%, with a linear parsimonious LR model close behind (AUC 80%). An RF model using all variables yields an AUC of 77% when tested on BWH data. PSI- and CURB-65-based models have AUCs below 59%.

Mechanical ventilation
The mean age of patients requiring mechanical ventilation was 63.3 years (SD: 14.7 years) and 63.6% were male. Again, we excluded patients who were intubated within 4 hr of their hospital admission.
For the model including 2525 patients (Table 4), we used statistical feature selection to select 55 variables, and recursive feature elimination with LR to select a parsimonious model with only eight variables, including lung opacities, albumin, fever, and respiratory rate.

Time period between ICU/ventilation model prediction and corresponding outcomes

Table 6 reports the mean and the median time interval (in hours) between hospital admission time and ICU/ventilation outcomes. Specifically, we report statistics for ICU admission or intubation outcomes from the correct ICU/intubation predictions made by our models trained on four hospitals (MGH, NWH, NSM, FH) and applied to BWH patients (both the models making predictions for all patients and the restricted models). As we have noted earlier, our models use the lab results closest to admission (either on admission date or the following day). We also report the time interval between the last lab result used by the model and the corresponding ICU/intubation outcome.

Discussion
We developed three models to predict need for hospitalization, ICU admission, and mechanical ventilation in patients with COVID-19. The prediction models are not meant to replace clinicians' judgment for determining level of care. Instead, they are designed to assist clinicians in identifying patients at risk of future decompensation. Patient vital signs were the most important predictors of hospitalization. This is expected as vital signs reflect underlying disease severity, the need for cardiorespiratory resuscitation, and the risk of future decompensation without adequate medical support. Older age and BMI were also important predictors for hospitalization. Age has been recognized as an important factor associated with severe COVID-19 in previous series (Grasselli et al., 2020; Guan et al., 2020; Richardson et al., 2020). However, it is not known whether age itself or the presence of comorbidities places patients at risk for severe disease. Our results demonstrate that age is a stronger predictor of severe COVID-19 than a host of underlying comorbidities. In terms of patient comorbidities, adrenal insufficiency, prior transplantation, and chronic kidney disease were strongly associated with need for hospitalization. Diabetes mellitus was associated with a need for ICU admission and mechanical ventilation, which might be due to its detrimental effects on immune function.
For the ICU and ventilation prediction models screening all at-risk (COVID-19-positive) patients, opacities observed on chest imaging, age, and male gender emerge as important variables. Males have been found to have worse in-hospital outcomes in other studies as well (Palaiodimos et al., 2020).
We also identified several routine laboratory values that are predictive of ICU admission and mechanical ventilation. Elevated serum LDH, CRP, anion gap, and glucose, as well as decreased serum calcium, sodium, and albumin were strong predictors of ICU admission and mechanical ventilation. LDH is an indicator of tissue damage and has been found to be a marker of severity in P. jirovecii pneumonia (Zaman and White, 1988). Along with CRP, it was among the two most important predictors of ICU admission and ventilation in the parsimonious model among patients who had LDH and CRP measurements on admission. This finding is consistent with previous reports identifying LDH as an important prognostic factor (Gong et al., 2020; Ji et al., 2020; Mo et al., 2020; Yan et al., 2020). In addition, lower serum calcium is associated with cell lysis and tissue destruction, as it is often seen as part of the tumor lysis syndrome. Elevated serum anion gap is a marker of metabolic acidosis and ischemia, suggesting that tissue hypoxia and hypoperfusion may be components of severe disease.
For all three prognostic models we developed, predicting hospitalizations, ICU care, and mechanical ventilation, the AUC ranges within 86-88%, which indicates strong predictive power. Interestingly, we can achieve an AUC within 85-86% for ICU and ventilation prediction with a parsimonious linear model utilizing no more than 10 variables. In all cases, we can also develop a parsimonious model with binarized variables using medically suggested normal and abnormal variable thresholds. These binarized models have similar performance to their continuous counterparts. The ICU and ventilation models using all patients are very accurate but, arguably, make a number of 'easier' decisions, since more than 60% of the patients are never hospitalized. Many of these patients are younger, healthy, and likely present with mild-to-moderate symptoms. To test the robustness of the models to patients with potentially more 'complex' disease, we developed ICU and ventilation models on a restricted set of patients: the subset of patients who were hospitalized and for whom most of the crucial labs (specifically CRP and LDH, which emerged as important in our models) are available. The best AUC for these models drops, but not below 82%, which indicates robustness of the models even when dealing with arguably harder-to-assess cases. LDH, CRP, calcium, lung opacity, anion gap, SpO2, sodium, and a comorbidity of insulin-controlled diabetes appear as the most significant variables for these patients. Interestingly, the corresponding binarized models have about 10% lower AUC; apparently, for the more severely ill, clinical variables deviate substantially from normal and knowing their exact values is crucial.
The models have been validated with two different approaches, using random splits of the data into training and testing, as well as training on some hospitals and testing at a different hospital. Performance metrics are relatively consistent with these two approaches. We also compared the models against standard pneumonia severity scores, PSI and CURB-65, establishing that our models are significantly stronger, which highlights the different clinical profile of COVID-19. We also examined how far in advance of the ICU or ventilation outcomes our models are able to make a prediction. Of course, this is not entirely in our control; it depends on the state in which patients are admitted and how soon their condition deteriorates to require ICU admission and/or ventilation. Table 6 reports the corresponding statistics. For example, the restricted ICU and ventilation models are making a correct prediction upon admission (using the lab results closest to that time) for outcomes that on average occur 38 and 35 hr later, respectively.
To further test the accuracy of the restricted ICU and ventilation models well in advance of the corresponding event, we considered an extended BWH test set (adding 11 more patients) and computed the accuracy of the models when the test set was restricted to patients whose outcome (ICU admission or ventilation) was more than x hours after the admission lab results based on which the prediction was made, with x being 6 hr, or 12 hr, or 18 hr, or 24 hr, or even 48 hr. The ICU model reaches an AUC of 87% and a weighted F1-score of 86% at x = 18 hr. The ventilation model reaches an AUC of 64% and an F1-score of 72% at x = 48 hr. These results demonstrate that the predictive models can indeed make predictions well into the future, when physicians would be less certain about the course of the disease and when there is potentially enough time to intervene and improve outcomes.
A manual review of the predictions by the models indicates that they performed well at predicting future ICU admissions for patients who presented with mild disease several days before ICU admission was necessary. Such patients were hemodynamically stable and had minimal oxygen requirements on the floor, before clinical deterioration necessitated ICU admission. We identified several such patients. A typical case is that of a 51-year-old male with a history of hypertension, obesity, and insulin-dependent type 2 diabetes mellitus, who presented with a 3-day history of dyspnea, cough and myalgias. In the emergency department, he was hemodynamically stable, saturating at 96-97% on 2 L of nasal cannula. The patient was admitted to the floor and did well for 3 days, saturating at 93-96% on room air. On the fourth day of hospitalization, he had increasing oxygen requirements and the decision was made to transfer him to the ICU. He was intubated and ventilated for 30 days. Our prediction models accurately predicted at the time of his presentation that he would eventually require ICU admission and mechanical ventilation. This prediction was based on such variables as an elevated LDH (241 U/L) and the presence of insulin-dependent diabetes mellitus. Another such case is that of a 59-year-old male without a significant prior medical history who presented with 2 days of dyspnea, nausea, and diarrhea. At the emergency department, he was tachycardic at 110 beats per minute and saturating at 96% on room air, and the patient was admitted. For 2 days, the patient was hemodynamically stable, saturating at 94-97% on room air. On the third day of hospitalization, he had increasing oxygen requirements, eventually requiring transfer to the ICU. He was intubated and ventilated for the next 14 days. Our prediction model predicted the patient's decompensation at his presentation, due to elevations in LDH (348 U/L) and CRP (102.3 mg/L).
We also considered the role of ACEIs and ARBs and their potential association with the outcomes. It has been speculated that ACEIs may worsen COVID-19 outcomes because they upregulate the expression of ACE2, which the virus targets for cell entry. No such evidence has been reported in earlier studies (Kuster et al., 2020; Patel and Verma, 2020). In fact, a smaller study (n = 1128 vs. 2566 in our case) reported a beneficial effect, and Rossi et al., 2020 warn of potential harmful effects of discontinuing ACEIs or ARBs due to COVID-19. Our hospitalization model suggests that ACEIs do not increase hospitalization risk and may slightly reduce it (OR 95% CI is (0.52, 1.04) with a mean of 0.73). In the ICU and ventilation models, the role of these two medications is too statistically weak to observe any meaningful association.
The models we derived can be used for a variety of purposes: (i) guiding patient triage to appropriate inpatient units, (ii) guiding staffing and resource planning logistics, and (iii) understanding patient risk profiles to inform future policy decisions, such as targeted risk-based stay-at-home restrictions, testing, and vaccination prioritization guidelines once a vaccine becomes available.
Calculators implementing the parsimonious models corresponding to each of Tables 1, 2, 3, 4, and 5 have been made available online.

Materials and methods

Data extraction
Natural Language Processing (NLP) was used to extract patient comorbidities (see Appendix for details), pre-existing medications, admission vital signs, hospitalization course, ICU admission, and mechanical intubation.

Pre-processing
The categorical features were converted to numerical by 'one-hot' encoding. Each categorical feature, such as gender and race, was encoded as an indicator variable for each category. Features were standardized by subtracting the mean and dividing by the standard deviation.
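These two steps can be sketched with pandas and scikit-learn (the data frame and column names are toy stand-ins for the extracted features):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Toy frame standing in for the extracted patient features.
df = pd.DataFrame({"age": [34, 71, 58],
                   "gender": ["F", "M", "M"],
                   "race": ["White", "Asian", "Black"]})

# 'One-hot' encoding: one indicator column per category.
df = pd.get_dummies(df, columns=["gender", "race"])

# Standardize: subtract the mean and divide by the standard deviation.
df = pd.DataFrame(StandardScaler().fit_transform(df), columns=df.columns)
print(sorted(df.columns))
```

After standardization every column has zero mean and unit variance, which is what makes the LR coefficients directly comparable later on.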
Several pre-processing steps, including variable imputation, outlier elimination, and removal of highly correlated variables were undertaken (see Appendix). After completing these procedures, 106 variables for each patient remained to be used by the hospitalization model. For the ICU and ventilation prediction models, we added laboratory results and radiologic findings. We removed variables with more than 90% missing values out of the roughly 2500 patients retained for these models; the remaining missing values were imputed as described above. These pre-processing steps retained 130 variables for the ICU and ventilation models.
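A compact sketch of these pre-processing filters follows. Median imputation is shown for concreteness only; the paper's actual imputation and outlier schemes are described in its Appendix:

```python
import numpy as np
import pandas as pd

def preprocess(df, max_missing=0.9, corr_cut=0.8):
    # Drop variables with more than 90% missing values.
    df = df.loc[:, df.isna().mean() <= max_missing]
    # Impute remaining gaps (median shown as a simple placeholder scheme).
    df = df.fillna(df.median())
    # Drop one variable from each highly correlated pair (|corr| > 0.8),
    # scanning only the upper triangle so each pair is seen once.
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    drop = [c for c in upper.columns if (upper[c] > corr_cut).any()]
    return df.drop(columns=drop)
```

For example, a column that is entirely missing is removed by the first filter, and of two perfectly correlated columns only one survives the last filter.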

Classification methods
We employed nonlinear ensemble methods including Random forests (RF) (Breiman, 2001) and XGBoost (Chen and Guestrin, 2016). We also employed 'custom' linear methods which yield interpretable models; specifically, support vector machines (SVM) (Cortes and Vapnik, 1995) and Logistic Regression (LR). In both cases, the variants we computed were robust to noise and the presence of outliers (Chen and Paschalidis, 2018), using proper regularization. LR, in addition to a prediction, provides the likelihood associated with the predicted outcome, which can be used as a confidence measure in decision making. Further details on these methods are in the Appendix.
For each outcome, we used the statistical feature selection and recursive feature elimination procedures described in the Appendix to develop an LR parsimonious model. The LR coefficients are comparable since the variables are standardized. Hence, a larger absolute coefficient indicates that the corresponding variable is a more significant predictor. Positive (negative) coefficients imply positive (negative) correlation with the outcome. We also developed a version of this model by converting all continuous variables into binary variables, using medically motivated thresholds (see Appendix). We report the coefficients of the 'binarized' model and the implied odds ratio (OR), representing how the odds of the outcome are scaled by having a specific variable being abnormal vs. normal, while controlling for all other variables in the model.
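The link between a binarized model's coefficients and the reported odds ratios can be illustrated on synthetic data (the feature names and effect sizes below are hypothetical; the key point is that exponentiating a logistic coefficient gives the OR for abnormal vs. normal):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binarized design: each column is 1 if the variable is 'abnormal'.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))       # e.g. [low SpO2, fever, tachypnea]
logit = -2.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1]  # known ground-truth effects
y = rng.random(500) < 1 / (1 + np.exp(-logit))

lr = LogisticRegression().fit(X, y)
# exp(coefficient) = odds ratio for abnormal vs. normal, holding the rest fixed.
for name, b in zip(["low SpO2", "fever", "tachypnea"], lr.coef_[0]):
    print(f"{name}: OR = {np.exp(b):.2f}")
```

An OR above 1 means the abnormal finding raises the odds of the outcome; an OR below 1 (as reported for ACEIs) lowers them.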

Outcomes and performance metrics
Model performance metrics included the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and the Weighted-F1 score. The ROC plots the true positive rate (a.k.a. recall or sensitivity) against the false positive rate (equal to one minus the specificity). We optimized algorithm parameters to maximize AUC.
The F1 score is the harmonic mean of precision and recall. Precision (or positive predictive value) is defined as the ratio of true positives over true and false positives. The Weighted-F1 score is computed by weighting the F1-score of each class by the number of patients in that class.
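A small worked example of both metrics using scikit-learn (the labels and scores below are made up for illustration):

```python
from sklearn.metrics import f1_score, roc_auc_score

y_true  = [1, 1, 0, 0, 0, 1]
y_score = [0.9, 0.45, 0.4, 0.2, 0.6, 0.8]   # predicted probabilities
y_pred  = [int(s >= 0.5) for s in y_score]  # thresholded labels

# AUC scores ranking quality: the fraction of positive/negative pairs
# in which the positive receives the higher score.
print(roc_auc_score(y_true, y_score))

# Weighted-F1: per-class F1 averaged with weights equal to class sizes.
print(f1_score(y_true, y_pred, average="weighted"))
```

Note that AUC uses the raw scores (no threshold), while the F1 score depends on the chosen classification threshold.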

Model validation
The data were split into a training (80%) and a test set (20%). Algorithm parameters were optimized on the training (derivation) set using fivefold cross-validation. Performance metrics were computed on the test set. This process was repeated five times, each time with a random split into training/testing sets. In columns labeled as Random in Tables 1, 2, 3, 4, 5, we report the average (and standard deviation) of the test performance metrics over the five random splits. We also performed a different type of validation. We trained the models on MGH, FH, NWH, and NSM patients, and evaluated performance on BWH patients. These results are reported under the columns BWH in the tables.
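One of the five random splits can be sketched as follows on synthetic data (the hospital-based 'BWH' validation would instead partition rows by hospital rather than at random):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# 80/20 split; hyperparameters are tuned by 5-fold CV on the
# training (derivation) set only, never touching the test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1, 10]},
                      scoring="roc_auc", cv=5)
search.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, search.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.2f}")
```

Repeating this with five different `random_state` values and averaging the test AUCs reproduces the 'Random' columns' mean-and-standard-deviation reporting.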

Funding: R01 GM135930 (Ioannis Ch. Paschalidis). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Data availability
Source code for processing patient data is provided together with the submission. Due to HIPAA restrictions and Data Use Agreements, we cannot make the original patient data publicly available. Interested parties may submit a request to obtain access to de-identified data to the authors. The authors would request pertinent IRB approval to make available a de-identified version of the data, stripped of any protected health information as specified under HIPAA rules.

Natural Language Processing (NLP) of clinical notes

The de-identified data consisted of demographics, lab results, history and physical examination (H and P) notes, progress notes, radiology reports, and discharge notes. We extracted all variables needed for each patient and built a profile using NLP tools. There were two main difficulties. First, many important features such as vitals and medical history (prior conditions, medications) were not in a table format and were extracted from the report text using different regular expression templates, post-processing the results to eliminate errors due to non-uniformity in the reports (e.g., a line break may separate a date from the field indicating the type). Second, negations in the text must be recognized. Simply recognizing a medical term such as 'cough' or 'fever' is not sufficient, since the report may include 'Patient denies fever or cough'. We applied multiple NLP schemes to overcome these difficulties. Regular expression matching is the basic strategy we used to extract features such as body temperature values (with or without a decimal, followed by °C/°F) and blood pressure values ('xx(x)/xx(x)', even if they are mixed up with a date 'mm/dd/yyyy' having similar symbols). Extracting pulse and respiratory rates is challenging since it is easy to mismatch the corresponding values; thus, we also matched the indicators 'RR:' (respiratory rate) or 'P' (pulse rate) in the vicinity of the number.
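A minimal sketch of this kind of regular-expression matching follows; the note text and patterns are illustrative, not the production templates, but they show the role of the 'BP', 'RR:', and 'P' indicators in disambiguating numbers:

```python
import re

note = "Vitals: T 101.3F, BP 128/67 on 03/28/2020, RR: 22, P 96."

# Temperature: digits with optional decimal, followed by the unit.
temp = re.search(r"\bT\s*(\d{2,3}(?:\.\d)?)\s*F", note)
# Blood pressure: require the 'BP' indicator so the date 03/28/2020
# (which also contains a slash) is not matched by mistake.
bp = re.search(r"BP\s*(\d{2,3})/(\d{2,3})\b", note)
# Respiratory rate and pulse: anchor on the 'RR:'/'P' indicators so the
# two values are not swapped; \b keeps 'P' from matching inside 'BP'.
rr = re.search(r"RR:?\s*(\d{1,2})", note)
pulse = re.search(r"\bP\s*(\d{2,3})", note)

print(temp.group(1), bp.group(1), bp.group(2), rr.group(1), pulse.group(1))
```

In practice each template is followed by post-processing (e.g., plausibility ranges) to catch the layout irregularities mentioned above.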
To extract symptoms in H and P notes and findings in radiology reports, we used two NLP models: a Named Entity Recognition (NER) model, and a Natural Language Inference (NLI) model (Zhu et al., 2018). The first model aims at finding all the symptoms/disease named entities in the report. The key motivation of NER is that it is hard to list all possible disease names and search for them in each sentence; instead, NER models use the context to infer the possible targets, thus, even abbreviations like 'N/V' will be recognized. We used the spaCy NER model (Kiperwasser and Goldberg, 2016) trained on the BC5CDR corpus. The NLI model is used to detect negations, by checking if a sentence as a premise supports the hypothesis that the patient truly has the disease/symptoms in it. We applied a fine-tuned RoBERTa model (Liu et al., 2019) to perform NLI.
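The negation problem that the NLI model solves can be illustrated with a much simpler, NegEx-style rule: treat a finding as negated if a negation cue appears among the few tokens preceding it. This sketch is a stand-in for illustration only; the study itself used a fine-tuned RoBERTa NLI model, and the cue list here is an assumption:

```python
# Small, illustrative cue list (not the study's method).
NEGATION_CUES = {"denies", "no", "without", "not", "negative"}

def has_finding(sentence, term):
    """Return True if `term` appears in `sentence` and is asserted
    (not negated). Simplified rule: a negation cue among the five
    tokens preceding the term flips the assertion."""
    tokens = sentence.lower().replace(",", " ").split()
    term_l = term.lower()
    if term_l not in tokens:
        return False
    idx = tokens.index(term_l)
    window = tokens[max(0, idx - 5):idx]
    return not any(t in NEGATION_CUES for t in window)
```

A rule like this fails on long-range or implicit negations ('no evidence of infiltrate or consolidation'), which is precisely why a sentence-level inference model is preferable.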
For medication extraction, we used the Unified Medical Language System (UMLS) (UMLS, 2019), which comprehensively contains medical terms and their relationships. We added a medication to the patient's prior-to-admission medication list only if the medication or brand name was found in the UMLS 'Pharmacologic Substance' or 'Clinical Drug' category.
Symptoms, medical history, and prior medications from H and P notes are often described using different terminology or acronyms that imply the same condition or medication (e.g., dyspnea and SOB). We manually mapped these non-unique descriptors to distinct categories. An appropriate classification was also used for comorbidities, prior medications, radiological findings, and laboratories. The entire list of variables extracted and used in the analysis is provided in Appendix 1-table 3.
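The manual mapping of non-unique descriptors to distinct categories amounts to a lookup table. The entries below are hypothetical, abbreviated examples; the full set of variables is in Appendix 1-table 3:

```python
# Hypothetical (abbreviated) mapping from descriptors and acronyms
# to canonical categories.
SYNONYMS = {
    "sob": "dyspnea",
    "shortness of breath": "dyspnea",
    "dyspnea": "dyspnea",
    "n/v": "nausea/vomiting",
    "nausea": "nausea/vomiting",
    "dm": "diabetes",
    "diabetes mellitus": "diabetes",
}

def canonicalize(terms):
    """Map extracted descriptors to distinct categories,
    dropping terms outside the mapping."""
    return sorted({SYNONYMS[t.lower()] for t in terms if t.lower() in SYNONYMS})
```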
Any continuous variable value higher than the 99th percentile or lower than the 1st percentile was replaced with the 99th or 1st percentile, respectively. Finally, to avoid collinearity, for each pair of highly correlated variables (absolute correlation coefficient higher than 0.8), we removed one of the two.
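The percentile clipping and correlation pruning described above can be sketched as follows; the function name and greedy pair-pruning order are assumptions for illustration:

```python
import numpy as np

def winsorize_and_prune(X, clip=(1, 99), corr_thresh=0.8):
    """Clip each column of X to its 1st/99th percentiles, then drop one
    variable from every pair with |correlation| above corr_thresh.
    X: 2-D array (samples x features). Returns (X_clean, kept_indices)."""
    lo, hi = np.percentile(X, clip, axis=0)
    Xc = np.clip(X, lo, hi)
    corr = np.corrcoef(Xc, rowvar=False)
    keep = []
    for j in range(Xc.shape[1]):
        # Keep column j only if it is not highly correlated
        # with any column already kept.
        if all(abs(corr[j, k]) <= corr_thresh for k in keep):
            keep.append(j)
    return Xc[:, keep], keep
```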
For each model, we used a variety of statistical feature selection approaches. Specifically, we first calculated a p-value for each variable as described earlier and removed all variables with a p-value exceeding 0.05. Further, we used (ℓ1-norm) regularized LR and performed recursive feature elimination as follows. We ran LR and obtained the coefficients of the model. We then eliminated the variable with the smallest absolute coefficient and re-ran LR to obtain a new model. We kept iterating in this fashion, selecting the model that maximizes a metric equal to the mean AUC minus its standard deviation in a validation dataset.
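The elimination loop above can be sketched generically. In this sketch, a plain least-squares linear model and validation accuracy stand in for the paper's ℓ1-regularized logistic regression and AUC-based criterion, so the scoring details are assumptions:

```python
import numpy as np

def recursive_elimination(X_tr, y_tr, X_val, y_val):
    """Backward elimination: fit, drop the feature with the smallest
    absolute coefficient, refit, and return the subset scoring best
    on the validation set."""
    feats = list(range(X_tr.shape[1]))
    best_score, best_feats = -np.inf, list(feats)
    while feats:
        # Fit a linear model on the current feature subset.
        w, *_ = np.linalg.lstsq(X_tr[:, feats], y_tr, rcond=None)
        # Validation accuracy of thresholded predictions.
        score = np.mean((X_val[:, feats] @ w > 0.5) == y_val)
        if score >= best_score:
            best_score, best_feats = score, list(feats)
        feats.pop(int(np.argmin(np.abs(w))))  # eliminate the weakest feature
    return best_feats, best_score
```

Preferring the smaller subset on ties (the `>=` comparison) nudges the procedure toward parsimonious models.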

Thresholds for the binarized models
Thresholds used for generating binarized versions of our parsimonious models are reported in Appendix 1-table 5. In these models, a variable is set to 1 if the corresponding continuous variable is abnormal and to 0 otherwise.

Standard pneumonia severity scores
For comparison purposes, we implemented two commonly used pneumonia severity scores: CURB-65 (Lim et al., 2003) and the Pneumonia Severity Index (PSI) (Fine et al., 1997). CURB-65 uses a mental status assessment, Blood Urea Nitrogen (BUN), respiratory rate, blood pressure, and an indicator of age 65 or older. PSI uses similar information, a host of laboratory values, and comorbidities. From CURB-65, we did not score mental status, since we did not have such information. From PSI, we did not use mental status or whether the patient was a nursing home resident. Given that laboratory values are used, we computed these scores to predict ICU care and ventilator use. In each case, we computed the corresponding score and then optimized a threshold over the training set using cross-validation in order to make the prediction. We used these thresholds to evaluate the performance of each scoring system on the test set.
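The modified CURB-65 score (with the confusion/mental-status component omitted, as described above) is a simple point count over the standard published cut-offs; the function signature is an assumption:

```python
def curb65_modified(bun_mg_dl, resp_rate, sys_bp, dia_bp, age):
    """Modified CURB-65, omitting the confusion component: one point
    each for BUN > 19 mg/dL (urea > 7 mmol/L), respiratory rate >= 30,
    systolic BP < 90 or diastolic BP <= 60, and age >= 65."""
    return (int(bun_mg_dl > 19)
            + int(resp_rate >= 30)
            + int(sys_bp < 90 or dia_bp <= 60)
            + int(age >= 65))
```

The resulting 0-4 score is then compared against a threshold tuned by cross-validation on the training set, as described above.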

Training/Derivation Model Performance
Performance metrics for the various models on the training/derivation cohorts are reported in Appendix 1-tables 6, 7, 8, 9, 10. These are computed both for the random splitting of the data into training and testing sets (in this case, we provide the mean and standard deviation over the five random splits) and for the training dataset formed from patients at MGH, FH, NWH, and NSM (these results appear under the column named BWH in Appendix 1-tables 6, 7, 8, 9, 10, simply to match the terminology of Tables 1, 2, 3, 4, 5).
Appendix 1-table 6. Derivation cohort performance for the hospitalization prediction model. Abbreviations and metrics reported are as in Table 1.