Examining the perceived impact of the COVID-19 pandemic on cervical cancer screening practices among clinicians practicing in Federally Qualified Health Centers: A mixed methods study

  1. Lindsay Fuzzell (corresponding author)
  2. Paige Lake (corresponding author)
  3. Naomi C Brownstein
  4. Holly B Fontenot
  5. Ashley Whitmer
  6. Alexandra Michel
  7. McKenzie McIntyre
  8. Sarah L Rossi
  9. Sidika Kajtezovic
  10. Susan T Vadaparampil
  11. Rebecca Perkins
  1. H. Lee Moffitt Cancer Center & Research Institute, Health Outcomes and Behavior, United States
  2. Medical University of South Carolina, Public Health Sciences, United States
  3. University of Hawaii at Manoa, Nancy Atmospera-Walch School of Nursing, United States
  4. Boston University, Chobanian & Avedisian School of Medicine, United States
  5. H. Lee Moffitt Cancer Center & Research Institute, Office of Community Outreach, Engagement, and Equity, United States

Abstract

Background:

The COVID-19 pandemic led to reductions in cervical cancer screening and colposcopy. Therefore, in this mixed methods study we explored perceived pandemic-related changes to cervical cancer screening practices in federally qualified health centers (FQHCs).

Methods:

Between October 2021 and July 2022, we conducted a national web survey of clinicians (physicians and advanced practice providers) who performed cervical cancer screening in FQHCs in the United States during the post-acute phase of the COVID-19 pandemic, along with qualitative interviews of a subset of respondents via video conference, to examine perceived changes in cervical cancer screening practices during the pandemic.

Results:

A total of 148 clinicians completed surveys; a subset (n=13) completed qualitative interviews. Most (86%) reported reduced cervical cancer screening early in the pandemic, and 28% reported continued reduction in services at the time of survey completion (October 2021–July 2022). Nearly half (45%) reported staff shortages impacting their ability to screen or track patients. Compared to clinicians in Obstetrics/Gynecology/Women’s Health, those in family medicine and other specialties more often reported screening below pre-pandemic levels. Most (92%) felt that screening using HPV self-sampling would be very or somewhat helpful for addressing screening backlogs. Qualitative interviews highlighted the impacts of staff shortages and strategies for improvement.

Conclusions:

Findings highlight that in late 2021 and early 2022, many clinicians in FQHCs reported reduced cervical cancer screening as well as pandemic-related staffing shortages that impacted screening and follow-up. If not addressed, reduced screening among underserved populations could worsen cervical cancer disparities in the future.

Funding:

This study was funded by the American Cancer Society, which had no role in the study’s design, conduct, or reporting.

Editor's evaluation

This US study presents findings from an online survey and interviews of healthcare providers in areas associated with cervical screening provision during the post-acute phase of the pandemic. The findings are valuable as they provide insights into a range of areas, from healthcare characteristics to screening barriers and HPV self-sampling. The evidence supporting the claims of the authors is solid. The work will be of interest to public health scientists and a cancer prevention and control audience.

https://doi.org/10.7554/eLife.86358.sa0

Introduction

Cervical cancer prevention via screening and treatment of pre-invasive disease has dramatically reduced cervical cancer incidence and mortality rates (Sawaya and Huchko, 2017). However, lack of access to screening and treatment services results in geographic, racial/ethnic, and socioeconomic disparities in cervical cancer incidence and mortality (Buskwofie et al., 2020; Vu et al., 2018; Chen et al., 2012; Akers et al., 2007). A recent study of cervical cancer patients showed that over half were either never screened or were overdue for screening (Benard et al., 2021). Lack of screening remains the most common reason why individuals develop cervical cancer in the United States (US) and worldwide. In the US, cervical cancer screening is considered a critical element of preventive healthcare, and the addition of Human Papillomavirus (HPV) testing, along with Pap testing, can improve prevention programs by allowing longer screening intervals for patients testing negative, while providing more precise risk estimates to allow evidence-based management of patients with abnormal screening results (Schiffman et al., 2011; Leinonen et al., 2009; Mayrand et al., 2007).

Since the COVID-19 pandemic began in the US in 2020, however, cancer screenings have decreased for many cancer types (Chen et al., 2021; Poljak et al., 2021; Amram et al., 2022; Smith and Perkins, 2022), with cervical cancer screening decreasing more than others (Miller et al., 2021; Mayo et al., 2021; Fedewa et al., 2022). Early in the pandemic, patient fear of contracting COVID-19 and reduction in non-urgent medical services impacted the ability to perform cervical cancer screening and colposcopy (Massad, 2022). Federally qualified health centers (FQHCs) in the US are government-funded health centers or clinics that provide care to medically underserved populations. Maintaining cancer screening in these and other safety net facilities is critical because they serve patients at the highest risk for cervical cancer: publicly insured/uninsured, immigrant, and historically marginalized populations (Adams et al., 2020; Fisher-Borne et al., 2021). A survey of 22 FQHCs that conducted cervical cancer screening in 2020 found that 90% reported cancelling cervical cancer screenings during the height of the pandemic (Fisher-Borne et al., 2021). While 86% reported rescheduling cancer screenings for future visits, the success of this strategy in maintaining screening rates was not measured. FQHCs reported strategies such as switching to telehealth visits and implementing in-office structural changes, new waiting room protocols, and new referral processes to address pandemic restrictions (Fisher-Borne et al., 2021). Following widespread vaccination and the resumption of in-person services, cancer screening rates have begun to rebound (Chen et al., 2021; McBain et al., 2021), but challenges still exist. Currently, medical staff shortages and backlogs of patients needing to catch up on preventive services lead to longer wait times for scheduling appointments and decreased screening rates (Smith and Perkins, 2022; Massad, 2022; Wentzensen et al., 2021).

Little work has explored the impact of the COVID-19 pandemic on clinician perceptions of cervical cancer screening and staffing challenges in FQHCs. To identify characteristics that could be targets for future interventions or additional support, this paper examines the associations of clinician characteristics with perceived changes in cervical cancer screening, and the perceived impact of pandemic-related staffing changes on screening and follow-up of abnormal results, during the pandemic period of October 2021 through July 2022 in FQHCs and safety net settings of care.

Methods

Participant recruitment and target population

The target population was clinicians, defined for the purpose of this study as physicians and Advanced Practice Providers (APPs), who conducted cervical cancer screening in federally qualified health centers and safety net settings of care (hereafter referred to as ‘FQHCs’) in the United States during the post-acute phase of the COVID-19 pandemic. Clinicians were eligible to participate if they: (1) performed cervical cancer screening, (2) were a physician or APP, and (3) were currently practicing in an FQHC in the US between October 2021 and July 2022, the post-acute period of the pandemic in the US when COVID-19 vaccination was widely available to the general population. We recruited clinicians to the online survey, hosted via Qualtrics, through periodic recruitment email messages sent via the American Cancer Society Vaccinating Adolescents Against Cancer (VACs) program and the professional networks of the PIs (RBP, STV).

Survey participants were asked if they would also be willing to participate in a follow-up qualitative interview. A random sample of those who indicated willingness were contacted for participation. This study was approved by Moffitt Cancer Center’s Scientific Review Committee and Institutional Review Board (MCC #20048) and Boston University Medical Center’s Institutional Review Board (H-41533). All survey participants viewed an information sheet in lieu of reading and signing an informed consent form, and interview participants provided verbal consent. All were compensated for their time completing the survey or interview.

Survey development and validation

Quantitative survey questions were developed based on recent literature exploring the effects of the COVID-19 pandemic on cancer screening practices (Miller et al., 2021; Wentzensen et al., 2021) and the investigators’ clinical observations. The draft survey was reviewed by an expert panel of FQHC providers (n=8), refined, piloted, and finalized after incorporating pilot feedback and testing technical functionality of the Qualtrics survey among the study team.

Clinician characteristics assessed included age, race/ethnicity, training, specialty, and geographic region. Age was measured in years and categorized for analysis as <30, 30–39, 40–49, 50+. Gender identity was assessed as male, female, transgender, and other. Race was assessed as Asian, Black/African American, White, Mixed race, Native Hawaiian/Pacific Islander, American Indian/Alaska Native, and Other. Ethnicity was assessed as Hispanic/Latinx or non-Hispanic/Latinx. Race/ethnicity was categorized for analysis as White non-Hispanic versus all others due to small cell sizes of non-White and Hispanic participants. For all variables assessed in this manuscript that allowed write-in/free responses, responses were re-classified within the pre-determined categories for each variable when possible.

Clinician training was assessed as physician (medical doctor [MD], doctor of osteopathic medicine [DO]) or advanced practice provider (APP; physician assistant [PA], nurse practitioner [NP], or certified nurse midwife [CNM]), and was categorized for analysis as: (1) MD/DO and (2) APP. Clinical specialty was assessed as Obstetrics and Gynecology (OBGYN), family medicine, internal medicine (IM), pediatric/adolescent medicine, women’s health, and other (via write-in). Based on prior literature (Neugut et al., 2019) and the number of respondents in each category, we created the following categories for clinician specialty: (1) Women’s Health/OBGYN, (2) Family Medicine, and (3) IM, Pediatrics/Adolescent Medicine, and other. Geographic location included four US regions (Northeast, South, Midwest, West) and a non-responder category for those who did not provide state or zip code. Based on national data indicating geographic variation in coverage rates by US region (Buskwofie et al., 2020) as well as the distribution of respondents, region was categorized as (1) Northeast, (2) South, and (3) West and Midwest.

We also assessed clinical behaviors and attitudes associated with cervical cancer screening. Questions captured the number of screens performed monthly, test(s) used for screening, attitudes toward using self-collected HPV testing for cervical cancer screening, barriers to screening, tracking systems, and staffing changes.

Qualitative interview guide questions were developed based on recent literature (Miller et al., 2021; Wentzensen et al., 2021) and the investigators’ clinical observations. The draft interview guide was reviewed by an expert panel of FQHC providers (n=8) and revised. Interview questions explored survey topics in depth, including experiences with providing cervical cancer screening at different points during the pandemic, barriers to providing care, as well as strategies for improving follow-up, including tracking systems and self-sampled HPV testing.

Data analysis

Quantitative survey data

We assessed descriptive statistics (frequencies, percentages) of clinician characteristics and outcome variables. We conducted separate exact binary logistic regressions (due to small cell sizes) examining the associations of clinician characteristics with (a) screening practices at the time of survey participation (the same/more versus less than pre-pandemic), and (b) pandemic-associated staffing changes impacting the ability to screen or follow up (yes/no). The following variables were included in the full models for each outcome: race/ethnicity, age, gender, region, clinician training, and clinician specialty. We used manual forward selection with a significance level for entry and retention of 0.10, which strikes a balance between the commonly accepted AIC criterion (equivalent to a significance level of approximately 0.157) and the often-used alpha of 0.05, which could lead to failure to identify associations due to the small sample size. Variables were added sequentially, with the variable with the lowest p-value below 0.10 added at each step. We produced forest plots displaying odds ratios and confidence intervals from this model (Figure 2). Analyses were conducted in SAS version 9.4.
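To make the selection procedure concrete, the sketch below shows the general shape of one forward-selection step in SAS, the software used for the analyses. It is a minimal illustration under assumed names, not the authors’ released code (see Data availability): the dataset survey, outcome same_or_more, and predictors specialty, training, and race_eth are hypothetical placeholders.

   /* Minimal sketch of manual forward selection with exact logistic    */
   /* regression; dataset and variable names are hypothetical.          */

   /* Step 1: fit a separate exact logistic regression for each         */
   /* candidate predictor and record its p-value.                       */
   proc logistic data=survey;
      class specialty (ref='WomensHealth_OBGYN') / param=ref;
      model same_or_more (event='1') = specialty;
      exact specialty / estimate=odds;   /* exact tests and exact ORs   */
   run;

   /* Step 2: enter the candidate with the smallest p-value below 0.10, */
   /* then re-fit with each remaining candidate added in turn; repeat   */
   /* until no remaining candidate enters below 0.10.                   */
   proc logistic data=survey;
      class specialty (ref='WomensHealth_OBGYN')
            training  (ref='MD_DO')
            race_eth  (ref='White_nonHispanic') / param=ref;
      model same_or_more (event='1') = specialty training race_eth;
      exact specialty training race_eth / estimate=odds;
   run;

Each PROC LOGISTIC call above conditions on the sufficient statistics via the EXACT statement, which is what makes the estimates robust to the small cell sizes noted in the text.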

Qualitative interview data

Interviews were conducted via video conference (Zoom) by three co-authors (RBP, AM, HBF) trained in qualitative methodology; interviews were audio recorded and transcribed verbatim. Data were coded using thematic content analysis (Elo and Kyngäs, 2008). Based on the questions in the initial interview guide, a priori codes were developed and a codebook was created to operationalize and define each code. The qualitative analysis team independently reviewed the data twice. In the first coding pass, the team hand coded the data with the initial codes and made notes on possible new codes. Notes on possible new codes were then discussed until consensus was reached, the codes were revised, and transcripts were reviewed using the updated code categories. This second coding pass served to clean coding from the first pass and to identify emergent themes not initially identified (Unknown, 1998). At least two coders reviewed each transcript. Discrepancies were resolved by discussion in weekly group meetings. A centralized shared data sheet was used for coding to facilitate communication across institutions.

Role of the funding source

This study was funded by the American Cancer Society, which had no role in the study’s design, conduct, or reporting.

Results

Quantitative survey data

A total of 159 potential participants viewed the online study information sheet and completed screening items; 11 were excluded due to ineligible clinical training (n=5) or not conducting cervical cancer screening (n=6). Data were cleaned, and invalid surveys were removed. Invalid surveys included potential duplicate responses identified by repeat IP address, nonsensical write-in free responses, and those with numerous skipped items. Table 1 details clinician characteristics and screening practices of the final analytic sample (n=148). Figure 1 provides a flow diagram describing the process of determining the final analytic sample size. The sample was primarily female (85%), White (70%), and non-Hispanic (86%), and most practiced in the Northeast (63%). Most (70%) reported specializing in family medicine, 19% reported Women’s Health/OBGYN, and 11% reported other specialties. All but one participant (99%) used Pap/HPV co-testing for routine screening of patients aged 30–65, and 61% performed 10 or fewer screens per month. Most (93%) clinicians determined the next step in management themselves when their patients had abnormal results (rather than referring to a specialist). Most (78%) had colposcopy available on site, though only 31% reported that treatment (Loop Electrosurgical Excision Procedure [LEEP]) was available on site.

Table 1
Clinician characteristics and screening practices.

Variable | Frequency | % | Valid N

Clinician characteristics
Age | | | 147
  Less than 30 | 20 | 14
  30–39 | 56 | 38
  40–49 | 36 | 24
  50+ | 35 | 24
Gender identity | | | 148
  Female† | 125 | 85
  Male | 22 | 15
  Transgender/gender non-binary | 1 | 0.67
Race | | | 148
  Asian | 13 | 9
  Black/African American | 15 | 10
  Mixed race | 10 | 7
  Other | 7 | 5
  White | 103 | 70
Ethnicity | | | 148
  Hispanic/Latinx | 21 | 14
  Not Hispanic/Latinx | 127 | 86
Clinician Training‡ | | | 148
  MD and DO | 67 | 45
  APPs | 81 | 55
Clinician Specialty | | | 148
  Women’s Health and Ob/GYN | 28 | 19
  Family Medicine | 103 | 70
  Internal Medicine, Pediatric/Adolescent Medicine, and ‘other’ | 17 | 11
Region | | | 148
  Northeast | 93 | 63
  South | 28 | 19
  West & Midwest | 26 | 18
  Non-responders | 1 | 0.7
Current number of cervical cancer screenings performed per month
  1–10 | 90 | 61
  11–20 | 27 | 18
  >20 | 31 | 21
Pap/HPV co-testing as screening method for patients aged 30–65¶ | 147 | 99 | 148
Respondent determines management following abnormal results (yes) | 138 | 93 | 148
Health center provides colposcopy on site (yes) | 115 | 78 | 148
Health center provides treatment (LEEP) on site (yes) | 46 | 31 | 148

PANDEMIC IMPACT ON SCREENING AND MANAGEMENT
Screening in 2020 compared to pre-pandemic (less)§ | 127 | 95 | 134
Screening services stopped at any time during the pandemic (yes)§ | 66 | 53 | 125
Colposcopy services stopped at any time during the pandemic (yes)§ ‖ | 36 | 31 | 115
LEEP services stopped at any time during the pandemic (yes)§ ‖ | 8 | 17 | 46
Screening in 2021/now compared to pre-pandemic§ | | | 140
  Less | 39 | 28
  Same | 65 | 46
  More | 36 | 26

  * For all percentages included in all tables, when percentages were .6–.9, we rounded up to the next whole number.

  † Due to small numbers, transgender/non-binary/other could not be analyzed as their own category. They were grouped with female for regression analyses because female was the most common response; no difference was noted when grouped with male.

  ‡ APPs included: NPs (52), CNMs (7), PAs (17), and other (5).

  ¶ The remaining respondent used primary HPV testing. No respondents in this sample used cytology alone.

  § Participants who selected ‘unsure’ were excluded from the denominator: 14 (9%) participants were unsure whether screening was less in 2020 compared to pre-pandemic, 23 (16%) were unsure whether screening services were stopped at any time, 53 (36%) were unsure whether colposcopy services were stopped, 21 (14%) were unsure whether LEEP services were stopped, and 8 (5%) were unsure whether they were screening more or less in 2021/now compared to pre-pandemic.

  ‖ Participants who did not indicate that they performed colposcopy and LEEP services on site were excluded from the denominator.

Figure 1. Study flow chart depicting participant exclusions and final analytic sample.

Most (95%) reported decreased screening during 2020 compared to pre-pandemic, and 53% stated that screening services were completely suspended at some point during the pandemic. Smaller proportions reported suspensions of colposcopy (31%) and LEEP (17%) services. By the time the survey was conducted (October 2021–July 2022), screening had recovered somewhat: approximately one-quarter (28%) reported less cervical cancer screening than before the pandemic, 46% reported the same amount, and 26% reported more screening. Among clinics providing LEEP services, 76% had resumed pre-pandemic LEEP capacity at the time of the survey (data not shown).

We examined cervical cancer screenings performed monthly by clinician training and specialty (Table 2). Overall, 32% of clinicians screened 1–5 patients monthly, 29% screened 6–10 patients, 18% screened 11–20 patients, and 21% reported screening >20 patients. Approximately 18% of MD/DOs and 23% of APPs screened >20 patients per month, while 37% of MD/DOs and 27% of APPs screened 1–5 patients per month. Screening practices varied by specialty, with 59% of clinicians in OBGYN/Women’s Health screening >20 patients per month compared to 11% in Family Medicine.

Table 2
Cervical cancer screenings performed monthly by clinician specialty and clinician training.

 | 1–5 patients per month (N=47) | 6–10 patients per month (N=43) | 11–20 patients per month (N=27) | >20 patients per month (N=31) | Total (N=148)
Clinician Training
  MD/DO | 25 (37%) | 20 (30%) | 10 (15%) | 12 (18%) | 67
  APPs | 22 (27%) | 23 (28%) | 17 (21%) | 19 (23%) | 81
Clinician Specialty
  OBGYN/Women’s Health | 2 (7%) | 4 (14%) | 6 (21%) | 17 (59%) | 29
  Family Medicine | 39 (38%) | 34 (33%) | 19 (18%) | 11 (11%) | 103
  IM, Peds/Adol. Med. | 6 (38%) | 5 (31%) | 2 (13%) | 3 (19%) | 16

Table 3 and Figure 2 detail logistic regression model results for clinician and practice characteristics associated with odds of doing the same amount or more cervical cancer screening at the time of survey completion (2021–2022) as compared to before the COVID-19 pandemic. Region, gender, and age were not included in the model after completing the specified variable selection process. Clinician specialty was significantly associated with odds of doing the same or more cervical cancer screening at the time of the survey (2021–2022) than before the pandemic (p=0.04): compared to Women’s Health/OBGYN clinicians, family medicine clinicians (OR = 0.29, 95% CI: 0.08–1.07, p=0.06) and those in other specialties (OR = 0.12, 95% CI: 0.025–0.606, p=0.01) had decreased odds of performing the same or more screening at the time of the survey (2021–22). Further, clinician training was associated with odds of doing the same or more screening at the time of the survey (2021–2022) as compared to before the pandemic (p=0.06); compared to MDs/DOs, APPs had higher odds of performing the same or more screening (OR = 2.15, 95% CI: 0.967–4.80, p=0.06). Clinician race/ethnicity was also associated, with non-White clinicians more likely than White non-Hispanic clinicians to report the same or more screening at the time of the survey (2021–2022) (OR = 2.16, 95% CI: 0.894–5.21, p=0.08).

Table 3
Final model of clinician and practice characteristics associated with odds of reporting conducting the same amount or more cervical cancer screening now/in 2021 than before the COVID-19 pandemic (N=140).

Manual forward selection was used; the following variables were not selected for the final model (p>0.10): (1) region, (2) gender, and (3) age.

 | Overall p | B | SE | Adjusted odds ratio | p | CI*
Clinician training | 0.0605
  APPs | | 0.7676 | 0.4089 | 2.155 | 0.0605 | 0.967–4.802
  MD/DO (reference) | | – | – | – | – | –
Clinician specialty | 0.0364
  Family Medicine | | –1.2214 | 0.6594 | 0.295 | 0.0640 | 0.081–1.07
  Int. Med., Peds/Adol. Med. | | –2.0996 | 0.8159 | 0.123 | 0.0101 | 0.025–0.606
  Women’s Health/OBGYN (reference) | | – | – | – | – | –
Clinician race/ethnicity | 0.0873
  All other races/ethnicities | | 0.7694 | 0.4500 | 2.1159 | 0.0873 | 0.894–5.214
  White non-Hispanic (reference) | | – | – | – | – | –

  * CI reported is for the OR.


Figure 2. Forest plots depicting clinician and practice characteristics associated with odds of reporting conducting the same amount or more cervical cancer screening now/in 2021 vs. before the pandemic.

Clinicians reported various barriers to cervical cancer screening (Table 4). The following were ‘often’ considered barriers by respondents: limited in-person appointment availability (45%), patients not scheduling (57%) or not attending (42%) appointments, switching to telemedicine (33%), and the need to address more pressing health concerns (31%). Another important barrier, reported by 45% of participants, was pandemic-associated staffing changes impacting the ability to screen for cervical cancer, track abnormal results, or follow up with patients. Approximately half of participants reported currently decreased staffing levels of medical assistants (56%) and office staff (43%) as compared to pre-pandemic, while approximately one-third reported decreases in physicians (35%), APPs (28%), and nurses (28%). Only 12% reported that lack of health insurance was often a barrier to screening.

Table 4
Barriers to cervical cancer screening and pandemic-related staffing changes.

BARRIERS | Rarely n (%) | Sometimes n (%) | Often n (%) | Unsure n (%) (Valid N=148)
Systems barriers
  Limited in-person appointment availability at our health center | 24 (16) | 53 (36) | 66 (45) | 5 (3)
  Patients not scheduling appointments | 5 (3) | 50 (34) | 85 (57) | 8 (6)
  Patients not attending appointments (no-shows) | 8 (6) | 73 (49) | 62 (42) | 5 (3)
  Patient lack of health insurance or limited coverage* | 83 (56) | 36 (24) | 18 (12) | 11 (8)
  Inability to track patients who are due for screening | 58 (39) | 46 (31) | 32 (22) | 12 (8)
  Health center (or providers) not prioritizing screening due to need to address more acute health problems | 34 (23) | 61 (41) | 46 (31) | 7 (5)
  Switched to telemedicine visits so screening not available | 34 (23) | 59 (40) | 48 (33) | 6 (4)

Staffing barriers | Frequency | Percent (Valid N=148)
  COVID-related staffing changes impacted ability to screen or track abnormal results (yes) | 67 | 45

Current health center staffing compared to pre-pandemic | Decreased n (%) | Stayed the same n (%) | Increased n (%) | Unsure n (%) (Valid N=148)
  Physician (MD, DO) | 52 (35) | 80 (54) | 6 (4) | 10 (7)
  Nurse practitioner, Physician Assistant, Certified Nurse Midwife, other Advanced Practice Provider | 42 (28) | 71 (48) | 22 (15) | 13 (9)
  Nurse (RN, LPN) | 42 (28) | 71 (48) | 22 (15) | 13 (9)
  Medical Assistant | 83 (56) | 45 (30) | 8 (6) | 12 (8)
  Office Staff | 64 (43) | 64 (43) | 6 (4) | 14 (10)

  * Participants were also asked what proportion of patients were unable to obtain treatment (LEEP) due to financial issues; 70% (n=102) answered 0–20%.

Clinician and practice characteristics associated with odds of reporting that staff shortages impacted screening, tracking of abnormal results, or follow-up were also assessed using logistic regression. In manual forward selection, gender, region, age, race/ethnicity, clinician specialty, and clinician training were not selected for the final model, indicating that no measured factors were significantly associated with staffing shortages. Table 5 highlights results related to strategies for tracking patient screening and abnormal results. To address missed care during the pandemic, most participants reported scheduling screening at the time of telemedicine visits (74%), performing screening when patients presented for other concerns (61%), and querying electronic medical records (62%). Few (22%) reported extra clinical sessions or extended hours devoted to screening. A minority (20%) reported that they did not have any system to track patients overdue for screening. The most commonly reported tracking systems for screening included the electronic medical record (63%) and dedicated staff members (25%). When asked about management of abnormal screening test results, participants most commonly reported that they were not aware of a tracking system (38%). When systems were in place, they included electronic medical record tracking (34%), a dedicated staff member (36%), and paper logs (5%).

Table 5
Strategies for tracking patients and catching up on missed screenings*.

STRATEGIES | Frequency | Percent | Valid N
Policies or plans for catching up on screenings that were missed due to the pandemic | | | 148
  Patients seen via telemedicine are scheduled for future screening visits | 110 | 74
  Electronic medical record is queried to identify patients who are overdue | 92 | 62
  Added dedicated cervical cancer screening days/hours | 32 | 22
  Try to perform cervical cancer screening at acute problem visits/take advantage of opportunities to screen during unrelated visits | 90 | 61
System for tracking patients overdue for screening | | | 148
  No, unaware of any system | 29 | 20
  Paper log of patients | 5 | 3
  Each dept. has its own system | 5 | 3
  Electronic medical record tracker | 94 | 63
  Dedicated staff person/team member to review records and contact patients | 37 | 25
  Other | 16 | 11
System for tracking abnormal results (e.g., colposcopy referrals) | | | 148
  Paper log of patients | 8 | 5
  Each dept. has its own system | 7 | 5
  I am not aware of any system/each provider tracks own results | 56 | 38
  Electronic medical record tracker | 50 | 34
  Dedicated staff person to review records and contact patients | 53 | 36
  Other | 16 | 11

  * Participants were asked to check all that apply; therefore, answers sum to >100%.

HPV self-sampling has been proposed as a method to improve cervical cancer screening rates. Table 6 highlights clinician attitudes towards adopting HPV self-sampling as a strategy. Overall, 31% felt that self-sampling would be very helpful and 61% felt it would be somewhat helpful to address pandemic-associated screening deficits. Approximately half (49%) would offer self-sampling only to patients who were unable to complete in-clinic screening, 35% would offer it to any patient who preferred to self-sample, 6% would offer self-sampling to all patients, and 5% would not offer self-sampling. The most common perceived benefits of self-sampling were screening patients who had difficulty undergoing speculum exams (26% moderate benefit, 56% large benefit) and screening patients who had difficulty accessing care (34% moderate benefit, 39% large benefit). However, clinicians reported concerns about patients collecting inadequate samples (33% moderate, 33% large concern), not returning specimens in a timely manner (35% moderate, 38% large concern), or not presenting for other primary care services (33% moderate, 31% large concern). Participants were able to add free text to explain their answers in this section. Several participants who expressed concerns about HPV self-sampling described negative experiences with poor return rates and inadequate samples in home-based colon cancer screening.

Table 6
HPV self-sampling perceptions and practices.

 | Frequency | % | Valid N
Helpfulness of HPV self-sampling to catch up patients overdue for screening due to COVID-19 pandemic | | | 147
  Not helpful | 12 | 8
  Somewhat helpful | 89 | 61
  Very helpful | 46 | 31
Would recommend HPV self-sampling instead of clinician-collected sample for cervical cancer screening | | | 148
  All patients | 9 | 6
  Any patient who preferred a self-sample over a clinician-collected sample | 52 | 35
  Only pts. who couldn’t have screening in clinic because of transportation issues, fear of coming to clinic, difficulty with speculum exams | 72 | 49
  N/A, I would not offer HPV self-sampling | 8 | 5
  Other | 7 | 5
Location to perform self-sample HPV tests | | | 148
  In clinic | 8 | 6
  At home | 9 | 6
  Either in clinic or home, depending on pt. preference | 120 | 86
  Other | 3 | 2

Benefits/advantages of self-sampled HPV testing | Not a benefit n (%) | Small benefit n (%) | Moderate benefit n (%) | Large benefit n (%) (Valid N=147)
  Screen patients who have difficulty accessing screening due to lack of qualified providers, distance to clinic, or logistical barriers (e.g., childcare or work schedules) | 7 (5) | 32 (22) | 50 (34) | 58 (39)
  Screen patients via telemedicine | 10 (7) | 50 (34) | 44 (30) | 43 (29)
  Screen patients who would prefer not to have speculum exams (e.g., mobility issues or history of trauma) | 3 (2) | 23 (16) | 38 (26) | 83 (56)

Concerns about self-sampled HPV testing | Not a concern n (%) | Small concern n (%) | Moderate concern n (%) | Large concern n (%) (Valid N=147)
  A pelvic exam by a clinician should be part of cervical cancer screening | 20 (13) | 57 (39) | 38 (26) | 32 (22)
  Patients may not collect adequate specimens | 4 (3) | 45 (31) | 49 (33) | 49 (33)
  Patient may not return specimen in a timely manner | 3 (2) | 37 (25) | 51 (35) | 56 (38)
  If performed at home, patients may not present for routine primary care or follow-up for abnormal test results | 13 (9) | 39 (27) | 49 (33) | 46 (31)

Qualitative data

A total of 15 clinicians participated in qualitative interviews. The qualitative sub-sample was primarily female (93%), White (67%), non-Hispanic (100%), and practiced in the Northeast (67%). More than half (53%) were APPs, and 73% specialized in Family Medicine. Three themes emerged in the qualitative analysis: (1) initial pandemic-associated barriers, (2) ongoing barriers (systems and staffing), and (3) facilitators and strategies for catching up on cervical cancer screening (Table 7).

Table 7
Qualitative themes with exemplar quotes.

Theme: Initial pandemic-associated barriers
  “I would say it definitely disrupted all the cancer screenings, the mammo[gram]’s, the colonoscopies, the pap smears, I would say for the whole year of 2020 into about March of 2021.” (APP, Family Medicine)
  “We were only doing acute visits… everything else was by phone.” (MD, Family Medicine)

Theme: Ongoing barriers (system and staffing)
  System-related:
  “We have the EMR triggering, and we have active tracking of abnormal Paps. But as far as getting people in for their routine screening, I don't believe we have someone actively tracking that. I feel like it’s more on the provider picking it up as they open the chart.” (APP, Family Medicine)
  Staffing-related:
  “We are still working with reduced staff in the office. So, there are definitely still much fewer appointments available.” (APP, Family Medicine)
  “We realized … we really need to start doing colposcopy again. But unfortunately, that’s also when our physician colposcopy provider left.” (MD/DO, Family Medicine)
  “Rates of burnout, and then the competition from other systems, hiring people away was pretty debilitating at times.” (APP, Family Medicine)

Theme: Facilitators and strategies for catching up on cervical cancer screening
  Staffing and tracking:
  “Patients get reminders… the health center as a whole has been trying to run lists of people that are due and bring them in.” (APP, Family Medicine)
  “If they had an abnormal PAP, the nursing staff would have ticklers [in the EMR] created as a reminder that it’s time for the patient to have a PAP… We have two nurses who are dedicated not for just PAP tracking but for general ticklers.” (MD/DO, Internal Medicine)
  HPV self-sampling benefits:
  “It decreases any concerns for like privacy, for discomfort, you know, patients who have trauma histories, maybe patients who are transgender, patients who, you know, like I said, work schedules don't allow them to get in on time, um, it just opens up a way for them to still all be screened in a way that can hopefully feel comfortable and accessible.” (APP, OBGYN/Women’s Health)
  “I think it could be [useful to address pandemic-related screening deficits]. Especially if we don't have, um, as many in-person appointments available.” (APP, Family Medicine)
  HPV self-sampling concerns:
  Inadequate sample: “Making sure that people you know, kind of collect it correctly, mostly just because in my experience, people have not great knowledge about their own anatomy sometimes… if somebody accidentally puts the swab in their rectum, instead of the vagina, you would probably get an HPV result, because you can do HPV testing in the rectum, but you're not getting a, a cervical cancer screening.” (APP, OBGYN/Women’s Health)
  Kits will not be returned: “We do our –occult blood sampling with home tests, and sometimes –many times, those kits go home and never come back. We're always chasing a patient to kind of get them to bring it back or mail it back.” (APP, Family Medicine)

Initial pandemic-associated barriers

These initial barriers related to office closures/limits on office visits, patient fear of in-person care, prioritization of acute/urgent health conditions over preventive care, and inability to provide cervical cancer screening during telemedicine visits. In primary care offices, early disruptions were associated with caring for persons with COVID-19: “People working, especially in family medicine, were distributed to the COVID clinic… And so non-essential visits including routine pap smears were put on hold” (APP, Family Medicine). Many clinics switched to telemedicine, which was helpful for addressing acute issues but reduced opportunities for cervical cancer screening. One said: “If they had been in the clinic… I would have probably done cervical cancer screening at that time.” This participant noted that rescheduling well care was often unsuccessful: “I'll have the medical assistant call… but we have a really high no-show rate when people are just coming in for well exams” (APP, Women’s Health).

Clinicians also noted that patients were afraid to come for care early in the pandemic: “Patients were hesitant, especially in the first year of [the] COVID pandemic, to leave their home for unnecessary reasons, including screening tests such as Pap smear” (MD, Family Medicine). Later in the pandemic, when more patients were seen for primary care, clinicians described situations where other medical conditions took priority: “primary care visits were all like trying to catch up on everything else cause all of a sudden now everyone’s diabetes is out of control, and their anxiety is out of control, and cancer screening ends up being at the bottom of the list among the issues that they want to talk about” (MD, Family Medicine). As the pandemic moved into the endemic phase, clinicians described additional challenges: “The social determinants are still hitting some of our patients pretty hard… I don’t know that it’s COVID as much anymore that’s affecting their ability to access care” (MD, Family Medicine).

Ongoing barriers (system and staffing)

Several participants described current and ongoing limitations to existing systems: “Only if a patient has had an abnormal [result] are they actively being tracked… [otherwise] until they access the Health Center for their next visit we really have no idea” (APP, Women’s Health/OBGYN). Others described EMR functionality that went unused due to limited staff capacity or poorly functioning EMRs: “In our old system you could literally put a quick text [smart phrase that pulls patient information into a medical record note]… and it will just come up with all the history of the Paps. We can't do any of it in this new system… I'm literally going through the system, and looking at all the past Paps, and I'm writing them in the note” (MD, Family Medicine).

Participants described profound staffing shortages: “We're missing MAs, front desk, providers, nurses too. Pretty much literally everybody, every position, we're short” (APP, Family Medicine). Another said: “We stayed [open] without somebody cleaning the clinic 100%... so we had to do some of the work ourselves” (MD, Family Medicine). Staffing shortages also negatively impacted outreach: “We're not outreaching to patients and trying to get them in, we're just trying to get through the day… we just don't have the manpower to see everybody” (APP, Family Medicine). The relatively lower compensation at FQHCs posed an additional challenge both to staff retention and to creating and utilizing patient tracking systems: “As a federally qualified health center, we often are not the best payer for different roles. And so we tend to have a lot of turnover, particularly in our medical assistants, nurses, and it’s quite hard to hire.” This participant also noted, “We also tend not to have the biggest or the most robust IT department… And any time we need to get information from these registries, we need to ask our IT department. But they're pretty understaffed. And also underpaid” (MD, Family Medicine). Childcare also posed challenges: “I'd say the majority of our staff in the nursing and medical assistant roles are moms and some of them are single moms. So we lost a few because… they had no childcare [related to the pandemic] or they couldn't come in” (APP, Family Medicine). In contrast, COVID-19 vaccine mandates were not felt to be significant contributors to staff shortages.

Facilitators and strategies for catching up on cervical cancer screening

The participants discussed how the availability of COVID-19 vaccinations shifted the risk-benefit ratio of seeing patients in person for routine care: “before we were able to be vaccinated… it felt like unnecessary risk” (APP, Women’s Health). As the pandemic continued into its second year, clinicians perceived that the benefits of resuming in-person visits outweighed the risk of contracting COVID-19 in healthcare settings; therefore, the focus shifted to catch-up measures: “When we realized that this was gonna be a long-term change… there was a big push to catch people up [with screening for cervical cancer]” (APP, Family Medicine).

Participants discussed strategies for patient outreach to catch up on screening, including automated components within the EMR, dedicated staff who identify patients who are due for screening, evening or weekend hours, and mobile health units. One noted, “The health center as a whole has been trying to run lists of people that are due and bring them in” (APP, Family Medicine). Clinicians described strategies related to accountable care organizations, which are value-based care entities promoted by the US Centers for Medicare & Medicaid Services (Centers for Medicare & Medicaid Services, 2023), stating: “We're an accountable care organization, it incentivizes getting all of your quality metrics where you want them… The pap smears are tracked every quarter… If you hit above 75% of your pap smears, they give you an incentive quarterly” (APP, Family Medicine). Another suggested that healthcare systems and insurance plans could be utilized: “We [our practice] discussed perhaps using our accountable entity to try to do some outreach as well, because they do outreach right now for colon cancer and mammograms” (MD, Family Medicine).

Some participants described potential strategies to increase staff retention: “Increase in pay I feel will help. But also recognition for the staff, because some of the staff feel underappreciated…. and maybe more organized so that everything can run smoothly and uniformly” (APP, Family Medicine). Another added: “Better salaries, better benefits, better working conditions. In the sense that if somebody needed to take care of a child and go home early, then staggered staffing, flexible hours as part of the benefits, so that somebody else can cover. And, of course, monetary, icing on the cake, so to speak, always works” (MD, OBGYN).

Self-sampling for HPV testing is not currently FDA approved in the US, but may be an option in the future. Most participants thought self-sampling would be helpful to address pandemic-related screening deficits: “People are coming back with a lot of problems that they've been hanging on to for a couple of years. So that could help take care of some of their health maintenance and not further delay it because they're worried about X, Y, Z also. Then sure, that would help with the COVID deficit specifically” (APP, Family Medicine). Many noted that patients self-collected other specimens, and felt that HPV self-collection would be feasible: “We have a lot of our patients doing self-swabs right now anyways for vaginitis… and I'm used to having patients swab themselves for other things like in pregnancy we do GBS swabs, so I feel confident that people can correctly be instructed on how to self-swab” (APP, OBGYN). However, others were concerned about patients’ abilities to properly collect the specimens: “There’s certain populations, especially the underserved community that I do work in might face challenges to follow the instruction or even read on how to do it” (MD/DO, Family Medicine). Others described negative experiences using mailing for self-collected colon cancer screening: “It would be really clever if we could just send out swabs to patients. But I don't know. We tried that with FIT (fecal occult blood) testing, and we were told by the lab that they don't get a high enough return of the kits. And so it actually was cost prohibitive to just be sending out FIT tests” (MD/DO, Family Medicine).

Discussion

We examined patterns of cervical cancer screening provision and abnormal results follow-up between October 2021 and July 2022 among clinicians practicing in federally qualified health centers. Over 80% of clinicians reported decreased screening at the start of the pandemic in 2020, but approximately 67% reported that screening had returned to pre-pandemic levels by the time of the survey (2021–2022). Clinicians in family medicine or other specialties had decreased odds, and APPs had increased odds, of performing the same or more screening at the time of the survey (2021–22) as compared to before the pandemic. Clinician barriers, reported both quantitatively and qualitatively, centered on staffing shortages as well as structural systems to track and reestablish care for those who were overdue for screening and those who needed follow-up after an abnormal screening test.

Barriers to screening evolved over the course of the pandemic. In 2020, fear of contracting COVID was the primary barrier to provision of services by clinicians and health systems, and use of services by patients. Clinicians described near cessation of cervical cancer screening services early in the pandemic, as both clinicians and patients felt that the risk of contracting COVID when providing well care outweighed the benefits of cervical cancer screening in the short term. Vaccinations and the realization that COVID was becoming endemic changed this calculus, and clinics began re-opening services and recalling patients for screenings. In 2021/22, the primary barrier to cervical cancer screening shifted from contagion concerns to staffing shortages and the need for primary care clinicians to address other chronic health conditions. However, clinicians also noted that patients not scheduling or not attending appointments was an important barrier to screening. Quantitative findings indicated that cancer screenings were less often performed in specialties that did not focus on women’s health, such as internal or family medicine. Qualitative data indicated that this may have resulted from a need to provide direct care for COVID-19 patients or to focus on other chronic health conditions that had worsened due to lack of care during 2020 (Amit et al., 2020; Castanon et al., 2021; Network EHR, 2020). In addition, our findings noted that APPs performed more cervical cancer screening than physicians, which could indicate appropriate allocation of patients needing preventive care to APPs, while assigning sicker patients to physicians who could better address complex medical concerns. Additional research is needed to confirm and further explore these findings.

Staff shortages hindering the ability to provide cervical cancer screening and follow-up care were reported by nearly half of clinicians. Clinicians reported reductions in staffing at all levels: physicians/APPs, nurses, medical assistants, and front desk staff. Staff shortages, both clinical and non-clinical, have been reported across many healthcare settings as a result of the pandemic (Holthof and Luedi, 2021; Chervoni-Knapp, 2022). Two factors were felt to be the most important contributors to staff shortages: low salaries and lack of childcare. Because FQHCs typically pay lower salaries than other practice settings (Friedberg et al., 2017; Quinn et al., 2013), participants reported high levels of staff turnover and difficulties with recruitment. Pandemic-related remote schooling and infection control rules created childcare difficulties for many parents. Participants reported this to be a particular problem for female staff in lower-salaried positions, such as medical assistants (Boesch and Hamm, 2020; Organisation for Economic Co-operation and Development, 2019).

Strategies for addressing pandemic-related screening deficiencies included improving staffing levels as well as systems for follow-up and tracking. Several clinicians described success associated with robust tracking systems including population management reports, system-wide incentives, automated patient outreach, and dedicated staff for patient recall and scheduling. Others, however, reported absent systems or being unable to utilize EMR capabilities due to staff shortages. Higher salaries, improved organization within the healthcare system, and ensuring that staff felt respected and valued by leadership were felt to be important strategies for improving care provision (Prasad et al., 2021; Serrano et al., 2021; Sinsky et al., 2021; Talbot and Dean, 2018).

Participants overall felt that HPV self-sampling would be a useful tool to address pandemic-related screening deficits, as has been noted in the literature (Fuzzell et al., 2021). Many felt confident that patients could self-collect the swabs given their experience with patient self-swabbing for vaginitis or group B strep in pregnancy. However, others were concerned that patients might not collect the specimen properly, leading to a false-negative cancer screening result. Self-collected samples analyzed with PCR-based tests have demonstrated accuracy similar to clinician-collected samples (Arbyn et al., 2022), though studies to validate this in US populations are ongoing (National Cancer Institute, 2023). Some participants viewed self-sampling in the clinic more favorably than home testing via mailed kits, due to negative experiences with home-based colon cancer testing. A meta-analysis of self-sampling indicated increased screening participation when self-sampling is offered, with clinic-based offers being more effective than mail-in kits (Costa et al., 2023).

As healthcare continues to face challenges including COVID-19, influenza, behavioral health needs, and exacerbation of chronic diseases, strategies are needed to ensure that patients are provided with cervical cancer prevention services. This is especially important in FQHCs, which serve patients at the highest risk of invasive cervical cancer (Hébert et al., 2018; Singh et al., 2004; Barry and Breen, 2005; Friedman et al., 2012; Bradley et al., 2001). Maintaining adequate staffing is a critical need noted in our study and by others (Frogner, 2022). Higher salaries were felt to be most important, as well as improved organization of clinic function and flexible scheduling to support working parents with childcare needs (Burrowes et al., 2023; U.S. Bureau of Labor Statistics, 2023).

This study has several strengths and weaknesses. We surveyed clinicians practicing in FQHCs across the US on the perceived impact of the pandemic on screening and abnormal results follow-up. Few investigations thus far have examined the perceptions of those practicing in FQHCs, particularly as they pertain to pandemic-related challenges to cervical cancer screening. Despite this, we note several limitations. We recruited our sample through FQHC networks; thus, we were unable to calculate a response rate or achieve a nationally representative sample, and findings cannot be widely generalized. Notwithstanding efforts to achieve a regionally diverse sample, 63% of responding clinicians were practicing in the Northeast at the time of their participation. Given that COVID-19 policies varied widely by state, this regional imbalance may limit the generalizability of our results, although despite the oversample of clinicians in the Northeast, region was not a significant predictor of either outcome. Similarly, our sample was 85% female and 70% White. Although ideally we would have included a sample that was more diverse with respect to race and gender, these characteristics are not disparate from the majority of clinicians who perform cervical cancer screening (e.g., race: Women’s Health NPs [77% White] (Healthcare Ws, 2018), active Ob/Gyns [67% White] (AAMC, 2022), all active physicians [64% White] (AAMC, 2022); gender: all NPs [92% female] (Hooker et al., 2016), Ob/Gyns [64% female] (AAMC, 2022), all active physicians [37% female] (AAMC, 2022)). Importantly, we do not have data on the overall number of screenings provided by each FQHC. The majority of our sample reported that they personally were providing screening at pre-pandemic levels, but nearly half also reported staff shortages impacting screening and follow-up. Therefore, we cannot confirm whether the efforts of remaining staff were sufficient to compensate for missing personnel in terms of the overall availability of services. Finally, the use of manual forward selection with our a priori determined significance level has limitations, including the possibility of overfitting. Additional studies would be useful to confirm these findings.

These findings highlight that in late 2021 and early 2022, clinicians in FQHCs were still perceiving broad impacts of the pandemic on cervical cancer screening, and still reported pandemic-related impacts of staffing changes on screening and follow-up. If not addressed, reductions in screening due to staff shortages and low patient engagement with the healthcare system may lead to increases in cervical cancer in the short and long term. Future research should closely track trends in the provision of screening, colposcopy, and treatment services in underserved communities and settings in order to avoid future increases in cancer incidence.

Data availability

Full human subjects data are unavailable via a data repository due to confidentiality concerns. A limited dataset may be made available upon reasonable request from other academic researchers; requests should be submitted via email to the corresponding author and will be approved on a case-by-case basis by the study PIs and the institutional SRC and IRB. SAS version 9.4 was used to analyze data. SAS code has been made available at https://doi.org/10.7910/DVN/URBYSD.


References

  1. Network EHR (2020) Preventive cancer screenings during COVID-19 pandemic [software].
  2. Poljak M, Cuschieri K, Waheed DEN, Baay M, Vorsters A (2021) Impact of the COVID-19 pandemic on human papillomavirus-based testing services to support cervical cancer screening. Acta Dermatovenerologica Alpina, Pannonica, et Adriatica 30:21–26.
  3. Quinn MT, Gunter KE, Nocon RS, Lewis SE, Vable AM, Tang H, Park S-Y, Casalino LP, Huang ES, Birnberg J, Burnet DL, Summerfelt WT, Chin MH (2013) Undergoing transformation to the patient centered medical home in safety net health centers: perspectives from the front lines. Ethnicity & Disease 23:356–362.

Decision letter

  1. Eduardo L Franco
    Senior and Reviewing Editor; McGill University, Canada
  2. Parker Tope
    Reviewer; McGill University, Canada

Our editorial process produces two outputs: (i) public reviews designed to be posted alongside the preprint for the benefit of readers; (ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Decision letter after peer review:

Thank you for submitting your article "Examining the impact of the COVID-19 pandemic on cervical cancer screening practices among clinicians practicing in Federally Qualified Health Centers: A mixed methods study" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and I oversaw the evaluation in my dual role of Reviewing Editor and Senior Editor. The following individual involved in the review of your submission has agreed to reveal their identity: Parker Tope (Reviewer #1).

Essential revisions:

As is customary in eLife, the reviewers have discussed their critiques with one another. What follows below is an edited compilation of the essential and ancillary points provided by reviewers in their critiques and in their interaction post-review. Please submit a revised version that addresses these concerns directly. Although we expect that you will address these comments in your response letter, we also need to see the corresponding revision clearly marked in the text of the manuscript. Some of the reviewers' comments may seem to be simple queries or challenges that do not prompt revisions to the text. Please keep in mind, however, that readers may have the same perspective as the reviewers. Therefore, it is essential that you attempt to amend or expand the text to clarify the narrative accordingly.

Reviewer #1 (Recommendations for the authors):

Introduction

The introduction contextualizes well previous literature on the impact of the pandemic on cancer screening services and introduces tangible examples of how the pandemic influenced factors along the screening continuum.

While the authors mention investigating the association between clinician characteristics and outcomes of interest in the methods, there is little justification for doing so. Are clinical characteristics meant to serve as proxies for determining the patient populations that are attended to by clinician participants?

Methods

I would suggest reorganizing the methods into the following sections for logical flow: Target population, Survey development and validation (which would include measures), survey administration (which would include Participant Recruitment), and finally analysis (Quantitative and Qualitative). This would aid the reader in following the process of the survey construction, dissemination, and finally, how obtained data were used.

What is the target population of the study? Explicitly stating the target population can help readers determine if sampling methods were appropriate given the sampling frame.

The authors mention federally qualified health centers (FQHCs) as their participant sources; however, there is no explanation as to why participant recruitment was focused on FQHCs. Are there key populations served by FQHCs in the US? Including further explanation would allow for further assessment of the study's generalizability as well as comparability with other research outside of the US.

How was the survey administered? Was there a particular software or an electronic form used for the online survey? I suggest including this information in the Methods section for transparent reporting.

The authors state that the survey questions were piloted across an expert panel, however, it is unclear as to whether the platform that was used for the survey administration was also piloted for possible technical functionality issues. Not piloting the technology could lead to unknown sources of error and thus introduce bias. If piloting was conducted, I suggest that this be more clearly stated in the methods.

The authors collect race/ethnicity information from their survey participants, specifically collecting disaggregated race/ethnicity data. In their regression analyses, the researchers aggregated racial categories without explanation. In accordance with best practices in handling race/ethnicity data, when collecting disaggregated data, subsequent aggregation of race/ethnicity categories should be clearly justified (e.g., insufficient sample size).

There is no mention of missing data or incomplete survey responses. Did the authors collect this information (i.e., were survey responses that were started but not completed captured in the final dataset)? Were the data cleaned and assessed for unlikely or aberrant values?

With regards to the qualitative interviews, how were participants interviewed? In-person, via Zoom, or over the phone? Who conducted the interviews?

I have difficulty understanding why the authors conducted multivariable logistic regression, rather than univariate. Given the study design (i.e., survey) as well as the main objective of the study, which was to explore the perceived need for screening and appears to be descriptive, I am uncertain about the authors' adjustment for several covariates. The overarching concern is that adjustment for covariates (unclear as to whether these are theorized as predictors or confounders) conflates a descriptive research question with causal methods. If the objective were to determine how clinician characteristics causally affected (1) the level of screening pre- and during-pandemic times and (2) the severity of systemic difficulties (i.e., staffing), then adjustment for covariates should be justified and reflected in the study's objective. There appears to be little causality to be inferred through this study and rather a descriptive perspective, which would only appropriately justify univariate, descriptive analyses.

Even if adjustment for covariates were appropriate for this research question, the use of manual forward selection using p-values as the selection criterion can lead to over-fitting and requires well-justified selection of potential covariates to be sequentially added to the model. If the authors proceed to include previously conducted analyses in this manuscript, I suggest that such limitations be acknowledged.

Results and Discussion:

The authors comprehensively present the descriptive findings of the survey data. I very much appreciated the inclusion of direct quotes from interviewees in addition to summarizing key qualitative pieces in the included table. These quotes provide a narrative component to the article and give voice to the challenges and frustrations experienced by clinicians.

Given the authors' emphasis on interviewees' difficulties in staffing, accommodating childcare for staff, and better compensating non-provider healthcare workers, I would suggest the inclusion of a section in the discussion emphasizing how the COVID-19 pandemic has also disproportionately affected the lived experiences of non-providers in healthcare, who have essential roles in facilitating the cancer care system.

Reviewer #2 (Recommendations for the authors):

Introduction:

1. Provide a brief overview of what Federally Qualified Health Centers (FQHCs) are and how they differ from other healthcare facilities in the US. It would help readers unfamiliar with the US healthcare system understand why safety net facilities like FQHCs are essential in cervical cancer screening.

2. Explain why the chosen period (October 2021 through July 2022) was significant or relevant to the study. This would help readers understand why this specific time frame was chosen for the study and how it might have impacted the findings. Was this period especially difficult in the US?

Method:

3. The paper would benefit from discussing the choice of statistical method for analysing the quantitative survey data. Stepwise regression is a debated method with a known downside of overfitting. At the same time, an explanation or discussion of the choice of 0.10 as the significance value for entry should follow.

4. Why did the authors choose the p-values of 0.10 as significant?

Results:

5. Page 7, line 190: The author points out that “the most commonly reported barriers were limited in-person appointment availability (46%)…” However, this number cannot be found in Table 3, pages 20-21. I guess that it is a typo and should instead be 45%?

Discussion:

6. Discuss whether the composition of responders represents the people who generally work at the safety net facilities. The sample contains an overrepresentation of white females, which could affect the results.

7. The paper would benefit from a discussion on the choice of statistical method for analysing the quantitative survey data. It is my understanding that stepwise regression is a debated method with a known downside of overfitting. At the same time, an explanation or discussion of the choice of a p-value of 0.10 should follow, as this in my opinion is high.

Tables and figures:

8. In Table 1, under age, the sum of the numbers does not add up to N 148; instead, the sum is 147.

Reviewer #3 (Recommendations for the authors):

1. General: inconsistencies in percentages between the manuscript text and tables were observed throughout. The manuscript needs to be checked carefully and corrections made. Some may be due to a lack of rounding; appropriate rounding should be applied to percentages noted in tables and footnoted.

2. Abbreviations are provided in the text (and abstract) without being defined at first use. These may be familiar/standard in the US but not for an international audience.

3. Title: only 45% of the participants of this study were clinicians. Adding or replacing this term with 'health care providers' would more accurately describe study participants. This point should be applied throughout the article.

4a. General/abstract: although I appreciate the constraints of the word limit for the abstract, the current wording does not do justice to the work presented. Suggest re-writing sections of it.

4b. Abstract/methods section: lines 35-38 are not methods but results. Other information should be stated in this section e.g. how the national sample was obtained, how the survey was conducted, and domains of questioning.

4c. Abstract/results: Findings in the Results section for APPs and ethnicity did not reach statistical significance as presented in the paper. There were various interesting findings that could replace these statements.

4d. Abstract/conclusion: although I agree with the validity of the statement in the conclusion, it does not sum up the results presented.

5a. Results section

Line 170: it is stated that 38% reported suspension of colposcopy and 6% of LEEP services, based on denominators of 95 and 127 participants respectively, after taking the number of unsure answers out of the total of 148 participants (as per footnote of Table 1). However, Table 1 also stated that only 115/148 provided colposcopy on site and 46/148 provided LEEP which has not been taken into consideration. Please revise both the manuscript text and Table 1 entries accordingly.

5b. Line 180: the p-value has been incorrectly rounded.

5c. Lines 185-188: The text states that clinician training was significantly associated with increased odds of the same or more screening however the p-value provided is 0.06 which signifies weak evidence at best. Even 0.05 is considered borderline significance. The same applies to the association with clinician race/ethnicity. Please amend these statements accordingly or remove.

6a. Discussion general: findings of non-attendance and increased frequency of women not booking screening appointments (not even mentioned in the results but presented in Table 3) are important points to mention in the discussion and to link to observed cervical screening attendance in the US reported during a similar time period.

6b. Lines 327-328: Is this statement based on qualitative evidence? If so please include this in the Results section as well.

6c. Lines 331-333: Quantitative findings referred to in this sentence were not included in the Results section nor relevant tables. It would be informative to provide a breakdown of screens provided by specialty in Table 1.

6d. Lines 335-336: The statement that APPs performed more screens than physicians has not been included in the results. It would be informative to provide a breakdown of screens provided per training in Table 1.

6e. Lines 367-369 These themes have already been raised earlier in the discussion (lines 339-346). Suggest merging the two relevant paragraphs.

7. Table 1: No details on staffing are provided in this table; the sub-title nine rows from the end of the table should be amended.

8a. Table 2: recommend adding zeros before the decimal point for more clarity.

8b. Table 2: a footnote listing the variables for which regression was adjusted should be listed.

8c. Table 3: add 'adjusted' to 'odds ratio'.

https://doi.org/10.7554/eLife.86358.sa1

Author response

Essential revisions:

Reviewer #1 (Recommendations for the authors):

Introduction

The introduction contextualizes well previous literature on the impact of the pandemic on cancer screening services and introduces tangible examples of how the pandemic influenced factors along the screening continuum.

While the authors mention investigating the association between clinician characteristics and outcomes of interest in the methods, there is little justification for doing so. Are clinician characteristics meant to serve as proxies for determining the patient populations that are attended to by clinician participants?

Thank you for your comment. We intentionally sought to examine clinician characteristics that may be associated with perceived changes in cervical cancer screening and the impact of pandemic-related staffing changes on screening and abnormal results follow-up during the post-acute pandemic period. The reason for doing so was to identify characteristics that could be targets for future interventions or additional support. For example, if more family medicine practitioners reported lower screening rates, that would indicate a potential need for interventions focused on family medicine clinicians to help avoid future disparities in cervical cancer. Similarly, characteristics like age, race, ethnicity, gender, and region are worth exploring, as statistically significant associations could indicate that more support and resources should be provided for clinicians in particular sub-groups. We now include a sentence on pg. 4 that indicates the reasoning behind exploring these associations:

“In order to identify characteristics that could be targets for future interventions or additional supports, this paper examines the association of clinician characteristics with perceived changes in cervical cancer screening and the impact of pandemic-related staffing changes on screening and abnormal results follow-up during the pandemic period of October 2021 through July 2022 in FQHCs and safety net settings of care.”

Methods

I would suggest reorganizing the methods into the following sections for logical flow: Target population, Survey development and validation (which would include measures), survey administration (which would include Participant Recruitment), and finally analysis (Quantitative and Qualitative). This would aid the reader in following the process of the survey construction, dissemination, and finally, how obtained data were used.

We have addressed these changes as suggested on pgs. 4 and 5.

What is the target population of the study? Explicitly stating the target population can help readers determine if sampling methods were appropriate given the sampling frame.

The target population was clinicians who conducted cervical cancer screening in federally qualified health centers and safety net facilities in the United States during the post-acute phase of the COVID-19 pandemic, which is now noted on pg. 4 of the manuscript:

“The target population was clinicians, defined for the purpose of this study as physicians and Advanced Practice Providers (APPs), who conducted cervical cancer screening in federally qualified health centers and safety net settings of care in the United States during the post-acute phase of the COVID-19 pandemic.”

The authors mention federally qualified health centers (FQHCs) as their participant sources; however, there is no explanation as to why participant recruitment was focused on FQHCs. Are there key populations served by FQHCs in the US? Including further explanation would allow for further assessment of the study's generalizability as well as comparability with other research outside of the US.

The study focused on safety net settings of care. The most common safety net settings in the US are FQHCs: federally funded health centers or clinics that serve medically underserved areas and populations and often provide care at no or low cost to those with limited or no health insurance. During the pandemic, there was little research that focused specifically on FQHCs and their ability to provide cervical cancer screening to these underserved populations, who are at higher risk of cervical cancer than the general population. This is noted on pg. 3 of the manuscript:

“Federally qualified health centers (FQHCs) in the US are government funded health centers or clinics that provide care to medically underserved populations. Maintaining cancer screening in these and other safety net facilities is critical as they serve patients at the highest risk for cervical cancer: publicly insured/uninsured, immigrant, and historically marginalized populations.”

How was the survey administered? Was there a particular software or an electronic form used for the online survey? I suggest including this information in the Methods section for transparent reporting.

The survey was administered via online survey designed and hosted via Qualtrics, a common platform for market and research surveys. This is now noted on pg. 4 of the manuscript:

“We recruited clinicians for participation in the online survey hosted via Qualtrics….”

The authors state that the survey questions were piloted across an expert panel, however, it is unclear as to whether the platform that was used for the survey administration was also piloted for possible technical functionality issues. Not piloting the technology could lead to unknown sources of error and thus introduce bias. If piloting was conducted, I suggest that this be more clearly stated in the methods.

Thank you for the opportunity to clarify the process. The survey items were piloted with the expert panel prior to design of the Qualtrics survey. Once survey items were embedded into the Qualtrics platform, the research team internally tested the survey, making note of any technical errors that resulted from skip logic, select-all versus single-selection items, etc., and correcting any issues that were identified. The study team also used the same Qualtrics survey platform to design a separate survey for over 1,200 providers in the prior year and thus had extensive experience identifying technical issues. The testing of technical functionality by the study team is now noted on pg. 4 in the Survey development and validation section:

“The draft survey was reviewed by an expert panel of FQHC providers (n=8), refined, piloted, and finalized after incorporating pilot feedback and testing technical functionality of the Qualtrics survey among the study team.”

The authors collect race/ethnicity information from their survey participants, specifically collecting disaggregated race/ethnicity data. In their regression analyses, the researchers aggregated racial categories without explanation. In accordance with best practices in handling race/ethnicity data, when collecting disaggregated data, subsequent aggregation of race/ethnicity categories should be clearly justified (e.g., insufficient sample size).

The reviewer makes an excellent point. Due to small cell sizes for persons of color (n of less than 15 each for Black, Asian, mixed race, and other race categories), we elected to categorize this variable into two groups: white non-Hispanic versus all non-white races (including Hispanic/Latinx). This is noted on pg. 5, and we have added the reasoning (small cell sizes) to the text:

“Race/ethnicity was categorized for analysis as white non-Hispanic versus all others due to small cell sizes of non-white and Hispanic participants.”
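As an illustration of this dichotomization, a minimal sketch in Python is below. The data frame and column names are hypothetical, not the study's actual variables; it simply shows the recode rule described above.

    import pandas as pd

    # Hypothetical survey extract with disaggregated race/ethnicity.
    df = pd.DataFrame({
        "race": ["White", "Black", "Asian", "White"],
        "ethnicity": ["Non-Hispanic", "Non-Hispanic", "Non-Hispanic", "Hispanic/Latinx"],
    })

    # Collapse into the two analysis groups described above:
    # white non-Hispanic versus all others (including Hispanic/Latinx).
    is_white_nh = (df["race"] == "White") & (df["ethnicity"] == "Non-Hispanic")
    df["race_eth_binary"] = is_white_nh.map(
        {True: "White non-Hispanic", False: "All others"}
    )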

There is no mention of missing data or incomplete survey responses. Did the authors collect this information (i.e., were survey responses that were started but not completed captured in the final dataset)? Were the data cleaned and assessed for unlikely or aberrant values?

Data were cleaned and examined for potential duplicate responses (identified by repeat IP address and identical participant characteristics), nonsensical free responses, and apparently random response patterns. See the added explanation now included in text on pg. 7:

“Data were cleaned and invalid surveys were removed. Invalid surveys included potential duplicate responses identified by repeat IP address, nonsensical write-in free responses, and those with numerous skipped items.”

Overall, in the Qualtrics survey platform, if participants skipped an item purposely or unintentionally left an item blank, Qualtrics automatically prompted them to complete that item. They could either choose to complete it or select “ignore” to skip the item. There were very few skipped or missing items, but if a particular participant was missing more than a few responses, all of their responses were assessed to determine whether their participation was invalid and whether their survey should be removed. For each item included in analyses, Tables 1, 3, 4, and 5 include a “valid N” column. The total N for this sample was 148; thus, any valid Ns less than 148 indicate missing responses for that particular item.
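A minimal sketch of this kind of cleaning is below, in Python with pandas. The file name, column names, and the missing-item threshold are all illustrative assumptions, not the study's actual values.

    import pandas as pd

    # Hypothetical export of raw survey responses.
    df = pd.read_csv("survey_export.csv")

    # Flag potential duplicates: same IP address plus identical
    # participant characteristics; keep the first occurrence.
    dup_keys = ["ip_address", "age", "gender", "specialty", "region"]
    df = df[~df.duplicated(subset=dup_keys, keep="first")]

    # Remove surveys with numerous skipped items (threshold assumed;
    # item columns assumed to be named q1, q2, ...).
    item_cols = [c for c in df.columns if c.startswith("q")]
    df = df[df[item_cols].isna().sum(axis=1) <= 5]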

With regards to the qualitative interviews, how were participants interviewed? In-person, via Zoom, or over the phone? Who conducted the interviews?

Interviews were conducted via Zoom by three co-authors trained in qualitative methodology. This info is now indicated on pg. 6:

“Interviews were conducted via video conference (Zoom) by three co-authors (RBP, AM, HBF) trained in qualitative methodology.”

I have difficulty understanding why the authors conducted multivariable logistic regression, rather than univariate. Given the study design (i.e., survey) as well as the main objective of the study, which was to explore the perceived need for screening and appears to be descriptive, I am uncertain about the authors' adjustment for several covariates. The overarching concern is that adjustment for covariates (unclear as to whether these are theorized as predictors or confounders) conflates a descriptive research question with causal methods. If the objective were to determine how clinician characteristics causally affected (1) the level of screening pre- and during-pandemic times and (2) the severity of systemic difficulties (i.e., staffing), then adjustment for covariates should be justified and reflected in the study's objective. There appears to be little causality to be inferred through this study and rather a descriptive perspective, which would only appropriately justify univariate, descriptive analyses.

The reviewer is correct that the analyses presented are not causal; the aim was to examine cross-sectional associations between clinician characteristics (race/ethnicity, gender, age, region, clinician training, clinician specialty) and the dependent variable (perceived screening practices at the time of survey participation: the same/more versus less than pre-pandemic). Our team, including a biostatistician (NB), conducted these analyses with these goals in mind. We also confirmed there were no mentions of causality in the manuscript. Multivariable logistic regression is appropriate because we aimed to examine associations of multiple independent variables with a single dependent variable (perceived screening practices), and each association controls for confounding by the other variables in the model. If needed, we can provide crude univariate associations upon request as supplementary material.

Even if adjustment for covariates were appropriate for this research question, the use of manual forward selection using p-values as the selection criterion can lead to over-fitting and requires well-justified selection of potential covariates to be sequentially added to the model. If the authors proceed to include previously conducted analyses in this manuscript, I suggest that such limitations be acknowledged.

We acknowledge that manual forward selection has drawbacks, as do other approaches such as backward selection or AIC-based selection. With a small sample size (N=148), we chose manual forward selection because it begins with a null model and builds incrementally, rather than backward selection, which starts from a larger model. The biostatistician and study team carefully selected variables to be added to the model. The limitations of manual forward selection are now noted on pg. 16 in the limitations section of the discussion:

“Finally, the use of manual forward selection with our a priori determined significance level has limitations, including the possibility of overfitting. Additional studies would be useful to confirm these findings.”
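For readers unfamiliar with the procedure, the sketch below illustrates manual forward selection with a p-value entry criterion of 0.10 in Python. It is a simplified illustration only: it uses ordinary (not exact) logistic regression via statsmodels, assumes predictors are already dummy-coded numeric columns of a pandas DataFrame, and the function and variable names are hypothetical, not the study's code.

    import statsmodels.api as sm

    def forward_select(df, outcome, candidates, alpha_entry=0.10):
        """Add candidate predictors one at a time, keeping the candidate
        with the smallest p-value, until none meets the entry criterion."""
        selected = []
        remaining = list(candidates)
        while remaining:
            pvals = {}
            for var in remaining:
                X = sm.add_constant(df[selected + [var]])
                fit = sm.Logit(df[outcome], X).fit(disp=0)
                pvals[var] = fit.pvalues[var]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= alpha_entry:
                break  # no remaining candidate meets the 0.10 entry criterion
            selected.append(best)
            remaining.remove(best)
        return selected

For example, forward_select(df, "screening_same_or_more", ["family_medicine", "app", "white_non_hispanic"]) would return the subset of those (hypothetical) dummy variables retained under the 0.10 criterion.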

Results and Discussion:

The authors comprehensively present the descriptive findings of the survey data. I very much appreciated the inclusion of direct quotes from interviewees in addition to summarizing key qualitative pieces in the included table. These quotes provide a narrative component to the article and give voice to the challenges and frustrations experienced by clinicians.

Given the authors' emphasis on interviewees' difficulties in staffing, accommodating childcare for staff, and better compensating non-provider healthcare workers, I would suggest the inclusion of a section in the discussion emphasizing how the COVID-19 pandemic has also disproportionately affected the lived experiences of non-providers in healthcare, who have essential roles in facilitating the cancer care system.

Thank you for this thoughtful feedback. On pgs. 14-15, we include discussion of staff shortages (both clinical and non-clinical staff) as a result of the pandemic, and the impact on these workers.

“Staff shortages hindering the ability to provide cervical cancer screening and follow-up care were reported by nearly half of clinicians. Clinicians reported reductions in staffing at all levels: physicians/APPs, nurses, medical assistants, and front desk staff. Staff shortages, both clinical and non-clinical across many healthcare settings, have been reported in other contexts as a result of the pandemic.29,30 Two factors were felt to be the most important contributors to staff shortages: low salaries and lack of childcare. Because FQHCs typically pay lower salaries than other practice settings,31,32 participants reported high levels of staff turnover and difficulties with recruitment. Pandemic-related remote schooling and rules related to infection control created childcare difficulties for many parents. Participants reported this to be a particular problem for female staff in lower salaried positions, such as medical assistants.33,34”

Reviewer #2 (Recommendations for the authors):

Introduction:

1. Provide a brief overview of what Federally Qualified Health Centers (FQHCs) are and how they differ from other healthcare facilities in the US. It would help readers unfamiliar with the US healthcare system understand why safety net facilities like FQHCs are essential in cervical cancer screening.

Excellent suggestion. We now describe FQHCs in the second paragraph of the introduction, contextualized by the higher rates of cervical cancer diagnosis in the populations served by FQHCs.

2. Explain why the chosen period (October 2021 through July 2022) was significant or relevant to the study. This would help readers understand why this specific time frame was chosen for the study and how it might have impacted the findings. Was this period especially difficult in the US?

We began recruitment in October 2021 because at that time the pandemic appeared to be less acute and COVID-19 vaccination had become widespread in the US, with healthcare organizations attempting to resume normal operations. The goal was to recruit approximately 150 clinicians. However, the Omicron wave hit in winter 2021-22, which overwhelmed healthcare systems and forced us to pause recruitment until approximately March 2022. We resumed recruitment efforts in the spring of 2022 and reached our target sample size by July. We have added information to the Methods section (Participant recruitment and target population) on pg. 4 indicating the goal to focus on perceived cervical cancer screening practices during the post-acute period, after vaccination was generally available.

Method:

3. The paper would benefit from discussing the choice of statistical method for analysing the quantitative survey data. Stepwise regression is a debated method with a known downside of overfitting. At the same time, an explanation or discussion of the choice of 0.10 as the significance value for entry should follow.

We now acknowledge the limitation of overfitting on pg. 16 of the manuscript: “the use of manual forward selection with our a priori determined significance level has limitations, including the possibility of overfitting.” Additionally, because of the small sample size, the nature of these analyses is exploratory. Using a priori hypotheses would have included more potential variables and would have resulted in a larger model than what we ultimately utilized, with very small cell sizes for many of the variables. As suggested by the study biostatistician, we selected a p of 0.10 as the significance value for entry. This strikes a balance: it is more conservative than the commonly accepted method of using the AIC (Akaike’s Information Criterion), which implicitly assumes a significance level of 0.157, while mitigating the potentially low power corresponding to a level of 0.05 in a sample as small as ours. This is now noted on pg. 6:

“We used manual forward selection with a value for entry and significance of 0.10 because this strikes a balance between the commonly accepted method of using AIC (which assumes significance level of 0.157), and the often used α of 0.05, which could lead to failure to identify associations due to small sample size.”

4. Why did the authors choose the p-values of 0.10 as significant?

Similar to our response above pertaining to the entry p-value of 0.10, we use a significance level of 0.10 to balance the tradeoff between the high type I error inherent in other levels such as 0.157 (the level assumed when using the AIC to choose a model) and the low power that a smaller α would give these exploratory analyses. This small study was designed as a supplement to a larger quantitative study and was intended to be hypothesis generating for future, confirmatory studies. Given these goals, the study biostatistician and research team felt that 0.10 was an appropriate significance level selected a priori for our study. On pg. 6, we have clarified that the reasoning stated in the response above applies to both entry and significance:

“We used manual forward selection with a value for entry and significance of 0.10 because…”
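For completeness, the 0.157 figure quoted above follows from a standard argument, sketched here in LaTeX notation: for two nested models that differ by one parameter, with maximized log-likelihoods \ell_0 (smaller model) and \ell_1 (larger model), AIC prefers the larger model exactly when the likelihood-ratio statistic exceeds 2, and under the null hypothesis that statistic is chi-squared with one degree of freedom:

    \mathrm{AIC}_1 < \mathrm{AIC}_0 \iff 2(\ell_1 - \ell_0) > 2,
    \qquad P\!\left(\chi^2_1 > 2\right) \approx 0.157

So AIC-based selection of a single additional variable behaves like a likelihood-ratio test at α ≈ 0.157, which is why a threshold of 0.10 sits between the AIC and the conventional 0.05.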

Results:

5. Page 7, line 190: The author points out that “the most commonly reported barriers were limited in-person appointment availability (46%)…” However, this number cannot be found in Table 3, pages 20-21. I guess that it is a typo and should instead be 45%?

Thank you for noting this typo. We have corrected the percentage to 45% in text on pg. 9.

Discussion:

6. Discuss whether the composition of responders represents the people who generally work at the safety net facilities. The sample contains an overrepresentation of white females, which could affect the results.

As we note in the public response, we acknowledge the high enrollment of White women in our provider sample and now address this point in the discussion on pg. 16:

“Similarly, our sample was 85% female and 70% White. Although ideally we would have included a sample that was more diverse with respect to race and gender, these characteristics are not disparate from the majority of clinicians who perform cervical cancer screening (e.g., race: Women’s Health NPs [77% White], active Ob/Gyns [67% White], all active physicians [64% White]; gender: all NPs [92% female], Ob/Gyns [64% female], all active physicians [37% female]).”

Data describing these characteristics are reported in the Association of American Medical Colleges (AAMC) 2022 Physician Specialty Data Report and Executive Summary, the 2018 NPWH Women’s Health Nurse Practitioner Workforce Demographics and Compensation Survey: Highlights Report, and a published paper describing the characteristics of nurse practitioners in the US, which are cited in text.

7. The paper would benefit from a discussion on the choice of statistical method for analysing the quantitative survey data. It is my understanding that stepwise regression is a debated method with a known downside of overfitting. At the same time, an explanation or discussion of the choice of a p-value of 0.10 should follow, as this in my opinion is high.

As we state in our response to Reviewer 1, multivariable logistic regression is appropriate because we aimed to examine associations of multiple independent variables with a single dependent variable (perceived screening practices), and each association controls for confounding by the other variables in the model. On pg. 6 of the manuscript we state the reasoning for use of exact models:

“We conducted separate exact binary logistic regressions (due to small cell sizes)…”

As we noted in our response to Reviewer 2, we now acknowledge the limitation of overfitting on pg. 16:

“the use of manual forward selection with our a priori determined significance level has limitations, including the possibility of overfitting.” Additionally, as suggested by the study biostatistician, we selected a p of 0.10 as the significance value for entry. This strikes a balance: it is more conservative than the commonly accepted method of using the AIC (Akaike’s Information Criterion), which implicitly assumes a significance level of 0.157, while mitigating the potentially low power corresponding to a level of 0.05 in a sample as small as ours. This is now noted on pg. 6: “We used manual forward selection with a value for entry and significance of 0.10 because this strikes a balance between the commonly accepted method of using AIC (which assumes significance level of 0.157), and the often used α of 0.05, which could lead to failure to identify associations due to small sample size.”

Tables and figures:

8. In Table 1, under age, the sum of the numbers does not add up to N 148; instead, the sum is 147.

Thank you for your attention to detail. A check of our descriptive statistics confirmed that one participant did not respond to this item. Therefore, the valid n for age should be 147, as now indicated in Table 1.

Reviewer #3 (Recommendations for the authors):

1. General: inconsistencies in percentages between the manuscript text and tables were observed throughout. The manuscript needs to be checked carefully and corrections made. Some may be due to a lack of rounding; appropriate rounding should be applied to percentages noted in tables and footnoted.

Thank you for noting this. All errors have been corrected.

2. Abbreviations are provided in the text (and abstract) without being defined at first use. These may be familiar/standard in the US but not for an international audience.

Thank you for noting this oversight. We have amended the text to spell out acronyms where appropriate.

3. Title: only 45% of the participants of this study were clinicians. Adding or replacing this term with 'health care providers' would more accurately describe study participants. This point should be applied throughout the article.

Thank you for your note on terminology. As defined in this study, all participants meet the definition of “clinician”. We have clarified this in the text on pg. 4:

“The target population was clinicians, defined for the purpose of this study as physicians and advanced practice providers, who conducted cervical cancer screening in federally qualified health centers in the United States during the post-acute phase of the COVID-19 pandemic.”

While we agree that ‘health care provider’ is commonly used, we use ‘clinician’ throughout the manuscript because a pilot test of survey items for this survey's parent study indicated that some physicians and patients may perceive the language of ‘provider’ to be paternalistic and potentially antisemitic. We therefore phrased survey items using the term ‘clinician’, and this has carried over to our manuscripts. We are amenable to changing this language if the international readership of the journal would find this more appropriate.

4a. General/abstract: although I appreciate the constraints of the word limit for the abstract, the current wording does not do justice to the work presented. Suggest re-writing sections of it.

4b. Abstract/methods section: lines 35-38 are not methods but results. Other information should be stated in this section e.g. how the national sample was obtained, how the survey was conducted, and domains of questioning.

We have restructured the method section of the abstract to reflect these suggestions.

4c. Abstract/results: Findings in the Results section for APPs and ethnicity did not reach statistical significance as presented in the paper. There were various interesting findings that could replace these statements.

We have amended this section of the abstract to better reflect our findings.

4d. Abstract/conclusion: although I agree with the validity of the statement in the conclusion, it does not sum up the results presented.

We have amended the concluding statement of the abstract as suggested.

5a. Results section

Line 170: it is stated that 38% reported suspension of colposcopy and 6% of LEEP services, based on denominators of 95 and 127 participants respectively, after taking the number of unsure answers out of the total of 148 participants (as per footnote of Table 1). However, Table 1 also stated that only 115/148 provided colposcopy on site and 46/148 provided LEEP which has not been taken into consideration. Please revise both the manuscript text and Table 1 entries accordingly.

Thank you for noting this important oversight. We have recalculated the percentage that suspended colposcopy and LEEP based on new denominators of those who actually perform these services on site and have updated the percentages in the Results section of the manuscript and in Table 1.

5b. Line 180: the p-value has been incorrectly rounded.

We have corrected the p-value as mentioned (now 0.04 rather than 0.03).

5c. Lines 185-188: The text states that clinician training was significantly associated with increased odds of the same or more screening however the p-value provided is 0.06 which signifies weak evidence at best. Even 0.05 is considered borderline significance. The same applies to the association with clinician race/ethnicity. Please amend these statements accordingly or remove.

Clinician training was associated with increased odds of the same or more screening, a finding that was statistically significant based on our significance level of 0.10, which was chosen a priori, before we examined the data. We acknowledge that this level could be considered borderline significant if one had chosen a different level, such as 0.05. Given the small sample size and exploratory nature of this study, we felt that this significance level was justified, and even more conservative than other options such as the AIC (α=0.157). We note in the discussion that these findings should be explored/confirmed with additional research, and we also note on pg. 6 of the methods the reasoning for choosing 0.10 as the significance value, as described in the response to Reviewer 2. (A value of 0.10 strikes a balance: it is more conservative than the commonly accepted method of using the AIC (Akaike’s Information Criterion), which implicitly assumes a significance level of 0.157, while mitigating the potentially low power corresponding to a level of 0.05 in a sample as small as ours.)

6a. Discussion general: findings of non-attendance and increased frequency of women not booking screening appointments (not even mentioned in the results but presented in Table 3) are important points to mention in the discussion and to link to observed cervical screening attendance in the US reported during a similar time period.

Thank you for pointing this out. We have added the following to the discussion on pg. 14:

“However, clinicians also noted that patients not scheduling or not attending appointments was an important barrier to screening.”

6b. Lines 327-328: Is this statement based on qualitative evidence? If so please include this in the Results section as well.

The referenced statement, “Clinicians described near cessation of cervical cancer screening services early in the pandemic, as both clinicians and patients felt that the risk of contracting COVID when providing well care outweighed the benefits of cervical cancer screening in the short term,” is based on both quantitative and qualitative evidence. Most (80%) survey respondents noted decreased screening early in the pandemic. This was contextualized in qualitative interviews as near cessation of services during lockdowns. Clinicians described patient fears about attending preventive services as well as their own concerns about contracting COVID while providing care. Exemplar quotes currently in the Results section are:

1. Near cessation of screening: “People working, especially in family medicine, were distributed to the COVID clinic… And so non-essential visits including routine pap smears were put on hold” (APP, Family Medicine).

2. Risk of contracting COVID outweighed benefits for patients: “Patients were hesitant, especially in the first year of [the] COVID pandemic, to leave their home for unnecessary reasons, including screening tests such as Pap smear” (MD, Family Medicine).

3. Risk of contracting COVID outweighed benefits for clinicians: “before we were able to be vaccinated… it felt like unnecessary risk” (APP, Women’s Health).

6c. Lines 331-333: Quantitative findings referred to in this sentence were not included in the Results section nor relevant tables. It would be informative to provide a breakdown of screens provided by specialty in Table 1.

We appreciate this suggestion and we have now created a separate table that displays cervical cancer screenings performed monthly by clinician specialty. This is now Table 2. We have also added this information into the Results section on pg. 7.

6d. Lines 335-336: The statement that APPs performed more screens than physicians has not been included in the results. It would be informative to provide a breakdown of screens provided per training in Table 1.

As indicated above, we have now created a separate table that displays cervical cancer screenings performed monthly by clinician training. This is now Table 2. We have also added this information into the Results section.

6e. Lines 367-369 These themes have already been raised earlier in the discussion (lines 339-346). Suggest merging the two relevant paragraphs.

We have made this suggested change.

7. Table 1: No details on staffing are provided in this table; the sub-title nine rows from the end of the table should be amended.

Thank you for your attention to detail. We have amended the sub-title in this section of the table.

8a. Table 2: recommend adding zeros before the decimal point for more clarity.

Zeros have been added to Table 2 as suggested.

8b. Table 2: a footnote listing the variables for which regression was adjusted should be listed.

At the end of the title of Table 2 (now Table 3), we state:

“Manual forward selection was utilized, and the following variables were not selected for the final model (p > 0.10): (1) region, (2) gender, and (3) age.”

8c. Table 3: add 'adjusted' to 'odds ratio'.

‘Adjusted’ has been added to the table.

https://doi.org/10.7554/eLife.86358.sa2

Article and author information

Author details

  1. Lindsay Fuzzell

    H. Lee Moffitt Cancer Center & Research Institute, Health Outcomes and Behavior, Tampa, United States
    Contribution
    Conceptualization, Data curation, Supervision, Visualization, Writing – original draft, Project administration, Writing – review and editing
    Contributed equally with
    Paige Lake
    For correspondence
    Lindsay.Fuzzell@moffitt.org
    Competing interests
    No competing interests declared
ORCID iD: 0000-0001-9688-5365
  2. Paige Lake

    H. Lee Moffitt Cancer Center & Research Institute, Health Outcomes and Behavior, Tampa, United States
    Contribution
    Formal analysis, Visualization, Writing – original draft, Project administration, Writing – review and editing
    Contributed equally with
    Lindsay Fuzzell
    For correspondence
    paige.lake@moffitt.org
    Competing interests
    No competing interests declared
ORCID iD: 0000-0002-5591-6417
  3. Naomi C Brownstein

    Medical University of South Carolina, Public Health Sciences, Charleston, United States
    Contribution
    Conceptualization, Data curation, Formal analysis, Supervision, Visualization, Methodology
    Competing interests
    No competing interests declared
  4. Holly B Fontenot

    University of Hawaii at Manoa, Nancy Atmospera-Walch School of Nursing, Honolulu, United States
    Contribution
    Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Writing – review and editing
    Competing interests
    No competing interests declared
  5. Ashley Whitmer

    H. Lee Moffitt Cancer Center & Research Institute, Health Outcomes and Behavior, Tampa, United States
    Contribution
    Project administration, Writing – review and editing
    Competing interests
    No competing interests declared
  6. Alexandra Michel

    University of Hawaii at Manoa, Nancy Atmospera-Walch School of Nursing, Honolulu, United States
    Contribution
    Formal analysis, Visualization, Writing – review and editing
    Competing interests
    No competing interests declared
  7. McKenzie McIntyre

    H. Lee Moffitt Cancer Center & Research Institute, Health Outcomes and Behavior, Tampa, United States
    Contribution
    Project administration, Writing – review and editing
    Competing interests
    No competing interests declared
  8. Sarah L Rossi

    Boston University, Chobanian & Avedisian School of Medicine, Boston, United States
    Contribution
    Formal analysis, Visualization, Writing – review and editing
    Competing interests
    No competing interests declared
  9. Sidika Kajtezovic

    Boston University, Chobanian & Avedisian School of Medicine, Boston, United States
    Contribution
    Formal analysis, Visualization
    Competing interests
    No competing interests declared
  10. Susan T Vadaparampil

    1. H. Lee Moffitt Cancer Center & Research Institute, Health Outcomes and Behavior, Tampa, United States
    2. H. Lee Moffitt Cancer Center & Research Institute, Office of Community Outreach, Engagement, and Equity, Tampa, United States
    Contribution
    Conceptualization, Resources, Data curation, Supervision, Funding acquisition, Investigation, Methodology, Writing – review and editing
    Competing interests
    No competing interests declared
  11. Rebecca Perkins

    Boston University, Chobanian & Avedisian School of Medicine, Boston, United States
    Contribution
    Conceptualization, Resources, Data curation, Formal analysis, Supervision, Funding acquisition, Investigation, Visualization, Methodology, Writing – original draft, Writing – review and editing
    Competing interests
    No competing interests declared

Funding

American Cancer Society

  • Susan T Vadaparampil

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

The authors acknowledge Moffitt Cancer Center’s Biostatistics and Bioinformatics Shared Resource (BBSR).

Ethics

This study was approved by Moffitt Cancer Center's Scientific Review Committee and Institutional Review Board (MCC #20048) and Boston University Medical Center's Institutional Review Board (H-41533).

Senior and Reviewing Editor

  1. Eduardo L Franco, McGill University, Canada

Reviewer

  1. Parker Tope, McGill University, Canada

Version history

  1. Received: January 21, 2023
  2. Preprint posted: January 28, 2023 (view preprint)
  3. Accepted: July 28, 2023
  4. Version of Record published: September 4, 2023 (version 1)

Copyright

© 2023, Fuzzell, Lake et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

  1. Lindsay Fuzzell
  2. Paige Lake
  3. Naomi C Brownstein
  4. Holly B Fontenot
  5. Ashley Whitmer
  6. Alexandra Michel
  7. McKenzie McIntyre
  8. Sarah L Rossi
  9. Sidika Kajtezovic
  10. Susan T Vadaparampil
  11. Rebecca Perkins
(2023)
Examining the perceived impact of the COVID-19 pandemic on cervical cancer screening practices among clinicians practicing in Federally Qualified Health Centers: A mixed methods study
eLife 12:e86358.
https://doi.org/10.7554/eLife.86358

Further reading

    1. Epidemiology and Global Health
    Tina Bech Olesen, Henry Jensen ... Sisse H Njor
    Research Article Updated

    Background:

    In most of the world, the mammography screening programmes were paused at the start of the pandemic, whilst mammography screening continued in Denmark. We examined the mammography screening participation during the COVID-19 pandemic in Denmark.

    Methods:

    The study population comprised all women aged 50–69 years old invited to participate in mammography screening from 2016 to 2021 in Denmark based on data from the Danish Quality Database for Mammography Screening in combination with population-based registries. Using a generalised linear model, we estimated prevalence ratios (PRs) and 95% confidence intervals (CIs) of mammography screening participation within 90, 180, and 365 d since invitation during the pandemic in comparison with the previous years adjusting for age, year and month of invitation.

    Results:

    The study comprised 1,828,791 invitations among 847,766 women. Before the pandemic, 80.2% of invitations resulted in participation in mammography screening within 90 d, 82.7% within 180 d, and 83.1% within 365 d. At the start of the pandemic, participation in screening within 90 d was reduced to 69.9% for those invited in pre-lockdown and to 76.5% for those invited in the first lockdown. When the follow-up time was extended to 365 d, only a minor overall reduction was observed (PR = 0.94; 95% CI: 0.93–0.95 in pre-lockdown and PR = 0.97; 95% CI: 0.96–0.97 in the first lockdown). A lower participation was, however, seen among immigrants and among women with a low income.

    Conclusions:

    The short-term participation in mammography screening was reduced at the start of the pandemic, whilst only a minor reduction in the overall participation was observed with longer follow-up time, indicating that women postponed screening. Some groups of women, nonetheless, had a lower participation, indicating that the social inequity in screening participation was exacerbated during the pandemic.

    Funding:

    The study was funded by the Danish Cancer Society Scientific Committee (grant number R321-A17417) and the Danish regions.

    1. Epidemiology and Global Health
    2. Genetics and Genomics
    Arturo Torres Ortiz, Michelle Kendall ... Louis Grandjean
    Research Article

    Accurate inference of who infected whom in an infectious disease outbreak is critical for the delivery of effective infection prevention and control. The increased resolution of pathogen whole-genome sequencing has significantly improved our ability to infer transmission events. Despite this, transmission inference often remains limited by the lack of genomic variation between the source case and infected contacts. Although within-host genetic diversity is common among a wide variety of pathogens, conventional whole-genome sequencing phylogenetic approaches exclusively use consensus sequences, which consider only the most prevalent nucleotide at each position and therefore fail to capture low-frequency variation within samples. We hypothesized that including within-sample variation in a phylogenetic model would help to identify who infected whom in instances in which this was previously impossible. Using whole-genome sequences from SARS-CoV-2 multi-institutional outbreaks as an example, we show how within-sample diversity is partially maintained among repeated serial samples from the same host, how it can be transmitted between cases with known epidemiological links, and how this improves phylogenetic inference and our understanding of who infected whom. Our technique is applicable to other infectious diseases and has immediate clinical utility in infection prevention and control.