Modeling resource allocation strategies for insecticide-treated bed nets to achieve malaria eradication
Abstract
Large reductions in the global malaria burden have been achieved, but plateauing funding poses a challenge for progressing towards the ultimate goal of malaria eradication. Using previously published mathematical models of Plasmodium falciparum and Plasmodium vivax transmission incorporating insecticide-treated nets (ITNs) as an illustrative intervention, we sought to identify the global funding allocation that maximized impact under defined objectives and across a range of global funding budgets. The optimal strategy for case reduction mirrored an allocation framework that prioritizes funding for high-transmission settings, resulting in total case reductions of 76% and 66% at intermediate budget levels, respectively. Allocation strategies that had the greatest impact on case reductions were associated with lesser near-term impacts on the global population at risk. The optimal funding distribution prioritized high ITN coverage in high-transmission settings endemic for P. falciparum only, while maintaining lower levels in low-transmission settings. However, at high budgets, 62% of funding was targeted to low-transmission settings co-endemic for P. falciparum and P. vivax. These results support current global strategies to prioritize funding to high-burden P. falciparum-endemic settings in sub-Saharan Africa to minimize clinical malaria burden and progress towards elimination, but highlight a trade-off with ‘shrinking the map’ through a focus on near-elimination settings and addressing the burden of P. vivax.
eLife assessment
This study presents a valuable finding on the optimal prioritization in different malaria transmission settings for the distribution of insecticidetreated nets to reduce the malaria burden. The evidence supporting the claims of the authors is solid. The work will be of interest from a global funder perspective, though somewhat less relevant for individual countries.
Introduction
Global support for malaria eradication has fluctuated in response to changing health policies over the past 75 years. From near global endemicity in the 1900s, over 100 countries have eliminated malaria, with 10 of these certified malaria-free by the World Health Organization (WHO) in the last two decades (Feachem et al., 2010; Shretta et al., 2017; Weiss et al., 2019). Despite this success, 41% and 57% of the global population in 2017 were estimated to live in areas at risk of infection with Plasmodium falciparum and Plasmodium vivax, respectively (Weiss et al., 2019; Battle et al., 2019). In 2021, there were an estimated 247 million new malaria cases and over 600,000 deaths, primarily in children under 5 years of age (World Health Organization, 2007; World Health Organization, 2022b). Mosquito resistance to the insecticides used in vector control, parasite resistance to both first-line therapeutics and diagnostics, and local active conflicts continue to threaten elimination efforts (World Health Organization, 2007; World Health Organization, 2022a). Nevertheless, the global community continues to strive towards the ultimate aim of eradication, which could save millions of lives and thus offer high returns on investment (Chen et al., 2018; Strategic Advisory Group on Malaria Eradication, 2020).
The global goals outlined in the Global Technical Strategy for Malaria (GTS) 2016–2030 include reducing malaria incidence and mortality rates by 90%, achieving elimination in 35 countries, and preventing reestablishment of transmission in all countries currently classified as malaria-free by 2030 (World Health Organization, 2007; World Health Organization, 2015). Various stakeholders have also set timelines for the wider goal of global eradication, ranging from 2030 to 2050 (World Health Organization, 2007; World Health Organization, 2020; Chen et al., 2018; Strategic Advisory Group on Malaria Eradication, 2020). However, there remains a lack of consensus on how best to achieve this longer-term aspiration. Historically, substantial progress in eliminating malaria was made mainly in lower-transmission countries in temperate regions during the Global Malaria Eradication Program in the 1950s, with the global population at risk of malaria falling from around 70% of the world population in 1950 to 50% in 2000 (Hay et al., 2004). Renewed commitment to malaria control in the early 2000s with the Roll Back Malaria initiative subsequently extended the focus to the highly endemic areas of sub-Saharan Africa (Feachem et al., 2010). Whilst it is now widely acknowledged that the current tool set is insufficient in itself to eradicate the parasite, there continues to be debate about how resources should be allocated (Snow, 2015). Some advocate for a focus on high-burden settings to lower the overall global burden (World Health Organization, 2007; World Health Organization, 2019), while others call for increased funding to middle-income, low-burden countries through a ‘shrink the map’ strategy where elimination is considered a driver of global progress (Newby et al., 2016).
A third set of policy options is influenced by equity considerations, including allocating funds to achieve equal allocation per person at risk, equal access to bed nets and treatment, the maximum number of lives saved, or equitable overall health status (World Health Organization, 2007; World Health Organization, 2013; Raine et al., 2016).
Global strategies are influenced by international donors, which provide 68% of the global investment in malaria control and elimination activities (World Health Organization, 2007; World Health Organization, 2022b). The Global Fund and the U.S. President’s Malaria Initiative are two of the largest contributors to this investment. Their strategies pursue a combination approach, prioritizing malaria reduction in high-burden countries while achieving subregional elimination in select settings (The Global Fund, 2021; United States Agency for International Development & Centers for Disease Control and Prevention, 2021). Given that the global investment for malaria control and elimination still falls short of the 6.8 billion USD currently estimated to be needed to meet GTS 2016–2030 goals (World Health Organization, 2007; World Health Organization, 2022b), an optimized strategy to allocate limited resources is critical to maximizing the chance of successfully achieving the GTS goals and longer-term eradication aspirations.
In this study, we use mathematical modeling to explore the optimal allocation of limited global resources to maximize the long-term reduction in P. falciparum and P. vivax malaria. Our aim is to determine whether financial resources should initially focus on high-transmission countries, low-transmission countries, or a balance between the two, across a range of global budgets. In doing so, we consider potential trade-offs between short-term gains and long-term impact. We use compartmental deterministic versions of two previously developed and tested individual-based models of P. falciparum and P. vivax transmission, respectively (Griffin et al., 2010; White et al., 2018). Using the compartmental model structures allows us to fully explore the space of possible resource allocation decisions using optimization, which would be prohibitively costly to perform using more complex individual-based models. Furthermore, to evaluate the impact of resource allocation options, we focus on a single intervention: insecticide-treated nets (ITNs). Whilst in reality national malaria elimination programs encompass a broad range of preventative and therapeutic tools alongside different surveillance strategies as transmission decreases, this simplification is made for computational feasibility, with ITNs chosen as they (a) provide both an individual protective effect and population-level transmission reductions (i.e. indirect effects); (b) are the most widely used single malaria intervention other than first-line treatment; and (c) have extensive distribution and costing data available that allow us to incorporate their decreasing technical efficiency at high coverage.
Results
We identified 105 malaria-endemic countries based on 2000 P. falciparum and P. vivax prevalence estimates (before the scale-up of interventions), of which 44, 9, and 52 were endemic for P. falciparum only, P. vivax only, and co-endemic for both species, respectively. Globally, the clinical burden of malaria was concentrated in settings of high transmission intensity endemic for P. falciparum only, followed by low-transmission settings co-endemic for P. falciparum and P. vivax (Figure 1A). Conversely, 89% of the global population at risk of malaria was located in co-endemic settings with very low and low transmission intensities (Figure 1B). All 25 countries with high transmission intensity and 11 of 17 countries with moderate transmission intensity were in Africa, while almost half of global cases and populations at risk in low-transmission co-endemic settings originated in India.
Deterministic compartmental versions of two previously published and validated mathematical models of P. falciparum and P. vivax malaria transmission dynamics (Griffin et al., 2010; Griffin et al., 2014; Griffin et al., 2016; White et al., 2018) were used to explore associations between ITN use and clinical malaria incidence. In model simulations, the relationship between ITN usage and malaria infection outcomes varied by the baseline entomological inoculation rate (EIR), representing local transmission intensity, and by parasite species (Figure 2). The same increase in ITN usage achieved a larger relative reduction in clinical incidence in low-EIR than in high-EIR settings. Low levels of ITN usage were sufficient to eliminate malaria in low-transmission settings, whereas high ITN usage was necessary to achieve a substantial decrease in clinical incidence in high-EIR settings. At the same EIR value, ITNs also led to a larger relative reduction in P. falciparum than in P. vivax clinical incidence. However, an ITN usage of 80% was not sufficient to achieve full elimination of either P. falciparum or P. vivax in the highest-transmission settings. In combination, the models projected that ITNs could reduce global P. falciparum and P. vivax cases by 83.6% from 252.0 million and by 99.9% from 69.3 million in 2000, respectively, assuming a maximum ITN usage of 80%.
We next used a nonlinear generalized simulated annealing function to determine the optimal global resource allocation for ITNs across a range of budgets. We defined optimality as the funding allocation across countries which minimizes a given objective. We considered two objectives: first, reducing the global number of clinical malaria cases, and second, reducing both the global number of clinical cases and the number of settings not having yet reached a pre-elimination phase. The latter can be interpreted as accounting for an additional positive contribution of progressing towards elimination on top of a reduced case burden (e.g. general health system strengthening through a reduced focus on malaria). To relate funding to the impact on malaria, we incorporated a nonlinear relationship between costs and ITN usage, resulting in an increase in the marginal cost of ITN distribution at high coverage levels (Bertozzi-Villa et al., 2021). We considered a range of fixed budgets, with the maximum budget being that which enabled achieving the lowest possible number of cases in the model. Low, intermediate, and high budget levels refer to 25%, 50%, and 75% of this maximum, respectively.
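The optimization step above can be illustrated with a minimal sketch. The four-setting world, dose-response curves, and cost function below are made-up placeholders (in the paper these come from the transmission models and the Bertozzi-Villa et al., 2021 distribution data), and a basic hand-rolled annealer stands in for the generalized simulated annealing routine; only the overall shape — convex costs, a soft budget penalty, and a stochastic search over per-setting ITN usage capped at 80% — mirrors the approach described.

```python
# Illustrative sketch only: simulated-annealing search for the ITN-usage
# allocation minimizing total cases under a budget constraint.
# All numbers (incidence, populations, cost curve) are hypothetical.
import numpy as np

baseline_inc = np.array([0.50, 0.20, 0.05, 0.01])   # cases/person-year, placeholder
population = np.array([20e6, 60e6, 150e6, 400e6])   # populations at risk, placeholder
UNIT_COST = 3.50                                    # USD per net distributed
BUDGET = 5e8                                        # illustrative global budget, USD

def nets_per_capita(usage):
    # Convex stand-in for usage -> annual nets distributed per capita:
    # marginal nets needed rise steeply at high coverage.
    return 0.6 * usage / (1.0 - 0.9 * usage)

def total_cases(usage):
    # Concave stand-in dose-response: incidence falls with ITN usage.
    return float(np.sum(baseline_inc * (1.0 - 0.95 * usage ** 0.7) * population))

def objective(usage):
    cost = float(np.sum(nets_per_capita(usage) * population)) * UNIT_COST
    return total_cases(usage) + 1e-2 * max(0.0, cost - BUDGET)  # soft budget penalty

def anneal(f, x0, lo=0.0, hi=0.8, n_iter=20000, t0=1e6, seed=1):
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), f(x0)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        t = t0 / (1 + k)                             # simple cooling schedule
        cand = np.clip(x + rng.normal(0.0, 0.05, x.size), lo, hi)
        fc = f(cand)
        # Accept improvements always, worsenings with temperature-dependent odds.
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

usage_opt, obj_opt = anneal(objective, np.zeros(4))
print("optimized usage per setting:", np.round(usage_opt, 2))
```

With the placeholder curves, the search tends to push usage highest where the product of incidence and population makes cases cheapest to avert, which is the qualitative behavior the analysis exploits.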
In our main analysis, we ignored the time dimension over which funds are distributed, instead focusing on the endemic equilibrium reached for each level of allocation (sensitivity to this assumption is explored in a second analysis with dynamic reallocation every 3 years). The optimal strategies were compared with three existing approaches to resource allocation: (1) prioritization of high-transmission settings, (2) prioritization of low-transmission (near-elimination) settings, and (3) proportional allocation by disease burden. Strategies prioritizing high- or low-transmission settings involved the sequential allocation of funding to groups of countries based on their transmission intensity (from highest to lowest EIR or vice versa). The proportional allocation strategy mimics the current allocation algorithm employed by the Global Fund: budget shares are distributed according to the malaria disease burden in the 2000–2004 period (The Global Fund, 2019). To allow comparison with this existing funding model, we also started allocation decisions from the year 2000.
We found the optimal strategies for reducing total malaria cases (i.e. global burden) and for combined case reduction and pre-elimination to be similar to the strategy that prioritized funding for high-transmission settings. These three strategies achieved the largest reductions in global malaria cases at all budgets, including reductions of 76%, 73%, and 66% at the intermediate budget level, respectively (Figure 3A, Table 1). At low to intermediate budgets, the proportional allocation strategy also reduced malaria cases effectively, by up to 53%. While these four scenarios had very similar effects on malaria cases at low budgets, they diverged with increasing funding, where the proportional allocation strategy did not achieve substantial further reductions. Depending on the available budget, the optimal strategy for case reduction averted up to 31% more cases than prioritization of high-transmission settings and 64% more cases than proportional allocation, corresponding to respective differences of 37.9 and 74.5 million cases globally.
We additionally found there to be a trade-off between reducing global cases and reducing the global population at risk of malaria. Both the optimal strategies and the strategy prioritizing high-transmission settings did not achieve substantial reductions in the global population at risk until large investments were reached (Figure 3B, Table 1). Even at a high budget, the global population at risk was only reduced by 19% under the scenario prioritizing high-transmission settings, with higher reductions of 42–58% for the optimal strategies, while proportional allocation had almost no effect on this outcome. Conversely, diverting funding to prioritize low-transmission settings was highly effective at increasing the number of settings eliminating malaria, achieving a 56% reduction in the global population at risk even at intermediate budgets. However, this investment led to a reduction of only 24% in the total malaria caseload (Figure 3, Table 1). At high budget levels, prioritizing low-transmission settings resulted in up to 3.8 times (a total of 159.4 million) more cases than the optimal allocation for case reduction. Despite the population at risk remaining relatively large with the optimal strategy for case reduction and pre-elimination, it nevertheless led to pre-elimination in more malaria-endemic settings than all other strategies (Appendix 1—figure 7), in addition to close to minimum cases across all budgets (Figure 3).
The allocation strategies also had differential impacts on P. falciparum and P. vivax cases, with case reductions generally occurring first for P. falciparum except when prioritizing low-transmission settings. P. vivax cases were not substantially affected at low global budgets for all other allocation strategies, and proportional allocation had almost no effect on reducing the P. vivax clinical burden at any budget (Figure 3C), leading to a temporary increase in the proportion of total cases attributable to P. vivax relative to P. falciparum. The global population at risk remained high with the optimal strategy for case reduction even at high budgets, partly due to a large remaining population at risk of P. vivax infection (Figure 3D), which was not targeted when aiming to minimize total cases (Figure 1).
The optimized distribution of funding to minimize clinical burden depended on the available global budget and was driven by the setting-specific transmission intensity and the population at risk (Figure 4, Figure 1). With very low to low budget levels, as much as 85% of funding was allocated to moderate- to high-transmission settings (Figure 4A, Appendix 1—figure 8A). This allocation pattern led to the maximum ITN usage of 80% being reached in settings of high transmission intensity and smaller population sizes even at low budgets, while maintaining lower levels in low-transmission settings with larger populations (Figure 4B, Appendix 1—figure 8B). The proportion of the budget allocated to low- and very low-transmission settings increased with increasing budgets, and low-transmission settings received the majority of funding at intermediate to maximum budgets. This allocation pattern remained very similar when optimizing for both case reduction and pre-elimination (Appendix 1—figure 9). Similar patterns were also observed for the optimized distribution of funding between settings endemic for only P. falciparum compared to P. falciparum and P. vivax co-endemic settings (Figure 4C–D), with the former being prioritized at low to intermediate budgets. At the maximum budget, 70% of global funding was targeted at low- and very low-transmission settings co-endemic for both parasite species.
To evaluate the robustness of the results, we conducted a sensitivity analysis on our assumption of ITN distribution efficiency. Results remained similar when assuming a linear relationship between ITN usage and distribution costs (Appendix 1—figure 10). While the main analysis involves a single allocation decision to minimize long-term case burden (leading to a constant ITN usage over time in each setting irrespective of subsequent changes in burden), we additionally explored an optimal strategy with dynamic reallocation of funding every 3 years to minimize cases in the short term. At high budgets, capturing dynamic changes over time through reallocation of funding based on minimizing P. falciparum cases every 3 years led to the same case reductions over time as a one-time optimization with the allocation of a constant ITN usage (Appendix 1—figure 11). At lower budgets, reallocation every 3 years achieved a higher impact at several timepoints, but total cases remained similar between the two approaches. Although reallocation of resources from settings which achieved elimination to higher-transmission settings did not lead to substantially fewer cases, it reduced total spending over the 39-year period in some cases (Appendix 1—figure 11).
Discussion
Our study highlights the potential impact that funding allocation decisions could have on the global burden of malaria. We estimated that optimizing ITN allocation to minimize global clinical incidence could, at a high budget, avert 83% of clinical cases compared to no intervention. In comparison, the optimal strategy to minimize clinical incidence and maximize the number of settings reaching pre-elimination averted 82% of clinical cases, prioritizing high-transmission settings 81%, proportional allocation 61%, and prioritizing low-transmission settings 37%. Our results support initially prioritizing funding towards reaching high ITN usage in high-burden P. falciparum-endemic settings to minimize global clinical cases and advance elimination in more malaria-endemic settings, but highlight the trade-off between this strategy and reducing the global population at risk of malaria as well as addressing the burden of P. vivax.
Prioritizing low-transmission settings demonstrated how focusing on ‘shrinking the malaria map’ by quickly reaching elimination in low-transmission countries diverts funding away from the high-burden countries with the largest caseloads. Prioritizing low-transmission settings achieved elimination in 42% of settings and reduced the global population at risk by 56% when 50% of the maximum budget had been spent, but also resulted in 3.2 times more clinical cases than the optimal allocation scenario. Investing a larger share of global funding in high-transmission settings aligns more closely with the current WHO ‘high burden to high impact’ approach, which places an emphasis on reducing the malaria burden in the 11 countries that comprise 70% of global cases (World Health Organization, 2007; World Health Organization, 2019). Previous research supports this approach, finding that the 20 highest-burden countries would need to obtain 88% of global investments to reach case and mortality risk estimates in alignment with GTS goals (Patouillard et al., 2017). This is similar to the modeled optimized funding strategy presented here, which allocated up to 76% of very low budgets to settings of high transmission intensity located in sub-Saharan Africa. An initial focus on high- and moderate-transmission settings is further supported by our results showing that a balance can be found between achieving close to optimal case reductions while also progressing towards elimination in the maximum number of settings. Even within a single country, targeting interventions to local hotspots has been shown to lead to higher cost savings than universal application (Barrenho et al., 2017), and could lead to elimination in settings where untargeted interventions would have little impact (Bousema et al., 2012).
Assessing optimal funding patterns is a global priority due to the funding gap between supply and demand for resources for malaria control and elimination (World Health Organization, 2007; World Health Organization, 2022b). However, allocation decisions will remain important even if more funding becomes available, as some of the largest differences in total cases between the modeled strategies occurred at intermediate to high budgets. Our results suggest that most global funding should be focused on low-transmission settings co-endemic for P. falciparum and P. vivax only at high budgets, once ITN use has already been maximized in high-transmission settings. Global allocation decisions are likely to affect P. falciparum and P. vivax burden differently, which could have implications for the future global epidemiology of malaria. For example, with a focus on disease burden reduction, a temporary increase in the proportion of malaria cases attributable to P. vivax was projected, in line with recent observations in near-elimination areas (Battle et al., 2019; Price et al., 2020). Nevertheless, even when international funding for malaria increased between 2007 and 2009, African countries remained the major recipients of financial support, while P. vivax-dominant countries were not as well funded (Snow et al., 2010). This serves as a reminder that achieving the elimination of malaria from all endemic countries will ultimately require targeting investments so as to also address the burden of P. vivax malaria.
Different priorities in resource allocation decisions greatly affect which countries receive funding and what health benefits are achieved. The modeled strategies follow key ethical principles in the allocation of scarce healthcare resources, such as targeting those of greatest need (prioritizing high-transmission settings, proportional allocation) or those with the largest expected health gain (optimized for case reduction, prioritizing high-transmission settings) (World Health Organization, 2007; World Health Organization, 2013). Allocation proportional to disease burden did not achieve as great an impact as other strategies because the funding share assigned to settings was constant irrespective of the invested budget and its impact. In modeling this strategy, we did not reassign excess funding in high-transmission settings to other malaria interventions, as would likely occur in practice. This illustrates the possibility that such an allocation approach can target certain countries disproportionately and result in further inequities in health outcomes (Barrenho et al., 2017). From an international funder perspective, achieving vertical equity might, therefore, also encompass higher disbursements to countries with lower affordability of malaria interventions (Barrenho et al., 2017), as reflected in the Global Fund’s proportional allocation formula, which accounts for the economic capacity of countries and specific strategic priorities (The Global Fund, 2019). While these factors were not included in the proportional allocation used here, the estimated impact of these two strategies was nevertheless very similar (Appendix 1—figure 12).
While our models are based on country patterns of transmission settings and corresponding populations in 2000, there are several factors leading to heterogeneity in transmission dynamics at the national and subnational levels which were not modeled and limit our conclusions. Seasonality, changing population size, and geographic variation in P. vivax relapse patterns or in mosquito vectors could affect the projected impact of ITNs and the optimized distribution of resources across settings. The two representative Anopheles species used in the simulations are also both very anthropophagic, which may have led to an overestimation of the effect of ITNs in some settings. By using ITNs as the sole means to reduce mosquito-to-human transmission, we did not capture the complexities of other key interventions that play a role in burden reduction and elimination, the geospatial heterogeneity in cost-effectiveness and optimized distribution of intervention packages at a subnational level, or related pricing dynamics (Conteh et al., 2021; Drake et al., 2017). For P. vivax in particular, reducing the global economic burden and achieving elimination will depend on the incorporation of hypnozoitocidal treatment and G6PD screening into case management (Devine et al., 2021). Furthermore, for both parasites, intervention strategies generally become more focal as transmission decreases, with targeted surveillance and response strategies prioritized over widespread vector control. Therefore, policy decisions should additionally be based on analysis of country-specific contexts, and our findings are not informative for individual country allocation decisions.
Results do, however, account for nonlinearities in the relationship between ITN distribution and usage to represent changes in cost as a country moves from control to elimination: interventions that are effective in malaria control settings, such as widespread vector control, may be phased out or limited in favor of more expensive active surveillance and a focus on confirmed diagnoses and at-risk populations (Shretta et al., 2017). We also assumed that transmission settings are independent of each other, and did not allow for the possibility of reintroduction of disease, such as has occurred throughout the Eastern Mediterranean from imported cases (World Health Organization, 2007). While our analysis presents allocation strategies to progress toward eradication, the results do not provide insight into the allocation of funding to maintain elimination. In practice, the threat of malaria resurgence has important implications for when to scale back interventions.
Our analysis demonstrates the most impactful allocation of a global funding portfolio for ITNs to reduce global malaria cases. Unifying all funding sources in a global strategic allocation framework as presented here requires international donor allocation decisions to account for available domestic resources. National governments of endemic countries contribute 31% of all malaria-directed funding globally (World Health Organization, 2020), and government financing is a major source of malaria spending in near-elimination countries in particular (Haakenstad et al., 2019). Within the wider political economy which shapes the funding landscape and priority setting, there remains substantial scope for optimizing allocation decisions, including improving the efficiency of within-country allocation of malaria interventions. Subnational malaria elimination in localized settings within a country can also provide motivation for continued elimination in other areas and friendly competition between regions to boost global elimination efforts (Lindblade and Kachur, 2020). Although more efficient allocation cannot fully compensate for projected shortfalls in malaria funding, mathematical modeling can aid efforts in determining optimal approaches to achieve the largest possible impact with available resources.
Materials and methods
Transmission models
We used deterministic compartmental versions of two previously published individual-based transmission models of P. falciparum and P. vivax malaria to estimate the impact of varying ITN usage on clinical incidence in different transmission settings. The P. falciparum model has previously been fitted to age-stratified data from a variety of sub-Saharan African settings to recreate observed patterns in parasite prevalence (PfPR_{2–10}), the incidence of clinical disease, immunity profiles, and vector components relating to rainfall, mosquito density, and the EIR (Griffin et al., 2016). We developed a deterministic version of an existing individual-based model of P. vivax transmission, originally calibrated to data from Papua New Guinea but also shown to reproduce global patterns of P. vivax prevalence and clinical incidence (White et al., 2018). Models for both parasite species are structured by age and heterogeneity in exposure to mosquito bites, and account for human immunity patterns. They model mosquito transmission and population dynamics, and the impact of scale-up of ITNs, in identical ways. Full assumptions, mathematical details, and parameter values can be found in Appendix 1 and in previous publications (Griffin et al., 2010; Griffin et al., 2014; Griffin et al., 2016; White et al., 2018).
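The published models are age-structured and track immunity and full vector dynamics; as a much-reduced illustration of the compartmental approach, the sketch below Euler-integrates a classic Ross-Macdonald-style human-mosquito system to equilibrium, with ITN usage assumed to scale down the mosquito biting rate. Every parameter value here is illustrative, not a fitted value from the paper.

```python
# Minimal Ross-Macdonald-style sketch, NOT the published age-structured
# models: proportions infected among humans (ih) and mosquitoes (iv),
# Euler-integrated to equilibrium. All parameter values are illustrative.
def equilibrium_prevalence(itn_usage,
                           m=10.0,        # mosquitoes per human
                           a0=0.3,        # bites per mosquito per day
                           b=0.5, c=0.5,  # infection probabilities per bite
                           r=0.01,        # human recovery rate (per day)
                           mu=0.1,        # mosquito death rate (per day)
                           phi=0.9,       # assumed ITN effect on biting
                           dt=0.1, n_steps=50000):
    a = a0 * (1.0 - phi * itn_usage)      # ITNs scale down the biting rate
    ih, iv = 0.01, 0.01                   # small seed infection
    for _ in range(n_steps):
        dih = m * a * b * iv * (1.0 - ih) - r * ih
        div = a * c * ih * (1.0 - iv) - mu * iv
        ih += dt * dih
        iv += dt * div
    return ih

p_no_itn = equilibrium_prevalence(0.0)
p_high_itn = equilibrium_prevalence(0.8)
print(p_no_itn, p_high_itn)
```

Even this toy system reproduces the qualitative pattern reported in the Results: the same ITN coverage produces a much smaller relative prevalence reduction when baseline transmission is very high.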
Data sources
We calibrated the model to baseline transmission intensity in all malaria-endemic countries before the scale-up of interventions, using the year 2000 as an indicator of these levels in line with the current allocation approach taken by the Global Fund (The Global Fund, 2019). Annual EIR was used as a measure of parasite transmission intensity, representing the rate at which people are bitten by infectious mosquitoes. We simulated models to represent a wide range of EIRs for P. falciparum and P. vivax. These transmission settings were matched to 2000 country-level prevalence data, resulting in EIRs of 0.001–80 for P. falciparum and 0.001–1.3 for P. vivax. P. falciparum estimates came from parasite prevalence in children aged 2–10 years, and P. vivax prevalence estimates came from light microscopy data across all ages, based on standard reporting for each species (Weiss et al., 2019; Battle et al., 2019). The relationship between parasite prevalence and EIR for specific countries is shown in Appendix 1—figures 5 and 6. In each country, the population at risk for P. falciparum and P. vivax malaria was obtained by summing WorldPop gridded 2000 global population estimates (Tatem, 2017) within Malaria Atlas Project transmission spatial limits using geoBoundaries (Runfola et al., 2020) (Appendix 1: Country-level data and modeling assumptions on the global malaria distribution). The analysis was conducted at the national level, since this scale also applies to funding decisions made by international donors (The Global Fund, 2019). As this exercise represents a simplification of reality, population sizes were held constant, and projected population growth is not reflected in the number of cases and the population at risk in different settings. Seasonality was also not incorporated in the model, as EIRs are matched to annual prevalence estimates and the effects of seasonal changes are averaged across the time frame captured.
For all analyses, countries were grouped according to their EIR, resulting in a range of transmission settings compatible with the global distribution of malaria. Results were further summarized by grouping EIRs into broader transmission intensity settings according to WHO prevalence cutoffs of 0–1%, 1–10%, 10–35%, and ≥35% (World Health Organization, 2007; World Health Organization, 2022a). This corresponded approximately to classifying EIRs of less than 0.1, 0.1–1, 1–7, and 7 or higher as very low, low, moderate, and high transmission intensity, respectively.
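The EIR grouping above maps directly to a small classifier; the sketch below hard-codes the approximate thresholds stated in the text (the function name is ours):

```python
def transmission_intensity(eir):
    """Classify an annual EIR using the approximate thresholds in the text,
    corresponding to WHO prevalence cutoffs of 0-1%, 1-10%, 10-35%, >=35%."""
    if eir < 0.1:
        return "very low"
    if eir < 1:
        return "low"
    if eir < 7:
        return "moderate"
    return "high"

# Example spanning the modeled EIR range of 0.001-80:
print([transmission_intensity(e) for e in (0.001, 0.5, 5, 80)])
# → ['very low', 'low', 'moderate', 'high']
```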
Interventions
In all transmission settings, we simulated the impact of varying coverages of ITNs on clinical incidence. While most countries implement a package of combined interventions, to reduce the computational complexity of the optimization we considered the impact of ITN usage alone, in addition to 40% treatment of clinical disease. ITNs are a core intervention recommended by WHO for large-scale deployment in areas with ongoing malaria transmission (Winskill et al., 2019; World Health Organization, 2007; World Health Organization, 2022a), and funding for vector control represents much of the global investment required for malaria control and elimination (Patouillard et al., 2017). Modeled coverages represent population ITN usage between 0% and 80%, with the upper limit reflective of common targets for universal access (Koenker et al., 2018). In each setting, the models were run until clinical incidence stabilized at a new equilibrium with the given ITN usage.
Previous studies have shown that, as population coverage of ITNs increases, the marginal cost of distribution increases as well (Bertozzi-Villa et al., 2021). We incorporated this non-linearity in costs by estimating the annual ITN distribution required to achieve the simulated population usage based on published data from across Africa, assuming that nets would be distributed on a 3-yearly cycle and accounting for ITN retention over time (Appendix 1). The cost associated with a given simulated ITN usage was calculated by multiplying the number of nets distributed per capita per year by the population size and by the unit cost of distributing an ITN, assumed to be $3.50 (Sherrard-Smith et al., 2022).
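The costing step can be sketched as follows. The convex exponential mapping from target usage to nets distributed is a hypothetical placeholder standing in for the relationship fitted to published distribution data (Bertozzi-Villa et al., 2021); only the $3.50 unit cost comes from the text.

```python
import math

UNIT_COST = 3.50  # USD per ITN distributed (from the text)

def annual_nets_per_capita(usage):
    """Hypothetical convex usage-to-distribution curve: nets that must be
    distributed per person per year to sustain a given population usage.
    The exponential form is an illustrative assumption."""
    return 0.2 * (math.exp(2.0 * usage) - 1.0)

def annual_cost(usage, population):
    """Annual ITN cost for one transmission setting."""
    return annual_nets_per_capita(usage) * population * UNIT_COST
```

With this form the marginal cost of raising usage from 60% to 80% exceeds that of raising it from 0% to 20%, reproducing the qualitative behavior described above.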
Optimization
The optimal funding allocation for case reduction was determined by finding the allocation of ITNs ${b}$ across transmission settings that minimizes the total number of malaria cases at equilibrium. Case totals were calculated as the sum of the product of clinical incidence ${cinc}_{i}$ and the population ${p}_{i}$ in each transmission setting $i$. Simultaneous optimization for case reduction and pre-elimination was implemented with an extra weighting term in the objective function, corresponding to a reduction in total remaining cases by a proportion $w$ of the total cases averted by the ITN allocation, $C$. This, therefore, represents a positive contribution for each setting reaching the pre-elimination phase. The weighting on pre-elimination compared to case reduction was 0 in the scenario optimized for case reduction, and varied between 0.5 and 100 times in the other optimization scenarios. Resource allocation must respect a budget constraint, which requires that the sum of the cost of the ITNs distributed cannot exceed the initial budget $B$, with ${b}_{i}$ the initial number of ITNs distributed in setting $i$ and $c$ the cost of a single pyrethroid-treated net. The second constraint requires that the ITN usage ${b}_{i}^{\star}$ must be between 0% and 80% (Koenker et al., 2018), with ITN usage being a function of ITNs distributed, as shown in the following equation.
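The weighted objective can be sketched as below. The helper inputs are hypothetical: `cinc` is a list of functions mapping a setting's ITN usage to equilibrium clinical incidence per person, and `pop` the setting populations. Settings below 1 case per 1,000 at risk count as pre-elimination, each reducing remaining cases by a proportion w of the total cases averted, C.

```python
def objective(usages, cinc, pop, w):
    """Sketch of the objective: total equilibrium cases, minus a
    pre-elimination bonus of w * C per setting reaching the
    pre-elimination threshold (< 1 case per 1,000 at risk)."""
    baseline = sum(c(0.0) * p for c, p in zip(cinc, pop))
    cases = [c(u) * p for c, u, p in zip(cinc, usages, pop)]
    total = sum(cases)
    averted = baseline - total  # C in the text
    n_preelim = sum(1 for cs, p in zip(cases, pop) if cs / p < 1e-3)
    return total - w * averted * n_preelim
```

Setting `w = 0` recovers the pure case-reduction objective described in the first scenario.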
The optimization was undertaken using generalized simulated annealing (Xiang et al., 2013). We included a penalty term in the objective function to incorporate linear constraints. Further details can be found in Appendix 1.
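The penalized optimization can be sketched in pure Python. This is a minimal annealer standing in for GenSA's generalized simulated annealing; the penalty weight, step size, and cooling schedule are illustrative assumptions, and the 0–80% usage bounds come from the text.

```python
import math
import random

def anneal(objective, budget_cost, budget, dim, iters=4000, seed=1):
    """Minimize `objective` over per-setting usages in [0, 0.8], adding a
    linear penalty whenever the allocation's cost exceeds the budget."""
    rng = random.Random(seed)

    def penalized(x):
        over = max(0.0, budget_cost(x) - budget)
        return objective(x) + 1e6 * over  # penalty weight is an assumption

    x = [rng.uniform(0, 0.8) for _ in range(dim)]
    best, fbest = list(x), penalized(x)
    f = fbest
    for i in range(iters):
        temp = 1.0 / (1 + i)  # simple cooling schedule
        cand = [min(0.8, max(0.0, xi + rng.gauss(0, 0.1))) for xi in x]
        fc = penalized(cand)
        # Accept improvements always; worse moves with decreasing probability.
        if fc < f or rng.random() < math.exp(-(fc - f) / max(temp, 1e-9)):
            x, f = cand, fc
            if f < fbest:
                best, fbest = list(x), f
    return best, fbest
```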
The optimal allocation strategy for minimizing cases was also examined over a period of 39 years using the P. falciparum model, comparing a single allocation of a constant ITN usage to minimize clinical incidence at 39 years against reallocation every 3 years (similar to Global Fund allocation periods; The Global Fund, 2016), leading to varying ITN usage over time. At the beginning of each 3-year period, we determined the optimized allocation of resources to be held fixed until the next round of funding, with the objective of minimizing 3-year global clinical incidence. Once P. falciparum elimination is reached in a given setting, ITN distribution is discontinued, and in the next period the same total budget $B$ is distributed among the remaining settings. We calculated the total budget required to minimize case numbers at 39 years and compared the impact of reallocating every 3 years with a one-time allocation of 25%, 50%, 75%, and 100% of the budget. To ensure computational feasibility, 39 years was used as it was the shortest time frame over which the effect of redistributing funding from countries having achieved elimination could be observed.
Analysis
We compared the impact of the two optimal allocation strategies (scenarios 1A and 1B) and three additional allocation scenarios on global malaria cases and the global population at risk. Modeled scenarios are shown in Table 2. Scenarios 1C–1E represent existing policy strategies that involve prioritizing high-transmission settings, prioritizing low-transmission (near-elimination) settings, or resource allocation proportional to disease burden in the year 2000. Global malaria case burden and the population at risk were compared between baseline levels in 2000 and after reaching an endemic equilibrium under each scenario for a given budget.
Certification of malaria elimination requires proof that the chain of indigenous malaria transmission has been interrupted for at least 3 years and a demonstrated capacity to prevent return transmission (World Health Organization, 2007; World Health Organization, 2018). In our analysis, transmission settings were defined as having reached malaria elimination once less than one case remained per the setting's total population. Once a setting reaches elimination, the entire population is removed from the global total population at risk, representing a 'shrink the map' strategy. The pre-elimination phase was defined as having reached less than 1 case per 1000 persons at risk in a setting (Mendis et al., 2009).
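The two thresholds above can be expressed as a small classifier; this is an illustrative sketch using only the cutoffs stated in the text.

```python
def setting_status(cases, population):
    """Classify a transmission setting: 'elimination' once fewer than one
    case remains in total, 'pre-elimination' at < 1 case per 1,000 persons
    at risk, otherwise 'endemic'."""
    if cases < 1:
        return "elimination"
    if cases / population < 1 / 1000:
        return "pre-elimination"
    return "endemic"
```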
All strategies were evaluated at different budgets ranging from 0 to the minimum investment required to achieve the lowest possible number of cases in the model (noting that ITNs alone are not predicted to eradicate malaria in our model). No distinctions were made between national government spending and international donor funding, as the purpose of the analysis was to look at resource allocation and not to recommend specific internal and external funding choices.
All analyses were conducted in R v. 4.0.5 (R Foundation for Statistical Computing, Vienna, Austria). The sf (v. 0.9-8, Pebesma, 2018), raster (v. 3.4-10, Hijmans and Van Etten, 2012), and terra (v. 1.3-4, Hijmans et al., 2022) packages were used for spatial data manipulation. The akima package (v. 0.6-2.2, Akima et al., 2022) was used for surface development, and the GenSA package (v. 1.1.7, Gubian et al., 2023) for model optimization.
Appendix 1
Mathematical models
Overview
In this paper, we use an existing deterministic, compartmental, mathematical model of P. falciparum malaria transmission between humans and mosquitoes, which was originally calibrated to age-stratified data from settings across sub-Saharan Africa (Griffin et al., 2016). We also developed a deterministic version of an existing individual-based model of P. vivax transmission, originally calibrated to data from Papua New Guinea but also shown to reproduce global epidemiological patterns (White et al., 2018). Both models are structured by age and heterogeneity in exposure to mosquito bites, and allow for the presence of maternal immunity at birth and naturally acquired immunity across the life course. The mosquito and vector control components are modeled identically in both models, except for the force of infection acting on mosquitoes. A diagram of the model structures with human and adult mosquito components is shown in Appendix 1—figure 1.
Population and transmission dynamics were modeled separately for both species, assuming they are independent of each other, because the epidemiological significance of biological interactions between the parasites within hosts remains unclear (Mueller et al., 2013).
Note that while the term ‘individuals’ may be used in descriptions, the models are compartmental and do not track individuals; compartments represent the average number of people in a given state.
Human demography
In both the P. falciparum and P. vivax models, the aging process in the human population follows an exponential distribution. Humans can reach a maximum age of 100 years and experience a constant death rate of 1/21 per year based on the assumed median age of the population. The birth rate was assumed to equal the mortality rate so that the population remains stable over time. Demographic changes over time are, therefore, not accounted for.
In all following sections, human demographic processes are omitted from the equations of the transmission models and the immunity models for simplicity. All compartments experience the same constant mortality rate, while all births occur in the susceptible compartment.
Heterogeneity in mosquito biting rates
In both models, exposure to mosquito bites is assumed to depend on age, due to varying body surface area and behavioral patterns.
The relative biting rate at age $a$ is calculated as:
Where $\rho $ and ${a}_{0}$ are estimated parameters determining the relationship between age and biting rate.
Additionally, the human population in the model is stratified according to a lifetime relative biting rate $\zeta $, which represents the heterogeneity in exposure to mosquito bites occurring at various spatial scales, for example, due to the attractiveness of humans to mosquitoes, housing standards, and proximity to mosquito breeding sites, and is described by a lognormal distribution with a mean of 1, as follows:
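The two exposure terms above can be sketched as follows. The age profile uses the functional form $1 - \rho e^{-a/a_0}$ common to this model family, with placeholder (not fitted) parameter values; the lognormal draw uses a log-scale mean of $-\sigma^2/2$ so the distribution of $\zeta$ has mean 1, as required.

```python
import math
import random

def rel_biting_rate(a, rho=0.85, a0=8.0):
    """Age-dependent relative biting rate; rho and a0 here are
    illustrative placeholders, not the model's fitted estimates."""
    return 1 - rho * math.exp(-a / a0)

def sample_zeta(sigma2, rng):
    """Draw a lifetime relative biting rate zeta from a lognormal with
    mean 1, which requires a log-scale mean of -sigma2 / 2."""
    return rng.lognormvariate(-sigma2 / 2, math.sqrt(sigma2))
```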
P. falciparum human model component
In the P. falciparum model, humans move through six infection states and are present in only one of these states at each timestep: susceptible (S), untreated symptomatic infection (D), successfully treated symptomatic infection (T), asymptomatic infection (A), asymptomatic subpatent infection (U), and prophylaxis (P). Individuals in the model are born susceptible to infection but are temporarily protected by maternal immunity during the first six months of life. Humans are exposed to infectious bites from mosquitoes and are infected at a rate Λ, representing the force of infection from mosquitoes to humans. The force of infection depends on an individual's pre-erythrocytic immunity, the age-dependent biting rate, and the mosquito population size and level of infectivity. Following a latent period, d_{E}, and depending on clinical immunity levels, a proportion ϕ of infected individuals develop clinical disease, while the remainder move into the asymptomatic infection state. A proportion f_{T} of those with clinical disease are successfully treated. Treated individuals recover from infection at the rate r_{T} and move to the prophylaxis state, which represents a period of drug-dependent partial protection from reinfection. Recovery from untreated symptomatic infection to the asymptomatic infection state occurs at a rate r_{D}, while those with asymptomatic infection develop a subpatent infection at a rate r_{A}. Those with subpatent infection and prophylaxis then clear infection and return to the susceptible state at rates r_{U} and r_{P}, respectively. In the susceptible compartment, reinfection can occur, while asymptomatic and subpatent infections are also susceptible to superinfection, potentially giving rise to further clinical cases. P. falciparum parameters are listed in Appendix 1—table 1.
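The state flows above can be sketched as a highly simplified, non-age-structured system of derivatives for S, D, T, A, U, and P, with a fixed force of infection and ignoring immunity, superinfection, and the latent period. Rate names follow the text; the numeric defaults are placeholders, not the fitted parameters.

```python
def deriv(y, lam, phi=0.3, f_T=0.4, r_T=1/5, r_D=1/5,
          r_A=1/200, r_U=1/100, r_P=1/25):
    """Simplified flows between the six P. falciparum human states.
    `lam` is a fixed force of infection; all rates are placeholders."""
    S, D, T, A, U, P = y
    new_inf = lam * S
    dS = -new_inf + r_U * U + r_P * P
    dD = (1 - f_T) * phi * new_inf - r_D * D        # untreated clinical
    dT = f_T * phi * new_inf - r_T * T              # treated clinical
    dA = (1 - phi) * new_inf + r_D * D - r_A * A    # asymptomatic
    dU = r_A * A - r_U * U                          # subpatent
    dP = r_T * T - r_P * P                          # prophylaxis
    return (dS, dD, dT, dA, dU, dP)
```

Since births and deaths are omitted here, the flows conserve the total population (the derivatives sum to zero).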
The human component of the model is described by the following set of partial differential equations with regard to time $t$ and age $a$:
Note that age- and time-dependence in state variables and parameters, as well as mortality and birth rates, are omitted in the equations for clarity.
Accounting for the heterogeneity and age-dependence in mosquito biting rates described above, the force of infection $\mathrm{\Lambda}\left(a,t\right)$ and the EIR $\mathrm{\epsilon}\left(a,t\right)$ for age $a$ at time $t$ are given by:
Where ${\epsilon}_{0}$ is the mean entomological inoculation rate (EIR) experienced by adults at time $t$, and $b$ is the probability that a human will be infected when bitten by an infectious mosquito.
The mean EIR experienced by adults is represented by:
Where $\alpha $ is the mosquito biting rate in humans, ${I}_{M}$ is the compartment for adult infectious mosquitoes (see vector model component), and $\omega $ is a normalization constant for the biting rate over various age groups with a population age distribution of $\eta \left(a\right)$, as follows.
The probability of infection b, probability of clinical symptomatic disease ϕ, and recovery rate from asymptomatic infection r_{A} all depend on immunity levels. The acquisition and decay of naturally acquired immunity is tracked dynamically in the model and is driven by both age and exposure. Naturally acquired immunity affects three different outcomes in the model, leading to: (1) a reduced probability of developing a blood-stage infection following an infectious bite due to pre-erythrocytic immunity, ${I}_{B}$ , (2) a reduced probability of progression to clinical disease following infection, dependent on exposure-driven and maternally acquired clinical immunity, ${I}_{CA}$ and ${I}_{CM}$ , and (3) a reduced probability of a blood-stage infection being detected by microscopy, dependent on acquired immunity to the detectability of infection, ${I}_{D}$ .
The following partial differential equations represent exposure-driven immunity levels at time $t$ and age $a$.
Pre-erythrocytic immunity:
Clinical immunity:
Detection immunity:
Where $u$ parameters represent a refractory period during which the different types of immunity cannot be further boosted after receiving a boost, and where $d$ parameters stand for the mean duration of the different types of immunity.
Maternal immunity is acquired and lost as follows:
Where ${d}_{CM}$ is the average duration of maternal immunity, ${P}_{CM}$ is the proportion of the mother's clinical immunity acquired by the newborn, and ${I}_{CA}\left(t,20\right)$ denotes the clinical immunity level of a 20-year-old woman.
Immunity levels are converted into time- and age-dependent probabilities using Hill functions.
The probability that a human will be infected when bitten by an infectious mosquito, $b$, can be represented as:
Where ${b}_{0}$ is the maximum probability of infection (with no immunity), ${b}_{1}$ is the maximum relative reduction in the probability of infection due to immunity, and ${I}_{B0}$ and ${\kappa}_{B}$ are scale and shape parameters estimated during model fitting.
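The Hill-function conversion can be sketched using the infection probability $b$ as the example. The functional form follows the description above; the parameter values are illustrative placeholders, not the fitted estimates.

```python
def prob_infection(I_B, b0=0.59, b1=0.5, I_B0=43.9, kappa_B=2.16):
    """Hill function b = b0 * (b1 + (1 - b1) / (1 + (I_B / I_B0)**kappa_B)):
    the infection probability falls from b0 toward b0 * b1 as
    pre-erythrocytic immunity I_B grows. Parameter values are placeholders."""
    return b0 * (b1 + (1 - b1) / (1 + (I_B / I_B0) ** kappa_B))
```

With no immunity ($I_B = 0$) the function returns the maximum probability $b_0$, and it decreases monotonically with immunity, matching the behavior described in the text.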
The probability of a new bloodstage infection becoming symptomatic, $\varphi $, is represented by:
Where ${\varphi}_{0}$ is the maximum probability of becoming symptomatic (with no immunity), ${\varphi}_{1}$ is the maximum relative reduction in the probability of becoming symptomatic due to immunity, and ${I}_{C0}$ and ${\kappa}_{C}$ are scale and shape parameters, respectively.
Immunity can also lead to bloodstage infections becoming subpatent with low parasitemias. The probability that an asymptomatic infection is detectable by microscopy, $q$, is represented by:
Where ${d}_{1}$ is the minimum probability of detectability (with full immunity), and ${I}_{D0}$ and ${\kappa}_{D}$ are scale and shape parameters, respectively. ${f}_{D}$ is an agedependent function modifying the detectability of infection:
With γ_{D} and $a$_{D} representing shape and scale parameters, and ${f}_{D0}$ representing the timescale at which immunity changes with age.
P. vivax human model component
In the P. vivax model, acquisition of and recovery from blood-stage infection in the absence of treatment is also represented by six compartments: susceptible (S), untreated symptomatic infection (I_{D}), successfully treated symptomatic infection (T), asymptomatic patent blood-stage infection detectable by light microscopy (I_{LM}), asymptomatic submicroscopic infection not detectable by light microscopy but detectable by PCR (I_{PCR}), and prophylaxis (P). Additionally, the model represents the liver stage of P. vivax infection by tracking average hypnozoite batches in the population. Hypnozoites can form after an infectious bite and remain dormant in the liver for up to several years, after which they can give rise to relapse blood-stage infections. P. vivax parameters are listed in Appendix 1—table 2.
New blood-stage infections can, therefore, originate from either mosquito bites or relapses and are represented by the force of infection ${\lambda}_{H}^{0}$ . The force of infection depends on the age-dependent biting rate, the mosquito population size and its level of infectivity, the probability of infection resulting from an infectious bite, the latent period between sporozoite inoculation and development of blood-stage merozoites, d_{E}, as well as relapse infections from the liver stage. Upon infection, a proportion ${\Phi}_{LM}$ of humans develop infection detectable by light microscopy (LM), while the remainder have low-density parasitemia and move into the I_{PCR} compartment. A proportion ${\Phi}_{D}$ of those with LM-detectable infection develop a clinical episode, of which a proportion ${{\rm X}}_{T}$ are successfully treated with a blood-stage antimalarial. Treated individuals recover from infection at rate r_{T} and move to the prophylaxis state, which provides temporary protection from reinfection before individuals become susceptible again at a rate r_{P}. Recovery from clinical disease to asymptomatic LM-detectable infection, from asymptomatic LM-detectable infection to asymptomatic PCR-detectable infection, and from asymptomatic PCR-detectable infection to susceptibility occurs at rates r_{D}, r_{LM}, and r_{PCR}, respectively. Newborns are susceptible to infection, have no hypnozoites, and are temporarily protected by maternal immunity. Reinfection is possible after recovery, and those with asymptomatic blood-stage infections (I_{LM} and I_{PCR}) are susceptible to superinfection, potentially giving rise to further clinical cases.
The dynamics of hypnozoite infection in the model describe the accumulation and clearance of $k$ batches of hypnozoites in the liver, whereby each new (super)infection from an infectious mosquito bite creates a new batch. This process occurs for each model compartment and is described in detail in the original publication (White et al., 2018). Hypnozoites from any batch can reactivate and cause a relapse at a rate $kf$, and batches are cleared at a constant rate $k{\gamma}_{L}$ , which reduces the number of batches from $k$ to $k-1$. For computational efficiency, the possible number of batches in the population must be limited to a maximum value $K$, so that superinfections among the population with $k=K$ do not lead to an increase in hypnozoite batch numbers. We assumed a maximum batch number of 2, which increased computational efficiency and aligned with modeled distributions of hypnozoite batch numbers in the population for the simulated low transmission intensities.
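The batch bookkeeping above can be sketched as a pair of per-host rates: a host carrying $k$ batches relapses at rate $kf$ and loses a batch at rate $k\gamma_L$, up to the cap $K = 2$ assumed in the paper. The numeric defaults for $f$ and $\gamma_L$ are illustrative placeholders, not the fitted values.

```python
def batch_rates(k, f=1/41, gamma_L=1/383, K=2):
    """Return (relapse_rate, clearance_rate) for a host carrying k
    hypnozoite batches; rates scale linearly with k, and k is capped
    at K. f and gamma_L defaults are placeholders."""
    assert 0 <= k <= K
    return (k * f, k * gamma_L)
```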
The human component of the model is then described by the following set of partial differential equations with regard to time $t$ and age $a$:
Where $fk$ and ${\gamma}_{L}k$ are the relapse and clearance rates of hypnozoite batch $k$, respectively. Age- and time-dependence in state variables and parameters, as well as mortality and birth rates, are omitted in the equations for clarity.
The equations reflect the accumulation of hypnozoite batches from $k$ to $k+1$ due to infections arising from new infectious bites (${\lambda}_{H}^{0}$), but not due to relapse infections ($fk$). The total force of bloodstage infection is, therefore:
Similar to the P. falciparum model, the force of infection from mosquito bites accounts for heterogeneity and age-dependence in mosquito biting rates as follows:
Where ${\epsilon}_{0}$ is the mean entomological inoculation rate (EIR) experienced by adults at time $t$, and $b$ is the probability that a human will be infected when bitten by an infectious mosquito. In the P. vivax model, $b$ is a constant and does not depend on immunity levels. In the calculation of the mean EIR experienced by adults, $\alpha $ is the mosquito biting rate in humans, ${I}_{M}$ is the compartment for adult infectious mosquitoes (see vector model component), and $\omega $ is a normalization constant for the biting rate over various age groups with a population age distribution of $\eta \left(a\right)$ .
Transmission dynamics in the model are influenced by anti-parasite (A_{P}) and clinical (A_{C}) immunity against P. vivax. Anti-parasite immunity is assumed to reduce the probability of blood-stage infections achieving a high enough density to be detectable by light microscopy (${\Phi}_{LM}$) and to increase the rate at which submicroscopic infections are cleared (${r}_{PCR}$). Clinical immunity reduces the probability that LM-detectable infections progress to clinical disease (${\Phi}_{D}$). As for P. falciparum, the dynamics of the acquisition and decay of naturally acquired immunity in the model depend on age and exposure. For P. vivax, immunity levels are boosted by both primary infections and relapses and are described by the following set of partial differential equations with regard to time $t$ and age $a$:
Anti-parasite immunity:
Clinical immunity:
Where $u$ parameters represent a refractory period during which the different types of immunity cannot be further boosted after receiving a boost, and where $r$ parameters stand for the rates of decay of the different types of immunity. $k$ refers to the hypnozoite batch (with $K$ being the maximum number of hypnozoite batches).
The levels of maternally acquired anti-parasite and clinical immunity are calculated as:
Where ${d}_{mat}$ is the average duration of maternal immunity, ${P}_{mat}$ is the proportion of the mother's immunity acquired by the newborn, and ${A}_{P}^{*}\left(t,20\right)$ and ${A}_{C}^{*}\left(t,20\right)$ denote the anti-parasite and clinical immunity levels of a 20-year-old woman averaged over their hypnozoite batches, respectively.
Immunity levels are then converted into time-dependent probabilities using Hill functions.
The probability that a blood-stage infection becomes detectable by LM, ${\Phi}_{LM}$ , can be represented as:
Where ${\Phi}_{LM,min}$ is the minimum probability of LMdetectable infection (with full immunity), ${\Phi}_{LM,max}$ is the maximum probability of LMdetectable infection (with no immunity), and ${A}_{LM,50\%}$ and ${K}_{LM}$ are scale and shape parameters estimated during model fitting.
The probability of an LM-detectable blood-stage infection becoming symptomatic, ${\Phi}_{D}$ , is represented by:
Where ${\Phi}_{D,min}$ is the minimum probability of developing a clinical episode (with full immunity), ${\Phi}_{D,max}$ is the maximum probability of a clinical episode (with no immunity), and ${A}_{D,50\%}$ and ${K}_{D}$ are scale and shape parameters.
The recovery rate from I_{PCR} is calculated as $\frac{1}{{d}_{PCR}^{k}}$ . The average duration of a low-density blood-stage infection, ${d}_{PCR}^{k}$ , is represented by:
Where ${d}_{PCR,min}$ is the minimum duration (with full immunity), ${d}_{PCR,max}$ is the maximum duration (with no immunity), and ${A}_{PCR,50\%}$ and ${K}_{PCR}$ are scale and shape parameters.
Mosquito component of the P. falciparum and P. vivax model
The mosquito components of the P. falciparum and P. vivax models capture adult mosquito transmission dynamics, as well as larval population dynamics, and are nearly identical. Modeled vector bionomics correspond to Anopheles gambiae s.s. and Anopheles punctulatus for P. falciparum and P. vivax transmission, respectively.
Mosquito transmission model
Adult mosquitoes move between three states, ${S}_{M}$ (susceptible), ${E}_{M}$ (exposed), and ${I}_{M}$ (infectious), as follows:
In these equations, the force of infection from humans to mosquitoes drives the transition from ${S}_{M}$ to ${E}_{M}$, $\beta \left(t\right)$ represents the time-varying adult mosquito emergence rate, μ is the adult mosquito death rate, and ${\tau}_{M}$ represents the extrinsic incubation period. ${P}_{M}$ represents the probability that a mosquito survives between being infected and sporozoites appearing in the salivary glands and is calculated as $\mathrm{exp}\left(-\mu {\tau}_{M}\right)$.
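The adult mosquito dynamics can be sketched as a simplified susceptible-exposed-infectious system. For simplicity this sketch uses an exponentially distributed incubation period of mean tau_M in place of the fixed delay (and its survival term P_M) used in the paper; the numeric defaults are placeholders.

```python
def mosquito_deriv(y, lam_M, beta, mu=0.132, tau_M=10.0):
    """Simplified S_M -> E_M -> I_M flows: emergence at rate beta,
    infection of susceptibles at rate lam_M, progression to infectious
    at rate 1/tau_M, and death at rate mu in every state."""
    S_M, E_M, I_M = y
    dS = beta - lam_M * S_M - mu * S_M
    dE = lam_M * S_M - E_M / tau_M - mu * E_M
    dI = E_M / tau_M - mu * I_M
    return (dS, dE, dI)
```

Summing the three derivatives recovers the population balance: total change equals emergence minus deaths.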
The force of infection experienced by the vector is the sum of the contribution to mosquito infections from all human infectious states. As described for the human model components for both species, it also depends on the mosquito biting rate in humans (which depends on net usage), $\alpha $, and a normalization constant for the biting rate over various age groups, $\omega $.
Force of infection experienced by mosquitoes in the P. falciparum model
In the P. falciparum model, the force of infection acting on mosquitoes is represented by:
Where ${c}_{D}$ , ${c}_{T}$ , ${c}_{A}$ , and ${c}_{U}$ represent the humantomosquito infectiousness for untreated symptomatic infection, treated symptomatic infection, asymptomatic infection, and asymptomatic subpatent infection, respectively. ${\tau}_{1}$ is the time lag between parasitemia with asexual parasite stages and gametocytemia to account for the time to P. falciparum gametocyte development.
The infectiousness of humans with asymptomatic infection, ${c}_{A}$ , is reduced by a lower probability of detection of infection by microscopy due to the assumption that lower parasite densities are less detectable. While infectiousness parameters ${c}_{D}$ and ${c}_{U}$ are constant, infectivity for asymptomatic infection is calculated as follows:
Where $q$ is the immunitydependent probability that an asymptomatic infection is detectable by microscopy (Equation 14) and the parameter γ_{1} was estimated during the original model fitting in previous publications (Griffin et al., 2010; Griffin et al., 2014; Griffin et al., 2016).
Force of infection experienced by mosquitoes in the P. vivax model
In the P. vivax model, the force of infection acting on mosquitoes is represented by:
Where ${c}_{D}$ , ${c}_{T}$ , ${c}_{LM}$ , and ${c}_{PCR}$ represent the human-to-mosquito infectiousness for untreated symptomatic infection, treated symptomatic infection, asymptomatic LM-detectable infection, and asymptomatic PCR-detectable infection, respectively. Due to the quicker development of P. vivax gametocytes compared to P. falciparum, there is assumed to be no delay between infection and infectiousness in humans.
Larval development
For both P. falciparum and P. vivax, the larval stage model, shown in the following equations, is based on the previously described model in White et al., 2011. Female adult mosquitoes lay eggs at a rate ${\beta}_{L}$ . Upon hatching from eggs, larvae progress through early and late larval stages ($E$ and $L$ compartments) before developing into the pupal stage ${P}_{L}$ . Adult female mosquitoes emerge from the pupal stage at the rate $\beta$ in Equation (29), which is calculated as $\beta =0.5\frac{{P}_{L}}{{d}_{P}}$.
The duration of each larval stage is represented by ${d}_{E}$ , ${d}_{L}$, and ${d}_{P}$. The larval stages are regulated by density-dependent mortality rates, with a time-varying carrying capacity, $K$, that represents the ability of the environment to sustain breeding sites through different periods of the year, and with the density of larvae in relation to the carrying capacity regulated by a parameter $\gamma $. Since seasonality in transmission dynamics was not modeled at the country level in this analysis, the carrying capacity was assumed to be constant throughout the year. The carrying capacity determines the mosquito density and hence the baseline transmission intensity in the absence of interventions. It is calculated as:
Where ${M}_{0}$ is the initial female mosquito density, ${\mu}_{0}$ is the baseline mosquito death rate, and ${\lambda}_{M}$ is defined as:
In this equation, the number of eggs laid per day, ${\beta}_{L}$ , is defined as:
Where ${\beta}_{{L}_{max}}$ is the maximum number of eggs per oviposition per mosquito. The adult mosquito death rate μ and the mosquito feeding rate ${f}_{R}$ are affected by the use of ITNs and further described in the following section on modeling vector control. Full details on the derivation of the egglaying rate ${\beta}_{L}$ and the carrying capacity $K$ have been previously published (White et al., 2011).
Modeling the impact of ITNs
ITNs are modeled as described previously (Griffin et al., 2010; Griffin et al., 2016). Mosquito population and transmission dynamics are affected by the use of ITNs in four ways: the mosquito death rate is increased, the length of the feeding or gonotrophic cycle is increased, the proportion of bites taken on protected and unprotected people is changed, and the proportion of bites taken on humans relative to animals is affected. The probability that a blood-seeking mosquito successfully feeds on a human (as opposed to being repelled or killed) will depend on the species-dependent bionomics and behaviors of the mosquito, as well as the anti-vectoral interventions present in the human population. Parameter values can be found in Appendix 1—table 3.
Mosquito feeding behavior
In the model there are four possible outcomes of a mosquito feeding attempt:
The mosquito bites a non-human host.
The mosquito attempts to bite a human host but is killed by the ITN before biting.
The mosquito successfully feeds on a human host and survives that feeding attempt.
The mosquito attempts to bite a human host but is repelled by the ITN without feeding, and repeats the attempt to find a blood meal source.
We define the probability of a mosquito biting a human host during a single attempt as ${y}_{i}$ , the probability that a mosquito bites a human host and survives the feeding attempt as ${w}_{i}$ , and the probability of a mosquito being repelled without feeding as ${z}_{i}$ . These probabilities exclude natural vector mortality, so that for a population without protection from ITNs (e.g. prior to their introduction), ${y}_{1}={w}_{1}=1$ and ${z}_{1}=0$.
The presence of ITNs modifies these probabilities of surviving a feeding attempt or being repelled without feeding. Upon entering a house with ITNs, mosquitoes can experience three different outcomes: being repelled by the ITN without feeding (probability ${r}_{N}$), being killed by the ITN before biting (probability ${d}_{N}$), or feeding successfully (probability ${s}_{N}$). It is assumed that all biting attempts inside a house occur in humans. The repellency of ITNs in terms of the insecticide and barrier effect decays over time, giving the following probabilities:
Where ${r}_{N0}$ is the maximum probability of a mosquito being repelled by a bed net and ${r}_{NM}$ is the minimum probability of being repelled by a bed net that no longer has insecticidal activity and possibly has holes reducing the barrier effect. ${\gamma}_{N}$ represents the rate of decay of the effect of ITNs over time $t$ since their distribution and is calculated as $\frac{\mathrm{log}\left(2\right)}{\text{LLIN half-life}}$. The killing effect of ITNs decreases at the same constant rate from a maximum probability of ${d}_{N0}$ . In model simulations, ITNs are distributed every three years.
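The decay of the three feeding outcomes can be sketched as follows. The functional form follows the description above (repellency decaying exponentially from its maximum toward its minimum, killing decaying at the same rate from its maximum); the numeric parameter values and half-life are illustrative placeholders, not the fitted estimates.

```python
import math

def itn_probabilities(t, r_N0=0.56, r_NM=0.24, d_N0=0.53, half_life=2.64):
    """Return (repel, kill, feed) probabilities t years after ITN
    distribution; parameter defaults are placeholders."""
    gamma_N = math.log(2) / half_life       # decay rate from the half-life
    decay = math.exp(-gamma_N * t)
    r_N = r_NM + (r_N0 - r_NM) * decay      # repellency: r_N0 -> r_NM
    d_N = d_N0 * decay                      # killing effect: d_N0 -> 0
    s_N = 1 - r_N - d_N                     # successful feed is the remainder
    return (r_N, d_N, s_N)
```

At distribution time the probabilities start at their maxima, and at long times a worn net only repels at the residual rate $r_{NM}$.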
With $i=1$ representing the population not covered by an ITN and $i=2$ representing the population covered by an ITN, this gives the following probabilities of successfully feeding, $W$, and being repelled without feeding, $Z$, during a single feeding attempt on a human:
Where ${c}_{i}$ is the proportion of the population in the respective group, and ${\Phi}_{b}$ is the proportion of bites taken on humans in bed, which was derived from previous publications (Griffin et al., 2010).
During a single feeding attempt (which may be on animals or humans), the average probabilities of mosquitoes feeding or being repelled without feeding, $\bar{W}$ and $\bar{Z}$, are then:
Where ${Q}_{0}$ is the proportion of bites taken on humans in the absence of any vector control intervention.
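As a sketch of this averaging step, the snippet below combines per-group probabilities into the population averages. The per-group forms for net users ($w_2 = 1 - \Phi_b + \Phi_b s_N$, $z_2 = \Phi_b r_N$, i.e. only the in-bed fraction of bites is affected) follow the standard structure of this model family (Griffin et al., 2010), and the values of $\Phi_b$, ${Q}_{0}$, and coverage are hypothetical.

```python
def average_feed_probs(c2, s_N, r_N, Phi_b=0.85, Q0=0.92):
    """Population-average probabilities of a mosquito feeding (W_bar) or
    being repelled (Z_bar) in a single attempt. Parameter values are
    illustrative, not the calibrated estimates."""
    c1 = 1.0 - c2                    # proportion not covered by an ITN
    w1, z1 = 1.0, 0.0                # unprotected hosts: fed on, never repel
    w2 = 1.0 - Phi_b + Phi_b * s_N   # protected: only in-bed bites affected
    z2 = Phi_b * r_N                 # repelled only during in-bed attempts
    W = c1 * w1 + c2 * w2            # average over human hosts
    Z = c1 * z1 + c2 * z2
    W_bar = 1.0 - Q0 + Q0 * W        # include non-human blood meals
    Z_bar = Q0 * Z
    return W_bar, Z_bar
```

With zero coverage the averages reduce to the no-intervention case ($\overline{W}=1$, $\overline{Z}=0$), as in the definitions above.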
Effect of ITNs on mosquito mortality
The average probability of mosquitoes being repelled without feeding in the model affects the mosquito feeding rate, ${f}_{R}$, as follows:
Where ${\delta}_{1}$ is the time spent looking for a blood meal in the absence of vector control, and ${\delta}_{2}$ is the time spent resting between blood meals, which is assumed to be unaffected by ITN usage.
The average probabilities of feeding or being repelled also affect the probability of surviving the period of feeding, ${p}_{1}$, as follows:
Where ${\mu}_{0}$ is the baseline mosquito death rate in the absence of interventions.
The probability of surviving the period of resting, ${p}_{2}$ , is not affected by ITNs:
This allows the mosquito mortality rate governing mosquito population dynamics to be calculated in the set of Equations (29):
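The chain from repellency to mortality can be illustrated as follows. The combination of baseline feeding survival with repeated, possibly repelled, attempts in $p_1$ mirrors the published model structure (Griffin et al., 2010), and all parameter values are placeholders. A useful sanity check is that without ITNs ($\overline{W}=1$, $\overline{Z}=0$) the mortality rate reduces to the baseline ${\mu}_{0}$.

```python
import math

def mosquito_rates(W_bar, Z_bar, delta1=0.69, delta2=2.31, mu0=0.132):
    """Feeding rate and mortality under ITN pressure; parameter values
    are illustrative placeholders."""
    f_R = 1.0 / (delta1 / (1.0 - Z_bar) + delta2)  # repellency lengthens host search
    p1_0 = math.exp(-mu0 * delta1)                 # baseline survival while feeding
    p1 = p1_0 * W_bar / (1.0 - Z_bar * p1_0)       # survive feeding period given ITNs
    p2 = math.exp(-mu0 * delta2)                   # resting survival, unchanged by ITNs
    mu = -f_R * math.log(p1 * p2)                  # per-capita mortality rate
    return f_R, p1, p2, mu
```

ITN pressure both slows the feeding cycle and raises mortality above ${\mu}_{0}$.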
Effect of ITNs on the force of infection acting on humans and mosquitoes
In the presence of ITNs, the anthropophagy (the proportion of successful bites which are on humans) of mosquitoes is represented by parameter $Q$. This is affected by ITN usage as follows:
Further details on the assumptions in this calculation can be found in an earlier publication (Griffin et al., 2010).
This then gives the biting rate on humans, $\alpha$, as it appears in the equations for the force of infection experienced by humans and by mosquitoes (Equations 4–6):
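A short sketch of the anthropophagy adjustment, assuming (as in the source model) that bites on non-human hosts are unaffected by ITNs, so their share of successful bites grows as $\overline{W}$ falls; the resulting human biting rate is taken here as $Q$ times the feeding rate, as a simplified illustration.

```python
def human_biting(W_bar, f_R, Q0=0.92):
    """Proportion of successful bites on humans (Q) and resulting biting
    rate on humans (alpha); Q0 is an illustrative value."""
    Q = 1.0 - (1.0 - Q0) / W_bar  # non-human bites unaffected, so their
                                  # share grows as W_bar falls
    alpha = Q * f_R               # bites on humans per mosquito per unit time
    return Q, alpha
```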
Effect of ITNs on larval development
The mosquito death rate $\mu$ and the feeding rate ${f}_{R}$ also influence the calculation of the carrying capacity $K$ and the egg-laying rate ${\beta}_{L}$ in Equation 34 and Equation 36, thereby affecting larval development.
Assumptions in model outcomes
Model dynamics over time
To represent long-term reductions in clinical burden, model simulations were run until a new equilibrium was reached post-intervention for all ITN usage levels, which corresponded to 75 years for P. falciparum and 175 years for P. vivax. As shown in Appendix 1—figure 2, when ITNs are continuously distributed over time, clinical incidence outcomes initially fluctuate before reaching a long-term equilibrium, due to various effects on population immunity and mosquito population dynamics in the model. For example, in high-transmission P. falciparum settings, clinical incidence declines steeply after ITN introduction, before gradually rebounding to an equilibrium value (Appendix 1—figure 2A). In the P. vivax model, stabilization at an equilibrium transmission level was further delayed by the presence of hypnozoites in a deterministic framework, whereby even an extremely small reservoir can lead to rebounds in clinical infections after decades. To limit P. vivax simulations to a computationally feasible time period, we prevented this rebound by assuming that once hypnozoite prevalence falls below 1 in 100,000 in the population, the reservoir is further depleted and cannot lead to a renewed chain of transmission (Appendix 1—figure 2B).
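This stopping rule can be stated compactly; the function below is an illustrative sketch of the threshold logic, not the model code itself.

```python
def deplete_hypnozoite_reservoir(hypnozoite_prev, threshold=1e-5):
    """Stopping rule used for P. vivax simulations: once hypnozoite
    prevalence falls below 1 in 100,000, the reservoir is set to zero
    so it cannot re-seed transmission decades later."""
    return 0.0 if hypnozoite_prev < threshold else hypnozoite_prev
```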
Clinical incidence and assumptions about case detection
We simulated clinical incidence assuming cases would be detected through weekly active case detection (ACD). ACD is a more sensitive method of assessing disease burden and was used in the majority of the studies from which the P. falciparum and P. vivax models were calibrated (Griffin et al., 2010; White et al., 2018). This assumption results in higher case incidence than reported case numbers, because not everyone seeks care at a health clinic for a clinical episode (Griffin et al., 2010). As estimated in previous publications, weekly ACD was assumed to detect 72.3% and 13.4% of the P. falciparum and P. vivax clinical cases detectable by daily ACD, respectively (Griffin et al., 2010; White et al., 2018; Battle et al., 2015).
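These detection fractions translate into a simple scaling of the cases detectable by daily ACD; a minimal sketch:

```python
def weekly_acd_cases(daily_acd_cases, species):
    """Convert cases detectable by daily ACD into those detected by weekly
    ACD, using the fractions quoted above (72.3% for P. falciparum,
    13.4% for P. vivax)."""
    fraction = {"falciparum": 0.723, "vivax": 0.134}[species]
    return fraction * daily_acd_cases
```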
Country-level data and modeling assumptions on the global malaria distribution
To represent the global distribution of malaria, a P. falciparum prevalence in 2–10-year-olds (PfPR_{2–10}) (2000) raster layer (Weiss et al., 2019) was clipped to a P. falciparum transmission spatial limits (2010) raster layer (Gething et al., 2011) obtained from the Malaria Atlas Project. Country shapefiles, obtained from geoBoundaries (Runfola et al., 2020), were overlaid on prevalence estimates, and the mean PfPR_{2–10} within each boundary was calculated. A similar process was completed for P. vivax using PvPR_{0–99} (2000) and P. vivax transmission spatial limits (2010) raster layers (Battle et al., 2019). WorldPop gridded 2000 global population estimates (Tatem, 2017) were summed within boundaries to obtain the total population at risk of malaria infection living within each country. For both species, parasite prevalence was then matched to the modeled EIR associated with the closest prevalence estimate. The group of countries with the lowest transmission intensity included those with an EIR of 0.001 or lower.
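The matching step amounts to a nearest-neighbour lookup from country prevalence to modeled EIR. The sketch below uses a small hypothetical prevalence-to-EIR table, not the model's actual equilibrium surface.

```python
def match_eir(country_prev, modelled):
    """Assign a country the modelled EIR whose equilibrium prevalence is
    closest to the country's estimated prevalence. `modelled` maps EIR to
    equilibrium parasite prevalence (values here are hypothetical)."""
    return min(modelled, key=lambda eir: abs(modelled[eir] - country_prev))

# Hypothetical EIR -> equilibrium prevalence lookup
surface = {0.001: 0.002, 0.1: 0.03, 1.0: 0.12, 10.0: 0.35, 100.0: 0.55}
```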
In our analysis, we assumed that most of sub-Saharan Africa was not endemic for P. vivax, because P. vivax prevalence and incidence could not be estimated there (Battle et al., 2019). Although there is evidence of low-level P. vivax endemicity throughout the continent, there is no routine surveillance for non-P. falciparum cases, and the high prevalence of the Duffy-negative phenotype among African populations protects against endemic transmission of P. vivax (Battle et al., 2019). Our estimates for the population at risk of P. vivax malaria therefore exclude most of sub-Saharan Africa (except the Horn of Africa).
Although model simulations were matched to country-level prevalence, we did not aim to capture the wide geographic variation in malaria epidemiology in detail. For example, in all simulations with the P. vivax model, we fixed the relapse and hypnozoite clearance rates based on the original parameter values used in the model calibrated to Papua New Guinea (White et al., 2018). The timing of relapses is thought to follow different patterns in different geographical areas, with a particular distinction between tropical strains relapsing quickly after initial infection and temperate strains relapsing only after 6–12 months (Battle et al., 2014). Nevertheless, projections from the model calibrated to subnational Papua New Guinean data have been shown to be in line with global epidemiological patterns at various prevalence levels (White et al., 2018). Similarly, we did not account for geographic variation in the dominant malaria vector species, which are particularly diverse across P. vivax-endemic areas (Sinka et al., 2012).
In all model simulations and analyses, we assumed infections with the two parasite species to be independent, in line with the presentation of estimates from the Malaria Atlas Project. In each setting, we therefore took total malaria cases to be the sum of modeled P. falciparum and P. vivax cases, total malaria prevalence to be the sum of P. falciparum and P. vivax parasite prevalence, and total EIR to be the sum of the average P. falciparum- and P. vivax-infectious bites per person-year. Because the geographical areas endemic for the two species overlap in many locations, we took the population at risk of malaria in each setting to be the larger of the populations at risk of P. falciparum and of P. vivax.
Relationship between distribution and usage of ITNs
As described in the manuscript, the nonlinear relationship between costs and ITN usage was accounted for by converting the modeled population usage into the number of ITNs that would need to be distributed to achieve this usage. For this, a published methodology was used; full assumptions and definitions can be found in the original publication (Bertozzi-Villa et al., 2021). The equations are detailed below, and parameter values for their application in this paper are summarized in Appendix 1—table 4.
First, the simulated ITN usage was converted into ITN population access based on observed ITN use rates. By definition:
Since access in the population cannot exceed 1, the modeled ITN usage could not be higher than the assumed use rate.
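This first conversion and its cap can be sketched as follows; the use rate shown is a placeholder rather than the observed value used in the analysis.

```python
def usage_to_access(usage, use_rate=0.88):
    """Convert modelled population ITN usage to the implied population
    access (usage = access * use_rate). Since access cannot exceed 1,
    usage cannot exceed the assumed use rate. The use rate here is a
    placeholder value."""
    access = usage / use_rate
    if access > 1.0:
        raise ValueError("modelled usage exceeds the assumed use rate")
    return access
```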
Second, a Loess curve was fitted to 2020 data on net access and nets per capita per country-month from across Africa, reproducing a relationship similar to that shown in the original publication (Appendix 1—figure 3; Bertozzi-Villa et al., 2021). The net access derived for a given usage was then converted into nets per capita using the Loess curve. We extrapolated the trend for higher access levels and assumed that all access levels below the minimum observed would require the same nets per capita (i.e. the same cost) to achieve.
Lastly, the nets per capita were converted into nets distributed per person-year, accounting for net retention over time and assuming a distribution frequency of every 3 years. As in the original publication, ITNs were assumed to be lost from the population following a smooth compact function after distribution, so that the proportion of nets retained over time, $p\left(t\right)$, equals:
Where $\kappa $ is a fitted rate parameter estimated from the data in the original publication. $\tau $ determines the time by which no nets are retained in the population, and was estimated from the assumed net halflife, as follows:
Integrating the net loss function over a distribution cycle then allows the annual nets distributed per capita to be derived:
Where $DF$ represents the distribution frequency.
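The retention function and the integration step can be sketched numerically. The smooth compact form below and the relationship setting $\tau$ from the net half-life (so that $p(\text{half-life}) = 0.5$) follow the description above; $\kappa$ and the half-life are placeholders for the fitted values, and dividing by the mean retention over a cycle is one plausible reading of the integration step, not necessarily the authors' exact formula.

```python
import math

def prop_nets_retained(t, half_life=1.64, kappa=18.39):
    """Smooth compact net-loss function:
    p(t) = exp(kappa - kappa / (1 - (t/tau)^2)) for t < tau, else 0.
    tau is chosen so that p(half_life) = 0.5; kappa and half_life are
    placeholder values."""
    tau = half_life / math.sqrt(1.0 - kappa / (kappa + math.log(2)))
    if t >= tau:
        return 0.0
    return math.exp(kappa - kappa / (1.0 - (t / tau) ** 2))

def annual_nets_per_capita(npc, dist_freq=3.0, n=3000):
    """Assumed reading of the integration step: time-averaged nets per
    capita over a distribution cycle = amount distributed x mean
    retention, so annual distribution = npc / mean retention."""
    dt = dist_freq / n
    # midpoint-rule integral of p(t) over one distribution cycle
    mean_retention = sum(prop_nets_retained((i + 0.5) * dt) for i in range(n)) * dt / dist_freq
    return npc / mean_retention
```

Because mean retention over the cycle is below one, the nets distributed annually exceed the target time-averaged nets per capita.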
Optimization model
The mathematical problem consists of finding the allocation $b$ of ITNs that minimizes global malaria cases, i.e., the sum over EIR settings $i$ of the population ${p}_{i}$ multiplied by the clinical incidence ${cinc}_{i}$. In the objective function, we also allow for the option of placing a positive premium on settings reaching a pre-elimination phase (defined as a clinical incidence of less than 1 case per 1000 persons at risk), in addition to minimizing the global malaria case burden. This premium accounts for potential benefits of reaching low levels of malaria transmission that go beyond the reduction in cases, e.g., general health system strengthening. For each setting reaching pre-elimination, the total remaining cases are reduced by a proportion $w$ of the total cases averted by the ITN allocation relative to baseline (no intervention), denoted $C$. The weight $w$ therefore represents the value placed on reaching pre-elimination in a setting relative to total case reduction. In the scenario optimized for case reduction alone, this weight equals 0.
This optimization must respect the budget constraint that the total cost of the ITNs ${b}_{i}$ distributed across EIR settings, with $c$ being the cost of a single pyrethroid-treated net, must be less than or equal to the total budget $B$. In addition, the ITN usage ${b}_{i}^{*}$ in each setting $i$ must be between 0% and an upper limit of 80%, a common target for universal access (Koenker et al., 2018). Note that in our model the ITNs distributed, ${b}_{i}$, are not the same as the ITN usage, ${b}_{i}^{*}$, because only a fraction of distributed ITNs will be used over time. We represent with $f\left({b}_{i}\right)$ the function that maps ITNs distributed to ITNs used (see ‘Relationship between distribution and usage of ITNs’ for more details):
Optimization was performed using generalized simulated annealing via the GenSA R package (v.1.1.7) (Xiang et al., 2013). GenSA accepts a nonlinear objective function and searches a given space for the global minimum. It can handle objective landscapes containing multiple local minima by simulating an annealing process, using the stochasticity of a temperature parameter to escape local minima and continue the search for the global minimum (Xiang et al., 2013). Because many different combinations of ITN usage levels across settings can lead to small case numbers, our objective function has many local minima. We therefore followed the suggestion in Xiang et al., 2013 to use a high temperature value of 10^{6}, and increased the maximum number of iterations from the default of 5 × 10^{4} to 5 × 10^{6}.
Since this version of the algorithm is not designed for constrained optimization, we transformed the problem into an unconstrained optimization by introducing a penalty term in the objective function. The unconstrained problem without the pre-elimination premium can be represented as:
That is, the objective function takes a very high value whenever the budget constraint is violated, so that the simulated annealing algorithm discards all solutions outside the budgetary constraints.
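A minimal pure-Python stand-in for this penalized optimization is sketched below, using toy incidence curves, unit net costs, and a simple annealing loop. It is not GenSA, and the cooling schedule and step size are arbitrary choices, but it illustrates the penalty transformation and the temperature-based acceptance rule described above.

```python
import math
import random

def total_cases(usage, pops, cinc):
    """Global cases for per-setting ITN usage levels; `cinc` maps a
    setting's usage to clinical incidence (toy curves in the test)."""
    return sum(p * f(u) for p, u, f in zip(pops, usage, cinc))

def penalised_objective(usage, pops, cinc, cost_per_usage, budget, big=1e15):
    """Budget constraint folded into the objective as a penalty term,
    mirroring the unconstrained transformation described above."""
    cost = sum(c * u for c, u in zip(cost_per_usage, usage))
    return total_cases(usage, pops, cinc) + (big if cost > budget else 0.0)

def anneal(objective, n, upper=0.8, iters=20000, temp0=1e6, seed=1):
    """A minimal simulated-annealing search over usage in [0, upper] per
    setting; a stand-in for the GenSA algorithm used in the paper."""
    rng = random.Random(seed)
    x = [0.0] * n
    f = objective(x)
    best, fbest = list(x), f
    for k in range(1, iters + 1):
        temp = temp0 / k                      # simple cooling schedule
        cand = [min(upper, max(0.0, xi + rng.gauss(0, 0.05))) for xi in x]
        fc = objective(cand)
        # accept improvements always; worse moves with temperature-dependent probability
        if fc < f or rng.random() < math.exp(min(0.0, (f - fc) / temp)):
            x, f = cand, fc
            if f < fbest:
                best, fbest = list(x), f
    return best, fbest
```

With the large penalty, even a high initial temperature effectively never accepts over-budget allocations, so the search stays within the feasible region.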
The search space was built using the Akima package (v.0.6-2.2; Akima et al., 2022) to construct two 3D surfaces of clinical incidence model outputs for every combination of bed net usage and EIR (Appendix 1—figure 4). The dimensions of the resulting surfaces were 9000 × 9000 points.
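The lookup on such a surface can be illustrated with a simple bilinear interpolator over a small hypothetical grid; the analysis itself used Akima interpolation on a much denser 9000 × 9000 surface.

```python
import bisect

def interp_surface(usages, eirs, incidence, u, e):
    """Bilinear interpolation of clinical incidence over a regular grid
    of ITN usage x EIR; a simple stand-in for the dense Akima surface.
    `incidence[i][j]` is the model output at usages[i], eirs[j]."""
    i = max(0, min(bisect.bisect_right(usages, u) - 1, len(usages) - 2))
    j = max(0, min(bisect.bisect_right(eirs, e) - 1, len(eirs) - 2))
    tu = (u - usages[i]) / (usages[i + 1] - usages[i])
    te = (e - eirs[j]) / (eirs[j + 1] - eirs[j])
    return ((1 - tu) * (1 - te) * incidence[i][j]
            + tu * (1 - te) * incidence[i + 1][j]
            + (1 - tu) * te * incidence[i][j + 1]
            + tu * te * incidence[i + 1][j + 1])
```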
The optimization function was run over a range of $B$ from no intervention (starting point) to full coverage, with results indicating the resource allocation that most reduced clinical incidence from baseline at each level of funding.
Data availability
The manuscript is a computational study, so no data have been generated. The previously published malaria transmission model code is available to download from GitHub (copy archived at Unwin, 2023). The code to conduct the analysis and produce the figures and tables in the manuscript is available to download from GitHub (copy archived at Mrcide, 2022). Datasets of parasite prevalence and spatial limits used in the analysis are publicly available from the Malaria Atlas Project at https://malariaatlas.org/.
References

Geographic resource allocation based on cost effectiveness: an application to malaria policy. Applied Health Economics and Health Policy 15:299–306. https://doi.org/10.1007/s40258-017-0305-2
Shrinking the malaria map: progress and prospects. The Lancet 376:1566–1578. https://doi.org/10.1016/S0140-6736(10)61270-6
Tracking spending on malaria by source in 106 countries, 2000–16: an economic modelling study. The Lancet Infectious Diseases 19:703–716. https://doi.org/10.1016/S1473-3099(19)30165-3
The global distribution and population at risk of malaria: past, present, and future. The Lancet Infectious Diseases 4:327–336. https://doi.org/10.1016/S1473-3099(04)01043-6
Opportunities for subnational malaria elimination in high-burden countries. The American Journal of Tropical Medicine and Hygiene 103:2153–2154. https://doi.org/10.4269/ajtmh.20-1342
From malaria control to eradication: the WHO perspective. Tropical Medicine & International Health 14:802–809. https://doi.org/10.1111/j.1365-3156.2009.02287.x
Software: Malaria_Optimal_Allocation, version swh:1:rev:df8ebf7d6db04077bef72b0c410734ff204699cf. Software Heritage.
Natural acquisition of immunity to Plasmodium vivax. Advances in Parasitology 81:77–131. https://doi.org/10.1016/B978-0-12-407826-0.00003-5
Plasmodium vivax in the era of the shrinking P. falciparum map. Trends in Parasitology 36:560–570. https://doi.org/10.1016/j.pt.2020.03.009
Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Services and Delivery Research 4:1–136. https://doi.org/10.3310/hsdr04160
Optimising the deployment of vector control tools against malaria: a data-informed modelling study. The Lancet Planetary Health 6:e100–e109. https://doi.org/10.1016/S2542-5196(21)00296-5
Malaria elimination and eradication. In: Major Infectious Diseases. The International Bank for Reconstruction and Development / The World Bank. https://doi.org/10.1596/978-1-4648-0524-0
A global map of dominant malaria vectors. Parasites & Vectors 5:69. https://doi.org/10.1186/1756-3305-5-69
Book: Malaria Eradication: Benefits, Future Scenarios & Feasibility. Geneva: World Health Organization.
WorldPop, open data for spatial demography. Scientific Data 4:170004. https://doi.org/10.1038/sdata.2017.4
Book: The Global Fund Strategy 2017–2022: Investing to End Epidemics. Geneva: The Global Fund.
Report: End Malaria Faster: U.S. President's Malaria Initiative Strategy 2021–2026. Washington, DC: United Nations Office for the Coordination of Humanitarian Affairs.
Software: Deterministicmalariamodel, version swh:1:rev:a1685c9f979b170825988c491ff74e4fc69f1615. Software Heritage.
Book: Guidelines on Prevention of the Reintroduction of Malaria. Cairo: World Health Organization.
Report: Briefing Note: How Should Funds for Malaria Control Be Spent When There Are Not Enough? Geneva: World Health Organization.
Report: Global Technical Strategy for Malaria 2016–2030. Geneva: World Health Organization.
Report: Update on the E2020 Initiative of 21 Malaria-Eliminating Countries. Geneva: World Health Organization.
Report: High Burden to High Impact: A Targeted Malaria Response. Geneva: World Health Organization.
Article and author information
Author details
Funding
Wellcome Trust (https://doi.org/10.35802/220900)
- Nora Schmit
- Hillary M Topazian
- Matteo Pianella
- Katharina Hauck
- Azra C Ghani
- Giovanni D Charles
Medical Research Council (MR/R015600/1)
- Nora Schmit
- Hillary M Topazian
- Matteo Pianella
- Giovanni D Charles
- Peter Winskill
- Katharina Hauck
- Azra C Ghani
Community Jameel
- Katharina Hauck
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. For the purpose of Open Access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
Acknowledgements
This work was supported by the Wellcome Trust [reference 220900/Z/20/Z]. NS, HMT, MP, GDC, PW, KH, and ACG also acknowledge funding from the MRC Centre for Global Infectious Disease Analysis [reference MR/R015600/1], jointly funded by the UK Medical Research Council (MRC) and the UK Foreign, Commonwealth & Development Office (FCDO), under the MRC/FCDO Concordat agreement and is also part of the EDCTP2 program supported by the European Union. KH also acknowledges funding by Community Jameel. Disclaimer: 'The views expressed are those of the author(s) and not necessarily those of the NIHR, the UK Health Security Agency or the Department of Health and Social Care'. For the purpose of open access, the authors have applied a ‘Creative Commons Attribution’ (CC BY) license to any Author Accepted Manuscript version arising from this submission.
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.88283. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2023, Schmit, Topazian, Pianella et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.