Table 1 A summary of impact evaluation study designs and methodologies

From: Evaluating malaria programmes in moderate- and low-transmission settings: practical ways to generate robust evidence

| Methodology/study design | When is it useful? | What types of data can be used? | How robust is the design? |
| --- | --- | --- | --- |
| Interrupted time series | Policy change or other intervention introduced on a known date. Useful when there is no underlying contemporaneous control group, but can be adapted to include one | Time-series data (retrospective or prospective), ideally from a routine health information system (RHIS) | Good. Accounts for secular trends and confounding factors; a counterfactual can be estimated |
| Dose–response | When there are no clear intervention and comparison areas, but the intervention is delivered at varying levels of intensity by district | Sub-national data (e.g., district-level) describing the intervention, the impact indicator, and potential confounders, ideally RHIS. Requires data on processes and activities to define 'intensity' | Moderate, if spatial and temporal resolution are high and confounders are included. Can estimate counterfactuals for alternative programme coverage levels. Prone to confounding because the intensity of the intervention applied may be related to the impact outcome |
| Constructed controls (matching or discontinuity designs, instrumental variables) | When there are no clear intervention and comparison areas, but individuals differ in their use of and access to interventions, or eligibility criteria determine whether an individual or area received the intervention. Useful for inference at the individual level | Individual-level data from cross-sectional surveys with large sample sizes and all plausible confounders measured | Moderate. Limited by the availability of data from which to construct controls. Often relies on a single cross-sectional survey, so the evaluation may have low power to detect changes where cross-sectional RDT positivity is the primary impact indicator |
| Stepped-wedge | Phased introduction of a programme, with or without randomization | RHIS or repeated cross-sectional surveys | Moderate. Important to account for other programmes or contextual changes occurring during the phased roll-out of the programme being evaluated |
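To make the first row of the table concrete, the standard interrupted time series analysis is a segmented regression of the form y_t = b0 + b1·t + b2·post_t + b3·(t − t0)·post_t, where b2 captures the immediate level change at the intervention date and b3 the change in trend. The sketch below is illustrative only, fitting this model by ordinary least squares to simulated monthly case counts (the series, the intervention month, and the effect sizes are all hypothetical, not from the article).

```python
import numpy as np

# Segmented regression for an interrupted time series:
#   y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t
rng = np.random.default_rng(0)
t = np.arange(48)                      # 48 months of hypothetical RHIS case counts
t0 = 24                                # intervention introduced at month 24
post = (t >= t0).astype(float)         # indicator for the post-intervention period

# Simulated series: rising baseline trend, then a level drop (-20 cases)
# and a slope change (-1.0 cases/month) at the intervention date
y = 100 + 0.5 * t - 20 * post - 1.0 * (t - t0) * post + rng.normal(0, 2, t.size)

# Design matrix: intercept, baseline trend, level change, trend change
X = np.column_stack([np.ones_like(t, dtype=float), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta

print(f"level change: {b2:.1f}, trend change: {b3:.2f}")
```

The fitted pre-intervention trend (b0, b1) extrapolated past t0 is the estimated counterfactual the table refers to; in practice the model would also need terms for seasonality and autocorrelation, which this sketch omits.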