Regression discontinuity design
In statistics, econometrics, political science, epidemiology, and related disciplines, a regression discontinuity design (RDD) is a quasi-experimental pretest-posttest design that aims to determine the causal effects of interventions by exploiting a cutoff or threshold above or below which an intervention is assigned. By comparing observations lying closely on either side of the threshold, it is possible to estimate the average treatment effect in environments in which randomisation is infeasible. However, it remains impossible to make true causal inference with this method alone, as it does not automatically rule out the influence of potential confounding variables. First applied by Donald Thistlethwaite and Donald Campbell (1960) to the evaluation of scholarship programs,[1] the RDD has become increasingly popular in recent years.[2] Recent within-study comparisons of randomised controlled trials (RCTs) and RDDs have empirically demonstrated the internal validity of the design.[3]
Example
The intuition behind the RDD is well illustrated using the evaluation of merit-based scholarships. The main problem with estimating the causal effect of such an intervention is the endogeneity of performance to the assignment of treatment (e.g. scholarship award). Since high-performing students are more likely to be awarded the merit scholarship and continue performing well at the same time, comparing the outcomes of awardees and non-recipients would lead to an upward bias of the estimates. Even if the scholarship did not improve grades at all, awardees would have performed better than non-recipients, simply because scholarships were given to students who were performing well before.
Despite the absence of an experimental design, an RDD can exploit exogenous characteristics of the intervention to elicit causal effects. If all students above a given grade — for example 80% — are given the scholarship, it is possible to elicit the local treatment effect by comparing students around the 80% cut-off. The intuition here is that a student scoring 79% is likely to be very similar to a student scoring 81% — given the pre-defined threshold of 80%. However, one student will receive the scholarship while the other will not. Comparing the outcome of the awardee (treatment group) to the counterfactual outcome of the non-recipient (control group) will hence deliver the local treatment effect.
Methodology
The two most common approaches to estimation using an RDD are non-parametric and parametric (normally polynomial regression).
Non-parametric estimation
The most common non-parametric method used in the RDD context is a local linear regression. This is of the form:

$$Y = \alpha + \tau D + \beta_1 (X - c) + \beta_2 D (X - c) + \varepsilon,$$

where $c$ is the treatment cutoff and $D$ is a binary variable equal to one if $X \ge c$. Letting $h$ be the bandwidth of data used, we have $c - h \le X \le c + h$. Different slopes and intercepts fit the data on either side of the cutoff. Typically either a rectangular kernel (no weighting) or a triangular kernel is used. The rectangular kernel has a more straightforward interpretation, and more sophisticated kernels yield only small efficiency gains.[4]
The major benefit of using non-parametric methods in an RDD is that they provide estimates based on data closer to the cut-off, which is intuitively appealing. This reduces some bias that can result from using data farther away from the cutoff to estimate the discontinuity at the cutoff.[4] More formally, local linear regressions are preferred because they have better bias properties[5] and have better convergence.[6] However, the use of both types of estimation, if feasible, is a useful way to argue that the estimated results do not rely too heavily on the particular approach taken.
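The local linear estimator can be sketched in Python. This is an illustrative implementation with simulated data; the rectangular kernel (no weighting) and all variable names and numeric values are chosen for exposition only, not taken from any particular study.

```python
import numpy as np

def rdd_local_linear(x, y, cutoff, bandwidth):
    """Sharp RDD estimate via local linear regression with a
    rectangular kernel: separate slopes and intercepts are fit on
    each side of the cutoff, using only data within the bandwidth."""
    mask = np.abs(x - cutoff) <= bandwidth
    xc, yc = x[mask] - cutoff, y[mask]
    d = (xc >= 0).astype(float)           # treatment indicator D
    # Regressors: intercept, D, (X - c), D*(X - c)
    X = np.column_stack([np.ones_like(xc), d, xc, d * xc])
    beta, *_ = np.linalg.lstsq(X, yc, rcond=None)
    return beta[1]                        # coefficient on D = jump at cutoff

# Simulated example: a true treatment effect of 2 at the cutoff x = 0.5
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5000)
y = 1.0 + 3.0 * x + 2.0 * (x >= 0.5) + rng.normal(0, 0.3, 5000)
tau = rdd_local_linear(x, y, cutoff=0.5, bandwidth=0.2)
```

With this setup the estimate `tau` lands close to the simulated jump of 2; shrinking the bandwidth trades bias for variance, which is the central tuning decision in non-parametric RDD estimation.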
Parametric estimation
An example of a parametric estimation is:

$$Y = \alpha + \tau D + \beta_1 (X - c) + \beta_2 (X - c)^2 + \beta_3 D (X - c) + \beta_4 D (X - c)^2 + \varepsilon,$$

where

$$D = \begin{cases} 1 & \text{if } X \ge c \\ 0 & \text{if } X < c \end{cases}$$

and $c$ is the treatment cutoff. Note that the polynomial part can be shortened or extended according to need.
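A parametric specification of this kind can be sketched as follows. The code is an illustrative Python implementation on simulated data; the quadratic degree and all numeric values are assumptions for the example, not a recommendation.

```python
import numpy as np

def rdd_polynomial(x, y, cutoff, degree=2):
    """Sharp RDD estimate via polynomial regression: Y is regressed on
    D, powers of (X - c), and their interactions with D, so the fitted
    curve can take a different shape on each side of the cutoff."""
    xc = x - cutoff
    d = (xc >= 0).astype(float)
    cols = [np.ones_like(xc), d]
    for p in range(1, degree + 1):
        cols.append(xc ** p)        # polynomial in (X - c)
        cols.append(d * xc ** p)    # interaction: separate shape above cutoff
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                  # coefficient on D = jump at cutoff

# Simulated example: a smooth quadratic trend plus a true jump of 1.5
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 5000)
y = 0.5 * x + 0.8 * x**2 + 1.5 * (x >= 0) + rng.normal(0, 0.2, 5000)
tau = rdd_polynomial(x, y, cutoff=0.0, degree=2)
```

Because the polynomial is fit globally, a misspecified degree can mistake curvature for a discontinuity, which is why comparing parametric and non-parametric estimates is a useful robustness check.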
Required assumptions
Regression discontinuity design requires that all potentially relevant variables besides the treatment variable and outcome variable be continuous at the point where the treatment and outcome discontinuities occur. One sufficient, though not necessary,[10] condition is if the treatment assignment is "as good as random" at the threshold for treatment.[9] If this holds, then it guarantees that those who just barely received treatment are comparable to those who just barely did not receive treatment, as treatment status is effectively random.
Treatment assignment at the threshold can be "as good as random" if there is randomness in the assignment variable and the agents considered (individuals, firms, etc.) cannot perfectly manipulate their treatment status. For example, suppose the treatment is passing an exam, where a grade of 50% is required. This is a valid regression discontinuity design so long as grades are somewhat random, due either to the randomness of grading or the randomness of student performance.
Students must also not be able to perfectly manipulate their grade so as to determine their treatment status. Two examples include students being able to convince teachers to "mercy pass" them, or students being allowed to retake the exam until they pass. In the former case, those students who barely fail but are able to secure a "mercy pass" may differ from those who just barely fail but cannot secure one. This leads to selection bias, as the treatment and control groups now differ. In the latter case, some students may decide to retake the exam, stopping once they pass. This also leads to selection bias since only some students will decide to retake the exam.[4]
Testing the validity of the assumptions
It is impossible to definitively test for validity if agents are able to determine their treatment status perfectly. However, some tests can provide evidence that either supports or discounts the validity of the regression discontinuity design.
Density test
McCrary (2008) suggested examining the density of observations of the assignment variable.[12] A discontinuity in that density at the threshold for treatment suggests that some agents were able to manipulate their treatment status.
For example, if several students are able to get a "mercy pass", then there will be more students who just barely passed the exam than who just barely failed. Similarly, if students are allowed to retake the exam until they pass, then there will be a similar result. In both cases, this will likely show up when the density of exam grades is examined. "Gaming the system" in this manner could bias the treatment effect estimate.
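A crude version of this density check can be sketched in Python. This only compares raw bin counts on either side of the cutoff using simulated data with a hypothetical "mercy pass"; the actual McCrary test smooths the histogram and tests the log-difference in densities formally.

```python
import numpy as np

def density_jump(x, cutoff, bin_width):
    """Count observations in the bin just below vs. just above the
    cutoff.  A large imbalance hints at manipulation of the
    assignment variable (illustrative, not a formal McCrary test)."""
    below = np.sum((x >= cutoff - bin_width) & (x < cutoff))
    above = np.sum((x >= cutoff) & (x < cutoff + bin_width))
    return below, above

# Simulated grades with manipulation: half the students within 2 points
# below the 50% cutoff are bumped to just above it ("mercy pass").
rng = np.random.default_rng(2)
grades = rng.uniform(0, 100, 10000)
near_miss = (grades >= 48) & (grades < 50)
bump = near_miss & (rng.random(10000) < 0.5)
grades[bump] = 50 + rng.uniform(0, 2, bump.sum())
below, above = density_jump(grades, cutoff=50, bin_width=2)
```

In this simulation the count just above the cutoff visibly exceeds the count just below, exactly the bunching pattern the density test is designed to detect.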
Continuity of observable variables
Since the validity of the regression discontinuity design relies on those who were just barely treated being the same as those who were just barely not treated, it makes sense to examine whether these groups are similar based on observable variables. For the earlier example, one could test if those who just barely passed have different characteristics (demographics, family income, etc.) than those who just barely failed. Although some variables may differ for the two groups based on random chance, most of these variables should be the same.[13]
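A simple balance check of this kind can be sketched as a difference in covariate means near the cutoff. The Python code below uses simulated data, and the covariate name and all numeric values are hypothetical.

```python
import numpy as np

def balance_check(x, z, cutoff, bandwidth):
    """Compare the mean of a covariate z for observations just below
    vs. just above the cutoff.  Returns the difference in means and
    its standard error; a difference of many standard errors casts
    doubt on the design."""
    left = z[(x >= cutoff - bandwidth) & (x < cutoff)]
    right = z[(x >= cutoff) & (x <= cutoff + bandwidth)]
    diff = right.mean() - left.mean()
    se = np.sqrt(left.var(ddof=1) / len(left) +
                 right.var(ddof=1) / len(right))
    return diff, se

# Simulated data in which family income is unrelated to the exam cutoff,
# so the two near-cutoff groups should be balanced on it.
rng = np.random.default_rng(3)
grades = rng.uniform(0, 100, 8000)
family_income = rng.normal(50, 10, 8000)
diff, se = balance_check(grades, family_income, cutoff=50, bandwidth=5)
```

Here the difference should be small relative to its standard error; a systematic imbalance across several covariates would be evidence against the "as good as random" assumption.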
Predetermined variables
Similar to the continuity of observable variables, one would expect there to be continuity in predetermined variables at the treatment cutoff. Since these variables were determined before the treatment decision, treatment status should not affect them. Consider the earlier merit-based scholarship example. If the outcome of interest is future grades, then we would not expect the scholarship to affect previous grades. If a discontinuity in predetermined variables is present at the treatment cutoff, then this puts the validity of the regression discontinuity design into question.
Other discontinuities
If discontinuities are present at other points of the assignment variable, where these are not expected, then this may make the regression discontinuity design suspect. Consider the example of Carpenter and Dobkin (2011) who studied the effect of legal access to alcohol in the United States.[8] As the access to alcohol increases at age 21, this leads to changes in various outcomes, such as mortality rates and morbidity rates. If mortality and morbidity rates also increase discontinuously at other ages, then it throws the interpretation of the discontinuity at age 21 into question.
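This idea leads to a simple placebo check: estimate the "jump" at cutoffs where no treatment occurs and verify it is near zero. The Python sketch below uses simulated age and mortality data loosely inspired by the drinking-age example; the functional form and all numbers are invented for illustration.

```python
import numpy as np

def jump_at(x, y, cutoff, bandwidth):
    """Local linear estimate of the outcome jump at an arbitrary cutoff."""
    mask = np.abs(x - cutoff) <= bandwidth
    xc, yc = x[mask] - cutoff, y[mask]
    d = (xc >= 0).astype(float)
    X = np.column_stack([np.ones_like(xc), d, xc, d * xc])
    beta, *_ = np.linalg.lstsq(X, yc, rcond=None)
    return beta[1]

# Simulated data: a genuine discontinuity of 1.0 at age 21, none elsewhere.
rng = np.random.default_rng(4)
age = rng.uniform(18, 24, 20000)
mortality = 10 + 0.2 * age + 1.0 * (age >= 21) + rng.normal(0, 0.5, 20000)
real = jump_at(age, mortality, cutoff=21, bandwidth=1.0)       # expect ~1.0
placebo = jump_at(age, mortality, cutoff=22.5, bandwidth=1.0)  # expect ~0
```

A sizeable estimate at the placebo cutoff would suggest the method is picking up noise or unmodelled non-linearity rather than a treatment effect.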
Inclusion and exclusion of covariates
If parameter estimates are sensitive to removing or adding covariates to the model, then this may cast doubt on the validity of the regression discontinuity design. A significant change may suggest that those who just barely got treatment differ in these covariates from those who just barely did not get treatment. Including covariates would remove some of this bias. If a large amount of bias is present, and the covariates explain a significant amount of this, then their inclusion or exclusion would significantly change the parameter estimate.[4]
Recent work has shown how to add covariates, under what conditions doing so is valid, and the potential for increased precision.[14]
Advantages
- When properly implemented and analysed, the RDD yields an unbiased estimate of the local treatment effect.[15] The RDD can be almost as good as a randomised experiment in measuring a treatment effect.
- RDD, as a quasi-experiment, does not require ex-ante randomisation and circumvents ethical issues of random assignment.
- Well-executed RDD studies can generate treatment effect estimates similar to estimates from randomised studies.[16]
Disadvantages
- The estimated effects are only unbiased if the functional form of the relationship between the treatment and outcome is correctly modelled. The most common pitfall is a non-linear relationship that is mistaken for a discontinuity.
- Contamination by other treatments. Suppose another treatment occurs at the same cutoff value of the same assignment variable. In that case, the measured discontinuity in the outcome variable may be partially attributed to this other treatment. For example, suppose a researcher wishes to study the impact of legal access to alcohol on mental health using a regression discontinuity design at the minimum legal drinking age. The measured impact could be confused with legal access to gambling, which may occur at the same age.
Extensions
Fuzzy RDD
The identification of causal effects hinges on the crucial assumption that there is indeed a sharp cut-off, around which there is a discontinuity in the probability of assignment from 0 to 1. In reality, however, cutoffs are often not strictly implemented (e.g. discretion is exercised for students who just fell short of the passing threshold) and the estimates will hence be biased.
In contrast to the sharp regression discontinuity design, a fuzzy regression discontinuity design (FRDD) does not require a sharp discontinuity in the probability of assignment, but is applicable as long as the probability of assignment is discontinuous at the threshold. The intuition behind it is related to the instrumental variable strategy and intention to treat. Fuzzy RDD does not provide an unbiased estimate when the quantity of interest is the proportional effect (e.g. vaccine effectiveness), but extensions exist that do.[17]
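The instrumental-variable intuition can be sketched as a Wald-style ratio: the jump in the outcome at the cutoff divided by the jump in the treatment probability. The Python code below is an illustrative implementation on simulated data with imperfect compliance; the compliance probabilities and other numbers are hypothetical.

```python
import numpy as np

def fuzzy_rdd_wald(x, d, y, cutoff, bandwidth):
    """Fuzzy RDD estimate as a Wald/IV ratio: the outcome jump at the
    cutoff divided by the jump in treatment take-up, both estimated by
    local linear regression with a rectangular kernel."""
    def jump(v):
        mask = np.abs(x - cutoff) <= bandwidth
        xc = x[mask] - cutoff
        z = (xc >= 0).astype(float)   # above-cutoff indicator (the instrument)
        X = np.column_stack([np.ones_like(xc), z, xc, z * xc])
        beta, *_ = np.linalg.lstsq(X, v[mask], rcond=None)
        return beta[1]
    return jump(y) / jump(d)

# Simulated data: crossing the cutoff raises the probability of actually
# receiving treatment from 0.2 to 0.8; the true treatment effect is 2.
rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 20000)
p = np.where(x >= 0.5, 0.8, 0.2)
d = (rng.random(20000) < p).astype(float)
y = 1.0 + x + 2.0 * d + rng.normal(0, 0.3, 20000)
tau = fuzzy_rdd_wald(x, d, y, cutoff=0.5, bandwidth=0.2)
```

The raw outcome jump here is only about 1.2 (the effect diluted by non-compliance); dividing by the 0.6 jump in take-up recovers the treatment effect for compliers, mirroring the intention-to-treat logic mentioned above.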
Regression kink design
When the assignment variable is continuous (e.g. student aid) and depends predictably on another observed variable (e.g. family income), one can identify treatment effects using sharp changes in the slope of the treatment function. This technique was coined regression kink design by Nielsen, Sørensen, and Taber (2010), though they cite similar earlier analyses.[18] They write, "This approach resembles the regression discontinuity idea. Instead of a discontinuity in the level of the stipend-income function, we have a discontinuity in the slope of the function." Rigorous theoretical foundations were provided by Card et al. (2012)[19] and an empirical application by Bockerman et al. (2018).[20]
Note that regression kinks (or kinked regression) can also mean a type of segmented regression, which is a different type of analysis.
Final Considerations
The RD design is a quasi-experimental research design with a clear structure, but without randomised assignment of treatment. It is often used precisely in settings where ethical or practical constraints rule out a randomised experiment, and its credibility ultimately depends on how accurately the relationship between the assignment variable and the outcome is modelled.
References
- Thistlethwaite, D.; Campbell, D. (1960). "Regression-Discontinuity Analysis: An alternative to the ex post facto experiment". Journal of Educational Psychology. 51 (6): 309–317. doi:10.1037/h0044319. S2CID 13668989.
- Imbens, G.; Lemieux, T. (2008). "Regression Discontinuity Designs: A Guide to Practice" (PDF). Journal of Econometrics. 142 (2): 615–635. doi:10.1016/j.jeconom.2007.05.001.
- Chaplin, Duncan D.; Cook, Thomas D.; Zurovac, Jelena; Coopersmith, Jared S.; Finucane, Mariel M.; Vollmer, Lauren N.; Morris, Rebecca E. (2018). "The Internal and External Validity of the Regression Discontinuity Design: A Meta-Analysis of 15 Within-Study Comparisons". Journal of Policy Analysis and Management. 37 (2): 403–429. doi:10.1002/pam.22051. ISSN 1520-6688.
- Lee; Lemieux (2010). "Regression Discontinuity Designs in Economics". Journal of Economic Literature. 48 (2): 281–355. doi:10.1257/jel.48.2.281. S2CID 14166110.
- Fan; Gijbels (1996). Local Polynomial Modelling and Its Applications. London: Chapman and Hall. ISBN 978-0-412-98321-4.
- Porter (2003). "Estimation in the Regression Discontinuity Model" (PDF). Unpublished Manuscript.
- Duflo (2003). "Grandmothers and Granddaughters: Old-age Pensions and Intrahousehold Allocation in South Africa". World Bank Economic Review. 17 (1): 1–25. doi:10.1093/wber/lhg013. hdl:10986/17173.
- Carpenter; Dobkin (2011). "The Minimum Legal Drinking Age and Public Health". Journal of Economic Perspectives. 25 (2): 133–156. doi:10.1257/jep.25.2.133. JSTOR 23049457. PMC 3182479. PMID 21595328.
- Lee (2008). "Randomized Experiments from Non-random Selection in U.S. House Elections". Journal of Econometrics. 142 (2): 675–697. CiteSeerX 10.1.1.409.5179. doi:10.1016/j.jeconom.2007.05.004. S2CID 2293046.
- de la Cuesta, B; Imai, K (2016). "Misunderstandings About the Regression Discontinuity Design in the Study of Close Elections". Annual Review of Political Science. 19 (1): 375–396. doi:10.1146/annurev-polisci-032015-010115.
- Moss, B. G.; Yeaton, W. H.; Lloyd, J.E. (2014). "Evaluating the Effectiveness of Developmental Mathematics by Embedding a Randomized Experiment Within a Regression Discontinuity Design". Educational Evaluation and Policy Analysis. 36 (2): 170–185. doi:10.3102/0162373713504988. S2CID 123440758.
- McCrary (2008). "Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test". Journal of Econometrics. 142 (2): 698–714. CiteSeerX 10.1.1.395.6501. doi:10.1016/j.jeconom.2007.05.005.
- Lee; Moretti; Butler (2004). "Do Voters Affect or Elect Policies? Evidence from the U.S. House". Quarterly Journal of Economics. 119 (3): 807–859. doi:10.1162/0033553041502153.
- Calonico; Cattaneo; Farrell; Titiunik (2018). "Regression Discontinuity Designs Using Covariates". arXiv:1809.03904 [econ.EM].
- Rubin (1977). "Assignment to Treatment on the Basis of a Covariate". Journal of Educational and Behavioral Statistics. 2 (1): 1–26. doi:10.3102/10769986002001001. S2CID 123013161.
- Moss, B. G.; Yeaton, W. H.; Lloyd, J. E. (2014). "Evaluating the Effectiveness of Developmental Mathematics by Embedding a Randomized Experiment Within a Regression Discontinuity Design". Educational Evaluation and Policy Analysis. 36 (2): 170–185. doi:10.3102/0162373713504988. S2CID 123440758.
- Mukherjee, Abhiroop; Panayotov, George; Sen, Rik; Dutta, Harsha; Ghosh, Pulak (2022). "Measuring vaccine effectiveness from limited public health datasets: Framework and estimates from India's second COVID wave". Science Advances. 8 (18): eabn4274. Bibcode:2022SciA....8N4274M. doi:10.1126/sciadv.abn4274. PMC 9075799. PMID 35522748.
- Nielsen, H. S.; Sørensen, T.; Taber, C. R. (2010). "Estimating the Effect of Student Aid on College Enrollment: Evidence from a Government Grant Policy Reform". American Economic Journal: Economic Policy. 2 (2): 185–215. doi:10.1257/pol.2.2.185. hdl:10419/35588. JSTOR 25760068.
- Card, David; Lee, David S.; Pei, Zhuan; Weber, Andrea (2012). "Nonlinear Policy Rules and the Identification and Estimation of Causal Effects in a Generalized Regression Kink Design". NBER Working Paper No. W18564. doi:10.3386/w18564. SSRN 2179402.
- Bockerman, Petri; Kanninen, Ohto; Suoniemi, Ilpo (2018). "A Kink that Makes You Sick: The Effect of Sick Pay on Absence". Journal of Applied Econometrics. 33 (4): 568–579. doi:10.1002/jae.2620.
Further reading
- Angrist, J. D.; Pischke, J.-S. (2008). "Getting a Little Jumpy: Regression Discontinuity Designs". Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. pp. 251–268. ISBN 978-0-691-12035-5.
- Cattaneo, Matias D.; Titiunik, Rocio (2022). "Regression Discontinuity Designs". Annual Review of Economics. 14: 821–851. doi:10.1146/annurev-economics-051520-021409. S2CID 125763727.
- Cook, Thomas D. (2008). "'Waiting for Life to Arrive': A history of the regression-discontinuity design in Psychology, Statistics and Economics". Journal of Econometrics. 142 (2): 636–654. doi:10.1016/j.jeconom.2007.05.002.
- Imbens, Guido W.; Wooldridge, Jeffrey M. (2009). "Recent Developments in the Econometrics of Program Evaluation". Journal of Economic Literature. 47 (1): 5–86. doi:10.1257/jel.47.1.5.
- Maas, Iris L.; Nolte, Sandra; Walter, Otto B.; Berger, Thomas; Hautzinger, Martin (2017). "The regression discontinuity design showed to be a valid alternative to a randomized controlled trial for estimating treatment effects". Journal of Clinical Epidemiology. 82: 94–102. doi:10.1016/j.jclinepi.2016.11.008. PMID 27865902.
External links
- Regression-Discontinuity Analysis at Research Methods Knowledge Base