Reading List on Natural and Randomized Experiments
Bibliography provided by Christopher Carter, University of California, Berkeley; Pia Raffler, Harvard University; Tesalia Rizzo, University of California, Merced; and Guadalupe Tuñón, Princeton University.
I. The Basics on Causal Inference and the Potential Outcomes Framework
The readings in this section provide an overview of the potential outcomes framework and the fundamental problem of causal inference. They also discuss design-based research as a strategy for recovering unbiased estimates of causal effects.
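The core logic these readings develop can be illustrated with a small simulation (all numbers here are hypothetical, not drawn from the readings): each unit has two potential outcomes, only one of which is ever observed, and random assignment lets a simple difference in means recover the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each unit has two potential outcomes, but we only ever observe one of them
# (the "fundamental problem of causal inference").
y0 = rng.normal(0, 1, n)          # outcome if untreated
y1 = y0 + 2.0                     # outcome if treated; true effect = 2 for everyone

true_ate = np.mean(y1 - y0)       # knowable only inside a simulation

# Random assignment makes treated and control groups comparable in expectation,
# so the difference in observed means is an unbiased estimate of the ATE.
d = rng.binomial(1, 0.5, n)       # randomized treatment indicator
y_obs = np.where(d == 1, y1, y0)  # the one outcome we actually observe per unit

est_ate = y_obs[d == 1].mean() - y_obs[d == 0].mean()
```

In a real study only `y_obs` and `d` exist; the simulation simply makes visible why randomization justifies the difference-in-means estimator.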
Gerber, A. S., & Green, D. P. (2008). “Field experiments and natural experiments.” In The Oxford Handbook of Political Methodology. Oxford University Press.
Holland, P. W. (1986). “Statistics and causal inference.” Journal of the American Statistical Association, 81(396), 945-960.
Dunning, T. (2010). "Design-based inference: Beyond the pitfalls of regression analysis." In Henry E. Brady and David Collier, eds., Rethinking Social Inquiry: Diverse Tools, Shared Standards. 2nd ed. Lanham, Md.: Rowman and Littlefield.
Freedman, D. A. (2006). “Statistical models for causation: What inferential leverage do they provide?” Evaluation Review, 30(6), 691-713.
For a more introductory text: Angrist, J.D. and Pischke, J.S. (2014). Mastering 'metrics: The path from cause to effect. Princeton University Press.
For a more rigorous introductory text: Angrist, J.D. and Pischke, J.S. (2008). Mostly harmless econometrics: An empiricist's companion. Princeton University Press.
For annotated R code and exercises: Imai, K. (2018). Quantitative Social Science: An Introduction. Princeton University Press.
II. Natural Experiments
Introduction to Natural Experiments
What are natural experiments? The readings in this section review the concept of natural experiments and discuss their strengths and limitations through a survey of recent examples from political science and economics. They introduce a common formal framework for understanding and assessing natural experiments.
Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Cambridge University Press. Chapter 1 and pp. 105-121.
Sekhon, J. S., & Titiunik, R. (2012). “When natural experiments are neither natural nor experiments.” American Political Science Review, 106(1), 35-57.
Di Tella, R., Galiani, S., & Schargrodsky, E. (2007). “The formation of beliefs: evidence from the allocation of land titles to squatters.” The Quarterly Journal of Economics, 122(1), 209-241.
Hinnerich, B. T., & Pettersson-Lidbom, P. (2014). “Democracy, redistribution, and political participation: Evidence from Sweden 1919–1938.” Econometrica, 82(3), 961-993.
Sances, M. W. (2016). “The Distributional Impact of Greater Responsiveness: Evidence from New York Towns.” The Journal of Politics, 78(1), 105-119.
Blattman, C., & Annan, J. (2010). “The consequences of child soldiering.” The Review of Economics and Statistics, 92(4), 882-898.
Natural Experiments: Quantitative Methods
This set of readings discusses the role of causal and statistical assumptions in the analysis of natural experiments. The readings focus on instrumental-variables (IV) analysis to assess the plausibility of these assumptions in a variety of applications.
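The key IV intuition can be sketched in a toy simulation (the data-generating process below is invented for illustration, not taken from any of the readings): an unobserved confounder biases the naive comparison, while the Wald/IV estimator, which scales the instrument's reduced-form effect on the outcome by its first-stage effect on treatment, recovers the true effect, provided the instrument affects the outcome only through treatment (the exclusion restriction).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# u is an unobserved confounder: it drives both treatment take-up and the
# outcome, so a naive comparison of treated vs. untreated units is biased.
u = rng.normal(0, 1, n)
z = rng.binomial(1, 0.5, n)                           # instrument, "as-if" random
x = (z + u + rng.normal(0, 1, n) > 1).astype(float)   # endogenous treatment
y = 1.5 * x + u + rng.normal(0, 1, n)                 # true effect of x on y: 1.5

# Naive (confounded) estimate: difference in mean y by treatment status.
naive = y[x == 1].mean() - y[x == 0].mean()

# Wald/IV estimate: reduced-form effect of z on y, divided by the
# first-stage effect of z on x.
reduced_form = y[z == 1].mean() - y[z == 0].mean()
first_stage = x[z == 1].mean() - x[z == 0].mean()
iv = reduced_form / first_stage
```

Because `u` raises both take-up and the outcome, `naive` overstates the effect, while `iv` lands near the true value of 1.5. With heterogeneous effects, the IV estimate would instead be a local average treatment effect among compliers, a distinction the readings above develop.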
Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Cambridge University Press. Chapter 4 and pp. 135-153.
Angrist, J.D. and Pischke, J.S. (2008). Mostly harmless econometrics: An empiricist's companion. Princeton University Press, Chapter 4.
Sovey, A. J., & Green, D. P. (2011). “Instrumental Variables Estimation in Political Science: A Readers’ Guide.” American Journal of Political Science, 55(1), 188-200.
Clingingsmith, D., Khwaja, A. I., & Kremer, M. (2009). Estimating the impact of the Hajj: religion and tolerance in Islam's global gathering. The Quarterly Journal of Economics, 124(3), 1133-1170.
Miguel, E., Satyanath, S., & Sergenti, E. (2004). “Economic Shocks and Civil Conflict: An Instrumental Variables Approach.” Journal of Political Economy, 112(4), 725-753.
Strengthening Natural Experiments Through Qualitative Evidence
The readings in this section highlight the essential role of qualitative methods in the analysis of natural experiments. They discuss how qualitative evidence can address criticisms of natural experiments, bolster the credibility of causal assumptions, and aid in the interpretation of quantitative results, as well as how natural experiments can, in turn, strengthen inferences drawn from qualitative evidence. The exchange between Kocher & Monteiro and Ferwerda & Miller illustrates the utility of qualitative methods for evaluating a potential natural experiment in the context of an empirical application.
Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Cambridge University Press. Chapter 7.
Ferwerda, J. & Miller, N. (2014). “Political Devolution and Resistance to Foreign Rule: A Natural Experiment.” American Political Science Review. 108(3), 642-660.
Kocher, M. A., & Monteiro, N. P. (2016). “Lines of Demarcation: Causation, Design-Based Inference, and Historical Research.” Perspectives on Politics, 14(4), 952-975.
Ferwerda, J. & Miller, N. (2015). “Rail Lines and Demarcation Lines: A Response”
Freedman, D. A. (1991). “Statistical models and shoe leather.” Sociological Methodology, 21, 291-313.
Homola, J., Pereira, M., & Tavits, M. (2020). Legacies of the Third Reich: Concentration Camps and Out-group Intolerance. American Political Science Review, 1-18.
Pepinsky, T. B., Goodman, S. W., & Ziller, C. (2020). “Does Proximity to Nazi Concentration Camps Make Germans Intolerant? Modeling Spatial Heterogeneity and Historical Persistence.” https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3547321.
Additional Readings to Aid in the Design of Your Own Natural Experiment
Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Cambridge University Press. Chapter 11.
Diamond, J., & Robinson, J. A. (Eds.). (2010). Natural experiments of history. Harvard University Press. Afterword: “Using Comparative Methods in the Study of Human History.”
III. Randomized Experiments
The readings in this section provide a general overview of the steps involved in designing, implementing, and analyzing data from randomized controlled trials. They also discuss many of the topics highlighted in the following sections.
Gerber, A. S., & Green, D. P. (2012). Field Experiments: Design, Analysis, and Interpretation. Norton.
Glennerster, R. and Takavarasha, K. (2013). Running randomized evaluations: A practical guide. Princeton University Press.
Glennerster, R. (2017). “The practicalities of running randomized evaluations: partnerships, measurement, ethics, and transparency.” In Handbook of Economic Field Experiments (Vol. 1, pp. 175-243). North-Holland.
J-PAL’s Research Resources
Pre-Analysis Plans and Research Transparency
Given the many different ways of analyzing experimental data and the prevalence of publication bias, research transparency is taken increasingly seriously. The readings below provide an overview of the rationale for detailed pre-analysis plans as well as guidance on how to write them. There are many different approaches to writing pre-analysis plans; these background papers will introduce you to the debate surrounding them and provide practical advice on writing one yourself. It’s also a good idea to explore the EGAP pre-registration site and read some of the pre-analysis plans uploaded there.
Humphreys, M., De la Sierra, R.S. and Van der Windt, P. (2013). “Fishing, commitment, and communication: A proposal for comprehensive nonbinding research registration.” Political Analysis, 21(1), pp.1-20.
Gelman, A. and Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University.
EGAP: 10 Things to Know About Pre-Analysis Plans
Statistical Power
How many units of randomization do I need in order to detect effects of reasonable magnitude? What is the optimal stratification strategy? The readings and tools below offer guidance on conducting power analyses.
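One simple, transparent approach to answering the sample-size question is simulation: generate many hypothetical experiments under an assumed effect size and count how often a standard test detects it. The sketch below uses illustrative values (a 0.2 standard-deviation effect, a two-sided 5% z-test); these are assumptions for demonstration, not recommendations from the readings.

```python
import numpy as np

rng = np.random.default_rng(2)

def power(n_per_arm, effect, sd=1.0, sims=2000, crit=1.96):
    """Share of simulated experiments in which a true effect of the given
    size is detected at the 5% level (two-sided z-test on the difference
    in means). All parameter values are illustrative assumptions."""
    rejections = 0
    for _ in range(sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        diff = treated.mean() - control.mean()
        se = np.sqrt(control.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        if abs(diff / se) > crit:
            rejections += 1
    return rejections / sims

# Power rises with sample size: the same 0.2 SD effect is detected far
# more often with 500 units per arm than with 50.
low = power(50, 0.2)
high = power(500, 0.2)
```

Simulation generalizes easily to clustered assignment, stratification, or covariate adjustment, which is the approach the DeclareDesign framework cited below takes systematically.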
EGAP: 10 Things to Know About Statistical Power
Blair, G., Cooper, J., Coppock, A., & Humphreys, M. (2019). Declaring and diagnosing research designs. American Political Science Review, 113(3), 838-859.
You can find the corresponding (and very powerful) software here: DeclareDesign.
A more limited alternative for power calculations is Optimal Design (Windows only).
J-PAL has helpful guidelines for using it.
Implementing Randomized Experiments
IPA and EGAP have a number of helpful guidelines for implementing data collection.
IPA’s Research Protocols
EGAP Methods Guide: Ten Things to Know About Survey Design
EGAP Methods Guide: Ten Things to Know About Survey Implementation
Integrating Qualitative and Quantitative Methods
The readings below discuss the substantial opportunities for integrating qualitative methods into randomized experiments to improve theory formulation, mechanism tests, measurement, and external validity.
Levy Paluck, E. (2010). “The promising integration of qualitative methods and field experiments.” The ANNALS of the American Academy of Political and Social Science, 628(1), pp.59-71.
Thachil, T. (2018). “Improving Surveys Through Ethnography: Insights from India’s Urban Periphery.” Studies in Comparative International Development, 53(3), pp.281-299.
Fearon, J.D. and Laitin, D.D. (2009). “Integrating qualitative and quantitative methods.” In: The Oxford Handbook of Political Science.
Humphreys, M., & Jacobs, A. M. (2015). Mixing methods: A Bayesian approach. American Political Science Review, 109(4), 653-673.
Ethics of Randomized Experiments
The readings below present different viewpoints on the ethics of conducting field experiments in political science.
Cronin-Furman, K. and Lake, M. (2018). “Ethics abroad: Fieldwork in Fragile and Violent Contexts.” PS: Political Science & Politics, pp.1-8
Desposato, S. (2018). “Subjects and Scholars’ Views on the Ethics of Political Science Field Experiments.” Perspectives on Politics, 16(3), pp.739-750.
Humphreys, M. (2015). “Reflections on the ethics of social experimentation.” Journal of Globalization and Development, 6(1), pp.87-112.
Silent Voices Blog, The Bukavu Series, Governance in Conflict Network
Carlson, E. (2020). “Field Experiments and Behavioral Theories: Science and Ethics.” PS: Political Science & Politics.
External Validity
Randomized experiments are an important resource for ensuring the internal validity of an empirical study. However, gains in internal validity can come at the cost of external validity. Are the findings applicable to cases outside the randomized sample? Can the experiment be scaled up to a broader population? Do the findings extend to other cultural, geographic, or economic contexts? The readings in this section discuss the (limits to the) external validity of randomized experiments.
Dehejia, R., Pop-Eleches, C. and Samii, C. (2019). “From local to global: External validity in a fertility natural experiment.” Journal of Business & Economic Statistics, pp.1-27.
Deaton, Angus (2010). “Instruments, Randomization, and Learning about Development.” Journal of Economic Literature
Dunning, T., Grossman, G., Humphreys, M., Hyde, S. D., McIntosh, C., Nellis, G., ... & Buntaine, M. T. (2019). Voter information campaigns and political accountability: Cumulative findings from a preregistered meta-analysis of coordinated trials. Science Advances, 5(7).