QUALITATIVE AND MULTI-METHOD RESEARCH SHORT COURSES AT APSA 2018

The QMMR section is sponsoring five short courses at APSA’s annual meeting in Boston. The courses will be held on Wednesday, August 29.

Please see below for the descriptions.

If you have questions about any of the short courses, please contact the instructors listed below.

SC06: Designing Multi-Method Research (QMMR1)

INSTRUCTORS: Kendra Koivu, University of New Mexico (klkoivu@unm.edu), and Jason Seawright, Northwestern University (j-seawright@northwestern.edu)

Half Day, 9:00AM – 1:00PM

This course provides students with an introduction to research designs that combine a qualitative and a quantitative component in the service of a single causal inference: multi- or mixed-method designs. We will discuss older “triangulation” ideas about multi-method design but focus on the newer “integrative” approach, which uses one method to test the assumptions of the other. We will explore motivating ideas about causation, causal inference, and the strengths of various methods. The center of gravity, however, is formal multi-method research designs that combine case studies with regression, natural experiments, randomized experiments, and techniques from machine learning.

We begin with the key ideas about causation and causal inference that drive contemporary statistical and multi-method thinking, including, centrally, the potential outcomes framework. We will discuss that framework, considering what it captures and what it omits relative to other ideas about causation. In particular, we will examine how the potential outcomes framework opens opportunities for multi-method research by specifying the assumptions needed to draw causal conclusions from regression analysis. We then move to the central question in most discussions of multi-method research: how to combine regression-type studies with case studies, including the analysis of optimal case-selection strategies. We conclude by considering multi-method designs that include more recent, and sometimes more credible, quantitative components: natural experiments, randomized experiments, and machine-learning algorithms related to conceptualization, measurement, and causal inference. For each design, we will look at the assumptions needed for causal inference, identify relevant case-study designs, and explore case selection.
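On the potential outcomes framework mentioned above, the following is a minimal sketch of the standard notation, offered only as an illustration (the symbols and the particular assumptions shown are ours, not the instructors’). For a binary treatment $D_i$ with potential outcomes $Y_i(1)$ and $Y_i(0)$, the unit-level effect and the average treatment effect are

\[
\tau_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = E[\,Y_i(1) - Y_i(0)\,].
\]

Because only one potential outcome is ever observed for each unit, regression-style estimates of the ATE are credible only under assumptions such as conditional ignorability and overlap,

\[
\{Y_i(1), Y_i(0)\} \perp D_i \mid X_i, \qquad 0 < \Pr(D_i = 1 \mid X_i) < 1,
\]

and it is precisely these kinds of assumptions that the case-study component of an integrative multi-method design is asked to probe.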

SC13: Managing Qualitative Data and Qualitative Research Transparency (QMMR2)

INSTRUCTORS: Diana Kapiszewski, Georgetown University (dk784@georgetown.edu), and Sebastian Karcher, Syracuse University (skarcher@maxwell.syr.edu)

Half Day, 9:00AM – 1:00PM

This short course has three central goals. First, the course provides guidance to help scholars manage data through the research lifecycle. We show how participants can meet funders’ data-management requirements and improve their own research by creating a data management plan. We discuss strategies for documenting data effectively throughout the research process in order to enhance their value both to those who generated them and to other scholars. We also provide practical advice on keeping data secure, protecting against data loss as well as illicit access to sensitive data.

Second, we consider the multiple benefits of sharing data, the various uses of shared data (e.g., for evaluating scholarly products, for secondary analysis, and for pedagogical purposes), the challenges involved in sharing qualitative research data (including concerns related to copyright and human participants), and ways to address those challenges.

Finally, we discuss transparency in qualitative research. Achieving production transparency (i.e., describing how the data drawn on in published work were produced) and analytic transparency (i.e., describing how data were analyzed and how they support empirical claims and inferences in published work) facilitates the effective interpretation and evaluation of scholarly products. We introduce several ways of achieving both types of transparency in qualitative research, focusing in particular on “Annotation for Transparent Inquiry,” a new approach to transparency for work that uses narrative causal analysis supported by individual data sources.

SC14: Process Tracing (QMMR3)

INSTRUCTORS: Andrew Bennett, Georgetown University (bennetta@georgetown.edu), Jeff Checkel, Simon Fraser University (jtc11@sfu.ca), and Tasha Fairfield, London School of Economics (T.A.Fairfield@LSE.ac.uk)

Half Day, 1:30PM – 5:30PM

This course will cover the underlying logic and best practices of process tracing, a within-case method for developing and testing causal explanations of individual cases.

The first session of the course will briefly summarize the philosophy of science underlying explanation by reference to hypothesized causal mechanisms. It will then outline the logic of process tracing in terms of Bayesian inference, including the application of “hoop tests,” “smoking gun tests,” “doubly decisive tests,” and “straw-in-the-wind tests.”
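As a rough illustration of the Bayesian logic behind these tests (our sketch, in generic notation rather than the instructors’ own), let $H$ denote the hypothesized explanation and $E$ a piece of within-case evidence. Bayes’ rule gives

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},
\]

and the four tests correspond, loosely, to different likelihood profiles: a hoop test has $P(E \mid H) \approx 1$, so failing it sharply undermines $H$ while passing it provides only weak support; a smoking gun test has $P(E \mid \neg H) \approx 0$, so passing it strongly supports $H$ while failing it is only mildly damaging; a doubly decisive test combines both properties; and a straw-in-the-wind test has neither, shifting the posterior only modestly in either direction.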

The second session of the course will focus on best practices and examples of process tracing, including the more inductive use of process tracing for theory development as well as its deductive use for theory testing. As time allows, and depending on the number of students, the instructors will ask students to outline briefly how they plan to use process tracing in their current research project. This will allow the instructors and fellow students to offer constructive advice on how best to carry out process tracing in each student’s project.

SC17: The Overlooked Challenges of Generalizing About Mechanisms (QMMR4)

INSTRUCTOR: Derek Beach, University of Aarhus, Denmark (derek@ps.au.dk)

Half Day, 1:30PM – 5:30PM

How can we generalize within-case findings about processes and causal mechanisms to other, non-studied cases? The simple answer is that generalizations are possible to other cases that appear to be causally similar to the studied case(s). Existing guidelines suggest that the claim of similarity from the source to cases targeted for the generalization can be substantiated by comparisons – using either large-n statistical analyses, medium-n comparative methods like QCA, or a small-n most/least-likely case logic (e.g. Lieberman, 2005; Goertz, 2017; Schneider and Rohlfing, 2013, 2016; Levy, 2008).

The core problem with existing guidelines is that they are blind to the potential for mechanistic heterogeneity in sets of cases that look similar at the cross-case level. If mechanisms are unpacked into their constituent working parts, how the process works can differ greatly across contexts (Bunge, 1997; Falleti and Lynch, 2009; Grzymala-Busse, 2011; Steel, 2008). Mechanistic heterogeneity means that the same cause can trigger different mechanisms due to differences in contextual conditions.

However, the methods we use to substantiate the claim of case similarity look only at the level of causes, forcing us to assume that other cases are also homogeneous at the level of mechanisms. In fact, the source case and the cases targeted for generalization might be very different at the process level, resulting in flawed generalizations.

In this short course, we will first explore the new mechanistic literature from the natural sciences and its recent applications in the social sciences. In this understanding of process, the goal of within-case analysis is to explore how things work in actual, real-world cases using mechanistic evidence. Taking process seriously, however, has important but understudied implications for our ability to generalize.

We will then discuss the state of the art regarding generalization about mechanisms and processes on the basis of cross-case comparisons, and examine the problems one can encounter when assuming mechanistic homogeneity based solely on such comparisons.

This is followed by an assessment of techniques for dealing with hidden mechanistic heterogeneity in sets of cases. These include theoretical tools, such as raising the level of abstraction of our theories of mechanisms (and the limits of doing so), and the step-by-step probing of a population using empirical signatures of processes to determine the bounds of valid generalizations about mechanisms (Beach and Pedersen, forthcoming).

The course will use the recent book by Haggard and Kaufman (2016) as a working example of mechanistic heterogeneity across cases, and will also draw on real-world examples from the field of policy evaluation to show what can be done about the problem.