
Determining the Causal Effects of Interventions: Instrument-Based Study Designs

December 11, 2019
Methods

Introduction

As discussed in our previous blog post, there are a variety of methods that can be used to identify causal impacts of policies, programs, or practices. One approach is instrument-based study designs. When using instrument-based methods, researchers take advantage of some random or random-like source of variation in what people are exposed to. This source of random variation is often called an “instrument”.

In this post, we'll introduce considerations for deciding when one instrument-based approach might be more useful than another in a given context. For more details on instrument-based methods, how they work, and the distinctions between them, review our Instrument-Based Designs Methods Note.

Example Research Questions

Instrument-based methods are useful for answering a variety of research questions. Some examples from E4A funded projects are included below.

What are the effects of living wage policies on material and psychological well-being, health status, and health behaviors? Do some living wage policies have bigger effects than others?

How does the presence of children in a defendant’s life affect sentencing decisions? Do those decisions impact children’s experiences with the foster care and health care systems?

Possible Approaches

The distinctions between instrument-based study designs lie in how people come to be exposed to an intervention and how that exposure is leveraged in the analysis. In some situations, more than one instrument-based method may be appropriate or applicable. The choice between them depends on which approach is most likely to be accurate and precise in that context. One approach, an interrupted time series design, compares health outcomes in a place just before and just after some important change has taken place, such as implementation of a new policy. If, however, other factors affecting health were changing at the same time as the policy, we would worry about whether the observed impacts were due to the policy of interest or to those other changes. An alternative option, known as a difference-in-differences approach, compares the place of interest to a similar place that did not adopt the policy but experienced similar changes in other health factors.
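To make the difference-in-differences logic concrete, here is a minimal sketch with entirely hypothetical numbers (the cities, outcome scale, and values are invented for illustration; a real study would estimate this with regression models, covariates, and uncertainty estimates):

```python
# Illustrative difference-in-differences calculation on made-up data.
# Suppose we track a mean well-being score before and after a living
# wage policy, in the policy city and in a comparison city without it.
policy_before, policy_after = 62.0, 68.0
comparison_before, comparison_after = 60.0, 63.0

# Change over time in each place.
policy_change = policy_after - policy_before              # 6.0
comparison_change = comparison_after - comparison_before  # 3.0

# The difference-in-differences estimate: the policy city's change minus
# the change we would have expected anyway, as proxied by the comparison
# city's change over the same period.
did_estimate = policy_change - comparison_change
print(did_estimate)  # 3.0
```

The key assumption, as the text notes, is that the comparison place captures what would have happened in the policy place absent the policy; if other health-relevant factors diverged between the two places, the estimate is biased.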

In all cases, delivering the highest possible quality evidence requires identifying and ruling out, on a case-by-case basis, alternative explanations for an association besides causality (e.g., other health factors changing at the same time as the policy).

Putting Evidence into Practice

Evidence for Action has funded a variety of instrument-based or similarly designed studies. For example, one project uses the adoption of living wage policies to assess the health effects of additional income, and another examines how parents’ interactions with the criminal justice system affect children’s health and well-being outcomes, specifically how sentencing differs based on arbitrary judge assignment.

In each of these cases, the research findings can inform decision-makers about whether programs, policies, or practices positively or negatively impact health and other outcomes.

Tools & Resources

For more on instrument-based methods, how they work, and the distinctions between them, read our Instrument-Based Designs Methods Note. We welcome your comments and feedback on these ideas!

Download Methods Note

Read the Journal Article

Many instrument-based study designs can be described as "quasi-experimental," although this term has been used inconsistently in prior research. We use the term instrument-based to be as specific as possible about this approach to identifying the causal impacts of policies, programs, or practices.

About the Author

Ellicott Matthay, PhD, is a social epidemiologist and postdoctoral scholar with E4A. She conducts methodological investigations to improve the way that research in her substantive areas is done, because she believes that improving the methodological rigor of applied studies is one of the most important steps to identifying effective prevention strategies.