Methods Notes

The E4A Methods Lab was developed to address common methods questions or challenges in Culture of Health research. Our goals are to strengthen the research of E4A grantees and the larger community of population health researchers, to help prospective grantees recognize compelling research opportunities, and to stimulate cross-disciplinary conversation and appreciation across the community of population health researchers. We do this through publications on our E4A Methods Blog, academic journal articles, and Methods Notes. The Methods Notes offer overviews on various topics. Please contact us if you have ideas for additional methods notes or blog posts.

Study Designs Based on Controlling Confounders Versus Using Instrumental Variables

Nearly all research questions of interest to E4A relate to establishing causality: if we intervene to change some system, policy, or action, will this deliver improvements in health or health equity? Quantitative methods for evaluating causal research questions, as opposed to questions that are merely predictive or descriptive, have developed rapidly since the 1980s. Most study designs fall within what we term instrument-based or confound-control designs. This note provides an overview of both families of designs, highlighting the advantages and disadvantages of each.

Identifying and Choosing an Instrument-Based Study Design

Several closely related instrument-based methods go by different names, such as instrumental variables, differences-in-differences, or fuzzy regression discontinuity analyses. Each can be analyzed using the statistical methods for instrumental variables, assuming that the exposure of interest has been measured. This methods note highlights how these methods differ and why one might be preferred over another, which can make it easier to decide which, if any, are applicable and valid in specific contexts.

Guidance for Sample Size, Statistical Power, and Smallest Detectable Effect Size Calculations

Calculations of statistical power and the related values of sample size and smallest detectable effect size are essential to the design of high-quality population health studies. However, conducting these calculations can be tricky. In evaluating proposals, we find that power calculations are among the most common difficulties in E4A applications. The goal of this Methods Note is to review the key inputs for conducting power calculations to ensure proposed studies in population health research are adequately powered and evaluated fairly.
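To make the key inputs concrete, here is a minimal sketch (our illustration, not taken from the note) of the standard normal-approximation formula for per-group sample size in a two-arm comparison of means:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for comparing two means,
    using the normal approximation; effect_size is Cohen's d."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2)

# A "small" standardized effect (d = 0.2) needs far more participants
# per group than a "medium" one (d = 0.5):
print(n_per_group(0.2))  # 393
print(n_per_group(0.5))  # 63
```

The inputs are exactly the quantities any power calculation must justify: the significance level, the target power, and, hardest of all, the assumed effect size.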

Considerations for Plausible and Important Effect Sizes in Population Health Research

When the goal of a study is to draw causal inferences about the impact of an intervention on population health outcomes, calculations of statistical power and the related values of sample size and smallest detectable effect size are essential. Often, the most challenging aspect of a power calculation is accurately anticipating what effect sizes are plausible to achieve for the social intervention or exposure under study. Researchers and funders must also consider how large an effect size must be to justify studying a proposed policy or intervention. The purpose of this Methods Note is to discuss considerations for plausible and important effect sizes in population health research.
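A useful way to pressure-test plausibility is to invert the power calculation: given the sample size a study can realistically achieve, how small an effect could it detect? The sketch below (our illustration, using the same normal approximation as standard power formulas) computes that smallest detectable standardized effect:

```python
import math
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized effect (Cohen's d) a two-arm comparison of
    means can detect with the given per-group n (normal approximation)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * math.sqrt(2 / n_per_group)

# With 100 people per arm, effects smaller than roughly d = 0.4 will
# usually go undetected, a constraint to weigh against how large an
# effect the intervention can plausibly produce:
print(round(min_detectable_effect(100), 2))  # 0.4
```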

Do social interventions have different health effects for different people? Heterogeneous treatment effects

This is the first in a series of two Methods Notes focusing on the related phenomena referred to in different fields as “effect modification”, “interaction”, “effect moderation”, or “heterogeneous treatment effects”. Here we use the term “heterogeneous treatment effects”, or HTEs. We explain what HTEs are, why HTEs are so important for population health research, challenges in evaluating HTEs, and open questions for future research. HTEs are important regardless of the type of exposure being considered, including clinical treatments, programs, policies, or systems. For this note, we use the term “treatment” as shorthand for any type of exposure.
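As a concrete, invented example of an HTE (our sketch, not from the note), the simulation below generates a trial in which the true treatment effect is 1.0 in one subgroup and 3.0 in another, then recovers the subgroup-specific effects from differences in means:

```python
import random
from statistics import mean

random.seed(1)
# Invented trial in which the true treatment effect differs by subgroup:
true_effect = {"A": 1.0, "B": 3.0}
rows = []
for _ in range(10000):
    group = random.choice(["A", "B"])
    treated = random.random() < 0.5
    outcome = (true_effect[group] if treated else 0.0) + random.gauss(0, 1)
    rows.append((group, treated, outcome))

def subgroup_effect(rows, g):
    """Difference in mean outcomes, treated minus control, within subgroup g."""
    t = [y for grp, tr, y in rows if grp == g and tr]
    c = [y for grp, tr, y in rows if grp == g and not tr]
    return mean(t) - mean(c)

for g in ("A", "B"):
    print(f"estimated effect in subgroup {g}: {subgroup_effect(rows, g):.2f}")
```

A single pooled analysis of these data would report one averaged effect and hide the fact that the two subgroups benefit very differently.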

Why is there so much uncertainty about heterogeneous treatment effects?

This is the second in a series of two Methods Notes focusing on heterogeneous treatment effects. As we discussed in our previous Methods Note introducing the concept of heterogeneous treatment effects (HTEs), understanding whether the effects of an exposure or treatment are different for different people is essential to population health research, especially health equity research. Here we discuss some of the research challenges in evaluating HTEs and priorities for future research about HTEs.
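One arithmetic reason for the uncertainty can be shown with a simple standard-error calculation (our sketch, assuming equal outcome variance and equal cell sizes): an interaction contrast combines four cell means instead of two, so its standard error is roughly twice that of the overall effect, and detecting an HTE of the same magnitude therefore needs about four times the sample.

```python
import math

def se_main(sigma, n_per_cell):
    """SE of the overall effect: a difference of two means, each over 2n."""
    return sigma * math.sqrt(1 / (2 * n_per_cell) + 1 / (2 * n_per_cell))

def se_interaction(sigma, n_per_cell):
    """SE of the difference-in-differences across subgroups: four cell
    means of n observations each, so four variance terms add up."""
    return sigma * math.sqrt(4 / n_per_cell)

ratio = se_interaction(1.0, 100) / se_main(1.0, 100)
print(round(ratio, 6))  # 2.0
```

Because standard errors shrink with the square root of the sample size, halving the standard error of the interaction requires quadrupling the study.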

Time-Varying Confounding: A Summary of Key Ideas and Frequently Asked Questions

To estimate the health effects of exposure to a condition, we often need to consider the sequence or whole history of exposure rather than exposure at a single point in time. For example, does sustained access to greenspace reduce cardiovascular risk? An important step in evaluating the health effects of time-varying exposures is to specify how to characterize the pattern of exposure to the condition of interest: what feature of the condition (e.g., duration, frequency, intensity) is most relevant for our health outcome? Once the characterization is specified, confounding needs to be addressed. The note provides more in-depth information on what this means and approaches to take.
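As a small illustration of the characterization step (our sketch; the greenspace example follows the note), the function below computes several alternative summaries of the same binary exposure history, each corresponding to a different causal question:

```python
from statistics import mean

def summarize_exposure(history):
    """Alternative ways to characterize one binary exposure history
    (1 = exposed in that period, 0 = not)."""
    runs, run = [], 0
    for e in history:
        run = run + 1 if e else 0
        if run:
            runs.append(run)
    return {
        "ever": int(any(history)),              # any exposure at all
        "duration": sum(history),               # total exposed periods
        "frequency": mean(history),             # share of periods exposed
        "max_sustained": max(runs, default=0),  # longest continuous stretch
    }

# e.g., monthly greenspace access over one year:
s = summarize_exposure([1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0])
print(s)
```

Two people with identical "ever exposed" values can differ sharply in duration or sustained exposure, which is why the choice of summary must be made before confounding is addressed.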

Diversity, Equity, and Inclusion Validity Arguments and the Matrix of Evidence for Validity Argumentation

Authors Dakota Cintron and Erin Hagan focus on the recently developed Matrix of Evidence for Validity Argumentation, a tool for the design or evaluation of research instruments. They explore the development of instruments using a diversity, equity, and inclusion (DEI) lens, drawing on the Matrix because it offers a schema for constructing, organizing, and arranging validity arguments throughout the instrument design process.