Colin Fogarty is an Assistant Professor of Statistics at the University of Michigan. His research interests lie in the design and analysis of randomized experiments and observational studies. In observational studies, Colin develops methods to assess the robustness of a study's findings to unmeasured confounding. His research on randomized experiments predominantly focuses on randomization inference under both constant and heterogeneous effects. He received his PhD in Statistics from the Wharton School of the University of Pennsylvania, where he was advised by Dylan Small.
Talk: Sensitivity and Multiplicity
Abstract: Corrections for multiple comparisons generally imagine that all other modeling assumptions are met for the hypothesis tests being conducted, so that the only source of inflated false rejections is the multiplicity that has been ignored when performing inference. In reality, such modes of inference often rest upon unverifiable assumptions. Common expedients include the assumption of "representativeness" of the sample at hand for the population of interest, and of "no unmeasured confounding" when inferring treatment effects in observational studies. In a sensitivity analysis, one quantifies the magnitude of the departure from these unverifiable assumptions required to explain away the findings of a study. Individually, both sensitivity analyses and multiplicity controls can reduce the rate at which true signals are detected and reported. In studies with multiple outcomes resting upon untestable assumptions, one may be concerned that correcting for multiple comparisons while also conducting a sensitivity analysis could render the study entirely devoid of power. We present results on sensitivity analysis for observational studies with multiple endpoints, where the researcher must simultaneously account for multiple comparisons and assess robustness to hidden bias. We find that of the two pursuits, it is recognizing the potential for hidden bias that plays the largest role in determining the conclusions of a study: individual findings that are robust to hidden bias are remarkably persistent in the face of multiple comparisons, while sensitive findings are quickly erased regardless of the number of comparisons. Through simulation studies and empirical examples, we show that by incorporating the proposed methodology within a closed testing framework, one can often attain the same power in a sensitivity analysis for testing individual hypotheses that one would have attained had one not accounted for multiple comparisons at all. This suggests that once one commits to conducting a sensitivity analysis, the additional loss in power from controlling for multiple comparisons may be substantially attenuated.
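To make the closed testing idea concrete, below is a minimal illustrative sketch (in Python) of generic closed testing with a Bonferroni local test applied to per-outcome p-values, such as the worst-case p-values a sensitivity analysis produces at a fixed level of hidden bias. The function name, the example p-values, and the choice of Bonferroni as the local test are assumptions made for illustration only; they are not taken from the talk, whose proposed methodology may employ a different and more powerful local test.

from itertools import combinations

def closed_testing_bonferroni(pvals, alpha=0.05):
    """Generic closed testing with a Bonferroni local test.

    `pvals` holds one p-value per outcome; in a sensitivity analysis
    these would be worst-case p-values computed at a chosen bias level.
    Hypothesis i is rejected only if every intersection hypothesis S
    containing i is rejected by the Bonferroni local test, i.e.
    min_{j in S} p_j <= alpha / |S|.
    """
    m = len(pvals)
    rejected = []
    for i in range(m):
        reject_i = True
        others = [j for j in range(m) if j != i]
        # Enumerate every subset S of the outcomes that contains i.
        for size in range(m):
            for combo in combinations(others, size):
                S = (i,) + combo
                if min(pvals[j] for j in S) > alpha / len(S):
                    reject_i = False
                    break
            if not reject_i:
                break
        if reject_i:
            rejected.append(i)
    return rejected

# Hypothetical worst-case p-values for four outcomes at a fixed bias level.
print(closed_testing_bonferroni([0.004, 0.030, 0.045, 0.60], alpha=0.05))

With these hypothetical inputs, only the first outcome survives the multiplicity correction at the assumed bias level; repeating the calculation over a grid of bias levels would trace out how robust each individual finding is to hidden bias.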