Statistics Seminar Speaker: Colin Fogarty, 11/15/2023


Wednesday Nov 15 2023


4:15pm @ G01 Biotech
In Statistics Seminars

Colin Fogarty is an Assistant Professor of Statistics at the University of Michigan. His research interests lie in the design and analysis of randomized experiments and observational studies. In observational studies, he develops methods to assess the robustness of a study's findings to unmeasured confounding. His research on randomized experiments focuses predominantly on randomization inference under both constant and heterogeneous treatment effects. He received his PhD in Statistics from the Wharton School of the University of Pennsylvania, where he was advised by Dylan Small.

Talk: Sensitivity and Multiplicity

Abstract: Corrections for multiple comparisons generally imagine that all other modeling assumptions are met for the hypothesis tests being conducted, such that the only reason for inflated false rejections is the fact that multiplicity has been ignored when performing inference. In reality, such modes of inference often rest upon unverifiable assumptions. Common expedients include the assumption of "representativeness" of the sample at hand for the population of interest, and of "no unmeasured confounding" when inferring treatment effects in observational studies. In a sensitivity analysis, one quantifies the magnitude of the departure from unverifiable assumptions required to explain away the findings of a study. Individually, both sensitivity analyses and multiplicity controls can reduce the rate at which true signals are detected and reported. In studies with multiple outcomes resting upon untestable assumptions, one may be concerned that correcting for multiple comparisons while also conducting a sensitivity analysis could render the study entirely devoid of power. We present results on sensitivity analysis for observational studies with multiple endpoints, where the researcher must simultaneously account for multiple comparisons and assess robustness to hidden bias. We find that of the two pursuits, it is recognizing the potential for hidden bias that plays the largest role in determining the conclusions of a study: individual findings that are robust to hidden bias are remarkably persistent in the face of multiple comparisons, while sensitive findings are quickly erased regardless of the number of comparisons. Through simulation studies and empirical examples, we show that through the incorporation of the proposed methodology within a closed testing framework, in a sensitivity analysis one can often attain the same power for testing individual hypotheses that one would have attained had one not accounted for multiple comparisons at all. This suggests that once one commits to conducting a sensitivity analysis, the additional loss in power from controlling for multiple comparisons may be substantially attenuated.
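For readers unfamiliar with the setup, the "sensitivity analysis" in the abstract is typically formalized in the Rosenbaum tradition; the following is a standard sketch of that model for context, not material taken from the talk itself. Hidden bias is indexed by a parameter Γ ≥ 1: for any two units j and k with identical observed covariates, their unknown probabilities of receiving treatment, π_j and π_k, are assumed to satisfy

    1/Γ ≤ [π_j (1 − π_k)] / [π_k (1 − π_j)] ≤ Γ.

Setting Γ = 1 recovers a randomized experiment (no unmeasured confounding), while larger Γ allows unobserved covariates to exert greater influence on treatment assignment; a sensitivity analysis reports the largest Γ at which a finding survives. The closed testing framework mentioned in the abstract controls the familywise error rate by rejecting an individual hypothesis H_i only when every intersection hypothesis containing H_i is itself rejected by a level-α test.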

Event Categories

  • Statistics Seminars
  • Special Events
