Dr. Edoardo M. Airoldi is the Millard E. Gladfelter Professor of Statistics and Data Science at Temple University. He also serves as Director of the Fox School’s Data Science Center.
Airoldi joined the Fox School from Harvard University, where he had been a full-time faculty member in the Department of Statistics since 2009. He founded the Harvard Laboratory for Applied Statistics & Data Science and directed it until 2017. He has also held visiting positions at MIT and Yale University and served as a research associate at Princeton University.
A distinguished researcher, Airoldi has authored more than 140 publications and earned more than 12,000 citations. His work focuses on statistical theory and methods for designing and analyzing experiments on large networks and, more generally, on modeling and inferential issues that arise in analyses that leverage network data.
His work has appeared in journals across statistics, computer science, and general science, including the Annals of Statistics, the Journal of the American Statistical Association, the Journal of Machine Learning Research, the Proceedings of the National Academy of Sciences, and Nature. He has received a Sloan Fellowship, the Shutzer Fellowship from the Radcliffe Institute for Advanced Study, an NSF CAREER Award, and an ONR Young Investigator Program Award, among other honors. He delivered a plenary talk at the 2015 National Academy of Sciences Sackler Colloquium on “Causal Inference and Big Data” and an IMS Medallion Lecture at the 2017 Joint Statistical Meetings.
Airoldi earned his PhD in Computer Science from Carnegie Mellon University, where he also received Master of Science degrees in Statistics and in Statistical and Computational Learning. He earned a Bachelor of Science in Mathematical Statistics and Economics from Bocconi University in Italy.
Talk: Model-assisted design of experiments
Abstract: Classical approaches to causal inference largely rely on the assumption of “lack of interference”, according to which the outcome of an individual does not depend on the treatments assigned to others, as well as on many other simplifying assumptions, including the absence of strategic behavior. In many applications, however, such as evaluating the effectiveness of health-related interventions that leverage social structure, assessing the impact of product innovations and ad campaigns on social media platforms, or experimenting at scale in large IT companies, several of these common simplifying assumptions are untenable. Moreover, the effect of interference is often itself an inferential target of interest, rather than a nuisance. In this talk, we will formalize, within the potential outcomes framework, issues that arise in estimating causal effects when interference can be attributed to a network among the units of analysis. We will then introduce a model-assisted strategy for experimental design in this context, centered on a useful role for statistical models: we want certain finite-sample properties of the estimator to hold even if the model fails catastrophically, while gaining efficiency when certain aspects of the model are correct. We will conclude by contrasting design-based, model-based, and model-assisted approaches to experimental design from a decision-theoretic perspective.
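To make the interference setting concrete, the following is a minimal, self-contained sketch of one standard design from this literature: cluster randomization on a network paired with a Horvitz-Thompson contrast of “fully treated” versus “fully control” neighborhood exposures. It is not the speaker’s method; the toy graph, the outcome model, and all names (assign, exposures, and so on) are illustrative assumptions.

```python
# Illustrative sketch only (not the speaker's method): graph cluster
# randomization with a Horvitz-Thompson contrast of "fully treated"
# vs. "fully control" neighborhood exposures. The graph, outcome model,
# and all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, n_clusters, n_draws = 300, 15, 2000

# Toy spatial network: connect units closer than a radius.
pos = rng.random((n, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
adj = (dist < 0.1) & ~np.eye(n, dtype=bool)

# Crude clusters by spatial stripe; real designs cluster the graph itself.
clusters = np.minimum((pos[:, 0] * n_clusters).astype(int), n_clusters - 1)

def assign():
    """Bernoulli(1/2) randomization at the cluster level, mapped to units."""
    return (rng.random(n_clusters) < 0.5)[clusters]

def exposures(z):
    """Unit is 'fully treated/control' if it and all neighbors share one arm."""
    has_control_nbr = (adj & ~z).any(axis=1)
    has_treated_nbr = (adj & z).any(axis=1)
    return z & ~has_control_nbr, ~z & ~has_treated_nbr

# Exposure probabilities under the design, estimated by Monte Carlo
# over repeated re-randomizations.
pi1, pi0 = np.zeros(n), np.zeros(n)
for _ in range(n_draws):
    e1, e0 = exposures(assign())
    pi1 += e1
    pi0 += e0
pi1 /= n_draws
pi0 /= n_draws

# One realized experiment; outcomes include a neighborhood spillover term.
z = assign()
deg = np.maximum(adj.sum(axis=1), 1)
y = 1.0 + 2.0 * z + (adj @ z.astype(float)) / deg + rng.normal(0, 0.5, n)

# Horvitz-Thompson estimate of the contrast between exposure conditions,
# restricted to units with positive estimated exposure probability.
e1, e0 = exposures(z)
k1, k0 = e1 & (pi1 > 0), e0 & (pi0 > 0)
tau_hat = (y[k1] / pi1[k1]).sum() / n - (y[k0] / pi0[k0]).sum() / n
print(f"HT contrast (all-treated vs. all-control exposure): {tau_hat:.2f}")
```

Under such a design, the Horvitz-Thompson contrast is unbiased whenever every unit’s exposure probability is positive and known, with no assumptions on the outcomes; the model-assisted strategy discussed in the abstract aims to retain design-based guarantees of this flavor while using an outcome model to improve efficiency when aspects of the model are correct.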