The Statistics Seminar speaker for Tuesday, December 11, 2018, is Pragya Sur, a fifth-year Ph.D. student in the Department of Statistics at Stanford University, advised by Prof. Emmanuel Candes. Prior to joining Stanford, she received a Bachelor of Statistics in 2012 and a Master of Statistics in 2014 from the Indian Statistical Institute, Kolkata. She is broadly interested in developing theory and methodology for accurate inference and prediction in the high-dimensional models commonly used to analyze modern large-scale datasets. In parallel, she is interested in controlled variable selection and its connections to causality, as well as questions arising in the context of fair machine learning. She is a recipient of a Ric Weiland Graduate Fellowship in the Humanities and Sciences from Stanford University.
Talk: A modern maximum-likelihood approach for high-dimensional logistic regression
Abstract: Logistic regression is arguably the most widely used and studied non-linear model in statistics. Statistical inference based on classical maximum-likelihood theory is ubiquitous in this context. This theory hinges on well-known fundamental results: (1) the maximum-likelihood estimate (MLE) is asymptotically unbiased and normally distributed, (2) its variability can be quantified via the inverse Fisher information, and (3) the likelihood-ratio test (LRT) is asymptotically chi-square distributed. In this talk, I will show that in the common modern setting where the number of features and the sample size are both large and comparable, these classical results are far from accurate. In fact, (1) the MLE is biased, (2) its variability is far greater than classical theory predicts, and (3) the LRT is not distributed as a chi-square. Consequently, p-values obtained from classical theory are completely invalid in high dimensions.
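The bias described above is easy to see empirically. The following is a minimal simulation sketch (not taken from the talk; all parameter choices here are illustrative assumptions): with the ratio of features to samples at p/n = 0.2 and a moderate signal strength, the unregularized logistic MLE systematically overestimates the magnitudes of the truly nonzero coefficients, even though classical theory says it should be approximately unbiased.

```python
import numpy as np

# Illustrative high-dimensional setting (assumed parameters, not from the talk):
# kappa = p/n = 0.2, signal strength Var(x' beta) = 5.
rng = np.random.default_rng(0)
n, p = 4000, 800                      # kappa = p/n = 0.2
k = 200                               # number of truly nonzero coefficients
beta = np.zeros(p)
beta[:k] = np.sqrt(5.0 / k)           # Var(x' beta) = 5 since X_ij ~ N(0, 1)

X = rng.standard_normal((n, p))
prob = 1.0 / (1.0 + np.exp(-X @ beta))
y = (rng.random(n) < prob).astype(float)

def logistic_mle(X, y, n_iter=30):
    """Unregularized logistic MLE via Newton's method (IRLS)."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = np.clip(X @ b, -30.0, 30.0)     # guard against overflow in exp
        mu = 1.0 / (1.0 + np.exp(-eta))       # fitted probabilities
        w = mu * (1.0 - mu)                   # Fisher weights
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * w[:, None])         # observed/expected information
        step = np.linalg.solve(hess, grad)
        b += step
        if np.max(np.abs(step)) < 1e-8:
            break
    return b

beta_hat = logistic_mle(X, y)
# Average estimate over the truly nonzero coordinates, relative to the truth.
# Classical theory predicts a ratio near 1; here it is substantially larger.
ratio = beta_hat[:k].mean() / beta[0]
print(f"average inflation of nonzero coefficients: {ratio:.2f}")
```

Averaging over the 200 nonzero coordinates washes out the per-coordinate noise, so the printed ratio isolates the systematic inflation of the MLE rather than its (also inflated) variability.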
I will then propose a new theory that characterizes the asymptotic behavior of both the MLE and the LRT in this high-dimensional setting, under some assumptions on the covariate distribution. Empirical evidence demonstrates that this asymptotic theory provides accurate inference in finite samples. Practical implementation of these results requires estimating a single scalar, the overall signal strength, and I will propose a procedure for estimating this parameter precisely. Finally, I will describe analogous characterizations for regularized estimators such as the logistic lasso and ridge in high dimensions.
This is based on joint work with Emmanuel Candes and Yuxin Chen.