Jonathan Williams is an Assistant Professor in the Department of Statistics at North Carolina State University. He holds a PhD in Statistics from the University of North Carolina at Chapel Hill and a master's degree in Mathematics from New York University. His current research interests span the foundations of statistics and imprecise probability, machine learning approaches to uncertainty quantification such as conformal prediction and universal inference, and applications of hidden Markov models to disease progression and the escalation of intrastate violence.
Talk: Imprecision in statistical learning
Abstract: Motivated by the need for safe and reliable methods of uncertainty quantification in machine learning, I propose and develop ideas for a model-free statistical framework for imprecise probabilistic prediction inference. This framework delivers uncertainty quantification in the form of prediction sets with finite-sample control of type 1 errors, a property shared with conformal prediction sets, while also offering more versatile tools for imprecise probabilistic reasoning. Furthermore, I propose a precise probabilistic approximation to the model-free imprecise framework and examine its theoretical and empirical properties. Approximating a belief/plausibility measure pair by a probability measure in the credal set that is optimal in some sense is a critical step toward broader adoption of imprecise probabilistic approaches to inference in the statistical and machine learning communities. More generally, it remains largely unsettled in the statistical and machine learning literatures how to properly quantify uncertainty, in that there is no generally accepted standard of accountability for stated uncertainties. The research I present is aimed at motivating a framework for statistical inference with reliability and accountability as its guiding principles. The related research articles can be found at https://jonathanpw.github.io/.
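To fix ideas about two objects named in the abstract, the display below sketches the standard textbook definition of the credal set associated with a belief/plausibility pair and the finite-sample coverage guarantee shared with conformal prediction. The notation (Bel, Pl, C_alpha) is generic background usage assumed for illustration, not drawn from the linked articles.

% Credal set of a belief/plausibility pair (Bel, Pl), where Pl(A) = 1 - Bel(A^c):
\[
  \mathcal{C}(\mathrm{Bel}) \;=\; \{\, P : \mathrm{Bel}(A) \le P(A) \le \mathrm{Pl}(A) \ \text{for all events } A \,\}.
\]
% Finite-sample type 1 error control for a prediction set C_alpha at level alpha,
% the guarantee conformal prediction provides:
\[
  \Pr\bigl( Y_{n+1} \in C_\alpha(X_{n+1}) \bigr) \;\ge\; 1 - \alpha.
\]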