Frederic Koehler is currently a Motwani Postdoctoral Fellow in the Department of Computer Science at Stanford University. He received his Ph.D. in Mathematics and Statistics from the Massachusetts Institute of Technology, where he was co-advised by Ankur Moitra and Elchanan Mossel; before that, he studied mathematics as an undergraduate at Princeton University. His research interests lie at the intersection of algorithms and statistics and include work on sampling algorithms, generalization theory, computational-statistical gaps, and provable learning algorithms.
Talk: Learning Classifiers under Benign Misspecification
Abstract: Learning to classify data points with a halfspace is one of the foundational problems in machine learning. It is known that learning optimal halfspaces is statistically possible even when we are agnostic to the generative model of the data, but matching this guarantee is computationally hard in general. We introduce a model of misspecification for generalized linear models in which the misspecification is benign/semirandom (the noise level can only be decreased) and give new algorithms that achieve nearly optimal guarantees in this model, even though standard heuristics provably fail. In particular, our results resolve a number of open questions for the special case of learning halfspaces with Massart noise. Based on joint work with Sitan Chen, Ankur Moitra, and Morris Yau.