Feng Ruan is currently an assistant professor in the Department of Statistics and Data Science at Northwestern University. Previously, he obtained his Ph.D. in Statistics at Stanford University, advised by John Duchi, and was a postdoctoral researcher in EECS at the University of California, Berkeley, advised by Michael Jordan. His current research has three driving goals: (1) build optimal statistical inference procedures that account for crucial resource constraints such as computation and privacy; (2) develop modeling and analytic tools that provide a calculus for understanding generally solvable non-convex problems; (3) design new objectives so that local algorithms achieve guaranteed performance on problems with combinatorial structure. His personal website is https://fengruan.github.io/
Talk: Sparsity without l1 Regularization
Abstract: Sparse models are desirable for many scientific purposes. Standard strategies for obtaining sparsity rely on explicit regularization, e.g., l1 penalties, early stopping, and post-processing (such as clipping). The first part of the talk introduces a previously unknown implicit sparsity-inducing mechanism, based on a variant of (non-convex) kernel feature selection, and rigorously clarifies how this mechanism obtains exactly sparse models in finite samples even though it applies none of the known regularization techniques. The second part of the talk continues the study of the (non-convex) kernel feature selection objective, focusing on its ability to recover signal variables. We show, surprisingly, that the design of the kernel in the non-convex objective is crucial if methods that find local minima are to succeed.
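For contrast with the implicit mechanism the talk describes, the following sketch illustrates the standard explicit route to sparsity mentioned in the abstract: l1 regularization, solved here by proximal gradient descent (ISTA). The soft-thresholding proximal step sets small coefficients exactly to zero. All function names and the synthetic data below are illustrative, not part of the talk.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrinks entries toward zero and
    zeroes out any entry with magnitude below t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iters=500):
    """Minimize (1/2n) ||y - X b||^2 + lam ||b||_1 by proximal gradient."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iters):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Synthetic example: only the first two of ten variables carry signal.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.5)
print(np.count_nonzero(b_hat))  # most coefficients are exactly zero
```

The exact zeros come from the explicit l1 penalty; the point of the talk's first part is that a variant of non-convex kernel feature selection produces the same exact finite-sample sparsity with no such penalty in the objective.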