Soroosh Shafiee joined the faculty at Cornell in July 2023 as an Assistant Professor in the School of Operations Research and Information Engineering. Before that, he held positions as a postdoctoral researcher at the Tepper School of Business at Carnegie Mellon University and the Automatic Control Laboratory at ETH Zurich. He holds B.Sc. and M.Sc. degrees in Electrical Engineering from the University of Tehran and a Ph.D. in Management of Technology from École Polytechnique Fédérale de Lausanne. His research interests revolve around optimization under uncertainty, low-complexity decision-making, and optimal transport.
Title: Outlier-Robustness and Optimal Transport
Abstract: Distributionally robust optimization (DRO) is an effective approach for data-driven decision-making in the presence of uncertainty. Geometric uncertainty due to sampling or localized perturbations of data points is captured by Wasserstein DRO (WDRO), which seeks to learn a model that performs uniformly well over a Wasserstein ball centered around the observed data distribution. However, WDRO fails to account for non-geometric perturbations such as adversarial outliers, which can greatly distort the Wasserstein distance measurement and degrade the learned model. We address this gap by proposing a novel outlier-robust WDRO framework for decision-making under both geometric (Wasserstein) perturbations and non-geometric (total variation (TV)) contamination that allows an ε-fraction of data to be arbitrarily corrupted. We design an uncertainty set using a certain robust Wasserstein ball that accounts for both perturbation types and derive minimax optimal excess risk bounds for this procedure that explicitly capture the Wasserstein and TV risks. We prove a strong duality result that enables tractable convex reformulations and efficient computation of our outlier-robust WDRO problem. When the loss function depends only on low-dimensional features of the data, we eliminate certain dimension dependencies from the risk bounds that are unavoidable in the general setting. Finally, we present experiments validating our theory on standard regression and classification tasks.
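As a quick numerical illustration of the motivation above (a sketch for intuition, not material from the talk): in one dimension, the Wasserstein-1 distance between two equal-size empirical samples equals the mean absolute difference of their sorted values, so a single adversarial outlier of magnitude M shifts the distance by roughly M/n, growing without bound as M increases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
clean = rng.normal(size=n)  # n i.i.d. standard normal observations

def w1(a, b):
    # Wasserstein-1 distance between equal-size 1-D empirical samples:
    # average absolute difference of the order statistics.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# Corrupt a single data point (an epsilon = 1/n contamination) and watch
# the Wasserstein distance to the clean sample grow linearly in the
# outlier's magnitude, even though only one point changed.
for magnitude in [10.0, 100.0, 1000.0]:
    corrupted = clean.copy()
    corrupted[0] = magnitude
    print(f"outlier at {magnitude:7.1f}: W1 = {w1(clean, corrupted):.3f}")
```

This is exactly the failure mode the abstract describes: the TV component of the proposed uncertainty set lets the decision-maker discard such an ε-fraction of mass instead of paying an unbounded Wasserstein cost for it.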