Anirban Bhattacharya is an assistant professor in the Department of Statistics at Texas A&M University. His current methodological and theoretical research interests focus on latent variable models for multivariate categorical and count data, Bayesian variable selection in linear and non-linear models, probabilistic models for the analysis of network data, the trade-off between computational and theoretical complexity in Gaussian process regression models, and the properties of continuous shrinkage priors in high dimensions. More broadly, he is interested in the theoretical properties of high-dimensional and nonparametric Bayesian procedures. Dr. Bhattacharya has previously worked on developing parsimonious models for high-dimensional contingency table data, motivated by epidemiological and genetic applications.
Talk: Scalable MCMC for the horseshoe prior using Markov kernel approximations
Abstract: Gaussian scale mixture priors such as the horseshoe are frequently employed in Bayesian analysis of high-dimensional models, and several members of this family have optimal risk properties when the truth is sparse. While optimization-based algorithms for the extremely popular lasso and elastic net procedures can scale to dimensions in the hundreds of thousands, corresponding Bayesian methods that use Markov chain Monte Carlo (MCMC) for computation are limited to problems at least an order of magnitude smaller. This is due to the high computational cost per step of the associated Markov kernel and to the growth of the variance of time-averaging estimators as a function of dimension. We propose an MCMC algorithm for computation in these models that combines block updating with approximations of the Markov kernel to directly combat both of these factors. Our algorithm achieves orders-of-magnitude speedups over the best existing alternatives in high-dimensional applications, and we give theoretical guarantees for the accuracy of the kernel approximation. The scalability of the algorithm is illustrated in simulations with problems as large as N = 5,000 observations and p = 50,000 predictors, and in an application to a genome-wide association study with N = 2,267 and p = 98,385. The empirical results also show that the new algorithm yields estimates with lower mean squared error and intervals with better coverage, and that it elucidates features of the posterior that previous algorithms often missed in high dimensions, including bimodality of posterior marginals indicating uncertainty about which covariates belong in the model. This latter feature is an important motivation for a Bayesian approach to testing and selection in high dimensions.
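
For readers unfamiliar with horseshoe computation, the following is a minimal Python sketch of a standard blocked Gibbs sampler for horseshoe regression; it is not the approximate algorithm proposed in the talk, and all names and structure are illustrative. It assumes the model y ~ N(X beta, sigma^2 I) with beta_j | lambda_j, tau ~ N(0, sigma^2 tau^2 lambda_j^2), half-Cauchy priors on lambda_j and tau, and an improper 1/sigma^2 prior on the error variance. The coefficient block is drawn jointly via the O(n^2 p) sampler of Bhattacharya, Chakraborty and Mallick (2016), and the scale parameters use the inverse-gamma auxiliary-variable updates of Makalic and Schmidt (2016). The n x n linear solve inside sample_beta is the per-step bottleneck that a kernel approximation of the kind described in the abstract would target.

import numpy as np

rng = np.random.default_rng(0)

def inv_gamma(shape, rate, size=None):
    # InverseGamma(shape, rate): if G ~ Gamma(shape, rate) then 1/G ~ IG(shape, rate).
    return 1.0 / rng.gamma(shape, 1.0 / rate, size=size)

def sample_beta(X, y, sigma2, d):
    # Joint draw beta | rest ~ N(A^{-1} X'y, sigma^2 A^{-1}) with
    # A = X'X + diag(1/d), without forming the p x p matrix A
    # (Bhattacharya, Chakraborty & Mallick, 2016).
    n, p = X.shape
    sigma = np.sqrt(sigma2)
    u = np.sqrt(d) * rng.standard_normal(p)
    v = X @ u + rng.standard_normal(n)
    # The n x n solve below dominates the cost per step; the talk's
    # approximation idea amounts to replacing this matrix by a cheaper
    # surrogate (e.g., dropping coordinates with negligible d_j).
    M = X @ (d[:, None] * X.T) + np.eye(n)
    w = np.linalg.solve(M, y / sigma - v)
    return sigma * (u + d * (X.T @ w))

def horseshoe_gibbs(X, y, n_iter=1000):
    n, p = X.shape
    beta = np.zeros(p)
    lam2, nu = np.ones(p), np.ones(p)   # local scales and auxiliaries
    tau2, xi, sigma2 = 1.0, 1.0, 1.0    # global scale, auxiliary, error variance
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        beta = sample_beta(X, y, sigma2, tau2 * lam2)
        # Local scales lambda_j^2 and auxiliaries nu_j (Makalic & Schmidt, 2016).
        lam2 = inv_gamma(1.0, 1.0 / nu + beta**2 / (2.0 * tau2 * sigma2))
        nu = inv_gamma(1.0, 1.0 + 1.0 / lam2)
        # Global scale tau^2 and auxiliary xi.
        tau2 = inv_gamma((p + 1) / 2.0,
                         1.0 / xi + np.sum(beta**2 / lam2) / (2.0 * sigma2))
        xi = inv_gamma(1.0, 1.0 + 1.0 / tau2)
        # Error variance sigma^2 under the improper 1/sigma^2 prior.
        resid = y - X @ beta
        sigma2 = inv_gamma((n + p) / 2.0,
                           0.5 * (resid @ resid
                                  + np.sum(beta**2 / (tau2 * lam2))))
        draws[t] = beta
    return draws

Because sample_beta costs O(n^2 p) per iteration rather than O(p^3), this blocked scheme already scales to p much larger than n; the abstract's contribution is to push further by approximating the resulting Markov kernel while controlling the approximation error.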