Abstract
Bayesian model selection algorithms offer an alternative to optimization-based methods for model selection, and there is evidence that Bayesian methods approximate the L0 penalty more closely, but little has been published on the model selection consistency of Bayesian methods in the high-dimensional setting. In this talk, we will discuss the notion of strong selection consistency and show that some simple spike-and-slab priors, if their hyperparameters are allowed to depend on the sample size, can be strongly consistent even when the number of features exceeds the sample size. Spike-and-slab variable selection algorithms, however, do not scale well outside the linear model framework. A more scalable alternative, called Skinny Gibbs, is introduced to mitigate the computational burden without losing strong selection consistency. Logistic regression with high-dimensional covariates serves as the primary example. The talk is based on joint work with Naveen Narisetty and Juan Shen.
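To make the spike-and-slab construction concrete, the following minimal sketch implements a toy Gibbs sampler for the Gaussian linear model with a two-component normal spike-and-slab prior. This is only an illustration of the generic prior structure, not the Skinny Gibbs algorithm of the talk; the function name and all hyperparameter values (tau0, tau1, q) are arbitrary choices for the example, and the noise variance is held fixed for simplicity.

    import numpy as np
    from scipy.stats import norm
    from scipy.special import expit

    def spike_slab_gibbs(X, y, tau0=0.01, tau1=1.0, q=0.1,
                         sigma2=1.0, n_iter=2000, seed=0):
        # Toy Gibbs sampler for y = X beta + eps, eps ~ N(0, sigma2 I),
        # with prior  beta_j | z_j=0 ~ N(0, tau0^2)  (spike, tau0 small),
        #             beta_j | z_j=1 ~ N(0, tau1^2)  (slab),
        #             z_j ~ Bernoulli(q) independently.
        # Returns posterior inclusion probabilities for each coefficient.
        rng = np.random.default_rng(seed)
        n, p = X.shape
        XtX, Xty = X.T @ X, X.T @ y
        z = np.zeros(p, dtype=int)
        incl = np.zeros(p)
        burn = n_iter // 2
        for it in range(n_iter):
            # beta | z, y is Gaussian with precision X'X/sigma2 + D^{-1},
            # where D holds the prior variances selected by z.
            tau2 = np.where(z == 1, tau1 ** 2, tau0 ** 2)
            cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / tau2))
            beta = rng.multivariate_normal(cov @ Xty / sigma2, cov)
            # z_j | beta_j is Bernoulli; compare slab vs spike densities
            # at beta_j on the log scale for numerical stability.
            logit = (np.log(q / (1 - q))
                     + norm.logpdf(beta, scale=tau1)
                     - norm.logpdf(beta, scale=tau0))
            z = rng.binomial(1, expit(logit))
            if it >= burn:
                incl += z
        return incl / (n_iter - burn)

    # Example: 3 truly active covariates out of 20.
    rng = np.random.default_rng(1)
    n, p = 100, 20
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = 2.0
    y = X @ beta_true + rng.standard_normal(n)
    print(spike_slab_gibbs(X, y).round(2))

Two points of contact with the abstract: first, the sample-size dependence mentioned above would enter through tau0 and tau1 (e.g., a spike variance that shrinks as n grows), which this sketch leaves fixed; second, the p-by-p matrix inversion inside every sweep costs O(p^3) and is the kind of bottleneck that makes such samplers hard to scale, which is, roughly speaking, the cost that Skinny Gibbs is designed to avoid by restricting the expensive joint update to the currently active coordinates.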