Elastic net regression modeling with the orthant normal prior

The elastic net procedure is a form of regularized optimization for linear regression that provides a bridge between ridge regression and the lasso. The estimate that it produces can be viewed as a Bayesian posterior mode under a prior distribution implied by the form of the elastic net penalty. This article broadens the scope of the Bayesian connection by providing a complete characterization of a class of prior distributions that generate the elastic net estimate as the posterior mode. The resulting model-based framework allows for a methodology that moves beyond exclusive use of the posterior mode by considering inference based on the full posterior distribution.

Two characterizations of the class of prior distributions are introduced: a properly normalized, direct characterization, which is shown to be conjugate for linear regression models, and an alternate representation as a scale mixture of normal distributions. Prior distributions are proposed for the regularization parameters, resulting in an infinite mixture of elastic net regression models that allows for adaptive, data-based shrinkage of the regression coefficients. Posterior inference is easily achieved using Markov chain Monte Carlo (MCMC) methods. Uncertainty about model specification is addressed from a Bayesian perspective by assigning prior probabilities to all possible models. Corresponding computational approaches are described. Software for implementing the MCMC methods described in this article, written in C++ with an R package interface, is available at hans/software/.
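As background for the abstract above: the (non-Bayesian) elastic net estimate is the minimizer of a least-squares criterion plus a combined lasso and ridge penalty, and it is this estimate that the paper recovers as a posterior mode. A minimal sketch of computing that estimate by coordinate descent is given below; the function names and the specific penalty parameterization (0.5‖y − Xb‖² + λ₁‖b‖₁ + 0.5λ₂‖b‖²) are illustrative choices, not the paper's C++/R implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net(X, y, lam1, lam2, n_sweeps=500):
    """Coordinate descent for the elastic net estimate, minimizing
    0.5*||y - X b||^2 + lam1*||b||_1 + 0.5*lam2*||b||^2.
    lam1 = 0, lam2 = 0 recovers ordinary least squares."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)  # x_j' x_j for each column j
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with coordinate j removed
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            # Closed-form coordinate update: soft-threshold, then
            # shrink by the ridge term in the denominator
            b[j] = soft_threshold(rho, lam1) / (col_ss[j] + lam2)
    return b
```

The λ₁ term produces exact zeros (lasso-like sparsity) while the λ₂ term stabilizes the denominator (ridge-like shrinkage); the paper's orthant normal prior is constructed so that its posterior mode coincides with this estimate.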

References in zbMATH (referenced in 13 articles, 1 standard article)


  1. Smith, Adam N.; Allenby, Greg M.: Demand models with random partitions (2020)
  2. Bhadra, Anindya; Datta, Jyotishka; Polson, Nicholas G.; Willard, Brandon: Lasso meets horseshoe: a survey (2019)
  3. Chiquet, Julien; Mary-Huard, Tristan; Robin, Stéphane: Structured regularization for conditional Gaussian graphical models (2017)
  4. Hamedani, Hamideh D.; Moosavi, Sara Sadat: Dealing with big data: comparing dimension reduction and shrinkage regression methods (2017)
  5. Bhattacharya, Anirban; Dunson, David B.; Pati, Debdeep; Pillai, Natesh S.: Sub-optimality of some continuous shrinkage priors (2016)
  6. Tan, Aixin; Huang, Jian: Bayesian inference for high-dimensional linear regression under mnet priors (2016)
  7. Ghosh, Joyee; Tan, Aixin: Sandwich algorithms for Bayesian variable selection (2015)
  8. Jang, Woncheol; Lim, Johan; Lazar, Nicole A.; Loh, Ji Meng; Yu, Donghyeon: Some properties of generalized fused Lasso and its applications to high dimensional data (2015)
  9. Karagiannis, Georgios; Konomi, Bledar A.; Lin, Guang: A Bayesian mixed shrinkage prior procedure for spatial-stochastic basis selection and evaluation of gPC expansions: applications to elliptic SPDEs (2015)
  10. Liu, Fei; Chakraborty, Sounak; Li, Fan; Liu, Yan; Lozano, Aurelie C.: Bayesian regularization via graph Laplacian (2014)
  11. Pati, Debdeep; Bhattacharya, Anirban; Pillai, Natesh S.; Dunson, David: Posterior contraction in sparse Bayesian factor models for massive covariance matrices (2014)
  12. Thomas, A. C.; Ventura, Samuel L.; Jensen, Shane T.; Ma, Stephen: Competing process hazard function models for player ratings in ice hockey (2013)
  13. Hans, Chris: Elastic net regression modeling with the orthant normal prior (2011)