Elastic net regression modeling with the orthant normal prior

The elastic net procedure is a form of regularized optimization for linear regression that provides a bridge between ridge regression and the lasso. The estimate that it produces can be viewed as a Bayesian posterior mode under a prior distribution implied by the form of the elastic net penalty. This article broadens the scope of the Bayesian connection by providing a complete characterization of a class of prior distributions that generate the elastic net estimate as the posterior mode. The resulting model-based framework allows for a methodology that moves beyond exclusive use of the posterior mode by considering inference based on the full posterior distribution.

Two characterizations of the class of prior distributions are introduced: a properly normalized, direct characterization, which is shown to be conjugate for linear regression models, and an alternative representation as a scale mixture of normal distributions. Prior distributions are proposed for the regularization parameters, resulting in an infinite mixture of elastic net regression models that allows for adaptive, data-based shrinkage of the regression coefficients. Posterior inference is easily achieved using Markov chain Monte Carlo (MCMC) methods. Uncertainty about model specification is addressed from a Bayesian perspective by assigning prior probabilities to all possible models, and corresponding computational approaches are described. Software implementing the MCMC methods described in this article, written in C++ with an R package interface, is available at \url{hans/software/}.
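
As a point of reference for the prior--penalty connection described above, the following sketch gives the standard elastic net criterion and the prior density it implies. The notation ($\lambda_1$, $\lambda_2$, $\tau_j$, $g$) is introduced here only for illustration; the exact scaling, normalization, and orthant normal characterization used in the article may differ.
\[
\hat{\beta}_{\mathrm{EN}} \;=\; \arg\min_{\beta}\; \|y - X\beta\|_2^2 \;+\; \lambda_1 \|\beta\|_1 \;+\; \lambda_2 \|\beta\|_2^2 .
\]
Under a Gaussian likelihood $y \mid \beta, \sigma^2 \sim \mathrm{N}(X\beta, \sigma^2 I)$ and a prior of the form
\[
p(\beta \mid \sigma^2, \lambda_1, \lambda_2) \;\propto\; \exp\!\left\{ -\frac{1}{2\sigma^2}\Big( \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2 \Big) \right\},
\]
maximizing the posterior $p(\beta \mid y) \propto p(y \mid \beta, \sigma^2)\, p(\beta \mid \sigma^2, \lambda_1, \lambda_2)$ is equivalent to minimizing the penalized criterion above, so the posterior mode is the elastic net estimate. The scale mixture of normal distributions representation mentioned in the second paragraph refers, in general, to writing a prior density as
\[
p(\beta_j) \;=\; \int_0^\infty \mathrm{N}\!\left(\beta_j \mid 0, \tau_j\right) g(\tau_j)\, d\tau_j
\]
for some mixing density $g$; conditionally Gaussian representations of this kind are what make Gibbs-style MCMC updates convenient.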