glasso
The graphical lasso: new insights and alternatives. The graphical lasso [5] is an algorithm for learning the structure of an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ⁻¹ [2, 11]. The R package glasso [5] is popular and fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of glasso can be tricky: the converged precision matrix might not be the inverse of the estimated covariance, and the algorithm occasionally fails to converge with warm starts. In this paper we explain this behavior and propose new algorithms that appear to outperform glasso. By studying the “normal equations” we see that glasso solves the dual of the graphical lasso penalized likelihood by block coordinate ascent, a result which can also be found in [2]. In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms, p-glasso and dp-glasso, that also operate by block coordinate descent, but with Θ as the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that dp-glasso is superior from several points of view.
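The block updates in this family of algorithms reduce to ℓ1-penalized sub-problems of the lasso type, which are typically solved by cyclic coordinate descent with soft-thresholding. The following is a minimal pure-Python sketch of that inner solver for the generic objective (1/2)‖y − Xb‖² + λ‖b‖₁; the names `lasso_cd` and `soft_threshold` are illustrative, not part of the glasso package, and a real implementation would cache residuals rather than recompute them.

```python
def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent.

    X is a list of rows; y is a list of responses; lam >= 0.
    Illustrative sketch of the sub-problem solver, not the full glasso loop.
    """
    n, p = len(X), len(X[0])
    b = [0.0] * p
    # Precompute squared column norms, the per-coordinate curvature.
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed from the fit.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            # One-dimensional lasso update for coordinate j.
            b[j] = soft_threshold(rho, lam) / col_sq[j] if col_sq[j] else 0.0
    return b

# With an orthogonal design the coordinates decouple, so the solution is
# exact soft-thresholding of y:
print(lasso_cd([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], 1.0))  # [2.0, 0.0]
```

The update rule makes the dual/primal distinction concrete: whichever matrix (Σ or Θ) is the block-coordinate target, each sweep solves a small problem of exactly this form for one row/column.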
References in zbMATH (referenced in 122 articles, 1 standard article)
Showing results 1 to 20 of 122, sorted by year.
- Boutsidis, Christos; Drineas, Petros; Kambadur, Prabhanjan; Kontopoulou, Eugenia-Maria; Zouzias, Anastasios: A randomized algorithm for approximating the log determinant of a symmetric positive definite matrix (2017)
- Hirose, Kei; Fujisawa, Hironori; Sese, Jun: Robust sparse Gaussian graphical modeling (2017)
- Janková, Jana; van de Geer, Sara: Honest confidence regions and optimality in high-dimensional precision matrix estimation (2017)
- Kallus, Jonatan; Sanchez, Jose; Jauhiainen, Alexandra; Nelander, Sven; Jornsten, Rebecka: ROPE: high-dimensional network modeling with robust control of edge FDR (2017) arXiv
- Kühnel, Line; Sommer, Stefan; Pai, Akshay; Raket, Lars Lau: Most likely separation of intensity and warping effects in image registration (2017)
- Peyhardi, Jean; Fernique, Pierre: Characterization of convolution splitting graphical models (2017)
- Zhang, Hai; Wang, Puyu; Dong, Qing; Wang, Pu: Sparse Bayesian linear regression using generalized normal priors (2017)
- Adragni, Kofi P.; Al-Najjar, Elias; Martin, Sean; Popuri, Sai K.; Raim, Andrew M.: Group-wise sufficient dimension reduction with principal fitted components (2016)
- Bai, Jushan; Liao, Yuan: Efficient estimation of approximate factor models via penalized maximum likelihood (2016)
- Bar-Hen, Avner; Poggi, Jean-Michel: Influence measures and stability for graphical models (2016)
- Blum, Yuna; Houée-Bigot, Magalie; Causeur, David: Sparse factor model for co-expression networks with an application using prior biological knowledge (2016)
- Chen, Lisha; Huang, Jianhua Z.: Sparse reduced-rank regression with covariance estimation (2016)
- Chun, Hyonho; Lee, Myung Hee; Fleet, James C.; Oh, Ji Hwan: Graphical models via joint quantile regression with component selection (2016)
- Fourdrinier, Dominique; Mezoued, Fatiha; Wells, Martin T.: Estimation of the inverse scatter matrix of an elliptically symmetric distribution (2016)
- Goncalves, André R.; von Zuben, Fernando J.; Banerjee, Arindam: Multi-task sparse structure learning with Gaussian copula models (2016)
- Lin, Jiahe; Basu, Sumanta; Banerjee, Moulinath; Michailidis, George: Penalized maximum likelihood estimation of multi-layered Gaussian graphical models (2016)
- Martin, Victorin; Lasgouttes, Jean-Marc; Furtlehner, Cyril: Latent binary MRF for online reconstruction of large scale systems (2016)
- Ma, Shiqian: Alternating proximal gradient method for convex minimization (2016)
- Maurya, Ashwini: A well-conditioned and sparse estimation of covariance and inverse covariance matrices using a joint penalty (2016)
- Müller, Patric; van de Geer, Sara: Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable (2016)