IMRO: A proximal quasi-Newton method for solving (\ell_1)-regularized least squares problems. We present a proximal quasi-Newton method in which the Hessian approximation has the special form "identity minus rank one" (IMRO) at each iteration. This structure allows the proximal point to be recovered efficiently. The algorithm is applied to (\ell_1)-regularized least squares problems, which arise in many applications, including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiments suggest that the proposed technique competes favorably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches the best known bounds.
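For context, the problem class IMRO targets is min (1/2)‖Ax − b‖² + λ‖x‖₁, whose proximal operator under the identity metric is plain soft-thresholding. The sketch below shows a basic proximal-gradient (ISTA) iteration for this problem; it uses a scalar step (i.e., an identity Hessian surrogate) for illustration only and is not the IMRO method itself, which replaces the identity by an "identity minus rank one" model.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||x||_1 under the identity metric:
    # prox(v) = sign(v) * max(|v| - t, 0), applied componentwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step=None, iters=200):
    # Proximal-gradient (ISTA) iteration for
    #   min 0.5*||A x - b||^2 + lam*||x||_1.
    # A fixed step 1/L with L = ||A||_2^2 guarantees descent.
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With A equal to the identity, a single iteration already returns the exact solution soft_threshold(b, λ), which makes the role of the proximal map easy to check by hand.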
References in zbMATH (referenced in 8 articles, 1 standard article)
- Cheng, Wanyou; Dai, Yu-Hong: An active set Newton-CG method for (\ell_1) optimization (2021)
- Becker, Stephen; Fadili, Jalal; Ochs, Peter: On quasi-Newton forward-backward splitting: proximal calculus and convergence (2019)
- Cheng, Wanyou; Hu, Qingjie; Li, Donghui: A fast conjugate gradient algorithm with active set prediction for (\ell_1) optimization (2019)
- Ochs, Peter; Pock, Thomas: Adaptive FISTA for nonconvex optimization (2019)
- Wang, Xiaoyu; Wang, Xiao; Yuan, Ya-Xiang: Stochastic proximal quasi-Newton methods for non-convex composite optimization (2019)
- Fountoulakis, Kimon; Tappenden, Rachael: A flexible coordinate descent method (2018)
- Friedlander, Michael P.; Goh, Gabriel: Efficient evaluation of scaled proximal operators (2017)
- Karimi, Sahar; Vavasis, Stephen: IMRO: A proximal quasi-Newton method for solving (\ell_1)-regularized least squares problems (2017)