FaRSA for ℓ1-regularized convex optimization: local convergence and numerical experience. FaRSA is a new method for minimizing the sum of a differentiable convex function and an ℓ1-norm regularizer. The main features of the method include: (i) an evolving set of indices corresponding to variables that are predicted to be nonzero at a solution; (ii) subproblems that only need to be solved in a reduced space to lessen per-iteration computational costs; (iii) conditions that determine how accurately each subproblem must be solved, which allow conjugate gradient or coordinate descent techniques to be employed; (iv) a computationally practical condition that dictates when the subspace explored by the current subproblem should be updated; and (v) a reduced proximal gradient step that ensures a sufficient decrease in the objective function when it is decided that the index set that holds the nonzero variables should be expanded. We prove global convergence of the method and demonstrate its performance on a set of model prediction problems with a Matlab implementation. Here, we introduce an enhanced subproblem termination condition that allows us to prove that the iterates converge locally at a superlinear rate. Moreover, we present the details of our publicly available C implementation along with extensive numerical comparisons to other state-of-the-art solvers.
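The abstract refers to the standard ℓ1-regularized setup, min f(x) + λ‖x‖₁, and to a proximal gradient step applied on the predicted-nonzero index set. The following sketch is not the authors' FaRSA implementation; it is a minimal illustration of the underlying building blocks (the soft-thresholding proximal operator, a proximal gradient iteration, and reading off the predicted nonzero support) for the least-squares instance f(x) = ½‖Ax − b‖². All function names here are illustrative, not from the FaRSA codebase.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_step(x, grad, step, lam):
    # One proximal gradient step for a smooth f plus lam * ||x||_1.
    return soft_threshold(x - step * grad, step * lam)

def lasso_prox_grad(A, b, lam, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by plain proximal gradient
    # (ISTA), with step size 1/L where L is the Lipschitz constant of the
    # gradient, i.e. the largest eigenvalue of A^T A.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = prox_grad_step(x, grad, 1.0 / L, lam)
    # Indices of variables predicted to be nonzero at the solution,
    # analogous in spirit to the evolving index set in feature (i).
    support = np.flatnonzero(x)
    return x, support
```

FaRSA's contribution, per the abstract, is to restrict the expensive subproblem work to this kind of reduced support space and to use the proximal gradient step only when the nonzero index set must be expanded; the sketch above applies the full-space step at every iteration for simplicity.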
References in zbMATH (referenced in 1 article)
- Chen, Tianyi; Curtis, Frank E.; Robinson, Daniel P.: FaRSA for (\ell_1)-regularized convex optimization: local convergence and numerical experience (2018)