Parallel stochastic gradient algorithms for large-scale matrix completion. This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or $\gamma_2$-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that admits a speed-up nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
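The core idea of the abstract, an incremental (stochastic) gradient method over the observed entries of a low-rank factored matrix, with the increment order reshuffled each pass, can be illustrated with a minimal serial sketch. This is not the authors' code: the function name `jellyfish_sketch` and all parameter choices are hypothetical, and the paper's biased ordering and lock-free parallel scheduling are omitted; only the factored model and the per-entry gradient updates are shown.

```python
import numpy as np

def jellyfish_sketch(entries, shape, rank=5, step=0.05, epochs=200, seed=0):
    """Hypothetical sketch of an incremental-gradient matrix-completion solver.

    entries: list of (i, j, value) observations of an m x n matrix.
    Models the matrix as L @ R.T and takes one gradient step per observed
    entry, reshuffling the visit order every epoch. The biased ordering and
    parallel execution described in the paper are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    m, n = shape
    # Small random initialization of the low-rank factors.
    L = 0.1 * rng.standard_normal((m, rank))
    R = 0.1 * rng.standard_normal((n, rank))
    order = np.arange(len(entries))
    for _ in range(epochs):
        rng.shuffle(order)              # random reshuffling of the increments
        for k in order:
            i, j, v = entries[k]
            err = L[i] @ R[j] - v       # residual on this single entry
            gi = err * R[j]             # gradient w.r.t. row i of L
            gj = err * L[i]             # gradient w.r.t. row j of R
            L[i] -= step * gi
            R[j] -= step * gj
    return L, R
```

Because each step touches only one row of `L` and one row of `R`, steps on entries from disjoint rows and columns commute, which is what the paper's biased ordering exploits to run increments in parallel with near-linear speed-up.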

References in zbMATH (referenced in 29 articles)

Showing results 1 to 20 of 29.
Sorted by year (citations)


  1. Bauch, Jonathan; Nadler, Boaz; Zilber, Pini: Rank $2r$ iterative least squares: efficient recovery of ill-conditioned low rank matrices from few entries (2021)
  2. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Why random reshuffling beats stochastic gradient descent (2021)
  3. Sun, Ruoyu; Ye, Yinyu: Worst-case complexity of cyclic coordinate descent: $O(n^2)$ gap with randomized version (2021)
  4. Boffi, Nicholas M.; Slotine, Jean-Jacques E.: A continuous-time analysis of distributed stochastic gradient (2020)
  5. Godichon-Baggioni, Antoine; Saadane, Sofiane: On the rates of convergence of parallelized averaged stochastic gradient algorithms (2020)
  6. Ma, Shiqian; Wang, Fei; Wei, Linchuan; Wolkowicz, Henry: Robust principal component analysis using facial reduction (2020)
  7. Sun, Ruoyu; Luo, Zhi-Quan; Ye, Yinyu: On the efficiency of random permutation for ADMM and coordinate descent (2020)
  8. Yu, Ming; Gupta, Varun; Kolar, Mladen: Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach (2020)
  9. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Convergence rate of incremental gradient and incremental Newton methods (2019)
  10. Kumar, Rajiv; Willemsen, Bram; Herrmann, Felix J.; Malcolm, Alison: Enabling numerically exact local solver for waveform inversion -- a low-rank approach (2019)
  11. Mishra, Bamdev; Kasai, Hiroyuki; Jawanpuria, Pratik; Saroop, Atul: A Riemannian gossip approach to subspace learning on Grassmann manifold (2019)
  12. Yang, Shuoguang; Wang, Mengdi; Fang, Ethan X.: Multilevel stochastic gradient methods for nested composition optimization (2019)
  13. Cottet, Vincent; Alquier, Pierre: 1-bit matrix completion: PAC-Bayesian analysis of a variational approximation (2018)
  14. Yin, Penghang; Xin, Jack; Qi, Yingyong: Linear feature transform and enhancement of classification on deep neural network (2018)
  15. Ashraphijuo, Morteza; Wang, Xiaodong; Aggarwal, Vaneet: Rank determination for low-rank data completion (2017)
  16. Boyd, Nicholas; Schiebinger, Geoffrey; Recht, Benjamin: The alternating descent conditional gradient method for sparse inverse problems (2017)
  17. Bhaskar, Sonia A.: Probabilistic low-rank matrix completion from quantized measurements (2016)
  18. Tappenden, Rachael; Richtárik, Peter; Gondzio, Jacek: Inexact coordinate descent: complexity and preconditioning (2016)
  19. Boumal, Nicolas; Absil, P.-A.: Low-rank matrix completion via preconditioned optimization on the Grassmann manifold (2015)
  20. Harchaoui, Zaid; Juditsky, Anatoli; Nemirovski, Arkadi: Conditional gradient algorithms for norm-regularized smooth convex optimization (2015)
