Jellyfish

Parallel stochastic gradient algorithms for large-scale matrix completion. This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or $\gamma_2$-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that admits a speed-up nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
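
The biased ordering behind the parallel speed-up can be illustrated with a short sketch. The Python sketch below is a minimal, serial illustration, assuming a factored model $X \approx LR^T$ with squared loss on the observed entries and Frobenius regularization; the function names (make_waves, sgd_epoch) and parameter choices are invented here for illustration and are not the authors' implementation. Rows and columns are partitioned into $p$ blocks, and each epoch sweeps the blocks in "waves" along randomly permuted diagonals, so that blocks within one wave touch disjoint rows of $L$ and disjoint rows of $R$.

```python
# Illustrative sketch of a blocked, biased-ordering incremental gradient method
# for matrix completion with a factored model X ~ L @ R.T (squared loss plus
# Frobenius regularization). Names and constants here are assumptions for
# illustration, not the Jellyfish code itself.
import numpy as np

def make_waves(p, rng):
    """Randomly permute row and column blocks; wave s pairs row block
    row_perm[i] with column block col_perm[(i + s) % p], so blocks within a
    wave touch disjoint row and column blocks and could run in parallel."""
    row_perm, col_perm = rng.permutation(p), rng.permutation(p)
    return [[(row_perm[i], col_perm[(i + s) % p]) for i in range(p)]
            for s in range(p)]

def sgd_epoch(L, R, entries, p, step, reg, rng):
    """One epoch of incremental gradient steps in the biased (blocked) order."""
    n_rows, n_cols = L.shape[0], R.shape[0]
    # Bucket observed entries (i, j, value) by (row block, column block);
    # rebuilt each epoch for brevity in this sketch.
    buckets = {}
    for i, j, v in entries:
        buckets.setdefault((i * p // n_rows, j * p // n_cols), []).append((i, j, v))
    for wave in make_waves(p, rng):
        for block in wave:  # blocks in one wave are independent -> parallelizable
            for i, j, v in buckets.get(block, []):
                err = L[i] @ R[j] - v
                gi = err * R[j] + reg * L[i]
                gj = err * L[i] + reg * R[j]
                L[i] -= step * gi
                R[j] -= step * gj
    return L, R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, r, p = 200, 150, 5, 4
    truth = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
    obs = [(i, j, truth[i, j]) for i in range(n) for j in range(m)
           if rng.random() < 0.3]
    L = 0.1 * rng.standard_normal((n, r))
    R = 0.1 * rng.standard_normal((m, r))
    for epoch in range(30):
        L, R = sgd_epoch(L, R, obs, p, step=0.02, reg=1e-4, rng=rng)
    rmse = np.sqrt(np.mean([(L[i] @ R[j] - v) ** 2 for i, j, v in obs]))
    print(f"training RMSE after 30 epochs: {rmse:.4f}")
```

Because the blocks in a single wave update disjoint slices of $L$ and $R$, a parallel implementation can assign them to separate cores without locking, which is the source of the near-linear speed-up described above.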


References in zbMATH (referenced in 28 articles)

Showing results 1 to 20 of 28, sorted by year (citations).


  1. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Why random reshuffling beats stochastic gradient descent (2021)
  2. Sun, Ruoyu; Ye, Yinyu: Worst-case complexity of cyclic coordinate descent: $O(n^2)$ gap with randomized version (2021)
  3. Boffi, Nicholas M.; Slotine, Jean-Jacques E.: A continuous-time analysis of distributed stochastic gradient (2020)
  4. Godichon-Baggioni, Antoine; Saadane, Sofiane: On the rates of convergence of parallelized averaged stochastic gradient algorithms (2020)
  5. Ma, Shiqian; Wang, Fei; Wei, Linchuan; Wolkowicz, Henry: Robust principal component analysis using facial reduction (2020)
  6. Sun, Ruoyu; Luo, Zhi-Quan; Ye, Yinyu: On the efficiency of random permutation for ADMM and coordinate descent (2020)
  7. Yu, Ming; Gupta, Varun; Kolar, Mladen: Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach (2020)
  8. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Convergence rate of incremental gradient and incremental Newton methods (2019)
  9. Kumar, Rajiv; Willemsen, Bram; Herrmann, Felix J.; Malcolm, Alison: Enabling numerically exact local solver for waveform inversion -- a low-rank approach (2019)
  10. Mishra, Bamdev; Kasai, Hiroyuki; Jawanpuria, Pratik; Saroop, Atul: A Riemannian gossip approach to subspace learning on Grassmann manifold (2019)
  11. Yang, Shuoguang; Wang, Mengdi; Fang, Ethan X.: Multilevel stochastic gradient methods for nested composition optimization (2019)
  12. Cottet, Vincent; Alquier, Pierre: 1-bit matrix completion: PAC-Bayesian analysis of a variational approximation (2018)
  13. Yin, Penghang; Xin, Jack; Qi, Yingyong: Linear feature transform and enhancement of classification on deep neural network (2018)
  14. Ashraphijuo, Morteza; Wang, Xiaodong; Aggarwal, Vaneet: Rank determination for low-rank data completion (2017)
  15. Boyd, Nicholas; Schiebinger, Geoffrey; Recht, Benjamin: The alternating descent conditional gradient method for sparse inverse problems (2017)
  16. Bhaskar, Sonia A.: Probabilistic low-rank matrix completion from quantized measurements (2016)
  17. Tappenden, Rachael; Richtárik, Peter; Gondzio, Jacek: Inexact coordinate descent: complexity and preconditioning (2016)
  18. Boumal, Nicolas; Absil, P.-A.: Low-rank matrix completion via preconditioned optimization on the Grassmann manifold (2015)
  19. Harchaoui, Zaid; Juditsky, Anatoli; Nemirovski, Arkadi: Conditional gradient algorithms for norm-regularized smooth convex optimization (2015)
  20. Lin, An-Ya; Ling, Qing: Decentralized and privacy-preserving low-rank matrix completion (2015)
