GraphLab

GraphLab: A New Framework For Parallel Machine Learning.

Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive, while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large-scale real-world problems.
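
The abstraction described in the abstract is vertex-centric: an update function reads a vertex and its neighbours' data, writes new vertex data, and reschedules only the neighbours it may have affected, while a scheduler drains that work asynchronously using the freshest available values rather than bulk-synchronous rounds. The following is a minimal, self-contained Python sketch of that pattern, not GraphLab's actual C++ API; all names here (Graph, pagerank_update, run) are hypothetical, and PageRank stands in for the asynchronous iterative algorithms named above.

```python
from collections import deque

class Graph:
    """Toy directed graph with one floating-point datum per vertex."""
    def __init__(self, edges, n):
        self.n = n
        self.out = [[] for _ in range(n)]   # out-neighbours
        self.inn = [[] for _ in range(n)]   # in-neighbours
        for u, v in edges:
            self.out[u].append(v)
            self.inn[v].append(u)
        self.data = [1.0 / n] * n           # per-vertex data (PageRank value)

def pagerank_update(g, v, d=0.85, tol=1e-6):
    # Recompute vertex v from its in-neighbours and return the vertices
    # whose own values this change may affect (sparse dependencies).
    new = (1 - d) / g.n + d * sum(g.data[u] / len(g.out[u]) for u in g.inn[v])
    changed = abs(new - g.data[v]) > tol
    g.data[v] = new
    return g.out[v] if changed else []

def run(g, update):
    # Simple FIFO scheduler: a vertex is rescheduled only when one of its
    # in-neighbours actually changed; iteration stops at convergence.
    sched, queued = deque(range(g.n)), set(range(g.n))
    while sched:
        v = sched.popleft()
        queued.discard(v)
        for w in update(g, v):              # asynchronous flavour: reads freshest data
            if w not in queued:
                sched.append(w)
                queued.add(w)

g = Graph([(0, 1), (1, 2), (2, 0), (2, 1)], n=3)
run(g, pagerank_update)
print([round(x, 4) for x in g.data])
```

In the real framework the same pattern is expressed as update functions executed in parallel under configurable consistency models; this sketch runs sequentially and only illustrates the update-plus-scheduler structure.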


References in zbMATH (referenced in 24 articles)

Showing results 1 to 20 of 24, sorted by year (citations).

  1. Iwasaki, Hideya; Emoto, Kento; Morihata, Akimasa; Matsuzaki, Kiminori; Hu, Zhenjiang: Fregel: a functional domain-specific language for vertex-centric large-scale graph processing (2022)
  2. Li, Qi; Zhong, Jiang; Cao, Zehong; Li, Xue: Optimizing streaming graph partitioning via a heuristic greedy method and caching strategy (2020)
  3. Sambasivan, Rajiv; Das, Sourish; Sahu, Sujit K.: A Bayesian perspective of statistical machine learning for big data (2020)
  4. Zhang, Can; Zhu, Liehuang; Xu, Chang; Sharif, Kashif; Zhang, Chuan; Liu, Ximeng: PGAS: privacy-preserving graph encryption for accurate constrained shortest distance queries (2020)
  5. Das, Ariyam; Zaniolo, Carlo: A case for stale synchronous distributed model for declarative recursive computation (2019)
  6. da Trindade, Joana M. F.; Karanasos, Konstantinos; Curino, Carlo; Madden, Samuel; Shun, Julian: Kaskade: Graph Views for Efficient Graph Analytics (2019) arXiv
  7. Fegaras, Leonidas: An algebra for distributed big data analytics (2017)
  8. Hong, Jihye; Park, Kisung; Han, Yongkoo; Rasel, Mostofa Kamal; Vonvou, Dawanga; Lee, Young-Koo: Disk-based shortest path discovery using distance index over large dynamic graphs (2017)
  9. Park, Sejun; Shin, Jinwoo: Convergence and correctness of max-product belief propagation for linear programming (2017)
  10. Moritz, Philipp; Nishihara, Robert; Wang, Stephanie; Tumanov, Alexey; Liaw, Richard; Liang, Eric; Elibol, Melih; Yang, Zongheng; Paul, William; Jordan, Michael I.; Stoica, Ion: Ray: A Distributed Framework for Emerging AI Applications (2017) arXiv
  11. Ai, Wu; Chen, Weisheng; Xie, Jin: A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights (2016)
  12. Bayer, Immanuel: fastFM: a library for factorization machines (2016)
  13. Ho, Qirong; Yin, Junming; Xing, Eric P.: Latent space inference of Internet-scale networks (2016)
  14. Lu, Jing; Hoi, Steven C. H.; Wang, Jialei; Zhao, Peilin; Liu, Zhi-Yong: Large scale online kernel learning (2016)
  15. Arnaldo, Ignacio; Veeramachaneni, Kalyan; Song, Andrew; O’Reilly, Una-May: Bring Your Own Learner: A Cloud-Based, Data-Parallel Commons for Machine Learning (2015) not zbMATH
  16. Magliacane, Sara; Stutz, Philip; Groth, Paul; Bernstein, Abraham: foxPSL: A fast, optimized and extended PSL implementation (2015) ioport
  17. Mahani, Alireza S.; Sharabiani, Mansour T. A.: SIMD parallel MCMC sampling with applications for big-data Bayesian analytics (2015)
  18. Martins, André F. T.; Figueiredo, Mário A. T.; Aguiar, Pedro M. Q.; Smith, Noah A.; Xing, Eric P.: (\mathrm{AD}^3): alternating directions dual decomposition for MAP inference in graphical models (2015)
  19. Agarwal, Alekh; Chapelle, Olivier; Dudík, Miroslav; Langford, John: A reliable effective terascale linear learning system (2014)
  20. Cruz, Flavio; Rocha, Ricardo; Goldstein, Seth Copen; Pfenning, Frank: A linear logic programming language for concurrent programming over graph structures (2014)
