MapReduce

MapReduce is a parallel programming model originally developed for large-scale web content processing. Data analysis faces the problem of performing computations over extremely large datasets, and MapReduce offers a way to harness commodity hardware for massively parallel data analysis applications. The translation and optimization of relational algebra operators into MapReduce programs remains an open and active research field. In this paper, we focus on a particular type of data analysis query, the multiple group by query. We first study the communication cost of the MapReduce model and give an initial implementation of the multiple group by query. We then propose an optimized version that addresses and reduces the communication cost. The optimized version shows better speed-up and better scalability than the initial one.
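To make the query type concrete, the sketch below illustrates a multiple group by query (several GROUP BY clauses over one scan of the same table) in MapReduce style. It is a minimal single-machine Python sketch, not the authors' implementation: the record fields, dimension names, and the SUM aggregate are illustrative assumptions, and the combine step only stands in for the general idea of map-side aggregation reducing the data shuffled between the map and reduce phases.

from collections import defaultdict

# Hypothetical input table and grouping attributes (illustrative only,
# not taken from the paper).
RECORDS = [
    {"region": "EU", "product": "A", "year": 2019, "sales": 10},
    {"region": "EU", "product": "B", "year": 2020, "sales": 7},
    {"region": "US", "product": "A", "year": 2020, "sales": 5},
]
GROUP_BY_DIMS = ["region", "product", "year"]  # one GROUP BY clause per dimension


def map_record(record):
    # Emit one ((dimension, value), measure) pair per grouping attribute,
    # so a single map-phase scan serves all group-by clauses at once.
    for dim in GROUP_BY_DIMS:
        yield (dim, record[dim]), record["sales"]


def combine(pairs):
    # Local (map-side) aggregation: the kind of step that shrinks the
    # intermediate data shuffled to the reducers, i.e. the communication cost.
    partial = defaultdict(int)
    for key, value in pairs:
        partial[key] += value
    return partial.items()


def reduce_all(shuffled):
    # Final aggregation per (dimension, value) key.
    totals = defaultdict(int)
    for key, value in shuffled:
        totals[key] += value
    return dict(totals)


if __name__ == "__main__":
    mapped = (pair for rec in RECORDS for pair in map_record(rec))
    result = reduce_all(combine(mapped))
    for (dim, val), total in sorted(result.items(), key=str):
        print(f"SUM(sales) GROUP BY {dim}={val}: {total}")

In a real MapReduce job the combine and reduce steps would run on different nodes, and the gain of the optimized version comes from how much intermediate data the combiner removes before the shuffle.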


References in zbMATH (referenced in 248 articles, 1 standard article)

Showing results 1 to 20 of 248, sorted by year (citations).


  1. Audrito, Giorgio; Beal, Jacob; Damiani, Ferruccio; Pianini, Danilo; Viroli, Mirko: Field-based coordination with the share operator (2020)
  2. Czumaj, Artur; Łącki, Jakub; Mądry, Aleksander; Mitrović, Slobodan; Onak, Krzysztof; Sankowski, Piotr: Round compression for parallel matching algorithms (2020)
  3. Fotakis, Dimitris; Milis, Ioannis; Papadigenopoulos, Orestis; Vassalos, Vasilis; Zois, Georgios: Scheduling MapReduce jobs on identical and unrelated processors (2020)
  4. Genuzio, Marco; Ottaviano, Giuseppe; Vigna, Sebastiano: Fast scalable construction of ([compressed] static | minimal perfect hash) functions (2020)
  5. Ketsman, Bas; Albarghouthi, Aws; Koutris, Paraschos: Distribution policies for Datalog (2020)
  6. Montealegre, P.; Perez-Salazar, S.; Rapaport, I.; Todinca, I.: Graph reconstruction in the congested clique (2020)
  7. Sambasivan, Rajiv; Das, Sourish; Sahu, Sujit K.: A Bayesian perspective of statistical machine learning for big data (2020)
  8. Tang, Lu; Zhou, Ling; Song, Peter X.-K.: Distributed simultaneous inference in generalized linear models via confidence distribution (2020)
  9. Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario: Parallel extraction of association rules from genomics data (2019)
  10. Ali, Syed Muhammad Fawad; Mey, Johannes; Thiele, Maik: Parallelizing user-defined functions in the ETL workflow using orchestration style sheets (2019)
  11. Atar, Rami; Keslassy, Isaac; Mendelson, Gal: Subdiffusive load balancing in time-varying queueing systems (2019)
  12. Aydin, Kevin; Bateni, Mohammadhossein; Mirrokni, Vahab: Distributed balanced partitioning via linear embedding (2019)
  13. Biletskyy, Borys: Distributed Bayesian machine learning procedures (2019)
  14. Borodin, Allan; Pankratov, Denis; Salehi-Abari, Amirali: On conceptually simple algorithms for variants of online bipartite matching (2019)
  15. Brefeld, Ulf; Lasek, Jan; Mair, Sebastian: Probabilistic movement models and zones of control (2019)
  16. Claesson, Anders; Guðmundsson, Bjarki Ágúst: Enumerating permutations sortable by k passes through a pop-stack (2019)
  17. Dhaenens, Clarisse; Jourdan, Laetitia: Metaheuristics for data mining (2019)
  18. Gyssens, Marc; Hellings, Jelle; Paredaens, Jan; Van Gucht, Dirk; Wijsen, Jef; Wu, Yuqing: Calculi for symmetric queries (2019)
  19. Jiang, Yiwei; Zhou, Ping; Cheng, T. C. E.; Ji, Min: Optimal online algorithms for MapReduce scheduling on two uniform machines (2019)
  20. Jiang, Yiwei; Zhou, Ping; Zhou, Wei: MapReduce machine covering problem on a small number of machines (2019)
