Author(s): Motiur Rehman, Hidayat Ullah Khan, Shaik Sajeed
Published in: International Journal of Engineering Research & Technology
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Volume/Issue: Volume 6, Issue 01, February 2017
In the Big Data landscape, MapReduce has been regarded as one of the key enabling approaches for handling the continually growing demands on computing resources imposed by Big Datasets. At the same time, a number of issues arise with MapReduce when attempting to handle a much broader class of jobs and to integrate with Hadoop's native file system. The reason for MapReduce's prominence is the high scalability of the paradigm, which allows for massively parallel and distributed execution over a large number of computing nodes. This paper addresses how to replace MapReduce with Apache Spark as the default processing engine for Hadoop. Apache Spark improves on MapReduce with respect to the issues and challenges of handling Big Data, with the objective of giving an overview of the field, enabling better planning and management of Big Data projects, and providing a higher level of abstraction and generalization than MapReduce.
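To make the paradigm under discussion concrete, the classic MapReduce word count can be sketched on a single machine in plain Python. This is only an illustration of the map/shuffle/reduce phases, not the Hadoop or Spark API: the function names (`map_phase`, `shuffle_phase`, `reduce_phase`) are invented for this sketch, and a real framework would distribute each phase across many nodes.

```python
from collections import defaultdict

# Toy, single-machine sketch of the MapReduce word-count flow.
# In Hadoop or Spark these phases run in parallel across a cluster;
# the function names here are illustrative only.

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input.
    return [(word, 1) for line in lines for word in line.split()]

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the per-word counts.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big compute", "big data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 3, 'data': 2, 'compute': 1}
```

Spark expresses the same pipeline more concisely (roughly `rdd.flatMap(...).map(...).reduceByKey(...)`) and keeps intermediate data in memory, which is one source of its advantage over disk-based MapReduce.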