Data Management in the Cloud: Map/Reduce

Outline: Programming model, Examples, Execution model, Criticism, Iterative map/reduce
Motivation
Background and requirements:
- computations are conceptually straightforward
- input data is (very) large
- distribution over hundreds or thousands of nodes
Programming model for processing large data sets:
- abstraction to express simple computations
- hides the details of parallelization, data distribution, fault tolerance, and load balancing

Programming Model
Inspired by primitives from functional programming languages such as Lisp, Scheme, and Haskell.
Input and output are sets of key/value pairs. The programmer specifies two functions:
- map (k1, v1) → list(k2, v2)
- reduce (k2, list(v2)) → list(v2)
Key and value domains:
- input keys and values are drawn from a different domain than intermediate and output keys and values
- intermediate keys and values are drawn from the same domain as output keys and values
Map Function
User-defined function:
- processes an input key/value pair
- produces a set of intermediate key/value pairs
Map function I/O:
- input: read from a GFS file (chunk)
- output: written to intermediate files on local disk
Map/reduce library:
- executes the map function
- groups together all intermediate values with the same key
- passes these values to the reduce functions
Effect of the map function:
- processes and partitions the input data
- builds a distributed map (transparent to the user)
- similar to the GROUP BY operation in SQL

Reduce Function
User-defined function:
- accepts one intermediate key and a set of values for that key
- merges these values together to form a (possibly) smaller set
- typically, zero or one output value is generated per invocation
Reduce function I/O:
- input: read from intermediate files using remote reads on the local files of the corresponding mapper nodes
- output: each reducer writes its output as a file back to GFS
Effect of the reduce function:
- similar to an aggregation operation in SQL
Map/Reduce Interaction
(Diagram: three map tasks feed four reduce tasks.)
- map functions create a user-defined index from the source data
- reduce functions compute grouped aggregates based on that index
Flexible framework:
- users can cast the raw original data in any model that they need
- a wide range of tasks can be expressed in this simple framework

MapReduce Example

  map(String key, String value):
    // key: document name
    // value: document contents
    for each word w in value:
      EmitIntermediate(w, "1");

  reduce(String key, Iterator values):
    // key: a word
    // values: a list of counts
    int result = 0;
    for each v in values:
      result += ParseInt(v);
    Emit(AsString(result));
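The word-count pseudocode can be exercised with a tiny single-process sketch of the framework. The `run_mapreduce` helper and its grouping step are illustrative stand-ins for the library's shuffle phase, not part of any real implementation:

```python
from collections import defaultdict

def map_fn(key, value):
    # key: document name, value: document contents
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    # key: a word, values: all counts emitted for that word
    yield (key, sum(values))

def run_mapreduce(inputs, map_fn, reduce_fn):
    # shuffle phase: group all intermediate values by key
    groups = defaultdict(list)
    for k, v in inputs:
        for ik, iv in map_fn(k, v):
            groups[ik].append(iv)
    # reduce phase: one invocation per intermediate key
    out = []
    for ik in sorted(groups):
        out.extend(reduce_fn(ik, groups[ik]))
    return out

docs = [("d1", "to be or not to be"), ("d2", "to do")]
print(run_mapreduce(docs, map_fn, reduce_fn))
# [('be', 2), ('do', 1), ('not', 1), ('or', 1), ('to', 3)]
```

The sketch keeps the contract of the slides: the framework, not the user code, is responsible for grouping intermediate values by key before invoking reduce.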
More Examples
Distributed grep:
- goal: find the positions of a pattern in a set of files
- map: (File, String) → list(Integer, String); emits a <line#, line> pair for every line that matches the pattern
- reduce: identity function that simply outputs the intermediate values
Count of URL access frequency:
- goal: analyze Web logs and count page requests
- map: (URL, String) → list(URL, Integer); emits <URL, 1> for every occurrence of a URL
- reduce: (URL, list(Integer)) → list(Integer); sums the occurrences of each URL
The workload of the first example is in the map function, whereas it is in the reduce function in the second example.

More Examples
Reverse Web-link graph:
- goal: find which source pages link to a target page
- map: (URL, CLOB) → list(URL, URL); parses the page content and emits one <target, source> pair for every target URL found in the source page
- reduce: (URL, list(URL)) → list(URL); concatenates all source lists for one target URL
Term vector per host:
- goal: for each host, construct its term vector as a list of <word, frequency> pairs
- map: (URL, CLOB) → list(String, List); parses the page content (CLOB) and emits a <hostname, term vector> pair for each document
- reduce: (String, list(list<String, Integer>)) → list(list<String, Integer>); combines all per-document term vectors and emits final <hostname, term vector> pairs
More Examples
Inverted index:
- goal: create an index structure that maps search terms (words) to document identifiers (URLs)
- map: (URL, CLOB) → list(String, URL); parses the document content and emits a sequence of <word, document id> pairs
- reduce: (String, list(URL)) → list(URL); accepts all pairs for a given word, and sorts and combines the corresponding document ids
Distributed sort:
- goal: sort records according to a user-defined key
- map: (?, Object) → list(Key, Record); extracts the key from each record and emits <key, record> pairs
- reduce: emits all pairs unchanged
- map/reduce guarantees that pairs in each partition are processed ordered by key, but a clever partitioning function is still required for a globally sorted result!

Relational Join Example
(Diagram: map tasks M hash the tuples of relations R1 and R2 to reduce tasks R.)
- map function M: hash on the key attribute; (?, Tuple) → list(Key, Tuple)
- reduce function R: join on each key value; (Key, list(Tuple)) → list(Tuple)
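The join pattern on this slide can be sketched in the same single-process style: the map side hashes tuples by the join key while tagging each tuple with its source relation, and the reduce side pairs them up. The relations and values here are illustrative:

```python
from collections import defaultdict

R1 = [(1, "a"), (2, "b")]            # (key, payload) tuples of relation R1
R2 = [(1, "x"), (1, "y"), (3, "z")]  # (key, payload) tuples of relation R2

# map: hash on the key attribute, keeping the tuples of the
# two relations apart so the reducer knows which side is which
intermediate = defaultdict(lambda: ([], []))
for k, v in R1:
    intermediate[k][0].append(v)
for k, v in R2:
    intermediate[k][1].append(v)

# reduce: for each key, join the tuples from both relations
joined = [(k, a, b)
          for k, (left, right) in sorted(intermediate.items())
          for a in left for b in right]
print(joined)  # [(1, 'a', 'x'), (1, 'a', 'y')]
```

Keys that appear in only one relation (2 and 3 above) produce no output, exactly as in an inner join.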
Implementation
Based on the Google computing environment:
- same assumptions and properties as GFS
- builds on top of GFS
Architecture:
- one master, many workers
- users submit jobs, consisting of a set of tasks, to a scheduling system
- tasks are mapped to available workers within the cluster by the master
Execution overview:
- map invocations are distributed across multiple machines by automatically partitioning the input data into a set of M splits
- input splits can be processed in parallel
- reduce invocations are distributed by partitioning the intermediate key space into R pieces using a partitioning function, e.g. hash(key) mod R

Execution Overview
Figure Credit: MapReduce: Simplified Data Processing on Large Clusters by J. Dean and S. Ghemawat
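The default partitioning step, hash(key) mod R, is simple enough to show directly. A stable hash such as `zlib.crc32` stands in here for the framework's internal hash function (an assumption; real implementations choose their own):

```python
import zlib

R = 4  # number of reduce tasks, i.e. partitions of the intermediate key space

def partition(key, num_partitions=R):
    # default map/reduce partitioner: hash(key) mod R
    return zlib.crc32(key.encode()) % num_partitions

keys = ["apple", "banana", "cherry", "apple"]
parts = [partition(k) for k in keys]

# the same key always lands in the same partition, so all of a
# key's intermediate values meet in exactly one reduce task
assert parts[0] == parts[3]
print(parts)
```

This is also why the distributed-sort example needs a custom partitioner: hashing spreads keys evenly but destroys the global key order across partitions.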
Execution Overview
1. The map/reduce library splits the input files into M pieces and then starts copies of the program on a cluster of machines.
2. One copy is the master, the rest are workers; the master assigns M map and R reduce tasks to idle workers.
3. A map worker reads its input split, parses out key/value pairs, and passes them to the user-defined map function.
4. Buffered pairs are written to local disk, partitioned into R regions; the locations of the pairs are passed back to the master.
5. A reduce worker is notified by the master of the pair locations; it uses RPC to read the intermediate data from the local disks of the map workers and sorts it by intermediate key to group tuples by key.
6. The reduce worker iterates over the sorted data and, for each unique key, invokes the user-defined reduce function; the result is appended to the reduce partition.
7. The master wakes up the user program after all map and reduce tasks have completed.

Master Data Structures
Information about all map and reduce tasks:
- worker state: idle, in-progress, or completed
- identity of the worker machine (for non-idle tasks)
Intermediate file regions:
- propagate intermediate file locations from map to reduce tasks
- store the locations and sizes of the R intermediate file regions produced by each map task
- updates to this location and size information are received as map tasks are completed
- information is pushed incrementally to workers that have in-progress reduce tasks
Fault Tolerance
Worker failure:
- the master pings workers periodically and assumes failure if there is no response
- completed/in-progress map tasks and in-progress reduce tasks on a failed worker are rescheduled on a different worker node
- note the dependency between map and reduce tasks
- note the importance of chunk replicas
Master failure:
- checkpoints of the master data structures
- given that there is only a single master, failure is unlikely
Failure semantics:
- if the user-defined functions are deterministic, execution with faults produces the same result as execution without faults
- relies on atomic commits of map and reduce tasks

More Implementation Aspects
Locality:
- network bandwidth is a scarce resource: move computation close to the data
- the master takes GFS metadata into consideration (location of replicas)
Task granularity:
- the master makes O(M + R) scheduling decisions
- the master stores O(M * R) states in memory
- M is typically larger than R
Backup tasks:
- stragglers are a common cause of suboptimal performance
- as a map/reduce computation comes close to completion, the master assigns the same task to multiple workers
Refinements
Partitioning function:
- the default function can be replaced by the user
- supports application-specific partitioning
Ordering guarantees:
- within a given partition, intermediate key/value pairs are processed in increasing key order
Combiner function:
- addresses significant key repetition generated by some map functions
- partial merging of data by the map worker, before it is sent over the network
- typically the same code is used as for the reduce function
Input and output types:
- support for reading input and producing output in several formats
- users can define their own readers and writers

Refinements (cont.)
Skipping bad records:
- the map/reduce framework detects on which record a task failed
- when the task is restarted, this record is skipped
Local execution:
- addresses the challenges of debugging, profiling, and small-scale testing
- alternative implementation that executes a task sequentially on the local machine
Counters:
- counter facility to count occurrences of various events
- counter values from worker machines are propagated to the master
- the master aggregates the counters from successful tasks
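The effect of a combiner can be made concrete by counting how many key/value pairs would cross the network with and without it. This is a sketch of the idea only, using word-count style data:

```python
from collections import Counter

# intermediate pairs produced by one map task (word-count style)
mapper_output = [("the", 1), ("a", 1), ("the", 1), ("the", 1), ("a", 1)]

# without a combiner, every individual pair is shuffled over the network
sent_without = len(mapper_output)

# the combiner runs the reduce logic locally on the map worker's output,
# producing one partial sum per distinct key
combined = Counter()
for word, count in mapper_output:
    combined[word] += count
sent_with = len(combined)

print(sent_without, sent_with)  # 5 2
# the reducer then sums the partial sums exactly as it would sum the raw 1s,
# which is why the combiner can reuse the reduce function's code
```

This reuse only works because summation is associative and commutative; a reduce function without those properties cannot double as a combiner.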
Performance Experiments
Figure Credit: MapReduce: Simplified Data Processing on Large Clusters by J. Dean and S. Ghemawat

Map/Reduce Criticism
Why not use a parallel DBMS instead?
- "map/reduce is a giant step backwards": no schema, no indexes, no high-level language
- not novel at all
- does not provide the features of a traditional DBMS
- incompatible with DBMS tools
Performance comparison of the two approaches:
- Pavlo et al.: A Comparison of Approaches to Large-Scale Data Analysis. Proc. Intl. Conf. on Management of Data (SIGMOD), 2009
- parallel DBMSs (Vertica and DBMS-X) vs. map/reduce (Hadoop)
- original map/reduce task: grep, from the Google paper
- typical database tasks: selection, aggregation, join, UDF
- 100-node cluster
Grep Task: Load Times
535 MB/node, 1 TB/cluster. (Figure annotation: an administrative command to reorganize the data on each node.)
Figure Credit: A Comparison of Approaches to Large-Scale Data Analysis by A. Pavlo et al.

Grep Task: Execution Times
535 MB/node, 1 TB/cluster. (Figure annotation: the time required to combine all reduce partitions into one result.)
Figure Credit: A Comparison of Approaches to Large-Scale Data Analysis by A. Pavlo et al.
Analytical Tasks

  CREATE TABLE Documents (
    url      VARCHAR(100) PRIMARY KEY,
    contents TEXT
  );

  CREATE TABLE Rankings (
    pageurl     VARCHAR(100) PRIMARY KEY,
    pagerank    INT,
    avgduration INT
  );

  CREATE TABLE UserVisits (
    sourceip     VARCHAR(16),
    desturl      VARCHAR(100),
    visitdate    DATE,
    adrevenue    FLOAT,
    useragent    VARCHAR(64),
    countrycode  VARCHAR(3),
    languagecode VARCHAR(3),
    searchword   VARCHAR(32),
    duration     INT
  );

Data set:
- 600K unique HTML documents
- 155M user visit records (20 GB/node)
- 18M ranking records (1 GB/node)

Selection Task
SQL query:
  SELECT pageURL, pageRank
  FROM Rankings
  WHERE pageRank > X
- the relational DBMSs use an index on the pageRank column
- their relative performance degrades as the number of nodes increases
- Hadoop's start-up cost increases with cluster size
Figure Credit: A Comparison of Approaches to Large-Scale Data Analysis by A. Pavlo et al.
Aggregation Task
Calculate the total ad revenue for each source IP using the UserVisits table.
Variant 1: 2.5M groups
  SELECT sourceIP, SUM(adRevenue)
  FROM UserVisits
  GROUP BY sourceIP
Variant 2: 2,000 groups
  SELECT SUBSTR(sourceIP, 1, 7), SUM(adRevenue)
  FROM UserVisits
  GROUP BY SUBSTR(sourceIP, 1, 7)

Aggregation Task: Results
(Figures: execution times with 2.5M groups and with 2,000 groups.)
Figure Credit: A Comparison of Approaches to Large-Scale Data Analysis by A. Pavlo et al.
Join Task
SQL query:
  SELECT INTO Temp UV.sourceIP,
         AVG(R.pageRank) AS avgPageRank,
         SUM(UV.adRevenue) AS totalRevenue
  FROM Rankings AS R, UserVisits AS UV
  WHERE R.pageURL = UV.destURL
    AND UV.visitDate BETWEEN DATE( ) AND DATE( )
  GROUP BY UV.sourceIP

  SELECT sourceIP, avgPageRank, totalRevenue
  FROM Temp
  ORDER BY totalRevenue DESC
  LIMIT 1
Map/reduce program, using three phases that run in strict sequential order:
- phase 1: filters records outside the date range and joins them with the rankings file
- phase 2: computes the total ad revenue and average page rank per source IP
- phase 3: produces the record with the largest total ad revenue

Join Task: Results
Figure Credit: A Comparison of Approaches to Large-Scale Data Analysis by A. Pavlo et al.
UDF Aggregation Task
Compute the in-link count for each document in the data set.
SQL query:
  SELECT INTO Temp UDF(contents) FROM Documents
  SELECT url, SUM(value) FROM Temp GROUP BY url
Map/reduce program:
- documents are split into lines; input key/value pairs are <line number, line contents>
- map: uses a regex to find URLs and emits <URL, 1> for each URL found
- reduce: counts the number of values for a given key
Issues:
- DBMS-X: it is not possible to run the UDF over contents stored as a BLOB in the database; instead, the UDF has to access the local file system
- Vertica: does not currently support UDFs; a special pre-processor is used instead

UDF Aggregation Task: Results
(Figure annotations: the time required to combine all reduce partitions into one result; the time required by the UDF; the time required by the pre-processor.)
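The map step described above (a regex over lines, emitting <URL, 1>) is easy to sketch. The regex and the sample lines are illustrative, not the benchmark's actual data:

```python
import re
from collections import Counter

URL_RE = re.compile(r'href="([^"]+)"')

lines = [
    '<a href="http://x.org/p1">one</a> <a href="http://x.org/p2">two</a>',
    '<a href="http://x.org/p1">again</a>',
]

# map: find the URLs in each line and emit <URL, 1>
pairs = [(url, 1) for line in lines for url in URL_RE.findall(line)]

# reduce: count the number of values per key -> in-link count per URL
inlinks = Counter(url for url, _ in pairs)
print(inlinks["http://x.org/p1"])  # 2
```

Note that the result depends only on how many pairs a URL received, which is why the reduce step can simply count values rather than inspect them.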
Map/Reduce vs. Parallel DBMS
No schema, no index, no high-level language:
- faster loading vs. faster execution
- easier prototyping vs. easier maintenance
Fault tolerance:
- restart of a single worker vs. restart of the transaction
Installation and tool support:
- map/reduce is easy to set up vs. a parallel DBMS is challenging to configure
- no tools for tuning vs. tools for automatic performance tuning
Performance per node:
- the results seem to indicate that parallel DBMSs achieve the same performance as map/reduce with smaller clusters

Iterative Map/Reduce
Task granularity:
- one map stage followed by one reduce stage
- the map stage reads from replicated storage
- the reduce stage writes to replicated storage
Complex queries:
- typically require several map/reduce phases
- no fault tolerance between map/reduce phases
- data is shuffled between map/reduce phases
PageRank Example
Tables: an initial rank table R0(URL, Rank), the linkage table L(SourceURL, DestURL), and the rank table after three iterations, R3(URL, Rank).
Each iteration derives R_{i+1} from R_i:
- join R_i with L on R_i.URL = L.SourceURL
- divide each page's rank by its out-degree: R_i.Rank / COUNT(DestURL)
- group by DestURL and sum the incoming rank shares: R_{i+1}(DestURL, SUM(Rank))
Slide Credit: B. Howe, U Washington

Map/Reduce Implementation
(Diagram: each iteration runs map/reduce jobs that join R_i with the linkage splits L-split0 and L-split1 and compute the new ranks, aggregate them, and then evaluate the fix-point as a further map/reduce job; the client loops with i = i + 1 until done.)
Slide Credit: B. Howe, U Washington
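One iteration of the rank computation above can be sketched as a map/reduce job. This is a simplified sketch: no damping factor, dangling nodes ignored, and the three-page graph is illustrative:

```python
from collections import defaultdict

# linkage table L: source URL -> list of destination URLs
L = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {"a": 1.0, "b": 1.0, "c": 1.0}  # rank table R_i

# map: join R_i with L on the source URL and distribute each
# page's rank evenly over its outgoing links (rank / out-degree)
intermediate = defaultdict(float)
for src, dests in L.items():
    share = ranks[src] / len(dests)
    for dest in dests:
        intermediate[dest] += share

# reduce: SUM the incoming rank contributions per destination URL
ranks_next = dict(intermediate)  # rank table R_{i+1}
print(ranks_next)  # {'b': 0.5, 'c': 1.5, 'a': 1.0}
```

Running this loop to a fix-point is exactly the pattern the following slides criticize: L is rebuilt and shuffled on every pass even though it never changes.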
What's the Problem?
(Diagram: map tasks read R_i together with L-split0 and L-split1 in every iteration.)
- L is loaded and shuffled in each iteration, even though L never changes
- the fix-point is evaluated as a separate map/reduce task in each iteration
Slide Credit: B. Howe, U Washington

Inter-Iteration Locality
Goal of the HaLoop scheduler:
- place map and reduce tasks that occur in different iterations but access the same data on the same physical machines
- thereby increase data re-use between iterations and reduce shuffling
Restriction:
- HaLoop requires that the number of reduce tasks is invariant across iterations
Figure Credit: HaLoop: Efficient Iterative Data Processing on Large Clusters by Y. Bu et al.
Scheduling Algorithm

  Input: Node node
  Global variable: Map<Node, List<Partition>> current   // the current iteration's schedule; initially empty
  Global variable: Map<Node, List<Partition>> previous  // the previous iteration's schedule

  if iteration == 0 then
      // first iteration: same as Hadoop
      Partition part = hadoopSchedule(node);
      current.get(node).add(part);
  else
      if node.hasFullLoad() then
          // find a substitution node and hand over this node's partitions
          Node substitution = findNearestIdleNode(node);
          previous.get(substitution).addAll(previous.remove(node));
          return;
      end if
      if previous.get(node).size() > 0 then
          // iteration-local schedule: re-assign a partition this node processed before
          Partition part = previous.get(node).get(0);
          schedule(part, node);
          current.get(node).add(part);
          previous.remove(part);
      end if
  end if

Slide Credit: B. Howe, U Washington

Caching and Indexing
HaLoop caches loop-invariant data partitions on a physical node's local disk to reduce I/O cost.
Reducer input cache:
- enabled if the intermediate table is loop-invariant
- e.g. recursive join, PageRank, HITS, social network analysis
Reducer output cache:
- used to reduce the cost of evaluating the fix-point termination condition
Mapper input cache:
- aims to avoid non-local data reads in non-initial iterations
- e.g. k-means clustering, neural network analysis
Cache reloading happens when:
- the host node fails, or
- the host node has a full load and a map or reduce task must be scheduled on a different substitution node
HaLoop Architecture
Figure Credit: HaLoop: Efficient Iterative Data Processing on Large Clusters by Y. Bu et al.

Amazon EC2 Experiments
Clusters of 20, 50, and 90 default small instances
Datasets:
- billions of triples (120 GB)
- Freebase (12 GB)
- Livejournal social network (18 GB)
Queries:
- transitive closure
- PageRank
- k-means
Slide Credit: B. Howe, U Washington
Application Run Time
(Figures: transitive closure on the Triples dataset, 90 nodes; PageRank on the Freebase dataset, 90 nodes.)
Slide Credit: B. Howe, U Washington

Join Time
(Figures: transitive closure on the Triples dataset, 90 nodes; PageRank on the Freebase dataset, 90 nodes.)
Slide Credit: B. Howe, U Washington
Run Time Distribution
(Figures: transitive closure on the Triples dataset, 90 nodes; PageRank on the Freebase dataset, 90 nodes.)
Slide Credit: B. Howe, U Washington

Fix-Point Evaluation
(Figures: PageRank on the Livejournal dataset, 50 nodes; PageRank on the Freebase dataset, 90 nodes.)
Slide Credit: B. Howe, U Washington
References
J. Dean and S. Ghemawat: MapReduce: Simplified Data Processing on Large Clusters. Proc. Symp. on Operating Systems Design & Implementation (OSDI), 2004.
A. Pavlo, E. Paulson, A. Rasin, D. J. Abadi, D. J. DeWitt, S. Madden, and M. Stonebraker: A Comparison of Approaches to Large-Scale Data Analysis. Proc. Intl. Conf. on Management of Data (SIGMOD), 2009.
Y. Bu, B. Howe, M. Balazinska, and M. D. Ernst: HaLoop: Efficient Iterative Data Processing on Large Clusters. Proc. Intl. Conf. on Very Large Data Bases (VLDB), 2010.
Exploring the Efficiency of Big Data Processing with Hadoop MapReduce
Exploring the Efficiency of Big Data Processing with Hadoop MapReduce Brian Ye, Anders Ye School of Computer Science and Communication (CSC), Royal Institute of Technology KTH, Stockholm, Sweden Abstract.
Big Data Begets Big Database Theory
Big Data Begets Big Database Theory Dan Suciu University of Washington 1 Motivation Industry analysts describe Big Data in terms of three V s: volume, velocity, variety. The data is too big to process
Comparison of Different Implementation of Inverted Indexes in Hadoop
Comparison of Different Implementation of Inverted Indexes in Hadoop Hediyeh Baban, S. Kami Makki, and Stefan Andrei Department of Computer Science Lamar University Beaumont, Texas (hbaban, kami.makki,
HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads
HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads Azza Abouzeid 1, Kamil Bajda-Pawlikowski 1, Daniel Abadi 1, Avi Silberschatz 1, Alexander Rasin 2 1 Yale University,
MapReduce and Hadoop Distributed File System
MapReduce and Hadoop Distributed File System 1 B. RAMAMURTHY Contact: Dr. Bina Ramamurthy CSE Department University at Buffalo (SUNY) [email protected] http://www.cse.buffalo.edu/faculty/bina Partially
Chapter 7. Using Hadoop Cluster and MapReduce
Chapter 7 Using Hadoop Cluster and MapReduce Modeling and Prototyping of RMS for QoS Oriented Grid Page 152 7. Using Hadoop Cluster and MapReduce for Big Data Problems The size of the databases used in
MASSIVE DATA PROCESSING (THE GOOGLE WAY ) 27/04/2015. Fundamentals of Distributed Systems. Inside Google circa 2015
7/04/05 Fundamentals of Distributed Systems CC5- PROCESAMIENTO MASIVO DE DATOS OTOÑO 05 Lecture 4: DFS & MapReduce I Aidan Hogan [email protected] Inside Google circa 997/98 MASSIVE DATA PROCESSING (THE
The Hadoop Framework
The Hadoop Framework Nils Braden University of Applied Sciences Gießen-Friedberg Wiesenstraße 14 35390 Gießen [email protected] Abstract. The Hadoop Framework offers an approach to large-scale
MapReduce: Simplified Data Processing on Large Clusters
MapReduce: Simplified Data Processing on Large Clusters Jeffrey Dean and Sanjay Ghemawat [email protected], [email protected] Google, Inc. Abstract MapReduce is a programming model and an associated implementation
COMP 598 Applied Machine Learning Lecture 21: Parallelization methods for large-scale machine learning! Big Data by the numbers
COMP 598 Applied Machine Learning Lecture 21: Parallelization methods for large-scale machine learning! Instructor: ([email protected]) TAs: Pierre-Luc Bacon ([email protected]) Ryan Lowe ([email protected])
MapReduce: Simplified Data Processing on Large Clusters
MapReduce: Simplified Data Processing on Large Clusters Jeffrey Dean and Sanjay Ghemawat [email protected], [email protected] Google, Inc. Abstract MapReduce is a programming model and an associated implementation
MapReduce and the New Software Stack
20 Chapter 2 MapReduce and the New Software Stack Modern data-mining applications, often called big-data analysis, require us to manage immense amounts of data quickly. In many of these applications, the
HaLoop: Efficient Iterative Data Processing on Large Clusters
: Efficient Iterative Data Processing on Large Clusters Yingyi Bu Bill Howe Magdalena Balazinska Michael D. Ernst Department of Computer Science and Engineering University of Washington, Seattle, WA, U.S.A.
Recognization of Satellite Images of Large Scale Data Based On Map- Reduce Framework
Recognization of Satellite Images of Large Scale Data Based On Map- Reduce Framework Vidya Dhondiba Jadhav, Harshada Jayant Nazirkar, Sneha Manik Idekar Dept. of Information Technology, JSPM s BSIOTR (W),
Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University http://www.mmds.org
Note to other teachers and users of these slides: We would be delighted if you found this our material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit
Tutorial: Big Data Algorithms and Applications Under Hadoop KUNPENG ZHANG SIDDHARTHA BHATTACHARYYA
Tutorial: Big Data Algorithms and Applications Under Hadoop KUNPENG ZHANG SIDDHARTHA BHATTACHARYYA http://kzhang6.people.uic.edu/tutorial/amcis2014.html August 7, 2014 Schedule I. Introduction to big data
Developing MapReduce Programs
Cloud Computing Developing MapReduce Programs Dell Zhang Birkbeck, University of London 2015/16 MapReduce Algorithm Design MapReduce: Recap Programmers must specify two functions: map (k, v) * Takes
Introduction to Hadoop
Introduction to Hadoop Miles Osborne School of Informatics University of Edinburgh [email protected] October 28, 2010 Miles Osborne Introduction to Hadoop 1 Background Hadoop Programming Model Examples
Lecture 10 - Functional programming: Hadoop and MapReduce
Lecture 10 - Functional programming: Hadoop and MapReduce Sohan Dharmaraja Sohan Dharmaraja Lecture 10 - Functional programming: Hadoop and MapReduce 1 / 41 For today Big Data and Text analytics Functional
Open source software framework designed for storage and processing of large scale data on clusters of commodity hardware
Open source software framework designed for storage and processing of large scale data on clusters of commodity hardware Created by Doug Cutting and Mike Carafella in 2005. Cutting named the program after
Beyond Hadoop MapReduce Apache Tez and Apache Spark
Beyond Hadoop MapReduce Apache Tez and Apache Spark Prakasam Kannan Computer Science Department San Jose State University San Jose, CA 95192 408-924-1000 [email protected] ABSTRACT Hadoop MapReduce
Data Management in the Cloud -
Data Management in the Cloud - current issues and research directions Patrick Valduriez Esther Pacitti DNAC Congress, Paris, nov. 2010 http://www.med-hoc-net-2010.org SOPHIA ANTIPOLIS - MÉDITERRANÉE Is
Big Application Execution on Cloud using Hadoop Distributed File System
Big Application Execution on Cloud using Hadoop Distributed File System Ashkan Vates*, Upendra, Muwafaq Rahi Ali RPIIT Campus, Bastara Karnal, Haryana, India ---------------------------------------------------------------------***---------------------------------------------------------------------
The Google File System
The Google File System By Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung (Presented at SOSP 2003) Introduction Google search engine. Applications process lots of data. Need good file system. Solution:
Data Management in the Cloud
Data Management in the Cloud Ryan Stern [email protected] : Advanced Topics in Distributed Systems Department of Computer Science Colorado State University Outline Today Microsoft Cloud SQL Server
CS 4604: Introduc0on to Database Management Systems. B. Aditya Prakash Lecture #13: NoSQL and MapReduce
CS 4604: Introduc0on to Database Management Systems B. Aditya Prakash Lecture #13: NoSQL and MapReduce Announcements HW4 is out You have to use the PGSQL server START EARLY!! We can not help if everyone
Big Data: Using ArcGIS with Apache Hadoop. Erik Hoel and Mike Park
Big Data: Using ArcGIS with Apache Hadoop Erik Hoel and Mike Park Outline Overview of Hadoop Adding GIS capabilities to Hadoop Integrating Hadoop with ArcGIS Apache Hadoop What is Hadoop? Hadoop is a scalable
