MapReduce: Algorithm Design Patterns
Designing Algorithms for MapReduce

Algorithms must be adapted to a restricted model of computation. The goals:
- Scalability: adding machines makes the algorithm run faster
- Efficiency: resources are not wasted

Translating some algorithms into MapReduce isn't always obvious, but there are useful design patterns that can help. We will cover some of them and use examples to illustrate how they can be applied.

Some slides borrowed from Jimmy Lin, Jeff Ullman, Jerome Simeon, and Jure Leskovec.

Towards Scalable Hadoop Algorithms

Ideal scaling characteristics:
- Twice the data, twice the running time
- Twice the resources, half the running time

Why can't we achieve this? Synchronization requires communication, and communication kills performance. So avoid communication: reduce intermediate data via local aggregation; combiners can help. Also:
- Avoid object creation: it is an inherently costly operation and burdens garbage collection
- Avoid buffering: heap size is limited, so buffering works for small datasets but won't scale!

Tools for Synchronization

- Cleverly-constructed data structures: bring partial results together
- Sort order of intermediate keys: control the order in which reducers process keys
- Partitioner: control which reducer processes which keys
- Preserving state in mappers and reducers: capture dependencies across multiple keys and values; execute initialization and termination code before and after map/reduce tasks
Preserving State

Each task is served by a single Mapper or Reducer object, so state stored in that object persists across calls:
- configure (setup in the new API): initialization hook, called once per task
- map / reduce: one call per input key-value pair (mapper) or one call per intermediate key (reducer)
- close (cleanup in the new API): termination hook, called once per task

DESIGN PATTERNS

Pattern 1: Local Aggregation

- Use combiners
- Do aggregation inside mappers

Word Count: Baseline

Suppose the collection has a total of n terms and d distinct terms. What are the communication costs for this MapReduce job? What are the communication costs if we add a combiner?

Word Count: Aggregate in Mapper (v. 1 and v. 2)

Instead of emitting (term, 1) for every occurrence, keep a histogram inside the mapper: H(dog) += 1, H(cat) += 1, H(dog) += 1, and so on, emitting the totals at the end. Key: preserve state across input key-value pairs! Are combiners still needed? A sketch follows.
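As a concrete illustration (a minimal sketch, not the slide's original code), here is in-mapper aggregation for word count using Hadoop's Java API; the class name is our own. The histogram lives in the Mapper object, so it survives across map() calls, and setup() and cleanup() are the initialization and termination hooks named above.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    class InMapperWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {

      private final Map<String, Integer> counts = new HashMap<>();

      @Override
      protected void setup(Context context) {
        counts.clear();  // initialization hook: runs once, before any input
      }

      @Override
      protected void map(LongWritable offset, Text line, Context context) {
        for (String term : line.toString().split("\\s+")) {
          if (!term.isEmpty()) {
            counts.merge(term, 1, Integer::sum);  // aggregate locally; emit nothing yet
          }
        }
      }

      @Override
      protected void cleanup(Context context)
          throws IOException, InterruptedException {
        // termination hook: emit one pair per distinct term this task saw
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
          context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
        }
      }
    }

With this mapper a combiner is largely redundant, though keeping one remains harmless: addition is commutative and associative, so extra partial sums cannot change the result.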
Design Pattern for Local Aggregation

In-mapper combining: fold the functionality of the combiner into the mapper by preserving state across multiple map calls.
- Explicitly control aggregation
- Speed: why is this faster than actual combiners? There is no need to write all the intermediate key-value pairs to disk!
- How and when can local aggregation help with reduce stragglers? When the value distribution is skewed.

Disadvantages:
- Explicit memory management is required: if the associative array grows too big, it will not fit in memory!
- Preserving state across multiple key-value pairs creates the potential for order-dependent bugs (not a problem for word count)

Limiting Memory Usage

To limit memory usage when using the in-mapper combining technique, block input key-value pairs and flush the in-memory data structures periodically, e.g., driven by a counter variable that tracks the number of input key-value pairs processed. The memory usage threshold needs to be determined empirically: with too large a value the mapper may run out of memory, but with too small a value opportunities for local aggregation may be lost. Note that Hadoop splits a node's physical memory among the multiple tasks that may be running on it concurrently, which makes resource consumption difficult to coordinate. A flush-on-threshold sketch follows.
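A sketch of the bounded-memory variant just described; FLUSH_THRESHOLD is an assumed tuning knob, to be determined empirically as the slide notes.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    class BoundedInMapperWordCount
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private static final int FLUSH_THRESHOLD = 100_000;  // hypothetical value; tune empirically
      private final Map<String, Integer> counts = new HashMap<>();

      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        for (String term : line.toString().split("\\s+")) {
          if (!term.isEmpty()) counts.merge(term, 1, Integer::sum);
        }
        // Flushing may emit the same term more than once per task, which is safe:
        // the reducer (and any combiner) simply keeps summing partial counts.
        if (counts.size() > FLUSH_THRESHOLD) flush(context);
      }

      @Override
      protected void cleanup(Context context)
          throws IOException, InterruptedException {
        flush(context);  // emit whatever remains at end of input
      }

      private void flush(Context context) throws IOException, InterruptedException {
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
          context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
        }
        counts.clear();
      }
    }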
Combiner Design

Combiners and reducers share the same method signature, so sometimes a reducer can serve as a combiner. When is this the case? Often it cannot: it works only when the reduce operation is commutative and associative. Remember that combiners are an optional optimization: they should not affect algorithm correctness, and they may be run 0, 1, or multiple times.

Example: find the average of all integers associated with the same key, e.g., access logs of (user_id, session_id, session_length).

Computing the Mean: Version 1

Mean(1, 2, 3, 4, 5) ≠ Mean(Mean(1, 2), Mean(3, 4, 5))

Can we use the reducer as a combiner? No: taking a mean is not associative.

Computing the Mean: Version 2

How would you fix this? Does this work?
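The standard fix is to pass partial (sum, count) pairs through the combiner and divide only in the reducer. A minimal sketch, with assumed encoding: each value is the Text "sum,count", and the mapper is presumed to emit (user_id, "x,1") for each session length x.

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    class MeanCombiner extends Reducer<Text, Text, Text, Text> {
      @Override
      protected void reduce(Text key, Iterable<Text> values, Context context)
          throws IOException, InterruptedException {
        long sum = 0, count = 0;
        for (Text v : values) {             // each value is "partialSum,partialCount"
          String[] parts = v.toString().split(",");
          sum += Long.parseLong(parts[0]);
          count += Long.parseLong(parts[1]);
        }
        context.write(key, new Text(sum + "," + count));  // still a (sum, count) pair
      }
    }

    class MeanReducer extends Reducer<Text, Text, Text, DoubleWritable> {
      @Override
      protected void reduce(Text key, Iterable<Text> values, Context context)
          throws IOException, InterruptedException {
        long sum = 0, count = 0;
        for (Text v : values) {
          String[] parts = v.toString().split(",");
          sum += Long.parseLong(parts[0]);
          count += Long.parseLong(parts[1]);
        }
        context.write(key, new DoubleWritable((double) sum / count));  // divide once, at the end
      }
    }

Because summing (sum, count) pairs is commutative and associative, the combiner may run 0, 1, or many times without changing the final mean.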
Computing the Mean: Versions 3 and 4

Fixed? Yes: partial (sum, count) pairs combine correctly under a combiner. Can you make this more efficient? Version 4 applies in-mapper combining to the same idea.

Pattern 2: Pairs and Stripes

The task: keep track of joint events across a large number of observations. This is common in natural language processing, and also in point-of-sale analysis to identify correlated product purchases (e.g., a customer who buys milk also buys bread), which assists inventory management and product placement on store shelves.

Example: a term co-occurrence matrix for a text collection. M is an N x N matrix (N = vocabulary size), where M_ij is the number of times terms i and j co-occur in some context (for concreteness, let's say context = sentence).

MapReduce: Large Counting Problems

The term co-occurrence matrix is a specific instance of a large counting problem:
- A large event space (the number of terms)
- A large number of observations (the collection itself)
- Space requirement: N^2
- Goal: keep track of interesting statistics about the events

Basic approach: mappers generate partial counts; reducers aggregate partial counts. How do we aggregate partial counts efficiently? Real-world English vocabularies run to hundreds of thousands of terms, and web-scale collections contain billions of words.

First Try: Pairs

Each mapper takes a sentence, generates all co-occurring term pairs, and for each pair emits ((a, b), count). Reducers sum up the counts associated with each pair. Use combiners!

Pairs: Pseudo-Code

Note the use of a complex key. A sketch follows.
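A minimal sketch of the pairs approach, with assumptions: input values are sentences, one per record, and the complex key is encoded as the tab-separated Text "a<TAB>b" (the lecture's pseudo-code would instead use a dedicated pair type, i.e., a custom WritableComparable).

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    class PairsMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);

      @Override
      protected void map(LongWritable offset, Text sentence, Context context)
          throws IOException, InterruptedException {
        String[] terms = sentence.toString().split("\\s+");
        for (String a : terms) {
          for (String b : terms) {
            if (!a.equals(b)) {
              context.write(new Text(a + "\t" + b), ONE);  // one pair per co-occurrence
            }
          }
        }
      }
    }

    class PairsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      @Override
      protected void reduce(Text pair, Iterable<IntWritable> counts, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable c : counts) sum += c.get();  // sum the partial counts
        context.write(pair, new IntWritable(sum));
      }
    }

Since the reduce operation is a plain sum, the same class can be registered as the combiner via job.setCombinerClass(PairsReducer.class).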
Pairs Analysis

Easy to implement, easy to understand. Disadvantages: lots of pairs to sort and shuffle around, and not many opportunities for combiners to work.

Another Try: Stripes

Idea: group pairs into an associative array:

(a, b) -> 1, (a, c) -> 2, (a, d) -> 5, (a, e) -> 3, (a, f) -> 2
becomes
a -> { b: 1, c: 2, d: 5, e: 3, f: 2 }

Each mapper takes a sentence, generates all co-occurring term pairs, and for each term a emits a -> { b: count_b, c: count_c, d: count_d, ... }. Reducers perform an element-wise sum of the associative arrays:

   a -> { b: 1,       d: 5, e: 3       }
 + a -> { b: 1, c: 2, d: 2,       f: 2 }
 = a -> { b: 2, c: 2, d: 7, e: 3, f: 2 }

Key: a cleverly-constructed data structure brings together partial results.

Stripes Analysis

What are the advantages of stripes? Far less sorting and shuffling of key-value pairs, and better use of combiners. Disadvantages: more difficult to implement; the underlying object is more heavyweight, with higher serialization and deserialization overhead; and there is a fundamental limitation on the size of the event space, since a stripe must fit in memory.

What about combiners? Both algorithms can benefit from combiners, since the respective operations in their reducers (addition and element-wise sum of associative arrays) are both commutative and associative. But are combiners equally effective in both pairs and stripes?

Stripes: Pseudo-Code

A sketch follows.
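A sketch of the stripes approach (the lecture's pseudo-code is language-neutral), here using Hadoop's MapWritable as the associative-array value type.

    import java.io.IOException;
    import java.util.Map;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.MapWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    class StripesMapper extends Mapper<LongWritable, Text, Text, MapWritable> {
      @Override
      protected void map(LongWritable offset, Text sentence, Context context)
          throws IOException, InterruptedException {
        String[] terms = sentence.toString().split("\\s+");
        for (String a : terms) {
          MapWritable stripe = new MapWritable();   // a -> { b: count, c: count, ... }
          for (String b : terms) {
            if (a.equals(b)) continue;
            Text t = new Text(b);
            IntWritable old = (IntWritable) stripe.get(t);
            stripe.put(t, new IntWritable(old == null ? 1 : old.get() + 1));
          }
          context.write(new Text(a), stripe);       // one stripe per term per sentence
        }
      }
    }

    class StripesReducer extends Reducer<Text, MapWritable, Text, MapWritable> {
      @Override
      protected void reduce(Text term, Iterable<MapWritable> stripes, Context context)
          throws IOException, InterruptedException {
        MapWritable sum = new MapWritable();
        for (MapWritable stripe : stripes) {        // element-wise sum of associative arrays
          for (Map.Entry<Writable, Writable> e : stripe.entrySet()) {
            IntWritable old = (IntWritable) sum.get(e.getKey());
            int add = ((IntWritable) e.getValue()).get();
            sum.put(e.getKey(), new IntWritable(old == null ? add : old.get() + add));
          }
        }
        context.write(term, sum);
      }
    }

Here, too, the reducer can double as the combiner, since element-wise summing is commutative and associative.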
Relative Frequencies

Absolute counts do not account for the fact that some words (e.g., "the") appear more frequently than others. How do we estimate relative frequencies from counts, i.e., what proportion of the time does B appear in the context of A?

f(B|A) = count(A, B) / count(A) = count(A, B) / Σ_B' count(A, B')

How do we do this with MapReduce?

f(B|A): Stripes

Easy! Each stripe a -> { b1: 3, b2: 12, b3: 7, b4: 1, ... } already brings together everything we need: one pass over the stripe computes the marginal count(a), and another pass directly computes f(b|a), e.g., f(b1|a) = 3 / count(a).

f(B|A): Pairs

How would you implement this? Given (a, b) -> count, how do we compute f(b|a)? One option is to compute the marginal by keeping state in the reducer:
- Build an associative array in the reducer
- Sort by a then b, to detect when all pairs associated with a given a have been encountered
- Guarantee that all pairs sharing a go to the same reducer, via a custom partitioner based on a alone; otherwise (dog, aardvark) and (dog, zebra) do not necessarily go to the same reducer!
This has the same drawback as stripes: the reducer can run out of memory.

A leaner option: the reducer holds only the marginal value in memory. For this to work, the reducer must see the pairs in this order:

(a, *)  -> 32
(a, b1) -> 3     emit (a, b1) -> 3/32
(a, b2) -> 12    emit (a, b2) -> 12/32
(a, b3) -> 7     emit (a, b3) -> 7/32
(a, b4) -> 1     emit (a, b4) -> 1/32

Requirements:
- The mapper must emit an extra (a, *) pair for every (a, b_n)
- All a's must be sent to the same reducer (use a partitioner)
- (a, *) must come first (define the sort order)
- The reducer must hold state across different key-value pairs

Pattern 3: Order Inversion

This is a common design pattern. Computing relative frequencies requires marginal counts, but the marginal cannot be computed until all the counts have been seen, and buffering is a bad idea! The trick: get the marginal counts to arrive at the reducer before the joint counts.

Figure 3.3 (from Lin and Dyer) shows the sequence of key-value pairs presented to the reducer in the pairs algorithm for computing relative frequencies, illustrating the order inversion design pattern:

key              values
(dog, *)         [6327, 8514, ...]   compute marginal: Σ_w' N(dog, w') = 42908
(dog, aardvark)  [2, 1]              f(aardvark|dog) = 3/42908
(dog, aardwolf)  [1]                 f(aardwolf|dog) = 1/42908
...
(dog, zebra)     [2, 1, 1, 1]        f(zebra|dog) = 5/42908
(doge, *)        [682, ...]          compute marginal: Σ_w' N(doge, w')

Synchronization: Pairs vs. Stripes

- Approach 1: turn synchronization into an ordering problem. Sort keys into the correct order of computation, partition the key space so that each reducer gets the appropriate set of partial results, and hold state in the reducer across multiple key-value pairs to perform the computation. This is the pairs approach.
- Approach 2: construct data structures that bring partial results together, so that each reducer receives all the data it needs to complete the computation. This is the stripes approach.

In the pairs algorithm, each joint key's value list holds partial joint counts from the map phase (two separate values in the (dog, aardvark) case above). Summing these counts yields the final joint count, i.e., the number of times dog and aardvark co-occur in the entire collection. At this point, since the reducer already knows the marginal, simple arithmetic suffices to compute the relative frequency. All subsequent joint counts are processed in exactly the same manner. When the reducer encounters the next special key-value pair, (doge, *), it resets its internal state and starts to accumulate the marginal all over again. Observe that the memory requirement of this algorithm is minimal, since only the marginal (an integer) needs to be stored. No buffering of individual co-occurring word counts is necessary, and therefore we have eliminated the scalability bottleneck of the previous algorithm.
This design pattern, which we call order inversion, occurs surprisingly often and across applications in many domains. It is so named because, through proper coordination, we can access the result of a computation in the reducer (for example, an aggregate statistic) before processing the data needed for that computation. The key insight is to convert the sequencing of computations into a sorting problem. In most cases, an algorithm requires data in some fixed order: by controlling how keys are sorted and how the key space is partitioned, we can present data to the reducer in the order necessary to perform the proper computations. This greatly cuts down on the amount of partial results that the reducer needs to hold in memory.

To summarize, the specific application of the order inversion design pattern for computing relative frequencies requires:
- Emitting a special key-value pair for each co-occurring word pair in the mapper, to capture its contribution to the marginal
- Controlling the sort order of the intermediate key, so that the key-value pairs representing the marginal contributions are processed by the reducer before any of the pairs representing the joint word co-occurrence counts
together with the custom partitioner and reducer-side state described above.
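A sketch of the two pieces this adds for f(b|a) in the pairs approach, assuming keys encoded as the Text "a<TAB>b" with "*" as the special marker. For word data, "*" sorts before any letter under Text's byte-wise ordering, so (a, *) arrives first; a custom sort comparator would make this explicit.

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;
    import org.apache.hadoop.mapreduce.Reducer;

    // Partition on the left word only, so every (a, *) and (a, b) pair
    // reaches the same reducer.
    class LeftWordPartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text key, IntWritable value, int numReducers) {
        String left = key.toString().split("\t")[0];
        return (left.hashCode() & Integer.MAX_VALUE) % numReducers;
      }
    }

    class RelativeFrequencyReducer
        extends Reducer<Text, IntWritable, Text, DoubleWritable> {
      private long marginal = 0;  // state held across keys: count(a)

      @Override
      protected void reduce(Text key, Iterable<IntWritable> counts, Context context)
          throws IOException, InterruptedException {
        long sum = 0;
        for (IntWritable c : counts) sum += c.get();
        if (key.toString().endsWith("\t*")) {
          marginal = sum;                       // (a, *) arrives first: remember count(a)
        } else {
          context.write(key, new DoubleWritable((double) sum / marginal));  // f(b|a)
        }
      }
    }

The mapper would emit ("a\t*", 1) alongside every ("a\tb", 1), and the driver would register job.setPartitionerClass(LeftWordPartitioner.class).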
Pattern 4: Secondary Sorting

MapReduce sorts the input to reducers by key, but the values for a key may arrive in arbitrary order, e.g., k -> (v1, r), (v3, r), (v4, r), (v8, r). What if we want to sort the values as well?

Secondary Sorting: Solutions

- Solution 1: buffer the values in memory, then sort. Why is this a bad idea? The buffer may not fit in memory.
- Solution 2: the value-to-key conversion design pattern. Form a composite intermediate key (k, v1), let the execution framework do the sorting, and preserve state across multiple key-value pairs to handle processing. Anything else we need to do? Yes: partition on the natural key and group composites by it, as in the sketch below.
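A sketch of value-to-key conversion, assuming the natural key k and value field v are strings joined as the composite Text "k<TAB>v". The framework then sorts the composites byte-lexicographically (pad numeric fields, or use a custom WritableComparable, for numeric order); the partitioner and grouping comparator keep all composites for the same k together in one reduce call.

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.io.WritableComparator;
    import org.apache.hadoop.mapreduce.Partitioner;

    class NaturalKeyPartitioner extends Partitioner<Text, Text> {
      @Override
      public int getPartition(Text key, Text value, int numReducers) {
        String k = key.toString().split("\t")[0];   // partition on the natural key alone
        return (k.hashCode() & Integer.MAX_VALUE) % numReducers;
      }
    }

    class NaturalKeyGroupingComparator extends WritableComparator {
      public NaturalKeyGroupingComparator() { super(Text.class, true); }

      @Override
      public int compare(WritableComparable a, WritableComparable b) {
        String ka = a.toString().split("\t")[0];    // group reduce input on the natural key alone
        String kb = b.toString().split("\t")[0];
        return ka.compareTo(kb);
      }
    }

The driver wires these in with job.setPartitionerClass(NaturalKeyPartitioner.class) and job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class); the reducer then sees the values for each k already sorted by v.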
Recap: Tools for Synchronization

- Cleverly-constructed data structures: bring data together
- Sort order of intermediate keys: control the order in which reducers process keys
- Partitioner: control which reducer processes which keys
- Preserving state in mappers and reducers: capture dependencies across multiple keys and values

Issues and Tradeoffs

- Number of key-value pairs: object creation overhead; time for sorting and shuffling pairs across the network
- Size of each key-value pair: de/serialization overhead
- Local aggregation: opportunities to perform local aggregation vary; combiners make a big difference; combiners vs. in-mapper combining; RAM vs. disk vs. network

Debugging at Scale

Code that works on small datasets won't necessarily scale. Why?
- Memory management issues (buffering and object creation)
- Too much intermediate data
- Mangled input records

Real-world data is messy! (Word count: how many unique words are in Wikipedia?) There's no such thing as consistent data, so watch out for corner cases and isolate unexpected behavior.

REASONING ABOUT COST

Cost Measures for Algorithms

1. Communication cost = total I/O of all processes.
2. Elapsed communication cost = max of I/O along any path.
3. (Elapsed) computation costs are analogous, but count only the running time of processes.

Example: Cost Measures

For a map-reduce algorithm:
- Communication cost = input file size + 2 x (sum of the sizes of all files passed from Map processes to Reduce processes) + the sum of the output sizes of the Reduce processes.
- Elapsed communication cost = the sum of the largest input + output for any Map process, plus the same for any Reduce process.

What Cost Measures Mean

Either the I/O (communication) cost or the processing (computation) cost dominates; ignore the other. Total costs tell you what you pay in rent to your friendly neighborhood cloud. Elapsed costs are wall-clock time, exploiting parallelism.

JOINS IN MAPREDUCE

Join By Map-Reduce

Our first example of an algorithm in this framework: compute the natural join R(A,B) ⋈ S(B,C). R and S are each stored in files; tuples are pairs (a, b) or (b, c).

Map-Reduce Join (2)

Use a hash function h from B-values to 1..k. A Map process turns each input tuple R(a, b) into the key-value pair (b, (a, R)) and each input tuple S(b, c) into (b, (c, S)).

Map-Reduce Join (3)

Map processes send each key-value pair with key b to Reduce process h(b). Hadoop does this automatically; just tell it what k is. Each Reduce process matches all the pairs (b, (a, R)) with all the pairs (b, (c, S)) and outputs (a, b, c). A sketch follows.
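A minimal sketch of this reduce-side join, assuming R tuples arrive as lines "a,b" and S tuples as lines "b,c", with the relation tag supplied by using a separate mapper per input file.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    class RMapper extends Mapper<LongWritable, Text, Text, Text> {
      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        String[] t = line.toString().split(",");            // R(a, b)
        context.write(new Text(t[1]), new Text("R," + t[0]));  // key on b, tag with R
      }
    }

    class SMapper extends Mapper<LongWritable, Text, Text, Text> {
      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        String[] t = line.toString().split(",");            // S(b, c)
        context.write(new Text(t[0]), new Text("S," + t[1]));  // key on b, tag with S
      }
    }

    class JoinReducer extends Reducer<Text, Text, Text, NullWritable> {
      @Override
      protected void reduce(Text b, Iterable<Text> values, Context context)
          throws IOException, InterruptedException {
        List<String> as = new ArrayList<>(), cs = new ArrayList<>();
        for (Text v : values) {
          String[] t = v.toString().split(",", 2);
          if (t[0].equals("R")) as.add(t[1]); else cs.add(t[1]);
        }
        for (String a : as)                                 // emit (a, b, c) for every match
          for (String c : cs)
            context.write(new Text(a + "," + b + "," + c), NullWritable.get());
      }
    }

In the driver, MultipleInputs.addInputPath can bind each mapper to its relation's file; the shuffle then plays the role of "send key b to reducer h(b)" from the slide.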
Cost of Map-Reduce Join

- Total communication cost = O(|R| + |S| + |R ⋈ S|).
- Elapsed communication cost = O(s), where s is a limit we place on the amount of input or output that any one process can have; s could be what fits in main memory, or what fits on local disk. We pick k and the number of Map processes so that the I/O limit s is respected.
- With proper indexes, computation cost is linear in the input + output size, so computation costs track communication costs.

Three-Way Join

We shall consider a simple join of three relations, the natural join R(A,B) ⋈ S(B,C) ⋈ T(C,D). One way: a cascade of two 2-way joins, each implemented by map-reduce. Fine, unless the 2-way joins produce large intermediate relations.

Example: Large Intermediate Relations

A = "good" pages; B, C = all pages; D = spam pages. R, S, and T each represent links, so the 3-way join gives the paths of length 3 from a good page to a spam page. But R ⋈ S = paths of length 2 from a good page to any page, and S ⋈ T = paths of length 2 from any page to a spam page: both potentially huge.

Another 3-Way Join

Instead, do the join in a single map-reduce job, using hash values of entire S(B,C) tuples as keys. Choose a hash function h that maps B- and C-values to k buckets. There are k^2 Reduce processes, one for each (B-bucket, C-bucket) pair.

Mapping for 3-Way Join

- Map each tuple S(b, c) to ((h(b), h(c)), (S, b, c)).
- Map each tuple R(a, b) to ((h(b), y), (R, a, b)) for all y = 1, 2, ..., k.
- Map each tuple T(c, d) to ((x, h(c)), (T, c, d)) for all x = 1, 2, ..., k.

Assigning Tuples to Reducers

[The original slide shows a k x k grid of reducers indexed by (h(b), h(c)): a tuple S(b,c) with h(b)=1 and h(c)=2 lands in exactly one cell; R(a,b) tuples with a given h(b) are replicated across that entire row; T(c,d) tuples with h(c)=3 are replicated down that entire column.]
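A sketch of the map step of this one-shot three-way join, under stated assumptions: buckets are numbered 0..K-1 rather than 1..k, composite keys are serialized as the Text "i,j", each input line is tagged with its relation ("R,a,b", "S,b,c", or "T,c,d"), and h is a stand-in hash. K is a hypothetical bucket count.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    class ThreeWayJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
      private static final int K = 8;                     // hypothetical bucket count

      private static int h(String value) {
        return (value.hashCode() & Integer.MAX_VALUE) % K;
      }

      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        String[] t = line.toString().split(",");
        switch (t[0]) {
          case "S":                                       // S(b, c): exactly one reducer
            context.write(new Text(h(t[1]) + "," + h(t[2])), line);
            break;
          case "R":                                       // R(a, b): replicate across all C-buckets
            for (int y = 0; y < K; y++)
              context.write(new Text(h(t[2]) + "," + y), line);
            break;
          case "T":                                       // T(c, d): replicate across all B-buckets
            for (int x = 0; x < K; x++)
              context.write(new Text(x + "," + h(t[1])), line);
            break;
        }
      }
    }

All tuples that must meet share the same "i,j" key, so each reduce group holds exactly the R, S, and T tuples described on the next slide, whatever partitioner distributes the K^2 keys.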
Job of the Reducers

Each reducer gets, for certain B-values b and C-values c:
1. All tuples from R with B = b,
2. All tuples from T with C = c, and
3. The tuple S(b, c), if it exists.
Thus it can create every tuple of the form (a, b, c, d) in the join.

RUNNING MAPREDUCE JOBS

Hadoop Workflow

On your own Hadoop cluster:
1. Load data into HDFS
2. Develop code locally
3. Submit the MapReduce job (and go back to step 2 as needed)
4. Retrieve data from HDFS

On Amazon with EC2:
0. Allocate a Hadoop cluster
1. Load data into HDFS
2. Develop code locally
3. Submit the MapReduce job (and go back to step 2 as needed)
4. Retrieve data from HDFS
5. Clean up! Uh oh, where did the data go? The EC2 cluster is gone.

On Amazon with EC2 and S3: use S3 as the persistent store. Copy input from S3 to HDFS before the job, and copy output from HDFS back to S3 afterwards.

Debugging Hadoop

First, take a deep breath. Strategies:
- Start small, start locally
- Learn to use the webapp
- Where does println go? Don't use println; use logging
- Throw RuntimeExceptions
References

- Jimmy Lin and Chris Dyer, Data-Intensive Text Processing with MapReduce (Chapter 3)
- Rajaraman et al., Mining of Massive Datasets (Chapter 2)
- Hadoop tutorial: module4.html
- Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters"
- Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, "The Google File System"