Hadoop/MapReduce Workshop
1 Hadoop/MapReduce Workshop Dan Mazur, Simon Nderitu August 14, 2014
2 Outline Hadoop introduction and motivation; Python review; HDFS - The Hadoop Filesystem; MapReduce examples and exercises: wordcount, distributed grep, distributed sort, maximum, mean and standard deviation, combiners
3 Exercise 0: Login and Setup
Log-in to Guillimin (use the account number and password from the slip you received):
$ ssh -X class##@hadoop.hpc.mcgill.ca
Copy the workshop files:
$ cp -R /software/workshop/hadoop .
Load the Hadoop module:
$ module show hadoop
$ module load hadoop python
4 Exercise 1: First MapReduce job
To make sure your environment is set up correctly, launch an example MapReduce job:
$ hadoop jar $HADOOP_EXAMPLES pi
($HADOOP_EXAMPLES=/usr/hdp/ /hadoop-mapreduce/hadoop-mapreduce-examples.jar)
Final output:
Job Finished in seconds
Estimated value of Pi is
5 Hadoop What is Hadoop? 5
6 Hadoop What is Hadoop? A collection of related software (software framework, software stack, software ecosystem) Data-intensive computing (Big Data) Scales with data size Fault-tolerant Parallel Analysis of unstructured, semi-structured data Cluster Commodity hardware Open Source
7 MapReduce What is MapReduce? Parallel, distributed programming model Large-scale data processing Map() Filter, sort, embarrassingly parallel e.g. sort participants by first name Reduce() Summary e.g. count participants whose first name starts with each letter 7
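To make the model concrete before we touch Hadoop, here is a minimal serial sketch of the same idea in plain Python (the participant names are invented for illustration and are not part of Hadoop):

    # Map phase: emit a (key, value) pair per participant (key = first letter)
    participants = ['Alice', 'Bob', 'Anna', 'Carol', 'Ben']
    pairs = [(name[0], 1) for name in participants]

    # Sort phase: Hadoop sorts the pairs by key between map and reduce
    pairs.sort()

    # Reduce phase: summarize all values for each key
    counts = {}
    for letter, one in pairs:
        counts[letter] = counts.get(letter, 0) + one
    print(counts)  # {'A': 2, 'B': 2, 'C': 1}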
8 Hadoop Ecosystem Apache Hadoop core HDFS - Hadoop Distributed File System Yarn - Resource management, job scheduling Pig - high-level scripting for MapReduce programs Hive - Data warehouse with SQL-like queries HCatalog - Abstracts data storage filenames and formats HBase - Database Zookeeper - Maintains and synchronizes configuration Oozie - Workflow scheduling Sqoop - Transfer data to/from relational databases 8
9 Hadoop Motivation For big data, hard disk input/output (I/O) is a bottleneck We have seen huge technology improvements in both CPU speeds and storage capacity I/O performance has not improved as dramatically We will see that Hadoop solves this problem by parallelizing I/O operations across many disks The genius of Hadoop is in how easy this parallelization is from the developer's perspective 9
10 Hadoop Motivation Big Data Challenges - The V -words Volume Amount of data (terabytes, petabytes) Want to distribute storage across many nodes and analyze it in place Variety Data comes in many formats, not always in a relational database Want to store data in original format Velocity Rate at which size of data grows, or speed with which it must be processed Want to expand storage and analysis capacity as data grows Data is big data if its volume, variety, or velocity are too great to manage easily with traditional tools 10
11 How Does Hadoop Solve These Problems? Distributed file system (HDFS) Scalable with redundancy Parallel computations on data nodes Batch (scheduled) processing 11
12 Hadoop vs. Competition: Hadoop works well for loosely coupled (embarrassingly parallel) problems. [Chart: process coupling vs. parallelism, positioning MPI, databases, and Hadoop]
13 Hadoop vs. Competition: Map or reduce tasks are automatically re-run if they fail; data is stored redundantly and reproduced automatically if a drive fails. [Chart: fault tolerance vs. parallelism, positioning Hadoop, databases, and MPI]
14 Hadoop vs. Competition: Hadoop makes certain problems very easy to solve; the hardest parts of parallel programming are abstracted away. Today we will write several practical codes that could scale to 1000s of nodes with just a few lines of code. [Chart: developer productivity vs. parallelism, positioning Hadoop, databases, and MPI]
15 Hadoop vs. the Competition Hadoop, MPI, and databases are all improving their weaknesses; all are becoming fault-tolerant platforms for tightly-coupled, massively parallel problems. Hadoop integrates easily into a workflow that includes MPI and/or databases: Sqoop, HBase, etc. for working with databases; Hadoop for post-MPI data analysis; Hadoop for pre-MPI data processing. Hadoop 2 introduced a scheduler, Yarn, that can schedule MPI, MapReduce, and other types of workloads. New tools for tightly-coupled problems: Apache Spark
16 Python Hadoop is implemented in Java, but developers can work with any programming language. For a workshop, it is important to have a common language.
17 Python - for loops
Can loop over individual instances of 'iterable' objects (lists, tuples, dictionaries). Looped sections use an indented block. Be consistent: use a tab or 4 spaces, not both. Do not forget the colon.
    mylist = ['one', 'two', 'three']
    for item in mylist:
        print(item)
18 Python standard input/output
    #!/usr/bin/python
    import sys
    import csv    # modules for system and comma-separated-value file functions

    reader = csv.reader(sys.stdin, delimiter=',')
    for line in reader:    # loop over lines in reader
        data0 = line[0]
        data1 = line[1]
19 Dictionaries
Unordered set of key/value pairs; keys are unique and can be used as an index.
    >>> dict = {'key':'value', 'apple':'the round fruit of a tree'}
    >>> print dict['key']
    value
    >>> print dict['apple']
    the round fruit of a tree
20 Hadoop Distributed File System (HDFS) Key Concepts: Data is read and written in minimum units (blocks) A master node (namenode) manages the filesystem tree and the metadata for each file Data is stored on a group of worker nodes (datanodes) The same blocks are replicated across multiple datanodes (default replication = 3)
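As a back-of-envelope sketch in plain Python (sizes taken from the diagrams on the next slides), this is how a file maps onto blocks and raw storage:

    import math

    file_mb, block_mb, replication = 150, 64, 3  # myfile.txt, defaults above
    n_blocks = int(math.ceil(file_mb / float(block_mb)))  # 3 blocks: 64 + 64 + 22 MB
    raw_mb = file_mb * replication                        # 450 MB consumed on the cluster
    print((n_blocks, raw_mb))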
21 HDFS Blocks [Diagram: a 150 MB file, myfile.txt, is split into blocks 1: 64 MB, 2: 64 MB, 3: 22 MB, alongside six datanodes]
22 HDFS Blocks Data is distributed block-by-block to multiple nodes. [Diagram: blocks 1-3 of myfile.txt spread across the datanodes]
23 HDFS Blocks Data redundancy, default = 3x: each block is stored on three different datanodes. If we lose a node, the data is still available on 2 other nodes, and the namenode arranges to create a 3rd copy on another node. [Diagram: each of the three blocks replicated on three of the six datanodes]
24 Exercise 2: Using HDFS
Put a file into HDFS:
$ hdfs dfs -put titanic.txt
List files in HDFS:
$ hdfs dfs -ls
Output the file contents:
$ hdfs dfs -cat titanic.txt
$ hdfs dfs -tail titanic.txt
Get help:
$ hdfs dfs -help
Put the workshop data sets into HDFS:
$ hdfs dfs -put usask_access_logs
$ hdfs dfs -put household_power_consumption.txt
25 MapReduce Roman census approach: Want to count (and tax) all people in the Roman empire Better to go to where the people are (decentralized) than try to bring them to you (centralized) Bring back information from each village (map phase) Summarize the global picture (reduce phase) 25
26 Roman Census: Mapping [Diagram: mappers sent out to each village report counts back to the capital, e.g. 287 men, 293 women, 104 children, 854 sheep...] Note: These mappers are also combiners in Hadoop language. We will discuss what this means.
27 Roman Census: Reducing [Diagram: reducers sum the per-village counts grouped by key (sheep, men, women, children); e.g. village sheep counts such as 854, 34, 91, and 545 are combined into an empire-wide total of 2762 sheep]
28 MapReduce Data Flow [Diagram: mappers emit key, value pairs; a sort and shuffle phase groups them into (key, all values) sets; reducers process each key with all of its values and write the results]
29 Mapper Takes specified data as input Works with a fraction of the data Works in parallel Outputs intermediate records as key, value pairs (recall hash tables or Python dictionaries)
30 Reducer Takes a key or set of keys with all associated values as input Works with all data for that key Outputs the final results 30
31 MapReduce Word Counting Want to count the frequency of words in a document What are the key, value pairs for our mapper output? A) key=1, value='word' B) key='word', value=1 C) key=[number in hdfs block], value='word' D) key='word', value=[number in hdfs block] E) Something else 31
32 MapReduce Word Counting Want to count the frequency of words in a document What are the key, value pairs for our mapper output? A) key=1, value='word' B) key='word', value=1 C) key=[number in hdfs block], value='word' D) key='word', value=[number in hdfs block] E) Something else (Answer: B) Explanation: We want to sort according to the words, so the word is the key. We can generate a pair for each word; we don't need the mapper to keep track of frequencies
33 MapReduce Word Counting If our mapper input is hello world, hello!, what will our reducer input look like? A) hello 1, world 1, hello 1 B) world 1, hello 1, hello 1 C) hello 1, hello 1, world 1 D) hello 2, world 1
34 MapReduce Word Counting If our mapper input is hello world, hello!, what will our reducer input look like? A) hello 1, world 1, hello 1 B) world 1, hello 1, hello 1 C) hello 1, hello 1, world 1 D) hello 2, world 1 (Answer: C) Explanation: The reducer receives SORTED key, value pairs. The sorting is done automatically by Hadoop. D is also possible; we will learn about combiners later.
35 MapReduce Word Counting Want to count the frequency of words in a document What are the key, value pairs for our reducer output? A) key=1, value='word' B) key='word', value=1 C) key=[count in document], value='word' D) key='word', value=[count in document] E) Something else 35
36 MapReduce Word Counting Want to count the frequency of words in a document What are the key, value pairs for our reducer output? A) key=1, value='word' B) key='word', value=1 C) key=[count in document], value='word' D) key='word', value=[count in document] E) Something else (Answer: D)
37 Hadoop Streaming
Streaming lets developers use any programming language for mapping and reducing. Use standard input and standard output; the first tab character delimits between key and value. Similar to Bash pipes:
$ cat file.txt | ./mapper | sort | ./reducer
HADOOP_STREAM=/usr/hdp/ /hadoop-mapreduce/hadoop-streaming.jar
hadoop jar $HADOOP_STREAM -files <mapper script>,<reducer script> -input <input dir> -output <output dir> -mapper <mapper script> -reducer <reducer script>
38 mapper.py
    #!/usr/bin/env python        # scripts require a hash-bang line
    import sys                   # import the sys module for stdin

    for line in sys.stdin:       # loop over standard input
        # split the line into words
        words = line.split()
        for word in words:       # loop over words
            print word, '\t', 1  # print tab-separated key, value pairs
39 Reducer: Checking for key changes
Often reducers will have to detect when the key changes in the sorted mapper output:
    prevkey = None
    for line in inputreader:
        key = line[0]
        # if the current key is the same as the previous key
        if key == prevkey:
            value = ...  # update for current line
        # else we have started a new group of keys
        else:
            # if not the first line of input
            if not prevkey == None:
                # completed an entire key, print its value
                print prevkey, '\t', value
            value = ...  # set for first entry of new key
            # set prevkey to the current key
            prevkey = key
    # output the final key, value pair
    if prevkey:
        print prevkey, '\t', value
40 reducer.py
    #!/usr/bin/env python
    import sys

    prevword = None
    wordcount = 0
    word = None
    for line in sys.stdin:
        word, count = line.split('\t', 1)
        count = int(count)
        if word == prevword:
            wordcount += count
        else:
            if prevword:
                print prevword, '\t', wordcount
            wordcount = count
            prevword = word
    if prevword:
        print prevword, '\t', wordcount
41 Testing map and reduce scripts It is useful to test your scripts with a small amount of data in serial to check for syntax errors:
$ head -100 mydata.txt | ./mapper.py | sort | ./reducer.py
42 Exercise 3: Word count
Place the directory montgomery into HDFS:
$ hdfs dfs -put montgomery
Submit a MapReduce job with your *tested* scripts to count the word frequencies in Lucy Maud Montgomery's books:
$ hadoop jar $HADOOP_STREAM \
    -files mapper_wordcount.py,reducer_wordcount.py \
    -mapper mapper_wordcount.py \
    -reducer reducer_wordcount.py \
    -input montgomery \
    -output wordcount
43 Exercise 3: Word count
View the output directory:
$ hdfs dfs -ls wordcount
View your results:
$ hdfs dfs -cat wordcount/part-0000*
View your (sorted) results:
$ hdfs dfs -cat wordcount/part-0000* | sort -k 2 -n
44 Storage A Hadoop cluster has 10 nodes with 300GB of storage per node with the default HDFS setup (replication factor 3, 64MB blocks). Alice wants to upload two 400GB files and run WordCount on them both. What will happen? A) The data upload fails at the first file B) The data upload fails at the second file C) MapReduce job fails D) None of the above. MapReduce is successful. 44
45 Storage A Hadoop cluster has 10 nodes with 300GB of storage per node with the default HDFS setup (replication factor 3, 64MB blocks). Alice wants to upload two 400GB files and run WordCount on them both. What will happen? A) The data upload fails at the first file B) The data upload fails at the second file C) MapReduce job fails D) None of the above. MapReduce is successful. (Answer: D - 800GB fits within the ~1.0TB of usable capacity)
46 Storage A Hadoop cluster has 10 nodes with 300GB of storage per node with the default HDFS setup (replication factor 3, 64MB blocks). Alice wants to upload three 400GB files and run WordCount on them all. What will happen? A) The data upload fails at the first file B) The data upload fails at the second file C) The data upload fails at the third file D) MapReduce job fails E) None of the above. MapReduce is successful. 46
47 Storage A Hadoop cluster has 10 nodes with 300GB of storage per node with the default HDFS setup (replication factor 3, 64MB blocks). Alice wants to upload three 400GB files and run WordCount on them all. What will happen? A) The data upload fails at the first file B) The data upload fails at the second file C) The data upload fails at the third file D) MapReduce job fails E) None of the above. MapReduce is successful. (Answer: C) Explanation: The small cluster can only store 1.0TB of data with 3X replication! Alice wants to upload 1.2TB.
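A quick check of the arithmetic behind both storage questions (plain Python; cluster numbers from the slides):

    nodes, per_node_gb, replication = 10, 300, 3
    usable_gb = nodes * per_node_gb / replication  # 1000 GB of usable capacity
    print(2 * 400 <= usable_gb)  # True: two 400GB files fit
    print(3 * 400 <= usable_gb)  # False: the third 400GB file cannot be stored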
48 Simplifying MapReduce Commands
The native streaming commands are cumbersome. TIP: Create simplifying aliases and functions, e.g.
    mapreduce_stream(){
        hadoop jar $HADOOP_STREAM -files $1,$2 \
            -mapper $1 \
            -reducer $2 \
            -input $3 -output $4
    }
    alias mrs=mapreduce_stream
Place these commands into ~/.bashrc so they are executed in each new bash session (each login). To avoid confusion, we will only use the native commands today.
49 Exercise 4: Hadoop Web UI Hadoop includes a web-based user interface Launch a firefox window $ firefox & Navigate to the Hadoop Job Monitor Navigate to the namenode and filesystem Navigate to the job history 49
50 [Screenshot: Hadoop web UI]
51 Accessing Job Logs Through the web UI, or through the command line:
$ yarn logs -applicationId application_ _
52 Example: Distributed Grep
Note: We don't have to write any scripts! Note: There is no reducer phase.
Hadoop command:
$ hadoop jar $HADOOP_STREAM \
    -D mapreduce.job.reduces=0 \
    -D mapred.reduce.tasks=0 \
    -input titanic.txt \
    -output grepout \
    -mapper '/bin/grep Williams'
View results:
$ hdfs dfs -cat grepout/part-0000*
53 Household Power Consumption Dataset: household_power_consumption.txt From the UCI Machine Learning Repository 9 Columns, semicolon separated (see household_power_consumption.explain) 1. date: dd/mm/yyyy 2. time: hh:mm:ss 3. minute-averaged active power (kilowatts) 4. minute-averaged reactive power (kilowatts) 5. minute-averaged voltage (volts) 6. minute-averaged current (amps) 7. kitchen active energy (watt-hours) 8. laundry active energy (watt-hours) 9. water-heater and A/C active energy (watt-hours) $ hdfs dfs -put household_power_consumption.txt 53
54 Problematic Input
In the household_power_consumption data, missing values are specified by '?'. Analysts must decide how to deal with unexpected values in unstructured data. Today, we will ignore them:
    for line in sys.stdin:
        try:
            data = float(line.split(';')[2])
        except:
            continue
        ...
55 On which date was the maximum minute-averaged active power? What should the output of our mapper be? A) power 1 B) date 1 C) date power D) something else 55
56 On which date was the maximum minute-averaged active power? What should the output of our mapper be? A) power 1 B) date 1 C) date power D) something else 56
57 Working with .csv files in python
We can use the csv module in python to parse .csv files more easily:
    import sys, csv

    reader = csv.reader(sys.stdin, delimiter=';')
    for line in reader:
        data = float(line[2])
        ...
58 Exercise 5: Compute the maximum Write a mapper and a reducer to compute the maximum value of the minute-averaged active power (3rd column), as well as the date on which this power occurred 58
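If you are stuck, here is one possible shape of a solution (a sketch under assumed file names and formats, not the workshop's solutions/ version): the mapper emits a single shared key so that one reducer sees every candidate maximum.

mapper_max.py (sketch):

    #!/usr/bin/env python
    import sys, csv

    reader = csv.reader(sys.stdin, delimiter=';')
    for line in reader:
        try:
            power = float(line[2])        # minute-averaged active power
        except (ValueError, IndexError):
            continue                      # skip '?' and malformed lines
        # 'max' is an arbitrary shared key; the value carries power and date
        print('%s\t%s;%s' % ('max', power, line[0]))

reducer_max.py (sketch):

    #!/usr/bin/env python
    import sys

    best_power, best_date = None, None
    for line in sys.stdin:
        key, value = line.split('\t', 1)
        power, date = value.strip().split(';')
        power = float(power)
        if best_power is None or power > best_power:
            best_power, best_date = power, date
    print('%s\t%s' % (best_date, best_power))

Because the single shared key funnels every value through one reducer, this is exactly the workload imbalance the combiner discussion on the next slides addresses.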
59 Combiners To compute the max, you may have output a list of all values with a single common key and had a single reducer compute the maximum value in serial. We would like to do some pre-reduction on the mapper nodes to shift workload from the reducer to the mappers. To find the maximum, we only need to send the maximum from each mapper through the network, not every value.
60 Combiners (maximum) [Diagram: raw semicolon-separated readings (e.g. 20/12/2006;02:46:00;1.516;0.262;...) pass through Map to date, power pairs; a Combine step keeps only each mapper's local maximum, so far fewer pairs go through the shuffle and sort to the reducer]
61 Combiners To compute the maximum, the reducer and the combiner can be the same script: max() is associative and commutative. max([a,b]) = max([b,a]); max([a, max([b,c])]) = max([max([a,b]), c])
62 Combiners To test your combiner scripts:
$ cat data.txt | ./mapper.py | sort | ./combiner.py | ./reducer.py
Hadoop sorts locally before the combiner and globally between the combiner and reducer. Note that Hadoop does not guarantee how many times the combiner will be run.
63 Combiners What is the benefit of reducing the number of keyvalue pairs sent to the reducer? A) The amount of work done in the Map phase is reduced B) The amount of work done in the Reduce phase is reduced C) The amount of data sent through the network is reduced D) More than one of the above E) None of the above 63
64 Combiners What is the benefit of reducing the number of keyvalue pairs sent to the reducer? A) The amount of work done in the Map phase is reduced B) The amount of work done in the Reduce phase is reduced C) The amount of data sent through the network is reduced D) More than one of the above E) None of the above 64
65 Histograms Histograms 'bin' continuous data into discrete ranges. [Image: example histogram; credit Mwtoews]
66 Exercise 6: Histogram
Write a mapper that uses round(power) to 'bin' the minute-averaged active power readings (3rd column). Output for each reading: [power bin], 1
Write a reducer that creates combined counts. Input: [power bin], count. Output: [power bin], count. This script must also function as a combiner (see the sketch below).
Submit your tested scripts as a Hadoop job, using the reducer script as a combiner:
$ hadoop jar $HADOOP_STREAM ... -combiner reducer_hist.py ...
You may generate a plot (plotting is outside the scope of the workshop):
$ hdfs dfs -cat histogram/part-0000* | solutions/plot_hist.py
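A sketch of what the mapper could look like (file layout as on slide 53; this is an assumption-laden sketch, not the official solution):

    #!/usr/bin/env python
    import sys, csv

    reader = csv.reader(sys.stdin, delimiter=';')
    for line in reader:
        try:
            power = float(line[2])       # minute-averaged active power
        except (ValueError, IndexError):
            continue                     # skip '?' and malformed lines
        print('%d\t%d' % (int(round(power)), 1))

Note that the wordcount reducer from slide 40 already has the right shape for the reducer/combiner here: it simply sums the counts for each key, and summing is associative and commutative.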
67 Histogram [Plot: histogram of the binned minute-averaged active power readings]
68 Mean and Standard Deviation
We can't easily use combiners to compute the mean:
max(max(a,b), max(c,d,e)) = max(a,b,c,d,e), but
mean(mean(a,b), mean(c,d,e)) != mean(a,b,c,d,e)
A reducer function can be used as a combiner if it is associative ((A*B)*C = A*(B*C)) and commutative (A*B = B*A), e.g. counting, addition, multiplication, ... Computing the mean and standard deviation otherwise leaves the reducer stuck with a lot of math. Combiner idea for the mean: key = intermediate sum, value = number of entries (see the sketch below).
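A plain-Python sketch of the 'intermediate sums' idea (a common variant that carries count, sum, and sum of squares so the standard deviation also falls out; merging partials this way is associative, unlike merging means):

    import math

    def merge(a, b):
        # each partial is (count, sum_x, sum_x2); merging is associative
        return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

    def finalize(partial):
        n, s, s2 = partial
        mean = s / float(n)
        return mean, math.sqrt(s2 / float(n) - mean * mean)

    left = (2, 3.0, 5.0)     # partial over [1.0, 2.0]
    right = (3, 12.0, 50.0)  # partial over [3.0, 4.0, 5.0]
    print(finalize(merge(left, right)))  # same mean/stddev as one pass over all five

This naive sum-of-squares formula can lose precision on large data sets; the Wikipedia article cited in Exercise 7 gives numerically stabler single-pass variants.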
69 Power consumption by day of week Are there days of the week when more power is consumed, on average? Want to know the mean and standard deviation for each week day Simplification: Compute average of minuteaveraged powers, grouped by day of week 69
70 Python datetime
The datetime module is a powerful library for working with dates and times. We can easily find the day of the week from the date:
    from datetime import datetime
    weekday = datetime.strptime(date, "%d/%m/%Y").weekday()
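A quick sanity check of the pattern (date string taken from the power data set):

    from datetime import datetime

    weekday = datetime.strptime('16/12/2006', '%d/%m/%Y').weekday()
    print(weekday)  # 5 -> Saturday (Monday is 0)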
71 Exercise 7: Mean and Standard Deviation Write mapper and reducer code to compute the mean and standard deviation for active power (3rd column) for each of the seven days of the week. Test your scripts using serial Bash commands, then submit your job to Hadoop. Tip: the Wikipedia article 'Algorithms for calculating variance' includes Python code to compute the mean and variance in a single pass.
72 Speedup - Mean and St.Dev.
~2 million entries. Serial Bash version (75 seconds):
$ cat household_power_consumption.txt | ./mapper.py | sort | ./reducer.py
Hadoop version:
2 mappers, 1 reducer: 50 seconds (speedup: 1.5X)
2 mappers, 2 reducers: 48 seconds (speedup: 1.6X) (-D mapred.reduce.tasks=2)
4 mappers, 4 reducers: 29 seconds (speedup: 2.6X)
73 Choosing numbers of maps/reduces Mappers More mappers increases parallelism Too many mappers increases scheduling overhead Hadoop automatically sets the number of mappers according to the block size and input data size Reducers Too few reducers increases the computational load on each reducer Too many reducers increases shuffle and HDFS overhead Rule of thumb: Each reducer should process 1-10GB of data 73
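A back-of-envelope sketch of the first rule above, that Hadoop launches roughly one mapper per HDFS block (sizes invented for illustration):

    data_gb, block_mb = 100, 64
    n_mappers = data_gb * 1024 // block_mb  # ~1600 map tasks for 100 GB of input
    print(n_mappers)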
74 Iterative MapReducing Many tasks in scientific computing cannot be easily expressed as a single MapReduce job Often, we require iterating over data K-means clustering is an example We will see how it can be implemented in MapReduce We will not implement it, just see how it works with the MapReduce framework A MapReduce K-means clustering is implemented in Mahout (scalable machine learning algorithms) 74
75 K-means clustering Unsupervised machine learning Divide a data set into k different categories based on the features of that data set Computational hotspot: computing distances between each cluster centroid and each data point, O(n*k) E.g. Clothing manufacturer: based on customer's height and weight data, divide them into 3 or more size categories E.g. Categorize astronomical objects into stars, galaxies, quasars, etc. based on spectral data E.g. Categorize gene expression profiles to study function within similar expressions 75
76 K-means clustering Step 1: Randomly generate K locations (circles). Step 2: Group data points by proximity to the locations. Step 3: Update the locations to the centroid of each group. Iterate over steps 2 and 3. [Images: I, Weston.pace]
77 MapReduce K-Means [Flow diagram: data points and current centroid locations feed the mappers, which calculate centroid distances and assign each data point to its nearest centroid (key: best centroid, value: data point); the reducers compute new centroids (key: old centroid, value: new centroid); if not converged, iterate with the new centroids; otherwise, output the final centroid locations]
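Since we are only looking at how k-means fits the framework (not implementing it), here is a toy sketch of the map step alone, with 2-D points and centroids invented for illustration:

    def nearest(point, centroids):
        # squared Euclidean distance to each centroid; the minimum is the assignment
        return min(centroids, key=lambda c: sum((p - q) ** 2 for p, q in zip(point, c)))

    centroids = [(0.0, 0.0), (5.0, 5.0)]
    for point in [(0.5, 1.0), (4.0, 6.0), (1.0, 0.0)]:
        print('%s\t%s' % (nearest(point, centroids), point))  # key: best centroid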
78 Iterative MapReducing To make iterative jobs easier, the Hadoop ecosystem has tools for iterative workloads Twister - Iterative MapReduce framework HaLoop - Iterative MapReduce framework Mahout - Scalable implementations of machine learning algorithms on Hadoop (including k-means) Spark - Framework for in-memory distributed computing, in-memory data sharing between jobs Make use of high-level interfaces to MapReduce for more complex jobs Pig, Mahout, etc. 78
79 What questions do you have? 79
80 In the time remaining... Import your own data into HDFS for analysis Your quota is 300GB (after replication) by default Examine some data from Twitter /software/workshop/twitter_data 3.8 million tweets + metadata ~ 11 GB Continue to work with the workshop data sets titanic.txt household_power_consumption.txt usask_access_log Contact us to add your user account to the Hadoop test environment (class accounts deactivate later today) [email protected] 80
81 Keep Learning... Contact us for access to our Hadoop test system Download a Hadoop virtual machine View online training materials Calcul Quebec workshop on Apache Spark (French) 81
82 Bonus Exercise: Top 10 websites Produce a top 10 list of websites accessed on the University of Saskatchewan website (usask_access_logs). Be careful: some lines won't conform to your expectations. How to handle them? Skip? Exception?
83 Top 10 List Mapper output: key 1 Combiner returns top 10 for each mapper output: key count Reducer finds the global top 10 output: key count 83
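A sketch of the 'local top 10' combiner/reducer following the slide's scheme (the key\tcount input format and heapq approach are assumptions, not the official solution; note that taking per-mapper top 10s is an approximation, since a key just below every local top 10 could still rank globally):

    #!/usr/bin/env python
    import sys, heapq

    counts = {}
    for line in sys.stdin:
        key, count = line.split('\t', 1)
        counts[key] = counts.get(key, 0) + int(count)

    # emit only the 10 largest counts; the single final reducer repeats this globally
    for key, count in heapq.nlargest(10, counts.items(), key=lambda kv: kv[1]):
        print('%s\t%d' % (key, count))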