CSE-E5430 Scalable Cloud Computing Lecture 3


1 CSE-E5430 Scalable Cloud Computing Lecture 3

Keijo Heljanko
Department of Computer Science, School of Science, Aalto University

2 Writing Hadoop Jobs

Example: Assume we want to count the words in a file using Apache Hadoop (see also: Tutorial 2). What we need to write in Java:
- Code to include the needed Hadoop Java libraries
- Code for the Mapper
- Code for the Reducer
- (Optionally: Code for the Combiner)
- Code for the main method for Job configuration and submission

3 WordCount: Including Hadoop Java Libraries

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

4 WordCount: Including Hadoop Java Libraries

Most of the org.apache.hadoop.* libraries listed are needed by practically all Hadoop jobs and can be copied directly from the examples.

5 WordCount: Including Hadoop Java Libraries

IntWritable is the Hadoop variant of the Java Integer type, and Text is the Hadoop variant of the Java String class. In addition, LongWritable is the Hadoop variant of the Java Long type, FloatWritable is the Hadoop variant of the Java Float type, etc. The Hadoop types must be used instead of the Java native types for both Mapper and Reducer keys and values.
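As a minimal sketch (not from the slides; it reuses the imports shown above), converting between the Hadoop types and the Java native types goes through their get()/set() and toString() methods:

    // Wrapping Java native values into Hadoop types and unwrapping them again.
    // Uses org.apache.hadoop.io.IntWritable and org.apache.hadoop.io.Text.
    IntWritable count = new IntWritable(42);
    int n = count.get();            // IntWritable -> int
    count.set(n + 1);               // reuse the same IntWritable with a new value

    Text word = new Text("kalevala");
    String s = word.toString();     // Text -> String
    word.set("sampo");              // reuse the same Text object with new contents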

6 WordCount: Code for the Mapper

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

7 WordCount: Code for the Mapper

The line public class WordCount tells that we are defining the WordCount class, to which the Mapper, the Reducer, and the main method all belong. The four parameters of the Mapper in extends Mapper<Object, Text, Text, IntWritable> are the types of the input key, input value, output key, and output value, respectively. The key argument is not used by the WordCount Mapper.

8 WordCount: Code for the Mapper

Remember that Hadoop keys and values are of the Hadoop internal types Text, IntWritable, LongWritable, FloatWritable, etc., and code is needed to convert between them and the Java native types. Because of this, the Hadoop objects for the Mapper output are created up front:
- The code private final static IntWritable one = new IntWritable(1) creates an IntWritable constant 1
- The code private Text word = new Text() creates a Text object to be used as the output key object of the Mapper

9 WordCount: Code for the Mapper

The Mapper method is defined as public void map(Object key, Text value, Context context), where Context is an object used to write the output of the Mapper. The Java code StringTokenizer itr = new StringTokenizer(value.toString()) converts the Text value (a line of text) into a Java String and returns an iterator itr over the words of that line. The code word.set(itr.nextToken()) overwrites the current contents of the Text object word with the next String returned by the iterator itr. The code context.write(word, one) outputs the parsed word as the key and the constant 1 as the value, giving an output pair (word, 1) of type (Text, IntWritable).
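As a self-contained sketch (plain Java, no Hadoop; the class name and input line are assumed for illustration), this is what the tokenizing loop does to one line of input:

    import java.util.StringTokenizer;

    // Hypothetical stand-alone demo of the Mapper's parsing loop.
    public class TokenizeDemo {
      public static void main(String[] args) {
        // Split a line on whitespace, as the Mapper does with value.toString().
        StringTokenizer itr = new StringTokenizer("vaka vanha Vainamoinen vaka");
        while (itr.hasMoreTokens()) {
          // The Mapper would call context.write(word, one) here.
          System.out.println(itr.nextToken() + "\t1");
        }
      }
    }

Running it prints one (word, 1) pair per token; duplicates are not combined at this stage, so vaka is printed twice, once per occurrence.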

10 WordCount: Code for the Reducer

public static class IntSumReducer
     extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}

11 WordCount: Code for the Reducer

The four parameters of the Reducer in extends Reducer<Text, IntWritable, Text, IntWritable> are the types of the input key, input value, output key, and output value, respectively. The input types of the Reducer must match the output types of the Mapper.

12 WordCount: Code for the Reducer

The Reducer method is defined as public void reduce(Text key, Iterable<IntWritable> values, Context context), where Iterable<IntWritable> values is an iterator over values of the Mapper output value type, and context is an object used to write the output of the Reducer. The Reducer is called once per key and is given the list of values mapped to that key, to be iterated over using the values iterator.
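As an illustration (the input pairs are assumed, not from the slides), suppose the Mappers emitted the pairs (vaka,1), (vanha,1), (Vainamoinen,1), (vaka,1). After the shuffle the framework groups these by key, so conceptually the Reducer sees the following invocations:

    // Illustration only: conceptual Reducer calls after the shuffle phase.
    // reduce("Vainamoinen", [1],    context)  -> writes ("Vainamoinen", 1)
    // reduce("vaka",        [1, 1], context)  -> writes ("vaka", 2)
    // reduce("vanha",       [1],    context)  -> writes ("vanha", 1)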

13 WordCount: Code for the Reducer

The loop for (IntWritable val : values) iterates over the values mapped to the current key and sums them up. The code context.write(key, result) writes the Reducer output as a (word, count) pair of type (Text, IntWritable).

14 WordCount: Code for main

public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  Job job = Job.getInstance(conf, "word count");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(TokenizerMapper.class);
  job.setCombinerClass(IntSumReducer.class);
  job.setReducerClass(IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}
} // closes class WordCount

15 WordCount: Code for main

The code Job job = Job.getInstance(conf, "word count") creates a new MapReduce Job. The line job.setJarByClass(WordCount.class) names the class Hadoop uses to locate the jar file containing the job code. The Mapper is set with job.setMapperClass(TokenizerMapper.class). The (optional!) Combiner is set with job.setCombinerClass(IntSumReducer.class); note that this is sound only when the Reduce function is commutative and associative (addition is!). The Reducer is set with job.setReducerClass(IntSumReducer.class).
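To see why the soundness condition matters, consider a hedged example (the numbers are assumed, not from the slides): summing survives being applied partially per Mapper, but averaging does not.

    // Values for one key: [1, 2, 3, 4, 5, 6], split across two Mappers
    // as [1, 2] and [3, 4, 5, 6].
    //
    // Sum as Combiner (sound):
    //   combine: 1+2 = 3           3+4+5+6 = 18
    //   reduce:  3 + 18 = 21  ==  1+2+3+4+5+6
    //
    // Average as Combiner (unsound):
    //   combine: avg(1,2) = 1.5    avg(3,4,5,6) = 4.5
    //   reduce:  avg(1.5, 4.5) = 3.0  !=  avg(1..6) = 3.5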

16 WordCount: Code for main

The Java type system is not able to figure out the correct types of the output keys and values, so we have to specify them again (a configuration sketch follows this list):
- The code job.setOutputKeyClass(Text.class) sets the output key of both the Mapper and the Reducer to the Text type. If the Mapper has a different output key type from the Reducer, the Mapper key type can be specified separately using job.setMapOutputKeyClass()
- The code job.setOutputValueClass(IntWritable.class) sets the output value of both the Mapper and the Reducer to the IntWritable type. Similarly, job.setMapOutputValueClass() can be used to override this if the Mapper output value type is different
- Also because of these Java limitations, the compiler does not complain at compile time if the Mapper outputs and the Reducer inputs are of incompatible types
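As a minimal hedged sketch (MyMapper, MyReducer, and the type choices are assumed, not part of WordCount; job is the Job object from the main method), a job whose Mapper emits (Text, IntWritable) pairs while the Reducer outputs (Text, FloatWritable) pairs would declare both sets of types:

    // Hypothetical configuration with differing Mapper and Reducer output types.
    job.setMapperClass(MyMapper.class);
    job.setReducerClass(MyReducer.class);
    job.setMapOutputKeyClass(Text.class);           // Mapper output key type
    job.setMapOutputValueClass(IntWritable.class);  // Mapper output value type
    job.setOutputKeyClass(Text.class);              // Reducer output key type
    job.setOutputValueClass(FloatWritable.class);   // Reducer output value type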

17 WordCount: Code for main

The code FileInputFormat.addInputPath(job, new Path(args[0])) sets the input to be all of the files in the input directory given as the first command-line argument. The code FileOutputFormat.setOutputPath(job, new Path(args[1])) directs the output to the output directory given as the second command-line argument, one file per Reducer. The code System.exit(job.waitForCompletion(true) ? 0 : 1) submits the job to Hadoop and waits for the job to finish.

18 A Quick WordCount Demo: Starting Hadoop

$ cd WordCount/
$ cat ./start_commands
stop-all.sh
rm -rf /hadoop-2.7.1/dfs/
hadoop namenode -format
sleep 10
start-all.sh
echo "*** WARNING check here:"
echo "***
echo "*** that you have at least one datanode alive"
echo "*** before continuing"
echo "*** If datanode does not start then"
echo "*** run stop-all.sh and"
echo "*** delete dfs and tmp directories from /hadoop directory"

19 A Quick WordCount Demo: Starting Hadoop (cnt.)

$ ./start_commands
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
localhost: no namenode to stop
localhost: no datanode to stop
Stopping secondary namenodes [ ]
: no secondarynamenode to stop
stopping yarn daemons
no resourcemanager to stop
localhost: no nodemanager to stop
no proxyserver to stop
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/09/20 23:51:28 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop/
STARTUP_MSG: args = [-format]
STARTUP_MSG: version =
...

20 A Quick WordCount Demo: Starting Hadoop (cnt.)

...
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [ ]
: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-secondarynamenode-hadoop.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-resourcemanager-hadoop.out
localhost: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hadoop.out
*** WARNING check here:
***
*** that you have at least one datanode alive
*** before continuing
*** If datanode does not start then
*** run stop-all.sh and
*** delete dfs and tmp directories from /hadoop directory

21 A Quick WordCount Demo: Uploading Kalevala to HDFS

$ sudo apt-get install recode
$ wget
:33: Resolving ( , 2001:708:10:9::20:3
Connecting to ( :80... connected.
HTTP request sent, awaiting response OK
Length: 200872 (196K) [application/x-gzip]
Saving to: kalevala.txt.gz
:33:51 (828 KB/s) - kalevala.txt.gz saved [200872/200872]
$ gunzip kalevala.txt.gz
$ recode ISO UTF-8 kalevala.txt
$ ls -al kalevala.txt
-rw-r--r-- 1 keijoheljanko staff  Nov  kalevala.txt
$ hadoop fs -mkdir /input
$ hadoop fs -copyFromLocal kalevala.txt /input
$ hadoop fs -ls /input
Found 1 items
-rw-r--r-- 1 hadoop supergroup  :04 /input/kalevala.txt

22 A Quick WordCount Demo: Compiling WordCount

$ ls -al WordCount.java
-rw-r--r-- 1 hadoop hadoop 2089 Sep  8 00:51 WordCount.java
$ hadoop com.sun.tools.javac.Main WordCount.java
$ jar cf wc.jar WordCount*.class
$ ls -al wc.jar
-rw-rw-r-- 1 hadoop hadoop 3071 Sep 21 00:08 wc.jar

23 A Quick WordCount Demo: Running WordCount (1/2)

$ hadoop jar wc.jar WordCount /input /output
15/09/21 00:10:31 INFO client.RMProxy: Connecting to ResourceManager at / :
15/09/21 00:10:32 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not p
15/09/21 00:10:32 INFO input.FileInputFormat: Total input paths to process : 1
15/09/21 00:10:32 INFO mapreduce.JobSubmitter: number of splits:1
15/09/21 00:10:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_ _00
15/09/21 00:10:33 INFO impl.YarnClientImpl: Submitted application application_ _00
15/09/21 00:10:33 INFO mapreduce.Job: The url to track the job:
15/09/21 00:10:33 INFO mapreduce.Job: Running job: job_ _
15/09/21 00:10:42 INFO mapreduce.Job: Job job_ _0001 running in uber mode : false
15/09/21 00:10:42 INFO mapreduce.Job: map 0% reduce 0%
15/09/21 00:10:49 INFO mapreduce.Job: map 100% reduce 0%
15/09/21 00:10:57 INFO mapreduce.Job: map 100% reduce 100%
15/09/21 00:10:57 INFO mapreduce.Job: Job job_ _0001 completed successfully
...

24 A Quick WordCount Demo: Running WordCount (2/2)

...
File System Counters
        FILE: Number of bytes read=
        FILE: Number of bytes written=
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=
        HDFS: Number of bytes written=
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
...
File Input Format Counters
        Bytes Read=
File Output Format Counters
        Bytes Written=

25 A Quick WordCount Demo: Downloading Output

$ hadoop fs -copyToLocal /output output
$ ls -al output
total 320
drwxrwxr-x 2 hadoop hadoop 4096 Sep 21 00:18 .
drwxrwxr-x 5 hadoop hadoop 4096 Sep 21 00:18 ..
-rw-r--r-- 1 hadoop hadoop    0 Sep 21 00:18 _SUCCESS
-rw-r--r-- 1 hadoop hadoop      Sep 21 00:18 part-r
$ head -15 output/part-r
"Ahto,   1
"Aik     1
"Aika    1
"Ain     1
"Aina    2
"Ainap   1
"Ainapa  1
"Aita    2
"Ajoa    1
"Akat    1
"Akatp   1
"Akka    5
"Ala     1
"Ampuisitko 1
"Anna    7
