Data Intensive Computing Handout 5 Hadoop
Hadoop is installed in the /HADOOP directory. The JobTracker web interface and the NameNode web interface are available on the cluster. To conveniently access the web interfaces remotely, you can use

ssh -D localhost:2020 -p ufallab.ms.mff.cuni.cz

to open a SOCKS5 proxy server forwarding requests to the remote site, and use it as a proxy server in a browser, for example as

chromium --proxy-server=socks://localhost:2020

Simple Tokenizer

We will commonly need to split a given text into words (called tokens). You can do so easily by using the function wordpunct_tokenize from the nltk.tokenize package, i.e. using the following import line at the beginning of your program:

from nltk.tokenize import wordpunct_tokenize
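For illustration, a minimal sketch of what the tokenizer produces (the sample sentence is made up); note that punctuation comes out as separate tokens:

from nltk.tokenize import wordpunct_tokenize

text = "Hadoop, Dumbo and NLTK work together."  # made-up sample sentence
print([token.lower() for token in wordpunct_tokenize(text)])
# ['hadoop', ',', 'dumbo', 'and', 'nltk', 'work', 'together', '.']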
Dumbo

Dumbo is one of several Python APIs to Hadoop. It utilizes Hadoop Streaming, the simplest of the Hadoop language bindings. Overview of features:

- always a mapper, maybe a reducer and a combiner
- mapper, reducer and combiner can be implemented either as a function or as a class with __init__, __call__ and possibly cleanup methods. Note that both __call__ and cleanup must return either a list or a generator yielding the results. Especially, if you do not want to return anything, you have to return [] (that is a Dumbo bug, as it could be fixed easily).
- parameters through self.params and -param
- passing files using -file or -cachefile
- counters
- multiple iterations with any non-circular dependencies

Simple Grep Example

All mentioned examples are available in /home/straka/examples.

grep_ab.py

import dumbo

def mapper(key, value):
    if key.startswith("Ab"):
        yield key, value.replace("\n", " ")

dumbo.run(mapper)

grep_ab.sh

dumbo start grep_ab.py -input /data/wiki/cs-medium -output /users/$USER/grep_ab -outputformat text -overwrite yes

Running Dumbo

Run the above script using dumbo start script.py options. Available options:

-input input_hdfs_path: input path to use
-inputformat [auto | sequencefile | text | keyvaluetext]: input format to use, auto is the default
-output output_hdfs_path: output path to use
-outputformat [sequencefile | text]: output format to use, sequencefile is the default
-nummaptasks n: set the number of map tasks to the given number
-numreducetasks n: set the number of reduce tasks to the given number. Zero is allowed (and is the default if no reducer is specified); only mappers are executed in that case.
-file local_file: file to be put in the directory where the Python program gets executed
-cachefile HDFS_path#link_name: create a link link_name in the directory where the Python program gets executed
-param name=value: parameter available in the Python script as self.params["name"]
-hadoop hadoop_prefix: default is /HADOOP
-name hadoop_job_name: default is script.py
-mapper hadoop_mapper: Java class to use as mapper instead of the mapper in script.py
-reducer hadoop_reducer: Java class to use as reducer instead of the reducer in script.py

Dumbo HDFS Commands

dumbo cat HDFS_path [-ascode=yes]: convert the given file to text and print it
dumbo ls HDFS_path
dumbo exists HDFS_path
dumbo rm HDFS_path
dumbo put local_path HDFS_path
dumbo get HDFS_path local_path
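A typical sequence of these commands when working with a job might look as follows (a sketch only; the paths are illustrative):

# upload a local file to HDFS and check it arrived
dumbo put stoplist.txt /users/$USER/stoplist.txt
dumbo ls /users/$USER

# after a job finishes, inspect and download its output
dumbo exists /users/$USER/grep_ab
dumbo cat /users/$USER/grep_ab
dumbo get /users/$USER/grep_ab grep_ab_output

# remove an old output directory
dumbo rm /users/$USER/grep_ab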
Grep Example

Parameters can be passed to mappers and reducers using the -param name=value Dumbo option and accessed using the self.params dictionary. Also note the class version of the mapper, using the constructor __init__ and the mapper method __call__. Reducers can be implemented similarly.

grep.py

import dumbo
import re

class Mapper:
    def __init__(self):
        self.re = re.compile(self.params.get("pattern", ""))

    def __call__(self, key, value):
        if self.re.search(key):
            yield key, value.replace("\n", " ")

dumbo.run(Mapper)

grep.sh

dumbo start grep.py -input /data/wiki/cs-medium -output /users/$USER/grep -outputformat text -param pattern="^H" -overwrite yes

Simple Word Count

Reducers are similar to mappers, and can also be specified either using a method or a class. An optional combiner (third parameter of dumbo.run) can be specified too.

wordcount.py

import dumbo
from nltk.tokenize import wordpunct_tokenize

def mapper(key, value):
    # emit every token of the article text with count 1
    for word in wordpunct_tokenize(value):
        yield word, 1

def reducer(key, values):
    yield key, sum(values)

dumbo.run(mapper, reducer, reducer)

wordcount.sh

dumbo start wordcount.py -input /data/wiki/cs-medium -output /users/$USER/wordcount -outputformat text -overwrite yes
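Because the mapper and reducer are ordinary Python generators, they can be exercised directly without Hadoop, which is handy for quick sanity checks. A minimal sketch, assuming the lines are appended to wordcount.py (the sample input is made up):

print(list(mapper("Some article", "Hello, hello world")))
# [('Hello', 1), (',', 1), ('hello', 1), ('world', 1)]

print(list(reducer("hello", [1, 1, 1])))
# [('hello', 3)]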
Efficient Word Count Example

A more efficient word count is obtained when the counts in the processed block are stored in an associative array (a more efficient version of a local reducer). To that end, the cleanup method can be used nicely. Note that we have to return [] in __call__. That is caused by the fact that Dumbo iterates over the results of a __call__ invocation and unfortunately does not handle a None return value.

wc_effective.py

import dumbo
from nltk.tokenize import wordpunct_tokenize

class Mapper:
    def __init__(self):
        self.counts = {}

    def __call__(self, key, value):
        for word in wordpunct_tokenize(value):
            self.counts[word] = self.counts.get(word, 0) + 1
        return []  # Method __call__ has to return the (key, value) pairs.
                   # Unfortunately, NoneType is not handled in Dumbo.

    def cleanup(self):
        for word, count in self.counts.iteritems():
            yield word, count

class Reducer:
    def __call__(self, key, values):
        yield key, sum(values)

dumbo.run(Mapper, Reducer)

wc_effective.sh

dumbo start wc_effective.py -input /data/wiki/cs-medium -output /users/$USER/wc_effective -outputformat text -overwrite yes

Word Count with Counters

User counters can be collected by Hadoop using the self.counters object.

wc_counters.py

import dumbo
from nltk.tokenize import wordpunct_tokenize

def mapper(key, value):
    for word in wordpunct_tokenize(value):
        yield word, 1

class Reducer:
    def __call__(self, key, values):
        total = sum(values)
        counter = "Key occurrences " + (str(total) if total < 10 else "10 or more")
        self.counters[counter] += 1
        yield key, total

dumbo.run(mapper, Reducer)

wc_counters.sh

dumbo start wc_counters.py -input /data/wiki/cs-medium -output /users/$USER/wc_counters -outputformat text -overwrite yes
Word Count using Stop List

Sometimes customization using -param is not enough; instead, a whole file should be used to customize the mapper or reducer. Consider for example the case where word count should ignore given words. This task can be solved by using -param to specify the file with words to ignore and by -file or -cachefile to distribute the file in question with the computation.

wc_excludes.py

import dumbo
from nltk.tokenize import wordpunct_tokenize

class Mapper:
    def __init__(self):
        file = open(self.params["excludes"], "r")
        self.excludes = set(line.strip() for line in file)
        file.close()

    def __call__(self, key, value):
        for word in wordpunct_tokenize(value):
            if not (word in self.excludes):
                yield word, 1

def reducer(key, values):
    yield key, sum(values)

dumbo.run(Mapper, reducer, reducer)

wc_excludes.sh

dumbo start wc_excludes.py -input /data/wiki/cs-medium -output /users/$USER/wc_excludes -outputformat text -param excludes=stoplist.txt -file stoplist.txt -overwrite yes
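The stop list could also be distributed via -cachefile instead of -file, which is convenient when the file already lives in HDFS. A sketch under that assumption (the HDFS path is illustrative; the link name after # is what the mapper opens):

dumbo put stoplist.txt /users/$USER/stoplist.txt

dumbo start wc_excludes.py -input /data/wiki/cs-medium -output /users/$USER/wc_excludes -outputformat text -param excludes=stoplist.txt -cachefile /users/$USER/stoplist.txt#stoplist.txt -overwrite yes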
Multiple Iterations Word Count

Dumbo can execute multiple iterations of MapReduce. In the following artificial example, we first create a lowercase variant of the values and then filter out words not matching a given pattern and count the occurrences of the remaining ones.

wc_2iterations.py

import dumbo
import re
from nltk.tokenize import wordpunct_tokenize

class LowercaseMapper:
    def __call__(self, key, value):
        yield key, value.decode("utf-8").lower().encode("utf-8")

class GrepMapper:
    def __init__(self):
        self.re = re.compile(self.params.get("pattern", ""))
    def __call__(self, key, value):
        for word in wordpunct_tokenize(value):
            if self.re.search(word):
                yield word, 1

def reducer(key, values):
    yield key, sum(values)

def runner(job):
    job.additer(LowercaseMapper)
    job.additer(GrepMapper, reducer)

dumbo.main(runner)

wc_2iterations.sh

dumbo start wc_2iterations.py -input /data/wiki/cs-medium -output /users/$USER/wc_2iterations -outputformat text -param pattern=h -overwrite yes

Non-Trivial Dependencies Between Iterations

The MapReduce iterations can depend on the output of arbitrary iterations (as long as the dependencies do not form a cycle, of course). This can be specified using the input parameter of additer as follows.

wc_dag.py

import dumbo
import re
from nltk.tokenize import wordpunct_tokenize

class LowercaseMapper:
    def __call__(self, key, value):
        yield key, value.decode("utf-8").lower().encode("utf-8")

class FilterMapper1:
    def __init__(self):
        self.re = re.compile(self.params.get("pattern1", ""))
    def __call__(self, key, value):
        for word in wordpunct_tokenize(value):
            if self.re.search(word):
                yield word, 1

class FilterMapper2:
    def __init__(self):
        self.re = re.compile(self.params.get("pattern2", ""))
    def __call__(self, key, value):
        for word in wordpunct_tokenize(value):
            if self.re.search(word):
                yield word, 1

class IdentityMapper:
    def __call__(self, key, value):
        yield key, value

def reducer(key, values):
    yield key, sum(values)

def runner(job):
    lowercased = job.additer(LowercaseMapper)  # implicit input = job.root
    filtered1 = job.additer(FilterMapper1, input=lowercased)
    filtered2 = job.additer(FilterMapper2, input=lowercased)
    job.additer(IdentityMapper, reducer, input=[filtered1, filtered2])

dumbo.main(runner)

wc_dag.sh

dumbo start wc_dag.py -input /data/wiki/cs-medium -output /users/$USER/wc_dag -outputformat text -param pattern1=h -param pattern2=i -overwrite yes

Execute Hadoop Locally

Using dumbo-local instead of dumbo, you can run the Hadoop computation locally using one mapper and one reducer. The standard error of the Python script is available in that case.
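For example, the word count example can be debugged locally simply by swapping the command (a sketch, assuming dumbo-local accepts the same options as dumbo):

dumbo-local start wordcount.py -input /data/wiki/cs-medium -output /users/$USER/wordcount -outputformat text -overwrite yes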
Wikipedia Data

The Wikipedia data available from /dlrc_share/data/wiki/ are also available in HDFS:

/data/wiki/cs: Czech Wikipedia data (Sep 2009), 195MB, 124k articles
/data/wiki/en: English Wikipedia data (Sep 2009), 4.9GB, 2.9M articles

All data is stored in record-compressed sequence files, with article names as keys and article texts as values, in UTF-8 encoding.

Tasks

Solve the following tasks. The solution for each task is a Dumbo Python source processing the Wikipedia source data and producing the required results.

dumbo_unique_words (2 points): Create a list of unique words used in the articles using Dumbo. Convert them to lowercase to ignore case. Use wordpunct_tokenize as a tokenizer.

article_initials (2 points): Run a Dumbo job which uses counters to count the number of articles according to their first letter, ignoring the case and merging all non-Czech initials.

dumbo_inverted_index (2 points): Compute an inverted index in Dumbo: for every lowercased word from the articles, compute (article name, ascending positions of occurrences as word indices) pairs. Use wordpunct_tokenize as a tokenizer. The output should be a file with one word on a line in the following format:
word \t article name \t space separated occurrences...
You will get 2 additional points if the articles are numbered using consecutive integers. In that case, the output consists of ascending (article id, ascending positions of occurrences as word indices) pairs, together with a file containing the list of articles representing this mapping (the article on line i is the article with id i).

no_references (3 points): An article A is said to reference article B if it contains B as a token (ignoring case). Run a Dumbo job which counts for each article how many references to the given article exist (summing all references in a single article). You will get one extra point if the result is sorted by the number of references (you are allowed to use 1 reducer in the sorting phase). Use wordpunct_tokenize as a tokenizer.
dumbo_wordsim_index (4 points): In order to implement word similarity search, compute for each form with at least three occurrences all contexts in which it occurs, including their numbers of occurrences. List the contexts in ascending order. Given N (either 1, 2, 3 or 4), the context of a form occurrence is the N forms preceding this occurrence and the N forms following this occurrence (ignore sentence boundaries, use empty words when article boundaries are reached). Use wordpunct_tokenize as a tokenizer. The output should be a file with one form on a line in the following format:
form \t context \t counts...

dumbo_wordsim_find (4 points): Let S be a given natural number. Using the index created in dumbo_wordsim_index, find for each form the S most similar forms. The similarity of two forms A and B is computed using the cosine similarity (C_A . C_B) / (|C_A| |C_B|), where C_F is the vector of occurrence counts of the contexts of form F. The output should be a file with one form on a line in the following format:
form \t most similar form \t cosine similarity...
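To make the similarity measure concrete, a small sketch of the cosine computation over two context-count dictionaries (the context vectors are made up; this only illustrates the formula, not the task itself):

import math

def cosine(a, b):
    # a, b: dictionaries mapping a context to its number of occurrences
    dot = sum(count * b.get(context, 0) for context, count in a.iteritems())
    norm_a = math.sqrt(sum(count * count for count in a.itervalues()))
    norm_b = math.sqrt(sum(count * count for count in b.itervalues()))
    return dot / (norm_a * norm_b)

print(cosine({"the _ sat": 3, "a _ ran": 1}, {"the _ sat": 1, "a _ ran": 2}))
# 0.7071... for these made-up context counts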