CS 698: Special Topics in Big Data. Chapter 4: Big Data Analytics Platforms. Chase Wu, New Jersey Institute of Technology. Some of the slides have been provided through the courtesy of Dr. Ching-Yung Lin at Columbia University.

Reading Reference

Remind -- Apache Hadoop
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
The project includes these modules:
Hadoop Common: the common utilities that support the other Hadoop modules.
Hadoop Distributed File System (HDFS): a distributed file system that provides high-throughput access to application data.
Hadoop YARN: a framework for job scheduling and cluster resource management.
Hadoop MapReduce: a YARN-based system for parallel processing of large data sets.
http://hadoop.apache.org

Remind -- Hadoop-related Apache Projects
Ambari: a web-based tool for provisioning, managing, and monitoring Hadoop clusters. It also provides a dashboard for viewing cluster health and the ability to view MapReduce, Pig, and Hive applications visually.
Avro: a data serialization system.
Cassandra: a scalable multi-master database with no single points of failure.
Chukwa: a data collection system for managing large distributed systems.
HBase: a scalable, distributed database that supports structured data storage for large tables.
Hive: a data warehouse infrastructure that provides data summarization and ad hoc querying.
Mahout: a scalable machine learning and data mining library.
Pig: a high-level data-flow language and execution framework for parallel computation.
Spark: a fast and general compute engine for Hadoop data. Spark provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation.
Tez: a generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use cases.
ZooKeeper: a high-performance coordination service for distributed applications.

Common Use Cases for Big Data in Hadoop
Log Data Analysis: the most common use case; it fits the HDFS model (write once, read often) perfectly.
Data Warehouse Modernization
Fraud Detection
Risk Modeling
Social Sentiment Analysis
Image Classification
Graph Analysis
...and beyond.
D. deRoos et al., Hadoop for Dummies, John Wiley & Sons, 2014

Remind -- Hadoop Distributed File System (HDFS) http://hortonworks.com/hadoop/hdfs/

Remind -- MapReduce example http://www.alex-hanna.com

Setting Up the Hadoop Environment
Local (standalone) mode
Pseudo-distributed mode
Fully-distributed mode

Setting Up the Hadoop Environment: Pseudo-distributed mode
authorized_keys: used by the SSH server to store the public keys of clients for client authentication
known_hosts: used by the SSH client to store the public keys of servers for server authentication
http://hadoop.apache.org/docs/stable2/hadoop-project-dist/hadoop-common/singlenodesetup.html
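For reference, the passphrase-less SSH that pseudo-distributed mode needs boils down to a few standard OpenSSH commands (a sketch only; the key paths are the OpenSSH defaults, not something the slide prescribes):

```shell
# Pseudo-distributed mode needs password-free "ssh localhost".
# Create the client key pair (no passphrase) if it does not exist yet.
mkdir -p ~/.ssh && chmod 0700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# authorized_keys (server side): allow logins with this key.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
# known_hosts (client side) is filled in automatically the first
# time you run "ssh localhost" and accept the server's host key.
```

After this, `ssh localhost` should log in without a password, which the Hadoop start-up scripts rely on.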

Setting Up the Hadoop Environment: Pseudo-distributed mode

Data Storage Operations on HDFS
Hadoop is designed to work best with a modest number of extremely large files: average file sizes larger than 500 MB.
Write Once, Read Often model: the content of individual files cannot be modified, other than by appending new data at the end of the file.
What we can do:
Create a new file
Append content to the end of a file
Delete a file
Rename a file
Modify file attributes, such as the owner

HDFS blocks
A file is divided into blocks (default: 64 MB), each duplicated in multiple places (default: 3 replicas).
Dividing files into blocks is normal for a file system; e.g., the default block size in Linux is 4 KB. What distinguishes HDFS is the scale: Hadoop was designed to operate at the petabyte scale.
Every data block stored in HDFS has its own metadata and needs to be tracked by a central server.
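The block mechanics can be mimicked on an ordinary local file system (a toy analogy only; the `/tmp` paths and the 4-byte "block size" are invented for illustration):

```shell
# Toy analogy on the local file system (not HDFS): chop a file into
# fixed-size "blocks", the way HDFS chops files into 64 MB blocks.
printf 'ABCDEFGHIJ' > /tmp/bigfile      # a 10-byte "large" file
split -b 4 /tmp/bigfile /tmp/blk_       # 4-byte "block size"
ls /tmp/blk_*                           # -> /tmp/blk_aa /tmp/blk_ab /tmp/blk_ac
cat /tmp/blk_ac                         # the last block holds the 2-byte remainder: IJ
```

HDFS additionally records, for every block, which DataNodes hold its replicas; that per-block metadata is what the central NameNode tracks.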

HDFS blocks
Replication patterns of data blocks in HDFS: when HDFS stores the replicas of the original blocks across the Hadoop cluster, it tries to ensure that the block replicas are stored at different failure points.

HDFS is a User-Space-Level file system

Interaction between HDFS components

HDFS Federation
Before Hadoop 2.0, the NameNode was a single point of failure and an operational limitation; few Hadoop clusters were able to scale beyond 3,000 or 4,000 nodes.
In Hadoop 2.x, multiple NameNodes can be used (with the HDFS High Availability feature, one is in an Active state while the other is in a Standby state).
http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/hdfshighavailabilitywithnfs.html
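A minimal sketch of the hdfs-site.xml entries behind such an HA pair (the nameservice name "mycluster", the NameNode ids nn1/nn2, and all hostnames are placeholders; the property names follow the Apache HDFS High Availability documentation):

```xml
<!-- Sketch only; "mycluster", nn1/nn2, and hostnames are placeholders. -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>nn1-host:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>nn2-host:8020</value></property>
<!-- Shared edits log kept on a JournalNode quorum. -->
<property><name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value></property>
<!-- Automatic failover, coordinated through ZooKeeper. -->
<property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
```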

High Availability of the NameNodes
Active NameNode / Standby NameNode: the Standby keeps the state of the block locations and block metadata in memory and takes over the HDFS checkpointing responsibilities.
JournalNode: if a failure occurs, the Standby Node reads all completed journal entries to ensure that the new Active NameNode is fully consistent with the state of the cluster.
ZooKeeper: provides coordination and configuration services for distributed systems.

Data Compression in HDFS

Several useful commands for HDFS
All hadoop commands are invoked by the bin/hadoop script.
% hadoop fsck / -files -blocks
lists the blocks that make up each file in HDFS.
For HDFS, the scheme name is hdfs, and for the local file system, the scheme name is file. A file or directory in HDFS can be specified in a fully qualified way, such as: hdfs://namenodehost/parent/child or hdfs://namenodehost
The HDFS file system shell commands are similar to Linux file commands, with the following general syntax:
hdfs dfs -file_cmd
For instance, mkdir runs as:
$ hdfs dfs -mkdir /user/directory_name

Several useful commands for HDFS -- II

Ingesting Log Data -- Flume
Ingesting stream data

How Hadoop Works http://www.alex-hanna.com

Remind -- MapReduce Data Flow http://www.ibm.com/developerworks/cloud/library/cl-openstack-deployhadoop/

MapReduce Use Case Example: flight data
Data Source: the Airline On-time Performance data set (flight data set), containing the logs of all domestic flights from October 1987 to April 2008. Each record represents an individual flight, where various details are captured:
Time and date of arrival and departure
Originating and destination airports
Amount of time taken to taxi from the runway to the gate
Download it from Statistical Computing: http://stat-computing.org/dataexpo/2009/

Other datasets available from Statistical Computing: http://stat-computing.org/dataexpo/

Flight Data Schema

MapReduce Use Case Example: flight data
Count the number of flights for each carrier. Serial way (not MapReduce):
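A minimal stand-in for the serial approach, using a few fabricated records (in the real data set the carrier code, UniqueCarrier, is the 9th comma-separated field):

```shell
# Three fabricated flight records; column 9 holds the carrier code.
cat > /tmp/flights.csv <<'EOF'
1987,10,1,4,741,730,912,849,AA,1,,91,79,,,,ORD,DFW,802
1987,10,1,4,729,730,903,849,UA,2,,94,85,,,,ORD,SFO,1846
1987,10,2,5,741,730,918,849,AA,3,,97,79,,,,ORD,DFW,802
EOF
# Serial: one process scans every record, extracts the carrier,
# sorts, and counts occurrences per carrier.
cut -d, -f9 /tmp/flights.csv | sort | uniq -c
```

This works fine for a small file, but a single process scanning two decades of flight logs is exactly the bottleneck MapReduce removes.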

MapReduce Use Case Example: flight data
Count the number of flights for each carrier. Parallel way:
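The parallel version can also be sketched in shell, as a toy imitation of the map, shuffle, and reduce phases (file names and the two-way split are invented for illustration):

```shell
# Two input splits, as if HDFS had already divided the file into blocks.
printf '1987,10,1,4,741,730,912,849,AA\n1987,10,1,4,729,730,903,849,UA\n' > /tmp/part0
printf '1987,10,2,5,741,730,918,849,AA\n1987,10,2,5,749,730,922,849,DL\n' > /tmp/part1
# Map: one mapper per split, run in parallel, each emitting "carrier 1" pairs.
awk -F, '{print $9, 1}' /tmp/part0 > /tmp/map0 &
awk -F, '{print $9, 1}' /tmp/part1 > /tmp/map1 &
wait
# Shuffle + reduce: bring equal keys together, then sum the 1s per carrier.
sort /tmp/map0 /tmp/map1 | awk '{n[$1] += $2} END {for (k in n) print k, n[k]}' | sort
```

The mappers never see each other's splits, and the reducer only ever sees (key, value) pairs grouped by key, which is the same contract a Hadoop job relies on.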

MapReduce application flow

MapReduce steps for flight data computation

FlightsByCarrier application: create FlightsByCarrier.java

FlightsByCarrier application

FlightsByCarrier Mapper

FlightsByCarrier Reducer

Run the code

See Result

Using Pig Script
E.g., totalmiles.pig: calculates the total miles flown for all flights flown in one year.
Execute it: pig totalmiles.pig
See the result: hdfs dfs -cat /user/root/totalmiles/part-r-00000
775009272
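Locally, the aggregation that such a script performs can be sketched with awk over toy records (in the real data set, Distance is the 19th field; in these fabricated rows it is simply the last field):

```shell
# Two fabricated flight records with the flown distance in the last field.
cat > /tmp/miles.csv <<'EOF'
1987,10,1,4,741,730,912,849,AA,1,,91,79,,,,ORD,DFW,802
1987,10,1,4,729,730,903,849,UA,2,,94,85,,,,ORD,SFO,1846
EOF
# Sum the distance column across all records: 802 + 1846.
awk -F, '{total += $NF} END {print total}' /tmp/miles.csv    # -> 2648
```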

Questions?