
Hadoop 2.0 Introduction with HDP for Windows Seele Lin

Who am I
Speaker: 林彥辰, a.k.a. Seele Lin
Mail: seele_lin@trend.com.tw
Experience: 2010~Present; 2013~2014 trainer for Hortonworks certified training lectures: HCAHD (Hortonworks Certified Apache Hadoop Developer) and HCAHA (Hortonworks Certified Apache Hadoop Administrator)

Agenda
What is Big Data
The Need for Hadoop
Hadoop Introduction: What is Hadoop 2.0; Hadoop Architecture Fundamentals; What is HDFS; What is MapReduce; What is YARN; Hadoop eco-systems
HDP for Windows: What is HDP; How to install HDP on Windows; The advantages of HDP
What's Next
Conclusion
Q&A

What is Big Data

What is Big Data?
1. In what timeframe do we now create the same amount of information that we created from the dawn of civilization until 2003? Answer: 2 days.
2. 90% of the world's data was created in the last how many years? Answer: 2 years.
(These figures are from a 2010 report!)
Sources:
http://www.itbusinessedge.com/cm/blogs/lawson/just-the-stats-big-numbers-about-big-data/?cs=48051
http://techcrunch.com/2010/08/04/schmidt-data/

How large can it be? 1ZB = 1000 EB = 1,000,000 PB = 1,000,000,000 TB

Every minute http://whathappensontheinternetin60seconds.com/

The definition?
"Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it too." (Dan Ariely)
Physically it may be a set of files, a database, or a single file.

Big Data Includes All Types of Data
Structured: pre-defined schema; relational database systems.
Semi-structured: inconsistent structure that cannot be stored in rows in a single table, often nested; e.g. logs, tweets.
Unstructured: irregular structure, or parts of it lack structure; e.g. pictures, video.
Often time-sensitive and immutable.

6 Key Hadoop Data Types
1. Sentiment: how your customers feel
2. Clickstream: website visitors' data
3. Sensor/Machine: data from remote sensors and machines
4. Geographic: location-based data
5. Server Logs
6. Text: millions of web pages, emails, and documents

4 V's of Big Data http://www.datasciencecentral.com/profiles/blogs/data-veracity

Next Product to Buy (NPTB)
Business Problem: telecom product portfolios are complex, and there are many cross-sell opportunities to the installed base, but sales associates use in-person conversations to guess about NPTB recommendations, with little supporting data.
Solution: Hadoop gives telcos the ability to make confident NPTB recommendations based on data from all their customers. Confident NPTB recommendations empower sales associates and improve their interactions with customers. Use the HDP data lake to reduce sales friction and create an NPTB advantage like Amazon's advantage in ecommerce.

Use case: the Walmart prediction. Beer + diapers + Friday = revenue?

Localized, Personalized Promotions
Business Problem: telcos can geo-locate their mobile subscribers and could create localized and personalized promotions, but this requires connections with both deep historical data and realtime streaming data, and those connections have been expensive and complicated.
Solution: Hadoop brings the data together to inexpensively localize and personalize promotions delivered to mobile devices: notify subscribers about local attractions, events and sales that align with their preferences and location. Telcos can sell these promotional services to retailers.

360 View of the Customer
Business Problem: retailers interact with customers across multiple channels, but customer interaction and purchase data is often siloed. Few retailers can correlate customer purchases with marketing campaigns and online browsing behavior, and merging data in relational databases is expensive.
Solution: Hadoop gives retailers a 360 view of customer behavior: store data longer and track phases of the customer lifecycle. Gain competitive advantage: increase sales, reduce supply chain expenses and retain the best customers.

Use case: the Target case
Target mined their customer data and sent coupons to shoppers who had high pregnancy-prediction scores. One angry father stormed into a Target to yell at them for sending his daughter coupons for baby clothes and cribs. Guess what: she was pregnant, and hadn't told her father yet.

Changes in Analyzing Data
Big data is fundamentally changing the way we analyze information: we can analyze vast amounts of data rather than evaluating sample sets. Historically we have had to look at causes; now we can look at patterns and correlations in data that give us a much better perspective.

Recent cases 1: http://www.ibtimes.co.uk/global-smartphone-data-traffic-increase-eightfold-17-exabyte-2020-1475571

Recent cases 1: Practice on LINE. http://tech.naver.jp/blog/?p=2412

Recent cases 2: in Taiwan
Media analysis of the 2014 Taipei City mayoral election:
IMHO, 黑貘來說 http://gene.speaking.tw/2014/11/blog-post_28.html
破解社群與APP行銷 http://taiwansmm.wordpress.com/2014/11/26/行銷絕不等於買廣告-2014年台北市長選舉柯文哲與連/

Scale up or Scale out?

Guess what
Traditionally, computation has been processor-bound: for decades, the primary push was to increase the computing power of a single machine (faster processor, more RAM). Distributed systems evolved to allow developers to use multiple machines for a single job; at compute time, data is copied to the compute nodes.

Scaling with a traditional database
Scaling with a queue; sharding the database. Problems: fault-tolerance issues, corruption issues.

NoSQL
"Not Only SQL": provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling and finer control over availability.
Column: Accumulo, Cassandra, Druid, HBase, Vertica
Document: Clusterpoint, Apache CouchDB, Couchbase, MarkLogic, MongoDB
Key-value: Dynamo, FoundationDB, MemcacheDB, Redis, Riak, FairCom c-treeACE, Aerospike
Graph: Allegro, Neo4j, InfiniteGraph, OrientDB, Virtuoso, Stardog

First principles (1/2)
At the most fundamental level, what does a data system do? "A data system answers questions based on information that was acquired in the past." For example: "What is this person's name?" "How many friends does this person have?" A bank account web page answers questions like "What is my current balance?" and "What transactions have occurred on my account recently?"

First principles (2/2)
"Data" is often used interchangeably with the word "information". You answer questions on your data by running functions that take data as input. The most general-purpose data system can answer questions by running functions that take the entire dataset as input; in fact, any query can be answered by running a function on the complete dataset.
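To make "query = function(all data)" concrete, here is a toy sketch in plain Java (illustrative only, not from the slides): the query "how many page views has this URL received?" implemented as a function that scans the complete dataset.

import java.util.List;

public class QueryAsFunction {
    // The query is literally a function over the entire dataset.
    static long pageViewCount(List<String> allPageViews, String url) {
        return allPageViews.stream().filter(url::equals).count();
    }

    public static void main(String[] args) {
        List<String> allPageViews = List.of("/home", "/home", "/about");
        System.out.println(pageViewCount(allPageViews, "/home")); // prints 2
    }
}

On a real dataset a full scan per query is far too expensive, which is exactly why the batch layer below precomputes such functions into views.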

Desired Properties of a Big Data System
Robust and fault-tolerant
Low-latency reads and updates
Scalable
General
Extensible
Allows ad hoc queries
Minimal maintenance

The Lambda Architecture
There is no single tool that provides a complete solution: you have to use a variety of tools and techniques to build a complete Big Data system. The Lambda Architecture solves the problem of computing arbitrary functions on arbitrary data in realtime by decomposing the problem into three layers: the batch layer, the serving layer, and the speed layer.

The Lambda Architecture model

Batch Layer 1
The batch layer stores the master copy of the dataset and precomputes batch views on that master dataset. The master dataset can be thought of as a very large list of records. The batch layer does two things: store an immutable, constantly growing master dataset, and compute arbitrary functions on that dataset. If you're going to precompute views on a dataset, you need to be able to do so for any view and any dataset. There is a class of systems called "batch processing systems" that are built to do exactly what the batch layer requires.

Batch Layer 2

What is a Batch View
Everything starts from the "query = function(all data)" equation. You could literally run your query functions on the fly on the complete dataset to get the results, but it would take a huge amount of resources and would be unreasonably expensive. Instead of computing the query on the fly, you read the results from the precomputed view.

How we get Batch View

Serving Layer 1

Serving Layer 2 The batch layer emits batch views as the result of its functions. The next step is to load the views somewhere so that they can be queried. The serving layer indexes the batch view and loads it up so it can be efficiently queried to get particular values out of the view.

Batch and serving layers satisfy almost all properties
Robust and fault tolerant
Scalable
General
Extensible
Allows ad hoc queries
Minimal maintenance

Speed Layer 1

Speed Layer 2
The speed layer is similar to the batch layer in that it produces views based on data it receives. One big difference is that, in order to achieve the lowest latencies possible, the speed layer doesn't look at all the new data at once: it updates the realtime views as it receives new data instead of recomputing them like the batch layer does. This is "incremental updates" vs. "recomputation updates". (Page view example.)

Speed Layer 3
Complexity isolation: complexity is pushed into a layer whose results are only temporary. The last piece of the Lambda Architecture is merging the results from the batch and realtime views.

Summary of the Lambda Architecture

Summary of the Lambda Architecture
All new data is sent to both the batch layer and the speed layer.
The master dataset is an immutable, append-only set of data.
The batch layer precomputes query functions from scratch.
The serving layer indexes the batch views produced by the batch layer and makes it possible to get particular values out of a batch view very quickly.
The speed layer compensates for the high latency of updates to the serving layer.
Queries are resolved by getting results from both the batch and realtime views and merging them together.
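As a rough sketch of that final point (the map types and names here are assumptions for illustration, not from the deck), merging the batch and realtime views for a counting query can be as simple as summing per key:

import java.util.Map;

public class LambdaMerge {
    // batchView: complete but hours old; realtimeView: covers only recent data.
    static long pageViews(Map<String, Long> batchView,
                          Map<String, Long> realtimeView, String url) {
        return batchView.getOrDefault(url, 0L) + realtimeView.getOrDefault(url, 0L);
    }

    public static void main(String[] args) {
        Map<String, Long> batch = Map.of("/home", 10000L);
        Map<String, Long> realtime = Map.of("/home", 42L);
        System.out.println(pageViews(batch, realtime, "/home")); // prints 10042
    }
}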

The Need for Hadoop
SCALE (storage & processing). (Diagram: compared with a traditional database, EDW, MPP analytics and NoSQL, the Hadoop platform stores and uses all types of data, processes all the data, scales out, and runs on commodity hardware.)

Hadoop as a Data Factory
A role Hadoop can play in an enterprise data platform is that of a data factory, turning structured, semi-structured and raw data into business value.

Hadoop as a Data Lake A larger more general role Hadoop can play in an enterprise data platform is that of a data lake.

Integrating Hadoop
ODBC access for popular BI tools. (Diagram: machine-generated data, web logs, click streams, messaging and social media land in Hadoop as a staging area; ODBC connects applications, spreadsheets, and visualization & intelligence tools for data analysis; Hadoop connectors feed the EDW, data marts and OLTP systems.)

Hadoop Introduction

Apache Hadoop
The Apache Hadoop project was inspired by Google's MapReduce and Google File System papers: an open-sourced, flexible and available architecture for large-scale computation and data processing on a network of commodity hardware. Hadoop creator: Doug Cutting. Yahoo has been the largest contributor to the project and uses Hadoop extensively in its web search and ad business.

Hadoop Concepts
Distribute the data as it is initially stored in the system.
Moving computation is cheaper than moving data: individual nodes can work on data local to those nodes.
Users can focus on developing applications.

Relational Databases vs. Hadoop

              Relational                      Hadoop
Schema:       required on write               required on read
Speed:        reads are fast                  writes are fast
Governance:   standards and structured        loosely structured
Processing:   limited, no data processing     processing coupled with data
Data types:   structured                      multi- and unstructured
Best fit use: interactive OLAP analytics,     data discovery; processing
              complex ACID transactions,      unstructured data; massive
              operational data store          storage/processing

Different behaviors between RDBMS and Hadoop
(Diagram: with an RDBMS, the application reads and writes through SQL against a fixed schema; with Hadoop, the application processes data through MapReduce and a schema is applied when the data is read.)

Why we use Hadoop, not an RDBMS?
Limitations of an RDBMS: capacity (100 GB~100 TB); speed; cost (high-end device prices grow faster than linearly, plus software costs for technical support or license fees); too complex.
A distributed file system is more likely to fit our need: a DFS usually provides backup and fault-tolerance mechanisms, and is cheaper than an RDBMS when the data is really huge.

What is Hadoop 2.0?
The Apache Hadoop 2.0 project consists of the following modules:
Hadoop Common: the utilities that provide support for the other Hadoop modules
HDFS: the Hadoop Distributed File System
YARN: a framework for job scheduling and cluster resource management
MapReduce: for processing large data sets in a scalable and parallel fashion

Difference between Hadoop 1.0 and 2.0

What is YARN
Yet Another Resource Negotiator. The Jira ticket (MAPREDUCE-279) was raised in January 2008 by Hortonworks co-founder Arun Murthy; YARN is the result of 5 years of subsequent development in the open community. YARN has been tested by Yahoo! since September 2012 and has been in production across 30,000 nodes and 325 PB of data since January 2013. More recently, other enterprises such as Microsoft, eBay, Twitter, XING and Spotify have adopted a YARN-based architecture. Apache Hadoop YARN won the Best Paper award at SoCC 2013! (Hortonworks: http://hortonworks.com/blog/apache-hadoop-yarn-wins-best-paper-award-at-socc-2013/)

YARN: Taking Hadoop Beyond Batch With YARN, applications run natively in Hadoop (instead of on Hadoop)

HDFS Federation
http://hortonworks.com/blog/an-introduction-to-hdfs-federation/
(Diagram: in Hadoop 2.0, namespaces such as /app/hive, /app/hbase and /home can be split across multiple federated NameNodes.)

HDFS High Availability (HA)
The Secondary NameNode is not a backup NameNode.
http://www.youtube.com/watch?v=heqqmlsxqly

HDFS High Availability (HA) https://issues.apache.org/jira/browse/hdfs-1623

Hadoop Architecture Fundamentals

What is HDFS
A shared multi-petabyte file system for an entire cluster, managed by a single NameNode with multiple DataNodes.

The Components of HDFS
NameNode: the master node of HDFS; determines and maintains how the chunks of data are distributed across the DataNodes.
DataNode: stores the chunks of data, and is responsible for replicating the chunks across other DataNodes.

Concept: What is NameNode
The NameNode holds the metadata for the files. One HDFS cluster has only one set of metadata, one namespace and one root directory, and only one NameNode, so the NameNode is a single point of failure. The metadata is kept in the NameNode's RAM so it can be queried quickly: 1 GB of RAM can hold roughly 1,000,000 blocks' worth of mapping metadata. With a 64 MB block size, that metadata maps to about 64 TB of actual data (1,000,000 blocks x 64 MB).

More on the Metadata
The NameNode uses two important local files to save the metadata information:
fsimage: saves the file directory tree and the mapping of files to blocks.
edits: saves the file system journal. When a client tries to create or move a file, the operation is first recorded into edits; if the operation succeeds, the copy in RAM is then updated. fsimage is NOT instantly changed.

The NameNode
1. When the NameNode starts, it reads the fsimage and edits files.
2. The transactions in edits are merged with fsimage, and edits is emptied.
3. A client application creates a new file in HDFS.
4. The NameNode logs that transaction in the edits file.

In memory:
File Name      Replicas   Block Sequence   Others
/data/part-0   2          B1, B2, B3       user, group, ...
/data/part-1   3          B4, B5           foo, bar, ...

On disk, fsimage:
File Name      Replicas   Block Sequence   Others
/data/part-0   3          B1, B2, B3       user, group, ...
/data/part-1   3          B4, B5           user, group, ...

On disk, edits:
OP Code               Operands
OP_SET_REPLICATION    "/data/part-0", 2
OP_SET_OWNER          "/data/part-1", "foo", "bar"

(The in-memory state is fsimage plus the operations replayed from edits: the replication and ownership changes are recorded in edits but not yet in fsimage.)

Concept: What is DataNode
DataNodes hold the actual blocks. Each block is 64 MB or 128 MB in size, and each block is replicated three times on the cluster. DataNodes communicate with the NameNode through heartbeats.

Block backup and replication
Each block is replicated multiple times; the default replica count is 3, and clients can change it in the configuration. Every replica of a block has the same ID, so the system has no need to record which blocks are the same. Replica placement is rack-aware: the first copy is written on one rack, and the other two copies go to a different rack, on different machines.

The DataNodes
(Diagram: DataNodes 1-4 heartbeat to the NameNode: "I'm still alive! This is my latest Blockreport." The NameNode replies with instructions such as "Replicate block 123 to DataNode 1.")

Writing data to HDFS:
1. The client sends a request to the NameNode to add a file to HDFS.
2. The NameNode tells the client how and where to distribute the blocks.
3. The client breaks the data into blocks and distributes the blocks to the DataNodes.
4. The DataNodes replicate the blocks (as instructed by the NameNode).
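From client code, this flow is hidden behind Hadoop's standard FileSystem API. A minimal sketch (the NameNode address and path are placeholders): create() performs steps 1-2 against the NameNode, and writing the stream performs steps 3-4 against the DataNodes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // placeholder host
        FileSystem fs = FileSystem.get(conf);                  // talks to the NameNode
        Path file = new Path("/user/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(file, true)) { // NameNode picks DataNodes
            out.writeBytes("Hello HDFS\n"); // blocks stream to DataNodes, which replicate
        }
        System.out.println("replication = " + fs.getFileStatus(file).getReplication());
    }
}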

What is MapReduce
Two functions:
Mapper: since we are processing a huge amount of data, it is natural to split the input data. The Mapper reads data in the form of key/value pairs: map(K1, V1) -> list(K2, V2).
Reducer: since the input data is split, we need another phase to aggregate the results from each split: reduce(K2, list(V2)) -> list(K3, V3).

Hadoop 1.0 Basic Core Architecture
(Diagram: Map -> Shuffle/Sort -> Reduce in the MapReduce layer, running on top of the Hadoop Distributed File System (HDFS).)

Words to Websites - Simplified
From words, provide locations: this is what to display for a search (note: page rank determines the order). For example, to find URLs with books on them:

Map input <url, keywords>:
www.eslite.com: books calendars
www.yahoo.com: sports finance email celebrity
www.amazon.com: shoes books toolkits
www.google.com: finance email search
www.microsoft.com: operating-system productivity

Reduce output <keyword, urls>:
books: www.eslite.com www.amazon.com
email: www.google.com www.yahoo.com www.facebook.com
finance: www.yahoo.com www.google.com
groceries: www.costco.com www.wellcome.com
toolkits: www.costco.com www.amazon.com

Data Model
MapReduce works on <key, value> pairs:
(Key input, Value input): (www.eslite.com, "books calendars")
-> Map -> (Key intermediate, Value intermediate): (books, www.eslite.com)
-> Reduce (together with other map results) -> (Key output, Value output): (books, "www.eslite.com www.amazon.com")
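A minimal Mapper for the Words-to-Websites example above might look like the following sketch (Hadoop's MR2 API; it assumes each input line is "url keyword keyword ..."):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class InvertedIndexMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text keyword = new Text();
    private final Text url = new Text();

    @Override
    public void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\\s+");
        if (fields.length < 2) return;       // skip malformed lines
        url.set(fields[0]);                  // the first token is the URL
        for (int i = 1; i < fields.length; i++) {
            keyword.set(fields[i]);
            context.write(keyword, url);     // emit (keyword, url)
        }
    }
}

The reducer then receives each keyword together with the list of URLs that mention it and concatenates them into the output value.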

The M/R concept
(Diagram: a JobTracker coordinates worker nodes via heartbeats and task reports; each worker node runs a TaskTracker hosting map (M) and reduce (R) tasks.)

Map -> Shuffle -> Reduce
(Diagram: Mappers A, B and C sort their output on TaskTrackers A, B and C; TaskTracker D fetches the sorted map outputs and merges them for Reducer 0.)

Map -> Shuffle -> Reduce
(Diagram: each mapper partitions and sorts its output into per-reducer spills A0/A1, B0/B1, C0/C1; Reducer 0 fetches and merges the 0-partitions while Reducer 1 fetches and merges the 1-partitions.)
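The "Partition" step above decides which reducer receives each intermediate key. Hadoop's default partitioner behaves like this sketch (it mirrors the built-in HashPartitioner):

import org.apache.hadoop.mapreduce.Partitioner;

public class SimpleHashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask the sign bit so the result is non-negative, then bucket by reducer count:
        // with two reducers a key lands in spill 0 (Reducer 0) or spill 1 (Reducer 1).
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}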

Word Count Example
Stages: (Key: offset, Value: line) -> map -> (Key: word, Value: count) -> reduce -> (Key: word, Value: sum of counts)
Input with byte-offset keys: (0, "The cat sat on the mat"), (22, "The aardvark sat on the sofa")

What is YARN? YARN is a re-architecture of Hadoop that allows multiple applications to run on the same platform

Why YARN
Support non-MapReduce workloads, reducing the need to move data between Hadoop HDFS and other storage systems.
Improve scalability: a 2009 node had 8 cores, 16 GB of RAM and 4x1 TB disks; a 2012 node has 16+ cores, 48-96 GB of RAM and 12x2 TB or 12x3 TB of disk. Hadoop 1.0 was built to scale to production deployments of ~5,000 nodes of 2009-vintage hardware.
Improve cluster utilization: the JobTracker views the cluster as nodes (managed by individual TaskTrackers) with distinct map slots and reduce slots.
Customer agility.

How YARN Works
YARN's original purpose was to split up the two major responsibilities of the JobTracker/TaskTracker into separate entities:
a global ResourceManager
a per-application ApplicationMaster
a per-node slave NodeManager
per-application Containers running on the NodeManagers
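From application code, the global ResourceManager is reached through the YarnClient API. A small read-only sketch (it assumes a yarn-site.xml on the classpath pointing at the ResourceManager) that lists the NodeManagers the ResourceManager is tracking:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class YarnNodesSketch {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new Configuration()); // reads yarn-site.xml from the classpath
        yarnClient.start();

        // The ResourceManager tracks every NodeManager and its resources.
        List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
        for (NodeReport node : nodes) {
            System.out.println(node.getNodeId() + " capability=" + node.getCapability());
        }
        yarnClient.stop();
    }
}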

MapReduce v1

YARN

The Hadoop 1.x and 2.x Ecosystem

The Path to ROI
1. Put the data into HDFS in its raw format.
2. Use Pig to explore and transform.
3. Data analysts use Hive to query the structured data (answers to questions = $$).
4. Data scientists use MapReduce, R and Mahout to mine the data (hidden gems = $$).

Flume & Sqoop

Flume / Sqoop: Data Integration Framework
What's the problem with data collection? Data collection is currently a priori and ad hoc:
A priori: you decide what you want to collect ahead of time.
Ad hoc: each kind of data source goes through its own collection path.

What is Flume (and how can it help?)
A distributed data collection service: it efficiently collects, aggregates and moves large amounts of data. It is fault tolerant, with many failover and recovery mechanisms, and is a one-stop solution for data collection of all formats.

Flume: High-Level Overview
(Diagram: a logical node connects a source to a sink.)

An example flow

Sqoop
Easy, parallel database import/export. Use it when you want to:
Import data from an RDBMS into HDFS
Export data from HDFS back into an RDBMS

Sqoop - import process

Sqoop - export process Exports are performed in parallel using MapReduce

Why Sqoop
JDBC-based implementation: works with many popular database vendors.
Auto-generation of tedious user-side code: write MapReduce applications that work with your data, faster.
Integration with Hive: allows you to stay in a SQL-based environment.

Pig & Hive

Why Hive and Pig?
Although MapReduce is very powerful, it can also be complex to master. Many organizations have business or data analysts who are skilled at writing SQL queries, but not at writing Java code, and programmers who are skilled at writing code in scripting languages. Hive and Pig are two projects which evolved separately to help such people analyze huge amounts of data via MapReduce. Hive was initially developed at Facebook, Pig at Yahoo!

Pig
Initiated by Yahoo! An engine for executing programs on top of Hadoop, with a high-level scripting language (Pig Latin) that processes data one step at a time. It makes MapReduce programs simple to write, easy to understand and easy to debug. For example:

A = LOAD 'a.txt' AS (id, name, age, ...);
B = LOAD 'b.txt' AS (id, address, ...);
C = JOIN A BY id, B BY id;
STORE C INTO 'c.txt';

Hive
Developed by Facebook. What is Hive? An SQL-like interface to Hadoop: treat your Big Data as tables. A data warehouse infrastructure which provides data summarization and ad hoc querying on top of Hadoop, using MapReduce for execution, and maintains metadata information about your Big Data stored on HDFS. Hive Query Language example:

SELECT storeid, SUM(price) FROM purchases WHERE price > 100 GROUP BY storeid;

WordCount Example
Input:
Hello World Bye World
Hello Hadoop Goodbye Hadoop

For the given sample input, the map emits:
<Hello, 1> <World, 1> <Bye, 1> <World, 1>
<Hello, 1> <Hadoop, 1> <Goodbye, 1> <Hadoop, 1>

The reduce just sums up the values:
<Bye, 1> <Goodbye, 1> <Hadoop, 2> <Hello, 2> <World, 2>

WordCount Example In MapReduce

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}

WordCount Example By Pig

A = LOAD 'wordcount/input' USING PigStorage() AS (token:chararray);
B = GROUP A BY token;
C = FOREACH B GENERATE group, COUNT(A) AS count;
DUMP C;

WordCount Example By Hive

CREATE TABLE wordcount (token STRING);
LOAD DATA LOCAL INPATH 'wordcount/input' OVERWRITE INTO TABLE wordcount;
SELECT token, count(*) FROM wordcount GROUP BY token;

Hive vs. Pig

                      Hive                            Pig
Language:             HiveQL (SQL-like)               Pig Latin, a scripting language
Schema:               table definitions stored        a schema is optionally defined
                      in a metastore                  at runtime
Programmatic access:  JDBC, ODBC                      PigServer

HCatalog in the Ecosystem
(Diagram: Java MapReduce and other tools access data in HDFS, HBase, etc. through HCatalog's shared table metadata.)

Oozie

What is Oozie?
A Java web application: Oozie is a workflow scheduler for Hadoop, "cron for Hadoop". (Diagram: Jobs 1-5 chained into a workflow.)

How it is triggered
Time: execute your workflow every 15 minutes (00:15, 00:30, 00:45, 01:00).
Event: materialize your workflow every hour, but only run it when the input data is ready (at 01:00, 02:00, 03:00, 04:00: does the input data exist in Hadoop yet?).

Defining an Oozie Workflow
(Diagram: Start -> a graph of actions connected by control flow -> End.)
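The start/action/end graph itself is written as a workflow.xml stored on HDFS; submitting it from Java goes through the Oozie client API. A minimal sketch (server URL, paths and hostnames are placeholders):

import java.util.Properties;
import org.apache.oozie.client.OozieClient;

public class OozieSubmitSketch {
    public static void main(String[] args) throws Exception {
        OozieClient oozie = new OozieClient("http://oozie.example.com:11000/oozie");

        Properties conf = oozie.createConfiguration();
        // Directory on HDFS containing the workflow.xml definition.
        conf.setProperty(OozieClient.APP_PATH,
                "hdfs://namenode.example.com:8020/user/demo/workflow");
        conf.setProperty("nameNode", "hdfs://namenode.example.com:8020");
        conf.setProperty("jobTracker", "resourcemanager.example.com:8032");

        String jobId = oozie.run(conf); // submit and start the workflow
        System.out.println("Workflow job " + jobId + ": "
                + oozie.getJobInfo(jobId).getStatus());
    }
}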

HDP on Windows

General Planning Considerations
Run on a single node? Only for tests and simple operations; not suitable for big data.
Start from a small cluster, maybe 4 or 6 nodes, and add more nodes as the data grows.
Expand when necessary: when storage is not enough, or to improve computing capability.

Traditional Operating System Selection
RedHat Enterprise Linux
CentOS
Ubuntu Server
SuSE Enterprise Linux

Topology
Master nodes: active NameNode, ResourceManager, Secondary NameNode (or Standby NameNode).
Slave nodes: DataNode, NodeManager.

Network Topology
(Diagram: racks 1 to n, each with its own switch uplinked to core switches; the NameNode, ResourceManager and Secondary NameNode run on dedicated nodes, and every other node runs DataNode + NodeManager.)

Hadoop Distributions
Apache Hadoop, Cloudera, MapR, Hortonworks, Amazon EMR, Greenplum, IBM

Get all your Hadoop packages and make sure the packages are compatible!

Who is Hortonworks?
Upstream community projects, downstream enterprise product: a virtuous cycle when development and issue fixes are done upstream and stable project releases flow downstream. Hortonworks designs, develops, tests and patches Apache Hadoop, Pig, Hive, HBase, HCatalog, Ambari and other Apache projects, then integrates, tests, packages, certifies and distributes them as the Hortonworks Data Platform. No lock-in: an integrated, tested & certified distribution lowers risk by ensuring close alignment with Apache projects.

What is HDP
The Hortonworks Data Platform (HDP): Enterprise Hadoop.
Operational services (Ambari, Oozie) to manage and operate at scale.
Data services: Pig, Hive, HCatalog, HBase, Flume, Sqoop, WebHDFS.
Hadoop core: HDFS, MapReduce, YARN.
Platform services for enterprise readiness: HA, DR, snapshots, security.
The ONLY 100% open-source and complete distribution; enterprise grade, proven and tested at scale; ecosystem-endorsed to ensure interoperability. Runs on bare OS, cloud, VM or appliance.

The management for HDP

What is HDP for Windows
HDP for Windows significantly expands the ecosystem for the next-generation big data platform. This means that the Microsoft partners and tools you already rely on can help you with your Big Data initiatives. HDP for Windows is the Microsoft-recommended way to deploy Hadoop in Windows Server environments. It supports Windows Server 2008 and Windows Server 2012.

Choose your HDP

HDP hardware recommendations
Slave nodes, balanced workload: twelve 2-3 TB disks, 8 cores, 128-256 GB RAM
Slave nodes, compute-intensive workload: twelve 1-2 TB disks, 10 cores, 128-256 GB RAM
Slave nodes, storage-heavy workload: twelve 4+ TB disks, 8 cores, 128-256 GB RAM
NameNode, balanced workload: four or more 2-3 TB disks in RAID 10 with spares, 8 cores, 128-256 GB RAM
ResourceManager, balanced workload: four or more 2-3 TB disks in RAID 10 with spares, 8 cores, 128-256 GB RAM
Network (all machine types): 1 GB onboard, 2x10 GbE mezzanine/external

The installation of HDP for Windows 1

The installation of HDP for Windows 2

The installation of HDP for Windows 3

#Log directory
HDP_LOG_DIR=d:\hadoop\logs
#Data directory
HDP_DATA_DIR=d:\hdp\data
#Hosts
NAMENODE_HOST=NAMENODE_MASTER.acme.com
SECONDARY_NAMENODE_HOST=SECONDARY_NAMENODE_MASTER.acme.com
RESOURCEMANAGER_HOST=RESOURCEMANAGER_MASTER.acme.com
HIVE_SERVER_HOST=HIVE_SERVER_MASTER.acme.com
OOZIE_SERVER_HOST=OOZIE_SERVER_MASTER.acme.com
WEBHCAT_HOST=WEBHCAT_MASTER.acme.com
FLUME_HOSTS=FLUME_SERVICE1.acme.com,FLUME_SERVICE2.acme.com,FLUME_SERVICE3.acme.com
HBASE_MASTER=HBASE_MASTER.acme.com
HBASE_REGIONSERVERS=slave1.acme.com,slave2.acme.com,slave3.acme.com
ZOOKEEPER_HOSTS=slave1.acme.com,slave2.acme.com,slave3.acme.com
SLAVE_HOSTS=slave1.acme.com,slave2.acme.com,slave3.acme.com

The installation of HDP for Windows 4

#Database host
DB_FLAVOR=derby
DB_HOSTNAME=DB_myHostName
#Hive properties
HIVE_DB_NAME=hive
HIVE_DB_USERNAME=hive
HIVE_DB_PASSWORD=hive
#Oozie properties
OOZIE_DB_NAME=oozie
OOZIE_DB_USERNAME=oozie
OOZIE_DB_PASSWORD=oozie

What is HDP for Windows(Con.)

The management of HDP for Windows The Ambari SCOM integration is made possible by the pluggable nature of Ambari.

The management of HDP for Windows(Con.)

The Advantages of HDP for Windows
Hadoop on Windows made easy: with HDP for Windows, Hadoop is both simple to install and manage. It demystifies the Hadoop distribution so you don't need to choose and test the right combination of Hadoop projects to deploy.
Clean and easy management: Apache Ambari, the open-source choice for management of a Hadoop cluster, is integrated with and extends Microsoft System Center, so IT operators can manage their Hadoop clusters side by side with their databases, applications and other IT assets on a single screen.
Secure, reliable, enterprise-ready Hadoop: offering the most reliable, innovative and trusted distribution available, Microsoft and Hortonworks together deliver tighter security through integration with Windows Server Active Directory, and ease of management through System Center integration.

The Data Integration: the Hive ODBC Driver
(Diagram: BI tools, analytics and reporting connect to Hadoop through the Hive ODBC Driver.)

Using Hive with Excel Using the Hive ODBC Driver, your Excel spreadsheets can query data stored in Hadoop

Querying Hive from Excel

Querying Hive from Excel (Con.)

Combine model using Power View in Excel

What's Next? Spark, Ambari, Ranger, Falcon

Why MapReduce is too slow
Spark aims to make data analytics fast: both fast to run and fast to write, especially when you have iterative algorithms.

What is Spark
An in-memory distributed computing framework, created by the UC Berkeley AMP Lab in 2010. It targets problems that Hadoop MR is bad at: iterative algorithms (machine learning) and interactive data mining. More general-purpose than Hadoop MR, with active contributions from ~15 companies.

What is different between Hadoop and Spark?
http://spark.incubator.apache.org
(Diagram: Hadoop chains Map and Reduce stages, writing between jobs to HDFS; Spark pipelines operations such as Map(), Join() and Transform() across data sources and can Cache() intermediate results in memory.)
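A small Java sketch of the difference the diagram highlights: Spark's cache() pins a dataset in memory, so an iterative loop re-reads it from RAM instead of re-reading HDFS on every pass (the path and the filter are illustrative):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkCacheSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("cache-demo");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Load once from HDFS, then keep the RDD in memory for reuse.
        JavaRDD<String> lines = sc.textFile("hdfs:///user/demo/input").cache();

        // An iterative algorithm touches the same data many times; with
        // MapReduce each pass would be a separate job re-reading from HDFS.
        for (int i = 0; i < 10; i++) {
            final String needle = "pattern-" + i; // effectively final for the lambda
            long hits = lines.filter(line -> line.contains(needle)).count();
            System.out.println("iteration " + i + ": " + hits);
        }
        sc.stop();
    }
}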

What is Ambari
Provision a Hadoop cluster: Ambari provides a step-by-step wizard for installing Hadoop services across any number of hosts, and handles configuration of Hadoop services for the cluster.
Manage a Hadoop cluster: Ambari provides central management for starting, stopping and reconfiguring Hadoop services across the entire cluster.
Monitor a Hadoop cluster: Ambari provides a dashboard for monitoring the health and status of the Hadoop cluster.

Ambari installation Wizard

Ambari central dashboard

Conclusion

Recap - Lifecycle of a YARN Application
(Diagram: a client submits an application to the ResourceManager; NodeManagers host the ApplicationMaster and its containers.)
A container is the basic unit of allocation (e.g. Container A = 2 GB, 1 CPU). This fine-grained resource allocation replaces the fixed map/reduce slots.

Hadoop 2.0 Eco-systems

Q&A

Questions?