E6893 Big Data Analytics Lecture 2: Big Data Analytics Platforms
Ching-Yung Lin, Ph.D.
Adjunct Professor, Dept. of Electrical Engineering and Computer Science
Mgr., Dept. of Network Science and Big Data Analytics, IBM Watson Research Center
September 11, 2014
Course Structure
Class 1 (09/04/14): Introduction to Big Data Analytics
Class 2 (09/11/14): Big Data Analytics Platforms
Class 3 (09/18/14): Big Data Storage and Processing
Class 4 (09/25/14): Big Data Analytics Algorithms -- I
Class 5 (10/02/14): Big Data Analytics Algorithms -- II
Class 6 (10/09/14): Linked Big Data Analysis
Class 7 (10/16/14): Graph Computing and Network Science
Class 8 (10/23/14): Big Data Visualization
Class 9 (10/30/14): Big Data Mobile Applications
Class 10 (11/06/14): Large-Scale Machine Learning
Class 11 (11/13/14): Big Data Analytics on Specific Processors
Class 12 (11/20/14): Hardware and Cluster Platforms for Big Data Analytics
11/27/14: Thanksgiving Holiday (no class)
Class 13 (12/04/14): Big Data Next Challenges -- IoT, Cognition, and Beyond; Final Projects Discussion
Classes 14-15 (12/11/14 & 12/12/14): (Optional) Two-Day Big Data Analytics Workshop -- Final Project Presentations
Course information -- TAs
11 Teaching Assistants:
Ruichi Yu <ry2254>, Computer Science
Aonan Zhang <az2385>, Electrical Engineering
Promiti Dutta <pd2049>, Electrical Engineering and Environmental Engineering
Bhaveep Sethi <bas2226>, Computer Science
Weizhen Wang <ww2339>, Computer Science
Jen-Chieh Huang <jh3478>, Computer Engineering
Yunzhi Ye <yy2509>, Computer Science
Meng-Yi (Marcus) Hsu <mh3346>, Electrical Engineering
Shuguan Yang <sy2518>, Electrical Engineering
Lin Huang <lh2647>, Electrical Engineering
Huan Gao <hg2357>, Electrical Engineering
Students will be divided into groups based on interest.
Goal: align students into interest-domain groups that focus on the use scenarios, datasets, and requirements needed to create open-source Big Data Analytics toolkits. Fields not on the list (e.g., Education, Social Science) are also welcome.
Selection: an online website will be opened to let all students (on-campus & CVN) submit up to 3 preferences and a description of their education/work background in each domain. TAs will be assigned to lead 11 groups. Some groups may span multiple fields; some fields may have multiple groups.
Related Information
Guest: Prof. Ernesto Reuben, Business School; Associate Director, CELSS
Columbia Experimental Laboratory for the Social Sciences (CELSS): a joint venture of the Business School, Sociology, Economics, SIPA, and Political Science.
Reading Reference for Lecture 2 & 3
Reminder -- Apache Hadoop
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
The project includes these modules:
Hadoop Common: the common utilities that support the other Hadoop modules.
Hadoop Distributed File System (HDFS): a distributed file system that provides high-throughput access to application data.
Hadoop YARN: a framework for job scheduling and cluster resource management.
Hadoop MapReduce: a YARN-based system for parallel processing of large data sets.
http://hadoop.apache.org
Reminder -- Hadoop-related Apache Projects
Ambari: a web-based tool for provisioning, managing, and monitoring Hadoop clusters. It also provides a dashboard for viewing cluster health and the ability to view MapReduce, Pig, and Hive applications visually.
Avro: a data serialization system.
Cassandra: a scalable multi-master database with no single points of failure.
Chukwa: a data collection system for managing large distributed systems.
HBase: a scalable, distributed database that supports structured data storage for large tables.
Hive: a data warehouse infrastructure that provides data summarization and ad hoc querying.
Mahout: a scalable machine learning and data mining library.
Pig: a high-level data-flow language and execution framework for parallel computation.
Spark: a fast and general compute engine for Hadoop data. Spark provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation.
Tez: a generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use cases.
ZooKeeper: a high-performance coordination service for distributed applications.
Common Use Cases for Big Data in Hadoop
Log Data Analysis: the most common use case; it fits the HDFS scenario perfectly (write once, read often).
Data Warehouse Modernization
Fraud Detection
Risk Modeling
Social Sentiment Analysis
Image Classification
Graph Analysis, and beyond
D. deRoos et al., Hadoop for Dummies, John Wiley & Sons, 2014
Example: Business Value of Log Analysis -- Struggle Detection
D. deRoos et al., Hadoop for Dummies, John Wiley & Sons, 2014
Reminder -- Hadoop Distributed File System (HDFS)
http://hortonworks.com/hadoop/hdfs/
Reminder -- MapReduce example
http://www.alex-hanna.com
MapReduce Process on User Behavior via Log Analysis
D. deRoos et al., Hadoop for Dummies, John Wiley & Sons, 2014
Setting Up the Hadoop Environment
Local (standalone) mode
Pseudo-distributed mode
Fully-distributed mode
Setting Up the Hadoop Environment -- Pseudo-distributed mode
http://hadoop.apache.org/docs/stable2/hadoop-project-dist/hadoop-common/singlenodesetup.html
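For reference, the Apache single-node setup guide configures pseudo-distributed mode with two small files; a sketch for Hadoop 2.x (values are the guide's defaults):

    etc/hadoop/core-site.xml:
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    etc/hadoop/hdfs-site.xml:
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>  <!-- a single node cannot hold the default 3 replicas -->
      </property>
    </configuration>

After formatting the NameNode (hdfs namenode -format), sbin/start-dfs.sh brings up the NameNode and a single DataNode.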
Data Storage Operations on HDFS
Hadoop is designed to work best with a modest number of extremely large files -- average file sizes larger than 500 MB -- under a Write Once, Read Often model. The content of individual files cannot be modified, other than by appending new data at the end of the file.
What we can do:
Create a new file
Append content to the end of a file
Delete a file
Rename a file
Modify file attributes, like the owner
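These operations map directly onto the org.apache.hadoop.fs.FileSystem API; a minimal sketch against Hadoop 2.x, with a hypothetical path (append must be enabled on the cluster, and setOwner needs sufficient privileges):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsOps {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration()); // picks up core-site.xml
            Path file = new Path("/user/demo/events.log");       // hypothetical path

            FSDataOutputStream out = fs.create(file);            // create a new file
            out.writeBytes("first record\n");
            out.close();

            out = fs.append(file);                               // append to the end of the file
            out.writeBytes("second record\n");
            out.close();

            fs.rename(file, new Path("/user/demo/events.old"));  // rename
            fs.setOwner(new Path("/user/demo/events.old"), "demo", "hadoop"); // modify attributes
            fs.delete(new Path("/user/demo/events.old"), false); // delete (non-recursive)
            fs.close();
        }
    }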
HDFS blocks
A file is divided into blocks (default: 64 MB) that are duplicated in multiple places (default: 3 replicas). Dividing files into blocks is normal for a file system; e.g., the default block size in Linux is 4 KB. What distinguishes HDFS is the scale: Hadoop was designed to operate at the petabyte scale. Every data block stored in HDFS has its own metadata and needs to be tracked by a central server.
HDFS blocks
Replication patterns of data blocks in HDFS: when HDFS stores the replicas of the original blocks across the Hadoop cluster, it tries to ensure that the block replicas are stored at different failure points.
HDFS is a User-Space-Level file system
Interaction between HDFS components
HDFS Federation
Before Hadoop 2.0, the NameNode was a single point of failure and an operational limitation: few Hadoop clusters were able to scale beyond 3,000 or 4,000 nodes.
Multiple NameNodes can be used in Hadoop 2.x. With the HDFS High Availability feature, one NameNode is in an Active state while the other is in a Standby state.
http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/hdfshighavailabilitywithnfs.html
High Availability of the NameNodes
Active NameNode and Standby NameNode: the Standby keeps the state of the block locations and block metadata in memory and carries the HDFS checkpointing responsibilities.
JournalNode: if a failure occurs, the Standby Node reads all completed journal entries to ensure the new Active NameNode is fully consistent with the state of the cluster.
ZooKeeper: provides coordination and configuration services for distributed systems.
Data Compression in HDFS
Several useful commands for HDFS
All Hadoop commands are invoked by the bin/hadoop script.
% hadoop fsck / -files -blocks: list the blocks that make up each file in HDFS.
For HDFS the URI scheme is hdfs, and for the local file system the scheme is file. A file or directory in HDFS can be specified in a fully qualified way, such as hdfs://namenodehost/parent/child or hdfs://namenodehost.
The HDFS file system shell commands are similar to Linux file commands, with the following general syntax: hdfs dfs -file_cmd
For instance, mkdir runs as:
$ hdfs dfs -mkdir /user/directory_name
Several useful commands for HDFS -- II
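The slide's command table is an image; for reference, a few common Hadoop 2.x shell commands (paths here are placeholders):

    $ hdfs dfs -ls /user/root                         # list a directory
    $ hdfs dfs -put flights.csv /user/root            # copy a local file into HDFS
    $ hdfs dfs -get /user/root/flights.csv .          # copy a file out of HDFS
    $ hdfs dfs -cat /user/root/results/part-r-00000   # print a file's contents
    $ hdfs dfs -rm /user/root/old.csv                 # delete a file
    $ hdfs dfs -chown demo /user/root/flights.csv     # change a file's owner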
Ingesting Log Data -- Flume: ingesting stream data
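The Flume details appear on the slide as a diagram; as a sketch, a Flume agent is configured with a source, a channel, and a sink in a properties file (the agent and component names below are hypothetical):

    # tail a log file and land the events in HDFS
    agent1.sources  = logsrc
    agent1.channels = memch
    agent1.sinks    = hdfssink

    agent1.sources.logsrc.type = exec
    agent1.sources.logsrc.command = tail -F /var/log/app.log
    agent1.sources.logsrc.channels = memch

    agent1.channels.memch.type = memory

    agent1.sinks.hdfssink.type = hdfs
    agent1.sinks.hdfssink.hdfs.path = /flume/logs
    agent1.sinks.hdfssink.channel = memch

The agent would then be started with something like: flume-ng agent --name agent1 --conf-file agent1.conf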
Execute Hadoop Works
http://www.alex-hanna.com
Reminder -- MapReduce Data Flow
http://www.ibm.com/developerworks/cloud/library/cl-openstack-deployhadoop/
MapReduce Use Case Example -- flight data
Data Source: Airline On-time Performance data set (flight data set), the logs of all U.S. domestic flights from October 1987 to April 2008. Each record represents an individual flight, where various details are captured:
Time and date of arrival and departure
Originating and destination airports
Amount of time taken to taxi from the runway to the gate
Download it from Statistical Computing: http://stat-computing.org/dataexpo/2009/
Other datasets available from Statistical Computing
http://stat-computing.org/dataexpo/
Flight Data Schema
MapReduce Use Case Example -- flight data
Count the number of flights for each carrier.
Serial way (not MapReduce):
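The code on the slide is an image; a minimal serial sketch in Java, assuming the carrier code sits in column 9 of the CSV (index 8, UniqueCarrier in the public flight-data schema):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;

    public class SerialFlightCount {
        public static void main(String[] args) throws Exception {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {          // one sequential pass
                String carrier = line.split(",")[8];          // UniqueCarrier column (assumed)
                if ("UniqueCarrier".equals(carrier)) continue; // skip the header row
                Integer n = counts.get(carrier);
                counts.put(carrier, n == null ? 1 : n + 1);
            }
            in.close();
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                System.out.println(e.getKey() + "\t" + e.getValue());
            }
        }
    }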
MapReduce Use Case Example -- flight data
Count the number of flights for each carrier.
Parallel way:
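The parallel version on the slide is likewise an image. The idea, before Hadoop enters the picture, is that each worker counts its own partition and the partial counts are merged by key -- exactly the map/shuffle/reduce pattern. A single-machine sketch using Java parallel streams (the column index is an assumption, as above):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.concurrent.ConcurrentMap;
    import java.util.stream.Collectors;

    public class ParallelFlightCount {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Paths.get(args[0]));
            ConcurrentMap<String, Long> counts = lines.parallelStream()
                    .map(line -> line.split(",")[8])          // "map": extract the carrier key
                    .collect(Collectors.groupingByConcurrent( // "reduce": count per key
                            c -> c, Collectors.counting()));
            counts.forEach((c, n) -> System.out.println(c + "\t" + n));
        }
    }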
MapReduce application flow
MapReduce steps for flight data computation
FlightsByCarrier application
Create FlightsByCarrier.java:
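The driver itself appears on the slides as images; a minimal Hadoop 2.x sketch of what such a driver looks like (class names follow the slide titles; input and output paths come from the command line):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class FlightsByCarrier {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "FlightsByCarrier");
            job.setJarByClass(FlightsByCarrier.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));   // flight-data CSV in HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir (must not exist)

            job.setMapperClass(FlightsByCarrierMapper.class);
            job.setReducerClass(FlightsByCarrierReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }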
FlightsByCarrier Mapper
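The mapper slide is also an image; a sketch consistent with the flight-data schema (carrier code in column 9, an assumption taken from the public data dictionary):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class FlightsByCarrierMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text carrier = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split(",");
            // Index 8 is UniqueCarrier; skip the header row, which repeats the column name.
            if (fields.length > 8 && !"UniqueCarrier".equals(fields[8])) {
                carrier.set(fields[8]);
                context.write(carrier, ONE);   // emit (carrier, 1)
            }
        }
    }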
FlightsByCarrier Reducer
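A matching reducer sketch, summing the 1s emitted for each carrier key:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class FlightsByCarrierReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text carrier, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) {
                sum += c.get();               // add up the 1s from the mappers
            }
            context.write(carrier, new IntWritable(sum));
        }
    }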
Run the code
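The commands are shown on the slide as an image; a typical invocation looks something like this (the jar name and HDFS paths are assumptions):

    $ hadoop jar FlightsByCarrier.jar FlightsByCarrier /user/root/airline/2008.csv /user/root/flightsbycarrier
    $ hdfs dfs -cat /user/root/flightsbycarrier/part-r-00000

Note that the output directory must not exist before the job runs; Hadoop refuses to overwrite it.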
See Result
Using Pig Script
E.g., totalmiles.pig calculates the total miles flown for all flights flown in one year.
Execute it: pig totalmiles.pig
See the result: hdfs dfs -cat /user/root/totalmiles/part-r-00000
775009272
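The script body is not reproduced on the slide; a minimal Pig Latin sketch of such a calculation (the input path and the Distance column position, $18 per the public flight-data schema, are assumptions):

    -- load one year of flight records and sum the Distance column
    records = LOAD '/user/root/airline/2008.csv' USING PigStorage(',');
    milage_recs = GROUP records ALL;                            -- one group holding every record
    tot_miles = FOREACH milage_recs GENERATE SUM(records.$18);  -- $18 = Distance (assumed)
    STORE tot_miles INTO '/user/root/totalmiles';               -- the path read back above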
Questions?