Hadoop and MySQL for Big Data


Hadoop and MySQL for Big Data. Alexander Rubin, October 9, 2013.

About Me: Alexander Rubin, Principal Consultant, Percona. Working with MySQL for over 10 years: started at MySQL AB, then Sun Microsystems and Oracle (MySQL Consulting); worked at Hortonworks; joined Percona in 2013.

Agenda: Hadoop Use Cases; Inside Hadoop; Big Data with Hadoop; MySQL and Hadoop Integration; Star Schema Benchmark.

Big Data: Hadoop, when it makes sense.

Big Data: the 3Vs of big data. Volume: petabytes. Variety: any type of data, usually unstructured/raw data, no normalization. Velocity: data is collected at a high rate. http://en.wikipedia.org/wiki/big_data#definition

Hadoop Use Cases. Clickstream: the data trail a user leaves while visiting a website. Task: store and analyze it.

Hadoop Use Cases. Risk modeling: fraud prevention (banks, credit cards). Store all customer activity and run analysis to identify credit card fraud.

Hadoop Use Cases. Point-of-sale analysis.

Hadoop Use Cases. Stocks / historical price analysis.

Inside Hadoop

Where is my data? On the data nodes, in HDFS (the Hadoop Distributed File System). HDFS = data is spread across MANY machines.

Inside Hadoop, the stack: HBase (key-value NoSQL DB), Pig (scripting), Hive (SQL), MapReduce (distributed programming framework), HDFS (distributed file system). HDFS: data is spread across MANY machines; files are written in append-only mode.

Inside Hadoop: MapReduce reads and processes the data; the code is executed on the DATA NODES, inside HDFS.

Inside Hadoop: Hive executes SQL by translating it into MapReduce jobs; Pig is a scripting language that likewise translates into MapReduce jobs.

Inside Hadoop: HBase is a key/value NoSQL store.

Hive: the SQL layer for Hadoop. Translates SQL into MapReduce jobs. Schema on read: the data is not checked on load.

Hive Example:

    hive> create table lineitem (
            l_orderkey int, l_partkey int, l_suppkey int, l_linenumber int,
            l_quantity double, l_extendedprice double, l_discount double,
            l_tax double, l_returnflag string, l_linestatus string,
            l_shipdate string, l_commitdate string, l_receiptdate string,
            l_shipinstruct string, l_shipmode string, l_comment string)
          row format delimited fields terminated by ' ';
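Because the schema is applied only on read, a load never validates the data; a minimal sketch of what that means in practice (the staging path is hypothetical):

    -- Move a raw file into the table's warehouse directory; Hive does not
    -- parse or validate the rows at load time (schema on read).
    hive> load data inpath '/staging/lineitem.tbl' into table lineitem;

    -- Problems surface only at query time: any field that cannot be cast
    -- to its declared column type comes back as NULL.
    hive> select l_orderkey, l_quantity from lineitem limit 10;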

Hive Example (external table):

    hive> create external table lineitem (
            l_orderkey int, l_partkey int, l_suppkey int, l_linenumber int,
            l_quantity double, l_extendedprice double, l_discount double,
            l_tax double, l_returnflag string, l_linestatus string,
            l_shipdate string, l_commitdate string, l_receiptdate string,
            l_shipinstruct string, l_shipmode string, l_comment string)
          row format delimited fields terminated by ' '
          location '/ssb/lineorder/';
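To see the SQL-to-MapReduce translation in action, a small query against the table above is enough; this is a hedged sketch, and the exact plan output depends on the Hive version:

    -- A simple aggregate; Hive compiles this into a MapReduce job
    -- (map: scan and group locally; reduce: produce the final counts).
    hive> select l_returnflag, count(*) as cnt
          from lineitem
          group by l_returnflag;

    -- EXPLAIN prints the plan, including the map and reduce stages.
    hive> explain select l_returnflag, count(*) as cnt
          from lineitem
          group by l_returnflag;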

Impala: faster SQL. Not based on MapReduce; it reads data directly from HDFS. http://www.cloudera.com/content/cloudera-content/cloudera-docs/impala/latest/Installing-and-Using-Impala/Installing-and-Using-Impala.html

Hadoop vs MySQL

Hadoop vs. MySQL for Big Data. MySQL: indexes, partitioning, sharding. Hadoop: full table scan, partitioning, Map/Reduce.

Hadoop (vs. MySQL): no indexes, so all processing is a full scan, BUT it runs distributed and in parallel; no transactions; high latency (usually).

Indexes for big data: the challenge. Creating an index over petabytes of data? Updating an index over petabytes of data? Reading a terabyte-sized index? Random reads across a petabyte? A parallel full scan works better for big data.

ETL vs ELT. ETL: 1. extract data from the external source; 2. transform it before loading; 3. load the data into MySQL. ELT: 1. extract data from the external source; 2. load it into Hadoop; 3. transform/analyze/visualize the data. A sketch of the ELT transform step follows.
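In the ELT flow the transform runs inside Hadoop, after the raw load; a minimal Hive sketch, with hypothetical table and path names, assuming each log line starts with a timestamp field:

    -- "L": raw, untransformed lines exposed as an external table.
    hive> create external table clicks_raw (line string)
          location '/data/clicks_raw/';

    -- "T": parse, filter and aggregate in parallel on the cluster.
    hive> create table clicks_by_day as
          select to_date(split(line, '\t')[0]) as day,
                 count(*) as hits
          from clicks_raw
          group by to_date(split(line, '\t')[0]);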

Hadoop Example: Load Data I. Data flows from the OLTP / web site (MySQL) via ELT into the Hadoop cluster, which feeds BI / data analysis.

Archiving to Hadoop. The OLTP / web site MySQL server has a goal of keeping only 100G; data is moved (ELT) into the Hadoop cluster, which can store petabytes for archiving and serves BI / data analysis.

Hadoop Example: Load Data II. Clickstream logs on the web server's filesystem are picked up by Flume and loaded into the Hadoop cluster for reports.

Hadoop and MySQL: Apache Sqoop (http://sqoop.apache.org/) is used to import and export data between Hadoop and MySQL.

Hadoop and MySQL Together: integrating MySQL and Hadoop.

Integration: Hadoop -> MySQL (data flows from the Hadoop cluster to the MySQL server).

Hadoop -> MySQL: Sqoop

    $ sqoop export \
        --connect jdbc:mysql://mysql_host/db_name \
        --table orders \
        --export-dir /results/

Hadoop -> MySQL: Sqoop. Not real-time: run it from a cron job. It exports from files on HDFS to MySQL. To export from Hive (Hive tables are just files inside HDFS):

    $ sqoop export \
        --connect jdbc:mysql://mysql_host/db_name \
        --table orders \
        --export-dir /user/hive/warehouse/orders.db \
        --input-fields-terminated-by '\001'

Hive uses ^A (\001) as its field delimiter.

Integration: MySQL -> Hadoop (data flows from the MySQL server to the Hadoop cluster).

MySQL -> Hadoop: Sqoop

    $ sqoop import \
        --connect jdbc:mysql://mysql_host/db_name \
        --table ORDERS \
        --hive-import

MySQL to Hadoop: Hadoop Applier. Only inserts are supported. The applier reads binlogs from the replication master, just like the regular MySQL slaves do, and applies the events to the Hadoop cluster.

MySQL to Hadoop: Hadoop Applier. Download from http://labs.mysql.com/. It is an alpha version right now, and you need to write your own code for how the data is processed.

Star Schema Benchmark: a variation of TPC-H with a star-schema database design. http:///docs/wiki/benchmark:ssb:start

Cloudera Impala

Impala and Columnar Storage. Parquet: columnar storage for Hadoop. Column-oriented storage; supports compression. Star Schema Benchmark example: raw data 990GB, Parquet data 240GB. http://parquet.io/ http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_parquet.html

Impala Benchmark: Table. 1. Load the data into HDFS:

    $ hdfs dfs -put /data/ssb/lineorder* /ssb/lineorder/

http:///docs/wiki/benchmark:ssb:start

Impala Benchmark: Table. 2. Create an external table:

    CREATE EXTERNAL TABLE lineorder_text (
        LO_OrderKey bigint,
        LO_LineNumber tinyint,
        LO_CustKey int,
        LO_PartKey int,
        ...
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ' '
    LINES TERMINATED BY '\n'
    STORED AS TEXTFILE
    LOCATION '/ssb/lineorder';

http:///docs/wiki/benchmark:ssb:start

Impala Benchmark: Table. 3. Convert to Parquet:

    impala-shell> create table lineorder like lineorder_text STORED AS PARQUETFILE;
    impala-shell> insert into lineorder select * from lineorder_text;

http:///docs/wiki/benchmark:ssb:start

Impala Benchmark: SQLs. Q1.1:

    select sum(lo_extendedprice * lo_discount) as revenue
    from lineorder, dates
    where lo_orderdate = d_datekey
      and d_year = 1993
      and lo_discount between 1 and 3
      and lo_quantity < 25;

http:///docs/wiki/benchmark:ssb:start

Impala Benchmark: SQLs. Q3.1:

    select c_nation, s_nation, d_year, sum(lo_revenue) as revenue
    from customer, lineorder, supplier, dates
    where lo_custkey = c_custkey
      and lo_suppkey = s_suppkey
      and lo_orderdate = d_datekey
      and c_region = 'ASIA'
      and s_region = 'ASIA'
      and d_year >= 1992 and d_year <= 1997
    group by c_nation, s_nation, d_year
    order by d_year asc, revenue desc;

Partitioning for Impala / Hive: partitioning by year. The partitioned-by column becomes a virtual field:

    CREATE TABLE lineorder (
        LO_OrderKey bigint,
        LO_LineNumber tinyint,
        LO_CustKey int,
        LO_PartKey int,
        ...
    ) partitioned by (year int);

A query such as select * from lineorder where year = 2013 will read only the 2013 partition (much less disk IO!).
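Because the partition column is virtual, it is supplied when the data is written; a hedged sketch of populating one partition from the unpartitioned text table, assuming (as in SSB) that lo_orderdate holds yyyymmdd integer date keys:

    -- Static partition insert: "year" is stored as a directory
    -- (.../year=1993/), not inside the data files themselves.
    insert into lineorder partition (year=1993)
    select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey /* , ... */
    from lineorder_text
    where lo_orderdate between 19930101 and 19931231;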

Impala Benchmark. Setup: CDH Hadoop with Impala on 10 m1.xlarge Amazon EC2 instances. Raw data: 990GB; Parquet data: 240GB.

Impala Benchmark: scanning 1TB of row data takes 50-200 seconds on 10 xlarge nodes on Amazon EC2.

Impala vs. Hive Benchmark: Impala and Hive compared on 10 xlarge nodes on Amazon EC2.

Amazon Elastic MapReduce

Hive on Amazon S3:

    hive> create external table lineitem (
            l_orderkey int, l_partkey int, l_suppkey int, l_linenumber int,
            l_quantity double, l_extendedprice double, l_discount double,
            l_tax double, l_returnflag string, l_linestatus string,
            l_shipdate string, l_commitdate string, l_receiptdate string,
            l_shipinstruct string, l_shipmode string, l_comment string)
          row format delimited fields terminated by ' '
          location 's3n://data.s3ndemo.hive/tpch/lineitem';

Amazon Elastic MapReduce: store the data on S3; prepare an SQL file (create table, select, etc.); run Elastic MapReduce, which starts N boxes and then stops them; the results are loaded back to S3, as sketched below.
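The SQL file handed to Elastic MapReduce can write its output straight back to S3, which is what keeps the results after the transient cluster shuts down; a minimal sketch, with a hypothetical bucket name:

    -- Write query results to S3 so they outlive the EMR cluster.
    hive> insert overwrite directory 's3n://my-bucket/results/lineitem-summary/'
          select l_shipmode, count(*)
          from lineitem
          group by l_shipmode;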

Questions? Thank you! Blog: http://www.arubin.org