Paper SAS Techniques in Processing Data on Hadoop
Donna De Capite, SAS Institute Inc., Cary, NC

ABSTRACT

Before you can analyze your big data, you need to prepare the data for analysis. This paper discusses capabilities and techniques for using the power of SAS to prepare big data for analytics. It focuses on how a SAS user can write code that runs in a Hadoop cluster and takes advantage of the massive parallel processing power of Hadoop.

WHAT IS HADOOP?

Hadoop is an open-source Apache project that was developed to solve the big data problem. How do you know you have a big data problem? In simple terms, when you have exceeded the capacity of conventional database systems, you're dealing with big data. Companies like Google, Yahoo, and Facebook were faced with big data challenges early on, and the Hadoop project resulted from their efforts to manage their data volumes. Hadoop is designed to run on a large number of machines that don't share any memory or disks, and its massively parallel architecture allows it to handle hardware failures. Data is spread across the cluster of machines using HDFS, the Hadoop Distributed File System, and is processed using MapReduce, a Java-based programming model for data processing on Hadoop.

WHY ARE COMPANIES TURNING TO HADOOP?

Put simply, companies want to take advantage of the relatively low-cost infrastructure available with a Hadoop environment. Hadoop uses lower-cost commodity hardware to store and process data. Users with a traditional storage area network (SAN) are interested in moving more of their data into a Hadoop cluster.

WHAT CAN SAS DO IN HADOOP?

SAS can process all your data on Hadoop. SAS can process your data feeds and format your data in a meaningful way so that you can do analytics. You can use SAS to query your Hive and Impala data and to run the SAS DATA step against it. This paper takes you through the steps of getting started with Hadoop.
If you already have experience with the basics of MapReduce and Pig, you can jump ahead to the more SAS-centric methods of processing your data using the SAS language.

GETTING STARTED WITH HADOOP

The configuration file is key to communicating with a Hadoop cluster. The examples in this paper use a basic configuration file; its settings should be updated to point to your specific Hadoop cluster. In this paper, the file is located at \\machine\config.xml, and a macro variable references it:

%let cfgfile="\\machine\config.xml";

Here is an example config.xml file:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://servername.demo.sas.com:8020</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>servername.demo.sas.com:8021</value>
  </property>
</configuration>

HELLO WORLD!

The "Hello World" of Hadoop is working with the Shakespeare examples on the Apache Hadoop pages. Here's an example to get started with using SAS to process this data. In this example, the Shakespeare data is placed in c:\public\tmp. The first thing that you need to do is to land your Shakespeare data out on your cluster. There is always more than one way to do things, so this is not an exhaustive list of techniques. The file system for Hadoop is called the Hadoop Distributed File System (HDFS). Data needs to be in HDFS to be processed by Hadoop.

MOVING DATA TO HDFS

/****************************************************
 * Simple example of moving a file to HDFS          *
 * filename to Hadoop cluster configuration         *
 ****************************************************/
filename cfg "\\machine\config.xml";

/****************************************************
 * create authdomain in SAS Metadata named "HADOOP" *
 * copy file from my local file system to HDFS      *
 * HDFS location is /user/sas                       *
 ****************************************************/
proc hadoop options=cfg authdomain="HADOOP" verbose;
   hdfs copyfromlocal="c:\public\tmp\hamlet.txt"
        out="/user/sas/hamlet.txt" overwrite;
run;

This example uses the AUTHDOMAIN option; if you have set up an authentication domain, you do not need to specify USERNAME= and PASSWORD=. Once the data is in HDFS, you will want to read that data and process it in Hadoop to take advantage of the parallel processing power of Hadoop.
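The classic first job on this data is a word count. Before looking at how SAS submits that job, the map, combine, and reduce stages can be sketched in plain Python as an illustration of the MapReduce logic only (this is not SAS or Hadoop code, and the sample lines are made up):

```python
from collections import Counter

def mapper(line):
    # map: emit a (word, 1) pair for every token in the line
    return [(word, 1) for word in line.split()]

def combiner(pairs):
    # combine: pre-aggregate counts on each mapper node
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

def reducer(partials):
    # reduce: merge per-node partial counts into final totals
    total = Counter()
    for part in partials:
        total.update(part)
    return total

lines = ["to be or not to be", "that is the question"]
# pretend each line was handled by a different mapper/combiner
partials = [combiner(mapper(line)) for line in lines]
word_counts = reducer(partials)
print(word_counts["to"])  # -> 2
```

The combiner is the same aggregation as the reducer, run early to cut shuffle traffic, which is exactly how the sample word-count job below wires IntSumReducer in as both combiner and reducer.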
RUNNING MAPREDUCE IN A HADOOP CLUSTER FROM SAS

PROC HADOOP can submit the sample MapReduce word-count program that is shipped with Hadoop via the following SAS code (note that the Java class names are case-sensitive):

proc hadoop options=cfg verbose authdomain="HADOOP";
   hdfs delete="/user/sas/outputs";
   mapreduce input="/user/sas/hamlet.txt"
      output="/user/sas/outputs"
      jar="c:\public\tmp\wcount.jar"
      map="org.apache.hadoop.examples.WordCount$TokenizerMapper"
      reduce="org.apache.hadoop.examples.WordCount$IntSumReducer"
      combine="org.apache.hadoop.examples.WordCount$IntSumReducer"
      outputvalue="org.apache.hadoop.io.IntWritable"
      outputkey="org.apache.hadoop.io.Text";
run;
RUNNING PIG IN A HADOOP CLUSTER FROM SAS

Apache Pig is another technique available to Hadoop users for the simple processing of data. This example does a word count using Pig Latin:

filename W2A8SBAK temp;
data _null_;
   file W2A8SBAK;
   put "A = load '/user/sas/hamlet.txt';";
   put "B = foreach A generate flatten(tokenize((chararray)$0)) as word;";
   put "C = filter B by (word matches 'The');";
   put "D = group C by word;";
   put "E = foreach D generate COUNT(C), group;";
   put "F = store E into '/user/sas/pig_thecount';";
run;

proc hadoop options=cfg verbose authdomain="HADOOP";
   pig code=W2A8SBAK;
run;

The output file is written to HDFS and shows the words and their occurrences in the source data. You can look at the Hadoop log files to see how many mappers and reducers were run on the data.

USING HDFS TO STORE SAS DATA SETS

What if you already have SAS data and want to take advantage of the commodity hardware capabilities of your Hadoop cluster? In other words, you want to use HDFS for the storage of your SAS data and run SAS against that data. SAS uses its distributed data storage format with the SAS Scalable Performance Data (SPD) Engine for Hadoop. In the example below, you move your SAS data set to HDFS and use SAS to work with that data. Currently, the processing of the data does not take place using MapReduce; this functionality uses HDFS only for storage. The SPD Engine provides parallel access to partitioned data files from the SAS client.

libname testdata spde '/user/dodeca' hdfshost=default;
libname cars base '\\sash\cardata';

proc copy in=cars out=testdata;
   select cardata;
run;

RUNNING SAS AGAINST YOUR HADOOP DATA

proc freq data=testdata.cardata;
   tables Color;
run;
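PROC FREQ here simply tabulates one column of the car data. The same one-way frequency logic can be sketched in Python for illustration (the Color values below are made-up stand-ins for the cardata table):

```python
from collections import Counter

# Hypothetical stand-in for the cardata table's Color column
colors = ["Red", "Blue", "Red", "Black", "Blue", "Red"]

freq = Counter(colors)            # one-way frequency table
total = sum(freq.values())
for color, count in sorted(freq.items()):
    # frequency and percent, the core statistics PROC FREQ reports
    print(f"{color:6} {count:5} {100 * count / total:6.2f}")
```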
Figure 1. Results from PROC FREQ

ACCESSING HIVE DATA USING SAS/ACCESS INTERFACE TO HADOOP

SAS/ACCESS Interface to Hadoop provides a LIBNAME engine to get to your Hadoop Hive data. Below is a simple example using PROC SQL:

/* CTAS: create table as select */
proc sql;
   connect to hadoop (server=duped user=myuserid);
   execute (create table myuserid_store_cnt
            row format delimited fields terminated by '\001'
            stored as textfile
            as select customer_rk, count(*) as total_orders
            from order_fact
            group by customer_rk) by hadoop;
   disconnect from hadoop;
quit;

/* simple libname statement */
libname myhdp hadoop server=duped user=myuserid;

/* Create a SAS data set from Hadoop data */
proc sql;
   create table work.join_test as
      (select c.customer_rk, o.store_id
       from myhdp.customer_dim c, myhdp.order_fact o
       where c.customer_rk = o.customer_rk);
quit;

Below is a PROC FREQ example. The Hadoop LIBNAME engine exposes standard SAS functionality such as PROC FREQ against Hadoop data.

/* Create a Hive table */
data myhdp.myuserid_class;
   set sashelp.class;
run;

/* Run PROC FREQ on the class table */
proc freq data=myhdp.myuserid_class;
   tables sex * age;
   where age > 9;
   title 'Frequency';
run;

LEVERAGING THE POWER OF SAS TO PROCESS YOUR HADOOP DATA ACROSS THE CLUSTER

In this example, SAS Enterprise Miner is used to develop a model for scoring. The results are a score.sas file and a score.xml file. This functionality can be deployed to a Hadoop cluster so that all of the processing takes place in Hadoop. To run scoring in Hadoop, there are several prerequisites. Metadata about the input data must be known; PROC HDMD can be used to register this metadata. Credentials must be provided to connect to the Hadoop cluster using the INDCONN macro variable.

Scoring Accelerator for Hadoop

SAS Enterprise Miner can be used to build models. Using the Score Export node from SAS Enterprise Miner, the scoring code can be exported so that scoring can take place on your Hadoop cluster. The code below sets up the connection to the cluster, publishes the scoring code to HDFS, and then runs the model.

%let modelnm=al32;
%let scorepath=c:\public\hadoop\sample Files\ALL\AL32;
%let metadir=%str(/user/hadoop/meta);
%let ds2dir=%str(/user/hadoop/ds2);
%let INDCONN=%str(HADOOP_CFG=&config USER=&user PASSWORD=&pw);

%indhd_publish_model(
   dir=&scorepath.,
   datastep=score.sas.,
   xml=score.xml.,
   modeldir=&ds2dir.,
   modelname=&modelnm.,
   action=replace,
   trace=yes);

%indhd_run_model(
   inmetaname=&metadir./al32.sashdmd,
   outdatadir=&datadir./&modelnm.out,
   outmetadir=&metadir./&modelnm.out.sashdmd,
   scorepgm=&ds2dir./&modelnm./&modelnm..ds2,
   forceoverwrite=false,
   showproc=yes, /* showproc and trace are debug options */
   trace=yes);

/* Select the values from the scoring results on Hadoop */
proc sql;
   select em_classification from gridlib.&modelnm.out;
quit;

At this point, the scoring algorithm runs through all of the data and provides a score for each record. The processing all takes place in Hadoop. The SAS term "in-database" is a carry-over from the days of putting the processing power of SAS into databases. SAS embeds the process into the Hadoop cluster so that the cluster does all the work; there is no data movement back to SAS. SAS also provides the ability to embed user-written code into Hadoop through the SAS Code Accelerator for Hadoop. This functionality is currently in development and is due to be released in the summer of 2014.

RUNNING USER-WRITTEN SAS CODE

The SAS embedded process technology offers the ability for a user to run SAS code in the cluster and take advantage of the processing power of Hadoop. The first offering from SAS was the SAS Scoring Accelerator for Hadoop. The next offering (target release date around summer 2014) is the SAS Code Accelerator for Hadoop.

Code Accelerator

The example below spreads the data across the cluster using BY-group processing and uses first./last. dot notation to output the result. This technique takes advantage of Hadoop processing power to do all of the calculations in the mapper stage.

libname hdptgt hadoop server=&server port=10000 schema=sample
   user=hive password=hive config="&hadoop_config_file";

proc ds2 indb=yes;
   thread tpgm / overwrite=yes;
      declare double highscore;
      keep gender highscore;
      retain highscore;
      method run();
         set hdptgt.myinputtable;
         by gender;
         if first.gender then highscore=0;
         if score1 <= 100 and score1 > highscore then highscore=score1;
         if last.gender then output;
      end;
   endthread;
   ds2_options trace;
run;

   data hdptgt.myoutputscore(overwrite=yes);
      dcl thread tpgm tdata;
      method run();
         set from tdata;
      end;
   enddata;
run;
quit;
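The first./last. BY-group logic in the thread program above can be sketched in ordinary Python. The rows below are made-up stand-ins for myinputtable, and this is an illustration of the logic only, not DS2:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical stand-in for hdptgt.myinputtable, already sorted by gender
rows = [
    {"gender": "F", "score1": 88},
    {"gender": "F", "score1": 95},
    {"gender": "M", "score1": 72},
    {"gender": "M", "score1": 101},  # > 100, excluded by the filter
    {"gender": "M", "score1": 99},
]

results = []
for gender, group in groupby(rows, key=itemgetter("gender")):
    highscore = 0                    # first.gender: reset the retained value
    for row in group:
        if row["score1"] <= 100 and row["score1"] > highscore:
            highscore = row["score1"]
    results.append({"gender": gender, "highscore": highscore})  # last.gender: output

print(results)
```

Each BY group is independent, which is why the DS2 thread program can compute every group's high score in the mapper stage with no cross-group communication.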
Figure 2. Result from the SAS Code Accelerator Program

PROCESSING FILES ON HADOOP TO LOAD SAS LASR ANALYTIC SERVER

The HDMD procedure enables you to register metadata about your HDFS files on Hadoop so that they can be easily processed using SAS. The example below uses the Hadoop engine to parallel-load your file data into the SAS LASR Analytic Server. This same technique can be used whenever you are working with Hadoop file data.

/* The HADOOP libname points to your Hadoop cluster */
libname hdp hadoop
   /* connection options */
   config = &cfgfile
   hdfs_datadir = '/user/demo/files'
   hdfs_tempdir = '/user/demo/tmp'
   hdfs_metadir = '/user/demo/meta'
   hdfs_permdir = '/user/demo/perm';

/* Start up a LASR Analytic Server on port 1314 */
options set=gridhost="servername.demo.sas.com";
options set=gridinstallloc="/opt/tkgrid";
proc lasr create port=1314 path="/tmp" noclass;
   performance nodes=all;
run;

/* Use PROC HDMD to create a definition of the comma-delimited movies.txt file. */
/* This proc does not read the data; it just defines the structure of the data. */
proc hdmd name=hdp.movies
   format=delimited
   encoding=utf8
   sep=','
   text_qualifier='"'
   data_file='movies.txt'
   input_format='org.apache.hadoop.mapred.TextInputFormat';
   column userid int;
   column itemid int;
   column rating int;
   column timeid int;
run;

/* Parallel-load the movies data into the LASR Analytic Server */
proc lasr add port=1314 data=hdp.movies;
run;

/* Terminate the LASR server on port 1314 */
proc lasr term port=1314;
run;

SAS DATA DIRECTOR
The SAS Data Director (target release date around summer 2014) is a visual client that allows for processing Hadoop data on the cluster. It provides a point-and-click interface that makes it easy to work with Hadoop data. SAS Data Director is a code generator; depending on the type of transformation required, different code-generation techniques are used. SAS Data Director uses native MapReduce, the SAS Code Accelerator for DS2, SAS/ACCESS Interface to Hadoop for Hive queries, and Apache Sqoop. Below are a few examples of the type of processing provided by SAS Data Director.

Sqoop Data In and Out of Hadoop

Using SAS Data Director, the user selects the location of the data to copy and the target location. A fast parallel load is then initiated to copy the data from the relational database into Hadoop.

Figure 3. SAS Data Director Copies Data to Hadoop Using Sqoop

SAS Data Director also transforms the data on your Hadoop cluster (using filter, subset, join, and sort).
Figure 4. Example of SAS Data Director Transforming Data in Hadoop

Easy Movement of Data to SAS LASR Analytic Server and SAS Visual Analytics Server

Parallel loads into SAS LASR Analytic Server and SAS Visual Analytics Server are easy with SAS Data Director. Specify the source and the LASR target, and the data is copied into SAS LASR Analytic Server.

Figure 5. Example of SAS Data Director Copying Hadoop Data to SAS LASR Analytic Server
HELPFUL TECHNIQUES TO WORK WITH HADOOP DATA

The HDFS housekeeping routines that are provided by PROC HADOOP are handy for verifying that the directory structure is in place or that old data is cleared out before running your SAS Hadoop job. When preparing my Hadoop environment for scoring, I relied on PROC HADOOP to get my directory structure in place.

proc hadoop options="\\sashq\root\u\sasdsv\dev\hadoop\cdh431d1_config\core-site.xml"
      user="&dbuser" password="&dbpw";
   hdfs mkdir="/user/dodeca/meta";
   hdfs mkdir="/user/dodeca/temp";
   hdfs mkdir="/user/dodeca/ds2";
   hdfs copyfromlocal="c:\public\scoring.csv"
        out="/user/dodeca/data/scoring.csv";
run;

CONCLUSION

SAS continues to provide more features for processing data in the Hadoop landscape. Most recently, SAS can take advantage of the processing power of Hadoop with its embedded process technology. SAS will continue to offer more features as the Hadoop ecosystem delivers more functionality. For example, SAS/ACCESS Interface to Impala was released in early 2014.

RECOMMENDED READING

Secosky, Jason, et al. 2014. "Parallel Data Preparation with the DS2 Programming Language." Proceedings of the SAS Global Forum 2014 Conference. Cary, NC: SAS Institute Inc.

Rausch, Nancy, et al. 2014. "What's New in SAS Data Management." Proceedings of the SAS Global Forum 2014 Conference. Cary, NC: SAS Institute Inc.

Rausch, Nancy, et al. 2013. "Best Practices in SAS Data Management for Big Data." Proceedings of the SAS Global Forum 2013 Conference. Cary, NC: SAS Institute Inc.

CONTACT INFORMATION

Your comments and questions are valued and encouraged. Contact the author at:

Donna De Capite
SAS Institute Inc.
100 Campus Drive
Cary, NC
Work Phone: (919)
Fax: (919)
[email protected]
Web: support.sas.com
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.
SAS Data Set Encryption Options
Technical Paper SAS Data Set Encryption Options SAS product interaction with encrypted data storage Table of Contents Introduction: What Is Encryption?... 1 Test Configuration... 1 Data... 1 Code... 2
Document Type: Best Practice
Global Architecture and Technology Enablement Practice Hadoop with Kerberos Deployment Considerations Document Type: Best Practice Note: The content of this paper refers exclusively to the second maintenance
Hadoop implementation of MapReduce computational model. Ján Vaňo
Hadoop implementation of MapReduce computational model Ján Vaňo What is MapReduce? A computational model published in a paper by Google in 2004 Based on distributed computation Complements Google s distributed
Apache Hadoop. Alexandru Costan
1 Apache Hadoop Alexandru Costan Big Data Landscape No one-size-fits-all solution: SQL, NoSQL, MapReduce, No standard, except Hadoop 2 Outline What is Hadoop? Who uses it? Architecture HDFS MapReduce Open
Chapter 11 Map-Reduce, Hadoop, HDFS, Hbase, MongoDB, Apache HIVE, and Related
Chapter 11 Map-Reduce, Hadoop, HDFS, Hbase, MongoDB, Apache HIVE, and Related Summary Xiangzhe Li Nowadays, there are more and more data everyday about everything. For instance, here are some of the astonishing
Data Domain Profiling and Data Masking for Hadoop
Data Domain Profiling and Data Masking for Hadoop 1993-2015 Informatica LLC. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or
I/O Considerations in Big Data Analytics
Library of Congress I/O Considerations in Big Data Analytics 26 September 2011 Marshall Presser Federal Field CTO EMC, Data Computing Division 1 Paradigms in Big Data Structured (relational) data Very
Improving Your Relationship with SAS Enterprise Guide
Paper BI06-2013 Improving Your Relationship with SAS Enterprise Guide Jennifer Bjurstrom, SAS Institute Inc. ABSTRACT SAS Enterprise Guide has proven to be a very beneficial tool for both novice and experienced
International Journal of Advancements in Research & Technology, Volume 3, Issue 2, February-2014 10 ISSN 2278-7763
International Journal of Advancements in Research & Technology, Volume 3, Issue 2, February-2014 10 A Discussion on Testing Hadoop Applications Sevuga Perumal Chidambaram ABSTRACT The purpose of analysing
CSE-E5430 Scalable Cloud Computing Lecture 2
CSE-E5430 Scalable Cloud Computing Lecture 2 Keijo Heljanko Department of Computer Science School of Science Aalto University [email protected] 14.9-2015 1/36 Google MapReduce A scalable batch processing
SEIZE THE DATA. 2015 SEIZE THE DATA. 2015
1 Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. BIG DATA CONFERENCE 2015 Boston August 10-13 Predicting and reducing deforestation
Bringing the Power of SAS to Hadoop
Bringing the Power of SAS to Hadoop Combine SAS World-Class Analytic Strength with Hadoop s Low-Cost, High-Performance Data Storage and Processing to Get Better Answers, Faster WHITE PAPER SAS White Paper
A very short Intro to Hadoop
4 Overview A very short Intro to Hadoop photo by: exfordy, flickr 5 How to Crunch a Petabyte? Lots of disks, spinning all the time Redundancy, since disks die Lots of CPU cores, working all the time Retry,
Big Data on Microsoft Platform
Big Data on Microsoft Platform Prepared by GJ Srinivas Corporate TEG - Microsoft Page 1 Contents 1. What is Big Data?...3 2. Characteristics of Big Data...3 3. Enter Hadoop...3 4. Microsoft Big Data Solutions...4
Big Data Open Source Stack vs. Traditional Stack for BI and Analytics
Big Data Open Source Stack vs. Traditional Stack for BI and Analytics Part I By Sam Poozhikala, Vice President Customer Solutions at StratApps Inc. 4/4/2014 You may contact Sam Poozhikala at [email protected].
Cloudera Enterprise Reference Architecture for Google Cloud Platform Deployments
Cloudera Enterprise Reference Architecture for Google Cloud Platform Deployments Important Notice 2010-2015 Cloudera, Inc. All rights reserved. Cloudera, the Cloudera logo, Cloudera Impala, Impala, and
MySQL and Hadoop: Big Data Integration. Shubhangi Garg & Neha Kumari MySQL Engineering
MySQL and Hadoop: Big Data Integration Shubhangi Garg & Neha Kumari MySQL Engineering 1Copyright 2013, Oracle and/or its affiliates. All rights reserved. Agenda Design rationale Implementation Installation
An Oracle White Paper November 2010. Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics
An Oracle White Paper November 2010 Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics 1 Introduction New applications such as web searches, recommendation engines,
Cloudera Manager Training: Hands-On Exercises
201408 Cloudera Manager Training: Hands-On Exercises General Notes... 2 In- Class Preparation: Accessing Your Cluster... 3 Self- Study Preparation: Creating Your Cluster... 4 Hands- On Exercise: Working
Hadoop MapReduce and Spark. Giorgio Pedrazzi, CINECA-SCAI School of Data Analytics and Visualisation Milan, 10/06/2015
Hadoop MapReduce and Spark Giorgio Pedrazzi, CINECA-SCAI School of Data Analytics and Visualisation Milan, 10/06/2015 Outline Hadoop Hadoop Import data on Hadoop Spark Spark features Scala MLlib MLlib
Hadoop Introduction. Olivier Renault Solution Engineer - Hortonworks
Hadoop Introduction Olivier Renault Solution Engineer - Hortonworks Hortonworks A Brief History of Apache Hadoop Apache Project Established Yahoo! begins to Operate at scale Hortonworks Data Platform 2013
BIG DATA HANDS-ON WORKSHOP Data Manipulation with Hive and Pig
BIG DATA HANDS-ON WORKSHOP Data Manipulation with Hive and Pig Contents Acknowledgements... 1 Introduction to Hive and Pig... 2 Setup... 2 Exercise 1 Load Avro data into HDFS... 2 Exercise 2 Define an
SAS Grid Manager Testing and Benchmarking Best Practices for SAS Intelligence Platform
SAS Grid Manager Testing and Benchmarking Best Practices for SAS Intelligence Platform INTRODUCTION Grid computing offers optimization of applications that analyze enormous amounts of data as well as load
Getting Started with Hadoop. Raanan Dagan Paul Tibaldi
Getting Started with Hadoop Raanan Dagan Paul Tibaldi What is Apache Hadoop? Hadoop is a platform for data storage and processing that is Scalable Fault tolerant Open source CORE HADOOP COMPONENTS Hadoop
Lecture 10 - Functional programming: Hadoop and MapReduce
Lecture 10 - Functional programming: Hadoop and MapReduce Sohan Dharmaraja Sohan Dharmaraja Lecture 10 - Functional programming: Hadoop and MapReduce 1 / 41 For today Big Data and Text analytics Functional
Introduction to NoSQL Databases and MapReduce. Tore Risch Information Technology Uppsala University 2014-05-12
Introduction to NoSQL Databases and MapReduce Tore Risch Information Technology Uppsala University 2014-05-12 What is a NoSQL Database? 1. A key/value store Basic index manager, no complete query language
Beyond Web Application Log Analysis using Apache TM Hadoop. A Whitepaper by Orzota, Inc.
Beyond Web Application Log Analysis using Apache TM Hadoop A Whitepaper by Orzota, Inc. 1 Web Applications As more and more software moves to a Software as a Service (SaaS) model, the web application has
Information Architecture
The Bloor Group Actian and The Big Data Information Architecture WHITE PAPER The Actian Big Data Information Architecture Actian and The Big Data Information Architecture Originally founded in 2005 to
Hadoop for MySQL DBAs. Copyright 2011 Cloudera. All rights reserved. Not to be reproduced without prior written consent.
Hadoop for MySQL DBAs + 1 About me Sarah Sproehnle, Director of Educational Services @ Cloudera Spent 5 years at MySQL At Cloudera for the past 2 years [email protected] 2 What is Hadoop? An open-source
