Introduction to Big Data Analysis with R




Introduction to Big Data Analysis with R Yung-Hsiang Huang National Center for High-performance Computing, Taiwan 2014/12/01

Agenda Big Data, Big Challenge Introduction to R Some R-Packages to Deal With Big Data Several useful R-packages Hadoop vs. R 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 2

Big Data Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them using traditional data processing applications. Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, manage, and process data within a tolerable elapsed time. Big data "size" is a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 3

Big Data (cont.) Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex, and of a massive scale. The challenges include analysis, capture, search, sharing, storage, transfer, visualization, and privacy violations. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to "spot business trends, prevent diseases, combat crime and so on." 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 4

Volume, Velocity, Variety Big data is usually characterized along three dimensions: volume, velocity, and variety. Volume: machine-generated data is produced in larger quantities than non-traditional data. Velocity: the speed at which data is generated and processed. Variety: the wide range of input data types, which in turn generates a large variety of output data. Some extend this to 4 V's or 5 V's: Value: how to generate maximum value. Veracity: the uncertainty of data. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 5


Agenda Big Data, Big Challenge Introduction to R Some R-Packages to Deal With Big Data Several useful R-packages Hadoop vs. R 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 7

What is R? One of the most powerful and most widely used statistical software environments https://www.youtube.com/watch?v=tr2bhsj_eck 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 8

What is R? R is a comprehensive statistical and graphical programming language and is a dialect of the S language: 1988 - S2: RA Becker, JM Chambers, A Wilks 1992 - S3: JM Chambers, TJ Hastie 1998 - S4: JM Chambers R: initially written by Ross Ihaka and Robert Gentleman at the Department of Statistics of the University of Auckland, New Zealand, during the 1990s. Since 1997: an international R-core team of about 15 people with access to a common CVS archive. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 9

What is R? The R statistical programming language is a free open source package based on the S language developed by Bell Labs. The language is very powerful for writing programs. Many statistical functions are already built in. Contributed packages expand the functionality to cutting edge research. Since it is a programming language, generating computer code to complete tasks is required. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 10

http://www.r-project.org R-Project Open-source: R is a free software environment for statistical computing and graphics Offers tools to manage and analyze data Standard and many more statistical methods are implemented Support via the R mailing list by members of the core team R-announce, R-packages, R-help, R-devel, http://www.r-project.org/mail.html Support via several manuals and books http://www.r-project.org/doc/bib/r-books.html http://cran.r-project.org/manuals.html 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 11

R-Project Huge online libraries with R packages CRAN: http://cran.r-project.org/ BioConductor for genomic data: http://bioconductor.org/ Omegahat: http://www.omegahat.org/ R-Forge: http://r-forge.r-project.org/ Possibility to write personalized code and to contribute new packages The New York Times (Jan. 2009), "Data Analysts Captivated by R's Power" 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 12

R vs. SAS vs. SPSS R is open source software, SAS is a commercial product; R is free and available to everyone R code is open source and can be modified by anyone R is a complete, self-contained programming language R has a big and active community Number of scholarly articles that reference each software by year, after removing the top two, SPSS and SAS. Sources: http://r4stats.com/articles/popularity/ 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 13

R Books Introduction to scientific programming and simulation using R, ISBN: 978-1420068726. Statistical Computing with R, ISBN: 978-1584885450 R Programming for Bioinformatics, ISBN: 978-1420063677 R for Business Analytics, ISBN: 978-1461443438 A Handbook of Statistical Analyses using R, ISBN: 978-1482204582 Introductory Statistics with R, Edition: 2, ISBN: 978-0387790534 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 14

Installing, Running, and Interacting with R How to get R http://www.r-project.org/ Available in Windows, Linux and Mac OS X 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 15

Installing, Running, and Interacting with R (cont.) 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 16

Installing, Running, and Interacting with R (cont.) 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 17

RStudio, an Integrated Development Environment (IDE) for R http://www.rstudio.com 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 18

RStudio (cont.) 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 19

RStudio (cont.) 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 20

R is a Calculator R will evaluate basic calculations which you type into the console (input window)
> 3+10
[1] 13
> 3/(10+3)
[1] 0.2307692
> 2^19
[1] 524288
> log(2,base=10)
[1] 0.30103
> sin(pi/2)
[1] 1
> x = 1:10
> x
 [1]  1  2  3  4  5  6  7  8  9 10
> x<5
 [1]  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
> x[3:7]
[1] 3 4 5 6 7
> x[-4]
[1] 1 2 3 5 6 7 8 9 10
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 21

R is a Graphing Device
> x = rnorm(1000,0,1)
> hist(x)
> plot(density(x))
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 22

R is a Statistics Package
> x = c(1.0, 2.3, 3.1, 4.8, 5.6, 6.5)
> y = c(2.6, 2.8, 3.1, 4.7, 5.1, 5.3)
> lm.fit = lm(y ~ x)
> summary(lm.fit)

Call:
lm(formula = y ~ x)

Residuals:
      1       2       3       4       5       6
 0.3094 -0.2312 -0.3870  0.2444  0.1886 -0.1242

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.72082    0.29508   5.832  0.00431 **
x            0.56975    0.06813   8.363  0.00112 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3201 on 4 degrees of freedom
Multiple R-squared: 0.9459, Adjusted R-squared: 0.9324
F-statistic: 69.93 on 1 and 4 DF, p-value: 0.001118
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 23

R is a Simulator
> # Simulate tossing two fair dice 2000 times and observe the sum of the dice.
> #
> # N          : the number of times to toss the two dice
> # rolls[,1]  : the N outcomes of the first die
> # rolls[,2]  : the N outcomes of the second die
> # rolls.sum  : the sum of the two dice for each toss
> # obs        : relative frequency of the observed outcomes
> # exp        : expected probability of each outcome
> N = 2000
> rolls = matrix(ceiling(6*runif(2*N)),ncol=2)
> rolls.sum = rolls[,1] + rolls[,2]
> obs = table(rolls.sum)/N
> exp = c(1:6,5:1)/36
> print(round(rbind(obs,exp),4))
         2      3      4      5      6      7      8      9     10     11     12
obs 0.0250 0.0500 0.0855 0.1045 0.1355 0.1705 0.1310 0.1175 0.0890 0.0645 0.0270
exp 0.0278 0.0556 0.0833 0.1111 0.1389 0.1667 0.1389 0.1111 0.0833 0.0556 0.0278
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 24

R is a Programming Language
> my.fun <- function(x,y) {
+   res = x+y^2
+   return(res)
+ }
> my.fun(3,5)
[1] 28
> hist.normal <- function(n,color) {
+   x = rnorm(n)
+   h = hist(x,freq=FALSE)
+   lines(density(x),col=color)
+ }
> par(mfrow=c(2,1))
> hist.normal(2000,2)  # 2 for red
> hist.normal(1000,4)  # 4 for blue
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 25

R Basics Getting Help in R Basic Usage Naming convention Type of variables Missing values Importing external file to R Exporting R data to external file Load packages and data Functions Many statistical/mathematical methods and functions Vector and Matrix Some useful functions 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 26

Getting Help in R library() lists all available libraries on the system help(command) gets help for one command, e.g. help(heatmap) help.search("topic") searches the help system for documentation associated with the topic, e.g. help.search("normal") help.start() starts the local HTML help interface q() quits the R console 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 27
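A minimal console sketch of the help commands above (output omitted); heatmap and "normal" are just example topics:

help(heatmap)          # open the help page for heatmap()
?heatmap               # shorthand for help(heatmap)
help.search("normal")  # search installed documentation for the topic "normal"
apropos("mean")        # list objects whose names contain "mean"
library()              # list the libraries installed on this system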

Basic Usage The general R command syntax uses the assignment operator <- (or =) to assign data to an object: x <- c(5,2,3,10,1); object <- function(arguments) Equivalently, assign("y", c(10,6,7,8,9)); c(1,2,3,4,5) -> z source("myscript.r") executes an R script named myscript.r. objects() or ls() list the names of all objects rm(data1) removes the object named data1 from the current environment data1 <- edit(data.frame()) starts an empty GUI spreadsheet editor for manual data entry. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 28

Basic Usage (cont.) class(object) displays the object type. str(object) displays the internal type and structure of an R object. attributes(object) returns an object's attribute list. dir() lists the content of the current working directory. getwd() returns the current working directory. setwd("d:/data") changes the current working directory to a user-specified directory. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 29
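A minimal sketch of the inspection commands above, using the built-in iris data frame as an example:

class(iris)        # "data.frame"
str(iris)          # internal structure: 150 obs. of 5 variables
attributes(iris)   # $names, $class, $row.names
getwd()            # show the current working directory
setwd("d:/data")   # change it (the directory must already exist)
dir()              # list the files in the working directory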

Naming Convention All alpha-numeric symbols are allowed plus. and _ Must start with a letter (A-Z, a-z) Case-sensitive MyData is different from mydata 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 30
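A two-line illustration of case sensitivity (hypothetical object names):

MyData <- 1:3
mydata            # Error: object 'mydata' not found -- MyData and mydata are different objects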

Type of Variables Types of objects: vector, factor, array, matrix, data.frame, ts, list Attributes Mode: numeric, character, complex, logical Length: number of elements in the object Creation Assign a value Create a blank object 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 31
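A minimal sketch creating one object of each common type listed above (names are arbitrary):

v  <- c(1.5, 2, 3)                       # numeric vector
f  <- factor(c("low", "high", "low"))    # factor
m  <- matrix(1:6, nrow = 2)              # 2 x 3 matrix
df <- data.frame(id = 1:3, val = v)      # data frame
l  <- list(v = v, m = m)                 # list
length(v); mode(v)                       # attributes: length and mode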

Missing Values R is designed to handle statistical data and is therefore predestined to deal with missing values Variables of each data type (numeric, character, logical) can also take the value NA: not available. NA is not the same as 0 NA is not the same as the empty string "" NA is not the same as FALSE NA is not the same as NULL NA, NaN, and NULL NA (Not Available): applies to many modes (character, numeric, etc.) NaN (Not a Number): applies only to numeric modes NULL: lists with zero length
> x = c(1,2,3,NA)
> x+3
[1]  4  5  6 NA
> 0/0
[1] NaN
> y = NULL
> length(y)
[1] 0
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 32

Importing External file to R Text files (ASCII) Files in other formats (Excel, SAS, SPSS, ) Data on Web pages SQL-like Databases Binary file Much more information is available in the Data Import/Export manual. read.table() Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 33

Syntax Notes read.table(file, header=FALSE, sep="", dec=".", ...)
file: the name of the file (with or without its path; the symbol \ is not allowed and must be replaced by /, even under Windows), a variable of mode character, or a remote URL (http://...)
header: a logical (FALSE or TRUE) indicating whether the file contains the names of the variables on its first line
sep: the field separator used in the file, for instance sep="\t" if it is a tabulation
quote: the characters used to quote the variables of mode character
dec: the character used for the decimal point
row.names: a vector with the names of the rows, which can be either a vector of mode character, or the number (or the name) of a variable of the file (by default: 1, 2, 3, ...)
col.names: a vector with the names of the variables (by default: V1, V2, V3, ...)
as.is: controls the conversion of character variables to factors (if FALSE) or keeps them as characters (TRUE); as.is can be a logical, numeric, or character vector specifying the variables to be kept as character
na.strings: the value given to missing data (converted to NA)
colClasses: a vector of mode character giving the classes to assign to the columns
nrows: the maximum number of lines to read (negative values are ignored)
skip: the number of lines to be skipped before reading the data
check.names: if TRUE, checks that the variable names are valid for R
fill: if TRUE and all lines do not have the same number of variables, blanks are added (conditional on sep)
strip.white: if TRUE, deletes extra spaces before and after the character variables
blank.lines.skip: if TRUE, ignores blank lines
comment.char: a character defining comments in the data file; the rest of the line after this character is ignored (to disable this argument, use comment.char="")
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 34
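A hedged example combining several of the arguments above; data.txt is a hypothetical tab-delimited file with a header line, three columns, and "?" marking missing values:

d <- read.table("data.txt", header = TRUE, sep = "\t",
                na.strings = "?", skip = 0, nrows = -1,
                colClasses = c("character", "numeric", "numeric"),
                comment.char = "#", strip.white = TRUE)
str(d)   # inspect the resulting data frame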

Data Import read.delim("clipboard", header=TRUE) scan("my_file") reads a vector/array into a vector from a file or the keyboard read.csv(file="path", header=TRUE) You can skip lines, read a limited number of lines, use a different decimal separator, and set more importing options. The foreign package can read files from Stata, SAS, and SPSS, as in the sketch below. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 35
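A minimal sketch of the foreign package and the read.csv options mentioned above; the file names are hypothetical:

library(foreign)
spss.data  <- read.spss("survey.sav", to.data.frame = TRUE)   # SPSS file
stata.data <- read.dta("survey.dta")                          # Stata file
# skip the first 5 lines of a CSV and read only 100 rows
part <- read.csv("big.csv", skip = 5, nrows = 100)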

Fixed Format Data The function read.fwf reads fixed-format data mydata <- read.fwf(file, widths, header=FALSE, sep="\t", buffersize=2000) buffersize is the maximum number of lines to read at one time.
A2.021.3
B2.523.2
C2.664.3
D1.223.2
E6.323.1
F5.115.2
G6.365.1
> mydata = read.fwf("c:/tmp/fixed.txt",widths=c(1,4,3))
> mydata
  V1   V2  V3
1  A 2.02 1.3
2  B 2.52 3.2
3  C 2.66 4.3
4  D 1.22 3.2
5  E 6.32 3.1
6  F 5.11 5.2
7  G 6.36 5.1
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 36

Exporting R Data to External File write.table(iris, "clipboard", sep="\t", col.names=NA, quote=FALSE) Command to copy & paste from R into Excel or other programs. It writes the data of an R data frame object into the clipboard, from where it can be pasted into other applications. write.table(dataframe, file="file path", sep="\t", col.names=NA) Writes a data frame to a tab-delimited text file. The argument col.names=NA makes sure that the titles align with the columns when row/index names are exported (default). write(x, file="file path") Writes matrix data to a file. sink("My_R_Output") redirects all subsequent R output to a file 'My_R_Output' without showing it in the R console anymore; sink() restores normal R output behavior. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 37
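A small end-to-end sketch of the export commands described above, writing to hypothetical files in the working directory:

write.table(iris, file = "iris.txt", sep = "\t", col.names = NA)  # tab-delimited text
write.csv(iris, file = "iris.csv", row.names = FALSE)             # CSV variant
sink("My_R_Output")   # redirect console output to a file
summary(iris)         # this output goes to the file, not the console
sink()                # restore normal console output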

Load Packages and Data GUI Mode Package(s) install package(s) Package(s) load package Command Mode
> # install.packages("onion")
> library(onion)
> data(bunny)
> head(bunny,n=3)
              x        y          z
[1,] -0.0378297 0.127940 0.00447467
[2,] -0.0447794 0.128887 0.00190497
[3,] -0.0680095 0.151244 0.03719530
> # Three-dimensional plotting of points.
> # Produces a nice-looking 3D scatterplot with
> # greying out of further points giving a visual
> # depth cue
> p3d(bunny,theta=3,phi=104,box=FALSE)
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 38

Functions Actions can be performed on objects using functions A function is itself an object Functions have arguments and options, often there are defaults Functions provide a result The parentheses () are used to specify that a function is being called
> my.fun <- function(a,b=10) {
+   ret = a+b
+   ret
+ }
> my.fun(1)
[1] 11
> my.fun(1,2)
[1] 3
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 39

Statistical/Mathematical Methods and Functions K-means Clustering
> x = rbind(matrix(rnorm(100,mean=0,sd=0.3),ncol=2),
+           matrix(rnorm(100,mean=1,sd=0.3),ncol=2))
> head(x,n=3)
            [,1]       [,2]
[1,] -0.07122656  0.3770062
[2,] -0.01977698 -0.1491674
[3,] -0.33266395  0.2015663
> dim(x)
[1] 100 2
> cl = kmeans(x,4)
> plot(x,col=cl$cluster)
> points(cl$centers,col=1:4,pch=8,cex=2)
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 40

> cl
K-means clustering with 4 clusters of sizes 16, 47, 14, 23

Cluster means:
         [,1]       [,2]
1  0.55451817 0.87965620
2 -0.03469715 0.02879535
3  1.09512837 1.45672919
4  1.17938578 0.90782872

Clustering vector:
  [1] 3 2 2 3 3 3 2 3 3 2 2 3 1 3 2 2 2 3 3 3 2 2 2 3 2 2 3 3 3 3 3 2 3 3 2 3 2 3 3 3 3 2
 [43] 3 3 2 2 3 2 2 2 4 1 3 1 4 4 4 4 1 1 4 4 1 1 1 4 1 1 1 1 4 4 1 1 1 1 4 4 1 1 1 4 1 4
 [85] 4 1 1 1 4 4 4 4 1 1 4 4 4 1 4 1

Within cluster sum of squares by cluster:
[1] 3.741233 2.055192 2.498015 2.213513
 (between_SS / total_SS = 83.8 %)

Available components:
[1] "cluster"      "centers"      "totss"        "withinss"     "tot.withinss"
[6] "betweenss"    "size"         "iter"         "ifault"
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 41

Statistical/Mathematical Methods and Functions Statistical distributions Density (d), cumulative distribution function (p), quantile function (q), and random variate generation (r). Normal: dnorm, pnorm, qnorm, rnorm Beta: dbeta, pbeta, qbeta, rbeta F: df, pf, qf, rf Some basic mathematical operators log, exp mean, median, mode, max, min, sd trigonometry set operations relational (comparison) operators: <, <=, >, >=, ==, != 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 42

> pnorm(1,mean=0,sd=1) - pnorm(-1,mean=0,sd=1)
[1] 0.6826895
> pnorm(2,mean=0,sd=1) - pnorm(1,mean=0,sd=1)
[1] 0.1359051
> qnorm(0.9785,mean=0,sd=1)
[1] 2.02371
> x = seq(-3.2,3.2,by=0.01)
> y = dnorm(x)
> x.sampling = rnorm(200,mean=0,sd=1)
> hist(x.sampling,prob=TRUE,xlim=c(-3.2,3.2),ylim=c(0,0.5),xlab="",ylab="",main="")
> lines(x,y)
> lines(density(x.sampling),col="red")
> legend("topright",c("dnorm: exact dist'n","rnorm: random number"),col=c(1,2),lty=1)
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 43

Vector and Matrix Vector: all elements are the same data type
> v1 = c(1,2,3,4,5,6)
> v2 = c("one","two","three","four","five","six")
> v3 = c(TRUE,FALSE,TRUE,FALSE,TRUE,FALSE)
> v1
[1] 1 2 3 4 5 6
> v2
[1] "one"   "two"   "three" "four"  "five"  "six"
> v3
[1]  TRUE FALSE  TRUE FALSE  TRUE FALSE
Matrix: all elements are the same data type
> m = matrix(LETTERS[1:12],nrow=3,ncol=4)
> m
     [,1] [,2] [,3] [,4]
[1,] "A"  "D"  "G"  "J"
[2,] "B"  "E"  "H"  "K"
[3,] "C"  "F"  "I"  "L"
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 44

Vector and Matrix Data Frame: different columns can have different data types List: an ordered collection of various elements
> dframe = data.frame(v1,v2,v3)
> dframe
  v1    v2    v3
1  1   one  TRUE
2  2   two FALSE
3  3 three  TRUE
4  4  four FALSE
5  5  five  TRUE
6  6   six FALSE
> l = list(n="Sean",mat=m,df=dframe)
> l
$n
[1] "Sean"

$mat
     [,1] [,2] [,3] [,4]
[1,] "A"  "D"  "G"  "J"
[2,] "B"  "E"  "H"  "K"
[3,] "C"  "F"  "I"  "L"

$df
  v1    v2    v3
1  1   one  TRUE
2  2   two FALSE
3  3 three  TRUE
4  4  four FALSE
5  5  five  TRUE
6  6   six FALSE
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 45

Some Useful Functions apply(iris[,1:3], 1, mean) Applies mean over columns 1-3 of the sample data frame 'iris'. With the margin argument set to '1', row-wise iterations are performed; with '2', column-wise iterations. tapply(iris[,4], iris$Species, mean) Calculates the mean values for the 4th column based on the grouping information in the 'Species' column of the 'iris' data frame. sapply(x, sqrt) Calculates the square root for each element in the vector x. Generates the same result as sqrt(x). lapply(X, FUN) Returns a list of the same length as X, each element of which is the result of applying FUN to the corresponding element of X. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 46

Some Useful Functions: Examples
> iris.part = iris[1:10,1:3]
> iris.part
   Sepal.Length Sepal.Width Petal.Length
1           5.1         3.5          1.4
2           4.9         3.0          1.4
3           4.7         3.2          1.3
4           4.6         3.1          1.5
5           5.0         3.6          1.4
6           5.4         3.9          1.7
7           4.6         3.4          1.4
8           5.0         3.4          1.5
9           4.4         2.9          1.4
10          4.9         3.1          1.5
> apply(iris.part,1,mean)
       1        2        3        4        5
3.333333 3.100000 3.066667 3.066667 3.333333
       6        7        8        9       10
3.666667 3.133333 3.300000 2.900000 3.166667
> apply(iris.part,2,mean)
Sepal.Length  Sepal.Width Petal.Length
        4.86         3.31         1.45
> tapply(iris[,4],iris$Species,mean)
    setosa versicolor  virginica
     0.246      1.326      2.026
> sapply(iris[,1:3],mean)
Sepal.Length  Sepal.Width Petal.Length
    5.843333     3.057333     3.758000
> apply(iris[,1:3],2,mean)
Sepal.Length  Sepal.Width Petal.Length
    5.843333     3.057333     3.758000
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 47

Agenda Big Data, Big Challenge Introduction to R Some R-Packages to Deal With Big Data Several useful R-packages Hadoop vs. R 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 48

High-Performance and Parallel Computing with R http://cran.r-project.org/web/views/highperformancecomputing.html Parallel computing: Explicit parallelism Parallel computing: Implicit parallelism Parallel computing: Grid computing Parallel computing: Hadoop Parallel computing: Random numbers Parallel computing: Resource managers and batch schedulers Parallel computing: Applications Parallel computing: GPUs Large memory and out-of-memory data 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 49

Parallel Computing Basics Serial vs. parallel tasks (diagram): Serial: the problem is broken into a discrete series of instructions and they are processed one after another. Parallel: the problem is broken into discrete parts that can be solved concurrently. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 50

Package: data.table An extension of data.frame for fast indexing, fast ordered joins, fast assignment, fast grouping, and list columns
> library(data.table)
> v1 = c(1,2,5.3,6,-2,4)
> v2 = c("one","two","three","four","five","six")
> v3 = c(T,T,T,F,T,F)
> my.datatable = data.table(v1,v2,v3)
> my.datatable
     v1    v2    v3
1:  1.0   one  TRUE
2:  2.0   two  TRUE
3:  5.3 three  TRUE
4:  6.0  four FALSE
5: -2.0  five  TRUE
6:  4.0   six FALSE
> my.datatable[2]
   v1  v2   v3
1:  2 two TRUE
> my.datatable[,v2]
[1] "one"   "two"   "three" "four"  "five"  "six"
> my.datatable[,sum(v1),by=v3]
      v3   V1
1:  TRUE  6.3
2: FALSE 10.0
> setkey(my.datatable,v2)
> my.datatable["five"]
   v1   v2   v3
1: -2 five TRUE
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 51

Package: plyr plyr provides tools for splitting, applying, and combining data Functions are named according to what sort of data structure is used (a: array, l: list, d: data.frame, m: multiple inputs, r: repeat multiple times) Provides a set of helper functions for common data analysis
> library(plyr)
> data(iris)
> head(iris,n=3)
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
> count(iris,vars="Species")
     Species freq
1     setosa   50
2 versicolor   50
3  virginica   50
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 52

Package: plyr (cont.) summarise works in an analogous way to mutate, except instead of adding columns to an existing data frame, it creates a new data frame. This is particularly useful in conjunction with ddply as it makes it easy to perform group-wise summaries.
> is.data.frame(iris)
[1] TRUE
> dim(iris)
[1] 150 5
> summary(iris)
  Sepal.Length    Sepal.Width     Petal.Length    Petal.Width          Species
 Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100   setosa    :50
 1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300   versicolor:50
 Median :5.800   Median :3.000   Median :4.350   Median :1.300   virginica :50
 Mean   :5.843   Mean   :3.057   Mean   :3.758   Mean   :1.199
 3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800
 Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500
> summarise(iris,mean_petal_length=mean(Petal.Length),
+           max_sepal_length=max(Sepal.Length))
  mean_petal_length max_sepal_length
1             3.758              7.9
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 53

Package: plyr (cont.) ddply: For each subset of a data frame, apply a function, then combine results into a data frame. daply: For each subset of a data frame, apply a function, then combine results into an array. daply with a function that operates column-wise is similar to aggregate.
> ddply(iris,.(Species),summarise,mean_petal_length=mean(Petal.Length),
+       max_sepal_length=max(Sepal.Length))
     Species mean_petal_length max_sepal_length
1     setosa             1.462              5.8
2 versicolor             4.260              7.0
3  virginica             5.552              7.9
> daply(iris[,c(1,2,5)],.(Species),colwise(mean))
           Sepal.Length Sepal.Width
setosa            5.006       3.428
versicolor        5.936       2.77
virginica         6.588       2.974
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 54

Package: RJSONIO JSON: JavaScript Object Notation, a lightweight data-interchange format
> library(RJSONIO)
> json = toJSON(list(a=c(1,2,3),name="Markus"))
> cat(json)
{
 "a": [ 1, 2, 3 ],
 "name": "Markus"
}
> robj = fromJSON(json)
> robj
$a
[1] 1 2 3

$name
[1] "Markus"
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 55

Package: bigmemory, biganalytics, foreach, ... Available on Unix-alikes, including Mac. Manage massive matrices with shared memory and memory-mapped files
> library(bigmemory)
> library(biganalytics)
> library(foreach)
> x = rbind(matrix(rnorm(100,sd=0.3),ncol=2),
+           matrix(rnorm(100,mean=1,sd=0.3),ncol=2))
> bigmatrix = as.big.matrix(x)
> res = bigkmeans(bigmatrix,3)
> res
K-means clustering with 3 clusters of sizes 27, 49, 24

Cluster means:
             [,1]         [,2]
[1,]  0.998961403  0.696210221
[2,] -0.003675359 -0.004256339
[3,]  0.897364335  1.273988875

Clustering vector:
 [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
[38] 2 2 2 1 2 2 2 2 2 2 2 2 1 1 3 3 1 3 3 1 1 3 3 1 3 1 3 1 1 3 1 3 1 3 1 1 3
[75] 3 3 3 2 1 3 1 3 3 3 1 1 1 1 1 1 1 3 1 3 1 3 1 1 3 3

Within cluster sum of squares by cluster:
[1] 3.010972 6.260258 2.399644

Available components:
[1] "cluster"  "centers"  "withinss" "size"
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 56
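foreach is loaded above but not demonstrated; a minimal sketch of explicit parallelism with foreach, using the doParallel backend (the backend is an assumption here, since the slide does not name one):

library(foreach)
library(doParallel)
cl <- makeCluster(2)            # two worker processes
registerDoParallel(cl)
res <- foreach(i = 1:4, .combine = c) %dopar% {
  mean(rnorm(1e6, mean = i))    # each iteration runs on a worker
}
res
stopCluster(cl)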

Package: parallel First version was released with R 2.14.0 Contains functionality derived from and pretty much equivalent to the multicore and snow packages.
> N = 1000
> x = list(a=rnorm(N),b=rbeta(N,2,3))
> lapply(x,mean)
$a
[1] 0.02433092

$b
[1] 0.4005343

> library(parallel)
> cl = makeCluster(2)
> parLapply(cl,x,mean)
$a
[1] 0.02433092

$b
[1] 0.4005343

> stopCluster(cl)
> mclapply(x,mean)
$a
[1] 0.02433092

$b
[1] 0.4005343
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 57

Package: Rcpp The Rcpp package provides C++ classes that greatly facilitate interfacing C or C++ code in R packages using the .Call interface provided by R. A clean, approachable API that lets you write high-performance code. Can help with loops, recursive functions, and functions with advanced data structures.
> library(Rcpp)
> cppFunction('
+   int add(int x, int y, int z) {
+     int sum = x+y+z;
+     return sum;
+   }
+ ')
> add
function (x, y, z)
.Primitive(".Call")(<pointer: 0x00000000655c1770>, x, y, z)
> add(2014,12,1)
[1] 2027
> cppFunction('
+   int fibonacci(const int x) {
+     if (x == 0) return(0);
+     if (x == 1) return(1);
+     return (fibonacci(x - 1)) + fibonacci(x - 2);
+   }
+ ')
> fibonacci(20)
[1] 6765
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 58

Package: ggplot2 ggplot2 is useful for producing complex graphics relatively simply. An implementation of the Grammar of Graphics book by Leland Wilkinson The basic notion is that there is a grammar to the composition of graphical components in statistical graphics By directly controlling that grammar, you can generate a large set of carefully constructed graphics from a relatively small set of operations A good grammar will allow us to gain insight into the composition of complicated graphics, and reveal unexpected connections between seemingly different graphics. 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 59

> library(ggplot2)
> qplot(Sepal.Length,Petal.Length,data=iris,color=Species)  # top-left
> res = qplot(Sepal.Length,Petal.Length,data=iris,color=Species,size=Petal.Width,alpha=I(0.5))
> res                                       # top-right
> res+geom_line(size=1)                     # bottom-left
> res+geom_boxplot(size=0.2,alpha=I(0.3))   # bottom-right
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 60

Shiny Easy Web Application Developed by RStudio http://www.rstudio.com/shiny/ Turn analyses into interactive web applications that anyone can use Let your users choose input parameters using friendly controls like sliders, drop-downs, and text fields Easily incorporate any number of outputs like plots, tables, and summaries No HTML or JavaScript knowledge is necessary, only R Hello World Shiny
> library(shiny)
> runExample("01_hello")
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 61
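Beyond runExample, a minimal single-file Shiny sketch (UI and server in one script); the slider name and the histogram output are arbitrary choices for illustration:

library(shiny)
ui <- fluidPage(
  sliderInput("n", "Sample size", min = 10, max = 1000, value = 100),
  plotOutput("hist")
)
server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(input$n)))  # redraws when the slider moves
}
shinyApp(ui, server)   # launches the app in a browser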

Package: bigvis Tools for exploratory data analysis of large data sets (e.g., 76 million observations) Revolution Analytics blog post: http://blog.revolutionanalytics.com/2013/04/visualize-large-data-sets-with-the-bigvis-package.html 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 62

R and Databases SQL provides a standard language to filter, aggregate, group, and sort data SQL-like query languages are showing up in new places: Hadoop Hive, ... ODBC provides a SQL interface to non-database data: Excel, CSV, text files, ... R stores relational data in data frames.
DB-Engines Ranking (http://db-engines.com/en/ranking_trend):
Rank  DBMS                  Database Model      Score    Changes
1.    Oracle                Relational DBMS     1452.13  -19.77
2.    MySQL                 Relational DBMS     1279.08  +16.11
3.    Microsoft SQL Server  Relational DBMS     1220.20   +0.59
4.    PostgreSQL            Relational DBMS      257.36   -0.36
5.    MongoDB               Document store       244.73   +4.33
6.    DB2                   Relational DBMS      206.23   -1.44
7.    Microsoft Access      Relational DBMS      138.84   -2.80
8.    SQLite                Relational DBMS       95.28   +0.33
9.    Cassandra             Wide column store     91.99   +6.29
10.   Sybase ASE            Relational DBMS       84.62   -2.17
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 63

Package: sqldf sqldf is an R package for running SQL statements on R data frames SQL statements in R use data frame names in place of table names A database with the appropriate table layouts/schema is automatically created, and the data frames are automatically loaded into the database The result is read back into R sqldf supports the SQLite back-end database (default), the H2 Java database, the PostgreSQL database, and MySQL 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 64

Package: sqldf (cont.)
> library(sqldf)
> sqldf('select * from iris limit 4')
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
4          4.6         3.1          1.5         0.2  setosa
> sqldf('select count(*) from iris')
  count(*)
1      150
> sqldf('select Species, count(*) from iris group by Species')
     Species count(*)
1     setosa       50
2 versicolor       50
3  virginica       50
> sqldf('select Species, avg("Sepal.Length") as "Sepal Avg.",
+        variance("Sepal.Width") as "Sepal Width Var." from iris group by Species')
     Species Sepal Avg. Sepal Width Var.
1     setosa      5.006       0.14368980
2 versicolor      5.936       0.09846939
3  virginica      6.588       0.10400408
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 65

Other Relational Packages RMySQL provides an interface to MySQL RPostgreSQL provides an interface to PostgreSQL ROracle provides an interface to Oracle RJDBC provides access to databases through a JDBC interface RSQLite provides access to SQLite (see the sketch below) Bottleneck when dealing with BIG DATA: all of these packages read the full result into R memory 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 66
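A hedged RSQLite sketch through the common DBI interface; the in-memory database and the "iris" table are created on the fly, so nothing external is assumed:

library(DBI)
library(RSQLite)
con <- dbConnect(SQLite(), ":memory:")   # transient in-memory SQLite database
dbWriteTable(con, "iris", iris)          # copy the data frame into a table
dbGetQuery(con, "SELECT Species, COUNT(*) AS n FROM iris GROUP BY Species")
dbDisconnect(con)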

Hadoop An open source software framework designed to support large scale data processing Map Reduce: a computational paradigm Application is divided into many small fragments of work HDFS: Hadoop Distributed File System A distributed file system that stores data on the compute nodes The Ecosystem: Hive, Pig, Flume, Mahout, Written in Java, opened up to alternatives by its Streaming API http://hadoop.apache.org/ 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 67

HDFS & Hadoop Cluster HDFS is a block-structured file system Blocks are stored across a cluster of one or more machines with data storage capacity (datanodes) Data is accessed in a write-once, read-many model HDFS does come with its own utilities for file management The HDFS file system stores its metadata reliably (namenode) 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 68

RHadoop An open source project sponsored by Revolution Analytics Packages: rmr2 hosts all MapReduce-related functions and uses the Hadoop Streaming API rhdfs for interaction with the HDFS file system (see the sketch below) plyrmr convenient processing of large data sets on a Hadoop cluster rhbase connects with Hadoop's NoSQL database HBase Installation: https://github.com/revolutionanalytics/rhadoop 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 69
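A minimal rhdfs sketch, assuming a running Hadoop cluster; the install path and file paths are hypothetical and the HADOOP_CMD setting is an assumption:

Sys.setenv(HADOOP_CMD = "/usr/local/hadoop/bin/hadoop")   # assumption: local Hadoop binary
library(rhdfs)
hdfs.init()                                     # connect to HDFS
hdfs.ls("/user")                                # list a directory
hdfs.put("local.csv", "/user/sean/local.csv")   # copy a local file into HDFS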

Dependencies of package rmr2: Rcpp, functional, bitops, caTools, RJSONIO, etc.
Simple Parallel Computing with R:
> x = 1:5
> x
[1] 1 2 3 4 5
> unlist(lapply(x, function(y) y^2))
[1]  1  4  9 16 25
> library(parallel)
> unlist(mclapply(x, function(y) y^2))
[1]  1  4  9 16 25
A MapReduce Example Job:
> library(rmr2)
> rmr.options(backend=c("local"))
NULL
> small.ints = to.dfs(keyval(1,1:100))
> out = mapreduce(
+   input=small.ints,
+   map=function(k,v) cbind(v,v^2))
> df = from.dfs(out)
> head(df$val,n=5)
     v
[1,] 1  1
[2,] 2  4
[3,] 3  9
[4,] 4 16
[5,] 5 25
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 70

Hadoop Hello World Example: Word Count 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 71

Word Count Example
> wc.map <- function(., lines) {
+   key = unlist(strsplit(x=lines,split=pattern))
+   keyval(key,1)
+ }
> wc.reduce <- function(word,counts) {
+   keyval(word,sum(counts))
+ }
> rmr.options(backend=c("local"))
NULL
> pattern = "[ \n\r]+"
> out = mapreduce(
+   input="d:/littlematchgirl.txt",
+   input.format = "text",
+   map = wc.map,
+   reduce = wc.reduce,
+   combine = T,
+   in.memory.combine = F)
> res = from.dfs(out)
> id = which(res$key=="match")
> res$val[id]
[1] 4
> rbind(res$key[6:15],res$val[6:15])
     [,1]     [,2]   [,3]    [,4]  [,5] [,6]    [,7]    [,8]     [,9]  [,10]
[1,] "numbed" "with" "cold." "Oh!" "a"  "match" "might" "afford" "her" "world"
[2,] "1"      "5"    "1"     "1"   "12" "4"     "1"     "1"      "6"   "1"
Input text ("The Little Match Girl", excerpt): Her little hands were almost numbed with cold. Oh! a match might afford her a world of comfort, if she only dared take a single one out of the bundle, draw it against the wall, and warm her fingers by it. She drew one out. "Rischt!" how it blazed, how it burnt! It was a warm, bright flame, like a candle, as she held her hands over it: it was a wonderful light. It seemed really to the little maiden as though she were sitting before a large iron stove, with burnished brass feet and a brass ornament at top. The fire burned with such blessed influence; it warmed so delightfully. The little girl had already stretched out her feet to warm them too; but--the small flame went out, the stove vanished: she had only the remains of the burnt-out match in her hand. ...(cont.)
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 72

Hadoop Environment Variables All RHadoop packages have to access Hadoop In Linux they need the correct environment variables In R you have to set them explicitly
> Sys.setenv(HADOOP_DIR="/usr/local/hadoop")
> Sys.setenv(HADOOP_STREAMING="/usr/local/hadoop/contrib/streaming/hadoop-dev-streaming.jar")
> library(rmr2)
> rmr.options(backend=c("hadoop"))
NULL
> small.ints = to.dfs(keyval(1,1:100))
> out = mapreduce(
+   input=small.ints,
+   map=function(k,v) cbind(v,v^2))
2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 73

More about RHadoop rmr2 is the easiest, most productive, most elegant way to write map-reduce jobs With rmr2, one to two orders of magnitude less code than Java With rmr2, readable, reusable, extensible map-reduce With rmr2, a great prototyping, executable-spec, and research language rmr2 is a way to work on big data sets in a way that is R-like Simple things should be simple, complex things should be possible rmr2 is not Hadoop Streaming It uses Streaming No support for every single option that Streaming has Streaming is accessible from R with no additional packages, because R can execute an external program and R scripts can read stdin and stdout Map-reduce programs written in rmr2 are not going to be the most efficient More information? Visit the Revolution Analytics webinars 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 74

Summary R is a powerful statistical tool to analyze many different kinds of data, from small to big data. R can access various types of databases R can run Hadoop jobs rmr2 runs your MapReduce jobs plyrmr makes big data management on Hadoop easy R is open source and there is a lot of community-driven development More: Revolution R Enterprise: the commercial R version Oracle R Advanced Analytics for Hadoop package pbdR: programming with big data in R (http://r-pbd.org/) 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 75

Thanks!! Any Questions? http://datacommunitydc.org/blog/2013/05/stepping-up-to-big-data-with-r-and-python/ 2014/12/01 SEAIP 2014: Intro R & Big Data (Sean) 76