Big Data Analysis with Revolution R Enterprise
Big Data Analysis with Revolution R Enterprise
August 2010
Joseph B. Rickert
Copyright 2010 Revolution Analytics, Inc. All Rights Reserved.
Background

The R language is well established as the language for doing statistics, data analysis, data-mining algorithm development, stock trading, credit risk scoring, market basket analysis and all manner of predictive analytics. However, given the deluge of data that must be processed and analyzed today, many organizations have been reticent about deploying R beyond research into production applications. The main barrier is that R is a memory-bound language: all data used in calculations (vectors, matrices, lists, data frames, and so forth) must be held in memory. Even for modern computers with 64-bit address spaces and huge amounts of RAM, dealing with data sets that are tens of gigabytes and hundreds of millions of rows (or larger) can present a significant challenge.

The problem isn't just one of capacity, that is, simply being able to accommodate the data in memory for analysis. For mission-critical applications, performance is also a prime consideration: if the overnight analysis does not complete in time for the open of business the next day, that's just as much of a failure as an out-of-memory error. And with data set sizes growing rapidly, scalability is also a concern: even if the in-memory analysis completes today, the IT manager still needs the confidence that the production run will complete on time as the data set grows.

Revolution Analytics has addressed these capacity, performance and scalability challenges with its Big Data initiative to extend the reach of R into the realm of production data analysis with terabyte-class data sets. This paper describes Revolution Analytics' new add-on package, RevoScaleR, which provides unprecedented levels of performance and capacity for statistical analysis in the R environment. For the first time, R users can process, visualize and model their largest data sets in a fraction of the time of legacy systems, without the need to deploy expensive or specialized hardware.

Limitations of In-memory Data Analysis

As mentioned above, R requires all data to be loaded into memory for processing. This design feature limits the size of files that can be analyzed on a modest desktop computer. For example, in the following line of code the data frame mydata contains 5,000,000 rows and three columns, only a very small subset of the airline data file that will be examined later in this paper. The error message from the R interpreter shows that even this relatively small file cannot be processed in the R environment on a PC with 3 gigabytes of memory.

> lm(ArrDelay ~ UniqueCarrier + DayOfWeek, data=mydata)
Error: cannot allocate vector of size ... Mb
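To see the scale of the capacity problem, consider a back-of-the-envelope calculation (ours, not from the paper): R stores numeric data as 8-byte doubles, so simply holding the full airline file examined below in memory, before counting the working copies an analysis makes, would require roughly 27 GB.

# Rough RAM estimate for the 123,534,969-row, 29-column airline file,
# assuming 8 bytes per value; analyses typically need several times more
# because R copies objects during computation.
rows <- 123534969
cols <- 29
rows * cols * 8 / 2^30   # about 26.7 gigabytes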
Of course, a high-end PC or server configured with much more memory and running a 64-bit version of R can go a considerable way toward extending the memory barrier. But even when the data can fit into memory, the performance of standard R on large files can be prohibitively slow. Really breaking through the memory/performance barrier requires implementing external memory algorithms and data structures that explicitly manage data placement and movement. [2] RevoScaleR brings parallel external memory algorithms and a new, very efficient data file format to R.

RevoScaleR Features

The RevoScaleR package provides a mechanism for scaling the R language to handle very large data sets. There are three major components to this package:

- A new file format especially designed for large files,
- External memory implementations of the statistical algorithms most commonly used with large data sets, and
- An extensible programming framework that allows R (and later C++) programmers to write their own external memory algorithms that can take advantage of Revolution R Enterprise's new Big Data capabilities.

RevoScaleR XDF File Format

RevoScaleR provides a new data file type, with extension .xdf, that has been optimized for data chunking: accessing parts of an XDF file for independent processing. XDF files store data in a binary format. Methods for accessing these files may use either horizontal (rows) or vertical (columns) block partitioning. The file format provides very fast access to a specified set of rows for a specified set of columns. New rows and columns can be added to the file without rewriting the entire file. RevoScaleR also provides a new R class, RxDataSource, that has been designed to support the use of external memory algorithms with .xdf files. Table 1 lists the key functions for working with XDF files.

Table 1 - Key XDF Functions

Function          Description
rxDataFrameToXdf  Write an in-memory data frame to an XDF file
rxDataStepXdf     Transform data from an input XDF file to an output XDF file
rxSetVarInfoXdf   Modify the variable information for an XDF file, including names, descriptions and factor labels
rxGetVarInfoXdf   Get the variable information for an XDF file, including column names, descriptions and factor labels
rxReadXdf         Read data from an XDF file into a data frame
rxGetInfoXdf      Get information about an XDF file
rxTextToXdf       Convert text files into an XDF file

Table 2 lists some of the available low-level functions for working with chunks of data. The RevoScaleR: Getting Started Guide presents detailed examples for working with these functions. [1]

Table 2 - RxDataSource Functions

Function     Description
rxOpenData   Opens the data file for reading and returns a handle
rxReadNext   Reads the next chunk of data from the specified file using the data handle
rxReadAll    Reads all of the data from the file specified by the data handle and creates a data frame
rxCloseData  Closes the data file specified by the data handle
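These four functions are enough to express a simple external memory computation: open the file, process one chunk at a time, accumulate results, and close the handle. The sketch below illustrates the pattern for a running mean of ArrDelay. It assumes dataName holds the path to an XDF file, as in Code Block 2 below, and that rxReadNext returns each chunk as a data frame and an empty result at end of file; the Getting Started Guide [1] spells out the exact conventions.

# External memory pattern using the Table 2 functions: accumulate a sum and
# count of ArrDelay one chunk at a time. The return conventions of
# rxReadNext are assumptions; see the Getting Started Guide [1].
library(RevoScaleR)
handle <- rxOpenData(dataName)    # open the .xdf file and get a handle
total <- 0
n <- 0
repeat {
  chunk <- rxReadNext(handle)     # next chunk as a data frame
  if (is.null(chunk) || nrow(chunk) == 0) break
  total <- total + sum(chunk$ArrDelay, na.rm=TRUE)
  n <- n + sum(!is.na(chunk$ArrDelay))
}
rxCloseData(handle)
total / n                         # mean arrival delay over the whole file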
Statistical Algorithms

The RevoScaleR package also contains parallel external memory implementations of the most common algorithms used to analyze very large data files. These functions, which are listed in Table 3, represent the initial release of the RevoScaleR package. Additional algorithms are planned for subsequent releases.

Table 3 - Statistical Functions in RevoScaleR

Function     Description
rxSummary    Computes summary statistics on data in an XDF file or in a data frame
rxCrossTabs  Creates contingency tables by cross-classifying factors in an XDF file or data frame. This function permits multi-dimensional cross-classification
rxLinMod     Fits linear models from data residing in an XDF file or data frame. rxLinMod fits both simple and multiple regression models
rxLogit      Fits binomial, logistic regression models from data residing in an XDF file or data frame
rxPredict    Computes predicted values and residuals for the models created by rxLinMod and rxLogit

All of the RevoScaleR statistical functions produce objects that may be used as input to standard R functions.

Extensible Programming Framework

Advanced R users can write their own functions to exploit the capabilities of XDF files and RxDataSource objects. Any statistical or data-mining function that can work on chunks of data and be implemented as an external memory algorithm is a candidate for a RevoScaleR implementation. This R programming interface is available in the initial release of RevoScaleR; a subsequent release will add a C++ programming interface.

RevoScaleR Examples

This section illustrates many of the capabilities of the RevoScaleR package through an extended example that uses the moderately large airline data file AirlineData87to08. This file contains flight arrival and departure details for all commercial flights within the United States for the period October 1987 through April 2008. The file contains 123,534,969 rows and 29 columns (variables). It requires 13,280,963 KB (over 13 GB) to store, uncompressed, on disk.

The first block of code immediately below (Code Block 1) loads the RevoScaleR library. For a more robust computer, the number of cores can also be set appropriately using the rxOptions function. (For the Intel Xeon dual 6-core processor server used in the benchmarks below, setting this value to 12, the number of physical cores, produced optimal results.)

Code Block 1
library(RevoScaleR)
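On a multi-core machine, the core-count setting mentioned above is a single additional line; this sketch simply pairs Code Block 1 with the rxOptions call that appears in Code Block 8 later in the paper:

# Use the 12 physical cores of the benchmark server; omit this line on the
# dual-core laptop.
rxOptions(numCoresToUse=12)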
Code Block 2 accesses the file stored on disk in the XDF format, displays information about the file and its variables (see Table 4), and produces a summary (Table 5) for two variables: ArrDelay and DayOfWeek.

Code Block 2
defaultDataDir <- "C:/Users/.../revoAnalytics"
dataName <- file.path(defaultDataDir, "AirlineData87to08")
rxGetInfoXdf(dataName, getVarInfo=TRUE)
# Get summary data
rxSummary(~ArrDelay:DayOfWeek, data=dataName)

Table 4 - Information and Variables in Airline File

> rxGetInfoXdf(dataName, getVarInfo=TRUE)
Name: C:\Users\...\AirlineData87to08.xdf
Number of rows: 123534969
Number of variables: 29
Number of blocks: 832
Variable Information:
Var 1: Year, Type: factor, 22 factor levels
Var 2: Month, Type: factor, 12 factor levels: January February March April May ... August September October November December
Var 3: DayofMonth, Type: factor, 31 factor levels
Var 4: DayOfWeek, Type: factor, 7 factor levels: Monday Tuesday Wednesday Thursday Friday Saturday Sunday
Var 5: DepTime, Type: numeric, Storage: float32, Low/High: (0.0167, ...)
Var 6: CRSDepTime, Type: numeric, Storage: float32, Low/High: (0.0000, ...)
Var 7: ArrTime, Type: numeric, Storage: float32, Low/High: (0.0167, ...)
Var 8: CRSArrTime, Type: numeric, Storage: float32, Low/High: (0.0000, ...)
Var 9: UniqueCarrier, Type: factor, 29 factor levels: 9E AA AQ AS B6 ... UA US WN XE YV
Var 10: FlightNum, Type: factor, 8160 factor levels
Var 11: TailNum, Type: factor: NA N7298U N7449U N7453U N7288U ... N516AS N763JB N766JB N75428 N75429
Var 12: ActualElapsedTime, Type: integer, Low/High: (-719, 1883)
Var 13: CRSElapsedTime, Type: integer, Low/High: (-1240, 1613)
Var 14: AirTime, Type: integer, Low/High: (-3818, 3508)
Var 15: ArrDelay, Type: integer, Low/High: (-1437, 2598)
Var 16: DepDelay, Type: integer, Low/High: (-1410, 2601)
Var 17: Origin, Type: factor, 347 factor levels: SAN SFO BUR OAK LAX ... ROW GCC RKS MKG OTH
Var 18: Dest, Type: factor, 352 factor levels: SFO RNO OAK BUR LAX ... PIR GCC RKS MKG OTH
Var 19: Distance, Type: integer, Low/High: (0, 4983)
Var 20: TaxiIn, Type: integer, Low/High: (0, 1523)
Var 21: TaxiOut, Type: integer, Low/High: (0, 3905)
Var 22: Cancelled, Type: logical, Storage: uchar, Low/High: (0, 1)
Var 23: CancellationCode, Type: factor, 5 factor levels: NA carrier weather NAS security
Var 24: Diverted, Type: logical, Storage: uchar, Low/High: (0, 1)
Var 25: CarrierDelay, Type: integer, Low/High: (0, 2580)
Var 26: WeatherDelay, Type: integer, Low/High: (0, 1510)
Var 27: NASDelay, Type: integer, Low/High: (-60, 1392)
Var 28: SecurityDelay, Type: integer, Low/High: (0, 533)
Var 29: LateAircraftDelay, Type: integer, Low/High: (0, 1407)

Table 5 - Summary of variables ArrDelay and DayOfWeek

Summary Statistics for: ArrDelay:DayOfWeek (Total: ..., Missing: ...)
Name                Mean  StdDev  Min  Max  ValidObs
ArrDelay:DayOfWeek   ...    ...   ...  ...    ...
Statistics by category (7 categories):
Category                          Means  StdDev  Min  Max  ValidObs
ArrDelay for DayOfWeek=Monday       ...
ArrDelay for DayOfWeek=Tuesday      ...
ArrDelay for DayOfWeek=Wednesday    ...
ArrDelay for DayOfWeek=Thursday     ...
ArrDelay for DayOfWeek=Friday       ...
ArrDelay for DayOfWeek=Saturday     ...
ArrDelay for DayOfWeek=Sunday       ...

Next we perform a simple linear regression of ArrDelay against DayOfWeek using the cube option of rxLinMod, the RevoScaleR function for fitting linear models. When cube is set to TRUE and the first explanatory variable in the regression model is categorical, rxLinMod uses a partitioned-inverse algorithm to fit the model. This algorithm can be faster and uses less memory than the default inverse algorithm. Code Block 3 shows the code for fitting the model and uses standard R functions to obtain the results of the regression (Table 6) and to plot arrival delay by day of week (Figure 1). Note that the p-values calculated by the standard t-test are extremely small, as would be expected when fitting such a simple model to such a large data set. Also note that the cube option produces a data frame as output that shows the counts of the data points that went into producing each coefficient (Table 7).

Code Block 3
# Fit a linear model with the cube option
arrDelayLm1 <- rxLinMod(ArrDelay ~ -1 + DayOfWeek, data=dataName, cube=TRUE)
summary(arrDelayLm1)    # Use the standard R function summary
arrDelayLm1$countDF     # Look at the output from the cube option
# Plot arrival delay by day of week
xyplot(ArrDelay ~ DayOfWeek, data=arrDelayLm1$countDF,
       type="l", lwd=3, pch=c(16,17), auto.key=TRUE)

Table 6 - Output of summary

rxLinMod.formula(formula = ArrDelay ~ -1 + DayOfWeek, data = dataName, cube = TRUE)
Coefficients:
                     Estimate  Std. Error  t value  Pr(>|t|)
DayOfWeek.Monday        ...        ...       ...     ...E-16
DayOfWeek.Tuesday       ...        ...       ...     ...E-16
DayOfWeek.Wednesday     ...        ...       ...     ...E-16
DayOfWeek.Thursday      ...        ...       ...     ...E-16
DayOfWeek.Friday        ...        ...       ...     ...E-16
DayOfWeek.Saturday      ...        ...       ...     ...E-16
DayOfWeek.Sunday        ...        ...       ...     ...E-16
Residual standard error: ... on ... degrees of freedom
Multiple R-squared: ..., Adjusted R-squared: ..., F-statistic: ...

Figure 1 - Average Arrival Delay by Day of Week (line plot of ArrDelay against DayOfWeek, Monday through Sunday)
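Why does the no-intercept formula ArrDelay ~ -1 + DayOfWeek pair so naturally with the cube option? With a single categorical predictor and no intercept, each coefficient is simply the within-category mean, so the countDF output in Table 7 below doubles as a table of average delays. A small in-memory illustration with standard R (our example, not from the paper):

# With no intercept and one factor, lm's coefficients are the group means.
set.seed(1)
days <- c("Mon","Tue","Wed","Thu","Fri","Sat","Sun")
toy <- data.frame(delay = rnorm(700, mean = rep(1:7, each = 100)),
                  day = factor(rep(days, each = 100), levels = days))
coef(lm(delay ~ -1 + day, data = toy))   # one coefficient per day...
tapply(toy$delay, toy$day, mean)         # ...equal to the per-day means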
Table 7 - Output from cube option

> arrDelayLm1$countDF
  DayOfWeek  ArrDelay  Counts
1 Monday        ...      ...
2 Tuesday       ...      ...
3 Wednesday     ...      ...
4 Thursday      ...      ...
5 Friday        ...      ...
6 Saturday      ...      ...
7 Sunday        ...      ...

The RevoScaleR package also contains a predict function for computing the predicted values and residuals of a linear model. The rxPredict function in Code Block 4 appends the variables ArrDelay_Pred and ArrDelay_Resid as columns 30 and 31, respectively, to the airline data file. Figure 2, which shows 9 plots of 1000 residuals each, randomly selected from the file AirlineData87to08, illustrates how the residuals are available to R for further analysis.

Code Block 4
# The predict function computes predicted values and residuals
rxPredict(modelObject=arrDelayLm1, data=dataName, computeResiduals=TRUE)
par(mfrow=c(3,3))
start <- runif(9, 1, 123534969 - 1000)   # 9 random start rows
for (i in 1:9) {
  residualDF <- rxReadXdf(file=dataName, varsToKeep="ArrDelay_Resid",
                          startRow=start[i], numRows=1000)
  plot(residualDF$ArrDelay_Resid)
}

Figure 2 - Nine plots of 1000 residuals each, drawn from random locations in the file
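Because ArrDelay_Resid now lives in the XDF file like any other column, the external memory statistical functions can operate on it directly. For example, a one-line check that the residuals average to zero, a sketch using the rxSummary function from Table 3:

# Summarize the residual column appended by rxPredict; this runs
# chunk-by-chunk over the full 123 million rows.
rxSummary(~ArrDelay_Resid, data=dataName)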
Next, we consider a multiple regression of arrival delay (ArrDelay) on the day of the week (DayOfWeek) and departure time (CRSDepTime). The second line of Code Block 5 constructs the linear model for the regression and illustrates the inline use of the F function, which makes a factor variable out of CRSDepTime on the fly, as it is being used to construct the linear model. This function illustrates a fundamental design concept of RevoScaleR: the ability to efficiently transform data and create new variables without having to make multiple passes through the file. The cube option produces a table of arrival delay counts by departure time and day of the week. The first five lines of this output are displayed in Table 8. Figure 3 shows a plot of arrival delay by departure time and day of the week that is based on these data.

Code Block 5
# Multiple regression
arrDelayLm2 <- rxLinMod(ArrDelay ~ DayOfWeek:F(CRSDepTime),
                        data=dataName, cube=TRUE)
arrDelayDT <- arrDelayLm2$countDF
arrDelayDT[1:5,]
summary(arrDelayLm2)
names(arrDelayDT) <- c("DayOfWeek", "DepartureHour", "ArrDelay", "Counts")
xyplot(ArrDelay ~ DepartureHour | DayOfWeek, data=arrDelayDT,
       type="l", lwd=3, pch=c(16,17),
       main='Average Arrival Delay by Day of Week by Departure Hour',
       layout=c(2,4), auto.key=TRUE)

Table 8 - Output from cube option

> arrDelayDT[1:5,]
  DayOfWeek  X.CRSDepTime  ArrDelay  Counts
1 Monday         ...          ...      ...
2 Tuesday        ...          ...      ...
3 Wednesday      ...          ...      ...
4 Thursday       ...          ...      ...
5 Friday         ...          ...      ...
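The effect of F(CRSDepTime) can be mimicked in memory with base R: the numeric departure time is truncated to the hour and treated as a factor, which is why the cube output above is indexed by departure hour. A small analogue (ours; RevoScaleR performs the equivalent transformation on the fly, chunk by chunk, without a second pass through the file):

dep <- c(6.5, 9.25, 17.75, 23.1)   # departure times in fractional hours
factor(as.integer(dep))            # factor with levels "6" "9" "17" "23"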
Figure 3 - Plot of data produced by the cube option: Average Arrival Delay by Day of Week by Departure Hour (lattice panels, one per day Monday through Sunday, ArrDelay plotted against DepartureHour)

Finally, we illustrate the flexible way the RevoScaleR function rxDataStepXdf can be used to generate a file containing a subset of the airline data along with new variables that are transformations of some of the variables in the airline data file. These variables are created on the fly with the help of the transformFunc parameter to the rxDataStepXdf function. The first six lines of Code Block 6 define a function that creates the new variables. The rxDataStepXdf call that follows reads the airline data file, creates a new file keeping only the designated variables, and transforms these variables using the specified transformation function.

Code Block 6
# Create a function to transform the data
myTransforms <- function(data){
  data$Late <- data$ArrDelay > 15
  data$DepHour <- as.integer(data$CRSDepTime)
  data$Night <- data$DepHour >= 20 | data$DepHour <= 5
  return(data)
}
# The rxDataStepXdf function reads the existing data set, performs the
# transformations, and creates a new data set.
rxDataStepXdf(outData="ADS2", inData=dataName,
              transformFunc=myTransforms,
              varsToKeep=c("ArrDelay","CRSDepTime","DepTime"))
# rxShowInfoXdf("ADS2", numRows=5)
# Run a logistic regression using the new variables
logitObj <- rxLogit(Late ~ DepHour + Night, data="ADS2", verbose=TRUE)
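Since transformFunc is an ordinary R function that takes and returns a data frame, it can be debugged in memory on a handful of rows before being run against the 13 GB file. A quick check of myTransforms (our sketch):

# Two hand-made rows: an on-time morning flight and a late evening flight
testDF <- data.frame(ArrDelay = c(3, 40), CRSDepTime = c(9.5, 22.25),
                     DepTime = c(9.6, 22.5))
myTransforms(testDF)   # adds Late (FALSE, TRUE), DepHour (9, 22), Night (FALSE, TRUE)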
Table 9 shows the metadata for the newly created file, called ADS2, and the first five lines of data. Table 10 shows the output from performing a logistic regression using the newly defined variables.

Table 9 - Description of data file ADS2 and first five rows of data

> rxShowInfoXdf("ADS2", numRows=5)
Name: C:\...\ADS2.xdf
Number of rows: ...
Number of variables: 6
Number of blocks: 832
Column Information:
Col 1: 'ArrDelay', Int (Min/Max=-1437,2598)
Col 2: 'CRSDepTime', Float (Min/Max=0,24)
Col 3: 'DepTime', Float (Min/Max=...,29.5)
Col 4: 'Late', UChar (Min/Max=0,1)
Col 5: 'DepHour', Int
Col 6: 'Night', UChar (Min/Max=0,1)
DataSet[5, 6]
Col 1: 'ArrDelay', Int [5, 1] (Min/Max=-1437,2598)
Col 2: 'CRSDepTime', Float [5, 1] (Min/Max=0,24)
Col 3: 'DepTime', Float [5, 1] (Min/Max=...,29.5)
Col 4: 'Late', UChar [5, 1] (Min/Max=0,1) NumSelected = 3
Col 5: 'DepHour', Int [5, 1]
Col 6: 'Night', UChar [5, 1] (Min/Max=0,1) NumSelected = 0
     ArrDelay  CRSDepTime  DepTime  Late  DepHour  Night
[1,]    ...        ...       ...      T      7       F
[2,]    ...        ...       ...      F      7       F
[3,]    ...        ...       ...      T      7       F
[4,]    ...        ...       ...      F      7       F
[5,]    ...        ...       ...      T      7       F

Table 10 - Output from logistic regression

Logistic Regression Results for: Late ~ DepHour + Night
Dependent Variable: Late
Total independent variables: 3
Number of valid observations: ... (Number excluded for missings: ...)
-2*LogLikelihood: ...e+008 (Residual Deviance on ... degrees of freedom)
Condition number of final VC matrix: ...
                  Value  Std. Error  t Value  Pr(>|t|)
[1,] (Intercept)    ...      ...       ...       ...
[2,] DepHour        ...      ...       ...       ...
[3,] Night          ...      ...       ...       ...
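The coefficients in Table 10 are on the log-odds scale; converting a fitted linear predictor to a probability of a late arrival takes one call to the inverse logit. A sketch with placeholder coefficient values (hypothetical, since the printed estimates are not reproduced above):

# Hypothetical coefficients for illustration only
b <- c(intercept = -1.5, depHour = 0.05, night = -0.2)
eta <- b["intercept"] + b["depHour"] * 17 + b["night"] * 0   # 5 pm departure, not night
plogis(eta)   # inverse logit: predicted probability of Late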
RevoScaleR Benchmarks

To give some idea of the performance that can reasonably be expected from RevoScaleR functions operating on moderately sized data sets, this section provides benchmarks of rxLinMod, rxCrossTabs and rxLogit. All benchmarks were conducted on two platforms: (1) a Lenovo ThinkPad laptop with a dual-core Intel processor and 3 GB of RAM running Windows 7, and (2) an Intel Xeon X5660 server with two 2.8 GHz CPUs, each with 6 cores and 12 threads, and 72 GB of RAM running Windows Server. While no formal comparisons against other well-known statistical systems are included in this analysis, we believe that RevoScaleR's capacity and performance substantially out-perform competitive products.

rxLinMod

The file AirlineData87to08 contains information on flight arrival and departure details for all commercial flights within the US from October 1987 to April 2008. It is 13,280,936 KB and contains 123,534,969 rows and 29 columns. Table 4 above provides a summary of the information in the file. The following R code (Code Block 7) runs a simple regression followed by a multiple regression with three explanatory variables. Note that the explanatory variables are categorical; the cube option for rxLinMod ensures the efficient processing of categorical data.

Code Block 7
library(RevoScaleR)
# rxOptions(numCoresToUse=12)
defaultDataDir <- "C:/Users/.../revoAnalytics"
dataName <- file.path(defaultDataDir, "AirlineData87to08")
rxShowInfoXdf(dataName)
# Simple regression
system.time(delayArr <- rxLinMod(ArrDelay ~ DayOfWeek,
                                 data=dataName, blocksPerRead=30))
# summary(delayArr)
# Multiple regression
system.time(delayCarrierLoc <- rxLinMod(ArrDelay ~ UniqueCarrier + Origin + Dest,
                                        data=dataName, blocksPerRead=30, cube=TRUE))

Table 11 contains the results of the regression benchmarks. Note that the average time for first runs of the simple regression on the laptop is considerably longer than the average time to complete subsequent runs of the same regression. This was most likely due to disk caching on the laptop. This phenomenon was not observed when doing the multiple regression on the laptop, and it was not observed at all on the server. Performance on the server was consistent across multiple runs. Other than setting the number of cores used by the RevoScaleR compute engine to 12, the number of physical cores available on the server, no attempt was made to optimize the performance of the server.

Table 11 - Regression Benchmark Results (Average Elapsed Time, seconds)

                                     Laptop  Server
Simple regression, first run           ...     NA
Simple regression, subsequent runs     ...     ...
Multiple regression                    ...     ...
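The averages reported in Table 11 can be reproduced with a simple harness around system.time; this is our sketch of the approach, not code from the paper:

# Time the simple regression five times and average the elapsed seconds
times <- replicate(5, system.time(
  rxLinMod(ArrDelay ~ DayOfWeek, data=dataName, blocksPerRead=30)
)["elapsed"])
mean(times)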
rxCrossTabs

The file CensusIp2001 contains US census data. It is 17,079,992 KB, and contains 14,583,271 rows and 265 columns. Table 12 contains a portion of the output describing the file CensusIp2001 that is produced by the command rxShowInfoXdf.

Table 12 - Header Information and first five columns of CensusIp2001

> rxShowInfoXdf(dataFile)
Name: C:\Users\...\CensusIp20001.xdf
Number of rows: 14583271
Number of variables: 265
Number of blocks: 487
Column Information:
Col 1: 'rectype', UInt (Min/Max/PadForMissings=0,1,0) Labels
Col 2: 'year', ('Census year'), UInt (Min/Max/PadForMissings=0,13,0) Labels Codes
Col 3: 'datanum', ('Data set number'), Int (Min/Max=1,1)
Col 4: 'serial', Int (Min/Max=1,...)
Col 5: 'numprec', ('Number of person records following'), Int (Min/Max=0,54) Labels Codes

The following R code (Code Block 8) runs the benchmark. Note that the text string in the third line must include the entire path to the directory containing the file, and that the pweights argument weights each record; a small illustration of weighted tabulation follows Table 13.

Code Block 8
library(RevoScaleR)
rxOptions(numCoresToUse=12)   # Only run on the server
defaultDataDir <- "C:/Users/..."
dataFile <- file.path(defaultDataDir, "CensusIp20001")
rxShowInfoXdf(dataFile)
# Compute a 4-way cross tabulation
system.time(asmc <- rxCrossTabs(~F(age) + sex + F(marst) + F(condo),
                                data=dataFile, pweights="perwt", blocksPerRead=25))

Table 13 presents the results of the benchmarks. Notice that the first run of the 2-way cross-tab on the laptop took much longer to complete than subsequent runs, which finished in little over a second. This phenomenon is most likely due to disk caching. Also note that the 4-way cross-tab on the laptop ran slightly faster, on average, than the 2-way cross-tab. This effect is also most likely due to disk caching, as the laptop was not rebooted between runs.

Table 13 - Results of rxCrossTabs Benchmark (Average Elapsed Time, seconds)

                                     Laptop  Server
2-way cross tab, first run             ...     NA
2-way cross tab, subsequent runs       ...     ...
4-way cross tab, first run             ...     NA
4-way cross tab, subsequent runs       ...     ...
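As promised above, here is the role of pweights="perwt": each census record is weighted by the number of people it represents, so the cells of the cross tabulation are weighted counts rather than record counts. The base-R analogue on a toy data frame (our example):

# xtabs with a weight on the left-hand side produces weighted cell counts,
# as rxCrossTabs does with pweights
df <- data.frame(sex = c("Male", "Female", "Female"),
                 condo = c(0, 1, 1),
                 perwt = c(100, 80, 120))
xtabs(perwt ~ sex + condo, data = df)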
Performance on the server was consistent across multiple runs; no first-run effect was observed, which is to be expected. Other than setting the number of cores used by the RevoScaleR compute engine to 12, the number of physical cores available on the server, no attempt was made to optimize the performance of the server.

rxLogit

The file mortDefault contains ten years of mortgage default data (2000 to 2009). It is 234,378 KB, and has 10,000,000 rows and 6 variables (Table 14). Code Block 9 presents the call used to run the regression, and Table 15 contains the benchmark results.

Table 14 - Header Information and Variables in Mortgage File

> rxShowInfoXdf(dataFileName)
Name: C:\Users\...\mortDefault.xdf
Number of rows: 10000000
Number of variables: 6
Number of blocks: 10
Column Information:
Col 1: 'creditScore', Int (Min/Max=432,955)
Col 2: 'houseAge', UInt (Min/Max/PadForMissings=0,40,0) Labels
Col 3: 'yearsEmploy', Int (Min/Max=0,15)
Col 4: 'ccDebt', Int (Min/Max=0,15566)
Col 5: 'year', UInt (Min/Max/PadForMissings=0,9,0) Labels
Col 6: 'default', Int (Min/Max=0,1)

Code Block 9
# Run a logistic regression on the file
system.time(logitObj <- rxLogit(default ~ creditScore + yearsEmploy + ccDebt +
                                houseAge + year,
                                data=dataFileName, blocksPerRead=2,
                                verbose=TRUE, reportProgress="time"))

Table 15 - Results of rxLogit Benchmark (Average Elapsed Time, seconds)

                      Laptop  Server
Logistic regression     ...     ...

Summary

The RevoScaleR package from Revolution Analytics provides external memory algorithms that help R break through the memory/performance barrier. The XDF file format provides an efficient and flexible mechanism for processing large data sets. The new package provides functions for creating XDF files from text files, writing XDF data to text files and data frames, and for transforming and manipulating variables during the read/write process.
The new package also contains statistical functions for generating summary statistics, performing multi-way cross tabulations on large data sets, and for developing linear models and performing logistic regressions. Preliminary benchmarks show that RevoScaleR functions are fast and efficient, enabling real data analysis to be performed on a 123-million-row, 13 GB data set on a common dual-core laptop. Moreover, the benchmarks show that the performance of RevoScaleR functions scales nicely as computational resources increase.

References

[1] Revolution Analytics. RevoScaleR: Getting Started Guide. July 19, 2010.
[2] Vitter, Jeffrey Scott. Algorithms and Data Structures for External Memory. Now Publishers: Hanover, MA.