Delivering Value from Big Data with Revolution R Enterprise and Hadoop
Executive White Paper
Bill Jacobs, Director of Product Marketing
Thomas W. Dinsmore, Director of Product Management
October 2013
Abstract

Businesses are rapidly investing in Hadoop to manage analytic data stores due to its flexibility, scalability and relatively low cost. However, Hadoop's native tooling for advanced analytics is immature; this makes it difficult for analysts to use without significant additional training, and limits the ability of businesses to drive value from Big Data. Revolution R Enterprise leverages open source R, the standard platform for modern analytics. R has a thriving community of more than two million users who are trained and ready to deliver results. Revolution R Enterprise runs in Hadoop. It enables users to perform advanced analysis on Big Data while working exclusively in the familiar R environment. The software transparently distributes work across the nodes and distributed data of Hadoop without complex programming. By leveraging Revolution R Enterprise in Hadoop, organizations tap a large and growing community of analysts, and all of the capabilities of R, with a true cross-platform, open-standards architecture.
Hadoop for Advanced Analytics

Over the next three years, IT industry analyst IDC expects spending on Apache Hadoop for data management to grow at an average annual rate of 60%. Hadoop is an open source software framework for distributed data management; it combines a programming model (MapReduce) and a file system (HDFS) to provide a data management platform that is flexible, highly scalable, and relatively inexpensive to implement. For some types of data, Hadoop has emerged as the clear platform of choice:

- Clickstream data, including site-side clicks and web media tags
- Sentiment data, including news feeds, blog feeds, social media comments and Twitter streams
- Telematics, such as vehicle tracking data
- Sensor and machine-generated data
- Geo-tracking and location data
- Server and network logs
- Document and text repositories
- Digitized images, voice, video and other media

Enterprises struggle, however, to analyze data held in Hadoop [1]. While Hadoop is an excellent platform for managing large and complex data sets, its tooling for advanced analytics is much less mature than the analytics tooling available to enterprises today on other platforms. Hadoop's native advanced analytics project, Apache Mahout, is an eclectic mix of algorithms. Mahout is immature (currently in Release 0.8), and it includes a number of partially complete and unintegrated algorithms; many of these, according to project leadership, lack adequate developer support [2]. Mahout is hard to use, with limited documentation; it is only suitable for highly trained developers with strong backgrounds in Java and MapReduce. Conventional server-based analytic software packages from vendors such as SAS or SPSS typically offer the ability to connect with Hadoop and extract data over a network, but they do not run inside Hadoop.
Using server-based software with Hadoop is suitable for users working with small packets of data, or who are willing to work with small samples of the data. However, this architecture is impractical when source data approaches terabyte scale, and sampling is not acceptable for some analytic applications. Hosting the analytic software outside of Hadoop also creates a deployment problem, since custom programming may be required to implement a production scoring process. Moreover, data replication and movement can create data security issues: for curated data (e.g., clinical trials), the organization must recertify the data after replication or movement, which is time consuming and expensive.

Some vendors have recently introduced software that runs on freestanding analysis servers within the Hadoop cluster, or that runs in memory on the Hadoop data nodes. While deployment within the Hadoop cluster reduces latency from data movement, this approach is still only practical with smaller data sets: analytics that load the entire data set into memory generally cannot run on the commodity hardware typically used for Hadoop data nodes.

As a result of the limited prebuilt analytics available today in Hadoop, much of the analytic work performed in Hadoop is homegrown, built with custom programming in MapReduce, Java, or other languages. This is a serious constraint for organizations seeking to leverage Hadoop for analytics, because data scientists, the individuals with the necessary skills, are scarce and in high demand. Moreover, custom homegrown applications are expensive to build, maintain and support.

DELIVERING VALUE FROM BIG DATA 2013 REVOLUTION ANALYTICS, INC.
There is a pressing need for an analytics platform that runs inside Hadoop, handles data scale in the terabytes, offers comprehensive support for advanced analytics, and is accessible to a broad user base without extensive re-training.

Revolution R Enterprise for Big Data, Big Analytics

Open source R is widely used by statisticians, data miners, actuaries, researchers and other professionals who need an agile and capable analytics platform. There are approximately two million R users in the workforce today, and the user base is rapidly growing. Surveys [3,4,5] show that working analytics professionals prefer R to all other analytic software options. There are many reasons behind R's growing popularity, including its flexibility, rich graphics, broad ecosystem and comprehensive library of more than five thousand packages. However, since open source R runs in memory and lacks a native distributed computing framework, it is ill suited to working with large data sets.

Revolution R Enterprise (RRE), a product from Revolution Analytics, is a Big Data, Big Analytics Platform that supports open source R. Revolution R Enterprise extends an enhanced and supported version of R (RevoR) with a distributed computing framework (DistributedR), scalable algorithms (ScaleR), Big Data connectivity tools (ConnectR), an integrated development environment (DevelopR) and enterprise deployment tools (DeployR), alongside R and the CRAN package library.

Revolution R Enterprise's ScaleR package of high-performance analytic algorithms uses Parallel External Memory Algorithms (PEMAs) to make open source R scalable. PEMAs support operations on data far larger than available memory through a combination of streaming, parallel processing on multiple cores and distributed processing across multiple nodes.

Revolution R Enterprise's Write Once Deploy Anywhere (WODA) architecture enables users to develop content on one platform and deploy it on any other supported platform: workstations, servers, grids, data warehouse appliances and Hadoop.

Through Revolution R Enterprise's DeployR tools for enterprise integration, users can publish web reports to a broader audience, and organizations can enable two-way integration between Revolution R Enterprise and popular tools like Excel, QlikView and Tableau. For business analysts who prefer a visual interface, Revolution R Enterprise couples with Alteryx to provide easy-to-use, high-performance analytics.

In brief, Revolution R Enterprise offers a scalable, high-performance platform for the rich capabilities of R. It offers true cross-platform integration together with a wide choice of user interfaces and deployment options.
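To make the PEMA approach concrete, the following sketch shows how a ScaleR algorithm is invoked from ordinary R code. The file path, column names and formula are hypothetical, and the example assumes a Revolution R Enterprise installation with the RevoScaleR package available.

```r
# Hypothetical example: fit a linear model on a CSV file that may be
# far larger than available memory. (File path and variables invented.)
library(RevoScaleR)

bigCsv <- RxTextData("/data/transactions.csv")

# rxLinMod is a Parallel External Memory Algorithm: it streams the data
# in chunks, computes intermediate results per chunk in parallel, and
# combines them, so the full data set never has to fit in RAM.
model <- rxLinMod(amount ~ region + channel, data = bigCsv)
summary(model)
```

Note that the calling code looks like standard R modeling code; the chunked, parallel execution happens behind the familiar interface.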
Revolution R Enterprise in Hadoop

Revolution Analytics is a pioneer in analytics for Hadoop. In 2011, Revolution Analytics introduced the RHadoop open source project, one of the most frequently downloaded projects on GitHub. Revolution R Enterprise supports interfaces to MapReduce, HDFS and HBase, powering production analytics for enterprise customers:

- A life sciences company uses Revolution R Enterprise and Hadoop to deliver recommended treatments to its customers based on user-specified details of planned usage.
- A leading US bank has developed an innovative prediction engine based on Revolution R Enterprise and Hadoop.
- A marketing sciences company analyzes billions of cookies and session records with a cloud-enabled solution built on Hadoop and Revolution R Enterprise.

Revolution R Enterprise does not simply connect to Hadoop: it runs inside Hadoop, providing users with direct access to data stored in Hadoop from the familiar R interface. To do this, Revolution R Enterprise uses MapReduce to distribute the computational workload across the nodes of a Hadoop cluster. This section includes a detailed explanation of how this works.

Deployment Architecture

Figure 1 shows Revolution R Enterprise deployed for operations inside Hadoop. The diagram shows Revolution R Enterprise deployed on each node in the Hadoop cluster, with one node designated as the Revolution R Enterprise master node. The diagram also shows Revolution R Enterprise deployed on one or more desktops or servers outside of the Hadoop cluster. This is optional, but useful as a development environment or as a platform for analytics with small data sets. Revolution R Enterprise's DeployR interface supports the web services connection shown in Figure 1. The Corporate Applications shown in the diagram can include interactive interfaces to popular BI platforms (such as Jaspersoft, QlikView or Tableau), end-user tools like Microsoft Excel and Alteryx, custom web-based reports, application servers and rules engines.
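As an illustration of the open source RHadoop interfaces mentioned above, the sketch below uses the rmr2 package to run a toy MapReduce job from R. It assumes a working Hadoop environment already configured for rmr2; the data and key scheme are invented purely for illustration.

```r
library(rmr2)

# Push a small toy vector into HDFS so MapReduce can read it.
ints <- to.dfs(1:1000)

# Map each value to the pair (last digit, square); this toy job
# needs no reducer, so only a map function is supplied.
squares <- mapreduce(
  input = ints,
  map   = function(k, v) keyval(v %% 10, v ^ 2)
)

# Pull the results back out of HDFS for inspection.
head(from.dfs(squares)$val)
```

The point is that the map function is plain R: RHadoop handles serialization, job submission and result retrieval.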
[Figure 1: Leveraging inherent parallelism in Hadoop to accelerate Big Data analytics. The diagram depicts the Hadoop cluster, desktops and servers, and corporate applications.]
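In code terms, a user moves between the in-cluster and local deployments shown in Figure 1 by changing the Revolution R Enterprise compute context. A minimal sketch follows, with all cluster parameters invented for illustration; substitute values for your own environment.

```r
library(RevoScaleR)

# Hypothetical cluster parameters for an RRE Hadoop compute context.
hadoopContext <- RxHadoopMR(
  sshUsername  = "analyst",
  sshHostname  = "edge-node.example.com",
  hdfsShareDir = "/user/analyst/share",
  shareDir     = "/var/RevoShare/analyst"
)

# ScaleR calls issued after this point execute inside the Hadoop cluster.
rxSetComputeContext(hadoopContext)

# Revert to local execution on the workstation or server at any time.
rxSetComputeContext("local")
```

Because only the compute context changes, the same analysis script can run on a workstation during development and inside Hadoop in production.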
Execution in Hadoop: High-Level Flow

Users invoke Revolution R Enterprise in Hadoop in one of two ways:

- Local: Revolution R Enterprise users submit individual commands or scripts through the R command line interface.
- Remote: Revolution R Enterprise users working on a separate platform run a script that includes a command to transfer execution to a remote Hadoop cluster.

In both cases, Revolution R Enterprise running on an edge node acts as the master process. It executes the Revolution R Enterprise script locally, using threads, cores and sockets within the edge node server. When the master process encounters a call to an algorithm in the ScaleR library, it distributes subsets of the work to data nodes in the Hadoop cluster. Once the data nodes finish their tasks, a single Reducer consolidates intermediate results and returns them to the master node. If the master node determines that the algorithm has completed its work, it returns the consolidated results to the script or command. For iterative algorithms (such as logistic regression or k-means clustering), Revolution R Enterprise performs a convergence test and checks the number of iterations. If the tests succeed, Revolution R Enterprise returns consolidated results to the script or command; otherwise, it repeats the process.

Execution in Hadoop: Detailed Flow

This section details the execution steps for Revolution R Enterprise running inside Hadoop.

1. Revolution R Enterprise running on the edge node encounters a call to a ScaleR algorithm.
2. The master process serializes instructions, stores them in an object and saves the object to a GUID file in HDFS (where the instructions are accessible to Mappers and Reducers).
3. Next, the master process prepares a job request for JobTracker, passing the location of the input data file and the processing instructions for Mappers and Reducers.
4. Once invoked, JobTracker spawns a potentially large set of TaskTrackers on each node, passing to each the GUID of the instructions file and the HDFS location of a single split of data.
5. Each TaskTracker invokes a Mapper, delivers the input data split location and instruction file location, and awaits completion of the Mapper or Reducer. The Mapper consists of a Java wrapper housing a complete instance of Revolution R Enterprise. A data split roughly equates to a single HDFS file segment.
6. Each Mapper retrieves its instructions from the GUID file, ingests the designated data split, and performs the requested operations. Mappers operate independently of one another.
7. When each Mapper finishes its work, it produces an Intermediate Results Object (IRO) in a key/value format. The Mapper serializes the IRO and stores it in a GUID file at a location passed to it by TaskTracker.
8. As Mappers begin to complete, Hadoop's shuffle-sort sequentially moves all IROs to the Reducer's host node.
9. As IROs begin to arrive, TaskTracker invokes the Reducer and passes it the GUID of the instructions file directing the job.
10. The Reducer reads the instructions file and consolidates the inbound IROs into a single consolidated IRO.
11. When there are no remaining IROs, the Reducer saves the consolidated IRO to HDFS in a location directed by the master process.
12. The Reducer exits, its TaskTracker exits, and the master process assumes control.
13. Upon return of control, the master process retrieves and evaluates the consolidated IRO to see if the process has converged.
14. If the convergence test succeeds, the master process returns a final results object to the Revolution R Enterprise command or program.
15. If the convergence test does not succeed, the master process repeats steps 2-13 with new instructions.
16. Processing ends when the convergence test succeeds or the number of iterations reaches a user-defined limit.

All IROs produced by a particular job use the same key, causing subsequent shuffle-sort operations in Hadoop to consolidate all IROs on a single node using a single Reducer instance. Because IROs are tiny compared to the input data, this step performs better than using combiners or other multiple-Reducer schemes to consolidate results.

Revolution R Enterprise and Hadoop: The Benefits

Revolution R Enterprise brings tools to Hadoop that enable organizations to deliver new value from Big Data:

- Understand site-side customer behavior.
- Optimize search term selection.
- Measure the impact of web media campaigns.
- Track brand perceptions among current and prospective customers.
- Track driving behavior of policyholders.
- Identify anomalies in fleet vehicle usage.
- Predict machine failures before they happen.
- Drive location-specific marketing offers.
- Detect unusual behavior in secure systems.
- Predict network outages.
- Identify plagiarized documents.

With Revolution R Enterprise, data remains in place in Hadoop; users perform analysis without data movement. This dramatically reduces the total cycle time to build and deploy predictive models, and eliminates potential security issues arising from data movement or replication.

Revolution R Enterprise makes Hadoop easier to use. Users can focus on what they do best, drawing insight from data, and do not need to learn parallel programming, distributed data management techniques, Java, MapReduce or HDFS. By making Hadoop available to the R community of users, Revolution R Enterprise enables organizations to tap into a large and rapidly growing talent pool.

Revolution R Enterprise's Write Once Deploy Anywhere architecture offers flexibility and portability while helping organizations avoid vendor and platform lock-in. Users can build and test projects on a workstation, then deploy them on a production system essentially without change. Organizations can easily re-host projects for any reason: better performance, better economics, a move to the cloud, or simple convenience.

Revolution R Enterprise brings rich analytical and statistical computation at scale to Hadoop, enabling data analytics teams to deliver shorter, less costly and lower-risk projects while achieving far higher performance, data scale and portability than other methods allow.

Take the Next Step

If you are seeking easy-to-use solutions for Big Data, Big Analytics and are considering Hadoop:

- Schedule an in-depth presentation, and perhaps an evaluation, by contacting a member of our sales team at +1 (650) or through one of our offices listed on our website.
- Meet with us face to face at one of the upcoming events listed on our website's events page.
- Learn more about Revolution R Enterprise on our website's product page.

FAQs

Q. What input sources does Revolution Analytics support for use with Revolution R Enterprise?
A. Revolution R Enterprise can read data from many sources using various connectors. When running inside Hadoop, Revolution R Enterprise ingests input data in parallel from HDFS.

Q. What data input formats can we use?
A. For Hadoop users, Revolution R Enterprise supports text files in CSV or fixed-width formats. For data residing in other systems, Revolution R Enterprise 7 supports many other formats, including both SAS and SPSS files, plus a fast proprietary format called XDF. XDF is an efficient binary serialization format used by Revolution products to accelerate data access and manipulation.

Q. Where can data be output?
A. There are two options available to Hadoop users. First, Revolution R Enterprise returns data in R data objects to the calling program or application.
These objects are available for further processing and export, or they can be published through the DeployR interface in a variety of formats. Second, users can run prediction functions to score data and write the results back to HDFS.

Q. What formats can we use to return results?
A. Users can write model scores directly to CSV or XDF, publish models as PMML documents, or pass R objects to a Revolution R Enterprise DeployR server for broader integration.

Q. Are there intermediate formats used?
A. Between Mappers and Reducers, Revolution R Enterprise passes data in standard Hadoop key/value pairs. These are stored in RAM and swapped to local storage if necessary.

Q. How many MapReduce jobs run?
A. Typically, one MapReduce job runs per algorithm called. That job may be iterative in the case of multi-pass algorithms like logistic regression or clustering.
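The second output path described above, scoring data and writing the results back to HDFS, might look like the following sketch. The file paths and column names are hypothetical, and `model` is assumed to be a model object fit earlier with a ScaleR function; the calls assume the RevoScaleR package.

```r
library(RevoScaleR)

# Hypothetical HDFS paths; 'model' is assumed to be a model object
# produced earlier by a ScaleR function such as rxLogit or rxLinMod.
hdfs      <- RxHdfsFileSystem()
newData   <- RxTextData("/data/new_customers.csv", fileSystem = hdfs)
scoresOut <- RxXdfData("/data/scores.xdf", fileSystem = hdfs)

# rxPredict appends predicted values to each record and writes the
# scored data set back to HDFS in XDF format.
rxPredict(modelObject = model, data = newData, outData = scoresOut)
```

Because the input, output and computation all live in HDFS, no data leaves the cluster during scoring.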
Q. How many actual Mapper and Reducer instances run?
A. Revolution R Enterprise schedules a variable number of Mappers depending on the size of the data set: one Mapper per split of input data. Splits roughly approximate the block size of the HDFS file system instance, typically either 64 MB or 128 MB; the HDFS administrator configures the block size.

Q. Why run only a single Reducer, or no Reducer at all?
A. Unlike many MapReduce-based applications, ScaleR and Revolution R Enterprise typically use a single Reducer and, in some cases, such as scoring data, no Reducer at all. Reducers run only if needed to return data to the master process on the edge node; otherwise, Revolution R Enterprise skips this step.

Q. What happens if a Revolution R Enterprise job fails while running in Hadoop?
A. When running ScaleR algorithms, Revolution R Enterprise may invoke a large number of Mappers to accomplish the required calculations. The results of each Mapper instance are persisted to memory and files, allowing Hadoop to reschedule only failed Mapper instances in the event of incomplete execution.

Q. How do we run model scoring?
A. Revolution R Enterprise offers users two options for model scoring. First, users may run scoring natively using the rxPredict function, which takes a model object produced by any of the ScaleR predictive modeling functions and applies the model to new data. rxPredict computes the model score for each new record and writes the results to HDFS. As an alternative, users may export model documents using the Predictive Model Markup Language (PMML). Organizations can deploy PMML model documents into any PMML-compliant database or decision engine, or through a software abstraction layer such as Concurrent Inc.'s Cascading software.

Endnotes:
1. "Hadoop Adoption Accelerates, But Not For Data Analytics," May 10, 2013.
2. "What is Apache Mahout?"
3. "Poll: R top language for data science three years running," September 02, 2013.
4. "R usage skyrocketing: Rexer poll," October 15, 2013.
5. "The Popularity of Data Analysis Software," May 2013.
About Revolution Analytics, Inc.

Revolution Analytics, headquartered in Mountain View, California, is the leading commercial provider of software and services based on the open source R project for statistical computing. The company brings Big Data scalability, performance, and cross-platform enterprise readiness to R, the world's most widely used statistics software. The company's flagship Revolution R Enterprise software is designed to meet the production needs of leading organizations in data-driven industries including finance, retail, manufacturing, and digital media. Used by over two million data scientists in academia and at cutting-edge organizations such as Google, Lloyd's of London and the US Food and Drug Administration, R is the standard of innovation in statistical analysis. Revolution Analytics is committed to supporting the continued growth of the R community by sponsoring R user groups and conferences worldwide, and offers free licenses for Revolution R Enterprise to everyone in academia.

© 2013 Revolution Analytics, Inc. All rights reserved. Printed in the United States of America. This white paper is for informational purposes only. REVOLUTION ANALYTICS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY. Revolution R Enterprise, RRE, Revolution Analytics, the Revolution Analytics and Revolution R Enterprise logos, and Big Data, Big Analytics Platform are trademarks or registered trademarks of Revolution Analytics, Inc., in the United States and/or other countries. Other product names mentioned herein may be trademarks of their respective companies.