PAXATA DATA PREPARATION PERFORMANCE BENCHMARKING SPRING 15 RELEASE


February 2015

Table of Contents

Introduction
Paxata Technology Stack
    The user interface layer
    Data preparation application web services
    Parallel in-memory pipelined data prep engine
    File management and storage
Production Deployment
    Architecture
Performance Metrics
    Criteria
    Results
    Usage
Extreme Scalability
Summary
About Paxata

Introduction

For the last 30 years, traditional data integration products have been IT's workhorse for processing data. Data Integration (DI), also known as ETL, is the analysis, combination, and transformation of data from a variety of sources and formats into a unified data model representation. Data Integration is a key element of data warehousing, application integration, and business analytics solutions.

The variety and volume of data is always increasing, and the performance of data integration systems is critical. However, there is still no industry standard for measuring and comparing the performance of DI systems; the TPC-DI benchmark subcommittee is continuing to refine its specification. While today's self-service data preparation solutions should be able to handle the same data volumes, some of the basic performance tests from legacy ETL tools are simply not relevant. A self-service data prep platform built specifically for business users calls for a new set of performance metrics based on the direct interaction between business analysts and the underlying system. Whether hosted in the cloud or deployed on premise, there is a demand for performance and elasticity that was never expected of ETL tools, because business users never interacted with them directly.

This report covers the most recent tests, performed on Paxata's Spring 15 release. Paxata tests all major releases against a set of benchmarks initially established with the Fall 2013 release. Details about the system configuration can be found in the Production Deployment section below.

Paxata Technology Stack

The Paxata architecture comprises four layers: an HTML5 UI (User Interface); a Java web services layer; a parallel pipeline data prep engine that wraps Apache Spark with additional functionality built to optimize Spark performance and responsiveness; and a data management layer that persists data in HDFS (the Hadoop Distributed File System).
The same architecture and code base serves both our multi-tenant cloud service and our on premise deployment model. Cloud customers get the power of Paxata's robust architecture without the additional cost or burden of maintenance: they simply log on and start data prep projects. On premise and private cloud customers can deploy Paxata within a dedicated Hadoop environment or as part of their existing Hadoop cluster.

The user interface layer

The UI layer is built with HTML5 and WebSocket technology, which makes the system multi-user aware. Paxata can be used from any web browser, on any device. It also means that when someone makes a change in the system, whether setting up a new project, adding data, publishing data, or working on data sets, all authorized Paxata users in that tenant can see those actions being performed in real time. This component delivers a visual user experience that is consistent across devices such as desktop web browsers, tablets, and smart phones. A web services toolkit (REST API) allows for programmatic system access, and ODBC/JDBC connectivity enables users to query Paxata AnswerSets via Impala or Hive.

Data preparation application web services

A lightweight Java layer translates and mediates actions from the user interface into commands for the underlying platform layer. This layer handles critical capabilities and rules around tenants, users, projects, and cell-level modifications, creating a comprehensive governance backbone. It also manages time-stamping and versioning for every operation performed, which is the secret sauce behind the Paxata Step Editor. A lightweight instance of MongoDB is dedicated to each Paxata instance and captures all of the application metadata from the web services. Some customers prefer to use their own instance of MongoDB, which is fully supported as long as the versions are compatible.

Parallel in-memory pipelined data prep engine

Intellifusion is Paxata's data prep automation engine, enabled by proprietary machine learning, latent semantic indexing, statistical pattern recognition, and text analytics techniques. Intellifusion handles data in a model-free environment and operates over a large variety and volume of structured and unstructured data in real time, enabled by a vector query processor.
At the core of Intellifusion is the distributed in-memory processing engine from Apache Spark, combined with a Paxata-proprietary Spark interface that interprets requests from the web services layer and compiles them into the minimum set of operations that need to be executed on the cluster. This reduces the burden on Spark by delivering only the necessary jobs to the cluster. While Spark is used out-of-the-box (no modifications are made to CDH), Paxata has invested significant development time to increase the efficiency and intelligence of Spark in several areas. In addition to the Resilient Distributed Datasets (RDDs) that come with Spark, Paxata developed a number of proprietary abstractions for projections, filtering, grouping, and joins. PaxRequests reduce the burden on Spark by organizing and optimizing sequences of RDD operations as part of a higher-level construct for viewing data, creating clusters and aggregates, histograms, relationships, and more. This layer also includes Paxata's intelligent cache management, which allows caches to be invoked in-line (on a given node) or remotely, so the system can call on data cached on other nodes and present it seamlessly to the user.

File management and storage

All data sets and AnswerSets are stored and accessed through the Paxata Library, which sits on top of HDFS (the Hadoop Distributed File System). For on premise or private cloud deployments, there are two options for data persistence: customers can either use an existing Hadoop cluster or create a dedicated Hadoop cluster for Paxata. Cloud customers get all the power of Hadoop without ever needing to think about the underlying file management and storage technologies. The virtualized, highly reliable infrastructure for our multi-tenant cloud service runs on Amazon Web Services. On premise customers can also deploy Paxata's Adaptive Data Preparation platform in VMware vCloud environments.
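The request-compilation idea described in the engine section above, collapsing a sequence of user actions into the minimum set of jobs before they hit the cluster, can be illustrated with a toy optimizer. This is a sketch of the general technique, not Paxata's PaxRequest code: consecutive filters are fused into a single predicate and consecutive column projections are collapsed, so the underlying data is traversed only once.

```python
# Toy illustration of request compilation: fuse runs of filter and
# projection steps so the engine scans the data in a single pass.
# Function names and the step representation are hypothetical.

def compile_steps(steps):
    """Collapse consecutive ('filter', pred) and ('select', cols) steps."""
    compiled = []
    for op, arg in steps:
        if compiled and compiled[-1][0] == "filter" and op == "filter":
            prev = compiled[-1][1]
            # Two filters in a row become one conjunctive predicate.
            compiled[-1] = ("filter", lambda row, p=prev, q=arg: p(row) and q(row))
        elif compiled and compiled[-1][0] == "select" and op == "select":
            # A later projection can only keep a subset of an earlier one.
            keep = compiled[-1][1]
            compiled[-1] = ("select", [c for c in arg if c in keep])
        else:
            compiled.append((op, arg))
    return compiled

def execute(rows, steps):
    """Run the compiled steps over a list of dict rows."""
    for op, arg in compile_steps(steps):
        if op == "filter":
            rows = [r for r in rows if arg(r)]
        elif op == "select":
            rows = [{c: r[c] for c in arg} for r in rows]
    return rows
```

For example, two filters followed by two projections compile down to just two operations (one fused filter, one projection) before any row is touched.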

Production Deployment

Architecture

Paxata's production deployment architecture in Amazon Web Services consists of the following components:

- Web Services and Data Library: 1 x 32-core, 60 GB instance
- In-Memory Pipelined Data Prep Engine on Apache Spark: between 20 and 130 x 8-core, 60 GB instances (the system scales elastically)
- Hadoop Cluster: 8 x 4-core, 30 GB instances
- MongoDB: 3 x 1-core, 3.7 GB instances
- LDAP & DNS: 4 x 1-core, 2 GB instances

The production deployment architecture is depicted in the diagram below.
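To put the elastic Spark tier in perspective, a quick back-of-the-envelope calculation using the instance counts listed above gives the aggregate resources the data prep engine can scale across (the helper function here is illustrative, not part of any Paxata tooling):

```python
# Aggregate capacity of the elastic Spark tier described above: the system
# scales between 20 and 130 workers, each with 8 cores and 60 GB of memory.

def tier_capacity(instances, cores_each, mem_gb_each):
    """Total (cores, memory in GB) for a tier of identical instances."""
    return instances * cores_each, instances * mem_gb_each

low = tier_capacity(20, 8, 60)    # minimum footprint: 160 cores, 1200 GB
high = tier_capacity(130, 8, 60)  # maximum footprint: 1040 cores, 7800 GB
print(f"Spark tier: {low[0]}-{high[0]} cores, {low[1]}-{high[1]} GB memory")
```

In other words, the engine can elastically range from roughly 160 cores and 1.2 TB of memory up to over 1,000 cores and nearly 8 TB of memory.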

Performance Metrics

Criteria

Paxata's Data Preparation benchmark is inspired by TPC-DI, the Data Integration (also known as ETL) benchmark developed by the TPC. Paxata's benchmark combines and transforms data extracted from multiple On-Line Transaction Processing (OLTP) systems along with other sources of data, and persists it into an AnswerSet that can then be sent to a variety of destinations, including reporting and visualization tools, analytic applications, traditional data warehouses, or Hadoop clusters. The source and destination data models, data transformations, and implementation rules have been designed to be broadly representative of modern data integration requirements, characterized by:

- Ingestion of large volumes of data
- Multiple data sources, utilizing a variety of different data formats
- A mixture of transformation types, including data validation, key lookups, conditional logic, data type conversions, complex aggregation operations, etc.
- AnswerSet building and maintenance operations

One extremely important difference between traditional DI benchmarking and DP benchmarking is that Paxata allows for interactive processing in addition to batch processing. Given that this is a breakthrough capability not available in legacy systems, the focus of our questions during testing was as follows:

1. Loading: based on a changing data volume, how long did it take to load data from HDFS into Spark?
2. Visualization of filtergrams: how quickly did the system return the results of a text filtergram on numeric data? A text filtergram on string data? A numeric filtergram on numeric data?
3. Multiple filtergrams: how long did it take to select a value from one filtergram and re-render both the grid and the other open filtergrams?
4. Full scan operations: how long does it take the system to sort, or to aggregate and group by, on one or more columns?
5. Join detection with Intellifusion: how long did it take to do join detection across multiple datasets?
6. Join execution: how long do inner joins and the various types of outer joins take to execute?
7. Shaping operations: how quickly can the system transpose, pivot, or depivot datasets?
8. Hashing operations: how quickly can the system create buckets based on hashing to support operations such as clustering?
9. Publishing: how quickly can the system push all of the rows of an underlying dataset through the pipeline?
10. For all of the above, what is the difference in execution time between cached and uncached operations?
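Criterion 8 above, bucketing values by hash to support clustering, can be sketched with a common key-collision technique (fingerprint keying, familiar from other data-cleaning tools; Paxata's actual Intellifusion algorithm is proprietary and is assumed here only for illustration): normalize each value into a canonical key, hash the key, and treat values sharing a bucket as cluster candidates.

```python
from collections import defaultdict

# Sketch of hash-based bucketing for value clustering. Values that share a
# normalized "fingerprint" land in the same bucket and become candidates
# for merging. Illustrative only; not Paxata's proprietary algorithm.

def fingerprint(value):
    """Normalize a string: lowercase, strip punctuation, sort unique tokens."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " "
                      for ch in value.lower())
    return " ".join(sorted(set(cleaned.split())))

def hash_buckets(values):
    """Group raw values into buckets keyed by the hash of their fingerprint."""
    buckets = defaultdict(set)
    for v in values:
        buckets[hash(fingerprint(v))].add(v)
    # Only buckets holding more than one distinct raw value are candidates.
    return [sorted(b) for b in buckets.values() if len(b) > 1]
```

For example, "W Division St", "w division st." and "DIVISION ST W" all normalize to the same fingerprint and land in one bucket, while "State St" stays on its own.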

Results

The results below were measured on a cluster with 27 Spark workers (in the deployment model described in the previous section), using three publicly available datasets intended to represent a prototypical business analyst data preparation use case:

- Dataset 1: 20 million rows x 22 columns
- Dataset 2: 100 thousand rows x 22 columns
- Dataset 3: 2 million rows x 198 columns

Performance comparison of the Paxata Fall 14 and Spring 15 releases (times in seconds, uncached):

Scenario | Fall 14 | Spring 15 | % Change
Load Dataset 1 into project (20 million rows x 22 columns) | 33 | 11 | -66.67%
Bring up filter for col Primary Type | 131 | 3 | -97.71%
Select entry Narcotics in the filter | 22 | 3 | -86.36%
Bring up filter on col Year | 26 | 3 | -88.46%
Change range to be 2014-2014 | 28 | 4 | -85.71%
Close filter on Year | 12 | 0.3 | -97.50%
Close filter on Primary Type | 0.3 | 0.3 | 0.00%
Sort col Block | 32 | 5 | -84.38%
Sort col ID | 33 | 5 | -84.85%
Bring up cluster + edit on col Block (1116 clusters) | 17 | 10 | -41.18%
Cluster automatically | 96 | 27 | -71.88%
Bring up filter on col Block | 16 | 10 | -37.50%
Group By on col Primary Type (32 rows) with metric Count of ID | 27 | 2 | -92.59%
Sort col Count - ID | 25 | 1 | -96.00%
Transpose with Row Values = Arrest and Column Labels = Primary Type | 18 | 2 | -88.89%
De-duplicate on Primary Type | 20 | 2 | -90.00%
De-duplicate on Year | 27 | 2 | -92.59%
Pivot with Row Labels = Primary Type, Column Labels = Arrest, and metric Count of ID | 34 | 6 | -82.35%
Add lookup Dataset 2 (100 k rows x 22 columns):
    Join Detection | 228 | 46 | -79.82%
    Left Outer | 3 | 2 | -33.33%
    Inner | 58 | 22 | -62.07%
    Right Outer | 108 | 57 | -47.22%
    Full Outer | 113 | 61 | -46.02%
Add lookup Dataset 3 (2 million rows x 198 columns):
    Join Detection | 1038 | 154 | -85.16%
    Left Outer | 25 | 21 | -16.00%
    Inner | 932 | 407 | -56.33%
    Right Outer | 1308 | 464 | -64.53%
    Full Outer | 1550 | 1021 | -34.13%
Total Median | 30 | 5 | -82%

As the benchmark shows, Paxata's aggregate median time across all operations has been reduced by over 80% in the span of two releases on uncached data. With caching enabled, once an initial operation completes, subsequent operations of the same type return with sub-second response times.

These results are based on modest data sizes, intended to show the performance improvement across releases against a stable benchmark. However, Paxata has been proven to scale to much larger volumes while retaining interactive performance. Similar tests have been run on a single one-billion-row dataset on a 128-node cluster in Amazon. Each r3.2xlarge virtual machine had 8 CPUs, 60 GB of memory, and a 140 GB ephemeral disk (SSD speeds). The system demonstrated random access to any window of the one-billion-row dataset in under 10 seconds, demonstrating the power of Paxata's adaptive windowing architecture, which executes transformations lazily on subsets of the data until the data is published.

Usage

To show how this correlates with individual customers' usage, the table below provides key statistics for some of Paxata's customers in our multi-tenant cloud:

Tenant | Projects | Library Artifacts | Max Row Count | Median Row Count
High Tech manufacturer | 6 | 5 | 20,000,000 | 319,254
Analytics consultancy | 49 | 67 | 4,154,949 | 176,327
Consumer Packaged Goods company | 422 | 655 | 2,663,844 | 4,461
Healthcare organization | 112 | 81 | 1,329,061 | 96,744
Financial Services Organization | 487 | 253 | 1,152,828 | 64,046

As shown above, the largest number of datasets for a given tenant is 655, while the largest number of data preparation projects is 487. Most impressively, the high tech manufacturer is preparing data of 20,000,000 rows with interactive performance.
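The adaptive windowing behavior described above, applying transformations lazily and only to the rows a user is actually viewing, can be sketched as follows. This is a minimal illustration of the general technique, covering row-wise transformations only (global operations such as sort, and publishing, must touch every row); the class and method names are hypothetical, not Paxata's API.

```python
# Sketch of lazy, windowed evaluation: transformation steps are recorded
# rather than executed, and are applied only to the rows needed for the
# requested window. This is how random access into a very large dataset
# can stay interactive. Illustrative only; Paxata's engine is proprietary.

class LazyTable:
    def __init__(self, rows):
        self.rows = rows      # full underlying data (could live in HDFS)
        self.steps = []       # recorded transformations, not yet executed

    def map_rows(self, fn):
        """Record a row-wise transformation; nothing runs yet."""
        self.steps.append(fn)
        return self

    def window(self, start, stop):
        """Materialize only rows[start:stop], applying the recorded steps."""
        out = []
        for row in self.rows[start:stop]:
            for fn in self.steps:
                row = fn(row)
            out.append(row)
        return out
```

Requesting a three-row window of a million-row table touches only those three rows, no matter how many transformations have been recorded.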
It should be noted that usage of Paxata in on premise deployments significantly exceeds that of the multi-tenant cloud in terms of data volumes.

Extreme Scalability

The scalability of the Paxata system is directly correlated with that of the Apache Spark engine on which it is built. Recently, the version of Spark used by Paxata was submitted to an industry benchmark measuring how fast a system can sort 100 TB of data (one trillion records). Using 206 EC2 machines, Spark sorted 100 TB of data on disk in 23 minutes; the previous world record, set by Hadoop MapReduce, used 2,100 machines and took 72 minutes. Additionally, Spark sorted one PB of data (ten trillion records) on 190 machines in under four hours, also shattering previous records. This means that Spark sorted the same data 3x faster using 10x fewer machines. All the sorting took place on disk (HDFS), without using Spark's in-memory cache. The Spark cluster sustained 3 GB/s/node of disk I/O during the map phase and 1.1 GB/s/node of network activity during the reduce phase, saturating the 10 Gbps links available on these machines.

Metric | Hadoop MR Record | Spark Record | Spark 1 PB
Data Size | 102.5 TB | 100 TB | 1000 TB
Elapsed Time | 72 minutes | 23 minutes | 234 minutes
# Nodes | 2100 | 206 | 190
# Cores | 50400 physical | 6592 virtualized | 6080 virtualized
Cluster disk throughput (est.) | 3150 GB/s | 618 GB/s | 570 GB/s
Daytona Sort Benchmark Rules | Yes | Yes | No
Network | Dedicated data center, 10 Gbps | Virtualized (EC2), 10 Gbps | Virtualized (EC2), 10 Gbps
Sort rate | 1.42 TB/min | 4.27 TB/min | 4.27 TB/min
Sort rate/node | 0.67 GB/min | 20.7 GB/min | 22.5 GB/min

This benchmark workload is resource intensive by any measure: sorting 100 TB of data under the strict Daytona rules generates 500 TB of disk I/O and 200 TB of network I/O. Being built on Apache Spark, along with the significant number of performance optimizations discussed above, Paxata's performance is state of the art in comparison to previous generations of data preparation systems.

Summary

Paxata is a state-of-the-art data preparation product with a highly innovative architecture that is extremely performant and scalable, deployed in production for more than three dozen customers today. It is the only system in the industry that can provide interactive data preparation against massive data volumes, and its performance will only continue to increase based on a combination of Moore's law and planned improvements in our technology.
About Paxata

Paxata delivers the first purpose-built Adaptive Data Preparation solution for business analysts, data scientists, developers, data curators, and IT teams, enabling the integration, cleansing, and enrichment of raw data into rich, analytic-ready data to power ad hoc, operational, predictive, and packaged analytics. Paxata partners with industry-leading big data and business intelligence solution providers such as Cloudera, and seamlessly connects to BI tools including Salesforce.com, Tableau, Qlik, and Microsoft Excel to greatly accelerate the time to actionable business insights. To learn more, visit www.paxata.com.