Setting the Direction for Big Data Benchmark Standards Chaitan Baru, PhD San Diego Supercomputer Center UC San Diego




Industry's first workshop on big data benchmarking

Acknowledgements: National Science Foundation (NSF); SDSC industry sponsors: Seagate, Greenplum, NetApp, Mellanox, Brocade (host).

WBDB2012 steering committee: Chaitan Baru, SDSC/UC San Diego; Milind Bhandarkar, Greenplum/EMC; Raghunath Nambiar, Cisco; Meikel Poess, Oracle; Tilmann Rabl, Univ. of Toronto

Invited attendee organizations: Actian, AMD, BMMsoft, Brocade, CA Labs, Cisco, Cloudera, Convey Computer, CWI/Monet, Dell, EPFL, Facebook, Google, Greenplum, Hewlett-Packard, Hortonworks, Indiana Univ. / HathiTrust Research Foundation, InfoSizing, Intel, LinkedIn, MapR/Mahout, Mellanox, Microsoft, NSF, NetApp, NetApp/OpenSFS, Oracle, Red Hat, San Diego Supercomputer Center, SAS, Scripps Research Institute, Seagate, Shell, SNIA, Teradata Corporation, Twitter, UC Irvine, Univ. of Minnesota, Univ. of Toronto, Univ. of Washington, VMware, WhamCloud, Yahoo!

Meeting structure: 6 invited presentations of 15 minutes each; 35 lightning talks of 5 minutes each. Afternoons: structured discussion sessions.

Topics of discussion
Audience: Who is the audience for this benchmark?
Application: What application should we model?
Single benchmark spec: Is it possible to develop a single benchmark that captures the characteristics of multiple applications?
Component vs. end-to-end benchmark: Is it possible to factor out a set of benchmark components that can be isolated and plugged into an end-to-end benchmark?
Paper-and-pencil vs. implementation-based: Should the benchmark be specification-driven or implementation-driven?
Reuse: Can we reuse existing benchmarks?
Benchmark data: Where do we get the data from?
Innovation or competition: Should the benchmark be for innovation or for competition?

Audience: Who is the primary audience for a big data benchmark?
Customers: The workload should preferably be expressed in English, or in a declarative language (accessible to an unsophisticated user), but not in a procedural language (which assumes a sophisticated user). Customers want to compare offerings from different vendors.
Vendors: Would like to sell machines/systems based on benchmark results.
Computer science and hardware researchers are also an audience: niche players and technologies will emerge out of academia, and the benchmark will be useful for training students.

Applications: What application should we model? Possibilities:
An application that somebody could donate
An application based on empirical data
Examples from scientific applications
A multi-channel retailer application, like TPC-DS amended for big data; a mature schema, a large-scale data generator, execution rules, and an audit process already exist
An abstraction of an Internet-scale application, e.g., data management at the Facebook site, with synthetic data

Single benchmark vs. multiple: Is it possible to develop a single benchmark that represents multiple applications?
Yes, but it is not desirable if there is no synergy among the benchmarks, e.g., at the data model level.
A synthetic Facebook-style application might provide the context for a single benchmark: click streams, data sorting/indexing, weblog processing, graph traversals, image/video data, etc.

Component benchmark vs. end-to-end benchmark: Are there components that can be isolated and plugged into an end-to-end benchmark?
The benchmark should consist of individual components that together make up an end-to-end benchmark.
It should include a component that extracts large data: many data science applications extract large datasets and then visualize the output, which is an opportunity for pushing visualization down into the data management system.

Paper-and-pencil (specification-driven) versus implementation-driven
Start with an implementation and develop the specification at the same time.
Some post-workshop activity has already begun in this area: data generation, sorting, and some processing.

Where do we get the data from?
Downloading data is not an option; the data needs to be generated, and generated quickly.
There are examples of actual datasets from scientific applications: observational data (e.g., LSST) and simulation outputs.
Existing data generators (TPC-DS, TPC-H) can be used.
Data that is generic, with good characteristics, is better than application-specific data.
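Since downloading is ruled out and the data must be generated quickly, one common tactic is deterministic, partitioned generation: seed each partition independently, so any number of workers can produce identical data in parallel and a larger scale factor simply means more partitions. A minimal sketch (the schema and field names are invented for illustration, not taken from any benchmark):

```python
import hashlib
import random

def gen_partition(partition_id, rows_per_partition, global_seed=42):
    """Generate one partition of synthetic rows deterministically.

    Seeding each partition from (global_seed, partition_id) makes the
    output independent of how many workers run or in what order, so the
    same dataset is reproducible at any degree of parallelism.
    """
    seed = int.from_bytes(
        hashlib.sha256(f"{global_seed}:{partition_id}".encode()).digest()[:8],
        "big",
    )
    rng = random.Random(seed)
    rows = []
    for i in range(rows_per_partition):
        rows.append({
            "id": partition_id * rows_per_partition + i,  # globally unique key
            "category": rng.choice(["A", "B", "C"]),
            "value": round(rng.gauss(100.0, 15.0), 2),
        })
    return rows

# The same partition is identical no matter which worker produces it.
assert gen_partition(3, 5) == gen_partition(3, 5)
```

Scaling up is then just a matter of generating more partitions; no central coordination or shared state is needed.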

Should the benchmark be for innovation or competition?
Innovation and competition are not mutually exclusive; the benchmark should be used for both.
A benchmark designed for competition will then also be used internally for innovation.
TPC-H is a prime example of a benchmark model that can drive both competition and innovation when combined correctly.

Can we reuse existing benchmarks? Yes, we could, but we need to discuss how much augmentation is necessary and whether the benchmark data can be scaled. If the benchmark uses SQL, we should not require it. Examples (none of which could be used unmodified):
Statistical Workload Injector for MapReduce (SWIM)
GridMix3 (many shortcomings)
Open TPC-DS
YCSB++ (many shortcomings)
Terasort: strong sentiment for using this, perhaps as part of an end-to-end scenario

Big data and TPC-DS (Ahmad Ghazal, Aster):
TPC-DS is the Decision Support benchmark from the Transaction Processing Performance Council; see http://www.tpc.org/tpcds/spec/
Why build on top of TPC-DS?
Volume: no theoretical limit; tested up to 100 TB
Velocity: rolling updates
Variety: rich relational model with data tables and fact tables; easy to add the other two sources

Keep in mind principles of good benchmark design (Reza Taheri, VMware):
Self-scaling, e.g., TPC-C
Comparability between scale factors: results should be comparable at different scales
Technology agnostic (where meaningful to the application)
Simple to run

TPC-C as a case study:
+ Longevity: TPC-C has carried the load for 20 years
+ Comparability: audit requirements and strict, detailed run rules mean one can compare results published by two different entities
+ Scaling: results are just as meaningful at the high end of the market as at the low end, and as relevant on clusters as on single servers
- Hard and expensive to run
- No kit
- DeWitt clauses
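Self-scaling in the TPC-C style ties the dataset size to the reported throughput: the specification caps the throughput creditable per warehouse (roughly 12.86 tpmC), so a higher result requires proportionally more data, which is what keeps results at different scales comparable. A small sketch of that coupling (the helper functions are illustrative, not part of the TPC-C specification):

```python
import math

# TPC-C caps the throughput credited per warehouse at roughly 12.86 tpmC,
# so a system targeting a higher tpmC must be loaded with more warehouses:
# the data grows with the claimed performance.
TPMC_PER_WAREHOUSE = 12.86

def min_warehouses(target_tpmc):
    """Smallest warehouse count whose throughput ceiling covers the target."""
    return math.ceil(target_tpmc / TPMC_PER_WAREHOUSE)

def max_reportable_tpmc(warehouses):
    """Highest throughput a configuration of this size may report."""
    return warehouses * TPMC_PER_WAREHOUSE

# A 1,000,000 tpmC target forces a database of roughly 78,000 warehouses.
print(min_warehouses(1_000_000))
```

The design choice to note: because the scale requirement is a pure function of the result, two audited results at very different sizes are still speaking the same language.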

Extrapolating results
TPC benchmarks typically run on over-specified systems, i.e., customer installations may have less hardware than the benchmarked installation (SUT).
Big data benchmarking may be the opposite: we may need to run the benchmark on systems that are smaller than customer installations.
Can we extrapolate? Scaling may be piecewise linear, so we need to find the inflection points.
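The "find the inflection points" step can be sketched as a two-segment least-squares fit: try every breakpoint and keep the one whose pair of line fits best explains the measurements. A simplified sketch with made-up measurements (real benchmark data would be noisy and likely need a more robust segmented-regression method):

```python
def find_inflection(xs, ys):
    """Breakpoint where a two-segment linear fit best explains the data.

    Returns the x value starting the second segment, a crude stand-in
    for the inflection point beyond which scaling behavior changes.
    """
    def sse(x, y):
        # Sum of squared residuals of a simple least-squares line.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        slope = sxy / sxx if sxx else 0.0
        return sum((b - (my + slope * (a - mx))) ** 2 for a, b in zip(x, y))

    best = min(range(2, len(xs) - 1),
               key=lambda k: sse(xs[:k], ys[:k]) + sse(xs[k:], ys[k:]))
    return xs[best]

xs = [1, 2, 3, 4, 5, 6, 7, 8]                         # cluster sizes
ys = [2.0, 4.0, 6.0, 8.0, 9.0, 9.5, 10.0, 10.5]       # measured throughput
# Scaling is linear up to x=4 and flattens afterwards; extrapolating the
# first segment past this point would badly overestimate large systems.
assert find_inflection(xs, ys) == 5
```

Once the segments are known, extrapolation beyond the measured range should use the slope of the last segment only, and even then with caution.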

Interest in results from the workshop
June 9, 2012: Summary of Workshop on Big Data Benchmarking, C. Baru, Workshop on Architectures and Systems for Big Data, Portland, OR.
June 22-23, 2012: Towards Industry Standard Benchmarks for Big Data, M. Bhandarkar, Extremely Large Databases Conference in Asia.
August 27-31, 2012: Setting the Direction for Big Data Benchmark Standards, Baru, Bhandarkar, Nambiar, Poess, Rabl, 4th Int'l TPC Technology Conference (TPCTC 2012), with VLDB 2012, Istanbul, Turkey.
September 10-13, 2012: Introducing the Big Data Benchmarking Community, lightning talk and poster at the 6th Extremely Large Databases Conference (XLDB), Stanford Univ., Palo Alto, CA.
September 20, 2012: SNIA ABDS 2012.
September 21, 2012: Workshop on Managing Big Data Systems (MBDS), San Jose, CA.

Big Data Benchmarking Community (BDBC)
Biweekly phone conferences: group communication and coordination is facilitated via phone calls every two weeks. See http://clds.sdsc.edu/bdbc/community for call-in information.
Contact tilmann.rabl@utoronto.ca or baru@sdsc.edu to join the BDBC mailing list.

Big Data Benchmarking Community participants: Actian, AMD, Argonne National Labs, BMMsoft, Brocade, CA Labs, Cisco, Cloudera, CMU, Convey Computer, CWI/Monet, DataStax, Dell, EPFL, Facebook, GaTech, Google, Greenplum, Hortonworks, Hewlett-Packard, IBM, Indiana U./HTRF, InfoSizing, Intel, Johns Hopkins U., LinkedIn, MapR/Mahout, Mellanox, Microsoft, NetApp, Netflix, NIST, NSF, OpenSFS, Oracle, Ohio State U., PayPal, PNNL, Red Hat, San Diego Supercomputer Center, SAS, Seagate, Shell, SLAC, SNIA, Teradata, Twitter, Univ. of Minnesota, UC Berkeley, UC Irvine, UC San Diego, Univ. of Passau, Univ. of Toronto, Univ. of Washington, VMware, WhamCloud, Yahoo!

2nd Workshop on Big Data Benchmarking: India
December 17-18, 2012, Pune, India. Hosted by Persistent Systems, India. See http://clds.ucsd.edu/wbdb2012.in
Themes: application scenarios/use cases; benchmark process; benchmark metrics; data generation.
Format: invited speakers, lightning talks, demos. We welcome sponsorship.
Current sponsors: NSF, Persistent Systems, Seagate, Greenplum, Brocade, Mellanox.

3rd Workshop on Big Data Benchmarking: China
June 2013, Xi'an, China. Hosted by the Shanxi Supercomputing Center. See http://clds.ucsd.edu/wbdb2013.cn
Current sponsors: Seagate, Greenplum, Mellanox, Brocade (tentative).

The Center for Large-scale Data Systems Research (CLDS)
At the San Diego Supercomputer Center, UC San Diego: an R&D center and forum for discussions on big data-related topics.
Topics: benchmarking; taxonomy of data growth; information value; focus on industry segments, e.g., healthcare.
Activities: workshops; program-level activities; project-level activities.
For information and sponsorship opportunities, contact: Chaitan Baru, SDSC, baru@sdsc.edu

Thank you!