The Fusion of Supercomputing and Big Data: The Role of Global Memory Architectures in Future Large Scale Data Analytics




HPC 2014: High Performance Computing, from Clouds and Big Data to Exascale and Beyond. An International Advanced Workshop, July 7-11, 2014, Cetraro, Italy. Session III: Emerging Systems and Solutions. The Fusion of Supercomputing and Big Data: The Role of Global Memory Architectures in Future Large Scale Data Analytics. Bill Blake, Senior VP and CTO, Cray, USA. Cray Proprietary

Safe Harbor Statement: This presentation may contain forward-looking statements that are based on our current expectations. Forward-looking statements may include statements about our financial guidance and expected operating results, our opportunities and future potential, our product development and new product introduction plans, our ability to expand and penetrate our addressable markets, and other statements that are not historical facts. These statements are only predictions, and actual results may materially vary from those projected. Please refer to Cray's documents filed with the SEC from time to time concerning factors that could affect the Company and these forward-looking statements. 2/18/14 Copyright 2014 Cray Inc.

Is Highly Scalable Computing Facing a Branch in the Road Ahead? One path leads to Exascale supercomputers delivering billion-way parallel computing at the highest capability for a single application; the other leads to Hyperscale, with millions of servers and billions of cores in the cloud delivering high capacity for many applications. This presentation explores the technology and architectural trends facing system and application developers, and portrays Cray's system design efforts as the way to deliver the fusion of Supercomputing and High Performance Data Analytics: a both/and capability that brings HPC benefits to a larger community.

System Architecture Differences

Exascale Supercomputing:
- Scalable computing with high-bandwidth, low-latency Global Memory Architectures
- Highly integrated processor-memory-interconnect and network storage
- Ability to apply all compute power to one highly parallel application
- Low data movement: load the mesh into memory and compute
- Move data only for loading, defensive checkpointing, or archiving
- Tennis-court-sized systems that consume <20 MW

Hyperscale Computing (aka the Cloud):
- Distributed computing at the largest scale
- Divide-and-conquer approaches using Service Oriented Architectures
- Ability to apply compute power to many applications with multi-tenancy
- High data movement: scan/sort/stream all the data all the time
- Lowest-cost processor-memory-interconnect and local storage
- Warehouse-sized systems that collectively consume >260 MW

Need to Create a Virtuous Cycle

The Cloud provides new distributed programming models that use divide-and-conquer approaches with massive scale-out: Service Oriented Architectures built on local storage and low-cost hardware, plus new data analytics algorithms, where data scientists claim "the larger the data, the simpler the algorithm."

HPC provides new parallel programming models that use highly scalable Global Memory Architectures supported by the highest-bandwidth, lowest-latency interconnects, with powerful algorithms for high-fidelity modeling and simulation, using highly iterative processing of both capability and capacity workloads that increasingly support data assimilation (from sensors).

[Image courtesy of the University of Michigan Atmospheric Dynamics Modeling Group]
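The divide-and-conquer pattern that the slide attributes to cloud scale-out can be sketched in miniature. This is an illustrative word count in plain Python, not any particular framework's API; the shards, worker functions, and merge step are all stand-ins:

```python
from collections import Counter
from functools import reduce

def map_partition(lines):
    """Map step: each worker counts words in its own shard of the data."""
    return Counter(word for line in lines for word in line.split())

def merge(a, b):
    """Reduce step: partial counts combine associatively, so merging
    can itself proceed in parallel across workers."""
    return a + b

# Shard the input, process shards independently, then merge the results.
shards = [["big data on hpc"], ["hpc meets big data"]]
partials = [map_partition(s) for s in shards]
totals = reduce(merge, partials, Counter())
print(totals["big"])  # 2
```

Because the merge is associative, the combine phase can run as a tree across machines, which is exactly what makes the approach scale out on cheap hardware.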

The Fusion of Supercomputing with Large Scale Data Analytics: Key Trends

The Cost per FLOP is Dropping Like a Spent Rocket! (Paragon: 0.000000001 cents per FLOP)

Sandia Labs' Red Storm XT (2005) vs. an NVIDIA-powered game console (2019):
- Peak performance: 40 teraflops vs. 40 teraflops
- Footprint: 2,280 square meters vs. 0.08 square meter
- Power: 1,000,000 watts vs. <200 watts (for the GPU; the CPU adds another ~10 teraflops on a 10nm process)

This sort of comparison illustrates what has changed (peak floating-point performance) and what has not changed by nearly as much (memory, interconnect, and storage bandwidths). Continue the extrapolation and we get a machine with a peak performance of an exaflop and reasonable power consumption (not 20 MW perhaps, but within what can be supplied to a building). The trouble is, it is almost useless. One way of looking at the significance of fusing HPC and fast data is that the re-emphasis on data movement is what will broaden the applicability of supercomputers again: perhaps not to their peak of the late '80s, but at least heading back in the right direction.
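The extrapolation in this comparison can be made concrete with a back-of-envelope calculation using the figures quoted on the slide:

```python
# Red Storm (2005): 40 TFLOPS peak at roughly 1 MW.
red_storm_flops_per_watt = 40e12 / 1_000_000   # 40 MFLOPS per watt

# GPU-powered game console (2019): ~40 TFLOPS at under 200 W.
console_flops_per_watt = 40e12 / 200           # 200 GFLOPS per watt

# Energy efficiency improved by roughly 5000x in 14 years, while memory,
# interconnect, and storage bandwidths improved far less over the same span.
improvement = console_flops_per_watt / red_storm_flops_per_watt
print(round(improvement))  # 5000
```

That three-to-four-orders-of-magnitude gap between compute efficiency and bandwidth is the quantitative version of the slide's point about what "has not changed by nearly as much."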

Big Data Means New Kinds of Data

EMC estimates that by 2020 there will be 40,000 exabytes of data created, and the majority of that data will be created not by humans but by sensors. The gap between the data we create and the data we are able to analyze is widening: the "analysis gap."

[Source: IDC White Paper sponsored by EMC, May 2009]

Software-led Infrastructure of the Enterprise

There are substantial changes in the technology used by datacenters that Cray will adopt to support this business (very different from Exascale!). The need is to focus on applications and workloads and move to converged infrastructure; appliances are the first stage of converged infrastructure (compute + storage).

[Figure elements: complex event processing, data lake, Hadoop clusters, presentation layer. Source: Cablevision visit, 27 Feb 2014]

Cray's Vision: The Fusion of Supercomputing and Big & Fast Data

- Modeling the World: Cray supercomputers solving grand challenges in science, engineering, and analytics
- Data Models: integration of datasets and math models for search, analysis, predictive modeling, and knowledge discovery
- Data-Intensive Processing: high-throughput event processing and data capture from sensors, data feeds, and instruments
- Math Models: modeling and simulation augmented with data to provide the highest-fidelity virtual reality results

The Big Data Challenge

- Supercomputing minimizes data movement: data movement is highly restricted to defensive or resiliency purposes such as loading, checkpointing, or archiving. The programming model is imperative (C++/Fortran + MPI), with a focus on the details of how parallel programming is done.
- Data-intensive computing is all about data movement: scanning, sorting, streaming, and aggregating all the data all the time to get the answer or discover new knowledge from unstructured or structured data sources. The programming model is declarative (query) or functional, with emphasis on what is being computed rather than how it is computed.
- Cloud computing is all about virtualization: application access to converged infrastructure (compute/network/storage) via the IP stack. The programming model is Platform as a Service, with APIs for what is being computed rather than where the computing is done.
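The imperative-versus-declarative contrast on this slide can be illustrated without MPI or a SQL engine. Here the same aggregate is computed both ways in plain Python; the explicit chunking is only a stand-in for message-passing ranks, and the one-liner stands in for a query like `SELECT SUM(x) FROM data`:

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative / HPC style: the programmer spells out HOW the work is
# partitioned, computed locally, and combined (as one would across MPI ranks).
n_ranks = 4
chunk = len(data) // n_ranks
partial_sums = []
for rank in range(n_ranks):
    local = data[rank * chunk:(rank + 1) * chunk]  # this rank's slice
    s = 0
    for x in local:
        s += x
    partial_sums.append(s)
imperative_total = sum(partial_sums)               # the explicit "reduce"

# Declarative / analytics style: the programmer states WHAT is wanted
# and leaves the how to the runtime or query optimizer.
declarative_total = sum(data)

print(imperative_total == declarative_total)  # True
```

The imperative version exposes every decision about partitioning and combination; the declarative version exposes none of them, which is precisely why the two styles "do not mix well" in a single program.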

Multiple Aspects of Big Data

The Future: Global Memory and Latency Hiding

[Chart: applications plotted by data reuse over time and data reuse near previous accesses. Modeling and simulation workloads (dense matrices) show high reuse; the trend is toward informatics and analytics workloads (sparse matrices) with low reuse. From: Murphy and Kogge, "On the Memory Access Patterns of Supercomputer Applications: Benchmark Selection and Its Implications," IEEE Trans. on Computers, July 2007]

Cray Adaptive Supercomputing: adapting the system to the application, and not the application to the system. Extending Adaptive Supercomputing to Big Data workloads.

Motivation for Cascade XC30

The Cray XC30 Supercomputer

Continuing the most successful, productive supercomputer product line ever built by Cray:
- Previously codenamed Cascade
- Completion of the six-year U.S. DARPA HPCS contract
- Sets the stage for Global Memory Operations at Exascale
- Significant productivity gains for message-passing parallelism
- Productized as the Cray XC30 supercomputer, Cray's latest production supercomputer

Big Data, Fast Data

[Chart: system classes arranged from structured enterprise data to unstructured big data]
- Shared memory: urika Graph, SAP HANA, Exasol, RDBMS (SAN interconnects, structured enterprise data)
- MPP global memory / MPP distributed memory: Cray CS systems with InfiniBand, or XC30 with Aries (MPP interconnects, fast data)
- Ethernet clusters: MPI on Ethernet (IP network, unstructured big data)
- Grid: LAN/WAN interconnects

Cray Brings Supercomputing to Analytics

[Chart: information density increases with integration and memory; data volume and interconnect latency increase toward the grid]
- urika: in-memory graph analytics (SAN interconnects, structured enterprise data), with MPP integration
- XC30: MPP global memory, fast data
- CS300 cluster supercomputer with Hadoop: distributed memory, big data
- Ethernet clusters / cloud
- Grid: LAN/WAN interconnects

Programming Emphasis

[Chart: the same system spectrum as the previous slides, from MPP global memory and fast data over MPP interconnects, through SAN-attached structured enterprise data and unstructured big data on MPI/Ethernet clusters over the IP network, out to the grid over LAN/WAN interconnects]

The Challenge of Expressing Analytics: Queries and Program Code Do Not Mix Well

[Chart: languages arranged from "close to the machine" to "close to the human," by user and level of problem-domain abstraction]
- 3GL programmers: C, C++, Java, C#, UPC, Python, Fortran
- Domain experts, functional languages: Scala, Erlang, F#, KDT, Chapel
- Matrix/array languages and problem-solving environments: MATLAB, R (statistical), Mathematica (symbolic), Julia, NumPy
- Analysts, query languages: SQL, NoSQL, SPARQL, Excel, SAS

Future Architectural Possibilities

[Diagram: node designs evolving from compute-intensive to data-intensive]
- Compute-intensive: CPUs + DRAM, with network and HDD I/O
- Storage hierarchy: CPUs + DRAM + NVM, with network and HDD I/O
- No moving parts: CPUs + DRAM + NVM, with network and SSD I/O
- Data-intensive, active NVM: CPU integrated with NVM
- Stacked CPU/NVM chip: slices of NVM, CPUs, and network, with shared I/O

System Implications of Fast, Cheap, Non-Volatile Memory

Operating system:
- Evolutionary changes integrated into existing architectures (same file system semantics, but with buffering and persistent objects)
- Revolutionary changes with whole-system persistence will affect memory management, I/O, fault management, etc.
- Relegate magnetic storage to an archival function

Power: today, 30% to 40% of system power is DRAM (HDD is 10%).

Applications: manipulate data 100x to 1000x faster (both throughput and latency improve).
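The "same file system semantics, but with persistent objects" idea can be approximated today with memory-mapped files. This is a minimal sketch using only the Python standard library, not a real NVM programming model; the file path and the notion of a persist barrier are illustrative stand-ins:

```python
import mmap
import os
import struct
import tempfile

# Back a "persistent" counter with a file. With true NVM, the load/store
# path would bypass the block layer entirely, but the semantics are similar.
path = os.path.join(tempfile.mkdtemp(), "counter.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<Q", 0))        # one 64-bit counter, initially 0

def bump(path):
    """Map the file, increment the counter in place, and flush."""
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), 8) as m:
        value, = struct.unpack("<Q", m[:8])
        m[:8] = struct.pack("<Q", value + 1)
        m.flush()                        # analogous to a persist barrier
        return value + 1

bump(path)
result = bump(path)
print(result)  # 2 -- the value survived reopening the mapping
```

The evolutionary path on the slide is essentially this pattern made fast and fine-grained: data structures live at stable addresses, and durability becomes a memory operation rather than an I/O operation.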

The Next Generation of Analytics Will Need Very High Performance Global Memory Operations

[Roadmap chart]
- Today, capability workloads: urika in-memory graph analytics; XC30 MPP global memory for fast data
- Today, capacity workloads: CS300 cluster supercomputer with Hadoop (distributed memory, big data); Ethernet clusters / cloud; grid over LAN/WAN interconnects; ingest and archive; high-throughput, multi-tenancy cloud computing; the software-defined datacenter
- +5 years: an MPP global memory architecture for HPC and analytics
- +10 years: an MPP global memory architecture for analytics with Exascale technology and NVM

Cray's Roadmap: Fusion

[Diagram: modeling & simulation (compute), data ingest and store appliances (store), and analytics (analyze)]

Multiple roadmap steps:
- Combine workflows and data into a single system
- Aggressive local and global memory capabilities
- Tightly integrated software stack (and runtimes) across all three capabilities

Concluding Comments

- Warehouse-scale distributed computing, aka the Cloud, provides an excellent multi-tenancy resource for high-throughput capacity computing, especially where virtualization of converged compute/network/storage affords the use of resources in various locations.
- But highly parallel analytic workloads, especially those that require low-latency messaging and/or global memory operations, benefit greatly from the high-performance interconnects and tight integration of MPP machines, and will not migrate from MPP to Cloud.
- Many Cloud developments will condense into future big-memory MPP systems, including programming models, RDF and NoSQL databases, software-defined networks and storage, and hypervisors. Combined with the high-performance message passing and global atomic memory support in MPP networks (e.g., Cray Aries), these will best support the fusion of HPC and large-scale analytics.

Thank You! bill.blake@cray.com

Legal Disclaimer Information in this document is provided in connection with Cray Inc. products. No license, express or implied, to any intellectual property rights is granted by this document. Cray Inc. may make changes to specifications and product descriptions at any time, without notice. All products, dates and figures specified are preliminary based on current expectations, and are subject to change without notice. Cray hardware and software products may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Cray uses codenames internally to identify products that are in development and not yet publically announced for release. Customers and other third parties are not authorized by Cray Inc. to use codenames in advertising, promotion or marketing and any use of Cray Inc. internal codenames is at the sole risk of the user. Performance tests and ratings are measured using specific systems and/or components and reflect the approximate performance of Cray Inc. products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. The following are trademarks of Cray Inc. and are registered in the United States and other countries: CRAY and design, SONEXION, URIKA, and YARCDATA. The following are trademarks of Cray Inc.: ACE, APPRENTICE2, CHAPEL, CLUSTER CONNECT, CRAYPAT, CRAYPORT, ECOPHLEX, LIBSCI, NODEKARE, THREADSTORM. The following system family marks, and associated model number marks, are trademarks of Cray Inc.: CS, CX, XC, XE, XK, XMT, and XT. The registered trademark LINUX is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis. Other trademarks used in this document are the property of their respective owners. 28