
DOE Exascale Initiative

Dimitri Kusnezov, Senior Advisor to the Secretary, US DOE
Steve Binkley, Senior Advisor, Office of Science, US DOE
Bill Harrod, Office of Science/ASCR
Bob Meisner, Defense Programs/ASC

Briefing to the Secretary of Energy Advisory Board, September 13, 2013

The Department of Energy is planning for a major computer and computational science initiative anchored in our mission challenges. We use the term "exascale" to capture the successful transition to the next era of computing in the 2020 timeframe. Change is coming in many ways, not just technology. For instance:

  World we were in                          World we are going to
  Processing expensive / memory free        Processing free / memory expensive
  Ecosystem: US micro-electronics           Globalization of supply chain
    supply chain
  US government driven                      Mobile device driven
  Cyber: guards and fences, local           Sophisticated and organized adversaries, global

The traditional path of 2x performance improvement every 18 months has ended. The result is unacceptable power requirements for increased performance.

Goals

The DOE Exascale Initiative will drive US leadership in several dimensions:
- Deliver the next era of high-end computational systems
- Build in cyber defenses, including information protection
- Provide technologies to ensure supply-chain integrity
- Attack previously unsolvable problems critical to the nation's national security, science, and innovation engine
- Deliver tremendous improvements in electronic component energy efficiency, reducing the operating costs and carbon footprints of HPC centers
- Stimulate US competitiveness and intellectual property through aggressive technology development and adoption

Exascale will be the culmination of a technological path defined and developed over the next decade. It is about solving some of our nation's most critical problems while driving US competitiveness.

- What actions are needed, and when?
- What is your confidence?
- How do you bring science to bear on the decision process?
- What are the risks?
- What happened? Can it happen again?

Prediction is Simple

Prediction examples surround us:
- First prediction of a solar eclipse, by Thales of Miletus (585 BC)
- Tide tables
- Lifetime of the first excited state of hydrogen
- Laminar- and turbulent-flow drag of a thin plate aligned with the flow direction
- Performance prediction for a new aircraft
- National Hurricane Center forecasts
- Climate prediction
- 2014 World Cup winners
- Astrology

But the consequences of poor predictions are not all the same. Today we are turning to simulation for increasingly serious problems and to push the frontiers of science and technology.

Alignment with Presidential Priorities

- Climate Action Plan
- Resiliency against natural and man-made conditions
  - DOE is the sector-specific agency for energy infrastructure
  - Cross-sector dependencies of US critical infrastructures
- Nuclear Security Agenda (Prague, Berlin, NPR)
  - Preventing nuclear proliferation and nuclear terrorism
  - Reducing the role of weapons
  - Maintaining strategic deterrence with reduced forces
  - Strengthening regional deterrence and reassuring allies
  - Ensuring a safe, secure, and effective stockpile

Confidence in predictions here is critical. Today we make decisions with billion-dollar impacts that we could not have made just 2-3 years ago. But we need to do better.

Mission: Extreme Scale Science

Mission-focused applications:
- Stockpile science
- New catalysts: 10,000-atom quantum simulations
- Climate models: large-ensemble, multidecadal predictions
- Materials design: ab-initio electronic-structure methods for excited states
- Combustion: turbulent, chemically reacting systems
- Thermonuclear burn [figure: burn physics of p, D, T, He-3, and He-4 — Debye screening, Coulomb collisions, burning plasma, quantum interference and diffraction, atomic physics, radiation (photons) with spontaneous and stimulated emission, and hydrodynamics, with characteristic timescales on the order of 10^-15 to 10^-12 sec]
- Improved energy technologies: photovoltaics, internal combustion devices, batteries
- Manufacturing technology
- Novel materials: energy applications, electronics
- Non-proliferation and nuclear counterterrorism

Advances in simulation are the common rate-limiting factor.

Our Data is Exploding

- Genomics: data volume increases to 10 PB in FY21
- High-energy physics (Large Hadron Collider): 15 PB of data/year
- Light sources: approximately 300 TB/day
- Climate: data expected to reach hundreds of exabytes
- Nuclear security: PBs per simulation

This growth is driven by exponential technology advances. Data sources include scientific instruments, sensors and treaty monitoring, and simulation results from scientific computing facilities.

Big Data and Big Compute together are where we operate. Analyzing big data requires processing (e.g., search, transform, analyze), and extreme-scale computing will enable timely and more complex processing of increasingly large data sets; a back-of-envelope illustration follows below.
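
As a rough illustration of why data at this scale demands extreme-scale I/O and compute, the sketch below estimates the time for a single streaming pass over each dataset at the 60 TB/s aggregate I/O bandwidth quoted for the notional exascale system on the next slide. The dataset sizes are this slide's figures; the 200 EB climate number is my assumed stand-in for "hundreds of exabytes".

```python
# Back-of-envelope: time for one full pass over each dataset at a given
# aggregate read bandwidth. Sizes are from the slide; 200 EB for climate
# is an assumption standing in for "hundreds of exabytes".

DATASETS_BYTES = {
    "Genomics (FY21 volume)":      10e15,   # 10 PB
    "LHC (one year of data)":      15e15,   # 15 PB
    "Light sources (one day)":     300e12,  # 300 TB
    "Climate (projected archive)": 200e18,  # ~hundreds of EB (assumed 200 EB)
}

def scan_time_s(total_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Seconds to stream the dataset once at the given bandwidth."""
    return total_bytes / bandwidth_bytes_per_s

for name, size in DATASETS_BYTES.items():
    t = scan_time_s(size, 60e12)  # 60 TB/s aggregate I/O (slide 8 target)
    print(f"{name:28s} {t / 3600:10.2f} hours per full pass")
```

Even at exascale-class bandwidth, a single pass over a hundreds-of-exabytes archive takes weeks of sustained I/O, which is why "timely" analysis at this scale is a compute and data-movement problem, not just a storage problem.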

Remarkable progress in the last 15 years, but much more to do

ASCI Red (commissioned Dec 1996):
- Just under 10,000 processors
- 1 TF
- 2,500 sq ft
- 0.5 MW

Intel 1 TF chip (products within 2 years; prototype ran in Nov 2011):
- Size of a pack of gum
- 100-250 W

Example of a potential exascale system. To get to exascale, one could imagine 1 million 1-TF processors, 64 quadrillion bytes of memory, a high-performance interconnect, and the software to make it all work, all within a 20 MW envelope:

  System peak                1,000 PF
  System power               ~20 MW
  System memory              64 PB
  Storage                    500-1000 PB
  I/O aggregate bandwidth    60 TB/s
  MTTI                       ~1 day

The arithmetic behind this sketch is checked below.
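
A minimal sketch checking the slide's numbers: one million 1-TF processors do give an exaflop of peak, the 20 MW envelope implies an all-in budget of 20 W per processor, and 64 PB against 10^18 flop/s leaves a memory balance of only 0.064 bytes/flop (the worsening flops-to-bytes ratio raised again on slide 10). All inputs are the slide's notional figures, not measured values.

```python
# Arithmetic check of the notional exascale configuration on this slide.

n_proc   = 1_000_000   # one million processors
tf_each  = 1e12        # 1 TF (flop/s) per processor
memory_B = 64e15       # 64 PB ("64 quadrillion bytes")
power_W  = 20e6        # 20 MW envelope

peak = n_proc * tf_each            # 1e18 flop/s: one exaflop
watts_each = power_W / n_proc      # all-in power budget per processor
balance = memory_B / peak          # bytes of memory per flop/s of peak

print(f"peak:           {peak:.1e} flop/s")   # 1.0e+18
print(f"power budget:   {watts_each:.0f} W per processor "
      "(including its share of memory and interconnect)")
print(f"memory balance: {balance:.3f} bytes/flop")  # 0.064, vs ~1 byte/flop
                                                    # on systems like ASCI Red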

Memory Bandwidth & Data Movement: performance will be energy limited

[Chart: energy per operation, in picojoules (log scale, 1 to 10,000), for on-chip/CMP communication, intranode/SMP communication, and internode/MPI communication, today vs. projected 2018. Annotation: the pJ/op axis reads directly as kW of system power at petascale and as MW at exascale.]

Source: "ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems," Peter Kogge, editor and study lead, DARPA TR-2008-13, 2008.
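
The chart's annotation follows from nothing more than power = (operations per second) x (energy per operation). A small sketch confirming it, with the ~100 pJ/op figure for today's memory-dominated operations taken from slide 10's observation that memory accesses run into the 100s of pJ:

```python
# System power is (operations per second) x (energy per operation),
# which is why the chart's pJ/op axis maps to kW at petascale and
# to MW at exascale.

def system_power_watts(ops_per_sec: float, joules_per_op: float) -> float:
    """Total power implied by a sustained op rate and a per-op energy."""
    return ops_per_sec * joules_per_op

PJ = 1e-12  # one picojoule, in joules

print(system_power_watts(1e15, 1 * PJ))    # petascale, 1 pJ/op -> 1e3 W (1 kW)
print(system_power_watts(1e18, 1 * PJ))    # exascale,  1 pJ/op -> 1e6 W (1 MW)
print(system_power_watts(1e18, 100 * PJ))  # exascale at today's ~100 pJ/op
                                           # -> 1e8 W (100 MW): untenable
```

At exascale rates, every picojoule per operation costs a megawatt, which is why per-operation energy, not raw device speed, sets the performance ceiling.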

Some of our considerations in getting to the exascale

- There is no obvious path to exascale computing within a reasonable power envelope by the 2020 timeframe.
- We are thinking in many dimensions, including:
  - Million-processor systems, each processor with 1,000 cores: O(10^9) cores and O(10^9) threads, with no clear path to programmability, debugging, or optimization. New programming paradigms will be needed.
  - Dealing with massive data sets: our TF systems created PBs of data; we might expect EF systems to create YBs of data.
  - HW and SW design inextricably linked to attack problems of reliability.
  - Sub-threshold logic: 10^18 operations/sec x 10^-12 J/operation = 1 MW. But today, memory/cache accesses and the average cost of operations run into the 100s of pJ.
- The value proposition: what is your calculation worth?
- The ratio between flops and bytes of storage will continue to worsen.
- Application resilience and detection/recovery of soft errors become a large problem: checkpoint/restart will not be practical (see the sketch below).
- Power can be mitigated by users and by the introduction of special-purpose processors, generally vector processors of fixed (but arbitrary!) length.
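
To see why classic checkpoint/restart stops being practical, the sketch below applies Young's approximation for the optimal checkpoint interval to the notional slide-8 system (64 PB of memory dumped at 60 TB/s aggregate I/O, MTTI ~1 day). Young's formula is a standard resilience model, not something from this briefing, and the inputs are the briefing's notional figures.

```python
from math import sqrt

# Why checkpoint/restart breaks down at exascale: even the optimal
# checkpoint schedule burns a noticeable slice of wall time, and the
# slice grows as MTTI shrinks with component count.

def young_interval_s(ckpt_time_s: float, mtti_s: float) -> float:
    """Young's approximation for the optimal compute time between checkpoints."""
    return sqrt(2.0 * ckpt_time_s * mtti_s)

ckpt = 64e15 / 60e12       # dump all 64 PB at 60 TB/s: ~1067 s (~18 min)
mtti = 24 * 3600.0         # ~1 day mean time to interrupt (slide 8)
tau = young_interval_s(ckpt, mtti)
overhead = ckpt / (tau + ckpt)  # fraction of wall time spent writing checkpoints

print(f"checkpoint write:    {ckpt / 60:.0f} min")
print(f"optimal interval:    {tau / 3600:.1f} h")
print(f"checkpoint overhead: {overhead:.1%}")
```

Even at the optimum, roughly 7% of wall time goes to writing checkpoints, before counting restart and lost-work costs; and since the overhead grows as 1/sqrt(MTTI), pushing MTTI down toward hours quickly makes the approach untenable.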

Exascale Challenges

- Starting to build in self-awareness of state and activities: analytics-based monitoring, fingerprinting for provenance, monitoring for rogue operations, software control layers, resiliency to attacks, ... (Security Advisory Board)
- Delivering a usable exascale system by the early 2020s at about 20 MW
- Co-design of simulation tools in key scientific areas
- Systems software, compilers, operating systems
- Uncertainty quantification engines (Technology Advisory Board)
- Big data analytics
- Requires efficiencies in memory, storage, data movement, and code architectures (e.g., memory at ~2 pJ/bit; data motion below 1 nJ/bit, a ~10x improvement); the power implications are sketched below
- Resiliency and reliability
- US ownership of IP, e.g., supporting technologies and tools, software stacks, network fabrics, interconnects, ...
- Working with business leaders and going beyond technology diffusion: ROI, business schools, energy efficiency, ... (Business Advisory Board)
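
A hedged sketch of what those energy targets mean in machine power. The 0.1 byte of memory traffic per flop is my illustrative assumption, not a figure from this briefing; the ~2 pJ/bit target is the slide's, and the 20 pJ/bit comparison point follows from the slide's ~10x note:

```python
# Translate per-bit energy targets into machine power at an exaflop.
# Assumption (illustrative): 0.1 byte of memory traffic per flop.

EXAFLOP = 1e18                       # flop/s
traffic_bits_per_s = 0.1 * EXAFLOP * 8  # assumed memory traffic, in bits/s

for label, j_per_bit in [("target (~2 pJ/bit)", 2e-12),
                         ("~10x worse (20 pJ/bit)", 20e-12)]:
    mw = traffic_bits_per_s * j_per_bit / 1e6
    print(f"{label:24s} {mw:5.1f} MW for memory traffic alone")
```

At the target, memory traffic costs about 1.6 MW of the 20 MW envelope; at 10x worse it would consume most of the envelope by itself, before any arithmetic is done.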

Summary

The Exascale Initiative is about the innovative capacity and insight that will result from the potent combination of predictive simulation and big data analytics, in an environment that is more secure against cyber, supply-chain, and resiliency threats.

The Exascale Initiative is about pioneering technology in computing that will have an enormous effect on the rest of the computing marketplace.

It will require strong partnerships among government, industry, and universities.