Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes. Anthony Kenisky, VP of North America Sales


About Appro: Over 20 Years of Experience
- 1991-2000: OEM server manufacturer
- 2001-2007: Branded server and cluster solutions manufacturer
- 2007-2012: End-to-end supercomputer solutions, and moving forward

Appro Solution Overview: Appro Xtreme-X Supercomputer
- Based on industry-standard, optimized server platforms with the latest processors and system cooling solutions
- Features extreme compute density, best performance/watt, multiple HPC network connectivity options, and I/O and disk options
- Combines high performance, power efficiency, the Appro HPC software stack, and configuration flexibility with support and services
- Built from cutting-edge server-platform building blocks, with up to four processors per server and a choice of co-processing technologies
- Superior floating-point performance, both peak and operational efficiency
Coupled with the Appro HPC Software Stack:
- Powers an HPC cluster with network, server, cluster, and storage management
- Includes performance monitoring, development tools, and application libraries
- Features instant node provisioning, resource management and job scheduling, network load balancing, failover, remote and revision control, and advanced power functions, with Linux OS support

Appro HPC Software Stack
Offers the necessary HPC software, with multi-Linux-OS support, to run large applications. Powers the HPC cluster with network, server, cluster, and storage management, featuring instant node provisioning, sub-cluster management with network load balancing, failover, and recovery.

Stack components:
- Performance monitoring: HPCC, Perfctr, IOR, PAPI/IPM, netperf
- Development tools: Intel Cluster Studio, PGI (PGI CDK), GNU, PathScale
- Application libraries (MPI): MVAPICH2, OpenMPI, Intel MPI (Cluster Studio)
- Resource management / job scheduling: Grid Engine, SLURM, PBS Professional
- File systems: NFS (v3), local file systems (ext3, ext4, XFS), and the PanFS and Lustre parallel file systems
- Cluster monitoring: ACE (iSCB and OpenIPMI)
- Remote power management: ACE, PowerMan
- Remote console management: ACE, ConMan
- Provisioning: Appro Cluster Engine (ACE) management software
- Operating system: Linux (Red Hat, CentOS, SuSE)

Delivered on the Appro Xtreme-X Supercomputer with Appro turn-key integration and delivery services, plus Appro HPC professional services (on-site installation and/or customized services).

Appro Recent Wins by HPC Workload
- Engaged early with several customers on pre-evaluation and co-development of the future Appro Xtreme-X architecture based on the latest processing technologies
- Delivered over 6 PFlops of supercomputing performance in the last year
- Wins include the DOE TLCC2 clusters and Sandia's first experimental testbed system based on the Intel Many Integrated Core (Intel MIC) architecture, spanning capacity, data-intensive, hybrid, and capability workloads

Appro's Top100 Success :: HPC Market Snapshot

Top 100 Supercomputers, ranked by number of systems:
Rank  Company  # of Systems  System Share (%)
1     IBM      31            31.0%
2     Cray     17            17.0%
3     Appro    10            10.0%
4     Bull     7             7.0%
5     HP       6             6.0%
6     SGI      4             4.0%

Top 500 Supercomputers, ranked by number of systems:
Rank  Company  # of Systems  System Share (%)
1     IBM      213           42.6%
2     HP       138           27.6%
3     Cray     26            5.2%
4     Appro    19            3.8%
5     Bull     16            3.2%
6     SGI      16            3.2%

Appro has the 3rd largest number of systems in the Top 100.
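The share percentages in the tables above follow directly from the system counts. A minimal sketch (the vendor counts are taken from the slide; the `share` helper is illustrative, not part of the original material):

```python
# Reproduce the system-share percentages from the market-snapshot tables:
# share = (systems for a vendor) / (list size) * 100.
top100 = {"IBM": 31, "Cray": 17, "Appro": 10, "Bull": 7, "HP": 6, "SGI": 4}
top500 = {"IBM": 213, "HP": 138, "Cray": 26, "Appro": 19, "Bull": 16, "SGI": 16}

def share(systems: int, list_size: int) -> float:
    """System share as a percentage of the whole list, to one decimal."""
    return round(systems / list_size * 100, 1)

for vendor, n in top100.items():
    print(f"Top100 {vendor}: {n} systems, {share(n, 100)}%")

print(f"Appro Top500 share: {share(top500['Appro'], 500)}%")
```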

Successful Deployments
1. The NNSA's three national labs selected the Appro Xtreme-X next-generation supercomputer for the TLCC2 project.
2. SDSC selected the Appro next-generation supercomputer to meet the requirements of their one-of-a-kind data-intensive supercomputer based on a massive amount of flash storage.
3. Appro and Intel collaborated to meet the specific design configuration of the Appro next-generation supercomputer based on the new Intel Xeon processor E5 product family.

SDSC Gordon: Initiated in 2008, Deployed in 2012

Goals / challenges:
- Funded by the NSF through the Extreme Science and Engineering Discovery Environment (XSEDE) program
- Meet the future needs of the scientific community
- High FLOPS requirement: must exceed 200 TFlops peak
- High IOPS and high bandwidth to local storage
- Creation of virtual SMP supernodes
- Architecture had to be planned three years before deployment: initially designed in 2008 and deployed in 2011/2012
- Take advantage of future technologies: new Intel Xeon processors, InfiniBand interconnect, 3D torus network topology, Intel SSDs

Data-intensive solution: Appro Xtreme-X Supercomputer
- Based on the new Intel Xeon processor E5 product family
- Integrated, tested, validated, and proven in volume shipment
- Full HPC software stack compatibility
- Provided the latest technology features, reliability, configuration flexibility, high-speed networking, and memory density to meet the customer's requirements

Appro Xtreme-X Supercomputer: Gordon Architecture

Speed:               >200 TFLOPS
Memory (RAM):        64 TB
Memory (SSD):        256 TB
Memory (RAM+SSD):    320 TB
IO rate to SSDs:     35 million IOPS
Network bandwidth:   16 GB/s bidirectional
Network latency:     1 µsec
Disk storage:        4 PB
Disk IO bandwidth:   >100 GB/sec
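The memory figures above are aggregates across the machine, and the RAM+SSD row is simply their sum. A quick sanity check (values copied from the table; the per-TB IOPS figure is a derived illustration, not a quoted spec):

```python
# Sanity-check the Gordon memory figures: aggregate DRAM plus aggregate
# flash should equal the quoted RAM+SSD total.
ram_tb = 64       # aggregate DRAM, TB
ssd_tb = 256      # aggregate flash (SSD), TB
total_tb = ram_tb + ssd_tb
print(f"RAM+SSD total: {total_tb} TB")            # matches the 320 TB row

# Derived figure: IOPS delivered per TB of flash (illustrative only).
iops = 35_000_000
print(f"IOPS per TB of flash: {iops / ssd_tb:,.0f}")
```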

DOE NNSA TLCC2 Procurement: LLNL, Sandia, and Los Alamos National Labs

Challenge (TLCC2 capacity):
- Reduce cluster total cost of ownership (TCO)
- Improve Linux cluster availability and scalability
- Minimize integration effort and elapsed time

Procurement requirements:
1. NNSA ASC investment for GFY11+12
2. ~$39-$90M procurement (~$39M plus options), reaching up to 6 PFlops of computing
3. Single contract with multiple delivery sites
4. Cluster architecture based on the scalable-unit concept
5. Reduced power and floor space
6. Long-term vendor partnership based on a proven model

Solution: Appro Xtreme-X Supercomputer
- Based on the new Intel Xeon processor E5 product family
- Integrated, tested, validated, and proven in volume shipment
- Full HPC software stack compatibility
- Provides the latest technology features, reliability, flexibility, energy efficiency, and density to meet the customer's requirements

TLCC2: From One Scalable Unit to Multiple Scalable Units

System parameters:
- ~50 TF/s per scalable unit (SU)
- Dual-socket Intel Sandy Bridge 2.6 GHz nodes; 32 GB DDR3-1600 SDRAM
- InfiniBand: 4+4 GB/s bandwidth over IBA 4x QDR, built from 648-, 324-, and 36-port IBA switches
- Compute and gateway nodes; remote boot from RPS nodes
- IO bandwidth: ~20 GB/s delivered parallel I/O performance
- Software for build and acceptance: TOSS 2.0
- GPU node option with 2 GPUs per node
- Some clusters may have SSDs in compute nodes for accelerated checkpoint capabilities

Cluster sizes:
# SUs   # Nodes
1       162
2       324
4       648
8       1,296
12      1,944
18      2,916
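The cluster-sizes table above scales linearly from the single-SU building block. A minimal sketch of that math (162 nodes per SU is from the table; the ~50 TF/s per SU is the slide's approximate figure, so derived totals are approximate too):

```python
# Scalable-unit arithmetic for TLCC2 cluster configurations: node counts
# and approximate peak performance both scale linearly with the SU count.
NODES_PER_SU = 162    # from the cluster-sizes table
TFLOPS_PER_SU = 50    # approximate, per the slide's "~50 TF/s SU"

def cluster_size(num_sus: int) -> tuple[int, int]:
    """Return (node count, approximate peak TF/s) for num_sus scalable units."""
    return num_sus * NODES_PER_SU, num_sus * TFLOPS_PER_SU

for sus in (1, 2, 4, 8, 12, 18):
    nodes, tf = cluster_size(sus)
    print(f"{sus:2d} SUs -> {nodes:5,d} nodes, ~{tf} TF/s")
```

The largest configuration in the table (18 SUs, 2,916 nodes) works out to roughly 900 TF/s peak per cluster under this approximation.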

Visit Appro's Booth at SC12! Get a sneak preview of our next-generation architecture.

Questions? Appro Supercomputer Solutions. Anthony Kenisky, VP of North America Sales. Learn more at www.appro.com