Condor Supercomputer



Introducing the Air Force Research Laboratory Information Directorate Condor Supercomputer
DoD's Largest Interactive Supercomputer
Ribbon Cutting Ceremony, AFRL, Rome NY, 1 Dec 2010
Approved for Public Release [88ABW-2010-6225] Distribution Unlimited

Ribbon Cutting Ceremony Agenda, 1 Dec 2010

0900-0930 : Assemble in Urtz Conference Room
0930-0945 : Welcome from the AFRL Information Director and Chief Scientist
0945-1000 : Move to Naresky Facility
1000-1015 : Ribbon Cutting
1015-1115 : Presentations and Demos

A Note from Dr. Richard W. Linderman, ST, Chief Scientist, AFRL Information Directorate

Thank you for joining us today to cut the ribbon on a unique, world-class high performance computer that is the fastest interactive supercomputer in the Department of Defense (DoD). This 500 trillion operations per second machine can process real-time sensor data at tens of gigabytes per second. Condor can also store and sort more than 400 terabytes of information, equivalent to 40 times the entire printed collection of the Library of Congress. This machine will push the frontiers of science and engineering in diverse fields such as signal and image processing, information fusion and exploitation, large-scale cyber simulations, and basic research into models of cognition.

Condor continues an AFRL Information Directorate tradition of pushing the state of the art in high performance computing (HPC). It builds upon earlier innovations in real-time supercomputing, embedded processing, and low-cost HPC. The new Condor machine achieves a 10X price-performance advantage and a 15X power-performance advantage over other large HPC systems by combining PlayStation 3 gaming consoles and commercial graphics cards with high performance servers. It is lean and green! It is also made freely available to DoD research and development organizations and their contractors.

What is the Condor Supercomputer?

The Air Force Research Laboratory's Information Directorate has recently designed and assembled what is currently the DoD's largest interactive supercomputer. Our Condor Cluster is a heterogeneous supercomputer built from commercial-off-the-shelf commodity components, including 1,716 Sony PlayStation 3 (PS3) game consoles and 168 General Purpose Graphics Processing Units (GPGPUs).

The computing power of supercomputers is measured in FLOPS, or Floating Point Operations per Second. A floating point operation is a single operation performed by a computer. A typical household laptop can achieve approximately 10 billion FLOPS of computing power. AFRL's Condor supercomputer will achieve 500 trillion FLOPS of processing power. That is equivalent to about 50,000 laptops, and about one fifth the speed of the fastest computer in the world, for a small fraction of the cost to develop and operate!

Condor is a green supercomputer, designed to consume significantly less energy than comparable supercomputers. It achieves approximately 1.5 GigaFLOPS per Watt, where a typical supercomputer achieves only 100 MegaFLOPS per Watt: a 15X improvement in performance for the same power. The sleep mode inherent in the PS3 further contributes to power savings; we are able to intelligently cycle between operational mode and sleep mode, which requires less cooling and yields further energy savings.

A primary application of Condor is radar image processing for urban surveillance: Condor can handle the complex computations required to create a detailed image of an entire city from radar data. AFRL also plans to implement neuromorphic computing, which involves emulating the human brain to reason over complex problems.
Potential future applications include quantum computing simulations, intrusion detection system modeling for cyber operations, wireless cloud computing, and computing for nanotechnology material development.
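
The laptop-equivalence and power-efficiency claims above reduce to simple arithmetic. The short Python sketch below reproduces them using only the figures quoted in this brochure; the 10 billion FLOPS laptop and 100 MegaFLOPS-per-Watt baselines are the stated assumptions, not independent measurements.

```python
# Back-of-envelope check of the Condor figures quoted above.
# All inputs are the values stated in this brochure, not measurements.

condor_flops = 500e12          # 500 trillion FLOPS (500 TFLOPS)
laptop_flops = 10e9            # ~10 billion FLOPS for a typical household laptop

laptop_equivalents = condor_flops / laptop_flops
print(f"Condor ~= {laptop_equivalents:,.0f} laptops")      # ~50,000 laptops

condor_efficiency = 1.5e9      # ~1.5 GigaFLOPS per Watt
typical_efficiency = 100e6     # ~100 MegaFLOPS per Watt for a typical supercomputer

improvement = condor_efficiency / typical_efficiency
print(f"Power-performance advantage: {improvement:.0f}X")   # 15X
```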

Building the Condor (February 2010 - November 2010)

- 1,700 PS3s arrive at AFRL/RI in February 2010.
- Testing, staging, and accountability of all 1,700 PS3s.
- Fabrication and installation of 44 PS3s on each cart. The cart assembly, network wiring, and design were all performed by AFRL/RIT personnel.
- Assembly of 25 carts (1,100 PS3s) and wrapping of the carts for storage.
- Moving all 25 carts into the Naresky facility for initial power testing, staging of the Linux operating system, and final network assembly.
- Networking all PS3s and Condor server head nodes together with state-of-the-art Ethernet and InfiniBand switches. A Linux boot loader is installed in preparation to install and operate the Linux operating system within the Condor Cluster.
- Power, A/C, floor, and wall upgrades to the Naresky facility to accommodate the Condor Cluster.
- Final integration, testing, and modifications to the HPC-ARC center, including the 150-megapixel data wall and integration of the visualization computers on the 10GbE HPC network.
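
As a quick sanity check on the inventory figures in the timeline above, the minimal sketch below multiplies out the cart counts; all numbers are those stated in this section.

```python
# Cart inventory figures as stated in the "Building the Condor" timeline.
ps3s_per_cart = 44
carts_assembled = 25

ps3s_on_carts = ps3s_per_cart * carts_assembled
print(ps3s_on_carts)                   # 1100 -- matches the "25 carts (1,100 PS3s)" figure

ps3s_delivered = 1700                  # consoles delivered to AFRL/RI in February 2010
ps3s_in_full_system = 1716             # PS3 count quoted for the completed Condor Cluster
print(ps3s_delivered - ps3s_on_carts)  # 600 consoles remaining after the first 25 carts
```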

AFRL Information Directorate DoD Affiliated Resource Center

The Information Directorate maintains and operates an Affiliated Resource Center for the DoD High Performance Computing Modernization Program (HPCMP). The HPCMP funded the development of Condor under its Dedicated High Performance Computing Project Investment program. In addition to Condor, we maintain the following clusters to support DoD supercomputing needs. The clusters are available for various DoD applications, and users can access the machines remotely to develop and experiment with processing capabilities for military systems design, development, test, and evaluation.

53 TERAFLOPS Cell Broadband Engine (CBE) Cluster: This cluster enabled early access to IBM's CBE chip technology, included in the low-priced commodity PS3 gaming consoles. It is a heterogeneous cluster with powerful sub-cluster head nodes, comprising 14 sub-clusters, each with 24 PS3s and one server containing dual quad-core Xeon chips.

Emulation Laboratory (EMULAB) Cluster: The EMULAB cluster provides researchers with an easy to use, flexible, and repeatable environment for experimentation on networks, software-intensive systems, cyber operations, and areas requiring modeling and simulation. It comprises 3 head nodes and 92 compute nodes (Dell 2950 III, dual quad-core Xeon chips). The network configuration uses state-of-the-art Cisco switching equipment, with 6 network interface cards on each compute node.

HORUS Cluster: HORUS provides General Purpose Graphics Processing Units (GPGPUs) in a heterogeneous computing environment for use onboard aircraft to process synthetic aperture radar images. The cluster has been flown on experiments in support of multi-national programs. With 16 Nvidia GPGPUs, a total of 22 trillion FLOPS (22 TERAFLOPS) of computing power is achieved. A rough per-node breakdown of these cluster figures is sketched after this section. Other smaller HPC clusters are also currently under development at our facility.

DoD's High Performance Computing Modernization Program (HPCMP)

From the HPCMP Web Page (http://www.hpcmo.hpc.mil): The High Performance Computing Modernization Program (HPCMP) was initiated in 1992 in response to congressional direction to modernize the high performance computing capabilities of Department of Defense laboratories. The HPCMP was assembled out of a collection of small high performance computing departments, each with a rich history of supercomputing experience, that had independently evolved within the Army, Air Force, and Navy laboratories and test centers.

HPC tools solve complicated and time-consuming problems. Researchers expand their toolkit to solve modern military and security problems using HPC hardware and software. Programs can assess technical and management risks, such as performance, time, available resources, cost, and schedule. Through HPC solutions, programs gain knowledge to protect our military through new weapon systems, prepare US aircraft for overseas deployments in Afghanistan and Iraq, and assist long-term weather predictions to plan humanitarian and military operations throughout the world. The HPCMP supports DoD objectives through research, development, test, and evaluation (RDT&E). Scientists and engineers who focus on Science and Technology (S&T) to solve complex defense challenges benefit from HPC innovation. The HPCMP is organized into three components to achieve its goals: HPC Centers, Networking/Security, and Software Applications Support.
Affiliated Resource Centers (ARCs) are DoD Laboratories and test centers that acquire and manage HPC resources as a part of their local infrastructure. ARCs share their resources with the broader DoD HPC user community with HPCMP coordination. AFRL Information Directorate operates one of only four ARCs nationwide.
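
As referenced in the cluster descriptions above, the Python sketch below gives a rough per-node breakdown implied by the quoted aggregates. It assumes, for simplicity, that each cluster's total FLOPS comes entirely from its PS3s or GPGPUs (head-node contributions ignored), so the per-node values are approximations rather than published specifications.

```python
# Rough per-node throughput implied by the cluster figures quoted above.
# Assumption: aggregate FLOPS attributed entirely to the PS3s / GPGPUs (head nodes ignored).

# 53 TERAFLOPS Cell Broadband Engine cluster: 14 sub-clusters x 24 PS3s each.
cbe_total_flops = 53e12
cbe_ps3_count = 14 * 24                            # 336 PS3s
print(cbe_total_flops / cbe_ps3_count / 1e9)       # ~158 GFLOPS per PS3 (approx.)

# HORUS cluster: 22 TERAFLOPS from 16 Nvidia GPGPUs.
horus_total_flops = 22e12
horus_gpu_count = 16
print(horus_total_flops / horus_gpu_count / 1e12)  # ~1.4 TFLOPS per GPGPU (approx.)
```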

The mission of the AFRL Information Directorate is to lead the discovery, development, and integration of affordable warfighting information technologies for our air, space and cyberspace force. Approved for Public Release [88ABW-2010-6225] Distribution Unlimited