SC15 SYNOPSIS FOR FEDERAL GOVERNMENT
As a service to our clients, Engility offers a few notes from the 2015 Supercomputing Conference (SC15), the world's premier High Performance Computing (HPC) conference (343 exhibitors and over 12,000 registered attendees). With many of our Federal clients constrained in attendance by budget, Engility attended the show not only to enhance our own capabilities but also to promote the work of our clients and collect data to help identify the best possible HPC solutions. Engility invested in sending a team of more than 20 computational scientists and HPC experts to attend technical sessions, workshops and tutorials; visit vendors; participate in technology roadmap briefings; and meet with several Government decision makers to garner and pass on valuable insight into the state of the $11 billion HPC industry. Engility was also privileged to sponsor the SC15 Poster Session and Scientific Visualization Showcase, facilitating a deeper technical dialog on a broad range of industry, Government and academic HPC research.

Here we offer a few observations, and we invite you to contact our team if you have other topics on your mind or would like more details around any of these trends. Gay Porter, who leads our Technical Solutions Group, can field your questions and direct you to one of our subject matter experts (SMEs). Not familiar with Engility's role in HPC? Check out engilitycorp.com for a timeline of our HPC experience.

State of the Industry

Communicating the HPC Mission: The keynote speaker for 2015, actor Alan Alda, is a science advocate and serves as a scientific communications expert and professor. His address highlighted the importance of communicating science to non-scientists and, more specifically, the reasons behind the science.
Alda illustrated his point with the adage known as the "curse of knowledge," a cognitive bias in which better-informed parties find it extremely difficult to communicate problems from the perspective of lesser-informed parties. When scientists assume audiences know what they know about a technical subject, they can alienate their listeners with jargon and technical data. Instead, Alda highlighted the importance of using simple language, analogies and examples to create a relationship with one's audience and then transmit ideas: "Your ideas don't exist until they get into the other person's head."

He demonstrated his point by inviting a woman onto the stage. He asked her to carry a glass of water from one end of the stage to the other. He then said, "Now do it again, but don't spill a drop or your entire village will die." He continued to address the packed auditorium: "Which trip across the stage was more engaging? We all know there's no village that's going to die, but just putting that in our imagination is enough to give us a sense of tension. It's enough to invest us in the story. Something is at stake, and it's exciting."

Working beside our Government clients, Engility's HPC team has a deep appreciation for the motivation and stories behind the HPC challenges we face every day. Attending SC15 allowed us to tell some of those stories to a broader audience and to gain knowledge that will help us address future challenges. One of those challenges is to help our clients engage with stakeholders who ultimately supply the resources to meet the HPC challenges of today and tomorrow.
The National Strategic Computing Initiative (NSCI): Alda's case for communicating HPC to a non-scientific audience takes on added weight given the White House's recent announcement of the NSCI, which is targeted at maintaining U.S. leadership in HPC. Exascale computing is among the goals of the NSCI: a billion billion (10^18) calculations per second, where flop/s (floating-point operations per second) is the standard measure of computer performance. NSCI offers a platform from which agencies might gain attention for critical HPC challenges. Many of the details for NSCI are still under development, but industry, academia and Government will need to partner to meet the Nation's grand computing challenges. NSCI recognizes the national priority of HPC and its influence on economic growth. In an International Data Corporation (IDC) study funded by the Department of Energy (DOE), researchers found an average return of $356.50 in revenue and $38.70 in profit per dollar invested in HPC.

New Application Areas and New HPC Communities: With the increasing availability and decreasing cost of small- and medium-scale HPC clusters, and the rapid increase in the generation of digital data, many new scientific communities are becoming HPC users. SC15 featured many marquee applications in non-traditional areas (e.g., data analytics, fraud prevention and financial analysis). Further, as HPC expands into non-traditional applications, new programming and implementation tools and techniques are receiving substantial attention.

Trends

The Path to Exascale: Most readers are not concerned with exascale (yet), but the innovations needed for exascale suggest improvements to current computing technology and methods. HPC appears to be forming into two segments: leading-edge HPC (e.g., exascale initiatives) and mainstream (most users). Achieving a thousand-fold increase in current maximum computing power is no small order, and it poses several challenges.
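To put these scales in perspective, a quick back-of-the-envelope sketch (illustrative arithmetic only, not figures from the conference):

```python
# Relating flop/s scales: exascale is a thousand-fold step beyond petascale.
PETA = 10**15   # petaflop/s: 10^15 floating-point operations per second
EXA = 10**18    # exaflop/s: a billion billion (10^18) operations per second

speedup = EXA // PETA
print(speedup)  # 1000

# Time to perform 10^21 operations (a hypothetical workload) at each scale:
work = 10**21
print(work / PETA / 3600, "hours at petascale")   # ~277.8 hours
print(work / EXA, "seconds at exascale")          # 1000.0 seconds
```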
Power: Power is the biggest challenge in the architecture of these systems. The traditional scaling approach to reach exascale will not work: the power and space requirements using current practices would be cost prohibitive. A petaflop/s-class computer currently uses 3 MW of power; an exaflop/s-class computer built with current practices would require 200 MW, meaning a dedicated power plant would be needed to run it. Current initiatives hope to reach exascale with only 20 megawatts (MW) sometime in the early 2020s. New software and algorithms are being designed to run the hardware and facilities in the most efficient way so the power of the system can be fully harnessed.

Software: There was significant discussion of how to prepare current and developing codes to most effectively harness the horsepower of current and next-generation HPC architectures. Migration of legacy science and engineering codes to next-generation architectures has proven to be a continuous challenge, and approaches that mitigate the sensitivity of codes to evolving architectures will continue to increase in importance.

Petascale-in-a-Rack Systems: To get to exascale, petascale-in-a-rack systems are a necessary step. Most HPC users will not need to access grand-challenge exascale performance, but they will gain access to petascale-in-a-rack systems. When exascale comes of age, these smaller systems will face the same within-node challenges as exascale systems, but between-node (networking) challenges will be somewhat mitigated. For users like the U.S. Department of Defense (DoD), the ability to bring a petaflop/s to the battlefield (real-time force modeling), onto the carrier (real-time local weather
forecast), or into the air (real-time situational tracking with predictive capabilities) may be a very beneficial opportunity.

Internode Interconnects: Interconnect networking links computers within a cluster. When looking at the architectures that manage HPC workloads and scalability, Intel Omni-Path may emerge as a true competitor to InfiniBand. Intel will be able to integrate it directly with their processors, and Cray is planning to include it as a future interconnect option. This may add further bifurcation to an already complex HPC environment, with interconnects adding to an already increasing trend toward vendor entrenchment in HPC infrastructure.

Fault Tolerance and Hardware: The petascale-in-a-rack systems that will make up exascale systems will have excellent mean time between failure (MTBF) numbers, and thus only users doing near-exascale runs will have to deal with fault tolerance explicitly. For exascale systems to succeed, individual racks (which will need to host an estimated 2.5 petaflop/s) will have to exhibit excellent hardware reliability. A petascale user, who needs to leverage only one rack, will benefit from the reliability enhancements that vendors are designing into their rack systems.

Fault Tolerance and Software: Resilient software is still an active area of development and research. Message Passing Interface (MPI) is the library used in parallel computing to move information from node to node. Fault Tolerant MPI (FT-MPI) and User Level Failure Mitigation (ULFM) MPI, for example, allow distributed parallel codes to recover from failures. ULFM MPI was integrated into an experimental branch of Open MPI in 2012; that branch is still under development, and ULFM is slated to be included in the MPI 4.0 standard. Other middleware, such as Fenix and Falanx, works with ULFM to make the recovery process more transparent, for example by staging duplicates of data so that it can be restored when lost from a failed node.
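ULFM and FT-MPI are MPI-level facilities; as a language-neutral sketch of the underlying recovery idea (stage a recoverable copy of state, then restore it after a failure), here is a minimal checkpoint/restart illustration. The `advance` step function and checkpoint path are hypothetical, and this is not the ULFM API itself:

```python
import os
import pickle
import tempfile

# Hypothetical checkpoint location; a real code would use fast node-local
# storage such as a burst buffer.
CKPT = os.path.join(tempfile.gettempdir(), "state.ckpt")

def advance(state):
    """Hypothetical simulation step: advance one iteration."""
    return {"step": state["step"] + 1, "value": state["value"] * 2}

def save_checkpoint(state):
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    with open(CKPT, "rb") as f:
        return pickle.load(f)

state = {"step": 0, "value": 1}
for _ in range(3):
    state = advance(state)
    save_checkpoint(state)  # stage a recoverable copy after each step

# Simulate a failure: in-memory state is lost, then restored from the copy.
state = None
state = load_checkpoint()
print(state)  # {'step': 3, 'value': 8}
```

Middleware like Fenix automates exactly this staging and restoration so the application sees the recovery as nearly transparent.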
Parallelism divides large problems into smaller ones that can be solved simultaneously.

Accelerators: Exascale is coming and will involve accelerators of some sort. Accelerators will be in half of all new systems, and there is still healthy competition among CPU, GPU, Intel Phi and hybrid architectures for HPC. Homogeneous systems have given way to heterogeneous computing, which is here to stay. Hybrid architectures will continue to use CPUs and accelerators together. GPGPU was viewed more favorably than last year. Over the next 10 years, the landscape appears even more heterogeneous, possibly including OpenPOWER systems, ARM64 or some other new technology. Regardless, GPUs or similar accelerators will be abundant in the HPC space, driven by power consumption. Interconnect developments may hamper the ability to build high-performance systems that include a hybrid of different CPUs and different accelerators (GPUs and Intel Phis).

A Central Processing Unit (CPU) is a general-purpose core: flexible, with a large instruction set. A Graphics Processing Unit (GPU) can move very fast in one direction and has a small instruction set; the GPU is the dragster, fast in the straightaway, compared to the CPU sports car, which can handle the turns in the course. General-Purpose GPUs (GPGPUs) are GPUs hijacked by scientists for computation rather than graphical output.
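The decomposition idea above can be sketched in a few lines. This is a deliberately simple illustration, not an HPC code: `partial_sum` and `decomposed_sum` are invented names, and CPython threads here model the division of work rather than true hardware parallelism (real codes use MPI ranks, OpenMP threads or accelerator kernels for that):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Solve one small piece of the problem independently.
    lo, hi = bounds
    return sum(range(lo, hi))

def decomposed_sum(n, workers=4):
    # Divide range(n) into contiguous chunks, one per worker.
    step = n // workers
    # The last chunk absorbs any remainder so the chunks cover range(n).
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Solve the pieces concurrently, then combine the partial results.
        return sum(pool.map(partial_sum, chunks))

print(decomposed_sum(1_000_000))  # 499999500000, same as sum(range(1_000_000))
```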
Engility computational scientists authored a conference-accepted poster that explored how to accelerate runtime on parallel computing resources. A Multi-Zone Scalar Pentadiagonal (SP-MZ) NAS Parallel Benchmark was used to explore the benefits of OpenACC, OpenMP and compiler optimizations on GPU and Xeon Phi accelerator platforms. The poster is available upon request.

Data Analytics: High-performance data analytics (HPDA) is gaining attention as visualization, data management, data movement and data analysis become more important. IDC HPC market analysts, in their industry brief, indicated that data-intensive simulation and analytics are growing, and that big data combined with HPC is creating new solutions. In fact, the marriage is adding many new users and buyers to the HPC space: the HPDA server and storage market is growing three times faster than the overall HPC market. Data analytics is emerging as a possible fourth method of performing science, alongside the first three: theory, experimentation and computational modeling. Many vendors used the phrase "data-driven compute" to describe how performance is driven by data movement and migration costs rather than flop/s cost. This becomes more and more important, even for problems that are not traditionally big data, as the scale of the problem increases. Exascale will be heavily affected by the movement of data, and economies will dictate computing where the data is already located.

The Five Vs of Big Data: Because of the complexity, volume and varying data formats involved (numerical, text, pattern-based and probabilistic data), big data challenges are often referred to in terms of the five Vs: volume, velocity, variety, variability and value. Successfully transforming a sea of data into useful, actionable information requires computer science, computational hardware and software technology experts, domain SMEs and individuals with experience in the handling and processing of large data sets.
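A rough worked example of why data movement, rather than flop/s, can dominate at scale (all sizes and rates are round-number assumptions for illustration, not measurements from the conference):

```python
# Compare the time to move a large dataset with the time to compute on it.
DATA_BYTES = 10**15          # a hypothetical 1 PB of simulation output
LINK_BYTES_PER_S = 10**10    # assumed 10 GB/s storage/network bandwidth
FLOPS = 10**15               # a petaflop/s-class machine
OPS_PER_BYTE = 1             # assume one floating-point op per byte touched

move_seconds = DATA_BYTES / LINK_BYTES_PER_S
compute_seconds = DATA_BYTES * OPS_PER_BYTE / FLOPS

print(move_seconds)     # 100000.0 seconds (~28 hours) just moving the data
print(compute_seconds)  # 1.0 second of raw compute at a petaflop/s
```

Under these assumptions the transfer dwarfs the computation by five orders of magnitude, which is why economies dictate computing where the data already lives.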
For big data problems, however, the input/output (I/O) stack becomes even more critical than in traditional HPC systems, as the size of the problem will sometimes be so large that it cannot reside in memory, and the system cannot save all simulation results, hindering the analysis and extraction of useful information. Compute throughput has outpaced, and continues to outpace, I/O bandwidth. To address this, methods that effectively use current and future storage systems will need to be developed. Work on the latter is continual, and some of the coming technologies include burst buffers and greater use of non-volatile memory, such as flash or Intel 3D XPoint. There are a number of solutions, depending upon the application area:

- Better I/O: use libraries for smart parallel I/O, but this is a short-term solution
- Compression: reduces data output, but only by so much, so it is also a short-term solution
- Burst buffers: NVRAM (non-volatile memory) or other fast storage caching data for the spinning-disk storage; Cray has recently begun marketing such technology under the name DataWarp
- In situ analysis: analyze the results within the model itself and avoid storing full data; Intel and Cray expect that, as chip densities increase, processors will have analytics engines and other specialized units built into the chip, making in situ analysis more attractive
- Intelligent subsetting: pull out only representative samples from the data for storage

Regardless of how the data is stored, analyzing it will also be challenging:
- Indexing the data helps speed searching and querying
- Querying with indexing enables fast data analysis
- Parallel I/O libraries are adding integrated indexing capability

Cloud Computing: Cloud computing will continue to gain ground as it is used in conjunction with other solutions and to accommodate burst computing. Security, scale, data movement and complexity make it difficult to use as a standalone approach, but it may evolve to support production computing surge requirements. Although not as glamorous as large supercomputers, there is a growing need to virtualize and provide Amazon-like services for operational institutions to automate the administrative overhead of provisioning HPC clusters. Virtual instances of smaller HPC clusters for development work are already in use, with possible applications to virtual workstations.

Software: Software is, and will continue to be, the bottleneck in the ability to fully exploit emerging HPC architectures. We, as an industry, have years of work ahead of us. How do we deal (from a software perspective) with the changing HPC architectures as we enter the pre-exascale era and plan for exascale in the 2020s? Topics we encountered at the show include:

- How best to port legacy code to new architectures
- Defining the tipping point for updating legacy code versus starting with a new code baseline
- Designing and writing new codes that will make the most efficient use of new architectures
- Asynchronous many-task programming models as elements of exascale and pre-exascale software approaches; for example, Legion allows for programming in asynchrony by explicitly controlling data and data movement, and uses a partitioned global address space (PGAS)
- OpenACC (open accelerators), an emerging standard with significant support, though at this time it does not have the maturity of the CUDA (compute unified device architecture) software stack
- Apache Spark, well suited to machine learning
- Kokkos, which lets developers write code once in pure C++ using Kokkos templates; at compile time, the actual data structures to use are determined to best utilize the host or accelerators

The SC15 Best Poster Award went to "Parallelization, Acceleration, and Advancement of Dissipative Particle Dynamics (DPD) Methods." Engility authors, partnering with DoD and academia, applied accelerators, code optimization and improved algorithms to increase the performance of LAMMPS, a widely used materials modeling code. The poster and an executive summary are available upon request.

SC15 Takeaways and 2016

In the words of one Engility attendee, "SC is a fertile place for initiating and strengthening collaborations among HPC researchers and practitioners. It's an indispensable event for networking with peers in HPC as well as learning about the timetables for any big changes in the kinds of machines the HPC vendors will be supplying in the future." We know that we can't send all of our employees and clients to Supercomputing Conferences, but we hope this synopsis has provided a few glimpses into the state of HPC and imminent trends. A common theme throughout is the need for strong partnership between Government, academia and industry as we address the HPC challenges presented by your critical missions. Please reach out to Engility for more information on any of these topics or to discuss your HPC challenges.

For more information, please contact:
Gay Porter
Vice President, Technical Solutions Group
gay.porter@engilitycorp.com

This document consists of Engility Corporation general capabilities information that does not contain controlled technical data as defined by the International Traffic in Arms Regulations (ITAR) Part or Export Administration Regulations (EAR) Part. REF: EGL-CR
More informationBig Data Performance Growth on the Rise
Impact of Big Data growth On Transparent Computing Michael A. Greene Intel Vice President, Software and Services Group, General Manager, System Technologies and Optimization 1 Transparent Computing (TC)
More information- An Essential Building Block for Stable and Reliable Compute Clusters
Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative
More informationSoftware-defined Storage Architecture for Analytics Computing
Software-defined Storage Architecture for Analytics Computing Arati Joshi Performance Engineering Colin Eldridge File System Engineering Carlos Carrero Product Management June 2015 Reference Architecture
More informationBIG DATA-AS-A-SERVICE
White Paper BIG DATA-AS-A-SERVICE What Big Data is about What service providers can do with Big Data What EMC can do to help EMC Solutions Group Abstract This white paper looks at what service providers
More informationOptimizing Shared Resource Contention in HPC Clusters
Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs
More informationPerformance Evaluation of NAS Parallel Benchmarks on Intel Xeon Phi
Performance Evaluation of NAS Parallel Benchmarks on Intel Xeon Phi ICPP 6 th International Workshop on Parallel Programming Models and Systems Software for High-End Computing October 1, 2013 Lyon, France
More informationHARNESS project: Managing Heterogeneous Compute Resources for a Cloud Platform
HARNESS project: Managing Heterogeneous Compute Resources for a Cloud Platform J. G. F. Coutinho 1, O. Pell 2, E. O Neill 3, P. Sanders 2, J. McGlone 3, P. Grigoras 1, W. Luk 1, and C. Ragusa 2 1 Imperial
More informationSystem Models for Distributed and Cloud Computing
System Models for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Classification of Distributed Computing Systems
More informationStream Processing on GPUs Using Distributed Multimedia Middleware
Stream Processing on GPUs Using Distributed Multimedia Middleware Michael Repplinger 1,2, and Philipp Slusallek 1,2 1 Computer Graphics Lab, Saarland University, Saarbrücken, Germany 2 German Research
More informationThe Next Phase of Datacenter Network Resource Management and Automation March 2011
I D C T E C H N O L O G Y S P O T L I G H T The Next Phase of Datacenter Network Resource Management and Automation March 2011 Adapted from Worldwide Datacenter Network 2010 2015 Forecast and Analysis
More informationDr. Raju Namburu Computational Sciences Campaign U.S. Army Research Laboratory. The Nation s Premier Laboratory for Land Forces UNCLASSIFIED
Dr. Raju Namburu Computational Sciences Campaign U.S. Army Research Laboratory 21 st Century Research Continuum Theory Theory embodied in computation Hypotheses tested through experiment SCIENTIFIC METHODS
More informationMake the Most of Big Data to Drive Innovation Through Reseach
White Paper Make the Most of Big Data to Drive Innovation Through Reseach Bob Burwell, NetApp November 2012 WP-7172 Abstract Monumental data growth is a fact of life in research universities. The ability
More informationSQL Server 2012 Parallel Data Warehouse. Solution Brief
SQL Server 2012 Parallel Data Warehouse Solution Brief Published February 22, 2013 Contents Introduction... 1 Microsoft Platform: Windows Server and SQL Server... 2 SQL Server 2012 Parallel Data Warehouse...
More informationAchieving Performance Isolation with Lightweight Co-Kernels
Achieving Performance Isolation with Lightweight Co-Kernels Jiannan Ouyang, Brian Kocoloski, John Lange The Prognostic Lab @ University of Pittsburgh Kevin Pedretti Sandia National Laboratories HPDC 2015
More informationCA Technologies Big Data Infrastructure Management Unified Management and Visibility of Big Data
Research Report CA Technologies Big Data Infrastructure Management Executive Summary CA Technologies recently exhibited new technology innovations, marking its entry into the Big Data marketplace with
More informationThe Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage
The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage sponsored by Dan Sullivan Chapter 1: Advantages of Hybrid Storage... 1 Overview of Flash Deployment in Hybrid Storage Systems...
More informationHigh Performance Computing in CST STUDIO SUITE
High Performance Computing in CST STUDIO SUITE Felix Wolfheimer GPU Computing Performance Speedup 18 16 14 12 10 8 6 4 2 0 Promo offer for EUC participants: 25% discount for K40 cards Speedup of Solver
More informationPrice/performance Modern Memory Hierarchy
Lecture 21: Storage Administration Take QUIZ 15 over P&H 6.1-4, 6.8-9 before 11:59pm today Project: Cache Simulator, Due April 29, 2010 NEW OFFICE HOUR TIME: Tuesday 1-2, McKinley Last Time Exam discussion
More informationWHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression
WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression Sponsored by: Oracle Steven Scully May 2010 Benjamin Woo IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA
More informationHigh-performance computing: Use the cloud to outcompute the competition and get ahead
High-performance computing: Use the cloud to outcompute the competition and get ahead High performance computing (HPC) has proved to be effective in offering highly analytical workloads the benefits of
More informationGet More Scalability and Flexibility for Big Data
Solution Overview LexisNexis High-Performance Computing Cluster Systems Platform Get More Scalability and Flexibility for What You Will Learn Modern enterprises are challenged with the need to store and
More informationOPTIMIZING SERVER VIRTUALIZATION
OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)
More informationInfiniBand Update Addressing new I/O challenges in HPC, Cloud, and Web 2.0 infrastructures. Brian Sparks IBTA Marketing Working Group Co-Chair
InfiniBand Update Addressing new I/O challenges in HPC, Cloud, and Web 2.0 infrastructures Brian Sparks IBTA Marketing Working Group Co-Chair Page 1 IBTA & OFA Update IBTA today has over 50 members; OFA
More informationRemoving Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering
Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays Red Hat Performance Engineering Version 1.0 August 2013 1801 Varsity Drive Raleigh NC
More informationProgramming models for heterogeneous computing. Manuel Ujaldón Nvidia CUDA Fellow and A/Prof. Computer Architecture Department University of Malaga
Programming models for heterogeneous computing Manuel Ujaldón Nvidia CUDA Fellow and A/Prof. Computer Architecture Department University of Malaga Talk outline [30 slides] 1. Introduction [5 slides] 2.
More informationPetascale Software Challenges. Piyush Chaudhary piyushc@us.ibm.com High Performance Computing
Petascale Software Challenges Piyush Chaudhary piyushc@us.ibm.com High Performance Computing Fundamental Observations Applications are struggling to realize growth in sustained performance at scale Reasons
More informationAffordable, Scalable, Reliable OLTP in a Cloud and Big Data World: IBM DB2 purescale
WHITE PAPER Affordable, Scalable, Reliable OLTP in a Cloud and Big Data World: IBM DB2 purescale Sponsored by: IBM Carl W. Olofson December 2014 IN THIS WHITE PAPER This white paper discusses the concept
More informationStorage, Cloud, Web 2.0, Big Data Driving Growth
Storage, Cloud, Web 2.0, Big Data Driving Growth Kevin Deierling Vice President of Marketing October 25, 2013 Delivering the Highest ROI Across all Markets HPC Web 2.0 DB/Enterprise Cloud Financial Services
More informationArchitectures for Big Data Analytics A database perspective
Architectures for Big Data Analytics A database perspective Fernando Velez Director of Product Management Enterprise Information Management, SAP June 2013 Outline Big Data Analytics Requirements Spectrum
More informationUnderstanding the Value of In-Memory in the IT Landscape
February 2012 Understing the Value of In-Memory in Sponsored by QlikView Contents The Many Faces of In-Memory 1 The Meaning of In-Memory 2 The Data Analysis Value Chain Your Goals 3 Mapping Vendors to
More informationSQLstream Blaze and Apache Storm A BENCHMARK COMPARISON
SQLstream Blaze and Apache Storm A BENCHMARK COMPARISON 2 The V of Big Data Velocity means both how fast data is being produced and how fast the data must be processed to meet demand. Gartner The emergence
More informationSolving I/O Bottlenecks to Enable Superior Cloud Efficiency
WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one
More informationCluster Scalability of ANSYS FLUENT 12 for a Large Aerodynamics Case on the Darwin Supercomputer
Cluster Scalability of ANSYS FLUENT 12 for a Large Aerodynamics Case on the Darwin Supercomputer Stan Posey, MSc and Bill Loewe, PhD Panasas Inc., Fremont, CA, USA Paul Calleja, PhD University of Cambridge,
More informationExascale Challenges and General Purpose Processors. Avinash Sodani, Ph.D. Chief Architect, Knights Landing Processor Intel Corporation
Exascale Challenges and General Purpose Processors Avinash Sodani, Ph.D. Chief Architect, Knights Landing Processor Intel Corporation Jun-93 Aug-94 Oct-95 Dec-96 Feb-98 Apr-99 Jun-00 Aug-01 Oct-02 Dec-03
More informationDr. John E. Kelly III Senior Vice President, Director of Research. Differentiating IBM: Research
Dr. John E. Kelly III Senior Vice President, Director of Research Differentiating IBM: Research IBM Research Priorities Impact on IBM and the Marketplace Globalization and Leverage Balanced Research Agenda
More informationOptimizing GPU-based application performance for the HP for the HP ProLiant SL390s G7 server
Optimizing GPU-based application performance for the HP for the HP ProLiant SL390s G7 server Technology brief Introduction... 2 GPU-based computing... 2 ProLiant SL390s GPU-enabled architecture... 2 Optimizing
More informationSummit and Sierra Supercomputers:
Whitepaper Summit and Sierra Supercomputers: An Inside Look at the U.S. Department of Energy s New Pre-Exascale Systems November 2014 1 Contents New Flagship Supercomputers in U.S. to Pave Path to Exascale
More informationBSC vision on Big Data and extreme scale computing
BSC vision on Big Data and extreme scale computing Jesus Labarta, Eduard Ayguade,, Fabrizio Gagliardi, Rosa M. Badia, Toni Cortes, Jordi Torres, Adrian Cristal, Osman Unsal, David Carrera, Yolanda Becerra,
More informationHPC Market Update, HPC Trends In the Oil/Gas Sector and IDC's Top 10 Predictions for 2014. Earl Joseph, HPC Program Vice President ejoseph@idc.
HPC Market Update, HPC Trends In the Oil/Gas Sector and IDC's Top 10 Predictions for 2014 Earl Joseph, HPC Program Vice President ejoseph@idc.com IDC Has >1,000 Analysts In 52 Countries 2 IDC s HPC Team
More informationAgenda. HPC Software Stack. HPC Post-Processing Visualization. Case Study National Scientific Center. European HPC Benchmark Center Montpellier PSSC
HPC Architecture End to End Alexandre Chauvin Agenda HPC Software Stack Visualization National Scientific Center 2 Agenda HPC Software Stack Alexandre Chauvin Typical HPC Software Stack Externes LAN Typical
More informationBig Fast Data Hadoop acceleration with Flash. June 2013
Big Fast Data Hadoop acceleration with Flash June 2013 Agenda The Big Data Problem What is Hadoop Hadoop and Flash The Nytro Solution Test Results The Big Data Problem Big Data Output Facebook Traditional
More informationHow To Use Hp Vertica Ondemand
Data sheet HP Vertica OnDemand Enterprise-class Big Data analytics in the cloud Enterprise-class Big Data analytics for any size organization Vertica OnDemand Organizations today are experiencing a greater
More informationSAS and Oracle: Big Data and Cloud Partnering Innovation Targets the Third Platform
SAS and Oracle: Big Data and Cloud Partnering Innovation Targets the Third Platform David Lawler, Oracle Senior Vice President, Product Management and Strategy Paul Kent, SAS Vice President, Big Data What
More information