Life Sciences: Opening the pipe to faster research, discovery, computation and resource sharing
Abstract

Advances in Information Technology (IT) are significantly improving the speed at which organizations dedicated to improving lives through research and development conduct business today. To dramatically improve the efficiency of their computing infrastructure, life sciences organizations and educational institutions engaged in scientific research, computational chemistry, systems biology, and chemical mixing and analysis are turning to Voltaire's InfiniBand-based solutions.

Contents

- The Life Sciences Challenge
- Today's Solutions
  - Computational Chemistry
  - Systems Biology
  - Chemical Mixing and Material Analysis
- A Better Way
  - Building High Performance Clusters
  - High-performance InfiniBand Switches
  - Fast Storage Access
  - Visualization Solutions
- Putting It All Together
  - Key Features & Benefits
  - Tested & Certified with Leading Applications
- Customer Success Stories
  - Swiss Institute of BioInformatics (SIB)
  - Tokyo Institute of Technology (TiTech)
- About Voltaire

The Life Sciences Challenge

There are many focus areas for research and development in the life sciences. Life science IT managers are challenged to provide the right solutions for computational chemistry, systems biology, and chemical mixing and analysis. These areas share several challenges:

1. Conducting more simulations per day
2. Alleviating storage bottlenecks associated with exponential data growth
3. Implementing cost-effective solutions
4. Allocating compute and storage resources dynamically to meet researchers' needs

Complex simulations can take days or weeks to run. When simulations take longer, scientific discoveries are delayed, slowing commercialization and increasing competitive threats.
With faster simulations, more complex models can be analyzed, additional assumptions can be tested, and further modifications become possible, leading to more efficient development cycles. Accelerating simulations has clear benefits.

The key to accelerating life science application performance is to select high performance computing systems that eliminate bottlenecks associated with inter-processor communications (IPC) and storage connectivity. Large symmetric multi-processing (SMP) machines have been used in the past as an answer for generating massive compute power in data centers. However, these proprietary, expensive systems have given way to cluster and grid architectures built from lower-cost commodity elements that offer excellent performance at a significantly lower cost.

Because of the ready availability of Ethernet, many of today's clusters and grids are built with Ethernet as the interconnect. While Gigabit Ethernet-based clustering is less expensive than SMP-based architectures, it tends to be very inefficient. For applications that rely on bandwidth or memory sharing, or require large amounts of data transfer, InfiniBand-based interconnects significantly improve cluster efficiency by providing high bandwidth and low latency without increasing CPU utilization.
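The bandwidth and latency trade-off described above can be made concrete with a simple first-order cost model (a sketch, not a vendor benchmark): transfer time ≈ latency + message size / bandwidth. The link parameters below are the nominal figures quoted in this brief; real throughput also depends on protocol overhead and CPU load.

```python
# First-order model of message transfer time on a cluster interconnect:
#   t(size) = latency + size / bandwidth
# Link parameters are the nominal figures quoted in this brief
# (GbE: 1 Gb/s, ~10 us latency; InfiniBand 4X DDR: 20 Gb/s, ~2 us latency).

def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Seconds to move size_bytes over a link, ignoring protocol overhead."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

GBE = dict(latency_s=10e-6, bandwidth_bps=1e9)     # Gigabit Ethernet
IB_DDR = dict(latency_s=2e-6, bandwidth_bps=20e9)  # InfiniBand 4X DDR

for size in (1_024, 1_048_576):                    # 1 KB and 1 MB messages
    t_gbe = transfer_time(size, **GBE)
    t_ib = transfer_time(size, **IB_DDR)
    print(f"{size:>9} B: GbE {t_gbe * 1e6:8.1f} us, "
          f"IB {t_ib * 1e6:8.1f} us, ratio {t_gbe / t_ib:.1f}x")
```

The model shows why both link properties matter: small messages (common in MPI-heavy codes) are dominated by latency, while large transfers (storage traffic) are dominated by bandwidth.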
When clusters are built using high performance servers, storage and interconnects, organizations can experience drastically faster simulations and modeling while continuing to decrease the cost of providing computing infrastructure.

Today's Solutions

Many life sciences organizations employ last-generation, less-efficient platforms that use proprietary or Ethernet-based server interconnects. This approach does not provide the bandwidth needed for the complex simulations and data transfer that are so common with life sciences applications today. In the past, applications were embarrassingly parallel and were thought not to require a high speed interconnect such as InfiniBand. The latest commercial, open source and university-created applications have been optimized to take advantage of InfiniBand's improved performance and data transfer speeds.

Computational Chemistry

Life science organizations in the area of computational chemistry conduct drug research and discovery as well as biochemical analysis and modeling. These areas share five primary challenges:

- Growing size of formulas and problems
- Need to accelerate development of new life-saving drugs
- More comprehensive safety and drug-interaction identification requirements
- Applications that are not optimized for today's powerful CPUs
- Severe price sensitivity and cost pressure from various groups

Simulations and modeling for chemical analysis involve huge formulas and constant number crunching. Until recently, most solutions used Ethernet as an interconnect. Because of the high CPU overhead related to the handling of communication requests, servers spend more cycles managing inter-processor communications than actually solving computational tasks.
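The communication overhead described above is typically quantified with a ping-pong microbenchmark: bounce a tiny message back and forth and time it. The sketch below runs one over a local socket pair; it is a stand-in for a real two-node MPI ping-pong (loopback numbers say nothing about an actual fabric, but the measurement pattern is the same).

```python
import socket
import time

def pingpong_latency(iters=1000, msg=b"x"):
    """Estimate one-way latency of a 1-byte ping-pong over a socket pair.

    Stand-in for a two-node MPI ping-pong: real interconnect latency is
    measured the same way, with sender and receiver on separate hosts.
    """
    a, b = socket.socketpair()
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(msg)       # ping
        b.recv(len(msg))
        b.sendall(msg)       # pong
        a.recv(len(msg))
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / (2 * iters)   # half the round-trip time

print(f"loopback one-way latency: {pingpong_latency() * 1e6:.1f} us")
```

Each hop traverses the kernel socket stack, which is exactly the per-message CPU cost that RDMA-capable interconnects avoid.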
Systems Biology

Life science organizations conducting analyses and modeling in many different chemical and molecular areas share three primary challenges:

- Simulations of larger, more complex systems and modeling of cell behaviors require high bandwidth, low latency, and fast storage access
- File size and quantity are growing exponentially, leading to I/O bottlenecks
- Grids are used to share data, discoveries and models

Systems biology involves many researchers who constantly retrieve, change, and replace extremely large files, causing severe bottlenecks in the network. Because so much data is shared across organizations, life sciences organizations use large-scale grids.

Chemical Mixing and Material Analysis

Life science organizations conducting research and discovery in chemical analysis and in material, adhesive, flavor and scent modeling share three primary challenges:

- Growing size of formulas and problems causes I/O bottlenecks
- The need to accelerate product development
- Safety and quality issues need to be identified sooner in the process
Similar to computational chemistry, the size of formulas can be quite large and cause huge bottlenecks. If these bottlenecks can be resolved, more analyses can be done faster, accelerating products to market. Additionally, issues and safety hazards can be caught sooner in the product cycle, saving organizations thousands of dollars.

A Better Way

To improve the speed of life sciences applications, engineers need to optimize the design of high performance computing systems. IT managers spend a lot of time determining the server CPU to be used in clusters and grids, but the interconnect deployed to transport information between the servers is often ignored. This is a missed opportunity, as less efficient interconnects cause significant degradation in application performance.

Voltaire's InfiniBand solutions accelerate application performance, enabling applications to reach their full performance potential. Benchmark testing has found that Voltaire interconnect solutions reduce runtime by as much as % and increase application and file-system performance 10x. Voltaire offers high-performance (10, 20 and 40 Gbps), low-latency (< 2 microseconds) interconnect solutions used in the world's highest performance supercomputers and data centers.

InfiniBand is an industry-standard interconnect for high-performance computing (HPC) and enterprise applications. The combination of high bandwidth, low latency, and scalability with high performance storage makes InfiniBand the interconnect of choice to power many of the world's largest and fastest computer systems and commercial data centers. Voltaire solutions support most major server vendors, operating systems, storage solutions and chip manufacturers.
                      1 Gb Ethernet   10 Gb Ethernet   Myrinet      InfiniBand
  Bandwidth           1 Gb/sec        10 Gb/sec        2.5 Gb/sec   10, 20 & 40 Gb/sec
  Latency             ~10 us          us               us           < 2 us
  Average Efficiency  53%             No Entries       68%          74%
  Price per Gb/Port   ~$              >~$              ~$           <$

Table 1: Price/performance advantages for InfiniBand

In addition, Voltaire works with leading storage and application vendors to optimize their solutions to alleviate IPC and file I/O bottlenecks. By combining leading storage technologies with InfiniBand and Voltaire's Grid Director family of switch products, life sciences organizations can conduct research faster and more efficiently to gain a clear competitive advantage.
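Efficiency figures such as those shown in Figure 1 follow from the standard definitions: speedup = T1/Tn and parallel efficiency = speedup/n. The runtimes below are hypothetical, chosen only to reproduce the 75% and 50% efficiencies this brief quotes for InfiniBand and GbE on a 64-core cluster.

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial run."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cores):
    """Parallel efficiency: fraction of ideal linear scaling achieved."""
    return speedup(t_serial, t_parallel) / n_cores

# Hypothetical runtimes for the same simulation on a 64-core cluster,
# constructed to match the efficiencies reported in Figure 1.
t1 = 6400.0                   # single-core runtime, seconds
t64_ib = t1 / (64 * 0.75)     # InfiniBand: 75% efficiency
t64_gbe = t1 / (64 * 0.50)    # GbE: 50% efficiency

print(f"InfiniBand: speedup {speedup(t1, t64_ib):.0f}x, "
      f"efficiency {efficiency(t1, t64_ib, 64):.0%}")
print(f"GbE:        speedup {speedup(t1, t64_gbe):.0f}x, "
      f"efficiency {efficiency(t1, t64_gbe, 64):.0%}")
```

At 75% efficiency the 64-core job runs 48x faster than serial; at 50% it runs only 32x faster, so the same hardware spend delivers half again as much science per day with the more efficient interconnect.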
Figure 1. 75% parallel speedup with Voltaire InfiniBand vs. only 50% with GbE (parallel speedup and % efficiency vs. number of cores, plotted against linear scaling)

Building High Performance Clusters

Voltaire offers complete end-to-end server interconnect solutions for speeding life sciences applications. The three major elements of the solution are:

- High-speed, low latency InfiniBand switches
- Fast storage access and scalable file systems
- Visualization solutions

High-performance InfiniBand Switches

Voltaire's InfiniBand-based solutions deliver high performance and scalability to compute clusters. Voltaire offers a complete portfolio of products including a scalable line of InfiniBand switches, high performance I/O gateways (for seamless connectivity to Ethernet and Fibre Channel networks) and fabric management software. Voltaire solutions use the OpenFabrics Alliance's OFED drivers and the Open MPI (Message Passing Interface) libraries to optimize application performance for both MPI-based and socket-based applications.

Figure 2. Voltaire Grid Director 9024 for small-to-medium sized clusters ranging from 16 to 24 nodes
For small-to-medium sized clusters, Voltaire offers the Grid Director 9024, a 1U device with twenty-four 10 Gbps (SDR) or 20 Gbps (DDR) InfiniBand ports. The switch is a high performance, low latency, fully non-blocking edge or leaf switch with a throughput of 480 Gbps. The Grid Director 9024 is well suited for small InfiniBand fabrics with up to 24 nodes because it includes all of the management capabilities necessary to function as a stand-alone switch. The Grid Director 9024 is internally managed and offers comprehensive device and fabric management capabilities. Designed for high availability (high MTBF) and easy maintenance, the switch is simple to install and features straightforward initialization. The solution is scalable, as additional switches can be added to support additional nodes.

Figure 3. Voltaire Grid Director 2004 for larger, scalable clusters.

For larger clusters, Voltaire offers the Grid Director 2004 and 2012 multi-service switches, the industry's highest performing multi-service switches for medium-to-large clusters and grids. These switches enable high performance non-blocking configurations and feature an enterprise-level, high availability design. The Grid Director 2004 supports up to 96 InfiniBand 4X ports (20 Gbps) and the Grid Director 2012 supports up to 288 InfiniBand 4X ports (20 Gbps). Voltaire Grid Director switches are scalable through the use of modular line boards, and they feature 10 GbE and Fibre Channel capabilities so the solution can provide high-performance, integrated SAN and LAN connectivity.

Voltaire has also defined scalable units for deploying larger clusters. Scalable units are ideal for constructing large clusters that deliver unparalleled performance to applications. They combine compute, interconnect and storage capabilities with scalable file systems. At the heart of the solution is the Voltaire Grid Director 2012 multi-service switch.
Voltaire's director-class, multi-service switches offer integrated InfiniBand, GbE and Fibre Channel connectivity in a single chassis. This enables MPI and storage traffic to run on the same network, a capability that is not available with Ethernet or proprietary fabrics. By enabling IPC and high performance storage on a single network, Voltaire solutions enable far greater scalability.
Figure 4. A scalable unit of 200 nodes powered by a Voltaire Grid Director 2012

Scaling out further is made easy by using Voltaire Grid Director switches as core switches to interconnect multiple scalable units. Such connectivity can be implemented as fully non-blocking or as partially blocking, depending on application requirements or budget constraints.

Figure 5. Multiple scalable units interconnected using a Voltaire Grid Director
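Whether such a fabric is fully non-blocking or partially blocking comes down to counting ports: each edge switch's links down to compute nodes versus its links up to the core. A minimal sketch (the port splits below are illustrative, not a Voltaire configuration guide):

```python
def blocking_ratio(ports_per_switch, nodes_per_switch):
    """Oversubscription of an edge switch in a two-tier fat-tree fabric.

    Downlinks attach compute nodes; the remaining ports go up to the core.
    1.0 means fully non-blocking; 2.0 means 2:1 oversubscribed, and so on.
    """
    uplinks = ports_per_switch - nodes_per_switch
    if uplinks <= 0:
        raise ValueError("no ports left for uplinks to the core")
    return nodes_per_switch / uplinks

# A 24-port edge switch split 12 down / 12 up is fully non-blocking;
# attaching 16 nodes leaves only 8 uplinks, giving a cheaper 2:1 fabric.
print(blocking_ratio(24, 12))   # 1.0
print(blocking_ratio(24, 16))   # 2.0
```

The trade-off the brief mentions is exactly this ratio: attaching more nodes per edge switch lowers cost per node but raises oversubscription, which bandwidth-hungry applications will feel.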
Fast Storage Access

For companies looking to incorporate storage into their InfiniBand cluster, Voltaire offers fast I/O capabilities for storage. Voltaire solutions combine scalable compute and storage capabilities with parallel file systems. By using InfiniBand with parallel file systems, the server's CPU overhead is reduced, freeing up CPU cycles for your application.

             Before                       Now
  Software   MPI + NFS                    MPI + parallel file system
  Network    Proprietary interconnect     InfiniBand
             or GbE
  Result     Low performance;             High performance;
             high overhead on CPU;        CPU available for applications;
             no scalability               scalability to thousands of nodes

(In both cases, server CPU utilization is divided among compute, IPC and storage traffic.)

At the heart of the solution is the Voltaire Grid Director 2004 multi-service switch (described above). Voltaire's director-class, multi-service switches offer seamless InfiniBand, GbE and Fibre Channel connectivity. This enables MPI and storage traffic to run on the same network, a capability Ethernet and proprietary fabrics do not offer. By enabling IPC and high-performance storage on a single network, Voltaire solutions allow companies to leave behind the limitations of the Network File System (NFS) and move to parallel file systems over InfiniBand. This provides far greater scalability. Applications can now achieve effective file I/O rates of 350 MB/s, compared with the 50 MB/s previously available using NFS. Additionally, the size of compute clusters is no longer constrained by the limitations of NFS.

Scalable File Systems

Running scalable file systems over Voltaire InfiniBand solutions creates the most scalable solution in the industry, with more than 1,000 nodes on a single namespace, and delivers high performance connectivity for the storage and client nodes. Voltaire has significant experience and expertise in enabling large-scale parallel file system deployments. Such solutions include Lustre, HP SFS, IBM GPFS, Panasas and PVFS.
These solutions, when combined with InfiniBand, solve two critical problems created by NFS: limited throughput and limited scalability. The diagram below (Figure 6) outlines a Voltaire deployment with HP SFS (Lustre) in which 1,100 nodes (2,200 cores) all access a single file system.
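The practical impact of the throughput problem can be seen with simple arithmetic on the effective client I/O rates quoted above (about 50 MB/s over NFS versus 350 MB/s with a parallel file system over InfiniBand). The dataset size below is hypothetical:

```python
# Time to stage a dataset at the effective client I/O rates quoted in
# this brief: ~50 MB/s over NFS vs ~350 MB/s with a parallel file system.

def load_time_s(dataset_mb, rate_mb_s):
    """Seconds to read a dataset sequentially at a given effective rate."""
    return dataset_mb / rate_mb_s

dataset_mb = 500_000   # hypothetical 500 GB research dataset
nfs = load_time_s(dataset_mb, 50)
pfs = load_time_s(dataset_mb, 350)
print(f"NFS:                  {nfs / 3600:.1f} h")
print(f"Parallel file system: {pfs / 3600:.1f} h ({nfs / pfs:.0f}x faster)")
```

A 7x difference in effective I/O rate turns a multi-hour staging step into minutes per job, which compounds across every I/O-bound job in the queue.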
Figure 6: Voltaire's TI-05 installation at the DoD: 1,100 nodes (visualization, compute and storage nodes; Lustre clients and Lustre servers (OSS, MDS), with multi-panel displays) on a single file system behind a Voltaire ISR 9288, with sustained file system performance of 900 MB/s and client-side performance of 350 MB/s. InfiniBand is used for both MPI and Lustre over the same wires.

Visualization Solutions

Customers in a variety of industry sectors including life sciences, energy, automotive, aerospace, government and military use Voltaire solutions for visualization on clusters and grids ranging from dozens to hundreds of nodes. Visualization clusters typically require very high bandwidth, and Voltaire's InfiniBand solutions with up to 20 Gbps per host link are ideal for this. In addition, the low overhead on the CPU frees it to carry out image processing faster. The result is a powerful solution that delivers higher resolutions and faster image processing.

Figure 7: 3D visualization is commonly used in life sciences.

Putting It All Together

Key Features & Benefits

Voltaire solutions for life sciences offer many compelling benefits to users:

- High bandwidth: Voltaire solutions provide bandwidth of 20 Gbps to allow for faster and more frequent analysis by life sciences applications.
- Low latency: Voltaire solutions provide latency as low as 1.3 microseconds. Moreover, Voltaire's InfiniBand-based solutions leverage Remote Direct Memory Access (RDMA) with CPU and OS bypass technologies that greatly reduce memory-copy overheads and the associated CPU utilization.
- Standards-based: Voltaire solutions are based on InfiniBand, the only industry-standard, high-performance interconnect.
- Flexibility: Grids and clusters that use Voltaire solutions can be built as a fully non-blocking 20 Gbps fabric or as a lower-bandwidth fabric, based on the needs of the application.
Moreover, Voltaire switches are upgradeable in a non-disruptive, hot-pluggable manner.
- Fast I/O for storage: Voltaire solutions enable parallel file systems over InfiniBand, which offer far greater performance and scalability than NFS solutions.

Tested & Certified with Leading Applications

By working closely with leading server and software vendors on integration and testing, Voltaire offers the fastest and most efficient high-speed interconnect solutions for the life sciences market. Voltaire solutions support many leading life sciences applications.

OS support: Linux Enterprise Edition from Novell (SUSE SLES) and Red Hat (EL/AS)

Supported parallel file systems: Lustre, HP SFS, IBM GPFS, Panasas, IBRIX, TerraScale's TeraGrid

Applications: Accelrys, Gaussian, AMBER, BLAST, FASTA, GlimmerM, Wise2, ACT, ClustalW, EMBOSS, HMMER, Image, T-Coffee, Artemis, CHARMm, Cn3D, GAMESS, GROMACS, RasMol, ReadSeq, TribeMCL, NAMD, NMRView

Systems & platform partners: HP, IBM, Sun, NEC, SGI, Intel, AMD

Customer Success Stories

Customers in a variety of industries that rely on modeling, simulation and analysis leverage Voltaire solutions for their cluster interconnects. Companies include consumer-product manufacturers, research organizations, pharmaceutical companies, and university development labs throughout the world.

Swiss Institute of BioInformatics (SIB)

The Swiss Institute of BioInformatics (SIB) needed a system that could accelerate research discovery for its humanitarian efforts around diseases such as mad cow disease and breast cancer. SIB turned to HP and Voltaire to supply a system that could grow with its needs while also speeding up the life science applications running on the cluster. Through its Vital-IT Computing Institution, a joint venture with Oracle, HP and Intel, SIB uses Voltaire Grid Director switches to enable the modeling and analysis.
SIB first built the Vital-IT Computer Center in 2003 using a configuration relying on SAN storage and NFS services, with Gigabit Ethernet for communication among servers. But 18 months after the initial deployment, SIB realized it had two major problems. "With SAN storage, a limited number of servers can be connected to the SAN," said Dr. Victor Jongeneel of SIB. "Our servers send and receive data from the compute nodes using NFS, and this turned out to be a major performance bottleneck. With 64 clients running significant I/O, we needed a faster way than Gigabit Ethernet for all of our compute nodes to share common file space."
Figure 8: The system configuration as deployed by the Swiss Institute of BioInformatics (SIB): a Voltaire Grid Director switch connecting HP SFS storage, Test Cluster #1 (8 nodes), Test Cluster #2 (4 nodes) and the Production Cluster (68 nodes)

SIB also has many computing jobs that read large amounts of data into memory, many of which are I/O bound. Because of the load on the file servers, the servers would sometimes crash, and jobs could be aborted before completion. "Because of this, we could not run as many jobs per unit time as we wanted," Jongeneel said. "The Voltaire interconnect between compute nodes provides much better performance, both in terms of bandwidth and latency. Computing jobs using HP-MPI, Voltaire MPI and LAM MPI now run much faster than they used to, and more importantly, our server I/O capacity is no longer a bottleneck for running any size job."

SIB selected a configuration of one large cluster consisting of 80 HP servers with a mix of Itanium 2 and EM64T processors connected by the Voltaire Grid Director switch. The solution leverages the multi-service capabilities of Voltaire's Grid Director switches to enable fast I/O with storage connectivity. The clusters run multiple life science applications such as BLAST, GROMACS and CHARMm, along with Platform Computing's LSF scheduler. The customer is extremely satisfied with the solution because the combination of the Voltaire Grid Director switch, HP servers, MPI software and fast file I/O for larger jobs makes for easier file manipulation at high performance.
Tokyo Institute of Technology (TiTech)

The Tokyo Institute of Technology (TiTech), located in Tokyo, Japan, is one of the leading technical universities in the world. TiTech has a long history as a world leader in high performance and grid computing, and houses one of the world's largest supercomputers.

The TiTech system serves as an example of how scalable and flexible the Voltaire family of products is. Used by a wide range of researchers across the university and by collaborators across Japan and around the globe, the solution delivers more than 40 trillion floating point operations per second (TFlops). The solution tackles computationally difficult problems including:

- Analysis of how avian flu mutates and is transmitted from birds to humans
- Structural analysis of new materials
- Making buildings more resistant to earthquakes
- More accurate prediction of the earth's climate

Contact Voltaire for more information on this solution.
About Voltaire

Voltaire (NASDAQ: VOLT) designs and develops server and storage switching and software solutions that enable high-performance grid computing within the data center. Voltaire refers to its server and storage switching and software solutions as the Voltaire Grid Backbone. Voltaire's products leverage InfiniBand technology and include director-class switches, multi-service switches, fixed-port configuration switches, Ethernet and Fibre Channel routers, and standards-based driver and management software. Voltaire's solutions have been sold to a wide range of end customers including governmental, research and educational organizations, as well as market-leading enterprises in the manufacturing, oil and gas, entertainment, life sciences and financial services industries. More information about Voltaire is available on the company's website.

Notice

Reproduction of this publication in any form without prior written permission is not allowed. The information in this publication is subject to change without notice and is provided "AS IS" WITHOUT WARRANTY OF ANY KIND. THE ENTIRE RISK ARISING OUT OF THE USE OR INTERPRETATION OF THIS INFORMATION REMAINS WITH THE RECIPIENT. IN NO EVENT SHALL VOLTAIRE BE LIABLE FOR ANY DIRECT, SPECIAL, PUNITIVE OR OTHER DAMAGES.

Performance results will vary based upon a number of system factors, including server configuration of the processor, chip set, memory size, firmware and driver release versions, MPI version and OS kernel version. The configuration or configurations tested or described may or may not be the only available solution. These tests are not a determination of product quality or correctness, nor do they ensure compliance with any federal, state or local requirements. Product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

Contact Voltaire to learn more: info@voltaire.com

Voltaire Inc. All rights reserved.
Voltaire and the Voltaire logo are registered trademarks of Voltaire Inc. Grid Director is a trademark of Voltaire Inc. Other company, product, or service names are the property of their respective owners.
More informationVirtual Compute Appliance Frequently Asked Questions
General Overview What is Oracle s Virtual Compute Appliance? Oracle s Virtual Compute Appliance is an integrated, wire once, software-defined infrastructure system designed for rapid deployment of both
More informationCisco for SAP HANA Scale-Out Solution on Cisco UCS with NetApp Storage
Cisco for SAP HANA Scale-Out Solution Solution Brief December 2014 With Intelligent Intel Xeon Processors Highlights Scale SAP HANA on Demand Scale-out capabilities, combined with high-performance NetApp
More informationBuilding Clusters for Gromacs and other HPC applications
Building Clusters for Gromacs and other HPC applications Erik Lindahl lindahl@cbr.su.se CBR Outline: Clusters Clusters vs. small networks of machines Why do YOU need a cluster? Computer hardware Network
More informationBioscience. Introduction. The Importance of the Network. Network Switching Requirements. Arista Technical Guide www.aristanetworks.
Introduction Over the past several years there has been in a shift within the biosciences research community, regarding the types of computer applications and infrastructures that are deployed to sequence,
More informationInterconnect Analysis: 10GigE and InfiniBand in High Performance Computing
Interconnect Analysis: 10GigE and InfiniBand in High Performance Computing WHITE PAPER Highlights: There is a large number of HPC applications that need the lowest possible latency for best performance
More informationHigh Performance Computing. Course Notes 2007-2008. HPC Fundamentals
High Performance Computing Course Notes 2007-2008 2008 HPC Fundamentals Introduction What is High Performance Computing (HPC)? Difficult to define - it s a moving target. Later 1980s, a supercomputer performs
More informationAchieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging
Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.
More informationInteroperability Testing and iwarp Performance. Whitepaper
Interoperability Testing and iwarp Performance Whitepaper Interoperability Testing and iwarp Performance Introduction In tests conducted at the Chelsio facility, results demonstrate successful interoperability
More informationSimplifying the Data Center Network to Reduce Complexity and Improve Performance
SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,
More informationVBLOCK SOLUTION FOR SAP: SAP APPLICATION AND DATABASE PERFORMANCE IN PHYSICAL AND VIRTUAL ENVIRONMENTS
Vblock Solution for SAP: SAP Application and Database Performance in Physical and Virtual Environments Table of Contents www.vce.com V VBLOCK SOLUTION FOR SAP: SAP APPLICATION AND DATABASE PERFORMANCE
More informationBuilding a Top500-class Supercomputing Cluster at LNS-BUAP
Building a Top500-class Supercomputing Cluster at LNS-BUAP Dr. José Luis Ricardo Chávez Dr. Humberto Salazar Ibargüen Dr. Enrique Varela Carlos Laboratorio Nacional de Supercómputo Benemérita Universidad
More informationThe virtualization of SAP environments to accommodate standardization and easier management is gaining momentum in data centers.
White Paper Virtualized SAP: Optimize Performance with Cisco Data Center Virtual Machine Fabric Extender and Red Hat Enterprise Linux and Kernel-Based Virtual Machine What You Will Learn The virtualization
More informationBuilding a Scalable Storage with InfiniBand
WHITE PAPER Building a Scalable Storage with InfiniBand The Problem...1 Traditional Solutions and their Inherent Problems...2 InfiniBand as a Key Advantage...3 VSA Enables Solutions from a Core Technology...5
More informationLow Latency 10 GbE Switching for Data Center, Cluster and Storage Interconnect
White PAPER Low Latency 10 GbE Switching for Data Center, Cluster and Storage Interconnect Introduction: High Performance Data Centers As the data center continues to evolve to meet rapidly escalating
More informationUpgrading Data Center Network Architecture to 10 Gigabit Ethernet
Intel IT IT Best Practices Data Centers January 2011 Upgrading Data Center Network Architecture to 10 Gigabit Ethernet Executive Overview Upgrading our network architecture will optimize our data center
More informationPRIMERGY server-based High Performance Computing solutions
PRIMERGY server-based High Performance Computing solutions PreSales - May 2010 - HPC Revenue OS & Processor Type Increasing standardization with shift in HPC to x86 with 70% in 2008.. HPC revenue by operating
More informationIBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect
IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Connect Executive Overview This white paper describes how Cisco VFrame Server Fabric ization Software works with IBM BladeCenter H to provide
More informationInfiniBand Software and Protocols Enable Seamless Off-the-shelf Applications Deployment
December 2007 InfiniBand Software and Protocols Enable Seamless Off-the-shelf Deployment 1.0 Introduction InfiniBand architecture defines a high-bandwidth, low-latency clustering interconnect that is used
More informationBuilding Enterprise-Class Storage Using 40GbE
Building Enterprise-Class Storage Using 40GbE Unified Storage Hardware Solution using T5 Executive Summary This white paper focuses on providing benchmarking results that highlight the Chelsio T5 performance
More information3 Red Hat Enterprise Linux 6 Consolidation
Whitepaper Consolidation EXECUTIVE SUMMARY At this time of massive and disruptive technological changes where applications must be nimbly deployed on physical, virtual, and cloud infrastructure, Red Hat
More informationUltra Low Latency Data Center Switches and iwarp Network Interface Cards
WHITE PAPER Delivering HPC Applications with Juniper Networks and Chelsio Communications Ultra Low Latency Data Center Switches and iwarp Network Interface Cards Copyright 20, Juniper Networks, Inc. Table
More informationSMB Direct for SQL Server and Private Cloud
SMB Direct for SQL Server and Private Cloud Increased Performance, Higher Scalability and Extreme Resiliency June, 2014 Mellanox Overview Ticker: MLNX Leading provider of high-throughput, low-latency server
More informationMichael Kagan. michael@mellanox.com
Virtualization in Data Center The Network Perspective Michael Kagan CTO, Mellanox Technologies michael@mellanox.com Outline Data Center Transition Servers S as a Service Network as a Service IO as a Service
More informationScaling from Workstation to Cluster for Compute-Intensive Applications
Cluster Transition Guide: Scaling from Workstation to Cluster for Compute-Intensive Applications IN THIS GUIDE: The Why: Proven Performance Gains On Cluster Vs. Workstation The What: Recommended Reference
More informationReplacing SAN with High Performance Windows Share over a Converged Network
WHITE PAPER November 2015 Replacing SAN with High Performance Windows Share over a Converged Network Abstract...1 Introduction...1 Early FC SAN (Storage Area Network)...1 FC vs. Ethernet...1 Changing SAN
More informationPerformance Across the Generations: Processor and Interconnect Technologies
WHITE Paper Performance Across the Generations: Processor and Interconnect Technologies HPC Performance Results ANSYS CFD 12 Executive Summary Today s engineering, research, and development applications
More informationHigh-Throughput Computing for HPC
Intelligent HPC Workload Management Convergence of high-throughput computing (HTC) with high-performance computing (HPC) Table of contents 3 Introduction 3 The Bottleneck in High-Throughput Computing 3
More informationGigabit to the edge. HP ProCurve Networking Solutions
Gigabit to the edge HP ProCurve Networking Solutions Performance to the edge taking high-speed Gigabit to the edge of your network When it comes to your network, the faster you want something, the slower
More informationWhitepaper. Implementing High-Throughput and Low-Latency 10 Gb Ethernet for Virtualized Data Centers
Implementing High-Throughput and Low-Latency 10 Gb Ethernet for Virtualized Data Centers Implementing High-Throughput and Low-Latency 10 Gb Ethernet for Virtualized Data Centers Introduction Adoption of
More informationArchitecting Low Latency Cloud Networks
Architecting Low Latency Cloud Networks Introduction: Application Response Time is Critical in Cloud Environments As data centers transition to next generation virtualized & elastic cloud architectures,
More informationHPC Growing Pains. Lessons learned from building a Top500 supercomputer
HPC Growing Pains Lessons learned from building a Top500 supercomputer John L. Wofford Center for Computational Biology & Bioinformatics Columbia University I. What is C2B2? Outline Lessons learned from
More informationNew Storage System Solutions
New Storage System Solutions Craig Prescott Research Computing May 2, 2013 Outline } Existing storage systems } Requirements and Solutions } Lustre } /scratch/lfs } Questions? Existing Storage Systems
More informationMellanox Academy Online Training (E-learning)
Mellanox Academy Online Training (E-learning) 2013-2014 30 P age Mellanox offers a variety of training methods and learning solutions for instructor-led training classes and remote online learning (e-learning),
More informationJuniper Networks QFabric: Scaling for the Modern Data Center
Juniper Networks QFabric: Scaling for the Modern Data Center Executive Summary The modern data center has undergone a series of changes that have significantly impacted business operations. Applications
More informationRoCE vs. iwarp Competitive Analysis
WHITE PAPER August 21 RoCE vs. iwarp Competitive Analysis Executive Summary...1 RoCE s Advantages over iwarp...1 Performance and Benchmark Examples...3 Best Performance for Virtualization...4 Summary...
More informationFLOW-3D Performance Benchmark and Profiling. September 2012
FLOW-3D Performance Benchmark and Profiling September 2012 Note The following research was performed under the HPC Advisory Council activities Participating vendors: FLOW-3D, Dell, Intel, Mellanox Compute
More informationRed Hat Enterprise Linux solutions from HP and Oracle
Red Hat Enterprise Linux solutions from HP and Oracle Driven by innovation to improve interoperability and scalability, HP, Red Hat, and Oracle deliver a broad and deep range of Linux offerings to enhance
More informationComparing the performance of the Landmark Nexus reservoir simulator on HP servers
WHITE PAPER Comparing the performance of the Landmark Nexus reservoir simulator on HP servers Landmark Software & Services SOFTWARE AND ASSET SOLUTIONS Comparing the performance of the Landmark Nexus
More informationConnecting the Clouds
Connecting the Clouds Mellanox Connected Clouds Mellanox s Ethernet and InfiniBand interconnects enable and enhance worldleading cloud infrastructures around the globe. Utilizing Mellanox s fast server
More informationLustre Networking BY PETER J. BRAAM
Lustre Networking BY PETER J. BRAAM A WHITE PAPER FROM CLUSTER FILE SYSTEMS, INC. APRIL 2007 Audience Architects of HPC clusters Abstract This paper provides architects of HPC clusters with information
More informationHP Moonshot System. Table of contents. A new style of IT accelerating innovation at scale. Technical white paper
Technical white paper HP Moonshot System A new style of IT accelerating innovation at scale Table of contents Abstract... 2 Meeting the new style of IT requirements... 2 What makes the HP Moonshot System
More informationColgate-Palmolive selects SAP HANA to improve the speed of business analytics with IBM and SAP
selects SAP HANA to improve the speed of business analytics with IBM and SAP Founded in 1806, is a global consumer products company which sells nearly $17 billion annually in personal care, home care,
More informationQuick Reference Selling Guide for Intel Lustre Solutions Overview
Overview The 30 Second Pitch Intel Solutions for Lustre* solutions Deliver sustained storage performance needed that accelerate breakthrough innovations and deliver smarter, data-driven decisions for enterprise
More informationCisco SFS 7000P InfiniBand Server Switch
Data Sheet Cisco SFS 7000P Infiniband Server Switch The Cisco SFS 7000P InfiniBand Server Switch sets the standard for cost-effective 10 Gbps (4X), low-latency InfiniBand switching for building high-performance
More informationComparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet. September 2014
Comparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet Anand Rangaswamy September 2014 Storage Developer Conference Mellanox Overview Ticker: MLNX Leading provider of high-throughput,
More informationHPC Update: Engagement Model
HPC Update: Engagement Model MIKE VILDIBILL Director, Strategic Engagements Sun Microsystems mikev@sun.com Our Strategy Building a Comprehensive HPC Portfolio that Delivers Differentiated Customer Value
More informationBUILDING A SCALABLE BIG DATA INFRASTRUCTURE FOR DYNAMIC WORKFLOWS
BUILDING A SCALABLE BIG DATA INFRASTRUCTURE FOR DYNAMIC WORKFLOWS ESSENTIALS Executive Summary Big Data is placing new demands on IT infrastructures. The challenge is how to meet growing performance demands
More informationIntel DPDK Boosts Server Appliance Performance White Paper
Intel DPDK Boosts Server Appliance Performance Intel DPDK Boosts Server Appliance Performance Introduction As network speeds increase to 40G and above, both in the enterprise and data center, the bottlenecks
More information3G Converged-NICs A Platform for Server I/O to Converged Networks
White Paper 3G Converged-NICs A Platform for Server I/O to Converged Networks This document helps those responsible for connecting servers to networks achieve network convergence by providing an overview
More informationSRNWP Workshop. HP Solutions and Activities in Climate & Weather Research. Michael Riedmann European Performance Center
SRNWP Workshop HP Solutions and Activities in Climate & Weather Research Michael Riedmann European Performance Center Agenda A bit of marketing: HP Solutions for HPC A few words about recent Met deals
More informationInfiniBand Strengthens Leadership as the High-Speed Interconnect Of Choice
InfiniBand Strengthens Leadership as the High-Speed Interconnect Of Choice Provides the Best Return-on-Investment by Delivering the Highest System Efficiency and Utilization TOP500 Supercomputers June
More informationHADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW
HADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW 757 Maleta Lane, Suite 201 Castle Rock, CO 80108 Brett Weninger, Managing Director brett.weninger@adurant.com Dave Smelker, Managing Principal dave.smelker@adurant.com
More informationFOR SERVERS 2.2: FEATURE matrix
RED hat ENTERPRISE VIRTUALIZATION FOR SERVERS 2.2: FEATURE matrix Red hat enterprise virtualization for servers Server virtualization offers tremendous benefits for enterprise IT organizations server consolidation,
More informationWhite Paper. Recording Server Virtualization
White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...
More informationSolution Brief July 2014. All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux
Solution Brief July 2014 All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux Traditional SAN storage systems cannot keep up with growing application performance needs.
More informationWhere IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine
Where IT perceptions are reality Test Report OCe14000 Performance Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Document # TEST2014001 v9, October 2014 Copyright 2014 IT Brand
More informationAn Oracle White Paper December 2010. Consolidating and Virtualizing Datacenter Networks with Oracle s Network Fabric
An Oracle White Paper December 2010 Consolidating and Virtualizing Datacenter Networks with Oracle s Network Fabric Introduction... 1 Today s Datacenter Challenges... 2 Oracle s Network Fabric... 3 Maximizing
More informationQuantum StorNext. Product Brief: Distributed LAN Client
Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without
More informationAccelerating From Cluster to Cloud: Overview of RDMA on Windows HPC. Wenhao Wu Program Manager Windows HPC team
Accelerating From Cluster to Cloud: Overview of RDMA on Windows HPC Wenhao Wu Program Manager Windows HPC team Agenda Microsoft s Commitments to HPC RDMA for HPC Server RDMA for Storage in Windows 8 Microsoft
More informationHPC Software Requirements to Support an HPC Cluster Supercomputer
HPC Software Requirements to Support an HPC Cluster Supercomputer Susan Kraus, Cray Cluster Solutions Software Product Manager Maria McLaughlin, Cray Cluster Solutions Product Marketing Cray Inc. WP-CCS-Software01-0417
More informationSun in HPC. Update for IDC HPC User Forum Tucson, AZ, Sept 2008
Sun in HPC Update for IDC HPC User Forum Tucson, AZ, Sept 2008 Bjorn Andersson Director, HPC Marketing Makia Minich Lead Architect, Sun HPC Software, Linux Edition Sun Microsystems Core Focus Areas for
More informationOverview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA
Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA Overview Oracle s Fibre Channel over Ethernet (FCoE technology provides an opportunity to reduce data center costs by converging
More informationRED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES
RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server
More informationThe Lattice Project: A Multi-Model Grid Computing System. Center for Bioinformatics and Computational Biology University of Maryland
The Lattice Project: A Multi-Model Grid Computing System Center for Bioinformatics and Computational Biology University of Maryland Parallel Computing PARALLEL COMPUTING a form of computation in which
More informationRecommended hardware system configurations for ANSYS users
Recommended hardware system configurations for ANSYS users The purpose of this document is to recommend system configurations that will deliver high performance for ANSYS users across the entire range
More informationOn-Demand Supercomputing Multiplies the Possibilities
Microsoft Windows Compute Cluster Server 2003 Partner Solution Brief Image courtesy of Wolfram Research, Inc. On-Demand Supercomputing Multiplies the Possibilities Microsoft Windows Compute Cluster Server
More informationA High-Performance Storage and Ultra-High-Speed File Transfer Solution
A High-Performance Storage and Ultra-High-Speed File Transfer Solution Storage Platforms with Aspera Abstract A growing number of organizations in media and entertainment, life sciences, high-performance
More information