Fast Setup and Integration of ABAQUS on HPC Linux Cluster and the Study of Its Scalability
Betty Huang, Jeff Williams, Richard Xu
Baker Hughes Incorporated

Abstract: High-performance computing (HPC), the massive powerhouse of IT, is now one of the fastest-growing sectors of the industry, especially in oil and gas, which has outpaced other US industries in integrating HPC into its critical business functions. HPC offers greater capacity and flexibility for advanced data analysis that individual workstations usually cannot handle. In April 2008, Baker Oil Tools installed a Linux cluster to boost the performance of its finite element analysis and computational fluid dynamics applications. Platform Open Cluster Stack (OCS) was implemented as the cluster and system management software, with the load-sharing facility for high-performance computing (LSF HPC) as the job scheduler. OCS is a pre-integrated, modular, hybrid software stack that combines open-source software and proprietary products, and it provides a simple way to rapidly assemble and manage small- to large-scale Linux-based HPC clusters.

1. Introduction of HPC

A cluster is a group of linked computers working together so closely that they behave as a single computer. Depending on their function, clusters can be divided into several types: high-availability (HA) clusters, load-balancing clusters, grid computing, and Beowulf (computing) clusters. In this paper, we focus on the Beowulf-type high-performance computing (HPC) cluster. Driven by more demanding simulation projects and the capability of modern computers, the simulation world relies more and more on HPC, and the HPC cluster has become the dominant resource in this area. A cluster typically comprises three layers: hardware, software, and the management layer between them. These components work together to carry out computing tasks in parallel, speeding up the whole process.

2. HPC Setup by OCS

The cost/performance ratio of a cluster is very attractive, although running commercial finite element analysis software on a cluster remains a significant challenge. We chose Platform Open Cluster Stack (OCS) as the cluster management and administration software and LSF (load-sharing facility) as the job scheduler. This proved to be a simple way to rapidly assemble and manage small- to large-scale Linux-based HPC clusters.

2.1 OCS Introduction

OCS is a comprehensive cluster and system management toolkit, an Intel Cluster Ready certified software suite based on Red Hat Linux. It is built on top of the original OCS toolkit developed by the San Diego Supercomputer Center (SDSC). It is a hybrid software stack, containing a blend of open-source and proprietary software technologies. Figure 1 shows the layer structure of OCS.
Figure 1. The structure of Platform OCS. From bottom to top: x86 and x86-64 hardware; Red Hat Enterprise Linux or CentOS; node and cluster management, node and cluster filesystem, workload and resource management, and development tools and utilities; Platform OCS rolls (tested and configured third-party tools and applications).

The bottom is the hardware layer and the top is the software/application layer; OCS is the joint between the two. It supports 32- and 64-bit x86 hardware and provides the core functions of a cluster: node and cluster management plus workload and resource management, covering OS and application provisioning. The newer OCS 5 adds more operating system options, such as SuSE Linux, Windows, and IBM AIX.

To ensure the compatibility of the drivers, tools, and applications installed on the cluster, Platform OCS wraps them into rolls and provides a series of roll manipulation commands. A roll is a self-contained ISO image that holds packages and their configuration files; it can contain one or more RPMs and a set of scripts that install the packages in the post-installation step. There are required rolls and optional rolls. Required rolls, such as the Base Roll and the HPC Roll, are fundamental for OCS to function correctly. To reduce cluster downtime, rolls can be added or removed dynamically while the cluster is in operation.

2.2 Quickly and Easily Set up a Cluster with OCS

OCS installation starts from the master node with the installation DVD kit. When the installation option prompts, choose the installation type frontend (the default is compute node), fill in the cluster name, static IP address, roll selection, etc., and the installation then proceeds automatically. In OCS 4.5, Red Hat Linux is included on the OCS DVD, while in the newer OCS 5 the OS DVD must be added separately. After installing OCS on the master node, you have a functional cluster master node with all the basic cluster features included. The built-in utility insert-ethers (or add-hosts) adds compute nodes, switches, and customized appliances into the cluster database. When the compute nodes are rebooted into PXE (Preboot Execution Environment), they communicate with the master node to obtain installation and configuration files and complete OS and application provisioning, as sketched below.
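A minimal sketch of this provisioning workflow, assuming the Rocks-derived command set used by Platform OCS 4.x (exact command names and options differ between OCS releases; OCS 5 uses the Kusu-based add-hosts tooling instead):

    # On the master node, start insert-ethers and power on the compute
    # nodes with PXE/network boot first in their boot order; each node
    # that requests DHCP is captured and added to the cluster database.
    insert-ethers --appliance "Compute"

    # After the nodes finish installing, list the hosts known to the
    # cluster and run a sanity command across all compute nodes.
    rocks list host
    cluster-fork uname -r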
The following schema shows the Linux cluster that Baker Oil Tools set up with OCS in April 2008 (Figure 2). It contains one master node and six compute nodes, all Dell PowerEdge servers with dual quad-core processors and 16 GB of RAM (2 GB per core). RAID1 and RAID5 disk configurations are applied on the master node to provide redundancy.

Figure 2. Schema for the Linux cluster of Baker Oil Tools

The whole cluster is isolated from the public network except for the master node, which has two network cards installed: one connects to the public network and one to the private network within the cluster. We also used two switches: an Ethernet switch, providing networking for cluster management and job submission, and an InfiniBand switch, providing the message-passing network for faster computing. The cluster end user logs into the master node from the public network and then submits jobs to the compute nodes through the private network.

2.3 Live Monitoring by Ganglia

Once the cluster is set up and running, we use the graphical web monitor Ganglia (Figure 3) to watch the cluster status dynamically.
Figure 3. Screenshot of Ganglia (graphical web monitoring tool)

Ganglia provides a good high-level view of the load on the entire cluster and on individual nodes. It pushes data via UDP from the gmond daemons to a gmetad daemon, and it gathers information for various metrics such as CPU load, free memory, disk usage, network I/O, and operating system version.

3. ABAQUS Integration on the Cluster by LSF

LSF is a job scheduler that configures multiple systems to perform the same function so that the workload is distributed and shared. On this cluster we use LSF HPC. Compared with general LSF, it offers advanced HPC scheduling policies that enhance the job management capability of the cluster, such as policy-based job preemption, advance reservation, memory and processor reservation, cluster-wide resource allocation limits, and user- and project-based fair-share scheduling. Platform LSF can help computer-aided engineering (CAE) users reduce manufacturing cost and increase engineer productivity and the quality of results. It is integrated to work out of the box with many HPC applications, such as LSTC LS-DYNA, FLUENT, ANSYS, MSC Nastran, Gaussian, Lion Bioscience SRS, and NCBI BLAST. We integrated ABAQUS, FLUENT, and MSC.Marc with LSF HPC batch submission. Additionally, we use a VNC remote desktop to launch ABAQUS/CAE and submit jobs interactively, which shortens the user's learning curve for the cluster. Recently we also found that ABAQUS/CAE can be launched through bsub -I, so that jobs submitted from the GUI remain under LSF scheduler control. A sample batch submission is sketched below.
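A minimal sketch of an LSF batch submission for an ABAQUS analysis, of the kind used for the benchmark runs in the following sections (the job, file, and script names are placeholders; the exact ABAQUS/LSF integration settings depend on the site configuration):

    #!/bin/bash
    # LSF batch options: 16 CPUs in total, at most 8 per host (two hosts).
    #BSUB -J abq_s2b
    #BSUB -n 16
    #BSUB -R "span[ptile=8]"
    #BSUB -o abq_s2b.%J.out

    # Run ABAQUS on the CPUs granted by LSF; LSB_DJOB_NUMPROC holds the
    # number of slots allocated to this job.
    abaqus job=s2b input=s2b.inp cpus=$LSB_DJOB_NUMPROC mp_mode=mpi interactive

The script is submitted with bsub < jobscript.sh, while bsub -I abaqus cae starts an interactive ABAQUS/CAE session under the same scheduler control, as noted above.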
4. ABAQUS Parallel Scalability Study

ABAQUS users want to get the best performance out of the software and hardware combination, so we performed parallel scalability studies for ABAQUS both before and after purchasing the Red Hat Linux cluster. The information obtained in this series of systematic studies serves as a reference for making purchase decisions and for optimizing how simulation jobs are run.

4.1 ABAQUS benchmarks on single workstations running Red Hat Linux and Windows Server 2003

To study the performance of ABAQUS on the Linux and Windows platforms, we ran Standard and Explicit ABAQUS benchmark models of different sizes, plus two BOT engineering jobs, on identical workstations (Intel 3.6 GHz, dual-core, 4 GB memory) running Red Hat Linux and Windows Server 2003. The run times in seconds are listed in Table 1; in the original report the better result for each model is shown in red.

Table 1. Run time for ABAQUS 6.7 EF-1 on identical Dell PowerEdge workstations with different operating systems. Models: Standard S3A, S5, S6, Ball_Seat (non-linear); Explicit E2, Auxetic2. Operating systems: Windows Server 2003 and Red Hat Linux. (Run-time values are not preserved in this transcription.)

Due to the RAM limitation of these computers, we could not run larger models. From the results for small to mid-size models, it is hard to tell which OS wins.

4.2 ABAQUS benchmarks on the Red Hat Linux Cluster

Figure 4 shows the run-time results for the ABAQUS S2B model on our Linux cluster with the ABAQUS 6.8 implicit solver and various numbers of CPUs. The compute cluster approaches maximum efficiency with 16 CPUs.

Figure 4. ABAQUS S2B benchmark on BOTCluster with ABAQUS 6.8 (implicit solver, 474,744 DOF): run time (sec) vs. CPUs utilized
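The standard ABAQUS benchmark decks (S2B, E1, E3, etc.) ship with the product; a minimal sketch of retrieving and running one of them, assuming the decks are available through abaqus fetch as with the documented example problems (CPU count and output file names are only illustrative):

    # Retrieve the S2B input deck from the ABAQUS installation and run it
    # on 16 CPUs, waiting for the analysis to finish.
    abaqus fetch job=s2b
    abaqus job=s2b cpus=16 interactive

    # The wallclock time is reported near the end of the printed output.
    grep -i wallclock s2b.dat s2b.msg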
Another study, with a high number of elements, was performed with the explicit solver. The model consisted of a 3D segmented cone being formed inside a blank pipe (Figure 5). The model contained a high level of interface contact (.5 elements to define the surfaces) between the segments and the metal being formed. The segments were meshed with discrete rigid, linear quadrilateral shell elements, and the pipe used 3D deformable, linear hexahedral elements. The results (Figure 6) were scalable, but varied depending on how many CPUs per host were used. The results did not show clear parallel scalability with an increasing number of CPUs if the hosts were free to use all 8 CPUs (ptile=8): each time a new host was introduced there was a dramatic increase in run time, due mainly to the communication cost. As the number of CPUs increased this influence was minimized, but the run time reached a steady state at approximately 24 CPUs. If the hosts were limited to 6 CPUs each (ptile=6), however, the study showed maximum efficiency at 12 CPUs. Two further studies of lower complexity were analyzed with the explicit solver: ABAQUS benchmark samples E1 (with 274,632 elements) and E3 (with 34,54 elements). The results are shown in Figure 7 and Figure 8, respectively. These studies showed good scalability with increasing CPU counts, although it should be noted that the BOT study had much more contact. As in the implicit study, one can gather from these data that using two hosts is the most efficient setup for the cluster with a large model (whether as 6+6 or 8+8 CPUs).

Figure 5. Expansion of a segmented cone formed inside a blank pipe, ABAQUS 6.8 EF1
Figure 6. BOT-Abaqus compute cluster benchmark #2 (explicit solver), expansion cone formed inside a pipe (875,232 DOF), on BOTCluster with ABAQUS 6.8 EF1: total clock time (min) vs. CPUs (nodes) for one and two hosts
Figure 7. ABAQUS E1 benchmark on BOTCluster with ABAQUS 6.8 (explicit solver, 274,632 elements): run time (sec) vs. CPUs utilized for one and two hosts

Figure 8. ABAQUS E3 benchmark on BOTCluster vs. the ABAQUS cluster, ABAQUS 6.8 (explicit solver, 34,54 elements): run time (sec) vs. CPUs utilized for one and two hosts

4.3 Analysis of the benchmarks

The cost of distribution

Distributing a job across multiple CPUs and multiple hosts comes with an added cost in memory and CPUs to get the job done. When the parallel efficiency exceeds the cost of communication, disk I/O, and license usage, you gain a benefit; otherwise distribution can end up being a detriment. Distribution also causes the output database to grow (see Table 2). Comparing E1 and E3, E3 is of moderate size and contains less contact. In certain ranges of CPU counts, splitting the job across multiple hosts gave better performance (e.g., compare 8 CPUs on one host with 8 CPUs spread over two hosts). But as the number of CPUs increased, the cost of communication exceeded the benefit of gaining more resources from multiple hosts, and performance suffered (e.g., compare CPU16 with CPU16-4hosts). When the model size and degree of complexity grow, the Linux cluster shows its intrinsic scalability: the more hosts that are used, the higher the performance obtained. Considering again the BOT segmented cone model, the model is much bigger than E1 and E3 and runs much longer; it also contains much more contact, which results in more communication cost when distributed across multiple hosts.
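For reference, the parallel efficiency discussed above can be expressed in terms of the measured run times; this is the standard definition, not a formula given in the original paper:

    speedup on n CPUs:       S(n) = T(1) / T(n)
    parallel efficiency:     E(n) = S(n) / n = T(1) / (n * T(n))

    Example: if a job takes 4,000 s on 1 CPU and 500 s on 16 CPUs,
    then S(16) = 8 and E(16) = 0.5, i.e., half of the added capacity
    is lost to communication, I/O, and other overheads.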
Table 2. Size of the .odb files (in bytes) generated with different numbers of CPUs utilized, for models including E1 and E3 at 4 and 8 CPUs. (Values are not preserved in this transcription.)

Dual-core vs. Quad-core

Figure 8 compares our E3 benchmark with the ABAQUS result (LIN64-29). We found that at lower CPU counts BOTCluster shows better results, but as the CPU number climbs and reaches 32, BOTCluster slows down. We believe this may be due to the different compute-node infrastructure: BOTCluster uses dual quad-core processors, while ABAQUS used dual dual-core. In ABAQUS 6.7 the element loop calculations are done using MPI. When moving to a DMP situation (more than one node), the element calculations are still done with MPI but only one thread is used per node, which is a definite bottleneck for ABAQUS parallel runs. We do not know what changed in ABAQUS 6.8 in this respect. The results fit the performance expected for dual-core and quad-core processors.

Load balance effect

The result for 12 CPUs always falls off the trend curve, so we studied this case in detail (see Figure 9). When 12 CPUs were split as 8 on one host and 4 on another, the job needed more run time than when the CPUs were spread evenly across two hosts. This may indicate a load-balancing effect; it needs to be confirmed with more research in the future. The two host layouts compared are sketched after Figure 9.

Figure 9. ABAQUS load-balance effect study: run times for the E1, E3, and engineering models with ptile=8 vs. ptile=6
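A sketch of how the two host layouts in the load-balance study map onto LSF resource requests (output file and job script names are placeholders):

    # 12 CPUs with up to 8 per host: LSF may place them as 8 + 4 across two hosts.
    bsub -n 12 -R "span[ptile=8]" -o e1_p8.%J.out < run_e1.sh

    # 12 CPUs with at most 6 per host: forces an even 6 + 6 split across two hosts.
    bsub -n 12 -R "span[ptile=6]" -o e1_p6.%J.out < run_e1.sh

In the study above, the even 6+6 layout ran faster than the uneven 8+4 layout for the 12-CPU case.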
5. Conclusions

Overall, OCS makes it simple to rapidly set up Linux-based HPC clusters and can effectively minimize the cost and time spent on deploying and managing a Linux cluster. From the benchmark results above, we can see that in general ABAQUS showed very good parallel scalability on the Linux cluster, and that the resource requirements vary with the models. To make optimal use of the cluster resources, a deeper understanding of the model category, its complexity, and its level of element contact appears necessary.

6. References

1. Bernstein, J., and Arend Dittmer, "Higher Returns on the Simulation Investment," ANSYS Advantage, Vol. II, Issue 3.
2. Hutchings, B., "Cluster Computing with Windows CCS," ANSYS Advantage, Vol. II, Issue 3.
3. Platform OCS 4.x Administration Virtual Course Training Material.

Acknowledgment

Special thanks to Tom Gardosik (Baker Hughes-INTEQ), Steve Hiller (Baker Hughes-BHBSS), and Rob Miller (ABAQUS), who provided a great deal of help with the cluster infrastructure design and installation. Also, my supervisor, Rodney Frosch, assisted in determining the computing resource specifications and accelerating the purchasing process.