IBM System Cluster 1350 ANSYS Microsoft Windows Compute Cluster Server




IBM System Cluster 1350 / ANSYS / Microsoft Windows Compute Cluster Server: IBM FLUENT Benchmark Results

IBM & FLUENT Recommended Configurations

IBM 16-Core BladeCenter S Cluster for FLUENT

Systems:
- Up to six BladeCenter HS21 XM (Xeon) nodes, each dual socket / dual core (four cores per compute node); LS21 (Opteron) nodes also available
- Node one doubles as the head node

Options:
- Head node can be used for large jobs and/or pre/post-processing, using the integrated SAS or SATA storage in the chassis

Cluster configuration:
- 8 GB/core (32 GB/node) on the head node
- 2 or 4 GB/core (8 or 16 GB/node) on each of the three remaining compute nodes

Cluster interconnects:
- Integrated Gigabit Ethernet
- Optional InfiniBand

Storage:
- SAS or SATA drives integrated in the chassis
- Direct-attach storage to any node in the cluster
- Up to 9 TB of additional storage

Other:
- Integrated storage and networking
- Operates off 110 V or 220 V power

[Slide diagram: head/compute node, compute nodes, and chassis slots reserved for future growth]

IBM 16-Core System x3550 Cluster for FLUENT

Systems:
- Four IBM System x3550 (Intel Xeon) nodes, each dual socket / dual core (four cores per compute node); x3455 (Opteron) nodes also available
- Node one doubles as the head node

Options:
- Head node can be used for large jobs and/or pre/post-processing, using the integrated SAS or SATA storage in the node
- http://www-03.ibm.com/systems/x/rack/x3550/index.html

Cluster configuration:
- 8 GB/core (32 GB/node) on the head node
- 2 or 4 GB/core (8 or 16 GB/node) on each of the three remaining compute nodes

Cluster interconnects:
- Onboard Gigabit Ethernet
- Optional InfiniBand

Storage:
- SAS or SATA drives integrated in each node, up to 1.5 TB per node

Other:
- 1U form factor
- Additional scalability can be achieved by incorporating additional nodes
- Reduced-power options also available

IBM 32-Core BladeCenter E Cluster for FLUENT

Systems:
- Up to 14 BladeCenter HS21 XM (Xeon) nodes, each dual socket / dual core (four cores per compute node)
- 146 GB SAS drive per node

Options:
- Head node can be used for decomposition, large jobs, and/or pre/post-processing (up to four 146 GB SAS drives, using the BladeCenter Storage & I/O blade, are suitable for the head node)

Cluster configuration:
- 2 GB/core (8 GB/node) on compute nodes
- Head node: 8 GB/core (32 GB/node) recommended

Cluster interconnects:
- Integrated Gigabit Ethernet
- Optional InfiniBand or Myricom

Storage:
- Optional BladeCenter Storage & I/O blade extends the direct-attached storage on the head node with up to three additional SFF hot-swap SAS drives

Other:
- For additional performance, the cluster can be upgraded to a BladeCenter H chassis using the existing nodes and switches

[Slide diagram: compute nodes, head/compute node, and chassis slots reserved for future growth]

IBM 32-Core System x3550 Cluster for FLUENT

Systems:
- Eight IBM System x3550 (Intel Xeon) nodes, each dual socket / dual core (four cores per compute node); x3455 (Opteron) nodes also available
- Node one doubles as the head node

Options:
- Head node can be used for large jobs and/or pre/post-processing, using the integrated SAS or SATA storage in the node
- http://www-03.ibm.com/systems/x/rack/x3550/index.html

Cluster configuration:
- 8 GB/core (32 GB/node) on the head node
- 2 or 4 GB/core (8 or 16 GB/node) on each of the seven remaining compute nodes

Cluster interconnects:
- Onboard Gigabit Ethernet
- Optional InfiniBand

Storage:
- SAS or SATA drives integrated in each node, up to 1.5 TB per node

Other:
- 1U form factor
- Additional scalability can be achieved by incorporating additional nodes
- Reduced-power options also available
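All four recommended configurations above follow the same sizing pattern: one head node carrying 8 GB/core plus identical compute nodes at 2 (or 4) GB/core. A minimal sketch of that sizing arithmetic is shown below; the helper function and its name are illustrative only and are not part of any IBM or ANSYS tooling.

    # Illustrative sizing arithmetic for the recommended FLUENT clusters above.
    # The function and the worked examples are assumptions for illustration only.

    def cluster_totals(nodes, cores_per_node, head_gb_per_core, compute_gb_per_core):
        """Total cores and total memory (GB) for one head node plus (nodes - 1) compute nodes."""
        total_cores = nodes * cores_per_node
        head_gb = head_gb_per_core * cores_per_node
        compute_gb = compute_gb_per_core * cores_per_node * (nodes - 1)
        return total_cores, head_gb + compute_gb

    # 16-core x3550 cluster: 4 nodes x 4 cores, 8 GB/core head, 2 GB/core compute nodes
    print(cluster_totals(4, 4, 8, 2))   # (16, 56)  -> 16 cores, 56 GB total

    # 32-core x3550 cluster: 8 nodes x 4 cores, 8 GB/core head, 2 GB/core compute nodes
    print(cluster_totals(8, 4, 8, 2))   # (32, 88)  -> 32 cores, 88 GB total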

FLUENT 6.3 IBM Benchmark Results

FLUENT Benchmark Environment
- FLUENT 6.3
- Microsoft Windows Compute Cluster Server 2003 R2, Service Pack 2
- Microsoft Windows Compute Cluster Pack, Service Pack 1

FLUENT Benchmark Hardware Details
- IBM System x3550
- 2 x Intel Xeon 5160 (3.0 GHz) processors (4 cores total)
- 8 GB RAM per node
- Cisco 4x InfiniBand interconnect
- HP-MPI used for all runs, over both Gigabit Ethernet and InfiniBand

FLUENT Medium Class Benchmarks

Benchmark   Cells     Solver                Description
FL5M1       155,188   Segregated implicit   Coal combustion in a boiler
FL5M2       242,782   Segregated implicit   Turbulent flow in an engine valveport
FL5M3       352,800   Segregated implicit   Combustion in a high-velocity burner

[Slide images: FL5M1, FL5M2, and FL5M3 model geometries]
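The charts that follow report two metrics. The slides do not restate the definitions, but by convention both are derived from the solver wall-clock time per run: scaling is speedup relative to a single-core run, and the FLUENT rating is the number of benchmark jobs that could complete in a 24-hour day.

    \text{Speedup}(N) = \frac{T_{\text{wall}}(1\ \text{core})}{T_{\text{wall}}(N\ \text{cores})},
    \qquad
    \text{FLUENT Rating} = \frac{86\,400\ \text{s/day}}{T_{\text{wall}}}

For example, a rating of 5,000 corresponds to roughly 17 seconds of wall-clock time per job.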

FLUENT InfiniBand Scaling, Medium Class Benchmarks
[Chart: performance relative to a single core (bigger is better) for FL5M1, FL5M2, and FL5M3 at 2, 4, 8, 16, and 32 cores; vertical scale 0 to 20]

FLUENT Rating, Medium Class Benchmarks
[Chart: FLUENT rating (bigger is better) for FL5M1, FL5M2, and FL5M3 at 1, 2, 4, 8, 16, 32, and 64 cores; vertical scale 0 to 30,000]

FLUENT Large Class Benchmarks

Benchmark   Cells       Solver                Description
FL5L1       847,746     Coupled implicit      Transonic flow around a fighter
FL5L2       3,618,080   Segregated implicit   External aerodynamics around a car
FL5L3       9,792,512   Segregated implicit   Turbulent flow in a transition duct

[Slide images: FL5L1, FL5L2, and FL5L3 model geometries]

FLUENT InfiniBand Scaling, Large Class Benchmarks (FL5L1 and FL5L2)
[Chart: performance relative to a single core (bigger is better) for FL5L1 and FL5L2 at 2, 4, 8, 16, 32, 64, and 128 cores; vertical scale 0 to 90]

FLUENT InfiniBand Scaling, Large Class Benchmarks (FL5L3)
[Chart: performance relative to 8 cores (bigger is better) for FL5L3 at 16, 32, 64, and 128 cores; vertical scale 0 to 14]

FLUENT Rating, Large Class Benchmarks (FL5L1 and FL5L2)
[Chart: FLUENT rating (bigger is better) for FL5L1 and FL5L2 at 1, 2, 4, 8, 16, 32, 64, and 128 cores; vertical scale 0 to 16,000]

FLUENT Rating, Large Class Benchmarks (FL5L3)
[Chart: FLUENT rating (bigger is better) for FL5L3 at 8, 16, 32, 64, and 128 cores; vertical scale 0 to 2,500]

For More Information
- IBM System x: www.ibm.com/systems/x
- IBM BladeCenter: www.ibm.com/bladecenter
- IBM System Cluster 1350: http://www-03.ibm.com/systems/clusters/hardware/1350/index.html
- IBM HPC Solutions for CAE: http://www-03.ibm.com/systems/x/solutions/industry/auto/hpcsolutions/

IBM Contacts: Worldwide and Americas

IBM Worldwide:
- Gregory Clifford, IBM Sales Executive Auto/Aero, 651-697-0595, gcliffor@us.ibm.com
- Par Hettinga, IBM CAE Segment Manager, +31 6-533-59940, par@nl.ibm.com
- Balaji Atyam, IBM Senior Technical Consultant, 512-838-6015, balaji@us.ibm.com
- Hari Reddy, IBM Senior Technical Consultant, 817-766-4027, hari@us.ibm.com
- Michael L Nelson, IBM Solution Engagement Manager, 919-254-1925, mlnelson@us.ibm.com

IBM Americas:
- Ian Wight (North America), IBM Business Partners Client Rep, 289-333-5048, ian@ca.ibm.com
- Levar Kelman (North America), IBM Business Partners Client Rep, 289-333-7108, kelman@ca.ibm.com
- Par Hettinga (South/Central America), IBM CAE Segment Manager, +31 6-533-59940, par@nl.ibm.com

IBM Contacts: Europe and Asia Pacific

IBM Europe:
- Angelo Apa (United Kingdom), IBM Sales Executive, +44 192-646-2234, angelo.apa@uk.ibm.com
- Kurt Armbruster (Germany), IBM ISV Sales Specialist, +49 711-785-1340, armbruster@de.ibm.com
- Dieter Oehmigen (Germany), IBM HPC Sales Specialist, +49 711-785-5062, dieter_oehmigen@de.ibm.com
- Olivier Multon (France), IBM HPC Sales Specialist, +33 1-4905-8237, olivier.multon@fr.ibm.com
- Lars Annell (Switzerland), IBM Senior Sales Specialist, +41 58-333-4685, LMAN@ch.ibm.com
- Andreas Ryden (Sweden), IBM Nordics HPC Sales Leader, +46 8-793-3442, Andreas.Ryden@se.ibm.com
- Marco Briscolini (Italy), IBM HPC Sales Specialist, +39 06-596-65393, marco_briscolini@it.ibm.com
- Par Hettinga (other countries), IBM CAE Segment Manager, +31 6-533-59940, par@nl.ibm.com

IBM Asia Pacific and Japan:
- Satoshi Kozono (Japan), IBM Sales Specialist, +81 3-3808-5042, E50906@jp.ibm.com
- Chang Jie Hao (China), IBM Sales Specialist, +86 13801186991, haocj@cn.ibm.com
- Par Hettinga (other countries), IBM CAE Segment Manager, +31 6-533-59940, par@nl.ibm.com