Extreme Computing: The Bull Way




Architect of an Open World. Extreme Computing: The Bull Way. Dr.-Ing. Joachim Redmer, Director HPC (j.redmer@bull.de)

Bull today
Bull is an Information Technology company, focusing on open and secure systems. Our mission is to help corporations and public-sector organizations optimize the architecture, operations and financial return of their Information Systems, supporting their core business processes. Bull is the only European IT company positioned to deliver all the key elements of the IT value chain.
(2009 figures: Bull activities without consolidation of Amesys activities)

Bull in Extreme Computing today
- Reached €100M in products revenue in 2009 - on track to exceed 10% market share in Europe; anticipated €150M in products revenue in 2010
- Introduced the bullx range in 2009, delivering new levels of performance and innovation for Extreme Computing: Xeon-based as well as hybrid (NVIDIA) blades; bullx nominated best HPC server product by HPCwire
- Signed landmark deals for bullx servers in 2010: GENCI/TGCC (France), AWE (United Kingdom), RWTH Aachen (Germany), Dassault Aviation (France), Société Générale (France), Ineris (France), Reims University (France)
- First petascale system in Europe, at CEA
- 500 specialists dedicated to HPC in Europe

Key offerings boosted by recent acquisitions
SOLUTIONS & INTEGRATION
- 2008: SIRUS - vertical ISV & SI in France
- 2008: CSB Consulting - IT consulting in Belux
- 2007: Siconet - SI in Spain
- 2006: Address Vision - postal automation ISV & SI in the USA
- 2006: AMG - telco SI in Poland
EXTREME COMPUTING
- 2008: science + computing - Extreme Computing solutions & services in Germany
- 2007: Serviware - Extreme Computing SI in France
SECURITY & OUTSOURCING
- 2010: Amesys
- 2006: Agarik - Internet SI & hoster in France
- 2005: Enatel

Extreme Computing applications - a dedicated team of experts in application performance: Electro-Magnetics, Computational Chemistry (Quantum Mechanics), Computational Chemistry (Molecular Dynamics), Computational Biology, Structural Mechanics (Implicit), Structural Mechanics (Explicit), Seismic Processing, Computational Fluid Dynamics, Reservoir Simulation, Rendering / Ray Tracing, Climate / Weather / Ocean Simulation, Data Analytics

Architect of an Open World Latest News

2010: TERA 100
- 1.25 PFlops peak, 1.05 PFlops Linpack
- 4,300 bullx S nodes
- 140,000 Intel Nehalem-EX cores
- 300 TB of memory
- 20 PB of disk storage
- QDR InfiniBand interconnect
- 500 GB/s bandwidth to the global file system
- Best Linpack efficiency in the Top10
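From the first two figures above, the implied Linpack efficiency is

\[
\frac{R_{\max}}{R_{\text{peak}}} = \frac{1.05\ \text{PFlops}}{1.25\ \text{PFlops}} = 0.84,
\]

i.e. roughly 84%, which is the figure behind the "best Linpack efficiency in the Top10" claim on the next slide.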

bullx supercomputer: best Top10 Linpack efficiency. (Chart: Linpack efficiency of the Top10 systems, on a scale of roughly 0.3 to 0.9, with the bullx-based system on top.)

GENCI CURIE: PRACE Tier 0
"With technical support from CEA, through a competitive tendering process, we were able to assess the excellence of Bull's offering. This means we will soon have at our disposal a machine that will offer French and European scientists the resources they need to carry out their research work at the highest possible level in a highly competitive global environment." - Catherine Rivière, CEO of GENCI
First phase implemented in October 2010. CURIE in figures:
- 1.6 PetaFlops (90,000+ Xeon cores)
- 10 PB of storage
- 250 GB/s data throughput
- 200 m² footprint

2011: RWTH Aachen
- 292 TFlops peak
- 1,350 bullx blades (16,200 Intel Westmere cores)
- 362 bullx S nodes (11,500 Intel Nehalem-EX cores)
- 1,500 TB of HPC storage and 1,500 TB of home storage
- QDR InfiniBand interconnect
- Soon operational

Architect of an Open World Product Descriptions

Product descriptions: bullx blade system, bullx rack-mounted systems, bullx SMP system, NVIDIA Tesla systems, Bull storage, cool cabinet door, bullx cluster suite

bullx blade system - overall concept

bullx blade system - overall concept
General purpose, versatile:
- Xeon Westmere processors
- 96 GB RAM per blade
- Local HDD/SSD or diskless
- InfiniBand / GbE
- OS: Red Hat, SUSE, Windows HPC Server 2008, CentOS
- Compilers: GNU, Intel
Uncompromised performance:
- Support for high-frequency Westmere parts
- Memory bandwidth: 12 memory slots per blade
- Fully non-blocking InfiniBand QDR interconnect
- 2.64 TFLOPS per chassis (Intel Xeon X5675, 3.06 GHz)
- Up to 15.8 TFLOPS per rack (with CPUs)
Leading-edge technologies:
- Intel Westmere
- InfiniBand QDR
- Diskless operation
- Ready for GPU blades
High density:
- 7U chassis
- 18 blades, each with 2 processors, 12 DIMMs, an HDD/SSD slot and an IB connection
- 1 InfiniBand switch (36 ports)
- 1 GbE switch (24 ports)
- 10 GigE uplink (optional)
- Ultracapacitor
Optimized power consumption:
- Typically 6.5 kW per chassis
- High-efficiency (90%) PSUs
- Smart fan control in each chassis
- Smart fan control in the water-cooled rack
- Ultracapacitor: no UPS required
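As a sanity check, the per-chassis peak is consistent with a simple back-of-the-envelope count, assuming the standard figures of 6 cores per Xeon X5675 and 4 double-precision FLOPs per core per cycle (neither is quoted above):

\[
18\ \text{blades} \times 2\ \text{sockets} \times 6\ \text{cores} \times 3.06\ \text{GHz} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle}} \approx 2.64\ \text{TFLOPS}.
\]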

bullx blade system - block diagram
18x compute blades:
- 2x Westmere-EP sockets
- 12x DDR3 DIMMs (12x 8 GB = 96 GB)
- 1x SATA HDD/SSD slot (diskless operation is an option)
- 1x InfiniBand ConnectX/QDR chip
1x InfiniBand Switch Module (ISM) for the cluster interconnect:
- 36-port QDR IB switch (18 internal connections, 18 external connections)
1x Chassis Management Module (CMM):
- OPMA board
- 24-port GbE switch (18 internal ports to the blades, 3 external ports)
1x optional Ethernet Switch Module (ESM):
- 24-port GbE switch (18 internal ports to the blades, 3 external ports)
1x optional Ethernet Switch Module (TSM):
- GbE switch with 10 GigE uplinks
1x optional Ultra Capacitor Module (UCM)

bullx blade system - blade block diagrams (bullx B500 compute blade and bullx B505 accelerator blade). Both blades pair Westmere-EP sockets over QPI (12.8 GB/s in each direction) with Tylersburg I/O controllers (31.2 GB/s), a SATA SSD or diskless boot, and GbE; PCIe x8 links (4 GB/s) feed the InfiniBand ConnectX adapters, and on the B505 PCIe x16 links (8 GB/s) feed the accelerators.

Ultracapacitor Module (UCM)
- Embedded protection against short power outages: protects one chassis with all its equipment under load, for up to 250 ms
- NESSCAP capacitors (2x 6)
- Avoids an on-site UPS: saves on infrastructure costs, and up to 15% on electrical costs
- Improves overall availability: run longer jobs

bullx chassis packaging (7U chassis): LCD unit, CMM, 4x PSUs, 18x blades, ESM/TSM

bullx B505 accelerator blade - embedded accelerators for high performance with high energy efficiency:
- 2x Intel Xeon 5600
- 2x NVIDIA T20 (Tesla)
- 2x IB QDR
- 18.5 TFLOPS in 7U
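The 18.5 TFLOPS figure is consistent with a 7U chassis fully populated with B505 blades, if one assumes 9 double-width blades per chassis and roughly 1.03 TFLOPS single-precision peak per T20 GPU (both are assumptions, not stated above):

\[
9\ \text{blades} \times 2\ \tfrac{\text{GPUs}}{\text{blade}} \times 1.03\ \text{TFLOPS} \approx 18.5\ \text{TFLOPS (single precision)}.
\]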


bullx rack-mounted systems
R424 E2 (compute node) - 4 nodes per 2U for unprecedented density:
- Xeon 5600, 2x 2-socket, 2x 12 DIMMs
- QPI up to 6.4 GT/s
- 2x 1 PCI-Express x16 Gen2
- InfiniBand QDR embedded (optional)
- 3x SATA2 hot-swap HDD
- 92% PSU efficiency
R423 E2 (service node) - enhanced connectivity and storage, 2U:
- Xeon 5600, 2-socket, 18 DIMMs
- 2 PCI-Express x16 Gen2
- 8x SATA2 or 8x SAS HDD
- Redundant power supply, hot-swap fans
R425 E2 (visualization) - supports the latest graphics & accelerator cards, 4U or tower:
- Xeon 5600, 2-socket, 18 DIMMs
- 2 PCI-Express x16 Gen2
- 8x SATA2 or 8x SAS HDD
- Powerful power supply, hot-swap fans


Mesca's fundamentals
- SMP of up to 16 sockets, based on the Bull Coherent Switch (BCS): Intel Xeon Nehalem-EX processors, coherent memory of up to 1 TB
- Several types of packaging: high-density compute node, high-I/O-connectivity node
- RAS features: self-healing of the QPI and XQPI links; hot-swap disks, fans and power supplies
- Green features: ultracapacitor

Module-level diagrams: four Nehalem-EX Xeon sockets (NHM) plus IOHs form one module; a single module serves 4-socket-only systems, while for >4-socket systems the module is repeated n times and coupled through the BCS over full-width QPI and XCSI links.

bullx S60x0 CC-NUMA server
SMP (CC-NUMA) node, maximum configuration: 4 modules, 16 sockets, 128 cores (Nehalem-EX), 128 memory slots (2 TB).
Large nodes bring: large shared memory (pre/post-processing), many more cores per SMP, fewer nodes, simpler system administration, and multi-level parallelism (MPI/OpenMP).
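To make the multi-level parallelism point concrete, here is a minimal hybrid MPI + OpenMP sketch (an illustrative example, not part of the bullx software stack): one MPI rank per node or socket, with OpenMP threads filling the cores owned by each rank.

```c
/* hybrid.c - minimal MPI + OpenMP "hello" sketch (illustrative only).
 * Build (generic): mpicc -fopenmp hybrid.c -o hybrid
 * Run   (generic): mpirun -np <ranks> ./hybrid                       */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Ask for an MPI library that tolerates threaded ranks. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Inside each rank, OpenMP threads exploit the shared memory of the
     * CC-NUMA node; ranks communicate across nodes via MPI. */
    #pragma omp parallel
    {
        printf("rank %d/%d, thread %d/%d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```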

Taking advantage of SMP nodes with MPI
Optimized intra-node throughput:
- Enable direct copy from sender to receiver
- Rely on the SSE instruction set
- Achieve a transfer rate of half the memory bandwidth
Optimized intra-node latency:
- Lock-free shared-memory device
- Take advantage of the socket architecture (shared-cache latency: 200 ns)
(Chart: intra-node latency to socket 1 core 2, socket 1 cores 3 and 4, socket 2, and sockets 3 and 4, comparing MVAPICH2 and MPIBull2 on 4, 8 and 16 cores.)
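Intra-node latency figures like these are typically obtained with a simple ping-pong microbenchmark between two ranks pinned to the cores of interest. The sketch below is illustrative only; it is not the benchmark behind the chart above.

```c
/* pingpong.c - tiny intra-node MPI ping-pong sketch (illustrative only).
 * Run with exactly two ranks, pinned to the cores of interest.          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    enum { ITERS = 10000, MSG = 8 };   /* small message => latency-bound */
    char buf[MSG] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    /* Half the round-trip time gives the one-way latency. */
    double one_way_us = (MPI_Wtime() - t0) / (2.0 * ITERS) * 1e6;
    if (rank == 0)
        printf("one-way latency: %.2f us\n", one_way_us);

    MPI_Finalize();
    return 0;
}
```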


Bull Storage for HPC clusters
A complete line of storage systems: performance, modularity, high availability (with Lustre).
A rich management suite: monitoring, grid & standalone system deployment, performance analysis.

Bull storage systems for HPC
- StoreWay Optima 1500: SAS/SATA, 3 to 144 HDDs, up to 12 host ports, 2U drawers
- StoreWay EMC CX4: FC/SATA, up to 480 HDDs, up to 16 host ports, 3U drawers
- DataDirect Networks SFA 10K (consult us): SAS/SATA, up to 1,200 HDDs, 8 host ports, 4U + 2/3/4U drawers

Bull storage systems for HPC - details

Optima 1500:
- Disks: up to 144; SAS 146/300/450 GB, SATA 1 TB
- RAID: 1, 3, 3DP, 5, 6, 10, 50 and TM
- Host ports: 2/12 FC4; back-end ports: 2 SAS 4X
- Cache size (max): 4 GB
- Controller size: 2U base with disks; disk drawer: 2U, 12 slots
- Performance (RAID 5): read up to 900 MB/s, write up to 440 MB/s

CX4-120:
- Disks: up to 120; FC 146/300/400/450 GB, SATA 1 TB
- RAID: 0, 1, 10, 3, 5, 6
- Host ports: 4/12 FC4; back-end ports: 2
- Cache size (max): 6 GB
- Controller size: 3U; disk drawer: 3U, 15 slots
- Performance (RAID 5): read up to 720 MB/s, write up to 410 MB/s

CX4-480:
- Disks: up to 480; FC 10 krpm 400 GB, FC 15 krpm 146/300/450 GB, SATA 1 TB
- RAID: 0, 1, 10, 3, 5, 6
- Host ports: 8/16 FC4; back-end ports: 8
- Cache size (max): 16 GB
- Controller size: 3U; disk drawer: 3U, 15 slots
- Performance (RAID 5): read up to 1.25 GB/s, write up to 800 MB/s

SFA 10K couplet:
- Disks: up to 1,200; SAS 10 krpm 400 GB, SAS 15 krpm 300/450/600 GB, SATA 1,000/2,000 GB
- RAID: 8+2 (RAID 6)
- Host ports: 16 FC8 / 8 QDR; back-end ports: 20 SAS 4X
- Cache size (max): 5 GB, RAID-protected
- Controller size: 4U; disk drawer: 4U, 60 slots
- Performance: read & write up to 20 GB/s


Bull Cool Cabinet Door
- No impact on server behaviour: air flow through the door is adjusted to match the drawer air flows
- No impact on the computer room: select the outlet air temperature; a 2-way or 3-way valve controls the heat-exchanger water flow
- No more hot spots - better MTBF
- Cools up to 40 kW per rack - ready for Bull Extreme Computing systems
(Diagram: 20°C room air enters the rack, the servers add a ΔT of roughly 15°C, and the water-cooled door brings the 35°C exhaust back to 20°C using 7°C inlet water; rack and room pressures are balanced across the exchanger.)

Jülich Research Center: water-cooled system

Cool cabinet door: characteristics
- Width: 600 mm (19")
- Height: 2020 mm (42U)
- Depth: 200 mm (8")
- Weight: 150 kg
- Cooling capacity: up to 40 kW
- Power supply: redundant
- Power consumption: 700 W
- Input water temperature: 7-12°C
- Output water temperature: 12-17°C
- Water flow: 2 litres/second (7 m³/hour)
- Ventilation: 14 managed multi-speed fans
- Recommended cabinet air inlet: 20°C ± 2°C
- Cabinet air outlet: 20°C ± 2°C
- Management: integrated management board for local regulation and alert reporting to Bull System Manager
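These figures are mutually consistent: with a water flow of 2 L/s and roughly a 5 K rise between inlet and outlet water (both quoted above), and taking water's specific heat as about 4.2 kJ/(kg·K), the removable heat is approximately

\[
Q = \dot{m}\, c_p\, \Delta T \approx 2\ \tfrac{\text{kg}}{\text{s}} \times 4.2\ \tfrac{\text{kJ}}{\text{kg·K}} \times 5\ \text{K} \approx 42\ \text{kW},
\]

in line with the quoted 40 kW cooling capacity.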


bullx supercomputer suite (editions scale with supercomputer size: Standard Edition, Advanced Edition, and the eXtreme Pack for systems with thousands of nodes)

bullx supercomputer suite - Advanced Edition

Advanced Edition / eXtreme Pack: components
- bullx MC (Management Center): super-fast image-based provisioning; web-based multi-level supervision; power management; automated health management; maintenance management; highly available, cell-based architecture
- bullx PFS (Parallel File System): increased throughput and scalability
- bullx BM (Batch Management): advanced placement policies; topology-aware resource allocation
- bullx MPI: multi-path network failover; abnormal-pattern detection; topology-aware operations
- bullx DE (Development Environment): complete best-of-breed set of tools, from compiling and debugging to profiling and optimizing
- bullx Linux: HPC-enabled (OS jitter reduction, optimized operations for increased application performance); enhanced OFED

Product descriptions: bullx blade system, bullx rack-mounted systems, bullx SMP system, NVIDIA Tesla systems, Bull storage, cool cabinet door, bullx cluster suite, Windows HPC Server 2008

Bull and Windows HPC Server 2008
- Clusters of bullx R422 E2 servers: Intel 5600 processors, compact rack design (2 servers in 1U), fast & reliable InfiniBand interconnect
- Supporting Microsoft Windows HPC Server 2008: simplified cluster deployment and management, broad application support, enterprise-class performance and scalability
- Joint collaboration with leading ISVs to provide complete solutions
- The right technologies to handle industrial applications efficiently

Windows HPC Server 2008
Combining the power of the Windows Server platform with rich, out-of-the-box functionality to help improve productivity and reduce the complexity of your HPC environment.
Microsoft Windows Server 2008 HPC Edition:
- Support for high-performance hardware (x64 architecture)
- Winsock Direct support for RDMA over high-performance interconnects (Gigabit Ethernet, InfiniBand, Myrinet and others)
+ Microsoft HPC Pack 2008:
- Support for industry standards (MPI2)
- Integrated job scheduler
- Cluster resource management tools
= Microsoft Windows HPC Server 2008:
- Integrated, out-of-the-box solution
- Leverages past investments in Windows skills and tools
- Makes cluster operation just as simple and secure as operating a single system

A complete turnkey solution
Bull delivers a complete ready-to-run solution:
- Sizing
- Factory pre-installed and pre-configured (R@ck n Roll)
- Installation and integration into the existing infrastructure
- 1st and 2nd level support
- Monitoring, audit
- Training
Bull has a Microsoft Competence Center.

bullx cluster 400-W
Enter the world of High Performance Computing with the bullx cluster 400-W running Windows HPC Server 2008:
- bullx cluster 400-W4: 4 compute nodes to relieve the strain on your workstations
- bullx cluster 400-W8: 8 compute nodes to give independent compute resources to a small team of users, enabling them to submit large jobs or several jobs simultaneously
- bullx cluster 400-W16: 16 compute nodes to equip a workgroup with independent high-performance computing resources that can handle their global compute workload
A solution that combines:
- The performance of bullx rack servers equipped with Intel Xeon processors
- The advantages of Windows HPC Server 2008: simplified cluster deployment and management, easy integration with the IT infrastructure, broad application support, a familiar development environment
- And expert support from Bull's Microsoft Competence Center

