JUROPA Linux Cluster: An Overview. 19 May 2014, Ulrich Detert
1 Member of the Helmholtz Association. JUROPA Linux Cluster: An Overview. 19 May 2014, Ulrich Detert
2 JuRoPA
- JuRoPA: Jülich Research on Petaflop Architectures
  Partners: Bull, Sun, ParTec, Intel, Mellanox, Novell, FZJ
- JUROPA (Juropa-JSC): FZJ production system
  NIC and VSR projects, commercial customers, PRACE Tier-1 system
- HPC-FF: High Performance Computing For Fusion
  Dedicated to the European Fusion Research Community
  Operation of the HPC-FF partition ceased on June 30; all HPC-FF compute resources have been moved to the JSC partition!
3 Juropa Components (1)
JUROPA Hardware
- Sun Constellation System + Bull NovaScale R422-E2
- InfiniBand QDR (40 Gb/s per link and direction), full fat-tree topology
- 3288 compute nodes, each with:
  2 Intel Nehalem-EP quad-core processors (Xeon X5570), 2.93 GHz
  24 GB memory (DDR3, 1066 MHz)
  IB QDR HCA
- 26,304 cores in total (3288 nodes x 8 cores), 308 TF peak (26,304 cores x 2.93 GHz x 4 flops/cycle)
4 Juropa Components (2)
Lustre Storage Pool
- 4 Meta Data Servers (MDS): Bull NovaScale R423-E2 (Nehalem-EP 4-core / Westmere 6-core)
  100 TB for metadata (home and work), EMC² CX4-240
- 14 Object Storage Servers (OSS) for home: Sun Fire X4170, 500 TB user data
- 8 Object Storage Servers (OSS) for home: Bull NovaScale R423-E2, 500 TB user data
- 8 Object Storage Servers (OSS) for work: Bull NovaScale R423-E2, 800 TB user data
- Aggregated data rate ~40 GB/s
5 Juropa Architecture
6 Infiniband Topology (Fat Tree)
[Diagram: full fat tree built from 648-port switches connecting the JUROPA nodes; 92 building blocks with 24 nodes each and 60 building blocks with 18 nodes each]
7 Infiniband Topology (Fat Tree)
[Diagram groups: Sun nodes, service nodes, Bull nodes]
- 23 x 4 QNEM modules, 24 ports each
- 6 x M9 switches, 648 ports max. each, 468/276 links used
- Mellanox MTS3600 switches (Shark), 36 ports, for the service nodes
- 4 Compute Sets (CS) with 15 Compute Cells (CC) each; each CC with 18 Compute Nodes (CN) and 1 Mellanox MTS3600 (Shark) switch
- Virtual 648-port switches constructed from 54x/44x Mellanox MTS3600
8 Software (1)
- Operating System: SUSE SLES 11 SP1
- Cluster Management: ParaStation GridMonitor, Jumpmon
- Batch System:
  Torque Resource Manager (start jobs, return output etc.)
  Moab Workload Manager (priorities, accounting, job chains)
  User command-line interface (job start, status, cancel etc.)
- Compiler: Intel Professional Fortran, C/C++
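The user command-line interface is not spelled out here; as an illustration, these are the standard Torque/Moab user commands, which are assumed (not confirmed by the slides) to be the ones exposed on JUROPA:

    msub job.sh          # submit a batch job script
    showq -u $USER       # list your queued and running jobs
    checkjob <jobid>     # detailed status of a single job
    canceljob <jobid>    # cancel a job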
9 Software (2)
- Libraries:
  Intel Math Kernel Library (MKL): BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math etc.; highly optimized for the Intel CPU architecture
  /usr/local: ~90 packages, from adf and amber to wsmp and zlib
- MPI - Message Passing Interface: ParTec MPI (based on MPICH2)
- OpenMP: shared-memory multi-threading (Intel compilers)
- Unicore, PRACE (gsissh, gridftp)
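As a sketch of how compiler, MPI and MKL fit together: the commands below build an MPI program with the Intel compilers and link MKL. The MPI wrapper names (mpicc, mpif90) and the Intel -mkl flag are assumptions, and the module names follow the module list shown on the next slide:

    module load intel parastation/mpi2-intel mkl
    mpicc  -O2 -o mpi_app mpi_app.c  -mkl=sequential   # C source, sequential MKL
    mpif90 -O2 -o solver  solver.f90 -mkl              # Fortran source, threaded MKL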
10 Modules
- Modules allow switching between versions of a specific software package or library
- module avail: shows available modules
- module list: lists loaded modules, e.g.
  Currently loaded modulefiles:
  1) parastation/mpi2-intel   2) mkl   3) intel
- module load / module unload: load / unload a module
- module help: list usage information
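A short illustrative session for switching between compiler versions; the version string is a placeholder:

    module avail intel              # list installed Intel compiler versions
    module unload intel             # drop the currently loaded version
    module load intel/<version>     # load the desired version (placeholder)
    module list                     # verify the selection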
11 What is Simultaneous Multi-Threading (SMT)?
[Diagram: compute nodes coupled via MPI (distributed memory); within a node, Socket 0 and Socket 1 are connected by QPI, each with its own memory controller and memory]
- Compute nodes: mostly independent of each other, local memory, shared network access
- Socket: full-featured quad-core Nehalem-EP CPU with its own memory controller and QuickPath Interconnect (QPI)
12 What is SMT?
[Diagram: one socket with Cores 0-3, each core with two hardware threads (HWT 0, HWT 1) and its own cache, sharing one memory controller]
- Core: full-featured processor with its own register set, functional units and caches; memory access is shared among the cores.
- HWT (Hardware Thread): collection of registers and functional units ("virtual core") that are replicated in each core; non-replicated units like caches are shared among HWTs.
- Process (Task): computational entity that possesses its own copy of program code and data.
- Thread: computational entity that shares program code and data with other threads; threads may have (a limited amount of) non-shared, local data.
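As a quick illustration (not part of the slides), counting the logical CPUs on a compute node shows the effect of SMT: 2 sockets x 4 cores x 2 HWTs = 16 entries.

    grep -c "^processor" /proc/cpuinfo    # reports 16 on a Nehalem-EP node with SMT enabled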
13 Node Types (1)
Compute nodes
- Intended for batch jobs; includes interactive batch (64 nodes max.), charged for connect time
- No direct login on compute nodes (except interactive batch)
- Exclusive usage by one user/job (no node sharing); the smallest reservation unit is one node (= 8 cores / 16 with SMT); charged for wall-clock time
- Unrestricted access to the node's resources: ~22 GB memory, 8 processors (16 with SMT), wall-clock time limit 24 h
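A minimal sketch of a batch job for these nodes, assuming standard Torque/Moab directives and the ParaStation mpiexec; the directive syntax, script name and application name are illustrative, not taken from the slides:

    #!/bin/bash
    #MSUB -l nodes=2:ppn=8         # 2 compute nodes, 8 tasks per node (ppn=16 would also use the SMT threads)
    #MSUB -l walltime=01:00:00     # must stay below the 24 h wall-clock limit
    cd $PBS_O_WORKDIR              # start in the submission directory
    mpiexec -np 16 ./my_app        # 2 nodes x 8 tasks

Such a script would be submitted with msub job.sh and charged for the wall-clock time of both nodes.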
14 Compute Nodes - Available Memory
Communication scalability
- Memory consumption for static IB communication buffers depends on the number of communicating tasks
- Example: default requirement per connection = 0.5 MB. An 8-core node whose 8 tasks each connect to 512 x 8 = 4096 remote tasks needs 8 x 4096 x 0.5 MB = 16 GB just for buffers.
Solutions
- Smaller / fewer buffers:
  PSP_OPENIB_SENDQ_SIZE=3..16 (default: 16)
  PSP_OPENIB_RECVQ_SIZE=3..16 (default: 16)
- Buffer allocation on demand:
  export PSP_ONDEMAND=1
  or: mpiexec --ondemand ...
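A sketch of how these settings might be applied in a job script before launching the application (task count and program name are placeholders):

    export PSP_OPENIB_SENDQ_SIZE=4    # shrink the per-connection send queue (default 16)
    export PSP_OPENIB_RECVQ_SIZE=4    # shrink the per-connection receive queue (default 16)
    export PSP_ONDEMAND=1             # allocate connection buffers only when a connection is actually used
    mpiexec -np 4096 ./my_app         # the --ondemand flag of mpiexec is an alternative to PSP_ONDEMAND=1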
15 Node Types (2)
Login nodes (juropa, juropa…09)
- Intended for interactive work: program development (edit, compile, test), pre- and postprocessing
- Access to the home and work (Lustre) file systems
- No production jobs here! (CPU time limit 30 min.)
- Use ulimit -Sa or ulimit -Ha to display the soft/hard limits
HPC-FF login node (hpcff, hpcff01)
- Access to HPC-FF data; this access ceased on April 30, 2014
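For example (illustrative; which particular limit carries the 30-minute cap is an assumption):

    ulimit -Sa    # show all soft limits on the login node
    ulimit -Ht    # hard CPU-time limit in seconds (a 30 min cap would appear as 1800)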
16 Node Types (3)
GPFS nodes (juropagpfs, juropagpfs…)
- Access to the GPFS file systems (mounted also on JUQUEEN)
- Intended for data manipulation:
  copy data to and from GPFS/Lustre
  restore data from TSM backup
  import/export data to/from external sources (PRACE)
- Same limits as on the login nodes (except CPU time limit: 360 min.)
- GPFS nodes 04 and 05 for big-memory requests: 192 GB memory
- Interactive performance might be degraded on GPFS nodes due to heavy data traffic.
  Recommendation: use the login nodes if you do not need the GPFS file systems, large memory, the higher CPU time limit or the connection to the PRACE network.
17 Accessing the System
Login nodes
- ssh [-X] …
- Login nodes …09 are selected round-robin; hostnames: jj28l…09
- A specific node (e.g. juropa01) can be addressed directly: ssh [-X] …
18 Accessing the System
GPFS nodes
- ssh [-X] …
- GPFS nodes are selected round-robin; hostnames: jj28g…
- Access to all nodes requires an SSH key, to be provided when applying for an account (JSC Dispatch)
- Do provide a passphrase for your SSH key! Password access is not possible!
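A sketch of the key-based access described above; the user ID and host names are placeholders, not the real JUROPA addresses:

    ssh-keygen -t rsa -b 4096               # generate a key pair; choose a non-empty passphrase when prompted
    ssh -X <userid>@<juropa-login-host>     # log in with X11 forwarding (placeholder host)
    ssh -X <userid>@<juropa-gpfs-host>      # same procedure for the GPFS nodes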
19 File Systems Overview
[Diagram: Login nodes, HPC-FF login node, GPFS nodes and compute nodes with their mounted file systems: Lustre home and work everywhere, GPFS home/work and GPFS arch on the GPFS nodes]
20 File Systems (1)
Lustre - mounted on the login, GPFS and compute nodes
- $WORK = $LUSTREWORK = /lustre/jwork
  800 TB; default group quota: 3 TB, 2 million files
  no backup; files older than 28 days will be deleted
  recommended for large temporary files and high performance requirements
- $HOME = $LUSTREHOME = /lustre/jhome…
  from 29 to 62 TB per file system, distributed among user groups
  default group quota: 3 TB, 2 million files
  daily backup
  recommended for permanent program data with low performance requirements (e.g. program sources, input files, configuration data)
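An illustration of the intended usage of the two Lustre file systems (directory and file names are placeholders):

    mkdir -p $WORK/myrun                       # large temporary job data goes to Lustre work (no backup, 28-day lifetime)
    cp $HOME/project/input.dat $WORK/myrun/    # sources and input files stay in the backed-up Lustre home
    cd $WORK/myrun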
21 File Systems (2)
GPFS - mounted on the GPFS nodes only
- Shared with JUQUEEN, mounted from the JUST file server
- Only available with a valid JUQUEEN user ID
- $GPFSWORK = /gpfs/work
  3.6 PB, mounted from the JUST file server
  details on sizes, quota, backup etc. => tomorrow
- $GPFSHOME = /gpfs/homea … /gpfs/homec
  mounted JUQUEEN home file systems
  details on sizes, quota, backup etc. => tomorrow
- $GPFSARCH = /gpfs/arch, /gpfs/arch1, /gpfs/arch2
  automatic data migration to/from the tape library
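For example, results could be staged from Lustre into the GPFS archive on one of the GPFS nodes (path names are placeholders):

    cp -r $WORK/myrun $GPFSARCH/myrun    # run on a GPFS node; data in /gpfs/arch* migrates to tape automatically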
22 Backup
- Backup is done for the Lustre home file systems, GPFS home and GPFS arch
- Daily backup with TSM
- Restore of user data can only be done on the GPFS nodes:
  ssh -X adsmback
  select home, arch or gpfshome: this opens a panel for interactive restore
  Select => Restore => File Level
  choose your files/directories to restore => Restore
23 Further Information
- Regular preventive maintenance on Thursdays; see the message of the day at login
- Juropa on-line documentation
- User support at FZJ, Phone: …