AFS in a GRID context
1 European AFS meeting, Rome, September 2009, Università Roma Tre. AFS in a GRID context. G. Bracco, S. Migliori, S. Podda, P. D'Angelo, A. Santoro, A. Rocchi, C. Sciò. ENEA FIM, C.R. ENEA Frascati, V. E. Fermi 45, Frascati (ROMA), Italy. [email protected]
2 Motivation: illustrate the utilization of AFS in a GRID framework, showing architectural aspects, advantages and issues from the experience of ENEA-GRID, a GRID infrastructure in operation since 1999. Outline: ENEA-GRID and its computational resources; the CRESCO HPC system; interoperability with other grids; AFS and ENEA-GRID; GRID-related AFS pros/cons; conclusions.
3 ENEA-GRID. ENEA, the Italian Agency for New Technologies, Energy and Sustainable Development, has 12 research sites, 6 of them with computer centers. All of ENEA's main computational resources are integrated in the ENEA-GRID infrastructure, which provides ENEA researchers and their collaborators with easy access to the available multiplatform resources.
4 ENEA-GRID and the Network. The WAN connection is provided by GARR, the Italian Academic and Research Network Consortium, with 9 PoPs for ENEA. Site bandwidths: Brindisi 150 Mb/s, Bologna 30 Mb/s, Casaccia 200 Mb/s, Frascati 1000 Mb/s, Portici 2000 Mb/s, Trisaia 18 Mb/s. [Map of the GARR network: Palermo, Pisa, Roma Sede, and the ENEA sites CAS, POR, TRI, BOL, FRA, BRI]
5 ENEA-GRID Architecture. GRID functionalities (authentication, authorization, resource and data discovery, sharing & management) are provided by mature multiplatform components (the ENEA-GRID middleware): distributed file system: OpenAFS integrated with Kerberos 5; WAN resource manager: LSF Multicluster; user interface: Java & Citrix technologies, now also FreeNX. These production-ready components have made it possible to integrate state-of-the-art computational resources reliably over the years (ENEA-GRID started in 1999). ENEA participates in GRID projects (Datagrid, EGEE, EGEE-II & III, BEINGRID, Italian PON projects, GRISU...) focusing on interoperability solutions with other middleware (gLite, Unicore): a gateway implementation method (SPAGO, Shared Proxy Approach for GRID Objects) has been developed and applied to interoperability with gLite-based grids.
6 ENEA-GRID computational resources. Hardware, most relevant: the CRESCO HPC system, located in Portici (NA), rank #125 in the Top500 of June 2008 (rank #2 in Italy): 17.1 Tflops, 300 hosts, 2720 cores, InfiniBand 4xDDR. Others: ~100 hosts, ~650 CPUs. AIX: IBM SP5, 256 cpu (12 p GHz, 16 cpu, + 1 p GHz, 64 cpu; 1.5 Tflops); SP4, 96 cpu. SGI Altix 350 (IA64), 32 cpu, & Onyx. Cray XD1, 24 cpu. Linux clusters 32/x86_64; Apple cluster; Windows servers... Software: commercial codes (Fluent, Ansys, Abaqus...), computational environments (Matlab, IDL,...), research codes (CPMD, MCNP...).
7 CRESCO HPC system (1). CRESCO (Computational Research Center for Complex Systems) is an ENEA project, co-funded by the Italian Ministry of University and Research (MUR) in the framework of a PON call. In operation since spring.
8 CRESCO HPC system (2). A general-purpose facility based on leading multicore x86_64 technology, organized in three sections. Section 1, 672 cores, for large memory requirements: 42 fat nodes IBM x3850/x3950-M2, 4 Xeon Quad-Core Tigerton E7330 processors (2.4 GHz/1066 MHz/6 MB L2), 32 GB RAM (4 extra-fat nodes with 64 GB RAM, 1 coupled node (x2) with 32 cores / 128 GB). Section 2, 2048 cores, for high scalability: 256 blades IBM HS21, Xeon Quad-Core Clovertown E5345 processors (2.33 GHz/1333 MHz/8 MB L2), 16 GB RAM. Experimental section (Cell, FPGA, GPU): 4 blades IBM QS21, 2 Cell BE processors at 3.2 GHz; 6 nodes IBM x3755, 4 sockets AMD Dual-Core 8222, FPGA Virtex5 LX330; 1 node IBM x3755, 4 sockets AMD Dual-Core 8222, NVIDIA Quadro FX 4700 X2 video card.
9 CRESCO InfiniBand Network. CRESCO storage: DDN S2A, TB raw; 120 TB GPFS.
10 ENEA-GRID interoperability in production. IGI & EGEE: the Italian and European GRIDs; GRISU: the Southern Italy GRID. [Diagram: ENEA-GRID connected via GARR to the other infrastructures, including PI2S2]
11 ENEA-GRID and Research Requirements. Computing power: High Performance / High Throughput Computing systems at the state of the art of computational performance. A reliable and stable user environment. Tools for collaborative activities. User support (access, infrastructure integration, resource availability and monitoring). ENEA-GRID & research, ENEA examples: computational chemistry; nuclear fusion, plasma stability; climate/weather/ocean simulations; atmospheric pollutant diffusion; combustion simulation...
12 ENEA-GRID & Industry. Additional specific requirements/issues: access to proprietary codes; certification issues (license management); reduction in simulation time, even at the price of weak problem scalability; synergy between proprietary and open-source codes (e.g. OpenFOAM vs Fluent); customization of access control to codes and data; security and traceability, intellectual property protection; SLA, charging model,... Some examples with the following companies: AVIO, AnsaldoEnergia/AnsaldoRicerche, AAPS Informatica, CETMA...
13 AFS and GRID. AFS was born well before the GRID paradigm was introduced (in 1995, by Foster and Kesselman), but many AFS features were already GRID-like: born on the WAN, with a global name space; strong authentication; data location transparency for the user; powerful PTS group management; multi-platform. When GRID infrastructures started to develop, AFS was closed/proprietary software, with some performance issues. This was not a problem for ENEA when ENEA-GRID was started: AFS was multi-platform and production ready.
14 AFS and ENEA-GRID. History: 1999, initial Transarc installation; collaboration with CASPUR since 2004; OpenAFS migration in May 2006; Kerberos 5 (MIT) migration in February 2007. Features: the enea.it cell, servers and architecture; user and data layout; management utilities; LSF integration.
15 enea.it AFS cell. Dbservers (Scientific Linux 4.x, OpenAFS 1.4.0): 3 standard dbservers located in one site (Frascati) and listed in the publicly available CellServDB at grand.central.org; 5 clone dbservers in the other sites (2 in Portici, where the CRESCO HPC system is located). All dbservers run fakeka and are secondary KDCs of the realm ENEA.IT; the primary KDC is in Frascati. Fileservers (SL 4.x-5.x, AIX 5.3, Solaris 9, OpenAFS 1.4.1, 1.4.4, 1.4.10): 11 fileservers, at least two in each of the main sites, Portici and Frascati. Total allocated space: 44 TB; total used space: 11.2 TB. Clients: SL 4.x, SL 5.x, CentOS 5.x, AIX, IRIX 6.5, MacOSX. On computational worker nodes the CellServDB contains only the local dbservers, plus one standard dbserver for sites where only one dbserver is available. Vlserver and fileserver preferences are set automatically by a cron script.
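The server-preference step can be illustrated with a small script. The sketch below is not the actual ENEA cron job: it only shows, assuming the local servers are known by hostname, how a client could bias the cache manager toward them with the standard fs setserverprefs command (host names and rank values are illustrative).

#!/usr/bin/env python
# Hypothetical sketch of a cron job that ranks local AFS servers first.
# Lower rank means higher preference; hosts and ranks are illustrative only.
import subprocess

LOCAL_FILESERVERS = ["afsfs1.frascati.example", "afsfs2.frascati.example"]
LOCAL_VLSERVERS = ["afsdb1.frascati.example"]

def set_prefs(hosts, rank, vlservers=False):
    """Call 'fs setserverprefs' to assign the given rank to each host."""
    flag = "-vlservers" if vlservers else "-servers"
    args = ["fs", "setserverprefs", flag]
    for host in hosts:
        args += [host, str(rank)]
    subprocess.check_call(args)

if __name__ == "__main__":
    set_prefs(LOCAL_FILESERVERS, 5000)                     # prefer local fileservers
    set_prefs(LOCAL_VLSERVERS, 5000, vlservers=True)       # and local vlservers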
16 enea.it user & data setup (1) User propagation using AFS: /etc/passwd is managed using a hourly cron job and user data in AFS space AFS space is organized following the ENEA sites /afs/enea.it./bol»./bri»./cas»./fra»./por»./tri BOL FRA CAS POR TRI BRI
17 enea.it user & data setup (2) User space: dual entry to user HOME, global and site entries /afs/enea.it/user/b/bracco is the same area as /afs/enea.it/fra/bracco < global path < local path : HOME on Frascati site» ~/private» ~/public» ~/public_html» ~/rem/por user volume on Portici site» ~/rem/bri user volume on Brindisi site» ~/PFS/por link to user GPFS in Portici site user space on remote sites is provided directly from the user HOME
18 enea.it user & data setup (3) Data space for software and projects ( organized in 2 level volumes: project volume, subproject volumes) /afs/enea.it/software»./maxima»./maxima/html > /afs/enea.it/projects» ~/eneagrid» ~/eneagrid/html > ~/eneagrid/docs»... project and software volumes have standard name p.site.nickname.subx while for users the standard approach is used: e.g user.bracco
19 enea.it user & data setup (4) Also Project/Software area have also global and local access paths /afs/enea.it/projects» ~/eneagrid /afs/enea.it/fra» ~/arcproj/eneagrid» ~/user» ~/remote < a site based entry for user remote volumes Backup volumes mount point have only a global access path /afs/enea.it/backup»./user/b/bracco» /user/bracco.rem/fra > remote volume backup»./arcproj/eneagrid»./arcproj/eneagrid.html
20 enea.it user & data setup (5) Projects are also used to provide to the user a scalable data space for code results In ENEA-GRID it has been chosen to limit the volume size to about 50 GB for an easy management Many subproject volumes can be allocated to provide a large data space to selected users That is pratical when user needs some hundreds GB It becomes cumbersome for the user when many TB are required In that case the user must solve the problem of fitting his data into fixed size AFS volumes.
21 Management Utilities. Users and projects are maintained using a dedicated application (WARC), developed by CASPUR in collaboration with ENEA; WARC also provides management delegation to project administrators and site administrators. WARC covers: user management, project creation, project management, PTS groups, release of RO volumes. Historical analysis and routine checkup of the enea.it cell is performed using the AMACA application: see Alessio Rocchi's presentation on Tuesday 29/9.
22 LSF integration / interoperability issues. Token management is the key point for LSF integration. The integration is performed using the gettok/puttok routines in the version provided by Platform, which required some patches to run correctly with the tokens generated by the kinit + afslog procedure. blaunch integration for MPI jobs is in production. gettok/puttok are also available for Windows (the result of a collaboration between ENEA and Salerno University in the framework of the CRESCO project). Token management has also been an issue in participating in other GRID projects (for example EGEE), where authentication is based on X509 certificates with extended features (e.g. Virtual Organization support); gssklog/gssklogd is the tool used to convert an X509 credential into an AFS token. In the framework of ENEA's participation in EGEE, VO support has been added to gssklog/gssklogd, and that work was presented at the AFS BPW in Ann Arbor, 2006.
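The gettok/puttok sources are not shown here; purely as an illustration, a batch-job wrapper that makes sure a valid AFS token exists before the payload runs could rely only on the standard kinit/aklog/tokens commands (aklog being the MIT-side counterpart of afslog), assuming a keytab is deployed on the node. The keytab path and principal below are hypothetical.

#!/usr/bin/env python
# Hypothetical sketch: renew Kerberos credentials and the AFS token inside a
# batch job before launching the real payload. Not the Platform gettok/puttok code.
import subprocess
import sys

KEYTAB = "/etc/afsjob.keytab"               # hypothetical keytab on worker nodes
PRINCIPAL = "afsjob/[email protected]"   # hypothetical service principal

def ensure_afs_token():
    subprocess.check_call(["kinit", "-k", "-t", KEYTAB, PRINCIPAL])   # Kerberos 5 TGT
    subprocess.check_call(["aklog"])                                   # convert TGT into an AFS token
    subprocess.check_call(["tokens"])                                  # record the token in the job log

if __name__ == "__main__":
    ensure_afs_token()
    if len(sys.argv) > 1:
        sys.exit(subprocess.call(sys.argv[1:]))    # run the wrapped job command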
23 GRID-related AFS pros/cons (1). The main advantage of AFS is the transparent access to applications and data, together with good security. Advantage for the user: no explicit data transfer required. Advantage for the administrator: easy deployment of applications and data. Issues arise essentially from performance problems: it is well known that AFS performance over the WAN is limited by the rx protocol [e.g. see H. Reuter's presentation at the last European AFS Workshop, Graz, 2008] to ~4 MB/s for rtt = 10 ms. If performance matters, the user must be aware of the data localization. In ENEA-GRID we have tried to address this by providing local and remote volumes to users, but of course some of the AFS elegance is lost in the process.
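The order of magnitude quoted above can be reproduced with a back-of-the-envelope window/RTT calculation; the window of 32 packets of roughly 1.4 kB each is an assumption about the classic rx limits, used only to show where the few-MB/s figure comes from.

# Back-of-the-envelope rx throughput estimate: at most one window of data
# can be in flight per round trip, so throughput ~ window_size / rtt.
WINDOW_PACKETS = 32        # assumed rx window, in packets
PAYLOAD_BYTES = 1400       # assumed useful payload per rx packet
RTT_S = 0.010              # 10 ms round-trip time, as in the slide

throughput = WINDOW_PACKETS * PAYLOAD_BYTES / RTT_S     # bytes per second
print("%.1f MB/s" % (throughput / 1e6))                 # ~4.5 MB/s, close to the ~4 MB/s quoted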
24 GRID-related AFS pros/cons (2). For users, data transfer over the WAN can be several times faster using standard methods (scp) than by direct AFS transfer; over the LAN (CRESCO) AFS (with memory cache) transfers at 50 MB/s. What about reliability? AFS is very reliable, but obviously it can fail. More often, failures depend on the network, and that induces a reliability issue for the local resources due to remote network problems. The effect for the user can sometimes be minimized: by a proper configuration of read-only copies; by the availability of clone dbservers and a proper cell definition on the clients.
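The read-only-copy mitigation amounts to replicating the relevant volumes on a fileserver in each site, so clients keep reading locally when the WAN fails; a minimal sketch with the standard vos commands (server, partition and volume names are illustrative) is shown below.

#!/usr/bin/env python
# Hypothetical sketch: add a local read-only site for a software volume and
# release it, so reads keep working if the remote RW site becomes unreachable.
import subprocess

def run(*args):
    subprocess.check_call(list(args))

volume = "software.maxima"                               # illustrative volume name
local_server, partition = "afsfs1.portici.example", "/vicepa"

run("vos", "addsite", local_server, partition, volume)   # define the local RO site
run("vos", "release", volume)                            # propagate RW data to all RO copies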
25 GRID-related AFS pros/cons (3). The PTS groups of AFS can be exploited to provide VO (Virtual Organization) functionality for a GRID structure. In ENEA-GRID, PTS groups: are used to define the ACLs for access to software and data (standard usage); are connected to defined Projects/Software AFS spaces; are used to enable access to services, for example by controlling access to dedicated Web pages or portals; can be defined by non-admin users by means of the WARC utility, as can the user membership (in fact the WARC utility permits delegating these operations to project administrators). These features permit organizing the cooperative activity of groups of users with functionalities similar to those provided by other VO implementations.
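A hedged sketch of the underlying commands (group, path and user names are illustrative; WARC wraps operations of this kind rather than these exact calls) is shown below.

#!/usr/bin/env python
# Hypothetical sketch: a project administrator creates a PTS group he owns,
# enrolls a user in it, and grants the group read access to the project area.
import subprocess

def run(*args):
    subprocess.check_call(list(args))

group = "bracco:eneagrid"                         # owned by user bracco, who manages membership
project_dir = "/afs/enea.it/projects/eneagrid"

run("pts", "creategroup", group)                  # group acting as a small "VO"
run("pts", "adduser", "rocchi", group)            # enroll a collaborator
run("fs", "setacl", project_dir, group, "rl")     # read/lookup access to the project area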
26 Conclusions. AFS is one of the main components of ENEA-GRID, and at the moment there are no alternatives for the features it provides. Some of the GRID features in ENEA-GRID are based on AFS (together with some utilities): data location transparency, data security, Virtual Organization management (authorization, collaboration). The impact of the low WAN performance of AFS is relevant, and any improvement on this issue can have an important effect on the usability of AFS in a GRID context.