A National Computing Grid: FGI
Slide 1: A National Computing Grid: FGI
Vera Hansper, Ulf Tigerstedt, Kimmo Mattila, Luis Alves
3 October 2012, FGI Symposium at Viikki
Slide 2: Grids in Finland: a short history
Slide 3: In the beginning, we had M-Grid
- Interest in Grid technology rose in Finland during 2003
- A consortium of seven universities, HIP and CSC was formed, and it successfully obtained funding for the first Finnish computing grid: M-Grid
- The effort was driven by CSC and Kai Nordlund (HU)
- M-Grid was operational from 2005; its sites had a theoretical total computing capacity of ~2.5 TFlops
- The infrastructure had aged significantly by the end of 2008
Slide 4: Then, FGI is born
- A second-generation M-Grid had been planned since ~2009, with many discussions about upgrading the infrastructure
- Pekka Lehtovuori (CSC) and Kai Nordlund sought funding; the application was made in October 2010
- The FIRI grant was approved at the beginning of 2011; Academy funding totals 1.38M
- The consortium consists of: Aalto University, University of Helsinki, Lappeenranta University of Technology, Tampere University of Technology, University of Eastern Finland, University of Jyväskylä, University of Oulu, University of Turku, Åbo Akademi University and CSC
- CSC coordinates the activity; the members host the clusters
Slide 5: What was ordered
- Standard node configuration (408 nodes): HP SG7 scale-out, dual 6-core 2.67 GHz Intel Xeon X5650 processors, … GB memory (min.)
- Big-memory nodes (4): HP ProLiant DL580 G7 servers with 1 TB of memory
- GPGPU nodes (52): two NVIDIA Tesla cards in a standard compute node
- Theoretical peak computing capacity of ~154 TFlops
- Disk servers with a total storage capacity of about 1 PB
- QDR InfiniBand and Gigabit Ethernet for interconnect and networking
Slide 6: Getting the stuff: installation and acceptance
- Delivery started in early November, and installation at each site was done within one to two days of delivery
- Operating system: Scientific Linux 6
- Scheduler: SLURM
And what there is:
- Aalto: 112 nodes, 8 GPGPU nodes, two 1 TB big-memory nodes
- Lappeenranta: 16 nodes
- Eastern Finland: 64 nodes
- Helsinki: 49 nodes, 20 GPGPU nodes, one 1 TB big-memory node
- Jyväskylä: 48 nodes, 8 GPGPU nodes
- Oulu: 30 nodes
- Tampere (TUT): 37 nodes, 8 GPGPU nodes, one 1 TB big-memory node
- Turku: 20 nodes
- Åbo Akademi: 8 GPGPU nodes
- CSC: 24 nodes (with 96 GB memory)
Slide 7: Systems online
- Local use is open at all sites (since early 2012)
- Sites maintain their own clusters; site administrators are encouraged to collaborate and communicate: weekly meetings, providing grid software support for users, becoming part of the FGI community
- A small team from CSC manages the general administration
Slide 8: What FGI can offer you
Hardware resources:
- More resources than a single university can offer
- The distributed nature means better availability even when the local cluster is full
- A local account is not required!
Software:
- A number of software packages are already available for use via the grid
- The list of runtime environments (currently 15 and growing) is available on the FGI user pages
Support:
- CSC provides grid administrative support, software AND user support; send an email to [email protected]
Slide 9: Normal clusters
[Diagram] Users X, Y and Z send jobs (with sbatch, qsub, ...) to a job scheduler (e.g. Slurm, PBS) on the frontend; the queued jobs (Job 1 to Job 4) run on compute nodes 1-n, which share storage over the network.
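As a concrete illustration of this model, here is a minimal sketch of a SLURM batch job as it could be submitted on one of these clusters (slide 6 names SLURM as the scheduler); the job name, resource requests and program name are placeholders, not values taken from the slides:

    #!/bin/bash
    #SBATCH --job-name=example     # placeholder job name
    #SBATCH --ntasks=12            # one task per core on a dual six-core node
    #SBATCH --time=01:00:00        # one-hour wall-clock limit
    #SBATCH --mem-per-cpu=2000     # memory per core, in MB

    # srun launches the (hypothetical) program on the allocated cores
    srun ./my_program input.dat

Saved as job.sh, this would be submitted with "sbatch job.sh" and monitored with "squeue"; the scheduler queues the job and runs it on free compute nodes, exactly as in the diagram.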
Slide 10: Grids
[Diagram] From a work computer with the grid tools installed, User X sends jobs through the grid interfaces of the Lappeenranta, CSC and Helsinki clusters and stores data in the grid storage.
Slide 11: What do you need?
- A certificate
- VO membership
- The ARC client tools: installable on most Linux versions and Mac OS X, available on the CSC servers HIPPU and Vuori, and also available on your local cluster login node
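Once those three pieces are in place, the day-to-day workflow with the ARC client tools looks roughly like the following sketch; the VO name, cluster name and job ID are illustrative assumptions, not values given in the slides:

    # Create a short-lived proxy from your grid certificate
    # (fgi.csc.fi as the VO name is an assumption for illustration)
    arcproxy -S fgi.csc.fi

    # Submit a job description to the grid interface of a chosen cluster
    # (the cluster name is a placeholder)
    arcsub -c cluster.example.fi job.xrsl

    # Check the status of all your jobs, then fetch the results
    arcstat -a
    arcget <job-id>

Because submission goes through the grid interface, this works without a local account on the cluster that eventually runs the job.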
Slide 12: Starting with FGI
- Start from the FGI web pages and follow the links to the FGI and FGI user pages: the central place for all documentation and information about FGI
- Getting started
- Available software, and how to use it
- Problems? Requests?
Slide 13: Software in FGI
- Some scientific software is pre-installed, primarily open source software
- You can also run your own programs in FGI (see the sketch below)
- If you have suggestions, contact us; we can help you install YOUR software requirements!
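As a sketch of what running your own program could look like, the ARC job description below stages a user's own script and input file to the grid site; every attribute value, including the runtime environment name, is a made-up example rather than one of the real FGI entries:

    # Write the job description in xRSL; all values are placeholders
    cat > job.xrsl <<'EOF'
    &(executable="runme.sh")
     (jobName="my-own-code")
     (inputFiles=("runme.sh" "") ("input.dat" ""))
     (outputFiles=("results.tar.gz" ""))
     (runtimeEnvironment="APPS/CHEM/GROMACS")
     (cpuTime="2 hours")
     (memory="2000")
    EOF

    # Submit it through a grid interface, as before
    arcsub -c cluster.example.fi job.xrsl

Requesting a runtime environment such as the hypothetical APPS/CHEM/GROMACS entry above is how a job picks up pre-installed software; dropping that line and staging your own binary covers the run-your-own-programs case.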
Slide 14: FGI and EGI
- FGI is the Finnish NGI, and EGI sees us as NGI_FI
- CSC is the Operations Center for FGI: it uses the monitoring and service tools provided by EGI, follows EGI procedures for operations, and manages the Regional Operator on Duty team
- Site admins are part of this team!
Slide 15: What EGI can offer
- An even larger computational resource than FGI alone!
- Connections with international user groups in your field; some of them have already made their tools/software grid-ready
- Easy sharing of expertise with your collaborators through Virtual Organisations (VOs)