ITM Gateway
F. Iannone - Associazione Euratom/ENEA sulla fusione
G. Bracco & S. Migliori - ENEA IT Department (FIM)
A. Maslennikov - CASPUR (Consortium for Supercomputer Applications for University and Research)

Outline
- Short history
- Requirements
- ENEA proposal
- ENEA project layout: work packages & deliverables
- Gateway computing environment for ITM TF
- Conclusions

Short history
- The idea of an ITM Gateway is due to A. Becoulet, helped by B. Guillerminet (2006): it will offer European modellers the elements needed to run and analyse fusion simulations:
  - dataset and code archive management and computer access
  - a minimum set of data visualization tools
- Requirements finalized at the ITM TF meeting (Gothenburg, 10/2006)
- Call for proposals: opened April 2007, closed June 2007
- Gateway components:
  - Shared Storage Data Area: 30 TB initially, 100 TB over 4 years; intensive parallel I/O (800 MB/s)
  - Computing resources: computing cluster with 600 GFlops theoretical peak & 512 GB RAM
  - Hosting service: data centre providing WAN access, security, backup...; a 1 Gbit/s or better link is desirable
  - Installation and operation: manpower for operation & direct interaction with vendors

Requirements (1): Shared Storage Data Area
- Wide Area Distributed File System (WADFS) for user home directories and the code repository
- Parallel File System (PFS) for data and databases
- Hardware:
  - server nodes for WADFS and PFS
  - RAID disk array systems
  - storage infrastructure (Storage Area Network - SAN)
- Software:
  - WADFS & PFS reliable, scalable and preferably open source
  - PFS over SAN for intensive I/O, peak performance 800 MB/s (see the arithmetic below)
[Diagram: SAN-based data flow - clients on an IP LAN/Ethernet exchange metadata with metadata servers, while bulk data flows to the disk arrays over a Fibre Channel SAN]
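For scale, the 800 MB/s peak target corresponds to two 4 Gb/s Fibre Channel links running at their full data rate of roughly 400 MB/s each (an illustration of the order of magnitude, not a statement of the final hardware):

    800 MB/s  =  2 FC-4G channels x ~400 MB/s per channel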

Requirements (2): Computing Resources
- A cluster with a powerful high-speed, low-latency interconnection system for message passing
- 2 hosts for Front-End and services: portal, DBMS, user access interface
- Resource Management System (RMS): job scheduling, load balancing, etc.
- Scientific libraries and data analysis and visualization tools (optional)
- Hardware:
  - Worker Nodes for the HPC cluster; Front-End nodes
  - interconnection infrastructure (FC-SAN & InfiniBand)
  - rack-mounted solution
- Software:
  - Unix-like OS, preferably open source (Scientific Linux)
  - Single Sign-On authentication service (LDAP + Kerberos V, preferably)
  - RMS: LSF, PBS or SGE
  - optimizing Fortran 77/95/2003 and C/C++ compilers
  - parallel libraries (MPI/MPICH)

Requirements (3): Hosting, Installation and Operation
The hosting data centre provides a set of services:
- rack space with redundant power supply
- a Wide Area Network link of ~1 Gbit/s, desirable now or attainable in the near to medium term
- network security, firewall and intrusion detection systems
- backup and staging service
- interaction with vendors for maintenance issues (next business day for hardware; 3-hour 24/7 for critical system components)
- maintenance of operating environments (patches, security updates, etc.)
- trouble ticket system for support in server administration
- skills in the administration of Unix clusters and High Performance Computing; support for scientific computing and parallel programming tools would be highly desired
Installation and operation: manpower to install and operate the Gateway.
[Diagram: Gateway layout over the 4-year plan - computing resources growing from 64 cores (0.3 TFlops) to 128 cores (0.6 TFlops) as a farm of 16 worker nodes (WN#1..WN#16) on a GigaEthernet/InfiniBand interconnect (24-port IB switch, 24-port GbE switches); the Shared Storage Data Area growing from 30 TB to ~60 TB to ~100 TB, attached via FC single- and dual-channel links to file servers through a 64-port FC switch]

ENEA proposal
- Proposal by Associazione Euratom-ENEA sulla fusione: provision of the HW/SW resources, hosting services and installation/operation of the Gateway
- Gateway housed at the ENEA CRESCO HPC data centre, managed by the ENEA IT Department (FIM)
- Full computational power (~1 TFlops) from the very beginning, instead of 0.3 TFlops per year
- Full storage capacity, 100 TB, instead of 32 TB per year
- Access to 128 of the ~2512 cores of the CRESCO HPC facility by the end of July 2008 (free of charge)
- Possibility to use the full computational power of the CRESCO HPC facility (~25 TFlops peak performance) for code benchmarking, scalability tests, etc.
Main comments of the EFDA Offer Technical Evaluation Group (OTEG):
- Connectivity: InfiniBand for both node-to-node communication and storage area access, instead of FC-SAN: performance penalty?
- Worker Nodes: 2.2 GHz CPUs whereas 2.4 GHz was requested (for dual-core CPUs)

ENEA project (1): Work Packages & Deliverables
- WP.0: Project Management
- WP.1: Shared Storage Data Area
- WP.2: Computing Resources
- WP.3: Housing Services
- WP.4: Installation & Test
- WP.5: Time plan and operation

ENEA project (2): Project Plan (DRAFT)
Specifies the main issues required for the provision of the EFDA TF ITM Gateway infrastructure and its operation.
- Project Acronym: ITMGATEWAY
- Project ID: TBD
- Project Title: Provision of EFDA TF ITM Gateway infrastructure and its operation
- Start Date: 1st October 2007
- End Date: 30th September 2011
- Lead Institution: ENEA - Italian National Agency for New Technologies, Energy and Environment
- Project Directors: Silvio Migliori (ENEA FIM) and Francesco Iannone (ENEA FPN)
- Project Manager & contact details: to be appointed (Name / Position / Email / Address / Tel / Fax)
- Project Web URL: TBD
- Programme Name (and number): TBD
- Programme Manager: Francesco Iannone
Contact: Silvio Migliori, Head of Scientific Computing Group, migliori@enea.it, ENEA Headquarters, via Lungotevere Thaon Di Revel, 76 - 00196 Rome, Italy; Tel: +39 06 XXXXXX; Fax: +39 06 XXXXXXXX

ENEA project (3): WP.1 Shared Storage Data Area
- Wide Area Distributed File System (user home directories and project software):
  - a Storage Area Network (SAN) in Fibre Channel (FC) and the Andrew File System (OpenAFS)
  - 3 AFS servers with 4 Gb/s FC Host Bus Adapters (HBA)
  - 1 FC switch with 16 ports + GBICs
  - storage area of 9 TB (net) with FC link for the WADFS
- Parallel File System (experimental and simulation data and databases):
  - simultaneous large-file access for multi-node jobs, I/O rate up to 800 MB/s
  - RDMA data access over an InfiniBand (IB) network:
    - 2 PFS servers with IB 4X DDR (10 Gb/s) Host Channel Adapters (HCA)
    - 1 InfiniBand switch, 24 ports
    - storage area of 100 TB with IB link
- InfiniBand storage solution:
  - IB has recently expanded from cluster interconnection into storage (visibly more efficient than an FC SAN)
  - throughput target: 4X DDR (10 Gb/s)
  - RDMA is already usable with Lustre and will soon be supported by IBM GPFS
  - solutions to improve storage performance will be investigated (e.g. increasing I/O performance by physically separating the WADFS and PFS storage areas)

ENEA project (4): WP.2 Computing Resources
- High Performance Computing cluster: a set of Worker Nodes (WN), dual-CPU quad-core + IB technology, with the Scientific Linux operating system:
  - 16 WN dual-CPU quad-core (128 cores, 1 TFlops) with 32 GB RAM each (4 GB/core, 512 GB total), HCA IB 4X DDR (10 Gb/s) dual port
  - 1 InfiniBand switch, 24 ports, 4X DDR (10 Gb/s)
  - 1 Gigabit Ethernet switch, 32 ports
- Front-End nodes (user access): dual-CPU dual-core + IB technology with the Scientific Linux operating system:
  - 2 nodes dual-CPU dual-core with 8 GB RAM, HCA IB 4X DDR
- CPU: AMD Opteron quad-core (Barcelona)
  - theoretical peak (quad-core) = 4 x #cores x clock frequency (worked example below)
  - AMD's quad-core Opteron processor is finally available; Barcelona will arrive in three categories:
    - high-performance (@2.3-3.0 GHz, available early 2008)
    - standard-issue (@2.0 GHz, already available)
    - energy-efficient (@1.7-1.8-1.9 GHz, already available)
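Plugging the planned configuration into the peak formula (4 floating-point operations per clock cycle per Barcelona core):

    theoretical peak = 4 FLOP/cycle x 128 cores x 2.0 GHz = 1024 GFlops ~ 1 TFlops

which matches the ~1 TFlops quoted for the 16-node cluster.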

ENEA project (5): Gateway Layout
[Diagram: overall Gateway layout, with the shared storage split into 9 TB (WADFS) + 96 TB (PFS) areas]

ENEA project (6): Software
- Authentication & Authorization Service (AS): users log on (authenticate) with the same userid/password on all authorized nodes with Single Sign-On (SSO) authentication:
  - Kerberos V
  - NIS (Network Information System) or LDAP (Lightweight Directory Access Protocol)
- Resource Management System (RMS): manages and schedules applications on the HPC cluster of WNs (batch, interactive and parallel jobs):
  - Load Sharing Facility (LSF) by Platform Computing (multi-cluster licence to submit jobs on ENEA-GRID and CRESCO)
- Parallel Applications (PA): a parallel programming environment based on the Message Passing (MPI) model: OpenMPI, MVAPICH, vendor MPI (a minimal sketch of an MPI program follows this list)
- Compilers: Fortran/C/C++ compilers provided by ENEA-GRID resources, including the Portland Group compiler suite ver. 5.5-2 (Fortran 77/90, C/C++, HPF)
- Commercial tools: scientific libraries, data analysis and visualization tools (IDL ver. 6.3, MATLAB 7.01...) currently installed in the ENEA-GRID environment will be available to Gateway users within the limits of the actual licence pool
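As a flavour of the message-passing model this environment targets, a minimal MPI sketch in Python (mpi4py is used here purely for illustration; the slides name OpenMPI/MVAPICH as the underlying implementations):

    # hello_mpi.py - each rank reports itself; rank 0 gathers a trivial reduction.
    # Run with e.g.: mpirun -np 8 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD            # communicator spanning all launched ranks
    rank = comm.Get_rank()           # this process's id, 0..size-1
    size = comm.Get_size()           # total number of ranks

    print(f"rank {rank} of {size} on {MPI.Get_processor_name()}")

    # Sum the rank ids across the cluster as a minimal message-passing example.
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum of ranks = {total}")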

ENEA project (7): WP.3 Hosting Service
The Gateway will be hosted at the Portici (Naples) ENEA Research Data Centre, which hosts the CRESCO supercomputing facility: ~2500 processors (cores) with a peak computing power of ~25 TFlops.
CRESCO: Computational Research Centre for Complex Systems
[Map: ENEA sites, including the Brindisi centre]

ENEA project (8): WP.3 Hosting Service
- Rack space with redundant power supply: 42U EIA-310-D racks (power supply & fans):
  - 1 rack for Storage System servers & switches
  - 1 rack for HPC cluster + Front-End
  - 1 rack for DDN storage system
- WAN link: 400 Mbps (1 Gbps during 2008)
- Network security systems
- Backup & staging of the Shared Storage Data Area
- The CRESCO Data Centre is equipped with all the security systems (fire and intrusion detection alarms)
- The local manpower for CRESCO consists of about 10 ENEA FTEs; CASPUR (also a partner in the CRESCO project) will provide support at system level

ENEA project (9): WP.4 Installation & Test
- Install and test the individual hardware/software components:
  - configure the SAN
  - install, configure and test the WADFS
  - configure the IB storage network
  - install, configure and test the PFS
  - install, configure and test the HPC cluster
  - install, configure and test the Front-End system
  - install, configure and test the Authentication/Authorization Service
  - install, configure and test the Resource Management System
  - install, configure and test the Message Passing Interface for parallel processing
  - configure the operating environments for use of ENEA-GRID software resources
  - configure the network router/firewall apparatus for remote access to the ITM TF Gateway
- Deliverables (a sketch of a simple I/O measurement follows this list):
  - performance of the PFS in terms of I/O benchmarks (peak and aggregate rate)
  - performance of the HPC cluster in terms of Linpack and SPEC benchmarks
  - performance of the LAN in terms of throughput and delays
  - performance of the WAN in terms of throughput and delays
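To make the I/O deliverable concrete, a crude single-stream write-bandwidth probe in Python (a sketch only: the path and sizes are illustrative assumptions, and real acceptance tests would rely on established benchmarks such as IOR or iozone):

    # io_probe.py - rough single-stream write bandwidth on a mounted filesystem.
    import os, time

    PATH = "/pfs/scratch/io_probe.dat"   # hypothetical PFS mount point
    BLOCK = 8 * 1024 * 1024              # 8 MiB per write
    COUNT = 128                          # 1 GiB total

    buf = os.urandom(BLOCK)
    t0 = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())             # force data to disk before stopping the clock
    elapsed = time.time() - t0

    mib = BLOCK * COUNT / 2**20
    print(f"wrote {mib:.0f} MiB in {elapsed:.2f} s ({mib / elapsed:.1f} MiB/s)")
    os.remove(PATH)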

ENEA project (10): WP.5 Time Plan and Operation
The Gateway project lifetime is 4 years, subdivided into three sequential phases:
- PHASE I (PRO): Gateway hardware components provision
- PHASE II (INS): Gateway hw/sw components installation and testing
- PHASE III (OPE): Gateway operation
The Project Team provides the installation and the operation:
- PROJECT MANAGER (PM): project control and coordination
- SYSTEM ADMINISTRATOR (SA): support in hw/sw system management for the Shared Storage Data Area and the HPC cluster of WNs, as well as the Front-End systems
- SOFTWARE CONSULTANT (SC): professional with skills in the HPC environment: compilers, MPI environment, Resource Management System and software tools...
Time plan (start: Oct. 2007, end: Oct. 2011), phase durations and staffing per year (a consistency check follows):
- Year 1: PRO 1 m (1 PM, 0.25 SA, 0.25 SC); INS 2 m (0.3 PM, 0.6 SA, 0.6 SC); OPE 9 m (0.1 PM, 0.7 SA, 0.7 SC) - 1.5 ppy
- Year 2: OPE 12 m (0.1 PM, 0.7 SA, 0.7 SC) - 1.5 ppy
- Year 3: OPE 12 m (0.1 PM, 0.7 SA, 0.7 SC) - 1.5 ppy
- Year 4: OPE 12 m (0.1 PM, 0.7 SA, 0.7 SC) - 1.5 ppy
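A consistency check on the staffing plan: each phase is staffed at 1.5 FTE in total, so every year sums to 1.5 ppy; for Year 1:

    (1 + 0.25 + 0.25) x 1/12 + (0.3 + 0.6 + 0.6) x 2/12 + (0.1 + 0.7 + 0.7) x 9/12
      = 1.5 x (1 + 2 + 9)/12 = 1.5 ppy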

ENEA project (11): Operation Details
- administration of the hardware/software resources of the TF ITM Gateway
- direct interaction with vendors for hardware maintenance issues, with the Next Business Day formula
- software maintenance of the operating system and working environments
- monitoring of the hardware/software resources of the TF ITM Gateway
- support to users for the installation of public-domain tools
Monitoring & support interfaces:
- the status of the HW/SW resources of the Gateway is published via web
- support is provided by means of a trouble ticket system that lets users place requests online, through a web and/or email interface
[Mock-up: trouble ticket submission form with email, object, type, message and attachment fields]

Gateway & TF ITM (1): Wide Area Distributed Filesystem
- A new AFS cell with servers inside the Internet domain portici.enea.it; cell name efda-itm.eu or itm.eu?
- AFS tree:
    /afs
      /enea.it  /efda-itm.eu  /fusione.it  /cern.ch  ...
        /software   sw, management and documentation
        /system
        /backup     users & project daily snapshots
        /project    projects and data
        /user/a ... /z /home_user   users' home directories (initial user quota 10 GB)
          ~/private  (access restricted to the user)
          ~/public   (world readable)
- The home directory will have lookup permission for any user
- The public directory must be used only for data/sw with no distribution restrictions, because it can be read by every user on the Internet with an AFS client
- Administration policies:
  - AFS servers are hidden from users (login access only for administrators)
  - user management can be delegated to ITM (dedicated web interface)
  - every user can define AFS groups to control access to their own data space
  - dedicated groups can be defined for ISIP or IMP# projects...

Gateway & TF ITM (2): Parallel Filesystem
- PFS accessible from the WNs and the Front-End system
- Parallel I/O:
  - distributed I/O on multiple files
  - distributed I/O on a single file
  - MPI I/O (see the sketch after this list)
- PFS tree:
  - DATABASE: experimental and simulation data for retrieval and analysis
  - SCRATCH: temporary large output of parallel jobs
  - BIN: large binary files
- Disk quota assigned to ITM projects
- Access permissions can be granted to any ITM users and groups
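A minimal sketch of the single-shared-file pattern via MPI I/O, again in Python with mpi4py (an illustration rather than the Gateway's actual setup; the file path is hypothetical):

    # mpiio_write.py - every rank writes its own block into one shared file.
    # Run with e.g.: mpirun -np 4 python mpiio_write.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    N = 1024                                  # doubles written per rank
    data = np.full(N, rank, dtype='d')        # this rank's payload

    fh = MPI.File.Open(comm, "/pfs/scratch/shared.dat",   # hypothetical PFS path
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    # Rank k writes at byte offset k*N*8: non-overlapping regions of one file.
    fh.Write_at_all(rank * N * data.itemsize, data)
    fh.Close()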

Gateway & TF ITM (3): Computing Resources
- HPC WNs are clients of both the WADFS & the PFS
- WN hostnames: itm1..itm16 (???), in the Internet domain portici.enea.it
- WNs run single or parallel jobs submitted by users via LSF; users don't have login access to WNs
- The Front-End system allows users to access all the Gateway resources remotely:
  - user remote access with ssh, scp, sftp, gridftp, bbftp (Citrix MetaFrame is optional)
  - interactive sessions to submit parallel jobs on the HPC cluster, compile projects and visualize data
  - User Interface to EGEE-GRID Virtual Organizations on a dedicated host
- At least 2 Front-End nodes; hostnames: ves1, ves2 (????)

Gateway & TF ITM (4): General Issues, open for discussion with ITM
- Software project repository: CVS, Subversion, Mercurial, ...
- Queue setup: serial / parallel / nightly, # CPUs & memory resources...
- Access to CRESCO resources
- Environment: shared libraries, compilers
- MDSplus: tree_path
- Gateway tools (implementation in charge of ISIP):
  - KEPLER (installation: AFS or PFS (????); Front-End or WN (???))
  - mdsip server for remote data access (Front-End); a sketch of such access follows this list
  - ITM PORTAL (web server, MySQL server, on the Front-End)
  - Universal Access Layer (production/development/test environments)
- ...
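For flavour, remote data access through an mdsip server typically looks like this with the MDSplus Python bindings (a sketch under assumptions: the host, tree name, shot number and node path are all hypothetical, since the Gateway's tree_path layout was still open for discussion):

    # mdsip_read.py - fetch a signal from a remote MDSplus server via mdsip.
    from MDSplus import Connection

    conn = Connection("ves1.portici.enea.it")  # hypothetical Front-End mdsip host
    conn.openTree("itm", 1)                    # hypothetical tree name and shot number
    signal = conn.get("\\TOP:MYSIGNAL")        # hypothetical node path
    print(signal.data())                       # numeric array of the stored values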

Conclusions
The ITM Gateway at the ENEA CRESCO site:
- fulfils the TF ITM requirements
- has improved features:
  - ~1 TFlops HPC cluster
  - 2 storage systems for better performance
  - 2 IB networks, separating the node interconnection from the access to the storage
- will be able to access CRESCO resources (up to 25 TFlops)
Kick-off meeting (Sept. 27th) at the request of the EU Commission

Other slides just in case!

GATEWAY COSTS
Maximum costs estimated (over 4 years):
- Hardware/Software: 398.2 k€
- Housing service: 252 k€
- Manpower for installation & operation: 1 ppy (professional)
ENEA proposal (over 4 years):
- Hardware/Software: 409.3 k€
- Housing service: 252 k€
- Manpower for installation & operation: 1.5 ppy (professional)
EFDA Preferential Support:

    Item                                    Total (k€)   40% (k€)
    Hardware/Software resources             409.3        163.72
    Hosting services                        252          100.8
    Installation & operation (manpower)     810          324
    Total                                   1471.3       588.52
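Check: the 40% preferential-support column is consistent with the totals:

    0.40 x 1471.3 k€ = 588.52 k€   (163.72 + 100.8 + 324 = 588.52)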

GATEWAY DETAILS (I): SHARED STORAGE DATA AREA
Hardware for WADFS:
- Servers: 3 servers, 1U rack-mount:
  - dual-CPU dual-core Xeon 5050, 3.0 GHz / 2x2 MB L2 cache, 667 MHz FSB
  - 8 GB FB-DIMM RAM, 667 MHz
  - 2 x 80 GB SATA2 (7200 rpm) 3.5-inch HD (hot plug); 8X IDE DVD-ROM drive
  - two slots on separate PCI buses: either a PCI Express riser with two x8-lane slots or a PCI-X riser with 2 x 64-bit/133 MHz slots
  - single-port 4 Gbps Fibre Channel PCI Express HBA card
  - dual Gigabit Ethernet NICs with load balancing and fail-over support
  - RAID controller: PERC 5/i integrated SAS/SATA daughter-card controller with 256 MB cache
  - redundant power supply
- SAN infrastructure:
  - 1 FC stackable switch QLogic SANbox 5600: 8, 12 or 16 auto-detecting 4Gb/2Gb/1Gb device ports
  - 16 GBICs, 4 Gbps
- RAID array system:
  - storage system Infortrend A16F-G2430, FC to SATA-II, 9 TB (net) in RAID 6
  - 2 FC-4G host channels; transfer rate up to 400 MB/s per channel
  - 16 bays for SATA-II HDs; 16 x 750 GB SATA-II HDs
Software for WADFS:
- Operating system: Scientific Linux, SRPM base RHEL4/ES + patches, kernel 2.6.9-xx.ELcern
- AFS: OpenAFS 1.4.2; MIT Kerberos V: 1.6

GATEWAY DETAILS (II): SHARED STORAGE DATA AREA
Hardware for Parallel Filesystem:
- Servers: 2 servers, 1U rack-mount:
  - dual-CPU dual-core Xeon 5050, 3.0 GHz / 2x2 MB L2 cache, 667 MHz FSB
  - 8 GB FB-DIMM RAM, 667 MHz
  - 2 x 80 GB SATA2 (7200 rpm) 3.5-inch HD (hot plug); 8X IDE DVD-ROM drive
  - two slots on separate PCI buses: either a PCI Express riser with two x8-lane slots or a PCI-X riser with 2 x 64-bit/133 MHz slots
  - HCA InfiniBand 4X DDR dual port (10 Gb/s)
  - dual Gigabit Ethernet NICs with load balancing and fail-over support
  - RAID controller: PERC 5/i integrated SAS/SATA daughter-card controller with 256 MB cache
- Storage network infrastructure:
  - 1 InfiniBand switch Cisco SFS 7000P, 24 ports 4X DDR (10 Gbps); IB cables
- RAID array system (delivered end 2007), usable capacity (net) 32 TB in RAID 6 (see the capacity check below):
  - 1 DDN S2A9550 archive solution composed of:
    - 1 S2A9550 couplet with 5 x 48-slot enclosures, 5 GB cache, 4 FC4 (4 Gb) ports + 4 IB ports
    - 2 x 48-slot double dual port (2x24 drive) FC-to-SATA enclosures
    - 1 rack, 24U; peak bandwidth 3.2 GB/s
    - 24 tiers (8 populated); 80 x 500 GB 7200 rpm SATA disks
    - FC and IB cables
- (Delivered within 2008) usable capacity (net) 64 TB in RAID 6:
  - 3 x 48-slot double dual port (2x24 drive) FC-to-SATA enclosures
  - 160 x 500 GB 7200 rpm SATA disks
Software for PFS:
- Operating system: Scientific Linux, SRPM base RHEL4/ES + patches, kernel 2.6.9-xx.ELcern
- GPFS (IBM) or Lustre (Cluster File Systems)
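The net capacities are consistent with 8+2 RAID 6 tiers of ten drives (an assumption about the S2A9550 configuration, which commonly stripes 8 data + 2 parity disks per tier):

    end 2007:     8 tiers x 8 data disks x 500 GB = 32 TB net   (80 disks)
    within 2008: +16 tiers x 8 data disks x 500 GB = +64 TB net (160 disks added)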

GATEWAY DETAILS (III): COMPUTING RESOURCES
Hardware for HPC cluster: 16 servers, max 2U rack-mount designed servers:
- dual-CPU quad-core AMD Opteron, 2 GHz / 2 MB L2 cache + 2 MB shared L3 cache
- 32 GB DDR2 RAM, 667 MHz (4 GB/core, 16 x 2 GB DIMMs; max 64 GB)
- 2 x 80 GB SATA (7200 rpm) 3.5-inch HD (hot swap)
- 2 PCI-Express x8 and 1 PCI-Express x4 slots
- dual Gigabit Ethernet NICs with load balancing and fail-over support
- HCA InfiniBand 4X DDR dual port (10 Gbps)
- 1 Gigabit Ethernet switch such as Cisco Catalyst 2948G-GE-TX
- 1 InfiniBand switch such as Cisco SFS 7000P, 24 ports 4X DDR (10 Gbps)
Software for HPC cluster:
- Operating system: Scientific Linux, SRPM base RHEL4/ES + patches, kernel 2.6.9-xx.ELcern
Hardware for Front-End system: 2 servers, 1U rack-mount designed servers:
- dual-CPU dual-core AMD Opteron, 2.4 GHz / 2 x 1 MB L2 cache
- 16 GB DDR2, 667 MHz
- 2 x 80 GB SATA (7200 rpm) 3.5-inch HD (hot plug); 8X IDE DVD-ROM drive
- dual Gigabit Ethernet NICs with load balancing and fail-over support
- one PCI Express x8 (full height, half length) or one PCI-X (64-bit/133 MHz, full height, half length) slot
- HCA InfiniBand 4X DDR dual port (10 Gb/s)
Software for Front-End system:
- Operating system: Scientific Linux, SRPM base RHEL4/ES + patches, kernel 2.6.x-xx.ELcern
- Authentication/Authorization services: OpenAFS 1.4.x; MIT Kerberos V 1.6; SSH 4.5p1, K5/GSSAPI/AFS-aware (OpenSSL 0.9.8d)
- Resource Management System: Platform LSF 6.2 multi-cluster, 32 server slots for the HPC cluster; Platform LSF 6.2 multi-cluster, 8 client slots for the Front-End system
- Analysis and development software: software packages available in the ENEA-GRID environments
[Photo: rack-designed server]