ITM Gateway. F.Iannone : Associazione Euratom/ENEA sulla fusione
1 ITM Gateway
F. Iannone: Associazione Euratom/ENEA sulla fusione
G. Bracco & S. Migliori: ENEA IT Department (FIM)
A. Maslennikov: CASPUR (Consortium for Supercomputer Applications for University and Research)
2 Outline
- Short history
- Requirements
- ENEA Proposal
- ENEA Project Layout: work packages & deliverables
- Gateway computing environment for ITM TF
- Conclusions
3 Short history
- The idea of the ITM Gateway came from A. Becoulet, helped by B. Guillerminet (2006): it will offer European modellers the elements needed to run and analyse fusion simulations:
  - datasets and codes
  - computer access and archive management
  - minimum data visualization tools
- Requirements finalized at the ITM TF meeting (Gothenburg, 10/2006)
- Call for proposals: opened in April 2007, closed in June 2007
- Gateway components:
  - Shared Storage Data Area: 30 TB (initially) to 100 TB (over 4 years); intensive parallel I/O (800 MB/s)
  - Computing Resources: computing cluster with 600 GFlops theoretical peak & 512 GB RAM
  - Hosting Service: data center providing WAN access, security, backup; Gbit or better link desirable
  - Installation and Operation: manpower for operation & direct interaction with vendors
4 Requirements (1) Shared Storage Data Area
- Wide Area Distributed File System (WADFS) for user home directories and the code repository
- Parallel File System (PFS) for data and databases
- Hardware:
  - server nodes for WADFS and PFS
  - RAID disk array systems
  - storage infrastructure (Storage Area Network - SAN)
- Software:
  - WADFS & PFS reliable, scalable and preferably open source
  - PFS over SAN for intensive I/O, peak performance 800 MB/s
[Diagram: SAN-based layout - clients on an IP LAN; metadata flow to metadata servers over Ethernet; data flow over Fibre Channel (FC SAN) to disk arrays]
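As a rough sanity check on the 800 MB/s requirement, a back-of-the-envelope estimate of transfer time at that rate; a sketch, where the 1 TB dataset size is an illustrative assumption rather than a figure from the requirements:

```python
# Back-of-the-envelope: time to stream a dataset at the 800 MB/s PFS target.
# The 1 TB dataset size below is an illustrative assumption, not a requirement figure.

PFS_PEAK_MBPS = 800  # required peak aggregate I/O rate, in MB/s

def transfer_seconds(dataset_gb: float, rate_mbps: float = PFS_PEAK_MBPS) -> float:
    """Seconds needed to move dataset_gb gigabytes at rate_mbps megabytes/second."""
    return dataset_gb * 1024 / rate_mbps

# A 1 TB (1024 GB) simulation output at full peak:
print(f"{transfer_seconds(1024):.0f} s")  # prints "1311 s" (~22 minutes)
```

At full peak a terabyte-scale output drains in about twenty minutes, which is why the requirement insists on parallel rather than single-channel I/O.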
5 Requirements (2) Computing Resources
- A cluster with a powerful high-speed, low-latency interconnect for message passing
- 2 hosts for Front-End and services: portal, DBMS, user access interface
- Resource Management System (RMS): job scheduling, load balancing, etc.
- Scientific libraries and data analysis and visualization tools (optional)
- Hardware:
  - worker nodes for the HPC cluster; Front-End nodes
  - interconnection infrastructures (FC-SAN & InfiniBand)
  - rack-mounted solution
- Software:
  - Unix-like OS, preferably open source (Scientific Linux)
  - Single Sign-On authentication service (LDAP + Kerberos V, preferably)
  - RMS: LSF, PBS or SGE
  - optimizing Fortran 77/95/2003 and C/C++ compilers
  - parallel libraries (MPI/MPICH)
6 Requirements (3)
The hosting data centre provides a set of services:
- Rack space with redundant power supply
- A Wide Area Network link of ~1 Gbit/s is desirable, or attainable in the near to medium term
- Network security, firewall and intrusion detection systems
- Backup and staging service
- Interaction with vendors for maintenance issues (next business day for hardware; 3-hour 24/7 for critical system components)
- Maintenance of operating environments (i.e. patches, security, updates, etc.)
- Trouble ticket system for support in server administration
- Skills in the administration of Unix clusters and High Performance Computing; support for scientific computing and parallel programming tools would be highly desired
- Installation and Operation: manpower to install and operate the gateway
[Diagram: GATEWAY LAYOUT planning over Years 1-4 - computing resources grow from 64 cores (0.3 TFlops) to 128 cores (0.6 TFlops) in a farm of 16 hosts (WN#1..WN#16) with GigaEthernet/InfiniBand interconnect (24-port InfiniBand switch, 24-port GbE switch, 64-port FC switches); the Shared Storage Data Area grows from 30 TB to ~60 TB to ~100 TB (longer term), with file servers attached over FC single- and dual-channel links]
7 ENEA proposal
Proposal by the Associazione Euratom-ENEA sulla fusione: provision of the HW/SW resources, hosting services and installation/operation of the Gateway.
- Gateway housed at the ENEA CRESCO HPC data center, managed by the ENEA IT Department (FIM)
- Full computational power (~1 TFlops) from the very beginning, instead of 0.3 TFlops per year
- Full storage capacity: 100 TB instead of 32 TB per year
- Access to 128 of the ~2512 cores of the CRESCO HPC facility by the end of July 2008 (free of charge)
- Possibility to use the full computational power of the CRESCO HPC facility (~25 TFlops peak performance) for code benchmarking, scalability tests, etc.
Main comments of the EFDA Offer Technical Evaluation Group (OTEG):
- Connectivity: InfiniBand both for node-to-node communication and storage area access, instead of FC-SAN: performance penalty?
- Worker nodes: 2.2 GHz CPUs, whereas 2.4 GHz was requested (for dual-core CPUs)
8 ENEA project (1) WorkPackages & Deliverables
- WP.0: Project Management
- WP.1: Shared Storage Data Area
- WP.2: Computing Resources
- WP.3: Housing Services
- WP.4: Installation & Test
- WP.5: Time plan and operation
9 ENEA project (2) Project Plan (DRAFT)
Specifies the main issues required for the provision of the EFDA TF ITM Gateway infrastructure and its operation.
- Project Acronym: ITMGATEWAY
- Project ID: TBD
- Project Title: Provision of EFDA TF ITM Gateway infrastructure and its operation
- Start Date: 1st October 2007
- End Date: 30th September 2011
- Lead Institution: ENEA, the Italian National Agency for New Technologies, Energy and Environment
- Project Directors: Silvio Migliori (ENEA FIM) and Francesco Iannone (ENEA FPN)
- Project Manager & contact details: to be appointed (Name / Position / Address / Tel / Fax)
- Project Web URL: TBD
- Programme Name (and number): TBD
- Programme Manager: Francesco Iannone
- Contact: Silvio Migliori, Head of Scientific Computing Group, ENEA Headquarters, via Lungotevere Thaon Di Revel, Rome, Italy; Tel: XXXXXX; Fax: XXXXXXXX
10 ENEA project (3) WP.1 Shared Storage Data Area
- Wide Area Distributed File System: user home directories and project software
  - a Storage Area Network (SAN) in Fibre Channel (FC) and the Andrew File System (OpenAFS)
  - 3 AFS servers with 4 Gb/s FC Host Bus Adapters (HBA)
  - 1 FC switch with 16 ports + GBICs
  - storage area of 9 TB (net amount) with FC link for the WADFS
- Parallel File System: experimental and simulation data and databases
  - simultaneous large-file access for multi-node jobs, I/O rate up to 800 MB/s
  - RDMA data access over an InfiniBand (IB) network
  - 2 PFS servers with IB 4x DDR (10 Gb/s) Host Channel Adapters (HCA)
  - 1 InfiniBand switch, 24 ports
  - storage area of 100 TB with IB link
INFINIBAND STORAGE SOLUTION
- IB has recently expanded from cluster interconnection into storage (visibly more efficient than FC SAN); throughput target 4X DDR (10 Gb/s)
- RDMA is already usable with Lustre and will soon be supported by IBM GPFS
- Solutions to improve storage performance will be investigated (I/O performance increase by physically separating the WADFS and PFS storage areas)
11 ENEA project (4) WP.2 Computing Resources
- High Performance Computing cluster: a set of Worker Nodes (WN), dual-CPU quad-core + IB technology, with the Scientific Linux operating system
  - 16 WN, dual-CPU quad-core (128 cores, 1 TFlops) with 32 GB RAM (4 GB/core, 512 GB total), HCA IB 4x DDR/10 Gb/s dual port
  - 1 InfiniBand switch, 24 ports, 4X DDR (10 Gb/s)
  - 1 Gigabit Ethernet switch, 32 ports
- Front-End nodes for user access: dual-CPU dual-core + IB technology, with the Scientific Linux operating system
  - 2 nodes, dual-CPU dual-core with 8 GB RAM, HCA IB 4x DDR
- CPU: AMD Opteron quad-core (Barcelona); theoretical peak (quad-core) = 4 x #cores x clock frequency
- AMD's quad-core Opteron processor is finally available; Barcelona will arrive in three categories:
  - high-performance (@ GHz, available early 2008)
  - standard-issue (@2.0 GHz, already available)
  - energy-efficient (@ GHz, already available)
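The slide's peak formula can be checked numerically; a minimal sketch, assuming the standard-issue 2.0 GHz Barcelona parts and 4 double-precision flops per core per cycle:

```python
# Peak performance per the slide's formula:
#   theoretical peak = 4 flops/cycle x #cores x clock frequency
# Assumes the 2.0 GHz standard-issue Barcelona clock quoted on the slide.

FLOPS_PER_CYCLE = 4      # double-precision flops per core per cycle (quad-core Opteron)
NODES = 16               # worker nodes
CORES_PER_NODE = 8       # dual-CPU x quad-core

cores = NODES * CORES_PER_NODE
peak_gflops = FLOPS_PER_CYCLE * cores * 2.0   # clock in GHz
print(cores, "cores,", peak_gflops, "GFlops")  # 128 cores, 1024.0 GFlops ~ 1 TFlops
```

This reproduces the "128 cores, 1 TFlops" figure quoted for the 16-node cluster.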
12 ENEA project (5) GATEWAY LAYOUT
[Diagram: gateway layout - 9 TB + 96 TB storage]
13 ENEA project (6) Software
- Authentication & Authorization Service (AS): users log on (authenticate) with the same userid/password to all authorized nodes, with Single Sign-On (SSO) authentication
  - Kerberos V + NIS (Network Information System) or LDAP (Lightweight Directory Access Protocol)
- Resource Management System (RMS): manages and schedules applications on the HPC cluster of WNs (batch, interactive and parallel jobs)
  - Load Sharing Facility (LSF) by Platform Computing (multi-cluster license to submit jobs on ENEA-GRID and CRESCO)
- Parallel Applications (PA): a parallel programming environment based on the Message Passing (MPI) model
  - OpenMPI, MVAPICH, vendor MPI
- Compilers: Fortran/C/C++ compilers provided by ENEA GRID resources, including the Portland Group compiler suite (Fortran 77/90, C/C++, HPF)
- Commercial tools: scientific libraries, data analysis and visualization tools (IDL ver. 6.3, MATLAB) currently installed in the ENEA-GRID environment will be available to Gateway users within the limits of the actual license pool
14 ENEA project (7) WP.3 HOSTING SERVICE
The Gateway will be hosted at the ENEA Research Data Centre in Portici (Naples), which hosts the CRESCO supercomputer facility: ~2500 processors (cores) with a peak computational power of ~25 TFlops.
CRESCO: Computational Research Center for Complex Systems
Centro di Brindisi
15 ENEA project (8) WP.3 HOSTING SERVICE
- Rack space with redundant power supply: 42U EIA-310-D racks (power supply & fan)
  - 1 rack for storage system servers & switches
  - 1 rack for the HPC cluster + Front-End
  - 1 rack for the DDN storage system
- WAN link: 400 Mbps (1 Gbps during 2008)
- Network security systems
- Backup & staging of the Shared Storage Data Area
- The CRESCO Data Centre is equipped with all the security systems (fire and intrusion detection alarms)
- The local manpower for CRESCO consists of about 10 ENEA FTEs; CASPUR (also a partner in the CRESCO project) will provide support at the system level
16 ENEA project (9) WP.4 Installation & test
- install and test the individual hardware/software components
- configure the SAN
- install, configure and test the WADFS
- configure the IB storage network
- install, configure and test the PFS
- install, configure and test the HPC cluster
- install, configure and test the Front-End system
- install, configure and test the Authentication/Authorization Service
- install, configure and test the Resource Management System
- install, configure and test the Message Passing Interface for parallel processing
- configure operating environments for use of ENEA-GRID software resources
- configure the network router/firewall apparatus for remote access to the ITM TF Gateway
Deliverables:
- performance of the PFS in terms of I/O benchmarks (peak and aggregate rate)
- performance of the HPC cluster in terms of Linpack and SPEC benchmarks
- performance of the LAN in terms of throughput and delays
- performance of the WAN in terms of throughput and delays
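Linpack deliverables are usually reported as efficiency against theoretical peak; a minimal sketch of that calculation, where the measured Rmax value is a hypothetical placeholder, not a benchmark result from this project:

```python
# Linpack results are typically summarized as efficiency = Rmax / Rpeak.
# Rmax below is a hypothetical placeholder, not a measured result from this project.

RPEAK_GFLOPS = 1024.0    # theoretical peak of the 128-core cluster
rmax_gflops = 750.0      # hypothetical measured Linpack Rmax

efficiency = rmax_gflops / RPEAK_GFLOPS
print(f"Linpack efficiency: {efficiency:.1%}")  # prints "Linpack efficiency: 73.2%"
```

The same ratio can be applied to the I/O deliverable, comparing measured aggregate bandwidth against the 800 MB/s target.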
17 ENEA project (10) WP.5 Time plan and operation
The Gateway project lifetime is 4 years, subdivided into three sequential phases:
- PHASE I (PRO): Gateway hardware components provision
- PHASE II (INS): Gateway hw/sw components installation and testing
- PHASE III (OPE): Gateway operation
The Project Team provides the installation and the operation:
- PROJECT MANAGER (PM): project control and coordination
- SYSTEM ADMINISTRATOR (SA): support in hw/sw system management for the Shared Storage Data Area and the HPC cluster of WNs, as well as the Front-End systems
- SOFTWARE CONSULTANT (SC): professional with skills in the HPC environment: compilers, MPI environment, Resource Management System and software tools
Time planning (Start: Oct. 2007, End: Oct. 2011):
- Year 1: PRO 1 month (1 PM, 0.25 SA, 0.25 SC); INS 2 months (0.3 PM, 0.6 SA, 0.6 SC); OPE 9 months (0.1 PM, 0.7 SA, 0.7 SC)
- Years 2-4: OPE 12 months each (0.1 PM, 0.7 SA, 0.7 SC per year, in ppy)
18 ENEA project (11) Operation details
- administration of the hardware/software resources of the TF ITM Gateway
- direct interaction with vendors for hardware maintenance issues, with a Next Business Day formula
- software maintenance of the operating system and working environments
- monitoring of the hardware/software resources of the TF ITM Gateway
- support to users for the installation of public-domain tools
Monitoring & support interfaces:
- the status of the HW/SW resources of the Gateway via the web
- support is provided by means of a trouble ticket system that allows users to place requests online via a web interface
[Screenshot: trouble ticket submission form with object, type, message and attachment fields, and browse/submit/cancel buttons]
19 Gateway & TF ITM (1) Wide Area Distributed Filesystem
- A new AFS cell with servers inside the Internet domain portici.enea.it: efda-itm.eu or itm.eu?
- Cell tree (under /afs, alongside cells such as /afs/enea.it, /afs/fusione.it, /afs/cern.ch):
  - /system: sw, management and documentation
  - /backup: daily snapshots of users & projects
  - /project: projects and data
  - /user/a ... /user/z: users' home directories (initial user quota 10 GB), each with:
    - ~/private (access is restricted to the user)
    - ~/public (world readable)
- The home directory will have the lookup permission for any user. The public directory must be used only for data/sw with no distribution restrictions, because it can be read by every user on the Internet with an AFS client.
- Administration policies:
  - AFS servers are hidden from users (login access only for administrators)
  - user management can be delegated to ITM (dedicated web interface)
  - every user can define AFS groups to control access to the user's data space
  - dedicated groups can be defined for ISIP or IMP# projects
20 Gateway & TF ITM (2) PARALLEL FILESYSTEM
- PFS accessible from the WNs and the Front-End system
- Parallel I/O:
  - distributed I/O on multiple files
  - distributed I/O on a single file
  - MPI I/O
- PFS tree:
  - DATABASE: experimental and simulation data for retrieval and analysis
  - SCRATCH: temporary large output of parallel jobs
  - BIN: large binary files
- Disk quota assigned to ITM projects
- Access permissions can be granted to any ITM users and groups
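The "distributed I/O on a single file" pattern above can be sketched in plain Python: each simulated rank writes a disjoint, offset-based block of one shared file, which is the layout that MPI-IO routines such as MPI_File_write_at coordinate on a real parallel filesystem. The rank count and block size here are illustrative assumptions:

```python
# Sketch of "distributed I/O on a single file": every rank owns a disjoint
# block at offset rank * BLOCK, so concurrent writes never overlap. Real codes
# would use MPI-IO (e.g. MPI_File_write_at); this stdlib-only version only
# illustrates the offset layout.
import os
import tempfile

NRANKS = 4   # simulated MPI ranks
BLOCK = 8    # bytes written by each rank

path = os.path.join(tempfile.mkdtemp(), "shared_output.dat")

with open(path, "wb") as f:          # pre-size the shared file
    f.truncate(NRANKS * BLOCK)

for rank in range(NRANKS):           # each "rank" writes its own block
    with open(path, "r+b") as f:
        f.seek(rank * BLOCK)
        f.write(bytes([rank]) * BLOCK)

with open(path, "rb") as f:
    data = f.read()
assert data == b"".join(bytes([r]) * BLOCK for r in range(NRANKS))
```

Because the offsets are disjoint, no locking is needed between writers; this is what lets a PFS aggregate bandwidth across many worker nodes.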
21 Gateway & TF ITM (3) COMPUTING RESOURCES
- HPC WNs are clients of both the WADFS & the PFS
- WN hostnames: itm1..itm16 (???) - Internet domain portici.enea.it
- WNs run single or parallel jobs submitted by users via LSF
- Users don't have login access to the WNs
- The Front-End system allows users to access all the gateway resources remotely:
  - user remote access with ssh, scp, sftp, gridftp, bbftp (citrix-metaframe is optional)
  - interactive sessions to submit parallel jobs on the HPC cluster, compile projects and visualize data
  - user interface to EGEE-GRID Virtual Organizations over a dedicated host
  - at least 2 Front-End nodes; hostnames: ves1, ves2 (????)
22 Gateway & TF ITM (4) GENERAL ISSUES: open for discussion with ITM
- Software project repository: CVS, Subversion, Mercurial, ...
- Queue setup: serial / parallel / nightly, # CPUs & memory resources ...
- Access to CRESCO resources
- Environment: shared libraries, compilers
- MDS+: tree_path
- Gateway tools (implementation in charge of ISIP):
  - KEPLER (installation: AFS or PFS (????); Front-End or WN (???))
  - mdsip server for remote data access (Front-End)
  - ITM PORTAL (web server, mysql server; on the Front-End)
  - Universal Access Layer (production/development/test environments)
  - ...
23 Conclusions
The ITM GATEWAY at the CRESCO ENEA site:
- fulfils the TF ITM requirements
- has improved features:
  - ~1 TFlops HPC cluster
  - 2 storage systems for better performance
  - 2 IB networks, separating the node interconnection and the access to the storage
- will be able to access CRESCO resources (up to 25 TFlops)
Kick-off meeting (Sept. 27th) on request of the EU Commission
24 Other slides just in case!
25 GATEWAY COSTS
Max costs estimated (over 4 years):
- Hardware/Software: k
- Housing Service: 252 k
- Manpower for installation & operation: 1 ppy (professional)
ENEA PROPOSAL (over 4 years):
- Hardware/Software: k
- Housing Service: 252 k
- Manpower for installation & operation: 1.5 ppy (professional)
EFDA Preferential Support (hardware/software resources, hosting services, installation & operation manpower): total k (40%)
26 GATEWAY DETAILS (I) SHARED STORAGE DATA AREA
Hardware for WADFS servers:
- 3 servers, 1U rack-mount, dual-CPU dual-core Xeon GHz/2x2MB L2 cache, 667 FSB
  - RAM 8 GB FB 667 MHz
  - 2 x 80 GB SATA2 (7200 rpm) 3.5-inch HD (hot plug)
  - 8X IDE DVD-ROM drive
  - two slots on separate PCI buses, with either a PCI Express riser with two x8-lane slots or a PCI-X riser with 2 x 64-bit/133 MHz slots
  - single-port 4 Gbps Fibre Channel PCI Express HBA card
  - dual Gigabit Ethernet NICs with load balancing and fail-over support
  - RAID controller: PERC 5/i integrated SAS/SATA daughter-card controller with 256 MB cache
  - redundant power supply
SAN infrastructure:
- 1 FC stackable switch QLOGIC SANbox, 12 or 16 auto-detecting 4Gb/2Gb/1Gb device ports
- 16 GBICs, 4 Gbps
RAID array system:
- storage system Infortrend A16F-G2430, FC to SATA-II, 9 TB (net amount) in RAID 6
- 2 FC-4G host channels; transfer rate up to 400 MBps per channel
- 16 bays for SATA-II HDs; 16 HDs of 750 GB SATA-II
Software for WADFS:
- Operating system: Scientific Linux (SRPM base: RHEL4/ES + patches; kernel xx.ELcern)
- AFS: OpenAFS 1.4.2; MIT Kerberos V: 1.6
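The "9 TB (net amount)" figure for the 16 x 750 GB RAID-6 array can be reproduced with a quick capacity calculation; a sketch that accounts only for RAID-6 parity and the decimal-GB to binary-TiB conversion, ignoring further filesystem overhead:

```python
# Reproducing the "9 TB (net amount)" figure: 16 x 750 GB SATA drives in RAID 6
# lose two drives' worth of capacity to parity; vendors quote decimal GB while
# usable space is usually reported in binary TiB.

DRIVES = 16
DRIVE_GB = 750        # decimal gigabytes (1 GB = 1e9 bytes)
PARITY_DRIVES = 2     # RAID 6 tolerates two drive failures

usable_bytes = (DRIVES - PARITY_DRIVES) * DRIVE_GB * 1e9
usable_tib = usable_bytes / 2**40
print(f"{usable_tib:.1f} TiB usable")  # prints "9.5 TiB usable", before FS overhead
```

The ~9.5 TiB result lands close to the quoted 9 TB net once filesystem formatting and spare capacity are subtracted.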
27 GATEWAY DETAILS (II) SHARED STORAGE DATA AREA
Hardware for Parallel Filesystem servers:
- 2 servers, 1U rack-mount, dual-CPU dual-core Xeon GHz/2x2MB L2 cache, 667 FSB
  - RAM 8 GB FB 667 MHz
  - 2 x 80 GB SATA2 (7200 rpm) 3.5-inch HD (hot plug)
  - 8X IDE DVD-ROM drive
  - two slots on separate PCI buses, with either a PCI Express riser with two x8-lane slots or a PCI-X riser with 2 x 64-bit/133 MHz slots
  - HCA InfiniBand 4X DDR dual port (10 Gb/s)
  - dual Gigabit Ethernet NICs with load balancing and fail-over support
  - RAID controller: PERC 5/i integrated SAS/SATA daughter-card controller with 256 MB cache
Storage network infrastructure:
- 1 InfiniBand switch CISCO SFS 7000p, 24 ports 4X DDR (10 Gbps); IB cables
RAID array system (delivered end 2007), usable capacity (net amount) 32 TB in RAID 6:
- 1 DDN S2A9550 Archive Solution composed of:
  - 1 S2A9550 couplet with 5x48-slot enclosure, 5 GB cache, 4 FC4 (4 Gb) ports + 4 IB ports
  - 2 48-slot Double Dual Port (2x24-drive) FC-to-SATA enclosures
  - 1 rack 24U; peak bandwidth 3.2 GB/s
  - 24 tiers (8 populated); GB 7200 RPM SATA disks; FC and IB cables
(Delivered within 2008) usable capacity (net amount) 64 TB in RAID 6:
- 3 48-slot Double Dual Port (2x24-drive) FC-to-SATA enclosures
- GB 7200 RPM SATA disks
Software for PFS:
- Operating system: Scientific Linux (SRPM base: RHEL4/ES + patches; kernel xx.ELcern)
- GPFS (IBM) or Lustre (Cluster File System)
28 GATEWAY DETAILS (III) COMPUTING RESOURCES
Hardware for the HPC cluster:
- 16 servers, max 2U rack-mount designed server, dual-CPU quad-core AMD Opteron 2 GHz / 2M L2 cache + 2M L3 shared cache
  - RAM 32 GB DDR II 667 MHz (4 GB/core, 16x2GB DIMMs; max 64 GB)
  - 2 x 80 GB SATA (7200 rpm) 3.5-inch HD (hot swap)
  - 2 PCI-Express x8 and 1 PCI-Express x4
  - dual Gigabit Ethernet NICs with load balancing and fail-over support
  - HCA InfiniBand 4X DDR dual port (10 Gbps)
- 1 Gigabit Ethernet switch such as the Cisco Catalyst 2948G-GE-TX
- 1 InfiniBand switch such as the CISCO SFS 7000p, 24 ports 4X DDR (10 Gbps)
Software for the HPC cluster:
- Operating system: Scientific Linux (SRPM base: RHEL4/ES + patches; kernel xx.ELcern)
Hardware for the Front-End system:
- 2 servers, 1U rack-mount designed server, dual-CPU dual-core AMD Opteron 2.4 GHz / 2x1M L2 cache
  - 16 GB DDR2 667 MHz
  - 2 x 80 GB SATA (7200 rpm) 3.5-inch HD (hot plug), 8X IDE DVD-ROM drive
  - dual Gigabit Ethernet NICs with load balancing and fail-over support
  - one PCI Express x8 (full height, half length) or one PCI-X (64-bit/133 MHz, full height, half length)
  - HCA InfiniBand 4X DDR dual port (10 Gb/s)
Software for the Front-End system:
- Operating system: Scientific Linux (SRPM base: RHEL4/ES + patches; kernel 2.6.x-xx.ELcern)
- Authentication/Authorization services: OpenAFS 1.4.x; MIT Kerberos V 1.6; SSH 4.5p1 K5/GSSAPI/AFS-aware (openssl 0.9.8d)
- Resource Management System: Platform LSF 6.2 multi-cluster, 32 server slots for the HPC cluster and 8 client slots for the Front-End system
- Analysis and development software: packages available in the ENEA-GRID environments
[Image: rack designed server]
LANL Computing Environment for PSAAP Partners
LANL Computing Environment for PSAAP Partners Robert Cunningham [email protected] HPC Systems Group (HPC-3) July 2011 LANL Resources Available To Alliance Users Mapache is new, has a Lobo-like allocation Linux
814368 814369 814370
Referencia: 814368 Dell P/Edge 2900 Q/Core X 1600 Nettó ár: 598.250.- Db: 1 Tag Number: DR5J93J PE2900 Quad-Core Xeon E5310 1.6GHz/2x4MB CD-RW/DVD Combo Drive 300GB SAS (10,000rpm) 3.5 inch Hard Drive
Michael Kagan. [email protected]
Virtualization in Data Center The Network Perspective Michael Kagan CTO, Mellanox Technologies [email protected] Outline Data Center Transition Servers S as a Service Network as a Service IO as a Service
Comparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet. September 2014
Comparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet Anand Rangaswamy September 2014 Storage Developer Conference Mellanox Overview Ticker: MLNX Leading provider of high-throughput,
REQUEST FOR QUOTE. All out of date servers contain approximately 1 TB of data that needs to be migrated to the new Windows domain environment.
REQUEST FOR QUOTE The Housing Authority of the City of Hartford is seeking quotations for the following project: Systems Department Enterprise Technology Enhancements The current infrastructure consists
THE SUN STORAGE AND ARCHIVE SOLUTION FOR HPC
THE SUN STORAGE AND ARCHIVE SOLUTION FOR HPC The Right Data, in the Right Place, at the Right Time José Martins Storage Practice Sun Microsystems 1 Agenda Sun s strategy and commitment to the HPC or technical
QuickSpecs. HP Integrity cx2620 Server. Overview
Overview At A Glance Product Numbers HP cx2620 Server with one 1.6GHz/3MB single-core CPU AB401A HP cx2620 Server with one 1.4GHz/12MB dual-core AB402A Standard System Features Multiple Operating Environment
An Oracle White Paper December 2011. Oracle Virtual Desktop Infrastructure: A Design Proposal for Hosted Virtual Desktops
An Oracle White Paper December 2011 Oracle Virtual Desktop Infrastructure: A Design Proposal for Hosted Virtual Desktops Introduction... 2! Design Goals... 3! Architectural Overview... 5! Logical Architecture...
Microsoft SharePoint Server 2010
Microsoft SharePoint Server 2010 Small Farm Performance Study Dell SharePoint Solutions Ravikanth Chaganti and Quocdat Nguyen November 2010 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY
Current Status of FEFS for the K computer
Current Status of FEFS for the K computer Shinji Sumimoto Fujitsu Limited Apr.24 2012 LUG2012@Austin Outline RIKEN and Fujitsu are jointly developing the K computer * Development continues with system
HUS-IPS-5100S(D)-E (v.4.2)
Honeywell s HUS-IPS-5100S(D)-E is a controller-based IP SAN unified storage appliance. Designed for centralized mass data storage, this IP SAN solution can be used with the high performance streaming server
Maurice Askinazi Ofer Rind Tony Wong. HEPIX @ Cornell Nov. 2, 2010 Storage at BNL
Maurice Askinazi Ofer Rind Tony Wong HEPIX @ Cornell Nov. 2, 2010 Storage at BNL Traditional Storage Dedicated compute nodes and NFS SAN storage Simple and effective, but SAN storage became very expensive
Servers, Clients. Displaying max. 60 cameras at the same time Recording max. 80 cameras Server-side VCA Desktop or rackmount form factor
Servers, Clients Displaying max. 60 cameras at the same time Recording max. 80 cameras Desktop or rackmount form factor IVR-40/40-DSKT Intellio standard server PC 60 60 Recording 60 cameras Video gateway
Cisco MCS 7825-H3 Unified Communications Manager Appliance
Cisco MCS 7825-H3 Unified Communications Manager Appliance Cisco Unified Communications is a comprehensive IP communications system of voice, video, data, and mobility products and applications. It enables
CORRIGENDUM TO TENDER FOR HIGH PERFORMANCE SERVER
CORRIGENDUM TO TENDER FOR HIGH PERFORMANCE SERVER Tender Notice No. 3/2014-15 dated 29.12.2014 (IIT/CE/ENQ/COM/HPC/2014-15/569) Tender Submission Deadline Last date for submission of sealed bids is extended
FLOW-3D Performance Benchmark and Profiling. September 2012
FLOW-3D Performance Benchmark and Profiling September 2012 Note The following research was performed under the HPC Advisory Council activities Participating vendors: FLOW-3D, Dell, Intel, Mellanox Compute
Microsoft Exchange Server 2003 Deployment Considerations
Microsoft Exchange Server 3 Deployment Considerations for Small and Medium Businesses A Dell PowerEdge server can provide an effective platform for Microsoft Exchange Server 3. A team of Dell engineers
ECLIPSE Performance Benchmarks and Profiling. January 2009
ECLIPSE Performance Benchmarks and Profiling January 2009 Note The following research was performed under the HPC Advisory Council activities AMD, Dell, Mellanox, Schlumberger HPC Advisory Council Cluster
AFS in a GRID context
Rome 28-30 September 2009, Università Roma Tre European AFS meeting 2009 http://www.dia.uniroma3.it/~afscon09 AFS in a GRID context G. Bracco, S.Migliori, S. Podda, P. D'Angelo A. Santoro, A. Rocchi, C.
How To Build A Supermicro Computer With A 32 Core Power Core (Powerpc) And A 32-Core (Powerpc) (Powerpowerpter) (I386) (Amd) (Microcore) (Supermicro) (
TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 7 th CALL (Tier-0) Contributing sites and the corresponding computer systems for this call are: GCS@Jülich, Germany IBM Blue Gene/Q GENCI@CEA, France Bull Bullx
Virtualised MikroTik
Virtualised MikroTik MikroTik in a Virtualised Hardware Environment Speaker: Tom Smyth CTO Wireless Connect Ltd. Event: MUM Krackow Feb 2008 http://wirelessconnect.eu/ Copyright 2008 1 Objectives Understand
When EP terminates the use of Hosting CC OG, EP is required to erase the content of CC OG application at its own cost.
Explanatory Note 1 (Hosting CC OG - For Trading) Section A Notes to the Application a. China Connect Open Gateway (CC OG) : CC OG is a hardware and software component operated by the Exchange Participant,
SR-IOV In High Performance Computing
SR-IOV In High Performance Computing Hoot Thompson & Dan Duffy NASA Goddard Space Flight Center Greenbelt, MD 20771 [email protected] [email protected] www.nccs.nasa.gov Focus on the research side
Mass Storage System for Disk and Tape resources at the Tier1.
Mass Storage System for Disk and Tape resources at the Tier1. Ricci Pier Paolo et al., on behalf of INFN TIER1 Storage [email protected] ACAT 2008 November 3-7, 2008 Erice Summary Tier1 Disk
Oracle Database Scalability in VMware ESX VMware ESX 3.5
Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises
FUJITSU Enterprise Product & Solution Facts
FUJITSU Enterprise Product & Solution Facts shaping tomorrow with you Business-Centric Data Center The way ICT delivers value is fundamentally changing. Mobile, Big Data, cloud and social media are driving
1 DCSC/AU: HUGE. DeIC Sekretariat 2013-03-12/RB. Bilag 1. DeIC (DCSC) Scientific Computing Installations
Bilag 1 2013-03-12/RB DeIC (DCSC) Scientific Computing Installations DeIC, previously DCSC, currently has a number of scientific computing installations, distributed at five regional operating centres.
Computational infrastructure for NGS data analysis. José Carbonell Caballero Pablo Escobar
Computational infrastructure for NGS data analysis José Carbonell Caballero Pablo Escobar Computational infrastructure for NGS Cluster definition: A computer cluster is a group of linked computers, working
HP high availability solutions for Microsoft SQL Server Fast Track Data Warehouse using SQL Server 2012 failover clustering
Technical white paper HP high availability solutions for Microsoft SQL Server Fast Track Data Warehouse using SQL Server 2012 failover clustering Table of contents Executive summary 2 Fast Track reference
Building a Top500-class Supercomputing Cluster at LNS-BUAP
Building a Top500-class Supercomputing Cluster at LNS-BUAP Dr. José Luis Ricardo Chávez Dr. Humberto Salazar Ibargüen Dr. Enrique Varela Carlos Laboratorio Nacional de Supercómputo Benemérita Universidad
High Performance. CAEA elearning Series. Jonathan G. Dudley, Ph.D. 06/09/2015. 2015 CAE Associates
High Performance Computing (HPC) CAEA elearning Series Jonathan G. Dudley, Ph.D. 06/09/2015 2015 CAE Associates Agenda Introduction HPC Background Why HPC SMP vs. DMP Licensing HPC Terminology Types of
Using the Windows Cluster
Using the Windows Cluster Christian Terboven [email protected] aachen.de Center for Computing and Communication RWTH Aachen University Windows HPC 2008 (II) September 17, RWTH Aachen Agenda o Windows Cluster
HPC @ CRIBI. Calcolo Scientifico e Bioinformatica oggi Università di Padova 13 gennaio 2012
HPC @ CRIBI Calcolo Scientifico e Bioinformatica oggi Università di Padova 13 gennaio 2012 what is exact? experience on advanced computational technologies a company lead by IT experts with a strong background
Cisco SFS 7000P InfiniBand Server Switch
Data Sheet Cisco SFS 7000P Infiniband Server Switch The Cisco SFS 7000P InfiniBand Server Switch sets the standard for cost-effective 10 Gbps (4X), low-latency InfiniBand switching for building high-performance
Hyper-V over SMB Remote File Storage support in Windows Server 8 Hyper-V. Jose Barreto Principal Program Manager Microsoft Corporation
Hyper-V over SMB Remote File Storage support in Windows Server 8 Hyper-V Jose Barreto Principal Program Manager Microsoft Corporation Agenda Hyper-V over SMB - Overview How to set it up Configuration Options
LS DYNA Performance Benchmarks and Profiling. January 2009
LS DYNA Performance Benchmarks and Profiling January 2009 Note The following research was performed under the HPC Advisory Council activities AMD, Dell, Mellanox HPC Advisory Council Cluster Center The
Investigation of storage options for scientific computing on Grid and Cloud facilities
Investigation of storage options for scientific computing on Grid and Cloud facilities Overview Context Test Bed Lustre Evaluation Standard benchmarks Application-based benchmark HEPiX Storage Group report
Business white paper. HP Process Automation. Version 7.0. Server performance
Business white paper HP Process Automation Version 7.0 Server performance Table of contents 3 Summary of results 4 Benchmark profile 5 Benchmark environmant 6 Performance metrics 6 Process throughput 6
