MareNostrum 3 Javier Bartolomé BSC System Head Barcelona, April 2015
2 Index: MareNostrum 3 Overview · Compute Racks · Infiniband Racks · Management Racks · GPFS Network Racks · HPC GPFS: Storage Hardware, GPFS, Data Services · Long-Term Storage (Archive): Active Archive Hardware, Active Archive Services · Batch Scheduler System · Software Stack
3 MN2 vs MN3 (photo slide)
4 MareNostrum 3 (photo slide)
5 MareNostrum 3:
- 36x IBM iDataPlex compute racks, 84x IBM compute nodes each:
  - 2x Intel SandyBridge-EP E5-2670 (2.6 GHz, 20 MB cache, 8-core, 115 W)
  - 8x 4 GB DDR3 DIMMs (2 GB/core)
  - 500 GB 7200 rpm SATA II local HDD
- 4x IBM dx360 M4 compute nodes on a management rack
- 3028 compute nodes, 48,448 Intel cores
- Memory: 96.9 TB total (32 GB/node)
- Peak performance: 1.1 Pflop/s (332.8 Gflops/node, 27.96 Tflops/rack)
- Estimated power consumption: 1.08 MW (nominal under HPL)
- Infiniband FDR10 non-blocking fat-tree network topology
6 MareNostrum compute, memory and network evolution:

                      MN1 (2004)  Ratio  MN2 (2006)  Ratio  MN3 (2012)
  Cores/chip          1           x2     2           x4     8
  Chips/node          2                  2                  2
  Cores/node          2           x2     4           x4     16
  Nodes               2406               2560               3028
  Total cores         4812        x2.1   10240       x4.7   48448
  Freq. (GHz)         2.2                2.3                2.6
  Gflops/core         8.8                9.2                20.8
  Gflops/node         17.6               36.8               332.8
  Total Tflops        42.3        x2     94.2        x10.7  1008.0
  GB/core             2                  2                  2
  GB/node             4           x2     8           x4     32
  Total memory (TB)   9.6         x2     20          x4.84  96.89
  Topology            Non-blocking fat tree on all three generations
  Latency (µs)        4                  4           x5.7   0.7
  Bandwidth (Gb/s)    4                  4           x10    40
  Storage (TB)        236         x2     460         x4.1   1900
  Consumption (kW)    650         x1.1   750         x1.4   1080
7 MN3 Hardware Layout (floor plan): compute racks C1–C36, paired into superracks s01r1–s18r2, plus C37–C40; Infiniband racks IB1–IB7; management racks M1–M3; storage racks D1–D5
8 MN3 Compute Racks (same floor plan, with the compute racks C1–C40 highlighted)
9 MN3 iDataPlex Compute Rack:
- 84x IBM System x iDataPlex servers
- 4x Mellanox 36-port managed FDR10 IB switches; 12 compute nodes connect directly to leaf switches in the IB core racks
- Management network: 2x BNT RackSwitch G8052F
- GPFS network: 2x BNT RackSwitch G8052F
- iDataPlex rack with RDHX (water cooling)
- Performance: 2.60 GHz x 8 flops/cycle (AVX) = 20.8 Gflops/core; 16 cores x 20.8 Gflops/core = 332.8 Gflops/node; 84 nodes x 332.8 Gflops/node = 27.96 Tflops/rack
- Rack elevation (each half): 2x 3P 32A PDUs, BNT G8052 management switch, 2x MLX FDR10 36-port switches, BNT G8052 GPFS switch
11 Rear Door Heat Exchanger:
- No leaks: sealed internal coils
- Lock handle to open/close the door
- Perforated door for clear airflow
- Industry-standard hose fittings
- Swings open to provide access to the rear PDUs
12 MN3 chassis: one 2U chassis houses 2 nodes with shared power (2x 900 W, redundant N+N) and cooling (80 mm fans). Each node has 2x 1GbE interfaces, 1x IMM interface and a Mellanox ConnectX-3 dual-port FDR10 QSFP IB mezzanine card. Front of chassis: 16x DDR3 DIMM slots, one 3.5'' SATA drive, 2x CPU sockets (FCLGA2011). Rear of chassis: IMM port, dual-port QSFP FDR10 IB mezz card & ports, Ethernet.
13 Block diagram (figure)
14 MN3 network physical configuration (per iDataPlex rack):
- Management network (IMM and xCAT/boot), 1 Gb/s copper: 2x BNT RackSwitch G8052F management & boot Ethernet switches, each serving 41x dx360 M4 nodes, with the remaining ports linking the IB leaf management ports and the GPFS switches (2x 1 Gb/s copper)
- GPFS network, 1 Gb/s copper: 2x BNT RackSwitch G8052F GPFS Ethernet switches, each serving 41x dx360 M4 nodes, with 4x 10 Gb/s optical uplinks
- Infiniband FDR10: 4x Mellanox 36-port FDR10 IB leaf switches, each serving 17x dx360 M4 nodes over 40 Gb/s copper, with optical uplinks to the core
15 MN3 VLAN configuration:
- Management network VLAN (1 Gb/s copper, BNT G8052F management switches): IMM, remote control, consoles and switch management; OS services: xCAT, network boot, LSF, Ganglia
- GPFS network VLAN (1 Gb/s copper, 4x 10 Gb/s optical uplinks, BNT G8052F GPFS switches): GPFS I/O traffic
- Infiniband FDR10 (40 Gb/s, Mellanox 36-port leaf switches): MPI application traffic
16 MN3 Infiniband Racks (same floor plan, with the Infiniband racks IB1–IB7 highlighted)
17 MN3 Infiniband Network:
- 6x Infiniband racks (4 today), each with a Mellanox 648-port FDR10 Infiniband core switch (29U)
- 1x Infiniband rack with leaf IB switches + UFM servers:
  - 18x Mellanox 36-port managed FDR10 IB switches
  - 2x Infiniband UFM (Unified Fabric Manager) servers: provision, monitor and operate the data center fabric
- 144x Mellanox 36-port managed FDR10 IB switches (100 today)
- Photos: front / back (cabling)
18 MN3 Infiniband network topology (diagram): 6x Mellanox SX6536 core switches, 648 ports available (507 used) on each; a leaf level of Mellanox FDR10 36-port switches, each serving 18 nodes (12 nodes on the management-rack leaves); the UFM servers and login nodes attach directly to the fabric
19 MN3 Management Racks (same floor plan, with the management racks M1–M3 highlighted)
20 MN3 Management Hardware:
- 2x xCAT GPFS servers & 2 storage controllers: a 9 TB filesystem mounted on the management servers only, storing the operating system images of all nodes plus the logs and configuration files of the cluster
- 2x xCAT master servers: main xCAT servers working in high availability; main DNS servers for the cluster
- 18x xCAT service nodes (13 today): each holds the services (DHCP, TFTP, HTTP, NFS) for a portion of the machine
- 2x scheduler servers
- 2x monitoring servers
- 5x login nodes and 1 master node
21 MN3 Management Software:
- xCAT: Extreme Cluster (Cloud) Administration Toolkit
  - Framework for alerts and alert management
  - Hardware management: control, monitoring, etc.
  - Administration of cluster services: DNS, DHCP, Conserver, ...
  - Software provisioning and maintenance
- Compute nodes:
  - Boot from the network
  - RootFS mounted via NFS (ro, rw, tmpfs) from the xCAT servers
  - Local hard drive used only for temporary data and swap space
  - Same OS image for all compute nodes
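To make the xCAT workflow concrete, a minimal command sketch; the node name s01r1b01 is hypothetical, while the commands themselves are standard xCAT:

    # List the nodes xCAT knows about
    $ nodels
    # Show one node's definition: IP, MAC, provisioning image, ...
    $ lsdef s01r1b01
    # Query and control node power through the IMM
    $ rpower s01r1b01 stat
    $ rpower s01r1b01 boot    # reboot so the node network-boots the shared OS image
    # Regenerate DHCP and DNS after changing node definitions
    $ makedhcp -n
    $ makedns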
22 MN3 xCAT hierarchy (diagram): 2x xCAT master management nodes (DHCP, DNS, TFTP, HTTP) backed by the xcatdb MySQL database and by the 2x xCAT GPFS servers (DS3512 controller + EXP3512 expansion); beneath them, the xCAT service nodes (DHCP, TFTP, HTTP, NFS) are organized into xCAT groups, each group serving 8x iDataPlex racks
23 MN3 GPFS Network Racks (same floor plan, with the GPFS network racks highlighted)
24 MN3 GPFS Network (diagram): each iDataPlex rack (idpx 1 ... idpx 36) aggregates 42x 1 Gb/s node links (5.25 GB/s) into its rack GPFS switches, which uplink 4x 10 Gb/s (4.8 GB/s) to a new Force10 E1200i 10G core switch (7x 10-port and 7x 40-port line cards, remaining slots empty) alongside the existing BSC Force10 E1200i 10G switch (20x 10 Gb/s, 24 GB/s, between them); 30x 10 Gb/s links reach the 1.9 PB GPFS high-performance filesystems, and the compute nodes on the management rack (43x 1 Gb/s) attach through 4x ExaScale 10-port 10G Ethernet line cards
25 Index: MareNostrum 3 Overview · Compute Racks · Infiniband Racks · Management Racks · GPFS Network Racks · HPC GPFS: Storage Hardware, GPFS, Data Services · Long-Term Storage (Archive): Active Archive Hardware, Active Archive Services · Batch Scheduler System · Software Stack
26 HPC GPFS Storage Racks (same floor plan, with the storage racks D1–D5 highlighted)
27 HPC Storage Hardware:
- 3x data building blocks, each with:
  - 8x data servers (x3550 M3) with 48 GB main memory
  - 1x DS5300 controller couplet
  - 8x EXP expansion enclosures with 50x SATA 2 TB 7.2K rpm disks each (400 disks total, 10 empty slots per enclosure)
  - Total capacity: 800 TB; net capacity: 640 TB (RAID6 8+2P)
- TOTAL data capacity: 1200x SATA 2 TB 7.2K rpm disks; net capacity: 1920 TB (RAID6 8+2P, i.e. 2400 TB raw x 8/10)
- 1x metadata building block:
  - 6x metadata servers (x3650 M3) with 128 GB main memory
  - 1x DS5300 controller couplet (4U)
  - 8x storage enclosure expansion units with 112x FC 600 GB 15K rpm disks (16 disks/enclosure)
  - Total capacity: 67.2 TB; net capacity: 33.6 TB (RAID 1)
28 Storage GPFS:
- IBM's high-performance shared-disk file management product
- Lets multiple processes on all nodes access the same file with standard syscalls
- File reads/writes are striped across multiple disks, increasing aggregate bandwidth and balancing the load across all disks in a filesystem
- Large files are divided into equal-sized blocks; consecutive blocks are allocated to different disks round-robin
- Supports very large files and filesystem sizes (max tested: 4 PB)
- Allows concurrent reads and writes from multiple nodes
- GPFS keeps a local cache on each client (MN pagepool: 1 GB)
- GPFS prefetches data into its buffer pool, issuing parallel I/O requests, for sequential, reverse-sequential and various strided access patterns
- Supports block sizes up to 8 MB
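These parameters can be inspected with standard GPFS administration commands; a minimal sketch, assuming a hypothetical filesystem device name gpfs_scratch:

    # Block size of the filesystem
    $ mmlsfs gpfs_scratch -B
    # Client-side cache (pagepool) size configured on this node
    $ mmlsconfig pagepool
    # Capacity and how data is spread across the disks (NSDs)
    $ mmdf gpfs_scratch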
29 Storage GPFS Distributed Locking Mechanism:
- GPFS uses a distributed token-based lock system to keep files consistent
- Tokens are issued at block level or for the whole file, depending on the operation
- The filesystem manager is the token manager server: it coordinates access to files, granting the right to read/write data and metadata
- Protocol (diagram): client 1 requests a read/write token from the filesystem manager and is granted it; when client 2 later requests a conflicting token, the manager answers with the list of nodes holding conflicting tokens; client 1 relinquishes its token, and client 2 is then granted the token and can read/write
30 HPC Storage GPFS Filesystems:
- /gpfs/home: users' home directories (59 TB); user quotas enforced; data & metadata replication; block size 256 KB
- /gpfs/apps: applications (30 TB); data & metadata replication; block size 512 KB
- /gpfs/projects: data shared between users of the same project (612 TB); group quotas enforced; metadata replication; block size 4 MB
- /gpfs/scratch: data used only during executions (1.1 PB); group quotas enforced; metadata replication; block size 4 MB; no backup of this filesystem
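A sketch of how the enforced quotas could be checked from a login node with the standard GPFS quota command; the device and group names (gpfs_home, gpfs_projects, my_group) are hypothetical:

    # Per-user quota on /gpfs/home (user quotas enforced)
    $ mmlsquota gpfs_home
    # Per-group quota on /gpfs/projects (group quotas enforced)
    $ mmlsquota -g my_group gpfs_projects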
31 HPC GPFS clients (diagram) — all mounting the 1.9 PB HPC GPFS (/gpfs/home, /gpfs/projects, /gpfs/scratch, /gpfs/apps) at up to 15 GB/s:
- MareNostrum 3 cluster: 3028 nodes, 1 Pflop (288x 10GE links)
- MinoTauro cluster: 256 GPUs (15x 10GE links)
- 2x SMP machines: one with 1.5 TB RAM, one with 96 cores and 1.2 TB RAM
- Nord cluster: ppc970 nodes
- LifeScience cluster: 12 nodes
- HPC GPFS services: 1 login server (dl01.bsc.es) and 2 transfer servers (dt01.bsc.es, dt02.bsc.es)
32 HPC GPFS Services:
- dlogin (dl01.bsc.es): interactive access via SSH from the Internet to BSC HPC GPFS
- dtransfer (dt01.bsc.es & dt02.bsc.es): transfer servers between the Internet and BSC HPC GPFS; supported transfer protocols: SCP/SFTP, FTP+SSL, BBCP, GridFTP
- These nodes also give access to other BSC storage: long-term storage (Active Archive & HSM, read-only mode) and internal BSC departmental storage
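For example, a sketch of an inbound transfer using the SCP/SFTP protocols listed above; the username and destination path are hypothetical:

    # Copy a local file onto /gpfs/projects through a transfer server
    $ scp results.tar.gz my_user@dt01.bsc.es:/gpfs/projects/my_group/
    # Or browse interactively over SFTP
    $ sftp my_user@dt02.bsc.es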
33 Index: MareNostrum 3 Overview · Compute Racks · Infiniband Racks · Management Racks · GPFS Network Racks · HPC GPFS: Storage Hardware, GPFS, Data Services · Long-Term Storage (Archive): Active Archive Hardware, Active Archive Services · Batch Scheduler System · Software Stack
34 Long-Term Storage (Archive):
- Not directly accessible from the HPC machines; can be used from any HPC machine through a batch system
- Commands: dtcp, dttar, dtmv, ...
- Active Archive (/gpfs/archive): archive system based on hard drives; a 3.8 PB GPFS filesystem; group quotas enabled; metadata replicated; block size = 1 MB
35 Active Archive Hardware Overview:
- 12x GPFS servers (x3550 M4) with 16 GB RAM
- 10x data storage blocks: 1x DCS3700 controller + 2x EXP3700 expansions, 180x NL-SAS 3 TB 7.2K rpm disks (60 disks per enclosure); block capacity: 540 TB raw
- 3x metadata blocks: 1x DS3512 controller + 6x EXP3512 expansions, 77x SAS 600 GB disks; block capacity: 45 TB raw
- TOTAL capacity — data: 5.45 PB raw (4.1 PB net); metadata: 135 TB raw (67 TB net)
- 10x client servers (x3550 M4) with 128 GB RAM: 4 amovers (explained later), 4 NFS/CIFS servers for the BSC LAN, 4 data cloud services for the Internet
36 Active Archive Services:
- dtransfer (dt01 & dt02): permits transfers to/from long-term storage over the Internet
- Interactive access to Archive and HSM via NFS mounts: low performance, for interactive access only
- Diagram: dt01.bsc.es / dt02.bsc.es reach the Internet at 1x 10GE, mount the HPC GPFS filesystems (/gpfs/home, /gpfs/projects, /gpfs/scratch, /gpfs/apps) natively, and mount the 3.7 PB /gpfs/archive over NFS at 1x 1GE
37 Active Archive Movers:
- Batch Active Archive movers (amover1 ... amover4): non-interactive nodes that execute movement commands between HPC GPFS and the Active Archive (/gpfs/archive, 3.7 PB)
- From ANY HPC machine (login or compute node):
  $ dtcp <ORIG> <DEST>
  $ dtmv <ORIG> <DEST>
  $ dtrsync <ORIG> <DEST>
- Each command submits a job to the data-transfer batch queue system; when the job enters execution it runs on one of the amovers (2x 10GE links each)
- Each amover server can provide up to 2 GB/s of performance
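Putting this together, a sketch of archiving a finished run from a login node; the paths are hypothetical, and the commands follow the <ORIG> <DEST> form shown above (each returns after queuing a data-transfer batch job):

    # Copy a packed run from scratch into the archive
    $ dtcp /gpfs/scratch/my_group/run42.tar /gpfs/archive/my_group/run42.tar
    # Later, bring an input set back into projects
    $ dtcp /gpfs/archive/my_group/inputs.tar /gpfs/projects/my_group/inputs.tar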
38 Index: MareNostrum 3 Overview · Compute Racks · Infiniband Racks · Management Racks · GPFS Network Racks · HPC GPFS: Storage Hardware, GPFS, Data Services · Long-Term Storage (Archive): Active Archive Hardware, Active Archive Services · Batch Scheduler System · Software Stack
39 Batch Scheduler System:
- Users access only the login nodes and submit jobs to the batch scheduler system
- IBM LSF is used in MareNostrum 3
- LSF takes care of:
  - Handling user jobs (submit, cancel, query, ...)
  - Prioritization between jobs
  - Health monitoring of all machine resources
  - Deciding which nodes are used by each job
  - Controlling process spawning and finalization of each job
  - Accounting of all hours consumed
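A minimal LSF job script sketch; the queue name and resource values are hypothetical, while the #BSUB directives and the bsub/bjobs/bkill commands are standard LSF:

    #!/bin/bash
    #BSUB -J my_simulation         # job name
    #BSUB -q my_queue              # queue (hypothetical name)
    #BSUB -n 64                    # number of cores
    #BSUB -W 02:00                 # wall-clock limit (hh:mm)
    #BSUB -o my_simulation_%J.out  # stdout (%J expands to the job ID)
    #BSUB -e my_simulation_%J.err  # stderr
    mpirun ./my_app

Submitted, queried and cancelled from a login node with:

    $ bsub < job.lsf
    $ bjobs
    $ bkill <JOBID>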
40 Batch Scheduler Overview (diagram): users on login1–login5 submit, query and cancel jobs against the redundant scheduler servers, which answer with a return code + info. An example queue listing shows jobs of user92 and user33 RUNNING in queue prace, user01 RUNNING in class_a, and two jobs of my_user PENDING in class_b (one of 448 CPUs). When a job enters execution, the master spawns its processes on the allocated nodes; when the job finishes, the nodes are cleaned up.
41 Job Priorities:
- Job priority is decided by a fair-share policy: hours are distributed according to a share distribution
- Dynamic priority per job, based on:
  - Hours already consumed by the group the job belongs to
  - Type of hours (share distribution)
  - Waiting time in the queue
- Share distribution for MareNostrum 3:
  - 70% PRACE projects
  - 24% RES projects, with 3 internal priority levels (A_hours >> B_hours >> C_hours)
  - 6% BSC internal use
- Bigger jobs get higher priority than small ones
42 Process Spawning and Monitoring:
- LSF decides where to run a job depending on:
  - Free resources at each moment
  - Minimal use of Infiniband switches (FUTURE)
  - Minimal power usage (energy-aware scheduling)
- Health control: a regular process checks the health of all nodes and reports any error, preventing new jobs from entering a failed node
- Node cleanup: after any job finishes, an epilog process is executed to clean the node of any remaining processes (a sketch follows below)
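As an illustration only (not BSC's actual script), a minimal sketch of what such a cleanup epilog could look like; the argument interface is an assumption:

    #!/bin/bash
    # Hypothetical post-job epilog, run as root on each node of a finished job.
    JOB_USER="$1"    # job owner, assumed to be passed in by the scheduler
    # Kill any processes the job left behind
    pkill -9 -u "$JOB_USER" || true
    # Remove leftover temporary data from the local disk
    rm -rf /tmp/"${JOB_USER:?}"-* 2>/dev/null
    exit 0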
43 MareNostrum Monitoring:
- Nagios: monitors basic administrative elements
- Ganglia: monitors performance values of the compute nodes (CPU load, memory used, local disk free space, ...) with graphical visualization of all those values
- xCAT: collects all SNMP traps from hardware components for hardware/firmware failure reporting; scripts filter and process those traps
- BSC monitoring tools: GGcollector, og3, perfd
44 Index: MareNostrum 3 Overview · Compute Racks · Infiniband Racks · Management Racks · GPFS Network Racks · HPC GPFS: Storage Hardware, GPFS, Data Services · Long-Term Storage (Archive): Active Archive Hardware, Active Archive Services · Batch Scheduler System · Software Stack
45 Software Stack:
- Operating system: SLES 11 SP2
- Cluster software: xCAT
- Compilers: Intel Cluster Studio, GNU compilers
- MPI: OpenMPI, IBM Parallel Environment, Intel MPI, MVAPICH2
- Infiniband: Mellanox OFED 1.5.3
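For instance, a sketch of building and launching an MPI program with the OpenMPI toolchain from this stack; the source file and rank count are hypothetical:

    # Compile with the OpenMPI wrapper around the system compiler
    $ mpicc -O2 -o hello_mpi hello_mpi.c
    # Launch 16 ranks, one per core of a single MN3 node
    $ mpirun -np 16 ./hello_mpi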
46 Thank you! For further information, please contact ...