1 Experiences setting up a Rocks based Linux Cluster
Tomas Lindén, Helsinki Institute of Physics, CMS Programme
Annual Workshop on Linux Clusters for Super Computing, October 22-24, 2003
National Supercomputer Centre (NSC), Linköping University, Linköping, Sweden
2 Contents
1. System requirements
2. Hardware setup
3. Disk performance results
4. Software setup
5. Experiences using NPACI Rocks
6. Conclusions
7. Acknowledgements
3 1. System requirements
Hardware optimized for Monte Carlo simulations in Computational Material Science (MD simulations) and High Energy Physics (CMS Monte Carlo production runs for CMS Data Challenges). The applications are mostly of the embarrassingly parallel type. Network bandwidth between the nodes and message passing latency are therefore not an issue, so only a 100 Mb/s network is required. A Gb/s network would be nice in the long run, however. One phase of the simulations is I/O bound and this requirement is rather tough to meet, because the data needs to be read randomly. The budget for the system was 50 keuros (without VAT).
4 Main design goals
- Maximize CPU power
- Maximize random disk read capacity
- Minimize costs
The ideal computer consists of nothing but CPU, memory and I/O interfaces.
5 A real computer needs additional components like:
- casing or mechanical support
- motherboard
- power supply
- bootable mass storage devices
- network interface card(s)
- graphics adapter
- monitor
- keyboard
- expansion slots
- I/O ports (USB, FireWire, parallel, serial, ...)
The ideal computer can be approximated by minimizing the number of components in the worker nodes, which will also minimize costs. Some money can be saved by recycling hardware for non-critical items.
6 Casing comparison
minitower
+ standard commodity solution
+ inexpensive
- big space requirement
1U or 2U 19" rack case
+ more compact than minitower
- more expensive than minitower
home made ATX blade server
+ more compact than minitower
+ least expensive if assembly costs are neglected
- labour intensive
- cooling problems can be an issue
blade server
+ most compact
- most expensive
7 A 3 node 1.2 GHz, 512 MB RAM Celeron Tualatin ATX blade built by A. Sandvik.
8 The 24 nodes of the Celeron ATX blade server [1]. Here the power supply tower can be seen.
9 An AMD Athlon XP ATX blade at the Institute of Biotechnology at the University of Helsinki.
10 A 32 node AMD Athlon XP Mosix ATX blade server.
11 Heat dissipation is obviously a major problem with any sizeable cluster. Other factors affecting the choice of casing are space and cost issues. The idea of building an ATX blade server was very attractive to us in terms of needed space and costs, but we were somewhat discouraged by the heat problems with the previously shown cluster (the heat problem was subsequently solved with more effective CPU coolers). It was also felt that one would have more problems with warranties with a completely home built system. Also the mechanical design of an ATX blade server would take some extra effort compared to a more standard solution.
12 1U cases are very compact, but the only possibility to add expansion cards is through a PCI riser card, which can be problematic. The cooling of a 1U case needs to be designed very carefully. The advantage of a 2U case is that one can use half height or low-profile PCI cards without any PCI riser card (Intel Gb/s NICs are available in half height PCI size). Heat dissipation problems are probably less likely to occur in a 2U case because of the additional space available for airflow. The advantage of 4U cases is that standard PCI expansion cards can be used. Because of space limitations a minitower solution was not possible, so we chose a rack based solution with 2U cases for the nodes and a 4U case for the frontend of the Linux cluster mill.
13 In many cases a motherboard with as many integrated components as possible is desirable: Some motherboards come with integrated graphics adapters. No external graphics adapter is needed if the BIOS supports a remote serial console [2]. One (or two) integrated network interface(s) is also desirable. An integrated SCSI or ATA RAID controller might be useful. A worker node needs neither a CDROM drive nor a floppy disk drive if the motherboard BIOS supports booting from a corresponding USB device and the network card and the cluster management software support PXE booting. Worker nodes can then be booted for maintenance or BIOS upgrades from USB devices that are plugged into the node only when needed. A USB flash memory stick might replace a floppy disk drive.
14 A cost effective solution to the random disk reading I/O problem is to equip each node with dual ATA disk drives in a software RAID 1 configuration. In this way we can get some 8 MB/s per node when reading randomly, which is enough (a sketch of setting up such a mirror follows this list). The motherboard requirements were:
- Dual CPU support (minimizes the number of boxes, saving space and maintenance effort)
- At least dual ATA 100 controllers
- Integrated Fast Ethernet or Gb/s Ethernet
- Serial console support or integrated display adapter
- Support for bootable USB devices (CDROM, FDD)
- Support for PXE booting
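As an illustration, a minimal sketch of how such a two-disk mirror could be created and checked with mdadm driven from Python (the partition names /dev/hda1 and /dev/hdc1 are hypothetical; putting each disk on its own ATA channel avoids sharing a cable between the mirror halves):

    import subprocess

    # Hypothetical partitions, one per ATA channel.
    DISKS = ["/dev/hda1", "/dev/hdc1"]

    def create_mirror(md="/dev/md0"):
        # Assemble a two-disk software RAID 1 array with mdadm.
        subprocess.check_call(
            ["mdadm", "--create", md, "--level=1", "--raid-devices=2"]
            + DISKS)

    def mirror_status():
        # /proc/mdstat summarizes the state of all md arrays.
        with open("/proc/mdstat") as f:
            return f.read()

    if __name__ == "__main__":
        create_mirror()
        print(mirror_status())

A mirror helps random reads because each request can be served by either disk, while writes go to both disks.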
15 2. Hardware setup
Node hardware:
- CPU: 2 * AMD Athlon MP, 2133 MHz
- MB: Tyan Tiger MPX S2466N-4M
- Memory: 1 GB ECC registered DDR
- IDE disks: 2 * 80 GB 7200 rpm Hitachi 180 GXP DeskStar
- NIC: 3Com 3C920C 100 Mb/s
- Case: Supermicro SC822I-300LP 2U
- Power: Ablecom SP302-2C 300W
- FDD: integrated in the case
- Price: 1.4 keuros / node with 0 % VAT
The 32+1 node dual AMD Athlon MP 2U Rocks rack cluster mill. One USB-IDE case with a 52x IDE CDROM drive is common for all nodes. The 100 Mb/s network switches existed already previously.
16 Frontend hardware:
- CPU: 2 * AMD Athlon MP, 1666 MHz
- MB: Tyan Tiger MPX S2466N-4M
- Memory: 1 GB ECC registered DDR
- IDE disks: 2 * 60 GB 7200 rpm IBM 180 GXP DeskStar
- NIC: 3Com 3C996-TX Gb/s
- NIC: 3Com 3C920C 100 Mb/s
- Graphics: AGP ATI Rage Pro Turbo 8MB
- Case: Compucase S466A 4U
- Power supply: HEC300LR-PT
- CDROM: LG 48X/16X/48X CD-RW
- FDD: yes
Node cables: power supply, Ethernet, serial console CAT-5.
Console monitoring: Digi EtherLite 32 with 32 RS232 ports, 2 port D-Link DKVM-2 switch.
Recycled hardware: rack shelves, serial console control computer, 17" display, keyboard, mouse.
17 Cooling
The frontend draws 166 W (idle) to 200 W (full load). Rounded IDE cables are used on the nodes. The cases have large air exhaust holes near the CPUs. Node CPU temperatures are about 40 °C when idle. [Photo: the interior of one of the nodes.]
18 About the chosen hardware
PXE booting works just great. USB CDROM and USB FDD booting works nicely with the latest BIOS. USB 1.1 is a bit slow for large files. Booting from a USB memory stick is not so convenient, because the nodes sometimes hang when the flash memory is inserted. The BIOS boot order list also has to be edited every time the memory stick is inserted, unlike the case for CDROM drives and FDDs. This makes the flash memory impractical for booting several nodes. The remote serial console is a bit tricky to set up, but it works. Unfortunately the BIOS does not allow interrupting the slow memory test from a serial console. The Digi EtherLite 32 has worked nicely after the correct DB9-RJ-45 adapter wiring was used.
19 Node console
BIOS settings can be replicated with /dev/nvram (a read-only sketch follows), but to see BIOS POST messages or console error messages some kind of console is needed [2]. The choices are a keyboard and display attached only when needed (headless), a Keyboard Video Mouse (KVM) switch, or a serial switch:

                       KVM        Serial
cabling                 -           +
LAN access              -           +
all consoles at once    -           +
graphics card         required    not required
speed                   +           -
special keys            +           -
mouse                   +           -
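A minimal sketch of the /dev/nvram idea (read-only: it prints a checksum of the CMOS image so nodes can be compared against a correctly configured reference node; the CMOS byte layout itself is BIOS specific):

    import hashlib

    def nvram_digest(path="/dev/nvram"):
        # /dev/nvram exposes the BIOS CMOS RAM contents (minus the
        # clock); identically configured nodes give identical images.
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    if __name__ == "__main__":
        print(nvram_digest())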
20 3. Disk performance
The two disks on each node can be used in a software RAID 1 configuration, in a software RAID 0 configuration, or as individual disks (with or without LVM). In the following, performance for single disks and software RAID 1 is presented. The seek-test by Tony Wildish is a benchmark tool to study sequential and random disk reading speed as a function of the number of processes [3]. Each filesystem was filled, so that the speed variation over different disk regions was averaged out. The files were read sequentially (1-1) and randomly (5-10) (the test can skip a random number of blocks within an interval given by a minimum and maximum number of blocks). All of the RAM available was used in these tests and each point reads data for 600 s. A single-process sketch of the idea follows.
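A minimal single-process sketch of such a seek test (not T. Wildish's actual tool [3]; the 64 kB block size and the file path are illustrative, and unlike the real runs nothing is done here to defeat the page cache):

    import os, random, time

    BLOCK = 64 * 1024           # read granularity (illustrative)
    MIN_SKIP, MAX_SKIP = 5, 10  # blocks skipped between reads ("5-10" test)

    def read_test(path, duration=600.0, randomize=True):
        """Return the observed read throughput in kB/s."""
        fd = os.open(path, os.O_RDONLY)
        done = 0
        start = time.time()
        while time.time() - start < duration:
            if randomize:
                # Skip a random number of blocks, forcing a seek.
                os.lseek(fd, random.randint(MIN_SKIP, MAX_SKIP) * BLOCK,
                         os.SEEK_CUR)
            buf = os.read(fd, BLOCK)
            if not buf:                       # end of file:
                os.lseek(fd, 0, os.SEEK_SET)  # wrap around, keep reading
                continue
            done += len(buf)
        os.close(fd)
        return done / (time.time() - start) / 1024.0

    if __name__ == "__main__":
        # randomize=False gives the sequential (1-1) variant.
        print("%.0f kB/s" % read_test("/tmp/testfile", 60.0))

The real benchmark runs many such readers in parallel and plots throughput against the number of processes, as in the following figures.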
21 [Figure: single disk sequential read throughput (kB/s) vs. number of processes; Tiger MPX MB IDE controller with a 180 GXP disk, Tyan Tiger MP MB IDE controller with a 120 GXP disk, and a 3ware 7850 IDE controller with a 120 GXP disk on a Tiger MP.]
22 [Figure: software RAID 1 sequential read throughput (kB/s) vs. number of processes; Tiger MPX MB IDE controller and a 3ware 7850 IDE controller on a Tiger MP.]
23 [Figure: software RAID 1 random read throughput (kB/s) vs. number of processes; Tiger MPX MB IDE controller and a 3ware 7850 IDE controller on a Tiger MP.]
24 4. Software setup
BIOS settings: All nodes have serial console and PXE booting enabled. To turn on the system in a controlled way after a power loss the nodes have the BIOS setting "Stay off" enabled.
Boot discs: For rescue booting and testing the headless nodes, a set of boot discs supporting a serial console was created (a sketch of the syslinux.cfg edit follows this list):
- Memtest 3.0 (compiled with serial support enabled)
- Rocks bootnet.img (syslinux.cfg edited)
- RH9 CD 1 (syslinux.cfg edited and CDROM burned)
Useful kernel parameters: RedHat kernels accept the option apm=power-off, which is useful on SMP machines.
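The syslinux.cfg edit for serial console support might look like the following sketch (the kernel and initrd file names and the 9600 baud rate are assumptions, not the exact contents of the Rocks or RH9 media):

    serial 0 9600
    default linux
    label linux
      kernel vmlinuz
      append initrd=initrd.img console=ttyS0,9600

The serial directive makes the boot loader itself talk to the first serial port, and console=ttyS0,9600 tells the kernel to do the same.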
25 Cluster software requirements
- Cluster software should work with CERN RedHat Linux
- No cluster software with a modified kernel (OpenMosix, Scyld)
Because of a favorable review and positive experiences within CMS, NPACI Rocks was chosen as the cluster software instead of OSCAR [4].
Compilers to install: C/C++ and F77/F90 compilers by Intel and Portland Group.
Grid tools to install: NorduGrid software, Large Hadron Collider Grid software.
Application software to install: Compact Muon Solenoid software, Compact Muon Solenoid production software, and the molecular dynamics simulation package PARCAS.
26 5. Experiences using NPACI Rocks
The NPACI Rocks Cluster Distribution is an RPM based cluster management software for scientific computation, based on RedHat Linux [5]. The most recent version is 3.0.0 and the previous one is 2.3.2; both are based on RedHat 7.3.
Features of Rocks:
- Supported architectures: IA32, IA64, Opteron coming (no Alpha, SPARC)
- Network: Ethernet, Myrinet
- Relies heavily on Kickstart and Anaconda (works only with RedHat)
- XML configuration scripts (an example of their shape follows)
- RPM software is supported, other packages can be handled with XML postconfiguration scripts
- Can handle a cluster with heterogeneous hardware
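For illustration, a Rocks node XML file (here a hypothetical extension for the compute nodes; the package name is made up) roughly takes this shape, adding packages and post-install commands to the kickstart that Rocks generates:

    <?xml version="1.0" standalone="no"?>
    <kickstart>
      <description>Extra software for the compute nodes</description>
      <package>example-extra-package</package>
      <post>
        echo "node customized" &gt; /etc/motd
      </post>
    </kickstart>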
27 eKV, a telnet based tool for remote installation monitoring
- Supports headless nodes
- Supports PXE booting and installation
- Installation is very easy on well behaved hardware
Services and libraries out of the box:
- Ganglia (nice graphical monitoring; see the sketch after this list)
- PBS (batch queue system)
- Maui (scheduler)
- MPICH (parallel libraries)
- DHCP (node ip-addresses)
- NIS (user management); 411 SIS is beta in v3.0.0
- NFS (global disk space)
- MySQL (cluster internal configuration bookkeeping)
- HTTP (cluster installation)
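Besides the web frontend, Ganglia's gmond daemon publishes the cluster state as XML to anyone connecting to its TCP port (8649 by default), so the monitoring data is easy to consume from scripts as well; a minimal sketch (the frontend host name mill and the load_one metric appear on slide 29, the rest is illustrative):

    import socket
    import xml.etree.ElementTree as ET

    def print_cluster_load(host="mill", port=8649):
        # gmond dumps one XML snapshot of all hosts and metrics,
        # then closes the connection.
        s = socket.create_connection((host, port))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        root = ET.fromstring(b"".join(chunks))
        for h in root.iter("HOST"):
            for m in h.iter("METRIC"):
                if m.get("NAME") == "load_one":
                    print(h.get("NAME"), m.get("VAL"))

    if __name__ == "__main__":
        print_cluster_load()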
28 The Rocks manual covers the minimum to get one started. Rocks has a very active mailing list with a web archive for users. All nodes are considered to have soft state, and any upgrade, installation or configuration change is done by node reinstallation, which takes about 10 min per node. The default configuration is to also reinstall a node after each power down. Settings like this can be changed according to taste. Rocks makes it possible for nonexperts to set up a Linux cluster for scientific computation in a short amount of time. Recent RedHat Linux licensing changes will affect both NPACI Rocks and CERN RedHat Linux.
29 [Figure: overview and physical view of the Rocks cluster mill produced with Ganglia, Thu 16 Oct 2003: 31 nodes (62 CPUs) up and running, no nodes down; total memory 30.5 GB; frontend mill 2 x 1666 MHz, compute nodes 2 x 2133 MHz with 1006 MB each.]
30 6. Conclusions
The hardware chosen for the cluster works as expected. The BIOS still has room for improvements: BIOS writers should add remote console capabilities to something faster than the serial line (LAN or USB). This would be simpler than using a boot card. Software RAID 1 gives good sequential and random (2 processes) disk reading performance. The 3ware IDE controller driver or the Linux SCSI driver has some room for improvement compared to the IDE driver. The NPACI Rocks cluster distribution has proven to be a powerful tool enabling nonexperts to set up a Linux cluster. The Rocks documentation level is not quite up to the software quality, but this is mostly compensated by the active Rocks user mailing list.
31 7. Acknowledgements
The Institute of Physical Sciences has financed the cluster nodes. The Kumpula Campus Computational Unit hosts the mill cluster in their machine room. N. Jiganova has helped with the software and hardware of the cluster. P. Lähteenmäki has been very helpful in clarifying network issues and setting up the network for the cluster. Damicon Kraa, the vendor of the nodes, has given very good service.
32 References
[1] ATX blade server cluster built by A. Sandvik at Åbo Akademi.
[2] Remote Serial Console HOWTO, HOWTO/Remote-Serial-Console-HOWTO/index.html.
[3] Seek-test, by T. Wildish, Benchmark-results/Performance.html.
[4] R. Leiva, Analysis and Evaluation of Open Source Solutions for the Installation and Management of Clusters of PCs under Linux.
[5] Rocks homepage.
[6] NPACI All Hands Meeting, Rocks v2.3.2 Tutorial Session, March 2003, npaci-ahm-2003.pdf.