A scalable file distribution and operating system installation toolkit for clusters
Philippe Augerat (1), Wilfrid Billot (2), Simon Derr (2), Cyrille Martin (3)
(1) INPG, ID Laboratory, Grenoble, France
(2) INRIA, ID Laboratory, Grenoble, France
(3) Bull/INRIA, France

Abstract: This paper describes a toolkit for optimizing file distribution on a large infrastructure such as a cluster or a grid of clusters. It provides both the software for operating system installation and file broadcasting based on multicast, chain and several tree-structured algorithms. It has been used on a 225-node cluster, allowing a complete installation of the cluster in just a few minutes and efficient file staging for users' applications. It has also been used to quickly install Linux, Windows 2000 and dual-boot systems.

1. Introduction

Large infrastructures such as big clusters, grids of clusters or computer rooms at universities have to be managed in a very automatic, dynamic and efficient way. System administrators need scalable tools to install, administer and monitor the infrastructure, while users need scalable tools to program, manage and monitor applications and data. On one hand, scalability means obtaining more power with more computers; on the other hand, it means obtaining the same response time regardless of the number of computers. The Ka project deals mainly with the latter problem and aims at designing management tools that scale to thousands of PCs. The main purpose of the Ka project is file distribution in the context of operating system installation (very large file distribution) and day-to-day data broadcasting (small to medium-size file distribution).

The broadcast problem has been studied extensively in the framework of message-passing libraries and collective communication operations [15], [16], [17], [18]. Several algorithms based mainly on multicast and unicast trees have been suggested in the literature. Yet the lack of a generic model for self-made supercomputers such as grids or networks of workstations makes it harder to decide which broadcast algorithm is the best. Our idea, then, was to write, compare and make all these algorithms available through a general library for file distribution.

Numerous tools exist for deploying a set of PCs in clusters, intranet networks or university computer rooms. Yet we think that the Ka management tool is more generic and scalable than most existing software. Moreover, it is completely open source and quite easy to maintain. The library has been used to install a 225-node cluster. It is also a basis for the design of scalable parallel commands for cluster management and file staging.

2. Operating system installation

The installation of hundreds of PCs excludes the idea of performing manual tasks. The amount of data to be transferred (several gigabytes) also imposes an optimal use of the network. We present now a three-step automatic installation process that performs the complete installation of hundreds of client PCs within fifteen minutes. The first step automatically launches a small operating system on each client. Then a previously installed PC is used to broadcast a file system image to all the client PCs. Finally, a post-installation process provides each computer with its own name and characteristics.

a) Initial boot process

A blank PC can boot on its network card (PXE protocol) to obtain information on its name and the operating system that it will use next.
The PXE protocol is used to contact DHCP and TFTP servers, which provide the client computer with a boot program, which in turn loads a higher-level operating system. This installation scheme is classically used to install workstations in a university computer room, for instance with the BpBatch software [6]. Since BpBatch is not open source, we re-implemented a limited set of BpBatch features (read files through TFTP, load and run a Linux kernel, start a computer on its hard disk) in our own boot program, thus providing an open source cloning system. This program is named Ka-boot. The boot program plus the minimal operating system weigh less than two megabytes. For two hundred PCs, the initial boot process lasts only a few seconds. A multicast file transfer protocol could be used to strengthen this part of the installation for thousands of PCs. A minimal sketch of the TFTP exchange that Ka-boot relies on is given below.
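To make the file retrieval step concrete, the following Python sketch implements a minimal TFTP read request (RFC 1350), the protocol through which Ka-boot fetches its script, state file and Linux kernel. This is an illustration only: Ka-boot itself is a small boot-time program for the PXE environment, not a Python script, and the server address and file name in the usage example are hypothetical.

import socket
import struct

def tftp_get(server: str, filename: str, port: int = 69) -> bytes:
    """Fetch a file from a TFTP server in octet mode (no retransmission)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)
    # RRQ packet: opcode 1, filename, NUL, transfer mode, NUL.
    rrq = struct.pack("!H", 1) + filename.encode() + b"\0" + b"octet\0"
    sock.sendto(rrq, (server, port))
    data, expected_block = b"", 1
    while True:
        packet, addr = sock.recvfrom(4 + 512)    # server replies from a new port
        opcode, block = struct.unpack("!HH", packet[:4])
        if opcode == 5:                           # ERROR packet
            raise IOError(packet[4:].decode(errors="replace"))
        if opcode == 3 and block == expected_block:   # DATA packet
            data += packet[4:]
            sock.sendto(struct.pack("!HH", 4, block), addr)   # ACK it
            expected_block += 1
            if len(packet) - 4 < 512:             # short block ends the transfer
                return data

# Hypothetical usage: state = tftp_get("10.0.0.1", "statefile")

Error recovery (timeouts, duplicate blocks) is deliberately omitted; the point is only the request/data/acknowledgement round trip that the boot stage repeats for each file.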
Figure 1 summarizes the client-server exchanges during the first installation stage, including the PXE and Ka-boot stages.

Figure 1: client-server exchanges during the first installation stage. The client's DHCP request is answered with an IP address, the Ka-boot file name and a script name (DHCP option 135); Ka-boot then fetches the script, the state file and a Linux kernel through TFTP; the minimal Linux system mounts its root filesystem through NFS, connects to the Ka server and receives the address of the previous node in the chain; once the operating system data have been transferred, the client updates its state file through TFTP and reboots.

b) File system cloning

Consider first the case of a sequential client-server installation. If many PCs need to be installed at the same time, the server sends the same complete OS image many times and this operation becomes a bottleneck. Several strategies can be used to remove this bottleneck. Common techniques to broadcast large files on a local network are IP multicast and optimized unicast tree broadcasting. We will compare these techniques at the end of this paper. In this section, we consider the context of a switched network, where we found that the most suitable strategy is a pipelined TCP chain. The resulting application is named Ka and is available under the GPL license.

The cloning process with Ka occurs in the following order. A small diskless Linux operating system is run on each client; this system uses a Linux kernel loaded through TFTP by the boot program and a root filesystem mounted through NFS. It then creates and formats the hard disk partitions that will be used to install the operating system, and launches the Ka client. The Ka client connects to a Ka server located on the computer to be cloned. When all the machines to be installed are connected to the Ka server, the chain algorithm starts: the Ka server coordinates the creation of a chain of TCP connections between the clients. Once this chain is created, it is used to broadcast the data (Figure 2). Data is read from the server's hard disk, sent to the first client, written to its hard disk and at the same time forwarded to the second client, and so on; a minimal sketch of this pipelining is given below.
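The following Python sketch shows the behaviour of one client in the pipelined chain: it receives the image from its predecessor, writes it to disk, and forwards each block downstream at the same time. It assumes the Ka server has already told each node its successor; the real Ka client is a C program, and the port number and block size here are illustrative assumptions.

import socket
from typing import Optional

BLOCK = 64 * 1024   # forwarding granularity (an assumption; Ka may differ)

def run_chain_node(listen_port: int, next_host: Optional[str], out_path: str) -> None:
    """Receive the image from upstream, store it, and pipeline it downstream."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", listen_port))
    srv.listen(1)
    upstream, _ = srv.accept()                    # predecessor connects to us
    downstream = None
    if next_host is not None:                     # the last node has no successor
        downstream = socket.create_connection((next_host, listen_port))
    with open(out_path, "wb") as disk:
        while True:
            chunk = upstream.recv(BLOCK)
            if not chunk:                         # upstream closed: end of image
                break
            disk.write(chunk)                     # write locally...
            if downstream:
                downstream.sendall(chunk)         # ...and forward immediately
    if downstream:
        downstream.close()
    upstream.close()
    srv.close()

The head of the chain is the already-installed server reading the image from its own disk (not shown). Because writing and forwarding overlap, the total broadcast time stays close to that of a single point-to-point copy plus one per-hop latency per client.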
Once all the data have been written to disk, each computer can reboot and use the system that it has just installed. In order to avoid restarting the whole installation procedure, a status file located on the TFTP server and updated by the installation program tells each computer to perform its post-installation phase. The broadcast data can be a tar image of the root filesystem for a Linux system, or a raw partition image for an MS-Windows system.

Figure 2: installation chain (Server -> Client 1 -> Client 2 -> Client 3)

c) Post-installation

The post-installation stage is the set of operations executed on a node during the first boot of the newly installed system. These include writing a fresh boot sector on the hard drive by running the lilo command and writing the hostname or the IP address of the node into the configuration files that need it. On Windows machines this post-installation stage also includes joining an NT domain, or generating a new Security ID (SID). A minimal sketch of the Linux personalization step is given below.
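Since every node boots the same cloned image, the personalization step can be sketched as a simple template substitution, as in the following Python fragment. The file list and placeholder syntax are illustrative assumptions, not Ka's actual post-installation scripts.

def personalize(hostname: str, ip: str, templates: dict) -> None:
    """Rewrite configuration templates with this node's hostname and IP address."""
    for template_path, target_path in templates.items():
        with open(template_path) as f:
            text = f.read()
        text = text.replace("@HOSTNAME@", hostname).replace("@IPADDR@", ip)
        with open(target_path, "w") as f:
            f.write(text)

# Hypothetical usage on first boot, with the identity obtained from DHCP:
# personalize("node042", "192.168.1.42",
#             {"/etc/hostname.in": "/etc/hostname",
#              "/etc/hosts.in": "/etc/hosts"})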
d) Tests

The test bed for our work is a 225-node cluster (Figure 3) built from mainstream components. Each node is a PC equipped with a single Pentium III processor, 256 MB of memory and 15 GB of disk. The 225 computers are connected to a Gigabit Ethernet backbone. The network bandwidth ranges from 100 Mbit/s (the computer network interface) up to 1 Gbit/s (the switch interface). The switch interconnect (and thus the hierarchy of the communications) is easily customizable (ring, double ring, tree, star...). The aggregate bandwidth for one switch is 3.8 Gbit/s.

The installation of a complete Linux system (3 gigabytes of data) on 225 nodes takes less than fifteen minutes. The installation of a complete Windows 2000 system (2 gigabytes of data on a 3-gigabyte partition) on 100 nodes took less than ten minutes (not including the post-installation process, which takes a few seconds for Linux but may last a few minutes on Windows systems).

Figure 3: structure of I-Cluster (INRIA/Lab. ID-IMAG)

3. File distribution

a) Efficient process launching

The one-to-all copy of a file is a common need for users and administrators. It can be used to upgrade a cluster by upgrading only one node manually and then broadcasting the changes. File staging is another example of the need for efficient data copying on a cluster. File copying can be carried out in the form of a multicast protocol or a unicast tree-structured protocol.

The broadcasting algorithms are implemented within a general library for process management named rshp [1], whose main component is a program launcher. The objective of the launcher is first to quickly start all the processes involved in an application, then to properly convey signals between the processes, and finally to manage the input/output data flows. The solution developed within the laboratory uses the standard rshd daemons and a customized rsh client. This customization is twofold. Considering the parallel rsh command as a multi-step process, one optimization is to minimize the number of steps: our implementation carries out recursive calls to the rsh client so as to cover the computers involved with a tree. Another optimization is to parallelize each of the three traditional stages of rsh (remote access, authentication, connection set-up): in our implementation, asynchronous remote accesses allow the authentication part (the longest one) to occur in parallel on each client. We have implemented four tree structures: binary, binomial, flat (one level) and lambda-tree. Each one may use a synchronous or an asynchronous rsh client. For our 225-node cluster, the best launcher is a flat tree with asynchronous remote calls: in that experiment, the launching phase lasts less than half a second for 200 nodes. A minimal sketch of such a launcher is given below.
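The following Python sketch illustrates a flat tree with asynchronous remote calls: all remote sessions are opened at once, so the authentication phase (the longest stage) runs in parallel on every client instead of serializing. It uses the stock rsh binary through subprocess as a simplification; the real rshp launcher is a customized rsh client, and the node names in the example are hypothetical.

import subprocess

def flat_tree_launch(nodes: list, command: str) -> dict:
    """Start `command` on every node at once and collect the exit codes."""
    # Open all remote sessions without waiting (asynchronous remote accesses).
    procs = {node: subprocess.Popen(["rsh", node, command],
                                    stdout=subprocess.DEVNULL,
                                    stderr=subprocess.DEVNULL)
             for node in nodes}
    # Only then wait: the authentications have overlapped rather than serialized.
    return {node: proc.wait() for node, proc in procs.items()}

# Hypothetical usage:
# codes = flat_tree_launch([f"node{i}" for i in range(200)], "uptime")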
b) File broadcasting

In addition to the algorithms described in the previous section, a multicast algorithm and a chain algorithm have been designed for file broadcasting. The chain implementation is the same as for operating system cloning. Our multicast algorithm adds a reliability layer on top of IP multicast. Two techniques based on the IETF drafts [3] were used to achieve reliability: ACK and FEC. In the ACK technique, the server starts sending multicast packets and waits at regular intervals (every M packets) for an acknowledgement of delivery (ACK) from each client. A client sends one ACK for every N packets received, indicating the sequence number of the last received packet. The performance of the multicast algorithm mainly depends on the values of M and N. Multicast clients are launched using the best launching algorithm presented in the previous section; on our 225-node cluster, this is the asynchronous flat tree algorithm. We also tested an existing multicast library named mcl [4], based on FEC (Forward Error Correction) techniques: redundant packets are transferred so that a client can rebuild lost packets from the redundant ones. This technique appeared not to be suited to our context because the cost of coding/decoding a packet is high compared to the network transfer costs.

Figures 4 and 5 show the performance of the binomial, binary, chain and multicast algorithms on our cluster for 16 and 201 nodes respectively.

Figure 4: performance of the broadcast algorithms on 16 nodes

Figure 5: performance of the broadcast algorithms on 201 nodes

From this, we see that the binary algorithm is good for small files while the chain algorithm is the best for large files. What should be called a small file depends on the number of nodes. The binomial algorithm and the ACK-based multicast algorithm are useless in our switched network context; they should be useful for a cluster whose overall bandwidth is limited. Note that the multicast algorithm might be improved by using more dynamic time windows; a sketch of its ACK window mechanism is given below.
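To make the window mechanism concrete, the following Python sketch shows the server side of the ACK technique: at most M packets are sent before the server blocks until every client has acknowledged, and clients acknowledge once every N packets with the last sequence number received. The group address, ports, window sizes and catch-up condition are illustrative assumptions, and retransmission of lost packets is omitted.

import socket
import struct

MCAST_GROUP = "239.1.2.3"            # assumed multicast group
MCAST_PORT, ACK_PORT = 5000, 5001    # assumed ports
M, N, PKT = 64, 32, 1024             # server window, client ACK period, payload

def wait_for_acks(ack_sock: socket.socket, num_clients: int, front_seq: int) -> None:
    """Block until every client has acknowledged a packet close to the front."""
    caught_up = set()
    while len(caught_up) < num_clients:
        data, addr = ack_sock.recvfrom(4)
        (last_seq,) = struct.unpack("!I", data)
        if last_seq >= front_seq - N:    # at most one ACK period behind
            caught_up.add(addr)

def multicast_file(path: str, num_clients: int) -> None:
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ack = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ack.bind(("", ACK_PORT))
    seq = 0
    with open(path, "rb") as f:
        while chunk := f.read(PKT):
            tx.sendto(struct.pack("!I", seq) + chunk, (MCAST_GROUP, MCAST_PORT))
            seq += 1
            if seq % M == 0:             # window exhausted: wait for all clients
                wait_for_acks(ack, num_clients, seq)
    wait_for_acks(ack, num_clients, seq)  # drain the final partial window

The choice of M and N is exactly the tuning knob the measurements above evaluate: larger windows reduce synchronization overhead but increase the amount of data to resend after a loss.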
c) Mixed algorithms

In the context of our switched cluster, it is possible to compare the unicast algorithms analytically. Suppose d is the time needed to establish a connection from one node to another, and L is the time needed to transfer a file of size s from one node to another once the connection has been established. We assume that a node cannot do anything else while opening a connection. Therefore, the total time to transfer a file from one node to another is t = d + L.

In the case of an n-ary tree, at each level the time needed by a node to contact all the nodes below it is d*n. This has to be done for each level, so if the tree has k levels, the total time for establishing the connections is (k-1)*d*n. The data are pipelined from the top of the tree down to all the leaves, and the node that has the most children limits the speed of this transfer; since each internal node has n children, the transfer time is L*n. The total time for an n-ary tree is therefore

    t = (k-1)*d*n + L*n

If n=2, we have a binary tree with N = 2^k - 1 nodes and t = 2*d*(k-1) + 2*L. In the case of a chain, we take n=1 and then k=N, so t = d*(N-1) + L. In the case of a binomial tree, the transfer time at each step is d + L, so for a binomial tree with k steps the total transfer time is t = (d + L)*k.

From this, we can produce a formula to decide which algorithm is best depending on d, L and N, and provide the user with a program that automatically chooses the right algorithm depending on the file size and the number of nodes involved in the broadcast (a sketch is given at the end of this section).

On our 200-node cluster, we have d = 0.1 seconds, so d*N, the time for initializing a connected chain, is about 20 seconds for 200 nodes. Yet we presented in a previous section a parallel asynchronous launcher that takes less than 1 second to start a command on 200 nodes. The reason is that the authentication phase in the rsh daemon is very long compared to the network latency. It is therefore worth mixing the two techniques (asynchronous launching and chain) to perform optimal broadcasting. Suppose now that we establish the chain using the asynchronous rshp launcher described in the previous section. Let I be the duration of this initialization, and let d' << d be the TCP/IP latency of the network. The corresponding transfer time for the mixed algorithm is then

    t = I + L + d'*(N-1)

Figures 6 and 7 show that the upgraded chain algorithm is better than all the other algorithms tested, even for small files. The curve is almost the same as the curve for a simple point-to-point file transfer, and the time overhead is very small, between 10% and 25%. Although the initialization phase could also be improved for the binary algorithm, we expect the upgraded chain algorithm to outperform the binary algorithm except for very small files.

Figures 6 and 7: performance of the mixed (upgraded chain) algorithm compared to the other broadcast algorithms
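The cost model above translates directly into such a selection program. The following Python sketch computes the predicted broadcast time of each algorithm and picks the cheapest; the parameter values in the example call are assumptions for illustration, not measurements from the paper.

import math

def predicted_times(d: float, dp: float, I: float, L: float, N: int) -> dict:
    """Predicted broadcast times from the cost model (dp is d', the TCP/IP latency)."""
    k_bin = math.ceil(math.log2(N + 1))               # binary tree depth for N nodes
    k_bino = math.ceil(math.log2(N)) if N > 1 else 1  # binomial tree steps
    return {
        "binary tree":   2 * d * (k_bin - 1) + 2 * L,
        "chain":         d * (N - 1) + L,
        "binomial tree": (d + L) * k_bino,
        "mixed chain":   I + L + dp * (N - 1),
    }

def best_algorithm(d: float, dp: float, I: float, L: float, N: int) -> str:
    times = predicted_times(d, dp, I, L, N)
    return min(times, key=times.get)

# Example with assumed parameters: d = 0.1 s, d' = 0.1 ms, I = 1 s,
# a 100 MB file at 100 Mbit/s (L = 8 s), 200 nodes:
# print(best_algorithm(0.1, 0.0001, 1.0, 8.0, 200))   # -> "mixed chain"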
4. Related work

There are numerous tools for PC cloning, mainly in the Microsoft Windows environment, Ghost being the most famous one. The Unix ghost-like software packages CloneIt, g4u and Dolly also use a floppy (or a dedicated hard disk partition) to start the cloning process, and thus are not adequate for large clusters. Yet one could combine the quoted software with a PXE-compliant boot system to provide a scalable cloning system in the same way as Ka does. It is worth noting that Ghost and Dolly respectively use a multicast and a TCP chain protocol, both of which are available in Ka. BpBatch, Rembo and Beoboot [6] are non-open-source cloning tools. The latter tools have been used to install large infrastructures. In addition, BpBatch provides a scripting language to perform pre-installation configuration. SIS [11] is an operating system installation project whose main objective is to define and maintain a rich set of image "flavors".

As far as parallel commands are concerned, parallel programming environments such as SCE [13] and Cplant [5] provide a set of commands for managing a cluster, including file copying and process management. TACO [12] is a general API for collective communication operations. SUT [14] and C3 [2], which focus only on parallel commands, do not provide, to our knowledge, the same wide range of algorithms as Ka does for collective operations or for operating system installation.

5. Conclusion

We have presented a toolkit for optimizing file distribution on a large-size infrastructure. It provides both the software for operating system installation and file broadcasting based on multicast, chain and several tree-structured algorithms. The toolkit should find a good distribution process for various network architectures. In the case of a switched network architecture, a chain algorithm is combined with the use of an optimized launcher. Broadcasting a file to hundreds of nodes takes not much more time than a single PC-to-PC file copy.

References

[1] C. Martin, O. Richard. Parallel launcher for clusters of PC, submitted to World Scientific.
[2] C3, Cluster Command & Control (C3) Tool Suite, submitted March 2001 for review to a special issue of PDCP (Parallel Distributed Computing Practices).
[3] Reliable Multicast, IETF drafts, January 2001.
[4] V. Roca, "On the Use of On-Demand Layer Addition (ODL) with Multi-Layer Multicast Transmissions Schemes", 2nd International Workshop on Networked Group Communication (NGC 2000), Stanford University, California, USA, November 2000.
[5] R. Brightwell, L. A. Fisk, Parallel Application Launch on Cplant, SC2001 technical paper, November 2001.
[6] BpBatch.
[7] CloneIt.
[8] Cluclo.
[9] g4u.
[10] Dolly.
[11] SIS.
[12] J. Nolte, M. Sato and Y. Ishikawa, TACO -- Exploiting Cluster Networks for High-Level Collective Operations.
[13] P. Uthayopas, S. Phatanapherom, T. Angskun, S. Sriprayoonsakul, "SCE: A Fully Integrated Software Tool for Beowulf Cluster System", Proceedings of Linux Clusters: the HPC Revolution, National Center for Supercomputing Applications (NCSA), University of Illinois, Urbana, IL, June 25-27, 2001.
[14] W. Gropp and E. Lusk, SUT: Scalable Unix Tools on Massively Parallel Processors.
[15] M. Bernaschi and G. Iannello. Collective Communication Operations: Experimental Results vs. Theory. Concurrency: Practice and Experience, 10(5), April 1998.
[16] T. Kielmann, R. F. H. Hofman, H. E. Bal, A. Plaat, and R. A. F. Bhoedjang. MAGPIE: MPI's Collective Communication Operations for Clustered Wide Area Systems. In Proc. Symposium on Principles and Practice of Parallel Programming (PPoPP), Atlanta, GA, May 1999.
[17] D. E. Culler, R. M. Karp, D. A. Patterson, A. Sahay, K. E. Schauser, E. Santos, R. Subramonian, T. von Eicken, LogP: Towards a Realistic Model of Parallel Computation, Proc. of 4th ACM Symp. on Principles and Practice of Parallel Programming, 1993.
[18] A. Alexandrov, M. F. Ionescu, K. E. Schauser, and C. Scheiman. LogGP: Incorporating Long Messages into the LogP Model --- One Step Closer Towards a Realistic Model for Parallel Computation. In Proc. Symposium on Parallel Algorithms and Architectures (SPAA), Santa Barbara, CA, July 1995.
