
Chapter 1 INTRODUCTION

1.1 Motivation

With the explosive growth of information in our lives, traffic on the Internet is increasing dramatically. Network services have become an important part of daily life: people use the Internet for entertainment and for learning, and Distance Learning and E-learning [8] have been put into practice. Many people rely on the Internet to make their lives more convenient, playing on-line games, listening to radio, watching MTV or TV, taking courses or earning degrees from network schools, and searching for information. Multimedia sites, such as network TV and movie services, network KTV, and on-line games, move large amounts of data to let many users play on-line and watch films at the same time. These Internet applications are popular and convenient, but the more people use them, the more servers must be provided to meet their needs. A great deal of data has to be accessed from the Internet or from local sites, which places a heavy load on the servers. To provide available network services with high quality of service (QoS), the servers have to become more powerful and more efficient.

The single-server solution, which upgrades the server to a higher-performance machine, has several shortcomings. The upgrading process is complex, and the original machine may be wasted. When requests keep increasing, the upgraded server will soon be overloaded again and will have to be upgraded once more. The server also remains a single point of failure.

Moreover, the higher-end the server we upgrade to, the higher the cost we have to pay. Clusters of servers connected by a fast network are emerging as a viable architecture for building a high-performance and highly available server. This loosely coupled architecture is more scalable, more cost-effective, and more reliable than a single-processor system or a tightly coupled multiprocessor system. However, providing transparency, scalability, and high availability for parallel services in a cluster raises several challenges.

Computer technology develops quickly and computing speed keeps rising, so a large or super computer can process many tasks. Although such a machine is powerful, it costs a great deal of money and is hard for a company, school, or government to maintain; large and super computers are typically used only in scientific experiments or military settings, so they are neither widely deployed nor easy to maintain. Personal computers, on the other hand, are becoming more and more powerful while costing less and less, and network device technology has advanced dramatically in recent years. Building cluster systems out of personal computers has therefore become popular, and such clusters can be as powerful as large or super computers. Many clustering systems have been developed in recent years, including personal-computer-based (PC-based) cluster systems for computing and scientific applications.

Today, many reliable network systems provide services on the Internet, such as the Yahoo, Amazon, and America On-Line (AOL) sites. Most of these big sites use large computers, which cost a great deal of money to set up and manage. More and more sites are built on large computers and designed for high-performance computing, because fast responses to clients are required.

The biggest disadvantage is that building a large, high-performance system for the Internet this way is too expensive. A system that costs too much and is hard to manage is difficult to adopt widely. To lower the cost, a cluster system can be applied to many applications: it is less expensive, more reliable, and provides load balancing and high extensibility for future needs. In this thesis we survey existing cluster systems for network services and then design a low-cost, high-performance solution that is easy to set up and manage.

1.2 Objective

The Internet cluster server has to achieve four objectives: it should be affordable, it should perform well, it should be reliable, and it should be easy to install and manage. For low cost, we build the cluster from personal computers, which keep getting more powerful and are already used in several PC-based cluster designs. For high performance, a powerful storage system is required; it must offer high scalability and high performance, providing easy expansion and high-speed transmission between servers and clients. For high reliability, load balancing across the servers is necessary to give clients fast responses, and the storage system must not only perform well but also remain available to respond to the cluster servers.

For ease of use and systematic management, the whole system must be easy to install and to troubleshoot when errors happen, and it must come with useful, easy-to-use utilities or software for management. Finally, we have to achieve systematic management that schedules all requests from the Internet or from local clients and manages data on the file system efficiently.

1.3 Thesis Organization

Chapter 2 presents the background of this research. Section 2.1 reviews cluster technology for network services and the network file systems used to build such a cluster. Section 2.2 describes the Parallel Virtual File System (PVFS) as a whole and how its parallel access works. Section 2.3 introduces the Linux Virtual Server (LVS), how it works, and its operation under different network structures and scheduling algorithms. Chapter 3 presents the concept of our cluster system for network services in section 3.1. Section 3.2 describes the main design idea, which combines PVFS and LVS, and how the two coordinate in the cluster system. Section 3.3 describes the scheduling algorithm applied in our cluster system. Section 3.4 describes file partitioning and striping in the cluster system and parallel access to the network file system. Chapter 4 reports the experiments on PVFS with different file sizes and numbers of processes. The test and network environment is described in section 4.1.

Section 4.2 presents and discusses the experimental results and the performance trends they show. Chapter 5 describes the experiments on the cluster system: the test environment and network configuration are given in section 5.1, section 5.2 discusses the challenges of designing the network structure for the cluster system, and section 5.3 presents the experimental results together with the observed performance trends. Chapter 6 concludes with the performance of the cluster system, its applicability to other settings, and future work.

Chapter 2 Related Work

2.1 Background

Network File System

In designing a powerful cluster system, we take parallel computing as the main idea for improving cluster performance. Cluster computing has recently emerged as a mainstream method for parallel computing in many application domains, with Linux leading the pack as the most popular operating system for clusters. As researchers continue to push the limits of what clusters can do, new hardware and software have been developed to meet their computing needs. In particular, hardware and software for message passing have matured a great deal since the early days of Linux cluster computing, and in many cases cluster networks now rival the networks of commercial parallel machines. These advances have broadened the range of problems that can be solved effectively on clusters. As a result, cluster and parallel computing is one of the best foundations for a high-performance network file system.

Network file systems can be divided roughly into three groups: commercial parallel file systems, distributed file systems, and research parallel file systems. The first group comprises commercial parallel file systems such as PFS for the Intel Paragon [5], PIOFS and GPFS for the IBM SP [24], HFS for the HP Exemplar [27], and XFS for the SGI Origin2000 [36]. These file systems provide the high performance and functionality desired for I/O-intensive applications but are available only on the specific platforms on which the vendor has implemented them.

The second group comprises distributed file systems such as NFS [2], AFS/Coda [25][31], InterMezzo [26], and GFS [16]. These file systems are designed to provide distributed access to files from multiple client machines, and their consistency semantics and caching behavior are designed accordingly. The workloads generated by large parallel scientific applications usually do not mesh well with file systems designed for distributed access, and distributed file systems are not designed for the high-bandwidth concurrent writes that parallel applications typically require. The third group consists of research projects in parallel I/O and parallel file systems, such as PIOUS [29], PPFS [14], Galley [19], and PVFS [13]. PIOUS views I/O from the viewpoint of transactions, PPFS focuses on adaptive caching and prefetching, and Galley looks at disk-access optimization and alternative file organizations. These file systems may be freely available, but most are research prototypes not intended for everyday use by others. The PVFS project is an effort to provide a parallel file system for PC clusters; it offers a global name space, striping of data across multiple I/O nodes, and multiple user interfaces.

Cluster Technology for Network Service

Clusters of servers connected by a fast network are emerging as a viable architecture for building highly scalable and available services. This loosely coupled architecture is more scalable, more cost-effective, and more reliable than a tightly coupled multiprocessor system.

However, a number of challenges must be addressed to make a cluster function effectively for scalable services. In client/server applications there are many ways, at different levels, to dispatch requests to a cluster of servers. In general, the servers offer the same service and hold the same set of contents; the contents are replicated on each server's local disk, shared on a network file system, or served by a distributed file system. Request dispatching techniques can be classified into four groups: the server-side Round-Robin DNS approach, the client-side approach, the server-side application-level scheduling approach, and server-side IP-level scheduling approaches. The four groups are described as follows [35].

1. The first group is the server-side Round-Robin DNS (RR-DNS) approach, which suffers from the caching behavior of clients and the hierarchical DNS system. It easily leads to dynamic load imbalance among the servers, so it is hard for a server to handle its peak load. The Time To Live (TTL) value of a name mapping cannot be chosen well: with small values RR-DNS becomes a bottleneck, and with large values the dynamic load imbalance gets even worse. It is also not very reliable: when a server node fails, the clients whose cached mapping points to that IP address will find the server down, and the problem persists even if they press the "reload" or "refresh" button in their browsers.

2. The second group is the client-side approach. Such schemes are not client-transparent: they require modification of the client applications, so they cannot be applied to all TCP/IP services. Moreover, they potentially increase network traffic through extra querying or probing.

3. The third group is the server-side application-level scheduling approach, which has its own problems. It requires establishing two TCP connections for each request, one between the client and the load balancer and the other between the load balancer and the server, so the delay is high. The overhead of handling HTTP requests and replies at the application level is also high, so the application-level load balancer soon becomes a new bottleneck as the number of server nodes increases.

4. The fourth group comprises server-side IP-level scheduling approaches, such as Berkeley's MagicRouter [9], Cisco's LocalDirector [4], IBM's TCP router and Network Dispatcher [10], ONE-IP [20], and the Linux Virtual Server (LVS) [34]. The MagicRouter did not survive as a useful system for other users, the LocalDirector is too expensive, and both systems support only TCP services. IBM's TCP router uses a modified Network Address Translation approach to build a scalable web server on the IBM Scalable Parallel SP-2 system; its advantage is that the TCP router avoids rewriting the reply packets, and its disadvantage is that it requires modification of the kernel code of every server in the cluster. Network Dispatcher is too expensive for building a low-cost cluster system. ONE-IP also avoids rewriting response packets, but it cannot be applied to all operating systems, because some operating systems shut down the network interface when they detect an IP address collision, and its local filtering also requires modification of the server kernel code.

The Linux Virtual Server is a high-performance and highly available server built on clusters of servers for scalable network services, and it is extremely suitable for designing a cluster system together with the Parallel Virtual File System (PVFS) [21]. Prototypes of the Linux Virtual Server have been used to build many heavily loaded sites on the Internet, such as sourceforge.net and the JANET web cache systems.

2.2 Parallel Virtual File System

Introduction

The Parallel Virtual File System (PVFS) [13][21][22][33] project is developed by the Parallel Architecture Research Laboratory (PARL) together with the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center. It is an effort to provide a high-performance and scalable parallel file system for PC clusters. PVFS is open source, released under the GNU General Public License (GPL), and requires no special hardware or modifications to the kernel. PVFS provides four important capabilities: a consistent file name space across the cluster, transparent access for existing utilities, physical distribution of data across multiple disks in multiple cluster nodes, and high-performance user-space access for applications.

For a parallel file system to be easy to use, it must provide a name space that is the same across the cluster, and it must be accessible via the utilities to which we are all accustomed. PVFS file systems may be mounted on all nodes in the same directory simultaneously, allowing all nodes to see and access all files on the PVFS file system through the same directory scheme. Once mounted, PVFS files and directories can be operated on with all the familiar Unix tools.

To provide many clients with high-performance access to the stored data, PVFS spreads data out across multiple cluster nodes, called I/O nodes. By spreading data across multiple I/O nodes, applications gain multiple network paths to the data and multiple disks on which the data is stored. This eliminates single bottlenecks in the I/O path and increases the total potential bandwidth for multiple clients. While the traditional mechanism of file access through system calls is convenient and lets any application access files on many different file system types, it incurs the overhead of going through the kernel. With PVFS, clients can avoid sending requests through the kernel by linking against the PVFS native API. This library implements a subset of the UNIX I/O operations and contacts the PVFS servers directly rather than passing through the local kernel. It can be used by applications or by other libraries, such as the ROMIO MPI-IO library, for high-speed PVFS access.

Architecture

As shown in Figure 2-1 [3], PVFS is composed of three kinds of nodes: compute nodes, a management node, and I/O nodes.

[Figure 2-1: the PVFS system architecture, with compute nodes, a management node, and I/O nodes]

The management node runs the PVFS manager daemon (PVFSMGR), which maintains the file metadata; each I/O node runs an I/O daemon (IOD) that stores the file data; and each compute node runs a client daemon (PVFSD).

Applications on the compute nodes access the file system through the PVFS library. The library includes calls to mount and unmount PVFS file systems from a compute node, as well as calls to create, remove, open, close, read, and write PVFS files. In addition, a function call is provided to set file parameters, including physical metadata parameters and logical partitioning parameters. Mounting and unmounting a PVFS file system involves the exchange of file system data between the PVFSD on a compute node and the PVFSMGR for that file system; these calls are normally needed only by utility applications and are not typically used by user programs. Creating, removing, opening, and closing files involve the exchange of file metadata between the application and the PVFSMGR via the local PVFSD, and the PVFSMGR relays the status of each file to the affected IODs. Reading and writing of opened files is performed directly between the application library routines and the IODs.
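Because reads and writes go straight from the client library to the IODs, the client must be able to turn a logical file offset into an I/O node and an offset within that node's local file. The following is a minimal sketch of that arithmetic for plain round-robin striping with a fixed stripe size starting at I/O node 0; the structure and function names are illustrative and do not correspond to the actual PVFS source.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative striping parameters; PVFS lets each file carry its own
 * distribution metadata, but this is not the real PVFS layout. */
struct stripe_params {
    uint32_t num_iods;     /* number of I/O nodes holding the file */
    uint64_t stripe_size;  /* bytes stored per I/O node per stripe */
};

/* Map a logical file offset to (I/O node index, offset in that node's
 * local file) under plain round-robin striping starting at IOD 0. */
static void map_offset(const struct stripe_params *p, uint64_t logical,
                       uint32_t *iod, uint64_t *local)
{
    uint64_t stripe = logical / p->stripe_size;   /* global stripe number */
    uint64_t within = logical % p->stripe_size;   /* offset inside stripe */
    *iod   = (uint32_t)(stripe % p->num_iods);
    *local = (stripe / p->num_iods) * p->stripe_size + within;
}

int main(void)
{
    struct stripe_params p = { .num_iods = 4, .stripe_size = 65536 };
    uint64_t offsets[] = { 0, 65536, 262144, 300000 };

    for (int i = 0; i < 4; i++) {
        uint32_t iod; uint64_t local;
        map_offset(&p, offsets[i], &iod, &local);
        printf("logical %8llu -> IOD %u, local offset %llu\n",
               (unsigned long long)offsets[i], iod,
               (unsigned long long)local);
    }
    return 0;
}
```

With these example parameters, consecutive 64 KB stripes land on IODs 0, 1, 2, 3, 0, 1, and so on, so a large sequential access is spread over all four I/O nodes at once.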

When an application opens a file, the manager determines which IODs hold data for that file, and the file is then opened on all of those IODs. The addresses of the IODs are passed directly back to the application. Once a file is opened, all accesses to it take place by connecting directly to the IODs themselves, as shown in Figure 2-3-b. Connections are maintained between accesses, and the PVFSD and PVFSMGR are not involved again until the file is closed.

Supported on Linux

The PVFS Linux kernel support provides the functionality necessary to mount PVFS file systems on Linux nodes. This allows existing programs to access PVFS files without any modification. The support is not necessary for applications to use PVFS, but it provides an extremely convenient means of interacting with the system. The PVFS Linux kernel support includes a loadable module, an optional kernel patch that eliminates a memory copy, and a daemon, PVFSD, that accesses the PVFS file system on behalf of applications using functions from LIBPVFS.

Figure 2-4 shows the data flow through the kernel when the Linux kernel support is used; the technique is similar to the mechanism used by the Coda file system. Operations are passed through system calls to the Linux VFS layer and are queued for service by the PVFSD, which receives operations from the kernel through a device file. The PVFSD then communicates with the PVFS servers and returns data through the kernel to the application.

Interfaces of the Application on Compute Nodes

For any file system to be usable, convenient interfaces must be available for it. This becomes even more important when applications run in parallel, because such applications place heavy demands on a file system. To meet the needs of different user groups, there are three interfaces through which PVFS may be accessed:

a. PVFS native API
The PVFS native API provides a UNIX-like interface for accessing PVFS files. It also allows users to specify how files will be striped across the I/O nodes in the PVFS system.

b. Linux kernel interface
The Linux kernel interface allows applications to access PVFS file systems through the normal channels, so users can use all the common utilities for everyday data manipulation.

c. ROMIO MPI-IO interface

ROMIO [28] implements the MPI-2 I/O [11] calls in a portable library, allowing parallel programmers using MPI to access PVFS files through the MPI-IO interface. ROMIO provides two optimizations, data sieving and two-phase collective I/O, which can be of great performance benefit.

2.3 Linux Virtual Server

Introduction

The Linux Virtual Server (LVS) [17][18][34][35] is a Linux project developed at the National Laboratory for Parallel and Distributed Processing in China. It is a scalable and highly available server built on a cluster of loosely coupled independent servers. The architecture of the cluster is transparent to clients outside the cluster: client applications interact with the cluster as if it were a single high-performance and highly available server, and they are neither affected by the interaction nor in need of modification. LVS directs network connections to the different servers according to scheduling algorithms in the kernel and makes the parallel services of the cluster appear as one service on a single IP address. Transparently adding or removing nodes in the cluster provides scalability; detecting node or daemon failures and reconfiguring the system appropriately provides high availability.

2.3.2 Topology

[Figure: topology of an LVS cluster, with a load balancer (director) in front of a pool of real servers]

Three IP load-balancing techniques are implemented in LVS: Virtual Server via Network Address Translation (VS/NAT), Virtual Server via IP Tunneling (VS/TUN), and Virtual Server via Direct Routing (VS/DR). The director, running the modified kernel, acts as a load balancer of network connections from clients, who know only a single IP address for a service, to the set of servers that actually perform the work. In general, the real servers are identical: they run the same service and have the same set of contents, which are replicated on each server's local disk, shared on a network file system, or served by a distributed file system. What the load balancer dispatches is the communication between a client's socket and a server's socket, no matter whether the service speaks TCP or UDP. The following subsections describe the working principles of the three techniques.

a. Virtual Server via Network Address Translation

[Figure: architecture of Virtual Server via NAT]

When a user accesses the virtual service, a request packet destined for the virtual IP address arrives at the load balancer. The load balancer examines the packet's destination address and port; if they match the virtual service, a real server is chosen from the cluster by a connection scheduling algorithm and the connection is added to the hash table that records established connections. The destination address and the port of the packet are then rewritten to those of the chosen server, and the packet is forwarded to that server. When a subsequent incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet is rewritten and forwarded in the same way. When reply packets come back, the load balancer rewrites their source address and port to those of the virtual service. After the connection terminates or times out, the connection record is removed from the hash table.
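The bookkeeping described above amounts to a connection table keyed by the client endpoint: a hit forwards the packet to the server already chosen for that connection, while a miss invokes the scheduler and records the choice. The sketch below shows that control flow in user-space C with a trivial fixed-size hash table and a plain round-robin scheduler; it is purely illustrative and is not the ip_vs kernel code.

```c
#include <stdio.h>
#include <stdint.h>

#define TABLE_SIZE 4096
#define NUM_REAL   3

/* One tracked connection: client endpoint -> chosen real server.
 * A real implementation would also key on the virtual service and
 * protocol, chain colliding entries, and expire them on FIN or
 * timeout; this sketch keeps one entry per hash slot. */
struct conn {
    uint32_t caddr;   /* client IP   */
    uint16_t cport;   /* client port */
    int      server;  /* index of the chosen real server, -1 = unused */
};

static struct conn table[TABLE_SIZE];
static int rr_next;   /* round-robin scheduler state */

static unsigned hash(uint32_t caddr, uint16_t cport)
{
    return (caddr ^ (caddr >> 16) ^ cport) % TABLE_SIZE;
}

/* Return the real server for this packet, scheduling a new one
 * (plain round-robin here) when the connection is not yet known. */
static int dispatch(uint32_t caddr, uint16_t cport)
{
    struct conn *c = &table[hash(caddr, cport)];
    if (c->server >= 0 && c->caddr == caddr && c->cport == cport)
        return c->server;            /* existing connection: reuse choice */
    c->caddr  = caddr;               /* new connection: schedule + record */
    c->cport  = cport;
    c->server = rr_next;
    rr_next   = (rr_next + 1) % NUM_REAL;
    return c->server;
}

int main(void)
{
    for (int i = 0; i < TABLE_SIZE; i++) table[i].server = -1;
    /* Packets of the same connection stick to one server; a new
     * client connection may be scheduled to a different server. */
    printf("client A -> server %d\n", dispatch(0x0a000001, 40000));
    printf("client A -> server %d\n", dispatch(0x0a000001, 40000));
    printf("client B -> server %d\n", dispatch(0x0a000002, 40001));
    return 0;
}
```

In VS/NAT the same entry would also be consulted for reply packets, so that their source address and port can be rewritten back to those of the virtual service.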

b. Virtual Server via IP Tunneling

As shown in Figure 2-7, when a user accesses a virtual service provided by the server cluster, a packet destined for the virtual IP address (VIP) arrives. The load balancer examines the packet's destination address and port; if they match the virtual service, a real server is chosen from the cluster according to a connection scheduling algorithm, and the connection is added to the hash table that records connections. The load balancer then encapsulates the packet within an IP datagram and forwards it to the chosen server.

When a subsequent incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet is again encapsulated and forwarded to that server. When the server receives the encapsulated packet, it decapsulates it, processes the request, and returns the result directly to the user according to its own routing table. After a connection terminates or times out, the connection record is removed from the hash table. The workflow is illustrated in the following figure.

c. Virtual Server via Direct Routing

[Figure: architecture of Virtual Server via Direct Routing]

When a packet for the virtual service arrives, the load balancer chooses a real server by the scheduling algorithm, records the connection in the hash table, and forwards the packet directly to the chosen server. When the server receives the forwarded packet, it finds that the packet is addressed to the virtual IP configured on its alias interface, or to a local socket, so it processes the request and returns the result directly to the user. After a connection terminates or times out, the connection record is removed from the hash table. The direct routing workflow is illustrated in the following figure.
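The three techniques differ only in how a packet reaches the chosen server once scheduling and connection lookup have been done: VS/NAT rewrites the destination address and port (and must rewrite the replies on the way back), VS/TUN wraps the original datagram in a new IP header addressed to the real server, and VS/DR leaves the IP header alone and re-addresses only the link-layer frame toward the server that holds the VIP on an alias interface. The sketch below contrasts the three with simplified stand-in structures; it is illustrative only and not the kernel implementation.

```c
#include <stdio.h>
#include <stdint.h>

/* Simplified stand-ins for the headers involved; the kernel works on
 * real IP and Ethernet headers, not these illustrative structures. */
struct ip_pkt { uint32_t saddr, daddr; uint16_t dport; };
struct frame  { uint8_t dst_mac[6]; struct ip_pkt ip; };
struct tunnel { uint32_t outer_saddr, outer_daddr; struct ip_pkt inner; };

enum fwd_method { VS_NAT, VS_TUN, VS_DR };

struct real_server { uint32_t addr; uint16_t port; uint8_t mac[6]; };

/* Forward one already-scheduled packet with the chosen technique. */
static void forward(enum fwd_method m, struct frame *f,
                    const struct real_server *rs, uint32_t director_ip)
{
    switch (m) {
    case VS_NAT:                        /* rewrite destination IP/port;  */
        f->ip.daddr = rs->addr;         /* replies must be rewritten     */
        f->ip.dport = rs->port;         /* back by the director          */
        break;
    case VS_TUN: {                      /* encapsulate: new outer header,*/
        struct tunnel t = {             /* original datagram untouched,  */
            .outer_saddr = director_ip, /* server replies straight to    */
            .outer_daddr = rs->addr,    /* the client                    */
            .inner       = f->ip,
        };
        (void)t;                        /* would be sent instead of f    */
        break;
    }
    case VS_DR:                         /* IP header untouched (VIP on   */
        for (int i = 0; i < 6; i++)     /* the server's alias interface);*/
            f->dst_mac[i] = rs->mac[i]; /* only the frame is re-addressed*/
        break;
    }
}

int main(void)
{
    struct real_server rs = { .addr = 0xc0a80102, .port = 8080,
                              .mac  = { 0, 1, 2, 3, 4, 5 } };
    struct frame f = { .ip = { .saddr = 0x0a000001,
                               .daddr = 0xc0a80101, .dport = 80 } };
    forward(VS_NAT, &f, &rs, 0xc0a80101);
    printf("after VS/NAT rewrite: daddr=0x%08x dport=%u\n",
           (unsigned)f.ip.daddr, (unsigned)f.ip.dport);
    return 0;
}
```

This contrast also shows why VS/NAT places the most load on the director, since both directions pass through it, while VS/TUN and VS/DR let the real servers reply to clients directly.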

Scheduling Algorithms

a. Round-Robin (RR) Scheduling

The round-robin scheduling algorithm sends each incoming request to the next server in its list. In a three-server cluster (servers A, B, and C), request 1 goes to server A, request 2 to server B, request 3 to server C, and request 4 again to server A, completing the cycle. Round-robin treats all real servers as equals, regardless of the number of incoming connections or the response time each server is experiencing. Virtual Server's round-robin has a few advantages over traditional round-robin DNS: round-robin DNS resolves a single domain name to different IP addresses, its scheduling granularity is host-based, and the caching of DNS queries hinders the basic algorithm, all of which lead to significant dynamic load imbalance among the real servers. The scheduling of Virtual Server is network-connection-based, a much finer granularity, and is therefore superior to round-robin DNS.

b. Weighted Round-Robin (WRR) Scheduling

Weighted round-robin scheduling is designed to better handle servers with different processing capacities. Each server can be assigned a weight Wi, an integer indicating its processing capacity; the default weight is 1. For example, if the real servers A, B, and C have weights 4, 3, and 2 respectively, a good scheduling sequence is AABABCABC within a scheduling period of sum(Wi) = 9 requests. In the implementation of weighted round-robin scheduling, a scheduling sequence is generated according to the server weights whenever the Virtual Server rules are modified, and network connections are then directed to the different real servers by following that sequence in a round-robin manner.
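A scheduling sequence such as AABABCABC can be generated directly from the weights. The sketch below uses the classic "current weight" formulation of weighted round-robin, sweeping over the servers and lowering a threshold by the greatest common divisor of the weights on each pass; with the example weights 4, 3, and 2 it reproduces exactly the sequence AABABCABC given above. It is an illustration of the idea, not the LVS source code.

```c
#include <stdio.h>

/* Weighted round-robin sequence generation, sketched after the classic
 * "current weight" formulation; illustrative only, not the LVS source.
 * With weights A=4, B=3, C=2 it prints AABABCABC. */

static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

int main(void)
{
    const char *name = "ABC";
    int weight[] = { 4, 3, 2 };
    int n = 3;

    int g = weight[0], max = weight[0], period = weight[0];
    for (int k = 1; k < n; k++) {
        g = gcd(g, weight[k]);
        if (weight[k] > max) max = weight[k];
        period += weight[k];              /* one period = sum(Wi) picks */
    }

    int i = -1, cw = 0;                   /* scheduler state between picks */
    for (int picked = 0; picked < period; ) {
        i = (i + 1) % n;
        if (i == 0) {                     /* finished one sweep of servers */
            cw -= g;                      /* lower threshold by gcd(Wi)    */
            if (cw <= 0) cw = max;
        }
        if (weight[i] >= cw) {            /* server heavy enough this round */
            putchar(name[i]);
            picked++;
        }
    }
    putchar('\n');                        /* -> AABABCABC */
    return 0;
}
```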

Weighted round-robin scheduling is better than plain round-robin when the processing capacities of the real servers differ. However, it may still lead to dynamic load imbalance among the real servers if the load of the requests varies widely; in particular, a majority of the requests that require large responses may end up directed to the same real server.

c. Least-Connection (LC) Scheduling

The least-connection scheduling algorithm directs network connections to the server with the smallest number of established connections. It is a dynamic scheduling algorithm, because it must count the live connections of each server at run time. For a Virtual Server managing a collection of servers with similar performance, least-connection scheduling smooths the distribution well even when the load of requests varies a lot, since requests always go to the real server with the fewest active connections. At first glance it might seem that least-connection scheduling should also perform well when servers have different processing capacities, because the faster server would naturally accumulate more connections. In fact it does not, because of TCP's TIME_WAIT state. TIME_WAIT usually lasts two minutes, and during those two minutes a busy web site often receives thousands of connections. If server A is twice as powerful as server B, server A may be holding thousands of completed requests in the TIME_WAIT state while server B is still struggling to finish its own thousands of connections. Least-connection scheduling therefore cannot balance the load well among servers with various processing capacities.

d. Weighted Least-Connection (WLC) Scheduling

Weighted least-connection scheduling is a superset of least-connection scheduling in which each real server is assigned a performance weight. Servers with a higher weight receive a larger percentage of the live connections at any one time. The Virtual Server administrator assigns a weight to each real server, and network connections are scheduled so that the number of live connections on each server stays in proportion to its weight; the default weight is one.

Weighted least-connection scheduling works as follows. Suppose there are n real servers, server i has weight Wi (i = 1, ..., n) and Ci live connections, and ALL_CONNECTIONS is the sum of the Ci. The next network connection is directed to the server j for which

    (Cj / ALL_CONNECTIONS) / Wj = min { (Ci / ALL_CONNECTIONS) / Wi }   (i = 1, ..., n)

Since ALL_CONNECTIONS is a constant in this lookup, there is no need to divide each Ci by it, and the condition can be simplified to

    Cj / Wj = min { Ci / Wi }   (i = 1, ..., n)
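Because every candidate is compared against the same minimum, the test Cj/Wj <= Ci/Wi can also be evaluated without any division by cross-multiplying it into Cj * Wi <= Ci * Wj, so only integer arithmetic is needed. The following is a minimal sketch of the selection loop under that formulation; the server structure and the treatment of weight 0 as "out of service" are assumptions of the sketch, not the kernel implementation.

```c
#include <stdio.h>

struct real_server {
    const char *name;
    int weight;   /* configured processing-capacity weight (Wi) */
    int active;   /* current number of live connections (Ci)    */
};

/* Weighted least-connection choice: pick the j minimising Cj/Wj.
 * The division is replaced by cross-multiplication, so only integer
 * arithmetic is needed.  Weight 0 marks a server as out of service
 * (an assumption of this sketch). */
static int wlc_select(const struct real_server *s, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (s[i].weight <= 0)
            continue;
        if (best < 0 ||
            (long)s[i].active * s[best].weight <
            (long)s[best].active * s[i].weight)
            best = i;
    }
    return best;
}

int main(void)
{
    struct real_server servers[] = {
        { "A", 4, 120 },   /* 120/4 = 30 connections per weight unit */
        { "B", 3,  60 },   /*  60/3 = 20  -> should be chosen        */
        { "C", 2,  50 },   /*  50/2 = 25                             */
    };
    int j = wlc_select(servers, 3);
    printf("next connection goes to server %s\n", servers[j].name);
    return 0;
}
```

With the example values in the code, server B has the lowest connections-per-weight ratio and is chosen even though server C has fewer raw connections.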

High Availability

As mentioned in [35], in a production system you want to be able to remove, upgrade, add, or replace nodes without interrupting service to the clients. Such changes will presumably be planned, but machines may also crash, so a mechanism for automatically handling machine or service crashes is required as well.

Redundancy of the services on the real servers is one of the useful features of LVS: a machine or service can be removed from the functioning virtual server for an upgrade or relocation and brought back on line later without interruption of service to the clients. The LVS code itself does not provide high availability; other software is used in conjunction with LVS to provide it. Several families of tools are available to handle failures automatically for LVS. Conceptually they form a layer separate from LVS: some set up LVS and the monitoring layer independently, while others set up LVS for you, so that administratively the two layers are not separable. There are two types of failure in an LVS cluster, director failure and real-server failure, described below.

a. Director Failure

Director failure is handled by having a redundant director available. Director failover is handled by the Ultra Monkey project or by the vrrpd in keepalived. The director maintains session information (client IP, real-server IP, real-server port), and on failover this information must be available on the new director. On a simple failover, where a new director is just swapped in place of the old one, the session information is not transferred and clients lose their sessions; transferring this information is the job of the server-state synchronization daemon. The keepalived project by Alexandre Cassen works with both Linux-HA and LVS: keepalived watches the health of the services and also controls failover of the directors using the vrrpd tool.

b. Real-server Failure


More information

What communication protocols are used to discover Tesira servers on a network?

What communication protocols are used to discover Tesira servers on a network? Understanding device discovery methods in Tesira OBJECTIVES In this application note, basic networking concepts will be summarized to better understand how Tesira servers are discovered over networks.

More information

Chapter 1 - Web Server Management and Cluster Topology

Chapter 1 - Web Server Management and Cluster Topology Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management

More information

Centec s SDN Switch Built from the Ground Up to Deliver an Optimal Virtual Private Cloud

Centec s SDN Switch Built from the Ground Up to Deliver an Optimal Virtual Private Cloud Centec s SDN Switch Built from the Ground Up to Deliver an Optimal Virtual Private Cloud Table of Contents Virtualization Fueling New Possibilities Virtual Private Cloud Offerings... 2 Current Approaches

More information

AS/400e. TCP/IP routing and workload balancing

AS/400e. TCP/IP routing and workload balancing AS/400e TCP/IP routing and workload balancing AS/400e TCP/IP routing and workload balancing Copyright International Business Machines Corporation 2000. All rights reserved. US Government Users Restricted

More information

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller White Paper From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller The focus of this paper is on the emergence of the converged network interface controller

More information

The Design and Implementation of Content Switch On IXP12EB

The Design and Implementation of Content Switch On IXP12EB The Design and Implementation of Content Switch On IXP12EB Thesis Proposal by Longhua Li Computer Science Department University of Colorado at Colorado Springs 5/15/2001 Approved by: Dr. Edward Chow (Advisor)

More information

Zarząd (7 osób) F inanse (13 osób) M arketing (7 osób) S przedaż (16 osób) K adry (15 osób)

Zarząd (7 osób) F inanse (13 osób) M arketing (7 osób) S przedaż (16 osób) K adry (15 osób) QUESTION NO: 8 David, your TestKing trainee, asks you about basic characteristics of switches and hubs for network connectivity. What should you tell him? A. Switches take less time to process frames than

More information

Distributed File Systems

Distributed File Systems Distributed File Systems Paul Krzyzanowski Rutgers University October 28, 2012 1 Introduction The classic network file systems we examined, NFS, CIFS, AFS, Coda, were designed as client-server applications.

More information

Building Secure Network Infrastructure For LANs

Building Secure Network Infrastructure For LANs Building Secure Network Infrastructure For LANs Yeung, K., Hau; and Leung, T., Chuen Abstract This paper discusses the building of secure network infrastructure for local area networks. It first gives

More information

Avaya P333R-LB. Load Balancing Stackable Switch. Load Balancing Application Guide

Avaya P333R-LB. Load Balancing Stackable Switch. Load Balancing Application Guide Load Balancing Stackable Switch Load Balancing Application Guide May 2001 Table of Contents: Section 1: Introduction Section 2: Application 1 Server Load Balancing Section 3: Application 2 Firewall Load

More information

CS 5480/6480: Computer Networks Spring 2012 Homework 4 Solutions Due by 1:25 PM on April 11 th 2012

CS 5480/6480: Computer Networks Spring 2012 Homework 4 Solutions Due by 1:25 PM on April 11 th 2012 CS 5480/6480: Computer Networks Spring 2012 Homework 4 Solutions Due by 1:25 PM on April 11 th 2012 Important: The solutions to the homework problems from the course book have been provided by the authors.

More information

Oracle Net Services for Oracle10g. An Oracle White Paper May 2005

Oracle Net Services for Oracle10g. An Oracle White Paper May 2005 Oracle Net Services for Oracle10g An Oracle White Paper May 2005 Oracle Net Services INTRODUCTION Oracle Database 10g is the first database designed for enterprise grid computing, the most flexible and

More information

Networking Topology For Your System

Networking Topology For Your System This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.

More information

Availability Digest. www.availabilitydigest.com. Redundant Load Balancing for High Availability July 2013

Availability Digest. www.availabilitydigest.com. Redundant Load Balancing for High Availability July 2013 the Availability Digest Redundant Load Balancing for High Availability July 2013 A large data center can comprise hundreds or thousands of servers. These servers must not only be interconnected, but they

More information