High Performance Cluster Support for NLB on Windows




[1] Arvind Rathi, [2] Kirti, [3] Neelam
[1] M.Tech Student, Department of CSE, GITM, Gurgaon, Haryana (India), arvindrathi88@gmail.com
[2] Asst. Professor, Department of CSE, GITM, Gurgaon, Haryana (India), kirtisinghcec@gmail.com
[3] Asst. Professor, Department of CSE, GITM, Gurgaon, Haryana (India), pdm.dahiya@gmail.com

Abstract: This paper describes a method that combines the Windows platform with virtualization technology. When a single server machine is no longer enough to handle the increasing traffic on the network, it is time to build a cluster that uses multiple machines on the network acting as a single server. The existing server is converted into multiple virtual servers with the VMware Workstation tool. A cluster is a collection of servers that acts as a single web server. Administering two or more servers and keeping them properly synchronized is considerably more work than administering a single server. The Network Load Balancing (NLB) service enhances the availability and scalability of server applications such as those used on web, FTP, firewall, proxy, VPN, and other mission-critical servers. A single computer running Windows can provide only a limited level of server reliability and scalable performance. Load testing is performed here to find the response time of the site. In this research, the parameters used to measure the performance of the website are throughput, response time, and hits per second. The experimental results and analysis show that high-performance clusters are an efficient and sufficient technique for today's mission-critical applications.

Keywords: Server, Virtualization, Throughput, Response Time, Network Load Balancing (NLB)

1. Introduction

A load balancer is a network device placed between clients and servers. It accepts client requests and forwards them to a cluster of servers, which process them and produce replies that are returned to the clients as answers. The load balancer implements a set of scheduling methods that dictate to which real server the current request will be forwarded. Moreover, there are also several forwarding methods that influence not only the way requests travel from the load balancer to the nodes of the cluster, but also how the nodes' replies travel back to the clients. [3]

DNS can be used to redirect requests to physical servers in a round-robin fashion, but simple round-robin DNS cannot exploit more powerful servers, react to current conditions, or avoid unavailable servers. For a server cluster to achieve its high-performance and high-availability potential, load balancing is required. Load balancing optimizes request distribution based on factors such as capacity, availability, response time, current load, historical performance, and administrative weights. [2][3]

A load balancer sits between the Internet and a physical server cluster, acting as a virtual server. As each request arrives, the load balancer makes a near-instantaneous, intelligent decision about the physical server best able to satisfy it. A well-tuned adaptive load balancer ensures that customer sites are available 24x7 with the best possible response time and resource utilization. [4]
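
To make the scheduling decision concrete, the minimal Python sketch below shows two common policies, round-robin and least-connections. It is only an illustration: the server addresses reuse the example addresses given later in Section 1.4, and the connection counters are hypothetical, not taken from the paper's test bed.

```python
# Illustrative sketch of two common scheduling methods a load balancer
# might use to choose the next real server. The server addresses reuse the
# example addresses from Section 1.4; the connection counters are
# hypothetical and not taken from the paper's test bed.
from itertools import cycle

servers = ["10.10.10.2", "10.10.10.3", "10.10.10.16"]

# Round-robin: hand requests to the servers in a fixed cyclic order.
_rotation = cycle(servers)

def round_robin():
    return next(_rotation)

# Least-connections: pick the server currently handling the fewest requests.
active_connections = {server: 0 for server in servers}

def least_connections():
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1    # request assigned to this server
    return server

def release(server):
    active_connections[server] -= 1    # request has finished

if __name__ == "__main__":
    print([round_robin() for _ in range(4)])   # cycles through the three servers
    chosen = least_connections()
    print("least-connections picked", chosen)
    release(chosen)
```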

1.1 What is Network Load Balancing (NLB)?

In computing, load balancing is a technique used to spread workload among many processes, computers, networks, disks, or other resources, so that no single resource is overloaded. The Linux Virtual Server (LVS), an advanced load balancing solution, can be used to build highly scalable and highly available network services. [5]

The basic load balancing transaction is as follows:
1) The client attempts to connect to the service on the load balancer.
2) The load balancer accepts the connection and, after deciding which host should receive the connection, changes the destination IP (and possibly port) to match the service of the selected host (note that the source IP of the client is not touched).
3) The host accepts the connection and responds back to the original source, the client, via its default route, the load balancer.
4) The load balancer intercepts the return packet from the host, changes the source IP (and possibly port) to match the virtual server IP and port, and forwards the packet back to the client.
5) The client receives the return packet, believing that it came from the virtual server, and continues the process. [6]

Fig.1.1 Representation of a Load Balancing System
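
The address rewriting in steps 2 and 4 can be modeled very simply. The toy Python model below assumes a hypothetical virtual-server address and a real server already chosen by the scheduler; it only mimics the destination and source rewrites described above and is not a real packet-processing pipeline.

```python
# Simplified model of the NAT-style transaction described above: the load
# balancer rewrites the destination of an inbound packet to the chosen host
# and rewrites the source of the reply back to the virtual server address.
# The addresses and the chosen host are hypothetical.
VIRTUAL_IP = ("192.168.1.100", 80)      # virtual server address clients see
SELECTED_HOST = ("10.10.10.2", 8080)    # real server picked by the scheduler

def inbound(packet):
    """Client -> load balancer: redirect the packet to the selected host."""
    packet["dst"] = SELECTED_HOST       # destination rewritten; client source untouched
    return packet

def outbound(packet):
    """Host -> load balancer: make the reply appear to come from the virtual server."""
    packet["src"] = VIRTUAL_IP
    return packet

if __name__ == "__main__":
    request = {"src": ("203.0.113.7", 51012), "dst": VIRTUAL_IP, "payload": b"GET /"}
    to_host = inbound(dict(request))                   # step 2: rewrite destination
    reply = {"src": SELECTED_HOST, "dst": request["src"], "payload": b"HTTP/1.1 200 OK"}
    to_client = outbound(dict(reply))                  # step 4: rewrite source
    print(to_host["dst"])    # -> ('10.10.10.2', 8080): the real server
    print(to_client["src"])  # -> ('192.168.1.100', 80): what the client sees
```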

The Network Load Balancing (NLB) service enhances the availability and scalability of internet server applications such as those used on web, FTP, firewall, proxy, VPN, and other mission-critical servers. A single computer running Windows can provide a limited level of server reliability and scalable performance. However, by combining the resources of two or more computers running one of the products in the Windows Server 2003 family into a single cluster, Network Load Balancing can deliver the reliability and performance that web servers and other mission-critical servers need. The following diagram depicts two connected Network Load Balancing clusters; the first cluster consists of two hosts and the second cluster consists of four hosts.

Fig.1.2 Two connected Network Load Balancing clusters

Each host runs separate copies of the desired server applications, such as those for a Web, FTP, or Telnet server. Network Load Balancing distributes incoming client requests across the hosts in the cluster. The load weight to be handled by each host can be configured as necessary. You can also add hosts dynamically to the cluster to handle increased load. In addition, Network Load Balancing can direct all traffic to a designated single host, called the default host. [7]

1.2 Network Load Balancing Parameters

Network Load Balancing performance is characterized by the following parameters:

CPU overhead on the cluster hosts, which is the CPU percentage required to analyze and filter network packets (lower is better).
Response time to clients, which increases with the non-overlapped portion of CPU overhead, called latency (lower is better).
Throughput to clients, which increases with additional client traffic that the cluster can handle prior to saturating the cluster hosts (higher is better).
Switch occupancy, which increases with additional client traffic (lower is better) and must not adversely affect port bandwidth.

In addition, Network Load Balancing's scalability determines how its performance improves as hosts are added to the cluster. Scalable performance requires that CPU overhead and latency not grow faster than the number of hosts. [7]

1.3 Network Load Balancing Features

The Windows 2003 Network Load Balancing service provides the following configuration, performance, and management features:

TCP/IP Support: Services and applications can be delivered to the client by using specified TCP/IP protocols and ports that can take advantage of Network Load Balancing.
Load Balancing: Incoming client connections are load balanced among cluster members based on a distributed algorithm that the Network Load Balancing service executes and rules that you have configured for the cluster.
High Availability: Detects the failure of a host within the cluster and, within seconds, dynamically reconfigures and redistributes subsequent client requests to hosts that are still viable members of the cluster.
Remote Manageability: Allows remote control of the cluster from any Windows 2003 or Microsoft Windows NT system.
Scalable Performance: Load balances requests for individual TCP/IP services across the cluster. Supports up to 32 computers in a single cluster. Optionally load balances multiple server requests from a single client.
Fault Tolerance: Automatically detects and recovers from a failed or offline computer. Automatically rebalances the network load when the cluster set changes. Recovers and redistributes the workload within 10 seconds. [4][8]

1.4 Load Balancing Algorithm

DNS Round-Robin Scheduling: Round-robin DNS is a technique of load distribution, load balancing, or fault tolerance that provisions multiple redundant Internet Protocol service hosts (e.g., web servers, FTP servers) by managing the Domain Name System (DNS) responses to address requests from client computers according to an appropriate statistical model. In its simplest implementation, round-robin DNS works by responding to DNS requests not with a single IP address, but with a list of IP addresses of several servers that host identical services. The order in which IP addresses from the list are returned is the basis for the term "round-robin". With each DNS response, the IP address sequence in the list is permuted.
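
A minimal Python sketch of that cyclic permutation is shown below; the addresses match the example records that follow, but the resolver logic itself is purely illustrative.

```python
# Minimal sketch of the cyclic permutation (round-robin) of DNS answers.
# The addresses match the example records that follow; the resolver logic
# itself is purely illustrative.
from collections import deque

addresses = deque(["10.10.10.2", "10.10.10.3", "10.10.10.16"])

def answer_query():
    """Return the full address list, then rotate it for the next query."""
    response = list(addresses)
    addresses.rotate(-1)   # the address that was first is now last
    return response

if __name__ == "__main__":
    for _ in range(4):
        # A basic client typically connects to the first address returned.
        print(answer_query())
```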

Usually, basic IP clients attempt connections with the first address returned from a DNS query, so that on different connection attempts clients receive service from different providers, thus distributing the overall load among the servers. There is no standard procedure for deciding which address will be used by the requesting application. Round-robin DNS is a common solution for enabling a limited, static form of TCP/IP load balancing for internet server farms.

Consider the following example, in which there are three IP address entries for the same host name on a DNS server:

Iyogi.project.com IN A 10.10.10.2
Iyogi.project.com IN A 10.10.10.3
Iyogi.project.com IN A 10.10.10.16

Using the previous list of round-robin DNS IP address entries, when a client sends a query, the DNS server returns all three IP addresses to the DNS client, but typically the client uses only the first one in the list. The next time the DNS server receives a query for this host, the order of the list is changed in a cyclic permutation, or round-robin, meaning that the address that was first in the previous list is now last in the new list. So if a client chooses the first IP address in the list, it now connects to a different server. In the event of a server failure, round-robin DNS will continue to route requests to the failed server until you manually remove the failed server's resource record from DNS. [2]

Comparing Network Load Balancing Solutions

Table 1.1: Comparison of Various NLB Solutions

Comparing load balancing solutions will enable you to determine the advantages and disadvantages of each and to implement a solution that provides ease of installation, avoids specialized hardware, and avoids single points of failure.

Basic Steps of the Algorithm

The load balancing algorithm directly influences the effectiveness of balancing the server workloads. Its main task is to decide how to choose the next server and transfer a new connection request to it. Such algorithms should follow four basic steps (a sketch of this loop is given after the list):
1) Monitoring server performance (load monitoring)
2) Exchanging this information between servers (synchronization with the load balancer)
3) Calculating new distributions and making the balancing decision (rebalancing criteria)
4) Serving the actual request (fulfilling demand)
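
Read as a loop, the four steps above might look like the following Python sketch; the node names and the load metric (an active-connection count) are assumptions made purely for illustration.

```python
# Illustrative loop over the four basic steps listed above: monitor load,
# exchange it, decide where the next request should go, then serve it.
# The node names and the load metric (active connection count) are
# hypothetical and used only to show the shape of the loop.
import random

nodes = {"node1": 0, "node2": 0, "node3": 0}   # step 1: monitored load per node

def exchange_load():
    # Step 2: in a real cluster this state would be shared over the network;
    # here it is simply copied.
    return dict(nodes)

def decide(load_table):
    # Step 3: rebalancing decision -- pick the least loaded node.
    return min(load_table, key=load_table.get)

def serve(node):
    # Step 4: hand the request to the chosen node (simulated by a counter).
    nodes[node] += 1

if __name__ == "__main__":
    for _ in range(6):
        target = decide(exchange_load())
        serve(target)
        # Simulate some requests completing so the load figures change.
        finished = random.choice(list(nodes))
        nodes[finished] = max(0, nodes[finished] - 1)
    print(nodes)
```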

2. Virtual Server Clustering

2.1 The Concept of Clustering

The concept of a cluster is to take two or more computers and organize them to work together to provide higher performance, availability, reliability, and scalability than can be obtained by using a single system. When a failure occurs in a cluster, resources can be redirected and the workload can be redistributed. Typically the end user experiences a limited failure, and may only have to refresh the browser or reconnect to an application to begin working again. [8]

Collections of distributed computers, termed clusters, have been used for decades to help solve some of the world's most complicated problems. Cluster systems are popular architectures in the field of high-performance computing. The term "cluster" is one of those overloaded computing terms (like "node") that can have a plethora of meanings based on context, and hence should always be explicitly defined when used. In the context of this paper, a computer cluster is defined as a group of loosely coupled computers that work together to accomplish a specific task or tasks but is viewed externally as though it were a single computer. [9][1][10]

Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. [3]

Heartbeat: The network and remote procedure call (RPC) traffic that flows between servers in a cluster. Windows 2000 and Windows 2003 clusters communicate by using RPC calls on IP sockets with User Datagram Protocol (UDP) packets. Heartbeats are single UDP packets sent between the nodes every 1.2 seconds. These packets are used to confirm that a node's network interface is still active.
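
A minimal sketch of such a heartbeat exchange is given below, assuming every node runs the same loop: a small UDP datagram is sent to each peer every 1.2 seconds, and a peer that stays silent beyond a timeout is treated as failed. The peer addresses, port number, and failure timeout are assumptions, not values taken from the paper or from the Windows cluster service.

```python
# Minimal sketch of the heartbeat described above: each node sends a small
# UDP datagram to every peer every 1.2 seconds and treats a peer as failed
# if nothing has been heard from it for longer than a timeout. The peer
# addresses, port number, and failure timeout are assumptions; every node
# is assumed to run this same loop, so heartbeats originate from port 3343.
import socket
import time

PEERS = [("192.168.1.11", 3343), ("192.168.1.12", 3343)]   # hypothetical cluster nodes
HEARTBEAT_INTERVAL = 1.2   # seconds, as described in Section 2.1
FAILURE_TIMEOUT = 5.0      # assumed: roughly four missed heartbeats

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 3343))
    sock.setblocking(False)
    last_seen = {peer: time.time() for peer in PEERS}

    while True:
        for peer in PEERS:                      # send our own heartbeat
            sock.sendto(b"HEARTBEAT", peer)
        try:
            while True:                         # drain any heartbeats received
                _, addr = sock.recvfrom(64)
                if addr in last_seen:
                    last_seen[addr] = time.time()
        except BlockingIOError:
            pass                                # nothing more to read right now
        now = time.time()
        for peer, seen in last_seen.items():
            if now - seen > FAILURE_TIMEOUT:
                print("peer considered failed:", peer)
        time.sleep(HEARTBEAT_INTERVAL)

if __name__ == "__main__":
    main()
```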

2.2 Types of Connection between Nodes

The types of connection between cluster server nodes are:

Active/Active: From a software perspective, this describes applications (or resources) that can exist as multiple instances in a cluster. This means that both nodes can be active, servicing clients.
Active/Passive: This term describes applications that run as a single instance in a cluster. It generally also means that one node typically sits idle until a failover occurs. However, you can have an Active/Passive implementation of an application in an Active/Active cluster. [8]

3. Experimental Results

This section deals with the experimental performance evaluation. We use the LoadRunner testing tool, version 11.5, to measure network load balancer performance on the basis of throughput and response time. Network Load Balancing can be applied to different configurations, such as one, two, or three machines. The results below show the behavior with a single machine and present the different characteristics and outputs in the form of graphs: performance, hits per second, throughput, average response time, and so on.

Scenario graph: This shows the total transactions per second, the total number of transactions, and the number of Vusers.

Fig. 1.3 Scenario Graph

Based on Throughput: This graph displays the amount of throughput (in bytes) on the web server during the load test. Throughput represents the amount of data that the Vusers received from the server at any given second. This graph helps you evaluate the amount of load the Vusers generate, in terms of server throughput.

Fig.1.4 System Throughput

Based on Response Time: This graph displays the average response time taken to perform transactions during each second of the load test. It helps you determine whether the performance of the server is within the acceptable minimum and maximum transaction performance time ranges defined for your system.
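
To make these two metrics concrete, the rough Python sketch below derives per-second throughput and average response time from a handful of invented transaction records; the record format and the numbers are purely illustrative and do not reflect LoadRunner's internal calculations.

```python
# Rough illustration of how the two reported metrics could be derived from
# raw transaction records. The record format (second of the test, bytes
# received, transaction duration) and the sample values are invented; this
# is not LoadRunner's internal calculation.
from collections import defaultdict

# (second of the test, bytes received by the Vuser, transaction duration in seconds)
records = [(0, 4200, 0.031), (0, 3900, 0.027), (1, 4100, 0.029), (1, 4300, 0.024)]

throughput = defaultdict(int)   # bytes received from the server per second
durations = []

for second, nbytes, duration in records:
    throughput[second] += nbytes
    durations.append(duration)

average_response_time = sum(durations) / len(durations)

print(dict(throughput))                           # e.g. {0: 8100, 1: 8400} bytes/sec
print(round(average_response_time, 3), "sec")     # average transaction response time
```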

Fig.1.5 Average Response Time

Comparison Table: This table shows the comparison between one, two, and three machines. These are the same properties that were shown above using graphs.

Table 1.2 Comparison Table for the Manual Scenario

From the above table, after measuring the performance on the basis of the given parameters, it can be concluded that when a single machine is working the response time is 0.029 sec, when two machines are handling requests the response time is 0.025 sec, and when three machines are handling requests the response time is 0.023 sec. Hence, using virtual server clustering technology, throughput increases and response time decreases.

4. Conclusion

In this paper the main emphasis is placed on the utilization of commodity-based hardware and software components, using virtualization technology and load balancing to achieve high performance and scalability while keeping the price low. The final conclusion is that client response times can be reduced by using an agent, referred to here as a load balance manager, that can obtain some knowledge of the system state for selecting processing elements to service requests. In comparing one, two, and three machines, we found that the response time is inversely proportional to the number of machines. This paper

also showed how to model internet traffic workloads that can be used in further research to develop algorithms and protocols for the internet and distributed networks. Scalability is achieved by transparently adding or removing a node in the cluster. High availability is provided by detecting node or daemon failures and reconfiguring the system appropriately. The solutions require no modification to either the clients or the servers, and they support most TCP and UDP services.

References:

1. Ankush P. Deshmukh et al., "Applying Load Balancing: A Dynamic Approach," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 6, June 2012, ISSN: 2277 128X.
2. Satoru Ohta and Ryuichi Andou, "WWW Server Load Balancing Technique Based on Passive Performance Measurement," published by IEEE, 978-1-4244-3388-9/09.
3. "Analysis on Linux Server Clustering," Polytechnic University of Valencia, February 2004.
4. Lisa Phifer, white paper, "Deploying Load-Balanced Server Clusters with Cobalt RaQ and Coyote Point Equalizer," 2002, www.ipcortex.co.uk/article.rhtm/equalizerwhite-paper-5719.pdf
5. Malaysian Public Sector Open Source Software Programme Phase II, "Comparison Report on Load Balancer Techniques to Measure Linux Virtual Server Performance," March 2008.
6. KJ (Ken) Salchow, Jr., Sr. Manager, Technical Marketing and Syndication, white paper, "Load Balancing 101: Nuts and Bolts."
7. "Introduction to Network Load Balancing," January 21, 2005, http://technet.microsoft.com/en-us/library/cc786264(v=ws.10).aspx
8. Hai Jin, Rajkumar Buyya, Mark Baker, "Cluster Computing: Tools, Applications, and Australian Initiatives for Low Cost Supercomputing."
9. "Computer Cluster," Wikipedia, December 2006, http://en.wikipedia.org/wiki/computer_cluster
10. "Using Microsoft Cluster Services for Virtual Machine Clustering," http://searchitchannel.techtarget.com/feature/using-microsoft-cluster-services-for-virtual-machine-clustering