Design and Analysis of Replicated Servers to Support IP-Host Mobility in Enterprise Networks

Jason P. Jue* and Dipak Ghosal**
Department of Electrical and Computer Engineering*, Department of Computer Science**
University of California, Davis, CA 95616

Abstract

Mobility support in IP networks requires the use of servers to forward packets to mobile hosts and to maintain information pertaining to a mobile host's location in the network. In one proposed protocol, the mobile-IP protocol, the location and packet forwarding functions are provided by servers referred to as home agents. These home agents may become the bottleneck when there is a large number of mobile hosts in the network. In this paper, we consider the design and analysis of load balancing mechanisms for multiple home agents in the mobile-IP protocol. We propose a load balancing scheme in which a home agent may periodically transfer the control of a mobile host to another home agent in the same network through the use of functions supported in mobile-IP. The periodicity with which this transfer is performed affects the load balancing gain as well as the associated overhead. We analyze our load balancing mechanism under bursty traffic arrival conditions using a Markov Modulated Poisson Process. The results show that the proposed load balancing scheme can yield modest gains over alternative load balancing strategies.

1 Introduction

A key objective in providing mobility support is to ensure that mobile hosts are able to send and receive messages when they move within a network or from one network to another. In IP networks, routing is based on the IP address of the host [3]. When a host moves from one IP subnet to another, all IP packets addressed to that host will be routed to the host's old network. In order for the mobile host to receive messages at its new location, it must obtain an IP address in the new network, and messages sent to the original IP address need to be forwarded to the new IP address. In the mobile-IP protocol [1], this support is provided by two servers, namely, the foreign agent and the home agent. A foreign agent supports the mobile host in the foreign network, while the home agent maintains the mobility binding and forwards packets to the mobile host when it has roamed out of the network.

When there is a large number of mobile hosts, these servers can become bottlenecks. In such a situation, multiple foreign agents and home agents may be employed in an IP subnet. In order to effectively utilize the processing capacity of these mobility agents, it is necessary to evenly distribute the workload among them. The load at a particular mobility agent will depend on the number of active connections to mobile hosts which it is serving; therefore, evenly distributing the mobile hosts among mobility agents will not necessarily guarantee a balanced load. For example, two home agents each serving the same number of mobile hosts may have significantly different loads if the mobile hosts being served by one home agent are receiving a large amount of traffic while the mobile hosts being served by the other home agent are not receiving any traffic.

In this paper, we attempt to solve this problem by presenting a load balancing scheme in which mobility agents take turns serving a mobile host. A mobility agent, after providing packet-forwarding functions to a mobile host for a certain amount of time, uses functions supported by the mobile-IP protocol to transfer control of the mobile host to another mobility agent.

As a result, a mobile host which is receiving large amounts of traffic may be served by a number of different mobility agents rather than being statically assigned to a single mobility agent. One key design parameter is the time duration for which a mobility agent services a mobile host. If the time is small, then the potential load balancing gains are high, since the input traffic can be uniformly distributed among the mobility agents. However, decreasing the time duration will result in higher overhead due to the increased number of transfers. Another design parameter is the selection policy for choosing the next mobility agent. In this paper, we will consider a random selection policy due to its simplicity.

In this paper we develop a queueing model to analyze the performance of the load balancing scheme. In order to better approximate the bursty characteristics of TCP/IP traffic, we use a Markov Modulated Poisson Process (MMPP) to model source traffic. We corroborate the analytical results with simulation.

This paper is organized as follows. In Section 2, we discuss the basic mobile-IP protocol; we focus primarily on the aspects of the protocol dealing with mobile host registration and routing. In Section 3, we describe the mobile-IP protocol with multiple home agents and discuss load balancing schemes. In Section 4, we develop a queueing model and outline the analysis to determine the mean response time of a packet at a home agent. The results from the queueing analysis are discussed in Section 5 and compared to simulation results. Finally, in Section 6, we conclude this paper and discuss some key future research directions.

2 Mobile-IP Protocol

The basic mobile-IP protocol [1] has evolved out of the efforts of the mobile-IP working group. The basic mobile-IP protocol defines the following entities:

1. Mobile Host: an IP host that, by virtue of its mobility, changes its point of attachment to the network. It is configured with 1) a permanent IP address, 2) the addresses of one or more Home Agents, and 3) for each pending registration, the Care-of-Address and the MAC address of a foreign agent, if applicable.

2. Home Agent: a router that maintains the current mobility binding for a mobile host. The mobility binding maps a permanent IP address to a temporary IP address. The home agent is typically a router in the network identified by the permanent IP address of the mobile host.

3. Foreign Agent: a router in the network that the mobile host is visiting. For each mobile host currently visiting the network, the foreign agent maintains 1) the media address of the mobile host, 2) the permanent IP address of the mobile host, 3) the IP address of the home agent, and 4) the lifetime for which the information is considered to be valid.

4. Care-of-Address: either the temporary IP address that is assigned to the mobile host in the foreign network (by a DHCP (Dynamic Host Configuration Protocol) server), or the IP address of the foreign agent. In the latter case, the foreign agent acts as a proxy for the mobile host, and all communication to the mobile host must pass through the foreign agent.

When a mobile host roams into a new network, it registers by sending a registration request to the foreign agent. In the request, the mobile host provides its permanent IP address and the IP address of the home agent. The foreign agent performs the registration process on behalf of the mobile host by sending the registration request to the home agent. This message contains the permanent IP address of the mobile host and the IP address of the foreign agent. The message is encapsulated (or tunneled) within an IP packet whose source and destination addresses are the addresses of the foreign agent and the home agent, respectively. When the home agent receives the registration request, it updates the mobility binding of the mobile host and sends an acknowledgment back to the foreign agent. When the foreign agent receives the packet, it updates its own table and relays the reply to the mobile host. In the home network, the home agent uses a gratuitous ARP [3] to update the ARP cache of all hosts and routers that currently have a cache entry for the mobile host. When the mobile host moves back into its home network, it de-registers with the home agent, which in turn sends a gratuitous ARP to update the cache entries in the hosts and routers.

In the basic mobile-IP protocol, IP packets destined to a mobile host that is outside of its home network are routed through the mobile host's home agent. The home agent acts as a proxy for the mobile host, and responds to ARP queries for the mobile host with a proxy ARP reply [3]. The result is that the mobile host's IP address is now bound to the link-layer address of the home agent. When the home agent receives an IP packet destined to the mobile host, it uses the mobility binding to encapsulate (or tunnel) the IP packet within another IP packet. The new IP packet has a destination address which is the care-of-address of the mobile host and a source address which is the IP address of the source host. When the foreign agent receives the packet, it de-capsulates it and sends it to the mobile host. Acknowledgements are sent directly back to the source.

The forward-path routing may be inefficient since messages must be routed first to the home agent before being sent to the mobile destination. If the source host and the mobile host are in the same network, but not in the home network of the mobile host, then messages will experience unnecessary delay since they must first be routed to the home agent, which resides in the home network of the mobile host.

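The forwarding path just described can be summarized in a small sketch. The Python fragment below is illustrative only: the class and field names (MobilityBinding, HomeAgent, and so on) are assumptions made for this example, not part of the mobile-IP specification. It keeps a mobility-binding table at a home agent and shows the IP-in-IP encapsulation decision for packets addressed to a registered mobile host.

```python
# Illustrative sketch only: a home agent's mobility-binding table and the
# tunnelling (IP-in-IP encapsulation) step described above. All names here
# are hypothetical, invented for this example.
import time
from dataclasses import dataclass

@dataclass
class MobilityBinding:
    permanent_ip: str      # mobile host's home (permanent) IP address
    care_of_address: str   # foreign agent or co-located care-of address
    identification: int    # registration identification
    lifetime: float        # registration lifetime T_reg, in seconds
    registered_at: float   # time the registration was accepted

class HomeAgent:
    def __init__(self, address):
        self.address = address
        self.bindings = {}  # permanent IP -> MobilityBinding

    def register(self, permanent_ip, care_of_address, identification, lifetime):
        """Accept a (relayed) registration request and update the binding."""
        self.bindings[permanent_ip] = MobilityBinding(
            permanent_ip, care_of_address, identification, lifetime, time.time())
        # A gratuitous ARP would now bind the mobile host's IP address to this
        # home agent's link-layer address so that it attracts the traffic.

    def forward(self, packet):
        """Tunnel a packet {'src': ..., 'dst': ..., 'payload': ...} if a valid
        mobility binding exists; otherwise hand it back unchanged."""
        binding = self.bindings.get(packet["dst"])
        if binding is None or time.time() - binding.registered_at > binding.lifetime:
            return packet
        # Encapsulate the original packet inside a new IP packet addressed to
        # the care-of address; the foreign agent de-capsulates and delivers it.
        return {"src": packet["src"], "dst": binding.care_of_address, "inner": packet}
```
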
One way to improve the mobile-IP protocol is route optimization [2], in which messages are routed directly to the care-of address instead of first being sent to the home agent. This is accomplished by having the correspondent host maintain a binding cache which contains the care-of addresses of the mobile hosts to which it is sending messages. One disadvantage of route optimization is that it requires the source host to be aware of the mobile host's mobility, whereas in the original mobile-IP, the mobility of a destination host is transparent to the source host. This loss of transparency requires the source host to run additional software in order to provide route optimization. The benefit of our load balancing scheme is that it is able to handle traffic from sources which do not provide route optimization, as well as initial bursts of traffic from sources which do provide route optimization. In the remainder of the paper, we will consider load balancing for multiple home agents; however, the work may be generalized to also accommodate foreign agents.

3 Mobile-IP With Multiple Home Agents

In large networks with a high penetration of mobile users, the home agent can become the bottleneck. One way to reduce this bottleneck is to deploy multiple home agents. The servers may either be replicated, with each server maintaining the same information, or partitioned, in which case each server maintains information for a subset of mobile hosts. For the case in which partitioned multiple home agents are used, a mobile host may be allowed to dynamically select a home agent at random upon moving to a foreign network. Upon registering with that home agent, the mobile host will continue to be served by the same home agent until it returns to its home network. This approach has the effect of uniformly distributing mobile hosts among the multiple home agents; however, it does not necessarily balance the load. If traffic to different mobile hosts is uneven, then some home agents will have to handle more traffic than others, even if the number of mobile hosts per home agent is balanced. Also, since TCP traffic tends to be bursty, it is possible that some home agents will be receiving bursts of packets, causing queues to build up, while other home agents will be receiving no packets. Thus, rather than partitioning multiple home agents, it may be better to replicate the home agents and allow the dynamic transfer of mobile hosts from one home agent to another. In the following section we present a mechanism for accomplishing such transfers and discuss some of the performance issues involved in implementing the load balancing scheme.

3.1 Load Balancing Mechanisms

In the proposed load balancing scheme, each mobile host has the IP addresses of all home agents. When a mobile host sends a registration request, it will randomly choose one of the home agents to service the request. As defined in the mobile-IP protocol, we assume that each mobile registration request has a unique identification and a lifetime which defines the time for which the registration is valid. The latter will be referred to as T_reg. Along with the mobility binding, the home agent also maintains the registration identification, the lifetime, and a stream transfer time counter, denoted T_stt, which defines the duration for which the home agent services a mobile host before forwarding control to another home agent. (The term stream refers to the packet stream destined for a particular mobile host and may consist of multiple TCP streams.)

Once the T_stt counter expires, the home agent will select another home agent and forward a registration request with the same identification field to the selected home agent, thus allowing the new home agent to serve the mobile host. The T_stt counter only begins to decrement once packets start arriving for a particular mobile host. This prevents excessive forwarding of registration requests when no packets are being sent to a mobile host.

When a home agent receives a registration request, it performs the following actions:

1. If the identification of the registration request is the same as the identification of a binding already in its table, the home agent will perform a gratuitous ARP, updating the caches in local hosts and routers and causing all packets destined for the mobile host to be sent to this home agent. The home agent will also reset the T_stt counter associated with the binding.

2. If the identification of the registration request is different from the identifications of all bindings already in the table, then the registration request is a new request and the following actions will take place: The home agent will send a mobility binding update message to all of the other home agents and, in parallel, broadcast a gratuitous ARP which will update the caches in the hosts and routers. The T_stt counter associated with the binding is reset. All home agents receiving the mobility binding update message will update their mobility bindings with the new information, but will not send out gratuitous ARPs. The T_stt counter is ignored by these home agents.

When the lifetime T_reg expires, the home agents will de-register the mobile host as specified in the mobile-IP protocol.

Instead of using a transfer time counter, one may also implement a packet counter which counts the number of packets received for a particular mobile host. When the count reaches some chosen value, the registration is forwarded to another home agent. Another option is to monitor the queue size at a home agent. When the queue size exceeds some threshold, the home agent forwards one or more registrations to another home agent.

The two factors which must be considered when implementing a load balancing policy are the transfer time, T_stt, and the discipline for selecting the next home agent. There are a number of different policies for selecting the next home agent:

1. Random: the next home agent is chosen randomly from the remaining home agents.

2. Round-Robin: the next home agent is chosen using a round-robin policy.

3. Join the Shortest Queue (JSQ): the next home agent is the one which is least loaded.

In this study we will focus on the random policy because of its analytical tractability. The JSQ policy is optimal, but its implementation results in additional complexity.

Another important issue is the impact of T_stt. If T_stt is very small, then it is possible to randomly and uniformly distribute the load among the home agents on a packet-by-packet basis, and thereby achieve high load balancing gains. However, this incurs high overhead, since each time a stream is transferred to another home agent, the receiving home agent broadcasts a gratuitous ARP which must be processed by all of the hosts, including the other home agents. If almost every packet in a TCP/IP stream resulted in a gratuitous ARP, the network would soon become overloaded. On the other hand, if T_stt is large, then a mobile host is bound to a home agent for a long time. This may result in poor load balancing and consequently poor performance.

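To make the transfer mechanism concrete, the sketch below shows one possible shape, in Python, of the T_stt-expiry handoff and the two registration-handling cases listed above. It is a sketch under the paper's description, not an implementation from the paper: the class, method, and log names are invented for illustration, and gratuitous ARPs and binding updates are only recorded rather than actually transmitted.

```python
# Hedged sketch (assumed names) of the home-agent actions when the stream
# transfer timer T_stt expires and when a forwarded registration arrives.
import random

class ReplicatedHomeAgent:
    def __init__(self, name):
        self.name = name
        self.bindings = {}   # permanent IP -> registration identification
        self.log = []        # records protocol actions, for illustration only

    def on_stt_expiry(self, permanent_ip, all_agents):
        # Random selection policy: pick one of the remaining home agents and
        # forward the registration request (same identification field) to it.
        others = [a for a in all_agents if a is not self]
        next_agent = random.choice(others)
        next_agent.handle_registration(permanent_ip, self.bindings[permanent_ip], all_agents)

    def handle_registration(self, permanent_ip, identification, all_agents):
        if self.bindings.get(permanent_ip) == identification:
            # Case 1: known registration. A gratuitous ARP attracts the stream
            # here, and the T_stt counter for this binding is reset.
            self.log.append(("gratuitous_arp", permanent_ip))
        else:
            # Case 2: new registration. Send a binding update to every other
            # home agent (no ARP there), then broadcast a gratuitous ARP locally.
            for agent in all_agents:
                if agent is not self:
                    agent.bindings[permanent_ip] = identification
            self.bindings[permanent_ip] = identification
            self.log.append(("binding_update+gratuitous_arp", permanent_ip))

# Example: two home agents, one mobile host handed from one to the other.
ha1, ha2 = ReplicatedHomeAgent("HA1"), ReplicatedHomeAgent("HA2")
ha1.handle_registration("10.0.0.7", identification=42, all_agents=[ha1, ha2])
ha1.on_stt_expiry("10.0.0.7", all_agents=[ha1, ha2])
```
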
4 Queueing Model

In order to quantify the performance of the load balancing scheme, we develop a queueing model of the system. In the model there are N identical home agents; each home agent is modeled as a single-server queue. The service time is exponentially distributed, with mean service rate µ packets/second. Since all home agents are identical, we study the queue at a single home agent and aggregate the effects of the other home agents.

There are S sources. Each source represents the aggregate of all traffic being forwarded to a single mobile host. The traffic generated by each source is modeled by a three-state Markov Modulated Poisson Process, as shown in Figure 1.

Figure 1. The three-state MMPP model of a source.

The three states are denoted S_(0,0), S_(1,0), and S_(0,1). When the process is in state S_(0,0), the source is off, and there are no arrivals. When the process is in state S_(1,0), the source is on, and arrivals occur according to a Poisson process with rate λ packets/second. In state S_(0,1), the source is on, but it is sending its packets to a home agent other than the home agent under consideration; thus, the arrival rate is zero. The source turns on with rate α_1 and turns off with rate α_2. When a source turns on, it chooses one of the home agents randomly, so a particular home agent is selected with probability 1/N. The transition from state S_(0,0) to S_(1,0) can then be modeled as having rate α_1/N, while the transition from state S_(0,0) to state S_(0,1) has rate α_1(N-1)/N.

When a source is transmitting to the tagged home agent, it will continue to transmit to that home agent for the duration of the stream transfer time, T_stt, after which it will randomly select one of the N home agents and begin transmitting to this new home agent. In order to make the analysis tractable, we approximate the time for which a source transmits packets to a home agent as having an exponential distribution with mean T_stt + 1/λ. The 1/λ term results from the fact that the timer isn't started until a packet arrives at the home agent. Also, since the selected home agent may be the same as the tagged home agent, the source transfers to another home agent with probability (N-1)/N, and with probability 1/N it remains with the tagged home agent. When the source is transmitting to a home agent other than the tagged home agent, it will begin transmitting again to the original home agent after an average period of N(T_stt + 1/λ) seconds. The transition rates from state S_(1,0) to state S_(0,1) and from state S_(0,1) to state S_(1,0) are then given by γ_1 and γ_2, respectively, where

    γ_1 = (N - 1) / [N (T_stt + 1/λ)],    γ_2 = 1 / [N (T_stt + 1/λ)].    (1)

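As a quick check of Eq. (1), the helper below computes γ_1 and γ_2 from N, T_stt, and λ. The function name and example values are purely illustrative.

```python
# Sketch of Eq. (1): per-source transition rates between "on, sending to the
# tagged home agent" (S_(1,0)) and "on, sending elsewhere" (S_(0,1)).
def transition_rates(N, T_stt, lam):
    mean_hold = T_stt + 1.0 / lam          # mean time a stream stays at one agent
    gamma1 = (N - 1) / (N * mean_hold)     # S_(1,0) -> S_(0,1): leave the tagged agent
    gamma2 = 1.0 / (N * mean_hold)         # S_(0,1) -> S_(1,0): return to the tagged agent
    return gamma1, gamma2

print(transition_rates(N=2, T_stt=10.0, lam=1.0))   # -> (0.0454..., 0.0454...)
```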

Using the fact that the superposition of a number of MMPP sources is also an MMPP source, we may combine a number of these sources into a single MMPP. Combining S sources results in an MMPP whose state (i, j) records the number of sources transmitting to the tagged home agent and the number of sources transmitting to other home agents, with i + j ≤ S. The state diagram for this process is shown in Figure 2.

Figure 2. State diagram for the superposition of S MMPP sources.

We now need to account for overhead packets, which correspond to the transfer of registration packets between the home agents. Since these transfers occur during the on-period of a connection, the arrival process of these overhead packets is correlated with the on-periods of the data traffic. In this analysis, we assume that the two arrival processes are independent. Furthermore, instead of solving a two-priority queueing model, we use the shadow server approximation proposed in [9] to analyze the two priority classes. In this approximation, we account for the high-priority traffic by appropriately modifying the service time of the data packets. This can be done by multiplying the service rate of data packets by (1 - U_o), where U_o is the utilization of the server by overhead packets and is given by

    U_o = λ_o / µ_o,    (2)

where λ_o is the aggregate arrival rate of overhead packets to the tagged home agent from all the sources, and the service time for overhead packets is assumed to be exponential with rate µ_o = µ/C.

At any given time, the expected number of sources in the on-period, N_on, is given by

    N_on = S α_1 / (α_1 + α_2).    (3)

This follows from the observation that the probability that a source is in the on-period is equal to α_1/(α_1 + α_2).

In order to compute λ_o, we need to determine the number of overhead packets that are generated during the on-period of a source. The mean duration of an on-period is 1/α_2. During this period, a source generates overhead packets at rate 1/(T_stt + 1/λ) packets per second. Therefore, an average of 1/(α_2(T_stt + 1/λ)) overhead packets arrive during an on-period. However, an additional overhead packet is generated after the end of the on-period (the last timer T_stt expires after the source has entered the off-period). On average, this final packet is generated at time T_stt after the on-period ends. The rate of overhead packets generated by a single stream, λ_on, is then given by

    λ_on = [1/(α_2(T_stt + 1/λ)) + 1] / [1/α_1 + 1/α_2 + T_stt].    (4)

From Eq. (4), and using the argument that each home agent receives only 1/N of the overhead packets, λ_o is then given by

    λ_o = S α_1 [1 + α_2(T_stt + 1/λ)] / [N (T_stt + 1/λ) (α_1 + α_2 + α_1 α_2 T_stt)].    (5)

The effective service rate is calculated as

    µ_eff = µ - C S α_1 [1 + α_2(T_stt + 1/λ)] / [N (T_stt + 1/λ) (α_1 + α_2 + α_1 α_2 T_stt)].    (6)

From the Q and Λ matrices corresponding to the MMPP arrival process, and the service time distribution H(x), the MMPP/G/1 queueing model may be solved for the mean response time. An approach for solving such models is given in [7].

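The overhead model above reduces to a short calculation. The sketch below evaluates Eqs. (4)-(6) for a given parameter set; it assumes the α_1/α_2 notation used in the reconstruction above, and the example values are illustrative rather than taken from the paper's experiments.

```python
# Sketch of Eqs. (4)-(6): fold the registration-transfer overhead into an
# effective service rate via the shadow-server approximation.
def effective_service_rate(S, N, lam, alpha1, alpha2, T_stt, mu, C):
    hold = T_stt + 1.0 / lam
    # Eq. (4): overhead-packet rate generated by one source over an on/off cycle.
    lam_on = (1.0 / (alpha2 * hold) + 1.0) / (1.0 / alpha1 + 1.0 / alpha2 + T_stt)
    # Eq. (5): each home agent sees 1/N of the overhead from all S sources.
    lam_o = S * lam_on / N
    # Overhead service rate is mu_o = mu / C, so U_o = lam_o / mu_o = C * lam_o / mu.
    U_o = C * lam_o / mu
    # Eq. (6): data packets are served at the rate left over after the overhead.
    return mu * (1.0 - U_o)

print(effective_service_rate(S=4, N=2, lam=1.0, alpha1=1/50, alpha2=1/50,
                             T_stt=10.0, mu=1.5, C=1))   # -> about 1.40
```
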
5 Results and Discussion

In order to validate the analytical model of the random selection policy derived in the previous section, we develop a simulation model of the network. The simulation model does not aggregate the effect of each home agent as in the queueing analysis, but instead models each home agent individually as a two-class non-preemptive priority queue. Also, in the simulation, the time between transfers is the sum of two exponential random variables with means T_stt and 1/λ, while in the analytical model this time is approximated by a single exponential random variable with mean T_stt + 1/λ.

In our numerical results, our primary measure of interest is the mean packet delay. The delay is defined as the time from a packet's arrival at a home agent to its departure from the home agent. In all the subsequent figures, the mean delay is shown normalized to the average packet service time. We study the effects of the transfer time on the average packet delay for various levels of burstiness and for different overhead costs. The transfer time determines the granularity of the load balancing. A transfer time of zero corresponds to load balancing on a packet-by-packet basis. A transfer time of infinity would normally correspond to a mobile host being served by the same home agent for the lifetime of the registration. However, in our analytical model, when a burst is over (the source returns to state S_(0,0) in Figure 1), the source does not retain information about the home agent with which it was associated. Thus, when the source returns to the on mode, it randomly selects a new home agent. Therefore, in our results, a transfer time of infinity corresponds to load balancing on a burst-by-burst basis. This will be referred to as burst-level load balancing.

In order to compare the performance of our load balancing scheme to situations in which there are no transfers, we consider the situation in which the streams are evenly distributed among all the home agents and no transfers take place. This will be referred to as the equal partition scheme. Note that this is a conservative comparison, since equally distributing the streams among home agents is the best possible scheme for cases in which no transfers are allowed.

For the purpose of this study, we define the degree of burstiness, β, as the ratio of the peak arrival rate to the mean arrival rate. The mean arrival rate to a single home agent, λ_mean, is defined as

    λ_mean = S λ α_1 / [N (α_1 + α_2)],

so β = λ/λ_mean. In our numerical examples, we adjust the degree of burstiness by changing the parameters α_1, α_2, and λ, while keeping the mean arrival rate the same.

In Figure 3, we plot delay vs. transfer time for low levels of burstiness (β = ). We examine the case in which there is no overhead (C=0) and the case in which the overhead is equivalent to the service time of a single packet (C=1). In order to measure the potential gains of the load balancing scheme, we also plot the delay for the burst-level load balancing scheme.

Figure 3. Mean packet delay vs. stream transfer time for C=0 and C=1. (S=4, N=2, λ=, =/50, = /50, µ=.5)

For the case in which there is overhead, we see that for low transfer times the delay increases rapidly. This increase is caused by the overhead packets that are generated every time a stream is transferred from one home agent to another. As the transfer time is increased, the delay drops due to less overhead. For high transfer times, the delay increases as the load becomes more unbalanced across the home agents. From the figure we also observe that the load balancing mechanism performs worse than the equal partition case, even when there is no overhead. The difference can be explained by the fact that for the random scheme, while on average each home agent is supporting an equal number of sources, there is a nonzero probability that a single home agent will be supporting more sources than the other home agents. This results in a higher variability in the number of packets at a given home agent, and thus a higher average delay. The simulation results are also shown in the figure. While the simulation results corroborate the analysis for most of the range, for low values of T_stt the analysis differs from the simulation. This difference is due to the shadow server approximation used to model the overhead.

In situations with highly bursty traffic (β = 5), as shown in Figure 4, the potential gain of the load balancing scheme is higher than for the case of non-bursty traffic. When traffic is very bursty, packets tend to build up in a queue quickly, resulting in high delays. By breaking up a burst and spreading it over a number of home agents, the queue at any single home agent doesn't grow as quickly.

Figure 4. Mean packet delay vs. stream transfer time, T_stt, for C=0 and C=1. (S=4, N=2, λ=5, =/90, = /0, µ=.5)

In general, load balancing gains are high when the burst arrival rate is higher than a home agent's service rate, or when there is a high probability that the aggregated arrival rate of multiple sources at any point in time is higher than the service rate.

We now plot the load balancing gain versus T_stt. We define the percent load balancing gain, G, as

    G = 100 (R_bll - R) / R_bll,

where R is the mean packet delay for a given T_stt, and R_bll is the mean packet delay for the burst-level load balancing scheme.

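For illustration, the definitions of λ_mean, β, and G translate directly into code; the numbers below are hypothetical and are not the parameter values used in the figures.

```python
# Hypothetical worked example of the burstiness and gain definitions above.
def burstiness(S, N, lam, alpha1, alpha2):
    """beta = lam / lam_mean, with lam_mean = S*lam*alpha1 / (N*(alpha1 + alpha2))."""
    lam_mean = S * lam * alpha1 / (N * (alpha1 + alpha2))
    return lam / lam_mean

def gain_percent(R, R_bll):
    """Percent load balancing gain relative to burst-level load balancing."""
    return 100.0 * (R_bll - R) / R_bll

print(burstiness(S=4, N=2, lam=2.0, alpha1=1/150, alpha2=1/50))  # -> 2.0
print(gain_percent(R=6.0, R_bll=10.0))                           # -> 40.0
```
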
Figure 5 plots the load balancing gain, G, for two different values of load, obtained by changing the mean packet service rate µ.

Figure 5. The percent load balancing gain, G, vs. stream transfer time, T_stt, for two different loads. (S=4, N=2, λ=, =/50, =/50)

From the plot we observe that higher load yields higher load balancing gains. The maximum gains are reasonably high (35% to 40%) even with the low degree of burstiness. Finally, we note that the value of T_stt that maximizes the gain does not change much with higher load. However, when the load is increased to the point where all of the home agents are always busy, we expect that transfers will not result in significant load balancing gains.

In Figure 6, we plot the load balancing gain for two different system sizes: one with 4 sources and 2 home agents, and the other with 8 sources and 4 home agents. The results show that the load balancing gains are higher for larger system sizes.

This is because for larger systems there is a higher probability that burst-level load balancing will result in an uneven distribution of sources among home agents, creating higher delays. This situation allows for greater load balancing gains when the sources are allowed to transfer from one home agent to another.

Figure 6. The percent load balancing gain, G, vs. stream transfer time, T_stt, for two different system sizes (4 sources, 2 home agents; 8 sources, 4 home agents). (λ=, =/50, = /50, µ=.5)

6 Conclusions and Future Work

When the Mobile IP protocol is deployed, subnets which support a large number of mobile hosts will need to have multiple mobility agents in order to provide an adequate level of service. In this paper we presented a means of evenly distributing the load among multiple home agents in the Mobile IP protocol. By providing a mechanism which allows incoming packet streams to be transferred from one home agent to another, we may achieve gains over schemes in which each packet stream is only served by a single home agent. The gains are highest when there is a high level of burstiness, which is the case for TCP/IP traffic.

This work may be extended to the case in which the underlying network is an ATM network [8]. In this situation, IP-to-ATM address translations are performed by an entity known as the Address Resolution Protocol server (ARP server). This centralized server may be used to implement better load balancing schemes by keeping track of ATM connections and balancing the load on an ATM-connection level.

7 References

[1] C. Perkins (Editor), IP Mobility Support, IETF Internet Draft, 3 March 1996.
[2] David B. Johnson, Route Optimization in Mobile IP, IETF Internet Draft, Draft-IETF-mobileip-optim-0.txt, July 1995.
[3] Richard W. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley Professional Computing Series, 1994.
[4] S. Deering, ICMP Router Discovery Messages, RFC 1256.
[5] D. Eager, E. Lazowska, and J. Zahorjan, Dynamic load sharing in homogeneous distributed systems, IEEE Transactions on Software Engineering, vol. SE-12, pp. 662-675, May 1986.
[6] R. Mirchandaney, D. Towsley, and J. A. Stankovic, Analysis of the effect of delays on load sharing, IEEE Transactions on Computers, vol. 38, no. 11, pp. 1513-1525, Nov. 1989.
[7] W. Fischer and K. Meier-Hellstern, The Markov-modulated Poisson process (MMPP) cookbook, Performance Evaluation, vol. 18, pp. 149-171, 1992.
[8] J. P. Jue and D. Ghosal, Performance and Architectural Issues in Supporting IP-Host Mobility in ATM Networks, University of California, Davis, Department of Computer Science, Technical Report #TR-CSE-96-0, April 1996.
[9] K. C. Sevcik, Priority scheduling disciplines in queueing network models for computer systems, in Proc. IFIP Congress, Amsterdam: North-Holland, 1977.