APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM




A1.1 INTRODUCTION

PPATPAN is implemented in a test bed of five Linux systems arranged in a multihop topology. The system is implemented at the user level in C. The overhead of the C program written to simulate ATP is compared with the overhead of the C program written to simulate PPATPAN. The work reported in Chase et al (2001) evaluates the performance of a system based on CPU usage and memory usage; the same technique is adopted here. The configuration of the system on which the implementation is done is: Fedora release 9 (Sulphur), Linux kernel 2.6.25-14.fc9 (i686), GNOME 2.22.1, 4939 MiB memory, and two Genuine Intel CPU T2050 processors @ 1.60 GHz. The implemented system is stress tested at the intermediate node by allowing five flows from source to destination.

A1.2 TOOL (SYSTEM MONITOR)

The System Monitor application enables monitoring of system processes and of the usage of system resources; it can also be used to modify the behaviour of the system. To display the current usage of a system resource, position the mouse pointer over the corresponding graph in the applet; a tool tip displays the current usage as a percentage. The System Monitor window contains the following tabbed sections:

System: Displays basic information about the hardware and software of the computer.
System Status: Shows the currently available disk space.
Processes: Shows active processes and how processes are related to each other. It provides detailed information about individual processes and enables control of active processes.
Resources: Displays the current usage of the following system resources: CPU time, memory and swap space, and network usage.
%CPU: Displays the percentage of CPU time currently being used by the process.
CPU Time: Displays the amount of CPU time that has been used by the process.

Using the System Monitor tool, the CPU usage and memory usage of the user level implementations of ATP and PPATPAN are discussed in this section.

A1.3 CPU UTILIZATION

The central processing unit, as viewed from Linux, is always in one of the following states:

idle: available for work, waiting
user: executing high-level functions, data movement, math, etc.
system: performing kernel functions, I/O and other hardware interaction
nice: like user, but a job with low priority that will yield the CPU to a task with higher priority

By noting the percentage of time spent in each state, overloading of one state or another can be discovered. Too much idle time means nothing is being done; too much system time indicates a need for faster I/O or additional devices to spread the load. Each system has its own profile when running its workload. By watching these numbers over time, the percentage of CPU utilization for that workload on that system can be determined.

A1.4 MEMORY

When running processes use up the available memory, the system slows down as processes get paged or swapped. Memory utilization graphs help to point out these memory problems and also give the memory usage.

A1.5 RESULTS

CPU Usage

Figure A1.1 CPU Usage of ATP (Color View)
Figure A1.2 CPU Usage of PPATPAN (Color View)
Figure A1.3 CPU Usage of ATP (Normal View)
Figure A1.4 CPU Usage of PPATPAN (Normal View)
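The user/nice/system/idle breakdown described above is exposed by the Linux kernel through the aggregate "cpu" line of /proc/stat, which is where graphical monitors such as GNOME System Monitor read it. A minimal sketch of the computation, with the function name and the sample jiffy counters invented for illustration (on a real system the counters would come from reading /proc/stat twice with a delay):

```python
def cpu_state_percentages(before, after):
    """Each argument is a (user, nice, system, idle) tuple of cumulative
    jiffy counters, as found on the 'cpu' line of /proc/stat; the usage
    split over the interval is the normalized difference of the two."""
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    states = ('user', 'nice', 'system', 'idle')
    return {s: 100.0 * d / total for s, d in zip(states, deltas)}

snap1 = (1000, 50, 300, 8650)   # hypothetical counters at time t0
snap2 = (1400, 50, 400, 8950)   # hypothetical counters at time t1
print(cpu_state_percentages(snap1, snap2))
```

Watching these percentages over repeated intervals gives exactly the per-state utilization profile discussed in Section A1.3.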

Load Average (1 Minute)

Figure A1.5 Load Average of ATP
Figure A1.6 Load Average of PPATPAN

Physical Memory

Figure A1.7 Physical Memory of ATP
Figure A1.8 Physical Memory of PPATPAN

The graphs shown in Figures A1.1 to A1.8 compare the CPU usage and memory usage of ATP and PPATPAN. The results show that more CPU cycles are required by the user level program of ATP than by that of PPATPAN. These graphs are obtained by executing the user level programs of ATP and PPATPAN in a fixed topology.

The graphs shown in Figures A1.1 to A1.4 are snapshots of CPU usage. From the results, it can be seen that the processing power consumed is higher in ATP (79%) than in PPATPAN (28%). The snapshots in Figures A1.5 and A1.6 show the load average of the system when ATP and PPATPAN are implemented at user level. They clearly depict that the load of ATP is high compared to PPATPAN. As explained already, ATP requires unnecessary copying of data compared to PPATPAN; therefore, in the implementation, three additional function calls are used to emulate the behaviour of ATP, whereas this is not done for PPATPAN.

The memory usage of ATP and PPATPAN is shown in Figures A1.7 and A1.8. The memory usage of PPATPAN is 0.4 whereas that of ATP is 0.8. According to this observation, ATP needs two times more memory than PPATPAN.

The user level implementation in Linux is done to provide an additional result substantiating that the processing power required by ATP is greater than that of PPATPAN. The results of the user level implementations of ATP and PPATPAN in the Windows system and the Linux system are not compared. Moreover, the magnitudes of the results in the two systems cannot be compared, because different tools are used to evaluate the performance: in the Windows system the Process Explorer tool is used, whereas in the Linux system the System Monitor tool is used, and the two tools project their results in different ways. Therefore, it is not wise to compare the results obtained on the two systems.

But in both the Windows and the Linux system, the CPU usage and memory usage of ATP are greater than those of PPATPAN.

A1.6 CONCLUSION

In this section, the CPU usage and memory usage of the user level implementations of ATP and PPATPAN in the Linux system are presented, whereas in chapter two the CPU usage and memory usage of ATP and PPATPAN in the Windows system are presented. The results reveal that the CPU usage, load average and physical memory of ATP are greater than those of PPATPAN. But the CPU usage and memory usage of ATP and PPATPAN in the Windows system and in the Linux system are not identical. This is attributed to the fact that the Process Explorer tool used in the Windows system and the System Monitor tool used in the Linux system are completely different in the mechanism and scenario in which they are deployed.

APPENDIX 2

DEPLOYMENT OF ADAPTIVE FAIR SHARE OF CONGESTION WINDOW SIZE IN TRADITIONAL TCP

A2.1 INTRODUCTION

In this section, the fairness related issues of traditional TCP are investigated. This preliminary analysis of the window based traditional TCP protocol is done to chart the path for analyzing the state-of-the-art rate based transport protocol ATP. The analysis here is carried out on concurrent TCP flows; the analysis of concurrent ATP flows is described in chapter 3.

Fairness

When network resources are shared by multiple TCP connections, the question arises whether each connection gets a fair share of the resources. Users (application programmes) generally care about how much bandwidth they get, so one definition of fairness is that each connection should get the same bandwidth. Therefore, to provide fairness, the congestion control algorithm of TCP must be altered so that it tends to give roughly equal windows to competing connections. This section proposes a small modification to the way traditional TCP increases its congestion window, to provide a fair share of the available bandwidth and also to increase the performance (number of packets transmitted per RTT). The work is carried out for TCP connections emerging from the same node.

Unfairness Issue of Traditional TCP

In traditional TCP congestion control, slow start results in poor performance because TCP spends more time in slow start. When a timeout occurs, the congestion window is initialized to one segment and the congestion threshold is set to half the present value. Each time an ACK is received, the congestion window is increased by one segment until the congestion window reaches the congestion threshold. This behaviour is too slow, so most connections cannot use the maximum speed. This means that if the resources are allocated unfairly to start with, all connections increase their windows at the same rate, but a connection with a larger share will decrease more quickly than one with a smaller share. Therefore, the additive increase multiplicative decrease (AIMD) policy tends to produce an allocation that does not provide a fair share. This issue is addressed in this section.

The source has no way of knowing the capacity of the network beforehand, so to avoid congestion it gradually increases its window from one packet over several round trip times. This is a frustrating underuse of the network and a hard problem. To overcome this drawback, a new technique is used in which the congestion window size is incremented based on the average window size of the concurrent applications. In the proposed technique, it is assumed that most losses are due only to wireless errors and that loss does not occur due to congestion. Because of this, the capacity of the link will obviously be utilized to the maximum extent within a few RTTs.

A2.2 PROPOSED SCHEME

In an ad hoc network, the topology changes dynamically. TCP spends more time in slow start, so the time required for TCP to attain the maximum available capacity of the link is high. Within this time

the topology of the network will again change, leading to route failure. Therefore, it can be said that TCP does not use the available capacity of the link in an ad hoc network efficiently. TCP spends most of its time in slow start and thus requires more time to effectively use the full bandwidth of the link. While TCP leaves slow start and is about to use the maximum link capacity by increasing its congestion window according to AIMD, the topology of the ad hoc network changes and causes route failures. This decreases the throughput and causes unfairness in terms of the instantaneous fairness ratio. In this section, the fairness related issues of traditional TCP in a multihop ad hoc wireless network are investigated.

In the proposed system, it is assumed that more than one application has the same sender and receiver end systems (concurrent flows between the same sender and receiver). If a timeout occurs for one or more applications, then the congestion window size is fixed as the smallest congestion window size among the concurrent applications divided by the number of concurrent applications. This method is followed to fix the congestion window size of an application for which a timeout has occurred, instead of restarting from one segment. For example, consider two applications with window sizes of 512 and 32 emerging from the same sender. If a timeout occurs, the smaller of the current window sizes, here 32, is divided by the total number of applications currently running, so 32/2 = 16 is fixed as the window size of both applications. By this method, the number of segments transmitted in a given number of RTTs can be increased and the fairness index can be improved.
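The restart rule described above (smallest concurrent congestion window divided by the number of concurrent flows) can be sketched in a few lines; the function name is ours, not part of the thesis:

```python
def restart_cwnd(windows):
    """Proposed rule: after a timeout, every concurrent flow between the
    same sender and receiver restarts from the smallest current congestion
    window divided by the number of flows, instead of from one segment
    as in traditional TCP slow start."""
    return min(windows) // len(windows)

print(restart_cwnd([512, 32]))  # the example above: 32 / 2 = 16
```

With windows of 256 and 16 at the moment of the timeout (the RTT-based example that follows), the same rule yields 16/2 = 8.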

A2.3 COMPARISON OF EXISTING AND PROPOSED SYSTEMS BASED ON NUMBER OF RTT

Existing System

By assuming that the timeout occurs at RTT-7, according to the existing method the congestion window size is decreased to 1 and starts increasing from one, as shown in Figure A2.1. Since the timeout is considered for both applications, the window size of the two applications restarts from 1. According to this scheme, the total number of segments transmitted is equal to 1044.

  Application 1 (window size per RTT): 1 2 4 16 32 64 128 256, then after the timeout at RTT-7: 1 2 4 8 16 32 64 128
  Application 2 (starts later): 1 2 4 8 16, then after the timeout: 1 2 4 8 16 32 64 128
  Total number of transmitted segments = 1044

Figure A2.1 Congestion Window Size in Traditional TCP (Based on RTT)

Table A2.1 Fairness Index of Existing System (Based on RTT)

  Application program 1: 758 transmitted segments
  Application program 2: 286 transmitted segments
  Fairness index: 0.8301
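The totals in Figure A2.1 and Table A2.1 can be re-checked by summing the per-RTT window sizes exactly as listed in the figure (sequences copied verbatim from the text):

```python
# Existing system, RTT-based case: window sizes per RTT before and after
# the timeout at RTT-7, as listed in Figure A2.1.
app1 = [1, 2, 4, 16, 32, 64, 128, 256] + [1, 2, 4, 8, 16, 32, 64, 128]
app2 = [1, 2, 4, 8, 16] + [1, 2, 4, 8, 16, 32, 64, 128]
print(sum(app1), sum(app2), sum(app1) + sum(app2))  # 758 286 1044
```

The sums reproduce the per-application counts of Table A2.1 and the figure's total of 1044 segments.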

Proposed System

  Application 1 (window size per RTT): 1 2 4 16 32 64 128 256, then after the timeout at RTT-7: 8 16 32 64 128 256 512 1024
  Application 2 (starts later): 1 2 4 8 16, then after the timeout: 8 16 32 64 128 256 512 1024
  Total number of transmitted segments = 4614

Figure A2.2 Congestion Window Size in Proposed System (Based on RTT)

Table A2.2 Fairness Index of Proposed System (Based on RTT)

  Application program 1: 2543 transmitted segments
  Application program 2: 2071 transmitted segments
  Fairness index: 0.9896

In the proposed scheme, after the occurrence of the timeout, instead of restarting the congestion window size from 1, it restarts from the average window size derived from the application having the lower window size. Here the average window size means the smallest congestion window size among the concurrent applications divided by the number of applications running at the sender. The timeout occurs at RTT-7, at which point the window sizes of application-1 and application-2 are 256 and 16 respectively. The least of these two window sizes is 16, so 16/2 = 8 is used as the window size of both applications to restart transmission after the timeout, as shown in Figures A2.1 and A2.2. According to

this scheme, the total number of segments transmitted is equal to 4614. Tables A2.1 and A2.2 show the fairness index of the existing system and the proposed system. The number of transmitted segments in the existing system for application programme-1 is 2.65 times greater than that of application programme-2, whereas in the proposed system it is only 1.23 times greater. The fairness index (Jain's fairness index) of the existing system is 0.8301, compared to 0.9896 for the proposed system. From this it is clear that the fairness index of the proposed system is higher than that of the existing system.

A2.4 COMPARISON OF THE EXISTING AND PROPOSED SYSTEMS BASED ON OCCURRENCE OF TIMEOUT ON EXCEEDING THE MAXIMUM LOAD (512 SEGMENTS)

Existing System

  Application 1 (window size per RTT): 1 2 4 16 32 64 128 256 512, then after the timeout at RTT-8: 1 2 4 8 16 32 64 128 256
  Application 2 (starts later): 1 2 4 8 16 32, then after the timeout: 1 2 4 8 16 32 64 128 256
  (Bold values in the original figure, 512 and 32, mark the transmissions that trigger the timeout.)
  Total number of transmitted segments = 1556

Figure A2.3 Congestion Window Size in Traditional TCP (Based on Load)
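The load-based totals can likewise be re-derived by summation. On our reading of Figure A2.3, the bold transmissions that trigger the timeout (512 for application 1, 32 for application 2) are not counted toward the totals:

```python
# Existing system, load-based case: window sizes per RTT as listed in
# Figure A2.3, excluding the timeout-triggering transmissions (512 and 32).
app1 = [1, 2, 4, 16, 32, 64, 128, 256] + [1, 2, 4, 8, 16, 32, 64, 128, 256]
app2 = [1, 2, 4, 8, 16] + [1, 2, 4, 8, 16, 32, 64, 128, 256]
print(sum(app1), sum(app2), sum(app1) + sum(app2))  # 1014 542 1556
```

The sums match the per-application counts of Table A2.3 and the figure's total of 1556 segments.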

Table A2.3 Fairness Index of Existing System (Based on Load)

  Application program 1: 1014 transmitted segments
  Application program 2: 542 transmitted segments
  Fairness index: 0.9157

It is assumed that the link connected to the sender can carry only 512 segments; if the number of segments exceeds this limit, a timeout occurs. That is, the maximum load or capacity is assumed to be 512 segments. During RTT-8, the total number of segments transmitted is 512 + 32 = 544, which is greater than 512, so a timeout occurs according to the assumption. After the occurrence of the timeout, the congestion window size of the existing scheme is increased starting from 1, as shown in Figure A2.3. The transmissions that exceed the maximum load of 512 segments and cause the timeout are shown in bold in the figure.

Proposed System

  Application 1 (window size per RTT): 1 2 4 16 32 64 128 256 512, then 16 32 64 128 256 64 128 256 128
  Application 2 (starts later): 1 2 4 8 16 32, then 16 32 64 128 256 64 128 256 128
  (Bold values in the original figure mark the timeout-triggering transmissions.)
  Total number of transmitted segments = 1812

Figure A2.4 Congestion Window Size in Proposed System (Based on Load)

Table A2.4 Fairness Index of Proposed System (Based on Load)

  Application program 1: 1142 transmitted segments
  Application program 2: 670 transmitted segments
  Fairness index: 0.9365

The congestion window sizes of the proposed scheme are shown in Figure A2.4. The window sizes shown in bold mark the occurrences of timeout; the timeouts occur because, according to the assumption, the maximum capacity is only 512 segments. The comparison clearly reveals that the total number of segments transmitted in the proposed scheme is far greater than in the existing scheme. This increases the performance of TCP. Tables A2.3 and A2.4 show the fairness index of the existing system and the proposed system. The number of transmitted segments in the existing system for application programme-1 is 1.87 times greater than that of application programme-2, whereas in the proposed system it is only 1.70 times greater. The fairness index of the existing system is 0.9157, compared to 0.9365 for the proposed system. From this it is clear that the fairness index of the proposed system is higher than that of the existing system.

A2.5 COMPARISON OF FAIRNESS INDEX

From the analytical results, a bar chart is drawn to compare the fairness index of the existing and proposed systems. The fairness index comparison chart reveals that the fairness index of the proposed system is greater than that of the existing system. In TCP, the number of

segments transmitted depends upon the minimum of the receiver window size and the congestion window size. Usually the receiver window size is greater than the congestion window size; therefore, the number of segments transmitted depends only on the congestion window size. In the proposed method, the average congestion window size is taken as the window size for concurrent flows emerging from the same sender. Because the average window size is used as the window size of the concurrent flows, the number of segments transmitted in every concurrent flow gets equalized: among the concurrent flows, the bandwidth is equally shared in terms of the number of segments transmitted. This is the reason for the better fairness index of the proposed system compared to the existing system.

Figure A2.5 Comparison of Fairness Index (bar chart of the fairness index of the existing and proposed systems, for the RTT based and the load based cases; values range from about 0.83 to 0.99)
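The fairness values compared above are Jain's fairness index, (sum of x)^2 / (n * sum of x^2), applied to the per-application segment counts. A short check against the tabulated values (the text's 0.8301 comes out as about 0.8303 with this formula, i.e. it matches to within rounding):

```python
def jain_index(xs):
    """Jain's fairness index over per-flow throughputs; 1.0 is perfectly fair."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

print(round(jain_index([758, 286]), 4))    # existing, RTT case: ~0.8303
print(round(jain_index([2543, 2071]), 4))  # proposed, RTT case: 0.9896
print(round(jain_index([1014, 542]), 4))   # existing, load case: 0.9157
```

The proposed system's counts are closer to equal, which is exactly what pushes the index toward 1.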

A2.6 CONCLUSION

In this section, a new technique called adaptive fair share of congestion window size for traditional TCP, intended to speed up short transfers such as web page downloads, is presented. This technique enables the TCP connection to reduce the overhead of slow start by increasing the congestion window size based on the average window size. It allows the link capacity of the network to be used to an optimum level. The key results are: large improvements in the utilization of network bandwidth, improvement of TCP performance (number of transmitted segments divided by RTT), and fair sharing of bandwidth among the applications emerging from the same sender and destined for the same receiver.

APPENDIX 3

DEPLOYMENT OF SMART START (SS) TECHNIQUE FOR OPTIMAL USE OF AVAILABLE CAPACITY IN TRADITIONAL TCP

A3.1 INTRODUCTION

The traditional TCP congestion control mechanism encounters a number of new problems and suffers poor performance when deployed in ad hoc wireless networks. In traditional TCP congestion control, slow start results in poor performance: during slow start, the number of TCP segments transmitted per unit time is considerably small compared to the capacity of the network, and it increases gradually with each RTT. Under utilization of network resources is a big problem in traditional TCP. Although slow start uses an exponential increase of the congestion window size, the increase mechanism is still non-aggressive by design, as it takes several RTT periods before a connection operates at its true available bandwidth. That is, attaining maximum channel utilization takes considerable time. This is not a serious problem in wired networks, as connections are expected to spend most of their lifetimes in the congestion avoidance phase. However, because of the dynamic nature of ad hoc networks, connections are prone to frequent losses, which in turn result in frequent timeouts and hence more slow start phases. From the data presented in Sundaresan et al (2005), it can be observed that connections spend a considerable amount of time in the slow start phase, with the proportion of time going above 50 percent at higher loads. Essentially, this means that connections spend a significant portion of

their lifetime probing for the available bandwidth instead of operating at the available bandwidth. Therefore, the slow start mechanism followed in traditional TCP is not suitable for ad hoc networks. Traditional TCP doubles its window size on the reception of acknowledgements. In contrast, it is decided here to increase the window size to three or four times the previous window size. In this section, an attempt is made to increase the performance of TCP by introducing a new technique called Smart Start (SS). In Smart Start, the number of TCP segments transmitted is not doubled but increased to three times the older window size. The time required to achieve the maximum link capacity is calculated for window size growth factors of two, three and four. The performance is evaluated through a numerical example. The result shows that the network resource is optimally utilized within a minimum number of RTTs.

A3.2 SMART START TECHNIQUE

The slow start and congestion avoidance algorithms must be used by a TCP sender to control the amount of outstanding data being injected into the network. To implement these algorithms, two variables are added to the TCP per-connection state. The congestion window (CWND) is a sender side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment, while the receiver's advertised window (RWND) is a receiver side limit on the amount of outstanding data. The minimum of CWND and RWND governs data transmission. In slow start, after the first successful transmission and acknowledgement of a TCP segment, TCP increases the window to two segments. After successful transmission of these two segments and completion of their acknowledgements, the window is increased to four segments, then eight segments, then sixteen, and so on, doubling up to the maximum window size advertised by the receiver or until congestion finally does occur (Behrouz 2004).
In slow start, even though the window size is increased quickly, the connection still takes a

long time to reach the maximum utilization of the bandwidth available on the link. To overcome this, a procedure of increasing the window size in multiples of three or four, rather than in multiples of two for each acknowledgement, is proposed.

A3.3 ASSUMPTIONS

RTT = 500 time units; an acknowledgement is received every RTT; used bandwidth = sender window size (SWS) * 3. The units of all parameters are treated as general (e.g. milliseconds (ms) for RTT, number of bytes for sender window size, Kbps/Mbps for used bandwidth).

Table A3.1 SWS in Multiples of 2

  Sl.No.  Time (RTT)  Sender window size  Used bandwidth
  1       0           1                   3
  2       500         2                   6
  3       1000        4                   12
  4       1500        8                   24
  5       2000        16                  48
  6       2500        32                  96
  7       3000        64                  192
  8       3500        128                 384
  9       4000        256                 768
  10      4500        512                 1536
  11      5000        1024                3072
  12      5500        2048                6144
  13      6000        4096                12288
  14      6500        8192                24576
  15      7000        16384               49152
  16      7500        32768               98304
  17      8000        65536               196608
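Tables A3.1 to A3.3 all follow the same recurrence (SWS multiplied by a fixed factor each RTT, used bandwidth = SWS * 3), so they can be generated mechanically; the function name is ours:

```python
def sws_table(factor, rows, rtt=500, bw_per_sws=3):
    """Reproduce Tables A3.1-A3.3: each RTT the sender window size (SWS)
    is multiplied by `factor`; used bandwidth = SWS * 3 as assumed in
    Section A3.3. Returns (time, SWS, used bandwidth) rows."""
    table, sws = [], 1
    for i in range(rows):
        table.append((i * rtt, sws, sws * bw_per_sws))
        sws *= factor
    return table

print(sws_table(2, 17)[-1])  # last row of Table A3.1: (8000, 65536, 196608)
```

Calling sws_table(3, 11) and sws_table(4, 9) reproduces the last rows of Tables A3.2 and A3.3 in the same way.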

In Table A3.1, the sender window size is multiplied by 2 for each RTT, that is, sender window size = previous sender window size * 2. Taking the unit of RTT as milliseconds, the table shows that it takes 8 seconds for the TCP flow to reach 100 Mbps at an RTT of 500 ms.

Table A3.2 SWS in Multiples of 3

  Sl.No.  Time (RTT)  Sender window size  Used bandwidth
  1       0           1                   3
  2       500         3                   9
  3       1000        9                   27
  4       1500        27                  81
  5       2000        81                  243
  6       2500        243                 729
  7       3000        729                 2187
  8       3500        2187                6561
  9       4000        6561                19683
  10      4500        19683               59049
  11      5000        59049               177147

In Table A3.2, the sender window size is multiplied by 3 for each RTT, that is, sender window size = previous sender window size * 3. Taking the unit of RTT as milliseconds, Table A3.2 shows that it takes 5 seconds for the TCP flow to reach 100 Mbps at an RTT of 500 ms.

In Table A3.3, the sender window size is multiplied by 4 for each RTT, that is, sender window size = previous sender window size * 4. Taking the unit of RTT as milliseconds, the table shows that it takes 4 seconds for the TCP flow to reach 100 Mbps at an RTT of 500 ms.

Table A3.3 SWS in Multiples of 4

  Sl.No.  Time (RTT)  Sender window size  Used bandwidth
  1       0           1                   3
  2       500         4                   12
  3       1000        16                  48
  4       1500        64                  192
  5       2000        256                 768
  6       2500        1024                3072
  7       3000        4096                12288
  8       3500        16384               49152
  9       4000        65536               196608

Table A3.4 Comparison of Maximum Time Taken by the Connection to Attain 100 Mbps

  Sl.No.  Growth factor of SWS  Maximum time to attain 100 Mbps (seconds)
  1       2                     8
  2       3                     5
  3       4                     4

By examining the data given in Table A3.4, it can be understood that if the sender window size is increased to three or four times the previous window size on each reception of an acknowledgement, the connection attains maximum link utilization in 5 or 4 seconds instead of 8, i.e. considerably faster than the existing system (traditional TCP).
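The comparison in Table A3.4 can be computed directly from the assumptions of Section A3.3 (RTT = 500 ms, used bandwidth = SWS * 3, and 100 Mbps expressed as 100000 in the table's Kbps-like bandwidth units); the function name is ours:

```python
def seconds_to_reach(target, factor, rtt_ms=500, bw_per_sws=3):
    """RTTs needed until the used bandwidth (SWS * 3) first reaches
    `target`, expressed in seconds, with SWS multiplied by `factor`
    each RTT as in the Smart Start numerical example."""
    sws, t = 1, 0
    while sws * bw_per_sws < target:
        sws *= factor
        t += rtt_ms
    return t / 1000

for f in (2, 3, 4):
    print(f, seconds_to_reach(100_000, f))  # Table A3.4: 8, 5 and 4 seconds
```

Growth factors 3 and 4 cut the ramp-up time from 8 seconds to 5 and 4 seconds respectively, which is the claimed benefit of Smart Start.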

A3.4 CONCLUSION

In this section, a technique called Smart Start (SS) was presented, which speeds up the sending rate of the sending TCP so that the available capacity of the link is used optimally. The technique reduces the overhead of slow start by increasing the congestion window size more quickly, allowing the capacity of a link in the network to be used to an optimum level. The key results are large improvements in the utilization of network bandwidth and in TCP performance (number of transmitted segments divided by RTT).

APPENDIX 4

THEORETICAL ANALYSIS OF RATE BASED TRANSPORT PROTOCOLS

A general overview of the theoretical analysis for determining the bandwidth of a wireless link is provided in this section. On this basis, the theory for the four new protocols examined in this thesis can be developed. Bandwidth estimation is an important issue in rate based transport protocols. More specifically, how much bandwidth is allocated to the control channel and how much to the data channel are important factors affecting the final performance of a rate based transport protocol, and the control packet and data packet lengths are the important factors in deciding the best partitioning of the bandwidth. This section gives an overview of the theoretical analysis for determining the bandwidth of the control channel in rate based transport protocols. It serves as the basis for modelling the rate based transport protocols PPATPAN, PFRTP, DRF and the node-to-node rate based transport protocol discussed in this thesis.

First, some parameters are defined to assist the theoretical study. The time used for transmitting an RTS and a CTS is defined as a control-time-slot (CTslot), and the time used for transmitting a data packet and an ACK as a data-time-slot (DTslot). If the backoff window, SIFS and DIFS are not considered, the bandwidth estimation should allow one CTslot to equal one DTslot; otherwise, too much bandwidth is wasted in the data channel waiting for the completion of an RTS/CTS exchange in the control channel. However, the control channel should not be allowed to occupy so much bandwidth that the bandwidth left for the data channel becomes too small. Thus, at most four CTslots should fit within one DTslot. With this observation and with

the assumption of a data packet size of 1500 bytes with an IP header of 82 bytes, a reasonable bandwidth estimate for the channel is as follows:

(RTS + CTS) / ((RTS + CTS) + Data + ACK) < CP < 4(RTS + CTS) / (4(RTS + CTS) + Data + ACK)    (A4.1)

(48 + 56) / ((48 + 56) + (1500 + 82) + 38) < CP < 4(48 + 56) / (4(48 + 56) + (1500 + 82) + 38)    (A4.2)

6.03% < CP < 20.43%    (A4.3)

where CP refers to the control channel bandwidth as a percentage. Hence, when the total bandwidth is 2 Mbps, the control channel should receive at least 0.12 Mbps and at most 0.42 Mbps. Thus, if it is assumed that RTSs arrive according to a Poisson process, the average arrival rate λ should satisfy

MinControlBandwidth / (RTS + CTS) < λ < MaxControlBandwidth / RTS    (A4.4)

1200 < λ < 8000    (A4.5)

The probability of a successful RTS transmission, Psuccess, equals the probability that no RTS arrives during the propagation or medium-sense time (τ). The success probability function is therefore:

Psuccess = e^(−λτ)    (A4.6)
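The bounds on CP can be checked numerically with the packet sizes assumed above; a minimal sketch:

```python
# Numerical check of the CP bounds in Eqs. (A4.1)-(A4.3), using the packet
# sizes assumed in the text (bytes): RTS = 48, CTS = 56,
# data = 1500 + 82 (IP header), ACK = 38.
RTS, CTS = 48, 56
DATA = 1500 + 82
ACK = 38

cp_low = (RTS + CTS) / ((RTS + CTS) + DATA + ACK)           # one CTslot per DTslot
cp_high = 4 * (RTS + CTS) / (4 * (RTS + CTS) + DATA + ACK)  # four CTslots per DTslot
print(f"{cp_low:.2%} < CP < {cp_high:.2%}")  # 6.03% < CP < 20.43%
```

The two ratios reproduce the 6.03% and 20.43% bounds quoted in Eq. (A4.3).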

Thus the failure probability is Pfailure = 1 − e^(−λτ). The number of retries k needed, on average, for a successful, collision-free RTS with an error rate of 10^−5 is calculated as follows:

(1 − e^(−λτ))^k ≤ 10^−5    (A4.7)

k ≥ log(10^−5) / log(1 − e^(−λτ))    (A4.8)

k must be an integer, so if λ = 1200, k = 2; and if λ = 8000, k = 3. A reasonable bandwidth estimation should allow the RTS/CTS exchange to be completed before the completion of a transmission on the data channel. Therefore, the time used by the control channel to finish k attempts should equal the data transmission time. During the k contention attempts, the control channel transmits at least k RTS and one CTS packet, and at most k RTS and k CTS packets, if SIFS, DIFS and backoff time are ignored. So the mean value is k RTS and (k + 1)/2 CTS packets. Hence, the following equation is obtained:

(k * RTS + ((k + 1)/2) * CTS) / Control_BW = (Data + ACK) / Data_BW    (A4.9)

CP = (k * RTS + ((k + 1)/2) * CTS) / (k * RTS + ((k + 1)/2) * CTS + Data + ACK)    (A4.10)

According to the above function, when k = 2, CP = 9.9%, and when k = 3, CP = 15.6%. Thus, theoretically, without considering SIFS, DIFS or backoff time and without considering the hidden node problem, the control channel should occupy 9.9% to 15.6% of the total bandwidth, under the condition that the control and data channels are fully utilized.
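The retry calculation in Eqs. (A4.7)-(A4.8) can be sketched as follows. The propagation/medium-sense time τ is not given numerically in the text, so the value below is a hypothetical choice used only to illustrate the calculation (it happens to reproduce k = 2 and k = 3 for the two λ bounds):

```python
import math

# Sketch of Eqs. (A4.7)-(A4.8): the smallest integer k such that
# (1 - e^(-lambda*tau))^k <= 1e-5. TAU is a hypothetical
# propagation/medium-sense time chosen only for illustration; the text
# does not state its numerical value.
ERROR_RATE = 1e-5
TAU = 2.6e-6  # hypothetical tau, in seconds

def retries(lam: float, tau: float = TAU) -> int:
    """Smallest integer k with (1 - e^(-lam*tau))^k <= ERROR_RATE."""
    p_fail = 1.0 - math.exp(-lam * tau)
    return math.ceil(math.log(ERROR_RATE) / math.log(p_fail))

print(retries(1200))  # 2 with this tau
print(retries(8000))  # 3 with this tau
```

Note that the inequality in Eq. (A4.8) flips direction because log(1 − e^(−λτ)) is negative; taking the ceiling gives the smallest admissible integer k.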