On the Efficiency and Fairness of TCP over Wired/Wireless Networks. Dimitrios Vardalis. Master of Science in Computer Science


On the Efficiency and Fairness of TCP over Wired/Wireless Networks
by Dimitrios Vardalis
Master of Science in Computer Science
State University of New York at Stony Brook, 2001

The continuous growth in the number of wireless components comprising the underlying network infrastructure of the Internet, in conjunction with some fundamental differences they exhibit from the wired ones, created the need for exploring TCP's behavior in such compound wired/wireless environments. Many researchers have investigated the efficiency of TCP in heterogeneous environments and proposed solutions to improve it, but presently no studies on TCP's fairness in such environments are available. In this work, we address the issue of the efficiency and fairness of TCP in networks consisting of both wired and wireless components under various conditions such as wireless error, congestion introduced by multiple competing flows, and different Round Trip Time (RTT). We base this thesis on testing two TCP versions, implementing a conservative and an aggressive congestion control strategy. In order to properly evaluate the protocol performance, we define a new metric for the fairness of a system with multiple flows. We also employ a handful of other metrics (e.g. Overhead, Goodput) that reflect recently raised requirements from wireless transport protocols. Throughout the experiments, we identify the cases where one congestion control strategy is favored over the other, and analyze the factors that lead to these results.

To Gogo, Xeni, and Gabrilo,
The best family in the world

Table of Contents

List of Figures
List of Tables
Acknowledgments
1 Introduction
1.1 Internet Evolution and TCP
1.1.1 Congestion control
1.1.2 The Wireless Days
1.2 The AIMD Principle
1.3 Fairness Issues in Wired Networks
1.3.1 Variant propagation delay
1.3.2 Asymmetry and head-start
1.3.3 Packet dropping policy
1.3.4 Large number of flows
1.3.5 MAIMD
1.4 Thesis Description
1.5 Presentation Plan
2 TCP Overview
2.1 General Protocol Characteristics
2.2 TCP Tahoe
2.3 TCP Reno
3 Testing Environment
3.1 Simulated Conditions
3.2 Data Transfers
3.3 Randomness in the Environment
4 Testing Methodology and Parameters of Significance
4.1 Aggressive vs. Conservative Strategies
4.2 Testing Stages
4.2.1 Low RTT-one flow (LR1)
4.2.2 Low RTT-two flows (LR2)
4.2.3 Low RTT-three flows (LR3)
4.2.4 High RTT-one flow (HR1)
4.2.5 High RTT-two flows (HR2)
4.3 Testing Parameters
4.3.1 Error phase duration
4.3.2 Data size
4.3.3 Number of repetitions
4.4 Performance Metrics
4.4.1 Throughput
4.4.2 Goodput
4.4.3 Bandwidth Utilization
4.4.4 Overhead
4.4.5 Fairness
4.5 Capturing Fairness
5 Bandwidth Utilization
6 Low RTT, Two Flows
6.1 Same Protocol
6.2 Different Protocol
7 Low RTT, Three Flows
7.1 Same Protocol
7.2 Different Protocol
8 High RTT, Two Flows
8.1 TCP in a High-RTT Environment
8.2 Fairness in a High-RTT Environment
9 Conclusions
10 Open Issues
10.1 Future Work
10.2 Energy Concerns
References

List of Figures

Figure 1: Throughput as a function of load
Figure 2: Throughput achieved by one, two and three Tahoe flows
Figure 3: Time to complete transfer for one and two flows of the same protocol
Figure 4: Fairness index for two Tahoes, two Renos, and one Tahoe-one Reno
Figure 5: Time for Reno and Tahoe competing flows
Figure 6: Throughput achieved by Tahoe and Reno competing flows
Figure 7: Overhead introduced by Tahoe and Reno when running in two competing flows
Figure 8: Time for a single Tahoe and a single Reno flow, and for three Tahoe and three Reno flows
Figure 9: Average goodput of three Tahoe and three Reno competing flows
Figure 10: Fairness index for three Tahoe and three Reno flows
Figure 11: Average time for three Tahoe and three Reno flows, in two sets of experiments
Figure 12: Average overhead for three Tahoe and three Reno flows in two sets of experiments
Figure 13: Average overhead for three Tahoe and three Reno flows, in two sets of experiments
Figure 14: Throughput achieved by Tahoe with high and low RTT
Figure 15: Time to complete transfer for one and two flows of the same protocol
Figure 16: Time for Tahoe and Reno competing flows
Figure 17: Fairness index for two Tahoes, two Renos, and a Tahoe and a Reno

List of Tables

Table 1: Time values for five runs under 0% error rate
Table 2: Time values for five runs under 1% error rate

Acknowledgments

I would like to thank the members of my Thesis committee: Hussein Badr, Tzi-ker Chiueh, and Vassilios Tsaoussidis. I am also grateful to H. Badr for his valuable assistance and responsiveness. I would especially like to thank V. Tsaoussidis for closely watching, directing, and guiding me throughout each stage of this work. I am greatly obliged to him for his faith and support over the last five years.

1 Introduction

The Transmission Control Protocol (TCP) has been the dominant reliable transport-layer protocol ever since the appearance of its original version in 1981 [28]. The motivation behind TCP was to add reliability on top of an inherently unreliable IP network. The original TCP incorporated a sliding-window mechanism which, in conjunction with packet acknowledgments and segment sequence numbers, guaranteed reliable data transmission as well as flow control.

1.1 Internet evolution and TCP

In the early 1980s, network congestion did not constitute a focus of concern due to the limited number of interconnected hosts, and TCP's original version was deemed adequate. As the number of hosts that joined the Internet increased, congestion problems, caused by lack of available bandwidth, became more and more evident. The deficiency of the original TCP was the absence of a mechanism that would adjust the sending rate in response to changes in the network load, namely congestion control. As a result, the network would flood and its overall performance would be severely degraded, leading to a series of congestion collapses in the mid-1980s.

1.1.1 Congestion control

It was not until 1988 that a widely accepted congestion control algorithm was finally suggested [20]. This algorithm employed the Additive Increase Multiplicative Decrease (AIMD) principle. According to AIMD, a protocol should increase its sending rate by a constant amount and decrease it by a fraction of its current value, each time an adjustment is necessary. This mechanism is the basis of virtually all TCP implementations used in today's Internet, since it is proven to converge to both a desirable level of efficiency and a desirable level of fairness among competing flows [12]. In the years that followed the establishment of AIMD as the standard algorithm to be used in TCP, the Internet underwent numerous changes and rapidly increasing popularity.
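The AIMD adjustment just described can be condensed into a short sketch. This is our own illustration, not TCP source code; the parameter values shown (add one segment per adjustment, halve on congestion) are the customary TCP choices, and the function name is ours.

```python
# Illustrative AIMD rate adjustment (our sketch, not actual TCP code).
# On each feedback event the sender either adds a constant amount
# (additive increase) or multiplies its window by a fraction below one
# (multiplicative decrease).
def aimd_update(window, congestion_detected, increase=1.0, decrease=0.5):
    if congestion_detected:
        return max(1.0, window * decrease)  # multiplicative decrease
    return window + increase                # additive increase
```

Repeatedly applying this rule produces the familiar sawtooth pattern of the congestion window over time.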
With the availability of widespread services such as e-mail and the World Wide Web (WWW), the Internet became accessible to a broader range of people, including users lacking any particular familiarity with computers. Although new competing technologies emerged and the demands on a transport-layer protocol increased greatly, TCP not only survived but also became an integral ingredient of the Internet, undergoing only minor modifications. These modifications are reflected in the different in-use TCP versions (TCP-Tahoe, TCP-Reno, TCP-NewReno) [20, 2, 14], experimental TCP versions

(TCP-SACK, TCP-Vegas) [25, 10], as well as special-purpose TCP versions (T/TCP) [9]. Some of these versions are described in forthcoming sections of this work.

1.1.2 The Wireless Days

Among other innovations in computer communication, wireless networks became a very important part of the Internet in a number of different forms. Wireless networking applications include mobile devices that roam through the cells of a wireless network, wireless LANs connecting stationary hosts in a small area, and satellite links. The proliferation of wireless networks, combined with some fundamental differences they have from wired networks, led to a need for reevaluating TCP performance in combined wired/wireless environments. More specifically, the way that TCP handles network congestion revealed a major deficiency in networks with wireless components. The congestion control algorithm, embedded in most TCP versions, uses packet losses as an indication of heavy network load and adjusts the sending rate accordingly. In a wireless environment, transient link interruptions that can be caused by weak signal, weather conditions, physical obstacles, or handoff procedures introduce highly bursty errors that are not related to congestion. In these cases, TCP makes a false assumption as to the real cause of the error and consequently may take the wrong action in response. Studies on TCP over wireless LANs can be found in [21, 22, 36], while in [23] the authors examine high-RTT TCP connections in the presence of wireless errors. Besides the preexisting requirements that a transport protocol be efficient and fair, mobile networking introduced energy expenditure issues, due to the limited power supply of mobile devices. The observations mentioned above triggered a series of new propositions and debates in the research community as to what the solution to this problem should be.

Among the proposed solutions to improve TCP's efficiency, we find a large variety of mechanisms focusing on different characteristics of wireless networks. As a general way of distinguishing between random error and congestion, the authors in [29] propose an Explicit Congestion Notification (ECN) from the network. In [1] and [19] the researchers describe schemes optimized for channels with high bandwidth and propagation delay, characteristics that correspond to satellite links. In networks where a mobile host is connected to the Internet through a base station, there is a clear distinction between the wired and the wireless parts of the path. In such cases, TCP improvements involve caching on the base station and locally retransmitting lost packets, by either splitting the TCP connection [3, 11] or employing a specialized link-layer protocol [5, 6, 7, 30].

Finally, numerous end-to-end solutions have been proposed, using more sophisticated acknowledgment techniques [13], probing mechanisms to detect network conditions [31, 32], and feedback from the wireless network adapter on the receiver [16].

1.2 The AIMD principle

As mentioned earlier, the basic concept of AIMD was proven to yield satisfactory results when the network infrastructure consisted of hard-wired components. One year after the appearance of AIMD in 1988, the authors in [12] provided a detailed analysis of different congestion control strategies, as well as of what renders the existence of such a strategy in a transport protocol crucial. Below we give a few important points made in this work.

The major issue of concern to a transport protocol is its efficiency. On a network link crossed by a number of different flows running the same protocol, the ideal situation is to utilize as much of the available bandwidth as possible without introducing congestion (i.e. packets queuing up on the router). In Figure 1, we see the achieved throughput as a function of the network load. It becomes clear that we need to avoid overloading the link, since the achieved throughput would otherwise diminish. For a protocol to operate in the area between the points labeled as Knee and Cliff, a congestion control mechanism is necessary. In [12], efficiency is defined as the closeness of the total load to the Knee, which is a good starting point.

Figure 1: Throughput as a function of load

Besides utilizing a high portion of the available bandwidth, a transport protocol must also be fair to the rest of the flows traversing the same part of the network. An efficient transport protocol is not necessarily also a fair one. A single flow might take up the largest portion of the available bandwidth

while the rest remain idle. Obviously, this is undesirable behavior, and in certain cases gaining higher fairness is worthwhile even at the cost of reduced efficiency. Intuitively, fairness is the closeness of the throughput achieved by each flow to its fair share. To measure fairness, the authors in [12] define a fairness index as:

F(x) = (Σ x_i)^2 / (n · Σ x_i^2)

where x_i is the throughput of the i-th flow and n is the total number of flows. The fairness index of a system ranges from 0 to 1, with 0 being totally unfair and 1 being totally fair. In the third chapter we define our own fairness index and provide a more detailed analysis of this one.

Along the lines of efficiency and fairness, as determined previously, four different scenarios were tested: Additive Increase Additive Decrease, Additive Increase Multiplicative Decrease, Multiplicative Increase Additive Decrease, and Multiplicative Increase Multiplicative Decrease. These scenarios were evaluated in terms of how fast they converged to the desirable efficiency and fairness levels. The AIMD scheme was found to be the one that best matched the required characteristics. Recent studies [37] provide a more in-depth analysis of the impact of the AIMD parameters on the performance of TCP.

1.3 Fairness issues in wired networks

With the Internet growing in size and complexity, further experimenting and analysis indicated that, under certain circumstances, TCP's performance suffered even in wired networks. Below we describe some of the occasions where TCP's fairness was questioned.

1.3.1 Variant propagation delay

Bandwidth allocation on a link was found to be far from fair when the flows sharing it had different end-to-end propagation delays. Flows with smaller delay occupied more bandwidth than their fair share, while those with larger delay suffered from low throughput.
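The fairness index of [12] quoted above is straightforward to compute; the following sketch (function name ours) illustrates it:

```python
def fairness_index(throughputs):
    """Jain's fairness index from [12]: (sum of x_i)^2 / (n * sum of x_i^2)."""
    n = len(throughputs)
    total = sum(throughputs)
    total_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * total_sq)
```

Equal throughputs give an index of 1, while one flow monopolizing the link among n flows gives 1/n, the minimum for a nonzero allocation.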

1.3.2 Asymmetry and head-start

In [4], the authors report on a case where two competing flows share a path with a congested reverse-channel link. The major parameters in these experiments are the congestion experienced only by the acknowledgments, and the second flow starting its transmission a few seconds after the first one. It appears that the first flow occupies virtually all of the reverse bandwidth, and when the second flow enters, it is unable to expand. The bandwidth allocation in this series of experiments was close to totally unfair. In a scenario very common in today's Internet, a flow merely initiating a transfer after other flows have already expanded (even if no special restrictions apply to the forward and reverse paths) still raises the question: how long does it take TCP to converge to a fair bandwidth allocation? The answer to this question is crucial, since most TCP connections are very short-lived. Claiming that a protocol is fair when it converges within ten minutes appears to be an oxymoron. The authors in [26] suggest a scheme where the buffer queues on the network routers are different for long- and short-lasting TCP flows, thus protecting short flows from being unable to obtain their fair share of the available bandwidth. Their claim is that such a distinction significantly improves the stability and fairness convergence of TCP. However, implementing this approach requires certain modifications to the network infrastructure.

1.3.3 Packet dropping policy

Another issue critical to TCP performance appeared to be the packet dropping policy on Internet routers. Since the only way for TCP to receive feedback from the network is through packet losses, an unfair dropping policy directly impacts fairness among the flows. A simple FIFO drop-tail queue (that is, one where the router drops the tail of its FIFO queue at a buffer overflow) tends to discard packets unevenly. Consequently, some flows experience heavier error than others and fairness is violated. A solution to this problem came with a router packet dropping policy introduced in [15], called Random Early Detection (RED). According to that policy, the probability that a router drops a certain incoming packet depends on the closeness of the recent average queue length to the maximum threshold, as well as on the time the last drop occurred. This way, the dropping probability increases whenever congestion builds up and no packet drops have occurred. Essentially, RED forces the transport protocols to reduce their sending rate before the available buffer space is exhausted. A router implementing RED does not discard subsequent packets (which are more likely to belong to the same flow), so that each

flow senses roughly the same packet loss rate, resulting in a fairer bandwidth allocation.

1.3.4 Large number of flows

In [27], TCP was challenged with another consequence of the Internet's expansion: the large number of competing flows. Two sets of experiments are presented, one with 30 and one with 1500 competing flows. While in both cases TCP achieves high bandwidth utilization, the fairness results are not analogous. In the 30-flow case the protocol is roughly fair and the throughput for most flows is slightly below the fair share. When the number of competing flows is as high as 1500 and buffer capacity is limited on the intermediate router, the configuration exhibits high variation in the bandwidth achieved by each flow. This causes the system to be unfair over intervals many seconds long. Part of the problem is traced to TCP's inability to send less than one packet per RTT, but more than one packet per timeout. In this configuration, TCP would either be idle or send at a rate higher than its fair share, due to its relatively coarse-grained transmission rate.

1.3.5 MAIMD

In spite of the AIMD principle's popularity and public acceptance, it was recently disputed by the authors in [17]. In this work, there is an extensive analysis of AIMD in comparison with a different policy called Multiplicative-Additive Increase/Multiplicative Decrease (MAIMD). In MAIMD, the policy for increasing the sending rate (when the bandwidth utilization is under the desirable level) involves multiplying the previous value by a factor greater than one and then adding a constant. MAIMD essentially speeds up the rate at which the protocol increases its sending rate. The authors argue that in a number of cases MAIMD yields faster convergence to the desirable fairness and efficiency levels than AIMD. However, the outcome is highly affected by the corresponding parameters used to control the sending-rate adjustment (i.e. the constant and the factor).
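The difference between the two increase policies fits in two lines. This is a sketch with arbitrary illustrative parameter values; [17] analyzes the choice of constant and factor in detail.

```python
# Increase rules compared (our sketch; parameter values are arbitrary).
def aimd_increase(rate, constant=1.0):
    return rate + constant              # additive increase

def maimd_increase(rate, factor=1.5, constant=1.0):
    return rate * factor + constant     # multiply first, then add a constant
```

Starting from the same rate, MAIMD reaches a given bandwidth level in fewer adjustment rounds, which is the source of its faster efficiency convergence.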
An interesting point made in this work is that AIMD does not converge in terms of fairness, and converges slowly in terms of efficiency, when the system is asynchronous, meaning that multiple flows readjust their sending rates neither at the same time nor at the same frequency. This definition of an asynchronous system matches the Internet, in that different flows are not synchronized in any sense, and the congestion control mechanism functions individually in each of them. Applying MAIMD to an asynchronous system does not yield fairness convergence either.

Simulation experiments presented in [17] showed that TCP does not converge in terms of fairness when one flow initiates a connection while another has already taken up all the available bandwidth. In the coming years, it will become apparent whether the perception of AIMD presented in this paper will replace the common belief that it is the optimal algorithm for transport protocols.

1.4 Thesis description

In this work we address the issue of how efficient and fair TCP is in networks consisting of both wired and wireless components, under various conditions such as wireless error, congestion introduced by multiple competing flows, and different Round Trip Times (RTT). We base this analysis on testing TCP Tahoe and TCP Reno, which implement a conservative and an aggressive congestion control strategy, respectively. The motivation behind our selection was to evaluate these strategies; newer protocols exist, but they are comparable in this respect. For the tested TCP versions, we identify the cases where one is favored over the other and analyze the factors that lead to these results. Related studies comparing different versions of TCP can be found in [18, 35].

To measure the performance of the tested protocols, we define a new fairness index and employ newly introduced performance metrics such as Goodput and Overhead. The traditional performance metrics were deemed insufficient, for reasons explained in the following chapters. Throughout the analysis, our basic argument is that fairness in a heterogeneous wired/wireless environment must be evaluated in conjunction with efficiency. In a wired environment, high bandwidth utilization is achieved in most cases, so that throughput differences among the competing flows can safely be interpreted as a fairness issue. On the contrary, when an error of wireless nature is present, high bandwidth utilization is not necessarily preserved.
In such a situation, differences among the throughputs achieved by each flow should not be directly translated into fairness problems, since the flows barely affect each other. For example, consider a situation where the average bandwidth utilization is 10% and two competing flows are present. If one occupies twice as much bandwidth as the other, it would be misleading to conclude that the configuration is unfair: at such low bandwidth utilization values, the interaction between the two flows is very limited. However, assuming that the flows do not affect each other at all would also be misleading, since during certain periods of the communication time the two flows may occupy a great portion of the available bandwidth. Hence, they do have an impact on one another, while very low bandwidth usage during the rest of the time results in the measured average of 10%. In this work, we bypass this problem by conducting multiple experiments under different configurations and cross-examining the results.

Coupling efficiency and fairness is also important in order to get a clear view of the overall performance of each protocol, as well as to identify any efficiency/fairness tradeoffs. For example, achieving a high fairness factor while the level of efficiency is low should not be viewed as desirable behavior. Instead, it might be preferable to gain overall efficiency at the cost of losing fairness. Our analysis also includes identifying such tradeoffs, and discusses their impact on system performance.

1.5 Presentation plan

The rest of this work is structured as follows. In Chapter 2, we present an overview of the congestion control strategies used in TCP-Tahoe and TCP-Reno. Chapter 3 describes the hosts used in our experiments, the simulated network conditions, as well as important parameters that affect the results, such as the size of the transferred data and the random nature of the error. Chapter 4 includes a comparison of an aggressive vs. a conservative strategy. Here, we also list the configurations of the conducted experiments and comment on our choices of the experiment parameters. Finally, we define the performance metrics that are used in the result analysis and explain their use. In Chapter 5, general comments on the bandwidth utilization of TCP are presented. The results report on the portion of the available bandwidth occupied when one, two, and three flows are present on the network. Chapter 6 reports on the results of experiments consisting of two flows in a low-RTT environment. The tests include three combinations of protocols: Tahoe-Tahoe, Tahoe-Reno, Reno-Reno. The metrics calculated from this set of experiments are presented separately for the cases where the competing flows run the same TCP version and where each flow runs a different TCP.
Chapter 7 reports on the results of experiments consisting of three flows in a low-RTT environment. The tests include four combinations of protocols: 3 Tahoes; 2 Tahoes and 1 Reno; 1 Tahoe and 2 Renos; 3 Renos. As in Chapter 6, the performance metrics are presented in two categories. In Chapter 8, we consider two flows competing in a high-RTT environment. Our report involves the implications of the high RTT for the bandwidth utilization achieved by TCP, as well as comments on the gathered statistics. Chapter 9 presents the conclusions drawn throughout this work. Chapter 10 describes some of the issues that are not covered here but are closely related to this work.

2 TCP overview

In this section, we briefly describe the two TCP versions we experimented with, TCP Tahoe and TCP Reno, focusing our attention on the congestion control scheme. TCP Tahoe was the first modification of the original TCP [28] that incorporated the congestion control algorithm proposed in [20]. The newer TCP Reno introduced the Fast Recovery algorithm [2], and was followed by New Reno [14] and the Partial Acknowledgment mechanism for multiple losses in a single window of data.

2.1 General protocol characteristics

TCP Tahoe and Reno use the same algorithm at the receiver but follow different approaches during the transmission process at the sender. At the beginning of a communication session, the receiver advertises a window size according to its available receiving buffer. Throughout the whole TCP session, the sender ensures that it keeps the number of unacknowledged bytes below the advertised value. This way, TCP implements flow control, guaranteeing that the sender will not overload the receiver. Once the connection is set up, the receiver sends an acknowledgment for each correctly received packet, including the number of the next in-sequence packet. On the sender side, a window mechanism defines the maximum number of in-flight (unacknowledged) packets, controlling the rate at which the sender transmits data. During the communication, the size of this window can increase and decrease in order to adjust the transmission rate, never exceeding the receiver's advertised window. The sender also maintains a timeout value, which is recalculated every time a new acknowledgment arrives. It is based on a weighted average of previous RTT measurements, as well as the standard deviation of these samples. Whenever a packet loss occurs, the timeout value is doubled, reflecting the more general philosophy that being too conservative is better than being too aggressive.
When the network is heavily loaded and the intermediate routers are dropping packets, resending the lost packets quickly would introduce positive feedback, further overloading the network. Doubling the timeout value ensures that, at the occurrence of packet drops, TCP will exponentially back off, eliminating the possibility of adding more burden to the network. The timeout value is very important to the protocol's efficiency. If it is too small, the sender may resend a packet even though it was not dropped, unnecessarily injecting data into the network. If it is too large, the protocol does not sense that a packet was lost until it has been idle for a significant amount of time, resulting in degraded throughput.
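The timeout computation described above (a weighted RTT average plus a deviation term, with doubling on loss) follows Jacobson's estimator. The sketch below uses the standard gain constants (1/8, 1/4, and a deviation factor of 4); the exact values in the thesis implementation may differ, and the class name is ours.

```python
# Sketch of TCP's retransmission-timeout estimation (standard Jacobson
# constants; not taken from the thesis code).
class RtoEstimator:
    def __init__(self):
        self.srtt = None    # smoothed (weighted average) RTT
        self.rttvar = None  # mean deviation of the RTT samples
        self.rto = 3.0      # conservative initial timeout, in seconds

    def on_ack(self, rtt_sample):
        if self.srtt is None:           # first measurement seeds the estimator
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2.0
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        self.rto = self.srtt + 4.0 * self.rttvar

    def on_timeout(self):
        self.rto *= 2.0     # exponential backoff at each packet loss
```

A too-small `rto` causes spurious retransmissions; a too-large one leaves the sender idle after a loss, exactly the tradeoff discussed above.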

In TCP, the mechanism to recover from lost packets is oriented towards congestion control. Congestion control and error recovery are closely related to each other, since TCP's only feedback regarding the congestion present on the network comes in the form of missing packets. The goal of congestion control is to determine the available network capacity and to adjust the congestion window accordingly. Acknowledgments received at the sender are interpreted as available bandwidth, and missing packets as an indication of network congestion. TCP Tahoe and TCP Reno differ in the actions they take in response to these events.

2.2 TCP Tahoe

TCP Tahoe's congestion control incorporates the Slow Start, congestion avoidance, and Fast Retransmit mechanisms [2, 20]. When a new session is initiated, the protocol enters the Slow Start phase. During this phase, the congestion window is expanded by one packet at the receipt of each acknowledgment, leading to exponential growth of the window. The protocol remains in this stage until a timeout occurs. When the timeout timer goes off, the protocol divides the current congestion window size by two and stores this value in a variable for later use. This value is called the congestion threshold. Each subsequent Slow Start phase ends when the window size reaches the congestion threshold. Setting the congestion threshold to half the congestion window and exercising Slow Start until the threshold value is reached corresponds to the Multiplicative Decrease part of the AIMD principle. The protocol then enters the congestion avoidance phase, during which it increases the current window by one packet for each full window of data that is acknowledged. The window expansion during this stage of the communication is linear and corresponds to the Additive Increase part of AIMD.
In the Fast Retransmit mechanism, a number of successive duplicate acknowledgments for the same packet (the threshold number is usually three) triggers a retransmission without waiting for the associated timeout to occur. In response to such an early timeout, Tahoe takes the same action as it would for a regular one, setting the congestion threshold to half the current congestion window and entering Slow Start. However, Slow Start is not always efficient, especially if the error is not caused by network congestion. In such cases, shrinking the congestion window is unnecessary and can lead to low bandwidth utilization.
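Tahoe's window dynamics can be condensed into a few lines. This is our sketch of the behavior described above, counting in whole segments rather than bytes as real TCP does, with names of our choosing.

```python
# Sketch of Tahoe congestion control (segments, not bytes; names ours).
class TahoeSender:
    def __init__(self, initial_threshold=64.0):
        self.cwnd = 1.0                    # congestion window
        self.ssthresh = initial_threshold  # congestion threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # Slow Start: exponential growth
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: linear growth

    def on_loss(self):
        # Same reaction to a timeout and to three duplicate acknowledgments:
        self.ssthresh = max(2.0, self.cwnd / 2.0)  # multiplicative decrease
        self.cwnd = 1.0                            # restart from Slow Start
```

The reset of `cwnd` to one packet on every loss is precisely what makes Tahoe the conservative strategy in our comparison.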

2.3 TCP Reno

TCP Reno introduces Fast Recovery, used in conjunction with Fast Retransmit. Upon the arrival of a duplicate acknowledgment (dack) at the sender side, the protocol expands the congestion window by one, interpreting a dack as an indication of available bandwidth. When the protocol receives the threshold number of duplicate acknowledgments, it enters the Fast Recovery phase. The sender retransmits one segment, halves the congestion window, and sets the congestion threshold to the size of the congestion window. For as long as it remains in Fast Recovery, it increases the congestion window by one for each additional dack received. The sender exits the Fast Recovery phase when an acknowledgment for new data is received. It then sets the size of the congestion window to the congestion threshold and resets the dack counter. Compared to Tahoe, Reno uses a more aggressive error recovery strategy: when it receives the threshold number of dacks, it does not enter Slow Start, which would shrink the window to a single packet, but effectively sets it to half its previous value.
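The contrast with Tahoe is visible in a corresponding sketch of Reno's Fast Recovery (again our own illustration, in whole segments, with hypothetical names):

```python
# Sketch of Reno's Fast Recovery (names and units ours).
class RenoSender:
    def __init__(self, cwnd=16.0, ssthresh=64.0):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self.in_fast_recovery = False

    def on_third_dack(self):
        # Retransmit the missing segment (not shown), then halve the window
        # instead of collapsing it to one packet as Tahoe would.
        self.ssthresh = max(2.0, self.cwnd / 2.0)
        self.cwnd = self.ssthresh
        self.in_fast_recovery = True

    def on_dack(self):
        if self.in_fast_recovery:
            self.cwnd += 1.0        # each extra dack signals a delivered packet

    def on_new_ack(self):
        if self.in_fast_recovery:   # exit Fast Recovery, deflate the window
            self.cwnd = self.ssthresh
            self.in_fast_recovery = False
```

Keeping the window at half its previous value, rather than restarting from one, is what makes Reno the aggressive strategy in our comparison.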

3 Testing environment

The two TCP versions tested in this work were implemented using the xkernel protocol framework [35]. The experiments involve real data transfers between two hosts connected over a 10 Mbps switched Ethernet. The hosts were two Sun Ultra-5 machines with 64 MB RAM, running SunOS 5.4. The configuration of these machines guarantees that the outcome of the experiments is not biased by any additional delays due to lack of processing power. The experiments were conducted at times when the two computers were not performing any operations besides running the protocols, and the local network was essentially idle; typically, the tests took place between 11:00 pm and 6:00 am. This way the protocols were allowed to exhibit their behavior without being affected by any uncontrolled activity.

3.1 Simulated conditions

Although the experiments involved actual data transfers as opposed to a simulation, some parameters of the connection still need to be controlled, namely the wireless error and the Round Trip Time. The wireless error is a fundamental parameter throughout the whole series of tests conducted here, since our primary objective was to expose the protocols to an error pattern that resembles the error present on a wireless network. For this purpose we employ an error model developed for the xkernel platform [33]. The error model consists of states A and B, or Off and On, respectively. For each of these states, we set the mean sojourn time (the mean time the model will remain in the state) as well as the error rate. The error mechanism visits each state and settles there for an exponentially distributed amount of time before moving to the other state. The error rate during the Off phase is set to 0%, while during the On phase it ranges between 0% and 50%. An error rate of 10% in the On state will cause the error model to drop all incoming packets for a continuous 10% of the overall time it spends in this state.
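A two-state On/Off process of this kind can be sketched as follows. This is our illustration of the model's behavior, not the xkernel implementation from [33]; the function and parameter names are ours.

```python
import random

# Sketch of a two-state (Off/On) error model: the process stays in each
# state for an exponentially distributed sojourn time; packets would be
# dropped only while the On state is active.
def error_state_trace(duration, mean_off, mean_on, seed=42):
    """Return a list of (state, time_in_state) pairs covering `duration`."""
    rng = random.Random(seed)
    trace, t, state = [], 0.0, "off"
    while t < duration:
        mean = mean_off if state == "off" else mean_on
        sojourn = rng.expovariate(1.0 / mean)      # exponential sojourn time
        trace.append((state, min(sojourn, duration - t)))
        t += sojourn
        state = "on" if state == "off" else "off"  # alternate the two states
    return trace
```

Over a long trace, the fraction of time spent in the On state converges to mean_on / (mean_off + mean_on); together with the per-state error rate, this controls the burstiness of the simulated loss.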
We have used this well-known two-state Markov model to simulate the wireless channel errors. We have not, however, simulated network congestion; instead, real congestion is present on the network. This way, we are able to combine wired and wireless errors. To test protocol performance under different RTT values, we use a mechanism that delays incoming packets. This mechanism takes two parameters representing the minimum and the maximum delay a packet may experience during the session. The minimum value corresponds to the propagation delay of the connection, while the maximum value accounts for any additional delays due to increased congestion and packet queuing that may occur on the intermediate routers. The delay value is randomly sampled between the minimum and the maximum every time a packet arrives.

3.2 Data transfers

Throughout the experiments, we set up a number of TCP connections that each transmit a predefined amount of data. The sender in such a connection is saturated, so that there is always data available to be sent and the application layer does not introduce any additional delays. At the end of each experiment, we record the time required to complete the transfer and the total bytes that were ultimately transmitted, including protocol headers and packet retransmissions. From these two pieces of information, a number of other metrics are derived, such as Throughput, Fairness, etc. A more detailed description of the metrics used is provided in the next chapter. In experiments where multiple flows are involved, multiple TCP sessions are set up between the same two machines. Preliminary experiments showed that the tested protocols exhibit the same behavior whether there is only one connection between each pair of machines (i.e. multiple pairs of machines are used) or all connections are set up between only two hosts. The latter scenario was preferred, due to the increased control it offers over the synchronization of multiple flows. More specifically, the flows must all start simultaneously, to ensure that no flow has time to expand its window before the rest initiate their transfers. Running all protocols on the same machine allows initiating all flows at virtually the same time.

3.3 Randomness in the environment

The experimental configuration involves an error of random nature: both the duration of the error and the time at which it occurs are random. The former has the obvious consequence that a protocol experiences either heavier or lighter error conditions on different runs (under the same error configuration), while the latter also affects the results, for the reasons explained below.
Depending on the state a protocol is in when the error occurs, the impact on performance can vary significantly. For instance, if the protocol has developed a large congestion window when the channel switches to a bad phase, the result will be multiple packet drops and, consequently, multiple timeouts. A large number of timeouts leads to an exponential increase of the timeout value, which renders the protocol unable to detect further packet losses in a timely fashion. On the other hand, if the error burst occurs when the window is relatively small, fewer packets will be lost and the impact on overall protocol performance will be much milder.

With the randomness introduced by the error scheme in mind, each experiment is repeated several times and averaged statistics are extracted from all runs. This ensures that our conclusions are not based on single, non-reproducible results but rather on the average of multiple separate runs. At the end of each transfer, the TCP connection is released and enough time is allowed for all coexisting flows to complete their transfers before the next experiment is initiated.

4 Testing methodology and parameters of significance

The experiments presented here were conducted using two versions of TCP, namely Tahoe and Reno. As described in Chapter 2, Tahoe implements a conservative congestion control algorithm, while Reno implements a more aggressive one. The motivation behind this selection was to derive more general conclusions on the impact of a conservative or aggressive strategy on protocol performance in a compound wired/wireless environment. Identifying the cases where each strategy yields better results is useful for determining the ideal behavior of a transport protocol.

4.1 Aggressive vs. conservative strategies

Reno's aggressive congestion control was released as an improvement to the original congestion control mechanism implemented in Tahoe. The ability of Reno to recover from a single packet drop per window faster than Tahoe rendered it more appropriate for use in the Internet (most of today's TCP implementations incorporate the congestion control implemented in Reno). In a network with wireless components, however, this observation is not always true. The key factor that determines the protocol behavior is the tradeoff between the amount of sent data and the timeout value. An aggressive strategy is more persistent and does not immediately back off at the occurrence of a packet drop. If the drop was caused by a wireless link interruption, it is likely that a number of subsequent packets will also be lost, experiencing the same bad phase of the communication channel. Consequently, even more packets can be lost, resulting in an extended timeout value (TCP doubles its timeout value every time the timeout timer goes off). An overextended timeout slows down subsequent packet drop detection and degrades the protocol efficiency. A conservative strategy, on the other hand, will immediately back off without having the chance to recover quickly, but this eliminates the possibility of overextending the timeout value.
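The timeout inflation described above compounds quickly, since the retransmission timeout (RTO) doubles on every expiry. A toy illustration (the base value and the cap are illustrative, not the thesis configuration):

```python
# Illustration of exponential timeout backoff: each expiry doubles the RTO.
# Base RTO and cap are illustrative values, not the thesis configuration.

def rto_after_timeouts(base_rto_ms, k, cap_ms=64000):
    """RTO after k consecutive timer expiries, doubling each time (capped)."""
    rto = base_rto_ms
    for _ in range(k):
        rto = min(rto * 2, cap_ms)
    return rto

# Five back-to-back timeouts turn a 500 ms timer into a 16 s one, so the
# next isolated loss takes 16 s to detect instead of half a second.
```

This is the cost an aggressive strategy risks during an error burst, and the cost a conservative strategy avoids by backing off immediately.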
4.2 Testing stages

Our experiments with Tahoe and Reno can be divided into five main categories. In all of the cases listed below, the protocols were tested under six different error rates: 0%, 1%, 10%, 20%, 33% and 50%.

4.2.1 Low RTT-one flow (LR1)

In this set of experiments, Tahoe and Reno are tested separately in a low RTT environment where only one flow is present on the network. The RTT perceived from the application layer of the xkernel platform (queuing delays due to congestion are not included) was measured to be roughly 1 ms. The results from LR1 show the performance achieved by the two protocols when no congestion is present, and are used as a guide for evaluating their relative behavior when competing flows exist.

4.2.2 Low RTT-two flows (LR2)

The tests conducted in this category involve two competing Tahoe flows (Ta-Ta), two competing Reno flows (Re-Re), and a Tahoe flow competing with a Reno flow (Ta-Re). This set of experiments reveals how fair and efficient each combination is.

4.2.3 Low RTT-three flows (LR3)

As in the previous set, we test all four different combinations of the two TCPs. These combinations are three Tahoe flows (Ta3), three Reno flows (Re3), two Tahoe and one Reno flows (Ta2Re1), and one Tahoe and two Reno flows (Ta1Re2).

4.2.4 High RTT-one flow (HR1)

Here, Tahoe and Reno are separately tested in a high RTT environment. To simulate the high RTT, we added an artificial delay according to the scheme described in section 3.1, bringing the RTT to a value of roughly 50 ms. The results indicate the performance achieved by the protocols when no congestion is present.

4.2.5 High RTT-two flows (HR2)

Finally, we experiment on the fairness and efficiency of the two TCP versions in a high RTT environment, when a competing flow is also present. The tested combinations are the same as in LR2: Ta-Ta, Re-Re and Ta-Re.

4.3 Testing parameters

A number of testing parameters had to be determined in order to carry out the experiments included in this work. The configuration we used was based on a series of preliminary tests that helped us decide which values would produce the most representative results.

4.3.1 Error phase duration

For the set of experiments in a low RTT environment, the On and Off error state durations were both set to 1500 ms. While determining these values, Tahoe appeared to be favored when the On state was significantly shorter than the Off state, whereas Reno showed the opposite behavior. A value of 1500 ms for both states was a reasonable intermediate choice, being well above the typical UNIX timer precision of 500 ms. In the high RTT case, an additional delay is introduced in both directions. For this case, the durations of the two states remained equal, but instead of 1500 ms they were set to 4500 ms. Due to the increased RTT, TCP needs more time to sense changes in the network condition. By increasing the average sojourn time of the On and Off states, we essentially slow down the rate at which changes take place on the network.

4.3.2 Data size

The size of the data used in the TCP transfers was set to 5 MB or 20 MB, depending on the rest of the experiment parameters. In a low RTT environment with two competing flows, the data amount was 20 MB. In an environment with either a high RTT value or three coexisting flows, the amount of data was 5 MB. The data size was selected so that the communication time would be long enough for the protocols to exhibit their characteristics. In the low RTT two-flows case, the efficiency of the individual connections is better than in the high RTT or three-flows cases, and the larger amount of data was chosen to increase the communication time.

4.3.3 Number of repetitions

For each experiment configuration, that is, one of the five basic categories at a particular error rate, 15 separate tests were conducted. We observed that further increasing the number of repetitions for a given configuration did not significantly alter the aggregate statistics. A single test will be referred to as a run.

4.4 Performance metrics

Essentially, the only measurements we take at each run are the transmission time (Time) and the total number of bytes transferred (TotalBytes), including protocol headers and packet retransmissions. From these two, along with the size of the data set transmitted from the application layer (DataSize), we calculate Throughput, Goodput, Bandwidth Utilization, Overhead, and Fairness.

4.4.1 Throughput

The throughput of a flow is defined as:

    Throughput = TotalBytes / Time

Throughput represents the bandwidth taken up by a flow, but it is not always related to the efficiency of the protocol. An inefficient strategy might yield bandwidth utilization very close to 100%, yet the produced throughput could consist of a significant amount of unnecessary retransmissions, rendering the protocol efficiency poor.

4.4.2 Goodput

As opposed to throughput, goodput gives the actual transmission rate perceived at the receiver's application layer; protocol headers and packet retransmissions are not reflected in this metric. Goodput is defined as:

    Goodput = DataSize / Time

4.4.3 Bandwidth Utilization

Bandwidth utilization is the percentage of the available bandwidth occupied by all competing flows. In our case, the available bandwidth is 10 Mbps. Although on an Ethernet the usable bandwidth is in practice less than 10 Mbps, bandwidth utilization can still be used to compare the performance of different configurations.
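Under the definitions above, these per-run metrics follow directly from Time, TotalBytes, and DataSize. A minimal sketch (function names are ours; the 10 Mbps figure is the testbed link speed):

```python
# Per-run performance metrics as defined above (bytes and seconds).
# Function names are illustrative; 10 Mbps is the testbed link speed.

LINK_BANDWIDTH_BPS = 10_000_000 / 8   # 10 Mbps Ethernet, in bytes/second

def throughput(total_bytes, time_s):
    return total_bytes / time_s        # includes headers and retransmissions

def goodput(data_size, time_s):
    return data_size / time_s          # application data only

def bandwidth_utilization(flow_throughputs):
    # Fraction of the 10 Mbps link occupied by all competing flows.
    return sum(flow_throughputs) / LINK_BANDWIDTH_BPS
```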

4.4.4 Overhead

Overhead is the extra number of bytes the protocol transmits, expressed as a percentage, over and above the size of the data delivered to the application at the receiver, from connection initiation to connection termination. The overhead is given by the formula:

    Overhead = (TotalBytes - DataSize) / DataSize * 100

4.4.5 Fairness

To measure the overall fairness in a system with multiple flows, we introduce a new performance metric. Our motivation was to create an index that reflects the fairness experienced by each flow. The formula for our index is:

    FairnessIndex = 1 - ( sum_{i=1..n} |T_i - Avg| ) / ( 2(n-1) * Avg )

where n is the number of flows, T_i is the throughput of flow i, and Avg is the average throughput achieved by all n flows. As defined above, the Fairness Index displays the following properties:

- It ranges between 0 and 1 for any number of flows. A totally fair bandwidth allocation has an index of 1 (all flows have the same throughput). A totally unfair allocation has an index of 0 (one flow takes up all the bandwidth and the rest are idle).
- It is continuous: every change in the throughput of any flow affects the value of the index.
- The unit of measurement used does not affect the index.
- If, out of n flows, k equally share all the bandwidth and the remaining n-k are idle, the fairness index is (k-1)/(n-1).

The numerator of the fraction in the formula is the sum of the absolute differences of each flow's throughput from the average throughput. The larger this sum, the more unfair the system, since the individual throughputs diverge from the average. In the worst-case scenario, one flow is the only active flow while the rest remain idle. In this situation, the absolute difference for each idle flow is Avg (since T_i = 0), while for the active flow the absolute difference is (n-1)*Avg. The sum over the n-1 idle flows is (n-1)*Avg; adding the absolute difference of the active flow, we get 2(n-1)*Avg, which is the value we divide by to normalize the result.
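The index can be implemented directly from its definition. The sketch below also includes, for comparison, the index of [12], assumed here to be Jain's index, (sum x_i)^2 / (n * sum x_i^2):

```python
# The fairness index defined above, plus the index of [12] for comparison
# (assumed here to be Jain's index). Function names are illustrative.

def fairness_index(throughputs):
    """1 - sum|T_i - Avg| / (2(n-1)*Avg), as defined above."""
    n = len(throughputs)
    avg = sum(throughputs) / n
    if avg == 0:
        return 1.0                     # no traffic at all: trivially fair
    deviation = sum(abs(t - avg) for t in throughputs)
    return 1.0 - deviation / (2 * (n - 1) * avg)

def jain_index(throughputs):
    """The index of [12]: (sum x_i)^2 / (n * sum x_i^2)."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(t * t for t in throughputs))
```

For instance, two equal flows give a fairness index of 1, one active flow out of two gives 0, and k active flows equally sharing the bandwidth out of n give (k-1)/(n-1).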

In section 1.2, we presented the original fairness index introduced in [12]. Below, we provide a brief comparison of the two indices and explain the reasons that led us to define a new one. In the following paragraphs, the fairness index defined in [12] will be referred to as the old index, while the index defined here will be the new one:

- The old index ranges from 1/n to 1, where n is the number of flows. For instance, if there are two flows and only one is active while the other is idle, the old fairness index is 0.5. The new index always ranges from 0 to 1, regardless of the value of n; in the previous example, the value of the new index would be 0.
- The new index is more sensitive to changes, especially when the number of flows is small. For example, when there are two flows and one achieves 50% more throughput than the other, the old index gives 0.96 while the new one gives 0.8. When one flow uses exactly twice as much bandwidth as the other, the old index gives 0.9, while the new one gives 0.67.

In general, the new index reflects the system fairness more accurately, especially when the number of flows is small, which is the case for the experiments conducted in this work. When dealing with only two flows, the formula for the fairness index reduces to:

    FairnessIndex = 1 - ( |T_1 - Avg| + |T_2 - Avg| ) / ( 2 * Avg )
                  = 1 - |T_1 - T_2| / ( T_1 + T_2 )

since Avg = (T_1 + T_2)/2 and each flow deviates from the average by |T_1 - T_2|/2. This is essentially one minus the portion of the overall throughput represented by the difference of the two individual throughput values. That is, if two flows are present and the fairness index has a value of 0.95, the throughput difference of the two flows is 5% of the overall throughput achieved by both. The ratio of the higher throughput over the lower one can be found by solving the above equation for T_1/T_2 (taking T_1 >= T_2):

    T_1 / T_2 = (2 - FairnessIndex) / FairnessIndex

For the previous example, this is roughly 1.10, which denotes that one flow took up about 10% more bandwidth than the other. At this point, it is worth noticing that when examining fairness, we consider the throughput achieved by a flow rather than the goodput. A flow being unfair and taking up most of the available bandwidth does not necessarily mean that this flow is more efficient than the rest, since part of the transmitted data consists of packet retransmissions. Thus, fairness is not concerned with how the occupied bandwidth is used, but with the bandwidth utilization itself, and is unrelated to any efficiency metrics. If goodput were used to compute fairness, the results would not reflect the reality as perceived on the link layer. Another fairness index used in the literature is the min-max ratio [24], defined as:

    M = min_{i,j} ( x_i / x_j )

where x_i and x_j are the throughput values for flows i and j, respectively. This is essentially the ratio of the throughput achieved by the flow that took up the smallest portion of the available bandwidth over the throughput of the flow that took up the greatest portion. This index is user-oriented, as opposed to the other two, which are system-oriented. The min-max ratio is 0 if any flow has 0 throughput, even if the rest share the bandwidth equally; the assumption made by this index is that if one user is dissatisfied, the system is unfair regardless of how the rest of the bandwidth is allocated. In our analysis, we do not use the min-max ratio because, in the cases we examined (i.e. a small number of flows), it does not provide any additional information about the overall system fairness.

4.5 Capturing fairness

During our study, we combine the results of a vertical and a horizontal analysis over the gathered statistics. In the horizontal analysis, the fairness index for each individual experiment under a certain error rate is calculated; the average over all runs represents the fairness index for that error rate. This indicates how well the protocols interact with each other and how fair the combination is. The fairness index calculated during the horizontal analysis may be slightly inaccurate when the difference in the time required by the two flows is large.
Once a flow has transferred the predefined amount of data, it stops transmitting, and the other flow essentially continues with no competing traffic for the rest of its connection. In such a case, the fairness index assumes that the two flows were competing until the slower flow completed its transfer. What the horizontal analysis cannot determine is which of the two flows was favored, and by how much. The vertical analysis involves averaging the throughput values of each flow over all runs for a certain error rate. By comparing the outcomes, we can conclude which protocol was favored over the other. When all flows run the same protocol, the vertical analysis is of no significance, since the averaged throughput values will ultimately be equal.
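The two analyses can be sketched over a runs-by-flows table of measured throughputs (rows are runs at one error rate, columns are flows). Names are illustrative, and the fairness index of section 4.4 is inlined so the sketch stands alone:

```python
# Sketch of the horizontal and vertical analyses described above.
# Input: a runs-by-flows table of throughputs at one error rate.

def _fairness_index(throughputs):
    # The fairness index of section 4.4, inlined for self-containment.
    n = len(throughputs)
    avg = sum(throughputs) / n
    dev = sum(abs(t - avg) for t in throughputs)
    return 1.0 - dev / (2 * (n - 1) * avg)

def horizontal_analysis(runs):
    """Per-run fairness index, averaged over runs: how fair the combination is."""
    return sum(_fairness_index(run) for run in runs) / len(runs)

def vertical_analysis(runs):
    """Each flow's throughput averaged over runs: which flow was favored."""
    n_flows = len(runs[0])
    return [sum(run[i] for run in runs) / len(runs) for i in range(n_flows)]
```

Note how the vertical analysis can reveal a favored flow even when the horizontal index is well below 1 in every run.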


More information

An Improved TCP Congestion Control Algorithm for Wireless Networks

An Improved TCP Congestion Control Algorithm for Wireless Networks An Improved TCP Congestion Control Algorithm for Wireless Networks Ahmed Khurshid Department of Computer Science University of Illinois at Urbana-Champaign Illinois, USA khurshi1@illinois.edu Md. Humayun

More information

QoS issues in Voice over IP

QoS issues in Voice over IP COMP9333 Advance Computer Networks Mini Conference QoS issues in Voice over IP Student ID: 3058224 Student ID: 3043237 Student ID: 3036281 Student ID: 3025715 QoS issues in Voice over IP Abstract: This

More information

Basic Multiplexing models. Computer Networks - Vassilis Tsaoussidis

Basic Multiplexing models. Computer Networks - Vassilis Tsaoussidis Basic Multiplexing models? Supermarket?? Computer Networks - Vassilis Tsaoussidis Schedule Where does statistical multiplexing differ from TDM and FDM Why are buffers necessary - what is their tradeoff,

More information

Quality of Service versus Fairness. Inelastic Applications. QoS Analogy: Surface Mail. How to Provide QoS?

Quality of Service versus Fairness. Inelastic Applications. QoS Analogy: Surface Mail. How to Provide QoS? 18-345: Introduction to Telecommunication Networks Lectures 20: Quality of Service Peter Steenkiste Spring 2015 www.cs.cmu.edu/~prs/nets-ece Overview What is QoS? Queuing discipline and scheduling Traffic

More information

The Problem with TCP. Overcoming TCP s Drawbacks

The Problem with TCP. Overcoming TCP s Drawbacks White Paper on managed file transfers How to Optimize File Transfers Increase file transfer speeds in poor performing networks FileCatalyst Page 1 of 6 Introduction With the proliferation of the Internet,

More information

MOBILITY AND MOBILE NETWORK OPTIMIZATION

MOBILITY AND MOBILE NETWORK OPTIMIZATION MOBILITY AND MOBILE NETWORK OPTIMIZATION netmotionwireless.com Executive Summary Wireless networks exhibit uneven and unpredictable performance characteristics which, if not correctly managed, can turn

More information

R2. The word protocol is often used to describe diplomatic relations. How does Wikipedia describe diplomatic protocol?

R2. The word protocol is often used to describe diplomatic relations. How does Wikipedia describe diplomatic protocol? Chapter 1 Review Questions R1. What is the difference between a host and an end system? List several different types of end systems. Is a Web server an end system? 1. There is no difference. Throughout

More information

Protagonist International Journal of Management And Technology (PIJMT) Online ISSN- 2394-3742. Vol 2 No 3 (May-2015) Active Queue Management

Protagonist International Journal of Management And Technology (PIJMT) Online ISSN- 2394-3742. Vol 2 No 3 (May-2015) Active Queue Management Protagonist International Journal of Management And Technology (PIJMT) Online ISSN- 2394-3742 Vol 2 No 3 (May-2015) Active Queue Management For Transmission Congestion control Manu Yadav M.Tech Student

More information

Per-Flow Queuing Allot's Approach to Bandwidth Management

Per-Flow Queuing Allot's Approach to Bandwidth Management White Paper Per-Flow Queuing Allot's Approach to Bandwidth Management Allot Communications, July 2006. All Rights Reserved. Table of Contents Executive Overview... 3 Understanding TCP/IP... 4 What is Bandwidth

More information

Ina Minei Reuven Cohen. The Technion. Haifa 32000, Israel. e-mail: faminei,rcoheng@cs.technion.ac.il. Abstract

Ina Minei Reuven Cohen. The Technion. Haifa 32000, Israel. e-mail: faminei,rcoheng@cs.technion.ac.il. Abstract High Speed Internet Access Through Unidirectional Geostationary Satellite Channels Ina Minei Reuven Cohen Computer Science Department The Technion Haifa 32000, Israel e-mail: faminei,rcoheng@cs.technion.ac.il

More information

The Quality of Internet Service: AT&T s Global IP Network Performance Measurements

The Quality of Internet Service: AT&T s Global IP Network Performance Measurements The Quality of Internet Service: AT&T s Global IP Network Performance Measurements In today's economy, corporations need to make the most of opportunities made possible by the Internet, while managing

More information

CSE 473 Introduction to Computer Networks. Exam 2 Solutions. Your name: 10/31/2013

CSE 473 Introduction to Computer Networks. Exam 2 Solutions. Your name: 10/31/2013 CSE 473 Introduction to Computer Networks Jon Turner Exam Solutions Your name: 0/3/03. (0 points). Consider a circular DHT with 7 nodes numbered 0,,...,6, where the nodes cache key-values pairs for 60

More information

Chaoyang University of Technology, Taiwan, ROC. {changb,s9227623}@mail.cyut.edu.tw 2 Department of Computer Science and Information Engineering

Chaoyang University of Technology, Taiwan, ROC. {changb,s9227623}@mail.cyut.edu.tw 2 Department of Computer Science and Information Engineering TCP-Taichung: A RTT-based Predictive Bandwidth Based with Optimal Shrink Factor for TCP Congestion Control in Heterogeneous Wired and Wireless Networks Ben-Jye Chang 1, Shu-Yu Lin 1, and Ying-Hsin Liang

More information

Passive Queue Management

Passive Queue Management , 2013 Performance Evaluation of Computer Networks Objectives Explain the role of active queue management in performance optimization of TCP/IP networks Learn a range of active queue management algorithms

More information

4 High-speed Transmission and Interoperability

4 High-speed Transmission and Interoperability 4 High-speed Transmission and Interoperability Technology 4-1 Transport Protocols for Fast Long-Distance Networks: Comparison of Their Performances in JGN KUMAZOE Kazumi, KOUYAMA Katsushi, HORI Yoshiaki,

More information

Optimal Bandwidth Monitoring. Y.Yu, I.Cheng and A.Basu Department of Computing Science U. of Alberta

Optimal Bandwidth Monitoring. Y.Yu, I.Cheng and A.Basu Department of Computing Science U. of Alberta Optimal Bandwidth Monitoring Y.Yu, I.Cheng and A.Basu Department of Computing Science U. of Alberta Outline Introduction The problem and objectives The Bandwidth Estimation Algorithm Simulation Results

More information

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29.

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29. Broadband Networks Prof. Dr. Abhay Karandikar Electrical Engineering Department Indian Institute of Technology, Bombay Lecture - 29 Voice over IP So, today we will discuss about voice over IP and internet

More information

STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT

STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT 1. TIMING ACCURACY The accurate multi-point measurements require accurate synchronization of clocks of the measurement devices. If for example time stamps

More information

Key Components of WAN Optimization Controller Functionality

Key Components of WAN Optimization Controller Functionality Key Components of WAN Optimization Controller Functionality Introduction and Goals One of the key challenges facing IT organizations relative to application and service delivery is ensuring that the applications

More information

A Congestion Control Algorithm for Data Center Area Communications

A Congestion Control Algorithm for Data Center Area Communications A Congestion Control Algorithm for Data Center Area Communications Hideyuki Shimonishi, Junichi Higuchi, Takashi Yoshikawa, and Atsushi Iwata System Platforms Research Laboratories, NEC Corporation 1753

More information

SELECTIVE-TCP FOR WIRED/WIRELESS NETWORKS

SELECTIVE-TCP FOR WIRED/WIRELESS NETWORKS SELECTIVE-TCP FOR WIRED/WIRELESS NETWORKS by Rajashree Paul Bachelor of Technology, University of Kalyani, 2002 PROJECT SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF

More information

Delay-Based Early Congestion Detection and Adaptation in TCP: Impact on web performance

Delay-Based Early Congestion Detection and Adaptation in TCP: Impact on web performance 1 Delay-Based Early Congestion Detection and Adaptation in TCP: Impact on web performance Michele C. Weigle Clemson University Clemson, SC 29634-196 Email: mweigle@cs.clemson.edu Kevin Jeffay and F. Donelson

More information

Comparative Analysis of Congestion Control Algorithms Using ns-2

Comparative Analysis of Congestion Control Algorithms Using ns-2 www.ijcsi.org 89 Comparative Analysis of Congestion Control Algorithms Using ns-2 Sanjeev Patel 1, P. K. Gupta 2, Arjun Garg 3, Prateek Mehrotra 4 and Manish Chhabra 5 1 Deptt. of Computer Sc. & Engg,

More information

VPN over Satellite A comparison of approaches by Richard McKinney and Russell Lambert

VPN over Satellite A comparison of approaches by Richard McKinney and Russell Lambert Sales & Engineering 3500 Virginia Beach Blvd Virginia Beach, VA 23452 800.853.0434 Ground Operations 1520 S. Arlington Road Akron, OH 44306 800.268.8653 VPN over Satellite A comparison of approaches by

More information

Random Early Detection Gateways for Congestion Avoidance

Random Early Detection Gateways for Congestion Avoidance Random Early Detection Gateways for Congestion Avoidance Sally Floyd and Van Jacobson Lawrence Berkeley Laboratory University of California floyd@eelblgov van@eelblgov To appear in the August 1993 IEEE/ACM

More information

TCP Behavior across Multihop Wireless Networks and the Wired Internet

TCP Behavior across Multihop Wireless Networks and the Wired Internet TCP Behavior across Multihop Wireless Networks and the Wired Internet Kaixin Xu, Sang Bae, Mario Gerla, Sungwook Lee Computer Science Department University of California, Los Angeles, CA 90095 (xkx, sbae,

More information

CS268 Exam Solutions. 1) End-to-End (20 pts)

CS268 Exam Solutions. 1) End-to-End (20 pts) CS268 Exam Solutions General comments: ) If you would like a re-grade, submit in email a complete explanation of why your solution should be re-graded. Quote parts of your solution if necessary. In person

More information

Network Protocol Design and Evaluation

Network Protocol Design and Evaluation Network Protocol Design and Evaluation 08 - Analytical Evaluation Stefan Rührup Summer 2009 Overview In the last chapter: Simulation In this part: Analytical Evaluation: case studies 2 Analytical Evaluation

More information

Student, Haryana Engineering College, Haryana, India 2 H.O.D (CSE), Haryana Engineering College, Haryana, India

Student, Haryana Engineering College, Haryana, India 2 H.O.D (CSE), Haryana Engineering College, Haryana, India Volume 5, Issue 6, June 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A New Protocol

More information

Adaptive Virtual Buffer(AVB)-An Active Queue Management Scheme for Internet Quality of Service

Adaptive Virtual Buffer(AVB)-An Active Queue Management Scheme for Internet Quality of Service Adaptive Virtual Buffer(AVB)-An Active Queue Management Scheme for Internet Quality of Service Xidong Deng, George esidis, Chita Das Department of Computer Science and Engineering The Pennsylvania State

More information

A Transport Protocol for Multimedia Wireless Sensor Networks

A Transport Protocol for Multimedia Wireless Sensor Networks A Transport Protocol for Multimedia Wireless Sensor Networks Duarte Meneses, António Grilo, Paulo Rogério Pereira 1 NGI'2011: A Transport Protocol for Multimedia Wireless Sensor Networks Introduction Wireless

More information

Visualizations and Correlations in Troubleshooting

Visualizations and Correlations in Troubleshooting Visualizations and Correlations in Troubleshooting Kevin Burns Comcast kevin_burns@cable.comcast.com 1 Comcast Technology Groups Cable CMTS, Modem, Edge Services Backbone Transport, Routing Converged Regional

More information

COMP 361 Computer Communications Networks. Fall Semester 2003. Midterm Examination

COMP 361 Computer Communications Networks. Fall Semester 2003. Midterm Examination COMP 361 Computer Communications Networks Fall Semester 2003 Midterm Examination Date: October 23, 2003, Time 18:30pm --19:50pm Name: Student ID: Email: Instructions: 1. This is a closed book exam 2. This

More information

FEW would argue that one of TCP s strengths lies in its

FEW would argue that one of TCP s strengths lies in its IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 13, NO. 8, OCTOBER 1995 1465 TCP Vegas: End to End Congestion Avoidance on a Global Internet Lawrence S. Brakmo, Student Member, IEEE, and Larry L.

More information

EKSAMEN / EXAM TTM4100 18 05 2007

EKSAMEN / EXAM TTM4100 18 05 2007 1.1 1.1.1...... 1.1.2...... 1.1.3...... 1.1.4...... 1.1.5...... 1.1.6...... 1.1.7...... 1.1.8...... 1.1.9...... 1.1.10.... 1.1.11... 1.1.16.... 1.1.12... 1.1.17.... 1.1.13... 1.1.18.... 1.1.14... 1.1.19....

More information

Analyzing Marking Mod RED Active Queue Management Scheme on TCP Applications

Analyzing Marking Mod RED Active Queue Management Scheme on TCP Applications 212 International Conference on Information and Network Technology (ICINT 212) IPCSIT vol. 7 (212) (212) IACSIT Press, Singapore Analyzing Marking Active Queue Management Scheme on TCP Applications G.A.

More information

Clearing the Way for VoIP

Clearing the Way for VoIP Gen2 Ventures White Paper Clearing the Way for VoIP An Alternative to Expensive WAN Upgrades Executive Overview Enterprises have traditionally maintained separate networks for their voice and data traffic.

More information

CSE 123: Computer Networks

CSE 123: Computer Networks CSE 123: Computer Networks Homework 4 Solutions Out: 12/03 Due: 12/10 1. Routers and QoS Packet # Size Flow 1 100 1 2 110 1 3 50 1 4 160 2 5 80 2 6 240 2 7 90 3 8 180 3 Suppose a router has three input

More information

Secure SCTP against DoS Attacks in Wireless Internet

Secure SCTP against DoS Attacks in Wireless Internet Secure SCTP against DoS Attacks in Wireless Internet Inwhee Joe College of Information and Communications Hanyang University Seoul, Korea iwjoe@hanyang.ac.kr Abstract. The Stream Control Transport Protocol

More information

Research of TCP ssthresh Dynamical Adjustment Algorithm Based on Available Bandwidth in Mixed Networks

Research of TCP ssthresh Dynamical Adjustment Algorithm Based on Available Bandwidth in Mixed Networks Research of TCP ssthresh Dynamical Adjustment Algorithm Based on Available Bandwidth in Mixed Networks 1 Wang Zhanjie, 2 Zhang Yunyang 1, First Author Department of Computer Science,Dalian University of

More information

Lecture 8 Performance Measurements and Metrics. Performance Metrics. Outline. Performance Metrics. Performance Metrics Performance Measurements

Lecture 8 Performance Measurements and Metrics. Performance Metrics. Outline. Performance Metrics. Performance Metrics Performance Measurements Outline Lecture 8 Performance Measurements and Metrics Performance Metrics Performance Measurements Kurose-Ross: 1.2-1.4 (Hassan-Jain: Chapter 3 Performance Measurement of TCP/IP Networks ) 2010-02-17

More information

CH.1. Lecture # 2. Computer Networks and the Internet. Eng. Wafaa Audah. Islamic University of Gaza. Faculty of Engineering

CH.1. Lecture # 2. Computer Networks and the Internet. Eng. Wafaa Audah. Islamic University of Gaza. Faculty of Engineering Islamic University of Gaza Faculty of Engineering Computer Engineering Department Networks Discussion ECOM 4021 Lecture # 2 CH1 Computer Networks and the Internet By Feb 2013 (Theoretical material: page

More information

The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN

The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN The Data Replication Bottleneck: Overcoming Out of Order and Lost Packets across the WAN By Jim Metzler, Cofounder, Webtorials Editorial/Analyst Division Background and Goal Many papers have been written

More information

Behavior Analysis of TCP Traffic in Mobile Ad Hoc Network using Reactive Routing Protocols

Behavior Analysis of TCP Traffic in Mobile Ad Hoc Network using Reactive Routing Protocols Behavior Analysis of TCP Traffic in Mobile Ad Hoc Network using Reactive Routing Protocols Purvi N. Ramanuj Department of Computer Engineering L.D. College of Engineering Ahmedabad Hiteishi M. Diwanji

More information

Dynamic Source Routing in Ad Hoc Wireless Networks

Dynamic Source Routing in Ad Hoc Wireless Networks Dynamic Source Routing in Ad Hoc Wireless Networks David B. Johnson David A. Maltz Computer Science Department Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213-3891 dbj@cs.cmu.edu Abstract

More information

TCP/IP Over Lossy Links - TCP SACK without Congestion Control

TCP/IP Over Lossy Links - TCP SACK without Congestion Control Wireless Random Packet Networking, Part II: TCP/IP Over Lossy Links - TCP SACK without Congestion Control Roland Kempter The University of Alberta, June 17 th, 2004 Department of Electrical And Computer

More information

TCP, Active Queue Management and QoS

TCP, Active Queue Management and QoS TCP, Active Queue Management and QoS Don Towsley UMass Amherst towsley@cs.umass.edu Collaborators: W. Gong, C. Hollot, V. Misra Outline motivation TCP friendliness/fairness bottleneck invariant principle

More information

(Refer Slide Time: 02:17)

(Refer Slide Time: 02:17) Internet Technology Prof. Indranil Sengupta Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture No #06 IP Subnetting and Addressing (Not audible: (00:46)) Now,

More information

Multipath TCP in Practice (Work in Progress) Mark Handley Damon Wischik Costin Raiciu Alan Ford

Multipath TCP in Practice (Work in Progress) Mark Handley Damon Wischik Costin Raiciu Alan Ford Multipath TCP in Practice (Work in Progress) Mark Handley Damon Wischik Costin Raiciu Alan Ford The difference between theory and practice is in theory somewhat smaller than in practice. In theory, this

More information

TCP/IP Performance with Random Loss and Bidirectional Congestion

TCP/IP Performance with Random Loss and Bidirectional Congestion IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 8, NO. 5, OCTOBER 2000 541 TCP/IP Performance with Random Loss and Bidirectional Congestion T. V. Lakshman, Senior Member, IEEE, Upamanyu Madhow, Senior Member,

More information

First Midterm for ECE374 02/25/15 Solution!!

First Midterm for ECE374 02/25/15 Solution!! 1 First Midterm for ECE374 02/25/15 Solution!! Instructions: Put your name and student number on each sheet of paper! The exam is closed book. You have 90 minutes to complete the exam. Be a smart exam

More information

Analysis of TCP Performance Over Asymmetric Wireless Links

Analysis of TCP Performance Over Asymmetric Wireless Links Virginia Tech ECPE 6504: Wireless Networks and Mobile Computing Analysis of TCP Performance Over Asymmetric Kaustubh S. Phanse (kphanse@vt.edu) Outline Project Goal Notions of Asymmetry in Wireless Networks

More information

Optimization of Communication Systems Lecture 6: Internet TCP Congestion Control

Optimization of Communication Systems Lecture 6: Internet TCP Congestion Control Optimization of Communication Systems Lecture 6: Internet TCP Congestion Control Professor M. Chiang Electrical Engineering Department, Princeton University ELE539A February 21, 2007 Lecture Outline TCP

More information

Burst Testing. New mobility standards and cloud-computing network. This application note will describe how TCP creates bursty

Burst Testing. New mobility standards and cloud-computing network. This application note will describe how TCP creates bursty Burst Testing Emerging high-speed protocols in mobility and access networks, combined with qualityof-service demands from business customers for services such as cloud computing, place increased performance

More information

La couche transport dans l'internet (la suite TCP/IP)

La couche transport dans l'internet (la suite TCP/IP) La couche transport dans l'internet (la suite TCP/IP) C. Pham Université de Pau et des Pays de l Adour Département Informatique http://www.univ-pau.fr/~cpham Congduc.Pham@univ-pau.fr Cours de C. Pham,

More information

A packet-reordering solution to wireless losses in transmission control protocol

A packet-reordering solution to wireless losses in transmission control protocol Wireless Netw () 9:577 59 DOI.7/s76--55-6 A packet-reordering solution to wireless losses in transmission control protocol Ka-Cheong Leung Chengdi Lai Victor O. K. Li Daiqin Yang Published online: 6 February

More information

1 Introduction to mobile telecommunications

1 Introduction to mobile telecommunications 1 Introduction to mobile telecommunications Mobile phones were first introduced in the early 1980s. In the succeeding years, the underlying technology has gone through three phases, known as generations.

More information

Ad hoc and Sensor Networks Chapter 13: Transport Layer and Quality of Service

Ad hoc and Sensor Networks Chapter 13: Transport Layer and Quality of Service Ad hoc and Sensor Networks Chapter 13: Transport Layer and Quality of Service António Grilo Courtesy: Holger Karl, UPB Overview Dependability requirements Delivering single packets Delivering blocks of

More information