Master's Thesis

Title: Design, Implementation and Evaluation of Scalable Resource Management System for Internet Servers

Supervisor: Prof. Masayuki Murata

Author: Takuya Okamoto

February, 2003

Department of Informatics and Mathematical Science, Graduate School of Engineering Science, Osaka University
Master's Thesis

Design, Implementation and Evaluation of Scalable Resource Management System for Internet Servers

Takuya Okamoto

Abstract

A great deal of research has been devoted to solving the problem of network congestion in the face of ever-increasing Internet traffic. However, little attention has been paid to improving the performance of Internet servers such as Web/Web proxy servers, in spite of projections that the performance bottleneck will shift from networks to endhosts. For example, Web servers and Web proxy servers must accommodate many TCP connections simultaneously, and their throughput degrades when resource management is not considered. In previous work, we proposed a scalable resource management scheme for Web servers to overcome these problems. It has two major mechanisms: one is a dynamic allocation of the send socket buffer, which assigns each TCP connection a send socket buffer sized according to an expected throughput estimated at the TCP sender host. The other is a memory-copy reduction scheme, which decreases the number of memory access operations during data transmission at a TCP sender host. We validated the effectiveness of the proposed scheme through simulation and implementation experiments, and confirmed that it improves server performance and achieves fair and effective usage of the send socket buffer.

In the current Internet, however, many Web document transfer requests travel via Web proxy servers, which are usually provided by ISPs (Internet Service Providers) for their customers. Proxy servers must accommodate many TCP connections, both upward TCP connections (from the proxy server to Web servers) and downward TCP connections (from client hosts to the proxy server). Hence, a proxy server becomes a likely spot for bottlenecks to occur during Web document transfers, even when the bandwidth of the network and the Web server performance are adequate. In this thesis, therefore, we propose a novel scalable resource management scheme for Web proxy servers. The proposed scheme has the following two mechanisms: one is a dynamic allocation of the receive socket buffer at a TCP receiver host, which sizes the buffer of each TCP connection based on its utilization monitored at the receiver host. The other is a connection management scheme that intentionally closes idle persistent TCP connections, which transfer no data, in order to prevent newly arriving TCP connections from being rejected when server resources run short. We validate the effectiveness of the proposed scheme through simulation and implementation experiments. From these results, we confirm that the proposed scheme improves server performance by up to 50%, and decreases the document transfer delay perceived by client hosts by up to about 30%.

Keywords: TCP (Transmission Control Protocol), HTTP (Hypertext Transfer Protocol), Web/Web Proxy Server, Socket Buffer, Persistent TCP Connection, Resource Management
Contents

1 Introduction
2 Background
  2.1 Web/Web proxy servers
  2.2 Server Resources at Internet Servers
  2.3 Persistent TCP Connection by HTTP/1.1
3 Algorithm
  3.1 Socket Buffer Management Scheme
    3.1.1 Control of Send Socket Buffer
    3.1.2 Control of Receive Socket Buffer
    3.1.3 Handling the Relation between Upward and Downward TCP Connections
  3.2 Connection Management Scheme
4 Simulation Experiments
  4.1 Performance of the Scalable Socket Buffer Tuning
  4.2 Overall Performance of Resource Management Scheme
5 Implementation and Experiments
  5.1 Implementation Overview
  5.2 Implementation Experiments
6 Concluding Remarks
Acknowledgements
References
List of Figures

1 Mechanisms of Web/Web Proxy Servers
2 Analysis Model
3 Probability that a TCP Connection is Active
4 Persistent Connection List in Connection Management Scheme
5 Network Model (1)
6 Simulation Results: Change of Assigned Size of Send Socket Buffer
7 Simulation Results: Change of Assigned Size of Receive Socket Buffer
8 Network Model (2)
9 Simulation Results: Effect of Socket Buffer Size
10 Simulation Results: Effect of Dividing Ratio of Socket Buffer
11 Simulation Results: Performance of Connection Management Scheme
12 Network Environment for Implementation Experiment
13 Experimental Results: Server Throughput and Document Transfer Time
14 Experimental Results: Assigned Socket Buffer Size
1 Introduction

The rapid increase in Internet users has been the impetus for much research into solving the network congestion caused by increasing network traffic [1-4]. However, little work has been done on improving the performance of Internet servers, despite the projected shift in the performance bottleneck from networks to endhosts [5-7]. There are already hints of this scenario emerging, as evidenced by the proliferation of busy Web servers on the present-day Internet that receive hundreds of document transfer requests every second during peak periods. Needless to say, busy Web servers must maintain many concurrent HTTP sessions, and server throughput degrades when effective resource management is not considered, even with a large network capacity.

To improve Web server performance during peak periods, we proposed a scalable resource management scheme in our previous work [8, 9]. The scheme has two major mechanisms. One is a dynamic allocation of the send socket buffer, which sizes the send socket buffer of each connection based on an expected throughput estimated at the sender host, in order to avoid wasting buffer space. By assigning a properly sized send socket buffer, we can expect a performance improvement for the following reason. Suppose that a Web server sends TCP data to two client hosts, one on a 64 Kbps dial-up link (say, client A) and the other on a 100 Mbps LAN (say, client B). When the server assigns the same size of send socket buffer to both client hosts, the assigned buffer is likely to be too large for client A and too small for client B, because the network capacity (more precisely, the bandwidth-delay product) between client A and the Web server differs from that between client B and the Web server. The other mechanism of our proposed scheme is a memory-copy reduction scheme, which decreases the number of memory access operations during data transmission at a TCP sender host. When a TCP sender host transfers data located in a file system, it copies the data from the file system to an application buffer, and then from the application buffer to the socket buffer. Our algorithm omits one of these memory copy operations and improves server performance. We validated the effectiveness of the proposed scheme through simulation and implementation experiments, and confirmed that it improves server performance and achieves fair and effective usage of the send socket buffer.

In the current Internet, many Web document requests travel via Web proxy servers [10]. Since proxy servers are usually provided by ISPs (Internet Service Providers) for their customers, they too must accommodate a large number of TCP connections. Furthermore, proxy servers must handle both upward TCP connections (from the proxy server to Web servers) and downward TCP connections (from client hosts to the proxy server). Hence, the proxy server becomes a likely spot for bottlenecks to occur during Web document transfers, even when the bandwidth of the network and the Web server performance are adequate. Any effort to reduce the transfer time of Web documents must therefore consider improvements in the performance of Internet servers. Although the scheme described above can be applied to a Web proxy server, it is not the best solution, since our previous work did not take the characteristics of proxy servers into consideration.

Therefore, in this thesis, we propose a novel scalable resource management scheme for Web proxy servers. We first discuss several problems that arise when handling a large number of TCP connections at Web proxy servers. One of these problems involves the allocation of the receive socket buffer for TCP connections. When a TCP connection is not assigned a receive socket buffer properly sized to its bandwidth-delay product, the assigned buffer may be left unused or may be insufficient, which wastes the assigned resources and degrades server performance. Another problem considered in this thesis is the management of the persistent TCP connections provided by HTTP/1.1, which waste resources at busy Web proxy servers. When Web proxy servers accommodate many persistent TCP connections without an effective management scheme, resources remain assigned to those connections until they are closed by timer expiration, although most of the persistent connections transfer no data. This means that new TCP connections cannot be established when server resources run short.

We then propose a novel resource management scheme for Web proxy servers with two major mechanisms. One is a control scheme for the receive socket buffer of each TCP connection; we integrate the schemes for the send and receive socket buffers into a complete mechanism, taking their relationship into consideration. The other is a connection management scheme [11] that prevents newly arriving Web document transfer requests from being rejected at the proxy server due to a lack of resources. The scheme manages persistent TCP connections and intentionally closes them dynamically when server resources run short. We verify the effectiveness of the proposed scheme through simulation and implementation experiments. In the simulations, we evaluate its basic performance and characteristics by comparison with an unmodified proxy server. Further, we discuss the results of implementation experiments, and confirm that Web proxy server throughput can be improved by up to 50%, and that the document transfer time perceived by client hosts can be decreased significantly. We note, finally, that the application of the proposed scheme is not limited to the above-mentioned Web/Web proxy servers; it extends to any Internet host handling many TCP connections simultaneously, a popular peer in a P2P network being one example that could benefit from it.

The rest of this thesis is organized as follows. In Section 2, we give an overview of Internet servers, such as Web servers and Web proxy servers, and the resources they use to handle TCP connections, and we discuss the advantages and disadvantages of the persistent TCP connections of HTTP/1.1. In Section 3, we propose a new resource management scheme for Web/Web proxy servers, and we confirm its effectiveness through the simulation and implementation experiments detailed in Sections 4 and 5. Finally, we present our concluding remarks in Section 6.
2 Background

In this section, we first explain the mechanisms of Internet servers and their resources, in Subsections 2.1 and 2.2, respectively. We then discuss the potential of persistent TCP connections to improve Web document transfer times; as will be clarified in Subsection 2.3, however, they require careful treatment at Internet servers.

2.1 Web/Web proxy servers

A Web server delivers Web documents using HTTP (Hypertext Transfer Protocol) in response to document transfer requests from Web client hosts, as shown in Fig. 1(a). A Web client host sends Web document requests and browses the requested documents using a Web browser. In recent years, Internet services including the WWW have developed rapidly and the number of Internet users has increased, resulting in serious network congestion. On the current Internet, a Web server must accommodate hundreds of TCP connections carrying document transfer requests from Web client hosts at peak periods. These connections have different characteristics, such as RTT, packet loss rate, and available bandwidth, so the resources required for each TCP connection differ. Almost all current OSs, however, do not consider such differences when they assign resources to TCP connections, which causes unfair and ineffective usage of those resources. To realize fair service, it is important to assign resources fairly and effectively to each TCP connection at the Web server.

A Web proxy server works as an agent for Web client hosts that request Web documents [12]. When it receives a Web document transfer request from a Web client host, it obtains the requested document from the original Web server on behalf of the client host and delivers it to the client. It also caches the obtained Web document; when other client hosts request the same document, it delivers the cached copy, which greatly reduces the document transfer time, as illustrated in Fig. 1(b). For example, it is reported in [13] that using Web proxy servers reduces document transfer time by up to 30%. Also, when the cache is hit, the document transfer is performed without any connection establishment to the Web server, so congestion within the network and at Web servers can also be reduced.

Web proxy servers accommodate a large number of connections, both from Web client hosts and to Web servers; this is a point of difference from Web servers. The proxy server behaves as a sender host for downward TCP connections (between the client host and the proxy server) and as a receiver host for upward TCP connections (between the proxy server and the Web server). Therefore, if resource management is not appropriately considered at the proxy server, the document transfer time increases even when the network is not congested and the load on the Web server is not high. Thus, even when the network bandwidth is sufficiently large, data transfer throughput is degraded when Web/Web proxy servers perform poorly.

In the past literature, a number of studies have characterized Web server performance [14-16]. Some researchers have compared and evaluated the performance of Web/Web proxy servers for HTTP/1.0 and HTTP/1.1 [13, 17], and various studies have focused on cache replacement algorithms [18-20]. However, little work has been done on the management of server resources. Server resources are finite and cannot be increased while the server is running. If the remaining resources run short, the server cannot function fully, and server performance degrades. Moreover, a Web proxy server behaves as a sender host for downward TCP connections and as a receiver host for upward TCP connections, which means the proxy server requires even more careful treatment of its resources. On the current Internet, however, most Web proxy servers, including those of [21, 22], do not take such considerations into account.

In the following subsections, we describe the resources Internet servers use to establish TCP connections and show the problems that arise at busy Internet servers.
Figure 1: Mechanisms of Web/Web Proxy Servers. (a) Web Server: client hosts request a document from a Web server over the Internet and download it. (b) Web Proxy Server: client hosts request documents from the proxy server over downward TCP connections; on a cache hit the proxy delivers the document directly, and on a miss it requests the document from the original Web server over an upward TCP connection and relays it to the client.
2.2 Server Resources at Internet Servers

The resources at Web/Web proxy servers that we focus on in this thesis are the mbuf, file descriptors, control blocks, and the socket buffer. These are closely related to the performance of TCP connections when transferring Web documents. Mbufs, file descriptors, and control blocks are resources for TCP connections, while the socket buffer stores documents transferred through TCP connections. When these resources run short, a Web/Web proxy server is unable to establish a new TCP connection, so the client host has to wait for existing TCP connections to be closed and for their assigned resources to be released. If this cannot be accomplished, the server rejects the request. In what follows, we describe the resources of Web proxy servers and how they relate to TCP connections. Although we consider FreeBSD 4.6 [23] in our discussion, we believe that the results, in essence, can be extended to other OSs, such as Linux.

Mbuf

Each TCP connection is assigned an mbuf, which is located in kernel memory space and used to move transmission data between the socket buffer and the network interface. When the data size exceeds the size of an mbuf, the data is stored in another memory area, called an mbuf cluster, which is linked to the mbuf; several mbuf clusters may be used, depending on the data size. The number of mbufs prepared by the OS is configured when building the kernel; the default is 4096 in FreeBSD [24]. Since each TCP connection is assigned at least one mbuf when established, the default number of connections the server can establish simultaneously is 4096. This would be too small for busy servers.
File descriptor

A file descriptor is assigned to each file in a file system so that the kernel and user applications can identify it. A descriptor is also associated with a TCP connection when it is established, and is then called a socket file descriptor. The number of connections that can be established simultaneously is limited by the number of file descriptors prepared by the OS; the default is 1064 in FreeBSD [24]. In contrast to mbufs, the number of file descriptors can be changed after the kernel is booted. However, since user applications such as Squid [21] allocate memory based on the number of file descriptors available when they start, it is very difficult to inform the applications of a change in that number at run time. That is, we cannot dynamically change the number of file descriptors used by the applications.

Control blocks

When a new TCP connection is established, additional memory is needed for the data structures that store connection information, such as inpcb, tcpcb, and socket. The inpcb structure stores the source and destination IP addresses, port numbers, and other details. The tcpcb structure stores network information, such as the RTT (Round Trip Time), RTO (Retransmission Time Out), and congestion window size, which are used by TCP's congestion control mechanism [25]. The socket structure stores information about the socket. The maximum number of these structures that can be built in memory is fixed initially. Since the memory space for these data structures is set when building the kernel and remains unchangeable while the OS is running, a new TCP connection cannot be established when this memory runs short.
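Since the connection management scheme described later (Subsection 3.2) relies on monitoring these limits, it is worth noting that they can be inspected at run time through the sysctl interface. The following is a minimal sketch; the MIB names used (kern.ipc.nmbclusters, kern.maxfiles) are the usual FreeBSD ones, but their availability and exact meaning can vary across versions.

    /*
     * Sketch: reading FreeBSD kernel resource limits at run time via
     * sysctl(3).  The MIB names are assumptions that may vary by version.
     */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    static int read_sysctl_int(const char *name, int *value)
    {
        size_t len = sizeof(*value);
        return sysctlbyname(name, value, &len, NULL, 0);
    }

    int main(void)
    {
        int nmbclusters, maxfiles;

        /* Upper bound on mbuf clusters, fixed when the kernel is built. */
        if (read_sysctl_int("kern.ipc.nmbclusters", &nmbclusters) == 0)
            printf("nmbclusters: %d\n", nmbclusters);
        /* System-wide limit on open file descriptors. */
        if (read_sysctl_int("kern.maxfiles", &maxfiles) == 0)
            printf("maxfiles:    %d\n", maxfiles);
        return 0;
    }

The connection management scheme needs exactly this kind of information, although the thesis monitors the remaining resources directly inside the kernel rather than through sysctl.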
Socket buffer

The socket buffer is used for data transfer between user applications and the sender/receiver TCP. When a user application transmits data using TCP, the data is copied to the send socket buffer and subsequently copied into mbufs (or mbuf clusters). The assigned size of the socket buffer is a key issue in transferring data effectively over TCP. Suppose that a server host is sending TCP data to two client hosts, one on a 64 Kbps dial-up link (say, client A) and the other on a 100 Mbps LAN (client B). If the server host assigns equal-sized send socket buffers to both client hosts, the assigned buffer is likely to be too large for client A and too small for client B, because of the differences in the capacity (more precisely, the bandwidth-delay product) of their connections. A compromise in buffer usage should be considered so that buffers can be allocated effectively to both client hosts.

2.3 Persistent TCP Connection by HTTP/1.1

In recent years, many Web/Web proxy servers and client hosts have supported the persistent connection option, one of the most important functions of HTTP/1.1 [26]. In the older version of HTTP (HTTP/1.0), the TCP connection between the server and client hosts is closed immediately when a document transfer is completed. However, since Web documents contain many in-line images, HTTP/1.0 requires establishing TCP connections many times to download them. This significantly increases document transfer time, since the average Web document at a typical Web server is only about 10 KBytes [14, 27]; the three-way handshake at each TCP connection establishment makes the situation worse. In HTTP/1.1, the server preserves the status of the TCP connection, including the congestion window size, RTT, RTO, and ssthresh, when it finishes a document transfer. It then re-uses the connection and its status when other documents are transferred over the same HTTP session (and corresponding TCP connection). The three-way handshake can thus be avoided and the latency is reduced.

However, the server maintains established TCP connections irrespective of whether a connection is active (being used for packet transfer) or not. That is, server resources are wasted when a TCP connection is inactive, and a significant portion of resources can be consumed in maintaining numerous persistent TCP connections. In what follows, we introduce a rough analytical estimate of how many TCP connections are active or idle, and explain why excessive resources are wasted at the server when native persistent TCP connections are used. To achieve this, we use the network topology in Figure 2, where a Web client host and a Web server are connected via a proxy server, and derive the probability that a persistent TCP connection is active, i.e., sending TCP packets. In the figure, $p_c$, $rtt_c$, and $rto_c$ are the packet loss ratio, RTT, and RTO between the client host and the proxy server, respectively; similarly, $p_s$, $rtt_s$, and $rto_s$ are those between the proxy server and the Web server. The mean throughputs of the TCP connection between the proxy server and the client host and of that between the Web server and the proxy server, denoted $\rho_c$ and $\rho_s$ respectively, can be obtained from the analysis presented in our previous work [28]. Note that this gives a more accurate estimate of TCP throughput than [29] and [30], especially for small file transfers. Using the results obtained in [28], we can derive the time for a Web document transfer via a proxy server. We also introduce the parameters $h$ and $f$, which represent the cache hit ratio at the Web proxy server and the size of the document being transferred, respectively. Note that the proxy server is likely to cache a Web page as a whole, including the main document and its in-line images; that is, when the main document is found in the proxy server's cache, the in-line images that follow it are likely to be cached as well, and vice versa. Thus, $h$ is not a fully adequate metric when we examine the effects of persistent connections, but the following observation also applies to the above-mentioned case. When the requested document is cached by the proxy server, a request to the original Web server is not required, and the document is delivered directly from the proxy server to the client host.
Figure 2: Analysis Model. (A client host connects through an HTTP proxy server to a Web server; the client-proxy link has loss probability $p_c$, RTT $rtt_c$, and RTO $rto_c$; the proxy-server link has loss probability $p_s$, RTT $rtt_s$, and RTO $rto_s$; the proxy cache hit ratio is $h$.)

However, when the proxy server does not have the requested document, it must be transferred from the appropriate Web server to the client host via the proxy server. Thus, the document transfer time $T(f)$ can be determined as follows:

$$T(f) = h \left( S_c + \frac{f}{\rho_c} \right) + (1-h) \left( S_c + S_s + \frac{f}{\rho_c} + \frac{f}{\rho_s} \right)$$

where $S_c$ and $S_s$ represent the setup times of a TCP connection between the client host and the proxy server and between the proxy server and the Web server, respectively. To derive $S_c$ and $S_s$, we must consider the effect of the persistent connections provided by HTTP/1.1, which eliminate the three-way handshake. Here, we define $X_c$ as the probability that the TCP connection between the client host and the proxy server can be maintained as a persistent connection, and $X_s$ as the corresponding probability for the TCP connection between the proxy server and the Web server. Then, $S_c$ and $S_s$ can be written as follows:

$$S_c = X_c \cdot \frac{1}{2} rtt_c + (1 - X_c) \cdot \frac{3}{2} rtt_c \quad (1)$$

$$S_s = X_s \cdot \frac{1}{2} rtt_s + (1 - X_s) \cdot \frac{3}{2} rtt_s \quad (2)$$

$X_c$ and $X_s$ depend on the length of the persistent timer, $T_p$. That is, if the idle time between two successive document transfers is smaller than $T_p$, the TCP connection can be used for the second document transfer. However, if the idle time is larger than $T_p$, the TCP connection has been closed and a new TCP connection must be established. We rely on the results in [14], where the authors modeled the access pattern of a Web client host
and found that the idle time between Web document transfers follows a Pareto distribution whose probability density function is given by

$$p(x) = \frac{\alpha k^{\alpha}}{x^{\alpha+1}} \quad (3)$$

where $\alpha = 1.5$ and $k = 1$. We can then calculate $X_c$ and $X_s$ as follows:

$$X_c = d(T_p) \quad (4)$$

$$X_s = (1-h) \, d(T_p) \quad (5)$$

where $d(x)$ is the cumulative distribution function of $p(x)$. The average document transfer time $T(f)$ can be determined from Eqs. (1)-(5). We can finally derive $U$, the utilization of a persistent TCP connection, as follows:

$$U = \int_0^{T_p} p(x) \frac{T(f)}{T(f)+x} \, dx + \left(1 - d(T_p)\right) \frac{T(f)}{T(f)+T_p}$$

Figure 3 plots the probability that a TCP connection is active as a function of the length of the persistent timer $T_p$, for various parameter sets of $rtt_c$, $p_c$, $rtt_s$, and $p_s$. Here we set $rto_c = 4 \, rtt_c$, $rto_s = 4 \, rtt_s$, and $h = 0.5$, the packet size to 1460 Bytes, and $f$ to 12 KBytes based on the average size of Web documents reported in [14]. These figures reveal that the utilization of the TCP connections is very low, regardless of the network conditions (the RTTs and packet loss ratios on the links between the proxy server and the client host/Web server). Thus, if idle TCP connections are maintained at the proxy server, a large part of the proxy server's resources is wasted. Furthermore, we can see that utilization increases when the persistent timer is short (< 5 sec), because a smaller $T_p$ value prevents situations where the proxy server's resources are wasted. That is, one solution to this problem is simply to discard HTTP/1.1 and use HTTP/1.0, as the latter closes the TCP connection immediately after a document transfer is complete. However, as HTTP/1.1 has other elegant mechanisms, such as pipelining and content
negotiation [26], we should instead develop an effective resource management scheme for it. Our solution is that when resources run short, the server intentionally closes persistent TCP connections that are unnecessarily wasting them.

In [17], the authors also pointed out other shortcomings of HTTP/1.1. They evaluated the performance of a Web server with HTTP/1.0 [31] and HTTP/1.1 through three kinds of implementation experiments, in which the network, the CPU processing speed, or the disk system was the bottleneck of the system. From the experimental results, they showed that HTTP/1.1 may degrade Web server performance. This is because when memory is fully occupied by mostly idle connections and a new HTTP session requires memory, the Web documents cached in memory are paged out, which means that subsequent requests for these documents require disk I/O. They also proposed a new connection management method, called early close, which establishes one TCP connection per Web object, including embedded files. However, this does not provide a definitive solution to resource management at Web servers, since it does not consider the remaining server resources. Also, the bulk of past research on improving the performance of Web proxy servers has focused on cache replacement algorithms [18-20, 32, 33]. In [13], for example, the authors evaluated the performance of Web proxy servers through simulation experiments, focusing on the differences between HTTP/1.0 and HTTP/1.1, including the effects of cookies and of document transfers aborted by client hosts. However, little work has been done on resource management at the proxy server, and no effective mechanism has been proposed. We describe our scheme in detail in the next section.
Figure 3: Probability that a TCP Connection is Active. (Connection utilization vs. persistent timer [sec], with curves for combinations of $rtt_s \in \{0.02, 0.2\}$ sec and $p_s$ values including 0.01 and 0.1; panel (a) is for $rtt_c = 0.02$ sec and panel (b) for $rtt_c = 0.2$ sec, with $p_c = 0.01$.)
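Returning to the utilization analysis above, the computation behind Figure 3 can be reproduced numerically. The following is a minimal sketch (compile with -std=c99 -lm): the throughputs $\rho_c$ and $\rho_s$ would come from the TCP throughput analysis of [28], so here they are plain inputs, and every concrete value below is illustrative rather than taken from the thesis experiments.

    /*
     * Sketch: numerical evaluation of the connection utilization U.
     * rho_c / rho_s are assumed inputs; all concrete values are examples.
     */
    #include <stdio.h>
    #include <math.h>

    #define ALPHA 1.5               /* Pareto shape, from [14] */
    #define K     1.0               /* Pareto scale, from [14] */

    static double pdf(double x)    /* p(x) = alpha*k^alpha / x^(alpha+1) */
    {
        return (x < K) ? 0.0 : ALPHA * pow(K, ALPHA) / pow(x, ALPHA + 1.0);
    }

    static double cdf(double x)    /* d(x), CDF of the Pareto idle time */
    {
        return (x < K) ? 0.0 : 1.0 - pow(K / x, ALPHA);
    }

    /* Mean document transfer time T(f) through the proxy, per the text. */
    static double T(double f, double h, double tp, double rtt_c,
                    double rtt_s, double rho_c, double rho_s)
    {
        double xc = cdf(tp);                   /* Eq. (4) */
        double xs = (1.0 - h) * cdf(tp);       /* Eq. (5) */
        double sc = xc * 0.5 * rtt_c + (1.0 - xc) * 1.5 * rtt_c;  /* (1) */
        double ss = xs * 0.5 * rtt_s + (1.0 - xs) * 1.5 * rtt_s;  /* (2) */
        return h * (sc + f / rho_c)
             + (1.0 - h) * (sc + ss + f / rho_c + f / rho_s);
    }

    int main(void)
    {
        double tp = 15.0;                       /* persistent timer [s]  */
        double t  = T(12e3 * 8, 0.5, tp,        /* 12 KByte document     */
                      0.02, 0.2, 1e6, 5e5);     /* example RTTs, rates   */
        double u  = 0.0, dx = 1e-3;

        /* Idle times shorter than tp: the connection survives, and its
         * utilization contribution is T/(T+x), weighted by the density. */
        for (double x = K; x < tp; x += dx)
            u += pdf(x + dx / 2.0) * t / (t + x + dx / 2.0) * dx;
        /* Idle times longer than tp: the connection is closed after tp. */
        u += (1.0 - cdf(tp)) * t / (t + tp);

        printf("U = %.3f\n", u);
        return 0;
    }

For inputs of this magnitude, $U$ comes out well below 1, consistent with the low utilizations plotted in Figure 3.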
3 Algorithm

In this section, we propose a new resource management scheme suitable for Web/Web proxy servers, which solves the problems identified in the previous section.

3.1 Socket Buffer Management Scheme

As explained previously, Web/Web proxy servers have to accommodate numerous TCP connections, and server performance degrades when send/receive socket buffers of improper size are assigned. Furthermore, for Web proxy servers, the resources of both the sender and receiver sides must be taken into account. In this thesis, we propose a scalable socket buffer management scheme, called Scalable Socket Buffer Tuning (SSBT), which dynamically assigns send/receive socket buffers to each TCP connection.

3.1.1 Control of Send Socket Buffer

Previous research has assumed that the bottleneck in data transfer is not at the endhost but in the network. Consequently, the allocated send socket buffer size has been fixed; for example, it was 16 KBytes in previous versions of FreeBSD [23] and is 32 KBytes in the current version. Such a fixed size is not large enough for the high-speed Internet and too large for narrow links. Therefore, it is necessary to size the send socket buffer of each TCP connection according to its bandwidth-delay product. In [9], we proposed a control scheme for the send socket buffer assigned to TCP connections and confirmed its effectiveness through simulation and implementation experiments. In what follows, we summarize the scheme briefly.

The equation-based automatic TCP buffer tuning (E-ATBT) we proposed in [9] solves the above problem. In E-ATBT, the sender host estimates the expected throughput of each TCP connection by monitoring three parameters (packet loss probability, RTT, and RTO). It then determines the required buffer size of the connection from the estimated throughput, not from the
current window size of the TCP connection, as the ATBT scheme proposed in [5] does. The method used to estimate TCP throughput is based on the analysis results obtained in [28], where the authors derived the average throughput of a TCP connection for a model in which multiple connections with different input link bandwidths share a bottleneck router employing the RED algorithm [34]. The following parameters are necessary to derive the throughput:

- $p$: packet dropping probability of the RED algorithm
- $rtt$: the RTT value of the TCP connection
- $rto$: the RTO value of the TCP connection

In their analysis, the average window size of the TCP connection is first calculated from the above three parameters. The average throughput is then obtained by accounting for the performance degradation caused by TCP retransmission timeouts. The analysis developed in [35] can easily be applied to our scheme by regarding the packet dropping probability of the RED algorithm as the observed packet loss rate. The parameter set ($p$, $rtt$, $rto$) is obtained at the sender host as follows: $rtt$ and $rto$ can be obtained directly from the sender TCP, and the packet loss rate $p$ can be estimated from the number of successfully transmitted packets and the number of lost packets detected at the sender host via acknowledgement packets. Note that the appropriateness of the estimation method used in the E-ATBT algorithm has been validated [35]. Alternatively, we can apply the estimate of TCP throughput given in [29], which uses an additional parameter $W_{max}$, the buffer size of the receiver:

$$\lambda = \min\left( \frac{W_{max}}{RTT}, \; \frac{1}{RTT\sqrt{\frac{2bp}{3}} + T_0 \min\left(1, 3\sqrt{\frac{3bp}{8}}\right) p \left(1 + 32p^2\right)} \right)$$

where $b$ is set to 2 if the receiver uses TCP's delayed ACK option, and 1 if not. We denote the estimated throughput of connection $i$ by $\rho_i$. From $\rho_i$, we simply determine $B_i$, the required buffer size of connection $i$, as

$$B_i = \rho_i \cdot rtt_i$$
where $rtt_i$ is the RTT value of connection $i$. With this mechanism, a stable assignment of send socket buffers to TCP connections can be expected as long as the parameter set ($p$, $rtt$, $rto$) used in the estimation is stable. In ATBT, by contrast, the assignment is inherently unstable even when the three parameters are stable, since the window size oscillates significantly regardless of the stability of the parameters. Like ATBT, our E-ATBT adopts a max-min fairness policy for re-assigning excess buffers. Unlike the ATBT algorithm, however, E-ATBT employs a proportional re-assignment policy: when excess buffer space is re-assigned to connections needing more buffer, it is re-assigned in proportion to the required buffer sizes calculated from the analysis. ATBT, in contrast, re-assigns excess buffers equally, since it has no means of knowing the expected throughput of the connections.

3.1.2 Control of Receive Socket Buffer

As with the send socket buffer, most past research has assumed that the receive socket buffer at a TCP receiver host is sufficiently large, and many current OSs assign a small, fixed-size receive socket buffer to each TCP connection. For example, the default size of the receive socket buffer is fixed at 16 or 56 KBytes in FreeBSD systems. As reported in [36], however, such sizes are now regarded as small, because network capacity has increased dramatically on the current Internet and server performance has also increased. Furthermore, as with the send socket buffer at the sender TCP, the appropriate size of the receive socket buffer changes with network conditions, namely the available bandwidth and the number of competing connections. Therefore, the receive socket buffer assigned to a TCP connection may be insufficient, or may remain partly unused, when it is not sized appropriately. Ideally, the receive socket buffer should be set to the same size as the congestion window of the corresponding sender TCP, to avoid limiting performance. The problem is that the receiver cannot be informed of the sender host's congestion window size. Furthermore, the
receiver TCP does not maintain RTT and RTO values as the sender TCP does, and the packet loss probability is very difficult to estimate at the receiver. However, the sender TCP's congestion window size can be estimated by monitoring the utilization of the receive socket buffer and the occurrence of packet losses in the network, as follows. Suppose that the data processing speed of the upper-layer application at the receiver is sufficiently high. In this case, when the TCP packets stored in the receive socket buffer become ready to be passed to the application, they are immediately removed from the receive socket buffer. Let us now consider how the usage of the receive socket buffer changes with and without packet loss in the network.

Case 1: Packet loss occurs

Here, the utilization of the receive socket buffer increases, since the data packets successively arriving at the receiver host remain stored in the receive socket buffer while waiting for the lost packet to be retransmitted by the sender. Assuming that lost packets are retransmitted by the Fast Retransmit algorithm [37], it takes about one RTT for a lost packet to be retransmitted. Therefore, the number of packets stored in the receive socket buffer is almost equal to the current congestion window size at the sender host, since TCP sends one congestion window's worth of data packets per RTT. Consequently, the appropriate size of the receive socket buffer is a little larger than the maximum number of packets stored at that time. As a result, we control the receive socket buffer as follows:

- If the utilization of the receive socket buffer becomes close to 100%, the assigned buffer size is increased, since the congestion window size of the sender host is considered to be limited by the receive socket buffer, not by network congestion.
- When the maximum utilization of the receive socket buffer is substantially lower than 100%, the buffer size is decreased, since the excess buffer remains unused.
Case 2: No packet loss occurs

In contrast to Case 1, the utilization of the receive socket buffer remains small. Two situations can be considered: (a) the assigned receive socket buffer is large enough that the sender's congestion window size is not limited by it, and (b) the assigned receive socket buffer is so small that it limits the sender's congestion window size. Which situation holds depends on whether the bandwidth-delay product of the TCP connection is larger than the assigned receive socket buffer size. Since the receiver host cannot know the bandwidth-delay product of the TCP connection, we distinguish the two cases by increasing the assigned receive socket buffer and monitoring the resulting change in the throughput of data received from the sender:

- When the throughput does not increase, we are in case (a). We therefore do not increase the assigned receive socket buffer further, since it is already large enough.
- When the throughput increases, corresponding to case (b), the proxy server continues increasing the receive socket buffer, since a larger socket buffer allows the congestion window size at the sender to grow.

To control the receive socket buffer precisely, we should also consider the situation where the data processing speed of the receiver application is slower than the network speed. In this case, the utilization of the receive socket buffer remains high even when no packet loss occurs in the network, because the sender's transmission rate is limited by the data processing speed of the receiver application regardless of the receive socket buffer size. That is, increasing the receive socket buffer has little effect on the throughput of the TCP connection, so the assigned receive socket buffer size should remain unchanged.

Based on the above considerations, we determine the appropriate size of the receive socket buffer with the following algorithm. The receive socket buffer is resized at regular intervals. During the $i$-th interval, the receiver monitors the maximum utilization of the receive socket buffer ($U_i$) and the rate at which packets are received from the sender ($\rho_i$). When packet loss occurs
during the $i$-th interval, the assigned receive socket buffer ($B_i$) is updated through the following equations:

$$B_i = \alpha \cdot U_i \cdot B_{i-1} \quad \text{when } U_i > T_u$$

$$B_i = \beta \cdot B_{i-1} \quad \text{when } U_i < T_l$$

where $\alpha = 2.0$, $\beta = 0.9$, $T_l = 0.4$, and $T_u = 0.8$; these values are used in the simulation and implementation studies in the following sections. When no packet loss occurs during the $i$-th interval, the following equations are used to update $B_i$:

$$B_i = \alpha \cdot B_{i-1} \quad \text{when } U_i < T_l \text{ and } \rho_i \geq \rho_{i-1}$$

$$B_i = \beta \cdot B_{i-1} \quad \text{when } U_i < T_l \text{ and } \rho_i < \rho_{i-1}$$

$$B_i = B_{i-1} \quad \text{when } U_i > T_l$$

3.1.3 Handling the Relation between Upward and Downward TCP Connections

A Web proxy server relays a document transfer request to a Web server on behalf of a Web client host, so there is a close relation between an upward TCP connection (from the proxy server to the Web server) and the corresponding downward TCP connection (from the client host to the proxy server). That is, the difference in expected throughput between the two connections should be taken into account when socket buffers are assigned to them. For example, when the throughput of a certain downward TCP connection is larger than that of the other concurrent downward TCP connections, E-ATBT assigns it a larger send socket buffer. However, if the throughput of the upward TCP connection corresponding to that downward TCP connection is low, the send socket buffer assigned to the downward TCP connection is unlikely to be utilized fully. When this happens, the unused send socket buffer should be re-assigned to the other concurrent TCP connections with smaller socket buffers, improving their throughputs.
There is one problem that must be overcome in implementing the above method. Although TCP connections can be identified by the kernel through their control blocks (tcpcb), the relation between upward and downward connections cannot be determined explicitly. Therefore, we estimate the relation with the following algorithm: the proxy server monitors the utilization of the send socket buffer of each downward TCP connection, which is assigned by the E-ATBT algorithm. When the send socket buffer is not fully utilized, the proxy server decreases the assigned buffer size, since the low utilization of the send socket buffer is considered to be caused by the low throughput of the corresponding upward TCP connection.

3.2 Connection Management Scheme

As explained in Subsection 2.3, persistent TCP connections at a Web/Web proxy server require careful treatment that takes the amount of remaining resources into account, so that server resources are used efficiently. The key idea is as follows. When the Web/Web proxy server is not heavily loaded and its remaining resources are sufficient, it tries to keep as many TCP connections open as possible. When the server's resources are about to run short, it closes persistent TCP connections to free their resources for new TCP connections. To achieve this control, the remaining resources at the Web/Web proxy server must be monitored. The resources needed to establish a TCP connection are, in our case, mbufs, file descriptors, and control blocks; when these are exhausted, no additional TCP connection can be established. The amounts of these resources cannot be changed dynamically once the kernel is booted, but their total and remaining amounts can be monitored in the kernel. Therefore, we introduce threshold values for resource utilization, and if the utilization of any of these resources reaches its threshold, the server starts closing persistent TCP connections and releases the resources assigned to those connections.

We also have to keep track of the persistent TCP connections at the server, so that we can maintain or close them according to resource utilization. Figure 4 shows our mechanism for managing persistent
Figure 4: Persistent Connection List in Connection Management Scheme. (The list holds (sfd, proc) entries; insertions occur at the tail of the list and deletions at the head.)

TCP connections at the server. When a TCP connection finishes transmitting a requested document and becomes idle, the server records the socket file descriptor and the process number as a new entry in the persistent connection list, which the kernel uses to manage the persistent TCP connections. New entries are added to the end of the list. When the server decides to close some persistent TCP connections, it selects them from the top of the list; in this way, the server closes the oldest persistent connections first. When a persistent TCP connection in the list becomes active again before being closed, or when it is closed by the expiration of the persistent timer, the server removes the corresponding entry from the list. All operations on the persistent connection list can be performed with simple pointer manipulations.

To manage resources even more effectively, we add a mechanism that gradually decreases the amount of resources assigned to a persistent TCP connection after the connection becomes inactive. No socket buffer is needed while a TCP connection is idle. Therefore, we can gradually decrease the send/receive socket buffer of a persistent TCP connection, taking into account the fact that as the connection's idle time grows, the possibility that the TCP connection will
be terminated becomes larger.
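A minimal user-level sketch of the persistent connection list is shown below; the names and types are hypothetical, since the thesis implements the list inside the kernel, where closing the socket releases its mbufs, control blocks, and descriptor.

    /*
     * Sketch of the persistent connection list of Figure 4.
     * Idle connections are appended at the tail; when a resource
     * threshold is crossed, the oldest entries are closed from the head.
     */
    #include <stdlib.h>
    #include <unistd.h>

    struct pconn {
        int sfd;                 /* socket file descriptor */
        int proc;                /* owning process number  */
        struct pconn *next;
    };

    static struct pconn *head = NULL, *tail = NULL;

    /* A connection finished a transfer and became idle: append it. */
    void pconn_insert(int sfd, int proc)
    {
        struct pconn *e = malloc(sizeof(*e));
        if (e == NULL)
            return;
        e->sfd = sfd;
        e->proc = proc;
        e->next = NULL;
        if (tail != NULL)
            tail->next = e;
        else
            head = e;
        tail = e;
    }

    /* The connection became active again, or its persistent timer
     * expired: unlink its entry without touching the socket here. */
    void pconn_remove(int sfd)
    {
        struct pconn **pp = &head, *e;
        while ((e = *pp) != NULL && e->sfd != sfd)
            pp = &e->next;
        if (e == NULL)
            return;
        *pp = e->next;
        if (e == tail) {         /* removed the last entry: fix tail */
            tail = head;
            while (tail != NULL && tail->next != NULL)
                tail = tail->next;
        }
        free(e);
    }

    /* Resources crossed a threshold: close the n oldest idle
     * connections, releasing their resources via close(). */
    void pconn_reclaim(int n)
    {
        while (n-- > 0 && head != NULL) {
            struct pconn *e = head;
            head = e->next;
            if (head == NULL)
                tail = NULL;
            close(e->sfd);
            free(e);
        }
    }

Appending at the tail and reclaiming from the head keep both operations O(1) and guarantee that the oldest idle connections are closed first, as described above.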
Figure 5: Network Model (1). (A sender host and a receiver host connected by a link with 100 Mbps bandwidth, 0.1 sec propagation delay, and a configurable loss probability.)

4 Simulation Experiments

In this section, we evaluate the performance of the proposed mechanism through simulation experiments using ns-2 [38]. An overview of the implementation of the proposed scheme and the results of implementation experiments are provided in Section 5.

4.1 Performance of the Scalable Socket Buffer Tuning

We first discuss the results of simulation experiments that confirm the behavior of the socket buffer control mechanisms. We use the network model in Fig. 5 to validate the effectiveness of the send socket buffer control mechanism. In this model, we set the bandwidth of the link to 100 Mbps, the propagation delay to 0.1 sec, and the packet loss probability to 1%, 0.1%, or 0.01%. We implement the proposed send socket buffer control at the sender host, and the sender host transfers data of unlimited size to the receiver host through a TCP connection. The simulation time is 400 sec.

Figure 6 shows the change in the assigned size of the send socket buffer over the simulation time. It is clear that as the packet loss probability increases, the assigned buffer size decreases: the throughput of the TCP connection between the sender and receiver hosts becomes low when the packet loss probability is high, so our algorithm assigns a smaller send socket buffer to the connection based on the estimated throughput.
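For reference, the send-buffer sizing rule of Subsection 3.1.1 that drives this behavior can be sketched as follows, using the throughput estimate of [29] quoted in Section 3 (compile with -std=c99 -lm); the monitored inputs are assumed to be collected elsewhere in the kernel.

    /*
     * Sketch of E-ATBT send-buffer sizing: estimate throughput from
     * (p, rtt, rto, Wmax), then size the buffer at throughput x RTT.
     */
    #include <math.h>

    /* Estimated TCP throughput [packets/s]; b = 2 with delayed ACKs. */
    static double est_throughput(double p, double rtt, double rto,
                                 double wmax, double b)
    {
        double denom;

        if (p <= 0.0)                /* no observed loss: window-limited */
            return wmax / rtt;
        denom = rtt * sqrt(2.0 * b * p / 3.0)
              + rto * fmin(1.0, 3.0 * sqrt(3.0 * b * p / 8.0))
                * p * (1.0 + 32.0 * p * p);
        return fmin(wmax / rtt, 1.0 / denom);
    }

    /* Required send socket buffer B_i = rho_i * rtt_i [packets]:
     * the estimated bandwidth-delay product of the connection. */
    static double send_buf_size(double p, double rtt, double rto,
                                double wmax)
    {
        return est_throughput(p, rtt, rto, wmax, 2.0) * rtt;
    }

A higher loss probability $p$ lowers the estimate and hence the assigned buffer, which is exactly the trend seen in Figure 6.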
Figure 6: Simulation Results: Change of Assigned Size of Send Socket Buffer. (Assigned buffer size [packets] vs. time [sec] for packet loss probabilities of 0.01%, 0.1%, and 1%.)

We next evaluate the receive socket buffer control mechanism. In this simulation, we use the same network model as for the send socket buffer, setting the link bandwidth to 100 Mbps, the propagation delay to 0.1 sec, and the packet loss probability to 1% or 0.1%. We implement the proposed algorithm at the receiver host, and the sender host transfers data of unlimited size to the receiver host. The simulation time is 400 sec.

Figures 7(a) and 7(b) show the changes in the assigned size of the receive socket buffer and in the amount of data stored in it as a function of simulation time, when the packet loss probability is set to 0.1% and 1%, respectively. These figures show that the proposed scheme assigns an appropriate receive socket buffer size; that is, it precisely monitors the usage of the receive socket buffer and sizes the buffer according to the monitored utilization.
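Similarly, the per-interval receive-buffer resizing rules of Subsection 3.1.2 can be restated compactly; the monitored inputs (maximum utilization, receive rates, loss indication) are assumed to be collected elsewhere, and the names are hypothetical.

    /*
     * Sketch of the receive-buffer resizing rules, applied once per
     * monitoring interval.  Constants are those given in Section 3.
     */
    #define ALPHA_GROW  2.0   /* alpha */
    #define BETA_SHRINK 0.9   /* beta  */
    #define T_LOW       0.4   /* T_l   */
    #define T_HIGH      0.8   /* T_u   */

    double resize_recv_buf(double b_prev,    /* B_{i-1}               */
                           double util,      /* U_i, in [0, 1]        */
                           double rate,      /* rho_i                 */
                           double rate_prev, /* rho_{i-1}             */
                           int lost)         /* packet loss observed? */
    {
        if (lost) {
            if (util > T_HIGH)        /* buffer caps sender's window   */
                return ALPHA_GROW * util * b_prev;
            if (util < T_LOW)         /* buffer largely unused: shrink */
                return BETA_SHRINK * b_prev;
        } else if (util < T_LOW) {
            /* Probe: keep growing only while throughput improves.     */
            return (rate >= rate_prev) ? ALPHA_GROW * b_prev
                                       : BETA_SHRINK * b_prev;
        }
        /* High utilization without loss: the receiver application is
         * the bottleneck, so leave the buffer unchanged.              */
        return b_prev;
    }

This is the behavior that the simulation results in Figure 7 are meant to confirm.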
Figure 7: Simulation Results: Change of Assigned Size of Receive Socket Buffer. (Number of packets vs. time [sec], showing the assigned size and the stored data size; panel (a) for a packet loss probability of 0.1%, panel (b) for 1%.)
Figure 8: Network Model (2). (432 client hosts connect to a Web proxy server with cache hit ratio h = 0.5, which connects to 432 Web servers; each link has its own propagation delay and loss probability.)

4.2 Overall Performance of Resource Management Scheme

We next evaluate the overall effectiveness of the proposed scheme, including the connection management scheme. Figure 8 shows the simulation model, which simulates a situation where many Web client hosts and Web servers in a heterogeneous environment communicate through a Web proxy server to send and receive Web documents. To model this heterogeneity, we vary the bandwidths, propagation delays, and packet loss probabilities of the links between the proxy server and the Web clients/servers. The bandwidths of the links between the client hosts and the proxy server are 100 Mbps, 1.5 Mbps, or 128 Kbps, and those between the proxy server and the Web servers are 100 Mbps, 10 Mbps, or 1.5 Mbps. The packet loss probability on each link between the proxy server and the client hosts is selected from three values including 0.001 and 0.01, and that between the proxy server and the Web servers is selected from 0.001, 0.01, and 0.1. The propagation delay of each link is set to 1 sec, 0.1 sec, or 0.01 sec. The numbers of Web servers and client hosts are both fixed at 432.

In the simulation experiments, each client host randomly selects one of the Web servers and generates a document transfer request to the proxy server. The distribution of the requested
document size follows that reported in [14]; that is, it is given by a combination of a log-normal distribution for small documents and a Pareto distribution for large ones. The access model of the client hosts also follows [14]: a client host first requests the main document, then requests its in-line images after a short interval (following [14], we call this the active off time), and then requests the next document after a somewhat longer interval (the inactive off time). Web client hosts send Web document requests to the Web proxy server only when the resources at the proxy server are sufficient to accommodate those connections; when the remaining server resources run short, the client hosts have to wait for other TCP connections to terminate and for their assigned resources to be released. After the Web proxy server accepts a request, it decides whether to transfer the requested document to the client host directly, or to download it from the original Web server and then deliver it to the client host, based on the cache hit ratio h, which is fixed at 0.5 in our simulation experiments. Like the Web client hosts, the proxy server can download a requested document only when it has sufficient resources to establish a new TCP connection.

The socket buffer available to the proxy server is divided equally between send socket buffers and receive socket buffers; that is, when the system has a 50 MBytes socket buffer, 25 MBytes are assigned to each. We did not precisely simulate the other limitations of proxy server resources explained in Subsection 2.2; instead, we introduced N_max, the maximum number of TCP connections that can be established simultaneously at the proxy server. In addition to the proposed mechanism, we conducted simulation experiments with traditional mechanisms, in which a fixed-size send/receive socket buffer is assigned to each TCP connection, for comparison. The simulation time is 1000 sec. We compare the performance of the proposed mechanism and the traditional mechanism, focusing on the following metrics:

- Server-side proxy throughput: the total size of data transferred from Web servers to the proxy server, divided by the simulation time.
- Client-side proxy throughput: the total size of data transferred from the proxy server to client hosts, divided by the simulation time.
- Average document transfer time: the average time from when a client host sends a Web document request to when it finishes receiving the document.

First, we evaluate the performance of the Web proxy server as the total amount of socket buffer is changed. In this simulation, we set the total socket buffer size to 150 MBytes, 50 MBytes, 30 MBytes, or 10 MBytes, each of which is divided equally between send and receive socket buffers. N_max is set to 846, meaning that no TCP connection, whether from a Web client host or to a Web server, is prevented from being established by a lack of other resources. Figure 9 shows the server-side proxy throughput, client-side proxy throughput, and document transfer time. In each graph, the left 16 bars show the results of the traditional mechanisms with various sizes of send and receive socket buffers; for example, (32 KB, 64 KB) means the traditional scheme assigns a fixed 32 KBytes to the send socket buffer and a fixed 64 KBytes to the receive socket buffer. The right 4 bars show the results of the proposed mechanism. Here we did not use the connection management scheme explained in Subsection 3.2, since it has no effect on the performance of the proposed scheme in this setting; note that we have confirmed that the connection management scheme has no adverse effects in this case.

Figure 9 reveals that when the total amount of socket buffer is small, the average throughput of the traditional schemes decreases, especially when the buffer assigned to each TCP connection is large. This is because some TCP connections do not use the large buffer assigned to them, which also forces newly arriving TCP connections to wait until other TCP connections have closed and released their assigned socket buffers. The throughput of the proposed scheme, in contrast, remains high even when the total amount of socket buffer is quite small. This is because we can assign an appropriate size of
socket buffer for each TCP connection based on its estimated demand. Consequently, the excess buffers of narrow-link users are re-assigned to wide-link users, who request large socket buffers.

One possible way to improve the throughput of the proxy server under the traditional schemes is to configure the ratio of send to receive socket buffers, instead of dividing the buffer equally as in the above simulation experiments. If the ratio could be changed according to the required sizes, the utilization of the socket buffer would increase, improving proxy server throughput. To investigate this, let us look at the results when the division ratio of send to receive socket buffers is changed. In this simulation, we set N_max to 846 and fix the total amount of socket buffer at 50 MBytes, dividing it between send and receive socket buffers in the ratios 1:1, 2:1, 4:1, and 1:2. Figure 10 shows that in the traditional schemes, the most appropriate division ratio changes with the assigned socket buffer size; it is also affected by various factors such as the cache hit ratio and the total number of TCP connections at the proxy server. That is, it is very difficult to find the best division ratio for the send/receive socket buffer. In our scheme, however, the proxy server throughput remains high even if the ratio is set badly or the total socket buffer is very small. That is, our proposed scheme is robust to the setting of the socket buffer division ratio.

Let us now discuss the results of evaluating the connection management scheme. Here we set N_max to 600, which is too small to accept all TCP connections arriving at the proxy server. The other parameters are the same as in Fig. 9. Figure 11 shows the simulation results for our scheme with and without the connection management scheme (SSBT+Conn and SSBT, respectively). The figure shows that with this small N_max, both the proxy server throughput and the document transfer time of the traditional schemes are worse than in Fig. 9: some TCP connections must wait to be established because of the lack of proxy server resources, even though most TCP connections at the proxy server are not used for data transmission and only waste proxy server resources. This phenomenon can be seen even in
our scheme without the connection management scheme. On the other hand, when the connection management scheme is used, the proxy server throughput increases and the document transfer time decreases. The connection management scheme proposed in this thesis intentionally terminates persistent TCP connections that are transferring no data, and the released resources are used by newly arriving TCP connections, which can then start transmitting Web documents immediately. This improves the resource utilization of the proxy server and reduces the document transfer time perceived by Web clients.
Figure 9: Simulation Results: Effect of Socket Buffer Size ((a) server-side proxy throughput [Kbps], (b) client-side proxy throughput [Kbps], and (c) document transfer time [sec], each shown for total socket buffer sizes of 150, 50, 30, and 10 MBytes under the traditional schemes (16 KB, 16 KB) through (256 KB, 256 KB) and the proposed scheme)
Figure 10: Simulation Results: Effect of Dividing Ratio of Socket Buffer ((a) server-side proxy throughput [Kbps], (b) client-side proxy throughput [Kbps], and (c) document transfer time [sec], each shown for send:receive division ratios of 1:1, 2:1, 4:1, and 1:2 under the traditional schemes and the proposed scheme)
Figure 11: Simulation Results: Performance of Connection Management Scheme ((a) server-side proxy throughput [Kbps], (b) client-side proxy throughput [Kbps], and (c) document transfer time [sec], comparing the traditional schemes with SSBT and SSBT+Conn. for total socket buffer sizes of 150, 50, 30, and 10 MBytes)
5 Implementation and Experiments

In this section, we discuss the results obtained in implementation experiments and confirm the effectiveness of our proposed scheme on an actual system.

5.1 Implementation Overview

Our scheme consists of two algorithms: the socket buffer management scheme discussed in Subsection 3.1 and the connection management scheme described in Subsection 3.2. We implemented them on a PC running FreeBSD 4.6 by modifying the source code of the kernel and of the Squid proxy server [21]. The total number of lines of added source code is about

The socket buffer management scheme is composed of two mechanisms: control of the send socket buffer and control of the receive socket buffer. To control the send socket buffer, we have to obtain the estimated throughput of each TCP connection established at the proxy server from the three parameters described in Subsection 3.1.1. These parameters can easily be monitored at a TCP sender host, such as a Web server or a Web proxy server. We monitor them in the kernel at regular intervals, which we set to 1 sec in the following experiments. We then calculate the estimated throughput of each TCP connection and assign its send socket buffer using the algorithm in Subsection 3.1.1.

To control the receive socket buffer, we modified the kernel to monitor the utilization of the receive socket buffer at regular intervals, as described in Subsection 3.1.2; this interval is also set to 1 sec in our implementation. Note that the receive socket buffer must be treated carefully when its assigned size is decreased in our scheme. If we decreased the receive socket buffer size without first sending ACK packets carrying the new advertised window size to the TCP sender host, packets already in transit could be lost due to the lack of receive socket buffer [39]. Therefore, we decrease the receive socket buffer size 0.5 sec after informing the sender host of the new advertised window size by sending an ACK packet.
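The deferred shrink can be sketched in C as follows. This is a simplified user-level illustration of the logic just described, not the actual FreeBSD kernel patch; the structure, function names, and the stubbed ACK routine are all hypothetical.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical receive-side connection state; names are illustrative. */
struct rconn {
    size_t rcv_buf;       /* currently allocated receive buffer [bytes] */
    size_t rcv_target;    /* new size chosen from the monitored utilization */
    double shrink_at;     /* earliest time the shrink may be applied [sec] */
    bool   shrink_pending;
};

/* Stub: in the kernel this would emit a pure ACK advertising `win`. */
static void send_ack_with_window(struct rconn *c, size_t win)
{
    (void)c;
    (void)win;
}

/*
 * Called by the one-second monitoring timer with the buffer size chosen
 * from the observed utilization.  Growing is applied immediately, but a
 * shrink is only scheduled: the sender is told the smaller window first,
 * and the buffer itself is reduced 0.5 sec later, so that data already
 * sent against the old, larger advertised window is not dropped [39].
 */
static void rcvbuf_adjust(struct rconn *c, size_t target, double now)
{
    if (target >= c->rcv_buf) {
        c->rcv_buf = target;            /* growing is always safe */
        c->shrink_pending = false;
        return;
    }
    send_ack_with_window(c, target);    /* inform the sender first */
    c->rcv_target = target;
    c->shrink_at = now + 0.5;
    c->shrink_pending = true;
}

/* Called from a finer-grained timer to apply any deferred shrink. */
static void rcvbuf_apply_pending(struct rconn *c, double now)
{
    if (c->shrink_pending && now >= c->shrink_at) {
        c->rcv_buf = c->rcv_target;
        c->shrink_pending = false;
    }
}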
To implement the connection management scheme, we have to monitor the utilization of server resources and maintain an adequate number of persistent TCP connections concurrently established at the proxy server, as described in Subsection 3.2. We therefore monitor the remaining server resources in the kernel every second and compare them with the threshold values set for those resources. Furthermore, to manage the persistent TCP connections at the proxy server, we maintain in the kernel the persistent connection list explained in Subsection 3.2.
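A minimal sketch of this monitoring step is shown below, assuming a single aggregate connection counter and a singly linked persistent connection list; the names and the stubbed close routine are hypothetical, and the real implementation tracks several kinds of resources as described in Subsection 3.2.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical entry of the persistent connection list (Subsection 3.2). */
struct pconn {
    struct pconn *next;
    bool          idle;   /* true if the connection carries no data now */
};

static struct pconn *pconn_list;      /* persistent connections */
static int conns_in_use;              /* established TCP connections */
static int conn_threshold = 300;      /* start closing idle ones here */

/* Stub: the real kernel code closes the TCP connection and frees it. */
static void close_connection(struct pconn *c)
{
    (void)c;
}

/*
 * Called by the one-second monitoring timer.  While the number of
 * established connections exceeds the threshold, idle persistent
 * connections are unlinked and closed so that their resources become
 * available to newly arriving TCP connections.
 */
static void conn_manage(void)
{
    struct pconn **pp = &pconn_list;

    while (*pp != NULL && conns_in_use > conn_threshold) {
        struct pconn *c = *pp;
        if (c->idle) {
            *pp = c->next;        /* unlink from the list */
            close_connection(c);  /* release its resources */
            conns_in_use--;
        } else {
            pp = &c->next;
        }
    }
}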
5.2 Implementation Experiments

Figure 12: Network Environment for Implementation Experiment (client host, Web proxy server, and Web server; number of emulated users: 200 or 600; connection upper limit: 400; connection threshold: 300; persistent timer: 15 seconds; cache hit ratio: 0.5)

Figure 12 outlines our experimental system. The Web proxy server host has dual 2 GHz Intel Xeon processors and 2 GBytes of RAM, the Web server host has a 2 GHz CPU and 2 GBytes of RAM, and the client host has a 2 GHz CPU and 1 GByte of RAM. All three machines run FreeBSD 4.6. The amount of proxy server resources is set such that the proxy server can accommodate up to 400 TCP connections simultaneously. The threshold value at which the proxy server begins to close persistent TCP connections is set to 300 connections. We intentionally set the size of the proxy server cache to 1024 KBytes, so that the cache hit ratio becomes about 0.5. The persistent timer used by the proxy server is set to 15 seconds. The client host uses httperf [40], which can emulate numerous users making Web accesses, to generate document transfer requests to the Web proxy server. The number of emulated users in the experiments is set to 200 and 600. In the traditional system, therefore, all TCP connections at the Web proxy server can be established when there are 200 users, but not when there are 600, due to the lack of server resources. As in the simulation experiments, the access model for each user at the client host (the distributions of requested document size and of think time between successive requests) follows that reported in [14]. When a request is rejected by the proxy server due to the lack of resources, the client host resends it immediately. We compare the proposed scheme against the original scheme, in which none of the mechanisms proposed in this thesis is used.

Figure 13 shows the average throughput of the proxy server and the average document transfer time perceived by the client host. Here, we define the average throughput of the proxy server as the total size of the documents transferred from the proxy server to the client hosts divided by the duration of the experiment (500 sec). The figure shows that when the number of users is 200, the average throughput of the traditional scheme (16 KB, 16 KB) is lower than that of the other traditional scheme (256 KB, 256 KB). This is because the assigned send/receive socket buffers are too small, even though the network capacity and the Web server performance are sufficient. When the number of users increases to 600, however, the performance of the traditional schemes degrades in terms of both proxy server throughput and document transfer time; in particular, the traditional scheme (256 KB, 256 KB) degrades significantly. This is because the proxy server resources run short in this case, since too large a socket buffer is assigned to each TCP connection.

Looking at the results of our proposed scheme, we can observe that when the number of users is 200, the proxy server throughput is almost the same as that of the traditional scheme (256 KB, 256 KB). In the 600-user case, however, our proposed scheme does not lose throughput even though the number of users is much larger than the capacity of the proxy server. It also provides the smallest document transfer time for the Web client hosts. These results mean that our scheme can effectively control the persistent connections at the proxy server.
Furthermore, the proposed scheme performs well in terms of resource utilization at the proxy server. Figure 14 shows the average size of the socket buffer assigned to all TCP connections. It clearly shows that our scheme can substantially reduce the socket buffer usage at the proxy server. Note that the traditional scheme (256 KB, 256 KB) uses about 10 times more socket buffer, yet provides only 75% of the throughput and a longer document transfer time compared with the proposed scheme. From the above results, we can conclude that our proposed scheme works effectively on an actual system, providing considerably better performance than the traditional scheme.
Figure 13: Experimental Results: Server Throughput and Document Transfer Time ((a) average throughput of the proxy server [Mbps] and (b) average document transfer time [sec], comparing the traditional schemes (16 KB, 16 KB) and (256 KB, 256 KB) with the proposed scheme for 200 and 600 users)
Figure 14: Experimental Results: Assigned Socket Buffer Size (average assigned buffer size [MBytes] for the traditional schemes (16 KB, 16 KB) and (256 KB, 256 KB) and the proposed scheme with 200 and 600 users)
6 Concluding Remarks

In this thesis, we have proposed a new resource management mechanism for TCP connections at Internet servers. Our proposed scheme has two algorithms. The first is a socket buffer management scheme, which effectively assigns send/receive socket buffers to heterogeneous TCP connections based on their expected demands; it takes into account the dependency between the upward and downward TCP connections at proxy servers. The second is a scheme for managing persistent TCP connections; it monitors server resources and intentionally closes idle TCP connections when the remaining server resources run short. We have evaluated our scheme through various simulation and implementation experiments, and confirmed that it can improve proxy server performance and reduce the document transfer time for Web client hosts.

As future work, we will evaluate server performance through implementation experiments in which more Web server and client hosts exist in the network. Furthermore, to improve server performance further, we will have to consider the management of other server resources, such as CPU utilization.
Acknowledgements

I would like to express my sincere appreciation and deep gratitude to my advisor, Professor Masayuki Murata of Osaka University, who introduced me to the area of computer networks, including the subjects of this thesis, and advised me throughout my studies and the preparation of this manuscript.

None of the work in this thesis would have been possible without the support of Associate Professor Go Hasegawa of Osaka University. He has been a constant source of encouragement and advice throughout my studies and the preparation of this thesis.

I would like to extend my appreciation to Mr. Kazuhiro Azuma of Osaka University for his valuable support and many suggestions. Thanks are also due to Professor Hideo Miyahara and Professor Shinji Shimojo of Osaka University, who gave me much advice. I am indebted to Associate Professor Masashi Sugano of Osaka Prefecture College of Health Sciences, as well as Associate Professor Ken-ichi Baba, Associate Professor Naoki Wakamiya, Associate Professor Hiroyuki Ohsaki, Research Assistant Shin'ichi Arakawa, and Research Assistant Ichinoshin Maki, all of Osaka University, who gave me helpful comments and feedback.

Finally, I am heartily thankful to my many friends and colleagues in the Department of Informatics and Mathematical Science, Graduate School of Engineering Science, Osaka University, and in the Department of Information Networking, Graduate School of Information Science and Technology, Osaka University, for their generous, enlightening, and valuable suggestions.
References

[1] K. Murakami and M. Maruyama, MAPOS - multiple access protocol over SONET/SDH version 1 etc., Request for Comments (RFC), June.

[2] J. Anderson, J. Manchester, A. Rodriguez-Moral, and M. Veeraraghavan, Protocols and architectures for IP optical networking, Bell Labs Technical Journal, January-March.

[3] V. Jacobson, Congestion avoidance and control, in Proceedings of ACM SIGCOMM '88, Aug.

[4] M. Allman, V. Paxson, and W. Stevens, TCP congestion control, Request for Comments (RFC) 2581, Apr.

[5] J. Semke, J. Mahdavi, and M. Mathis, Automatic TCP buffer tuning, in Proceedings of ACM SIGCOMM '98, Aug.

[6] J. Liu and J. Ferguson, Automatic TCP socket buffer tuning, in Proceedings of SC 2000, Nov.

[7] B. L. Tierney, TCP Tuning Guide for Distributed Applications on Wide Area Networks, USENIX ;login:, vol. 26, Feb.

[8] T. Terai, T. Okamoto, G. Hasegawa, and M. Murata, Design and implementation experiments of scalable socket buffer tuning, in Proceedings of the 4th Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT 2001), Nov.

[9] G. Hasegawa, T. Terai, T. Okamoto, and M. Murata, Scalable socket buffer tuning for high-performance Web servers, in Proceedings of IEEE ICNP 2001, Nov.

[10] Proxy Survey, available at
[11] T. Okamoto, T. Terai, G. Hasegawa, and M. Murata, A resource/connection management scheme for HTTP proxy servers, in Proceedings of the Second International IFIP-TC6 Networking Conference, May.

[12] A. Luotonen and K. Altis, World-Wide Web proxies, Computer Networks and ISDN Systems, vol. 27, no. 2.

[13] A. Feldmann, R. Caceres, F. Douglis, G. Glass, and M. Rabinovich, Performance of Web proxy caching in heterogeneous bandwidth environments, in Proceedings of IEEE INFOCOM '99, June.

[14] P. Barford and M. Crovella, Generating representative Web workloads for network and server performance evaluation, in Proceedings of the 1998 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, July.

[15] V. Almeida, A. Bestavros, M. Crovella, and A. de Oliveira, Characterizing reference locality in the WWW, in Proceedings of the 1996 International Conference on Parallel and Distributed Information Systems (PDIS '96), Dec.

[16] M. Arlitt and C. Williamson, Web server workload characterization: The search for invariants, in Proceedings of the ACM SIGMETRICS '96 Conference, Apr.

[17] P. Barford and M. Crovella, A performance evaluation of Hyper Text Transfer Protocols, in Proceedings of ACM SIGMETRICS '99, Oct.

[18] P. Cao and S. Irani, Cost-aware WWW proxy caching algorithms, in Proceedings of the 1997 USENIX Symposium on Internet Technologies and Systems, Dec.

[19] L. Rizzo and L. Vicisano, Replacement policies for a proxy cache, IEEE/ACM Transactions on Networking, vol. 8, Apr.
[20] L. Cherkasova and G. Ciardo, Role of aging, frequency, and size in Web cache replacement policies, in Proceedings of the European Conference on High Performance Computing and Networking (HPCN 2001), June.

[21] Squid Home Page, available at

[22] Apache proxy mod_proxy, available at mod_proxy.html.

[23] FreeBSD Home Page, available at

[24] M. K. McKusick, K. Bostic, M. J. Karels, and J. S. Quarterman, The Design and Implementation of the 4.4 BSD Operating System. Reading, Massachusetts: Addison-Wesley.

[25] W. R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols. Reading, Massachusetts: Addison-Wesley.

[26] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, and T. Berners-Lee, Hypertext Transfer Protocol - HTTP/1.1, Request for Comments (RFC) 2068, Jan.

[27] M. Nabe, M. Murata, and H. Miyahara, Analysis and modeling of World Wide Web traffic for capacity dimensioning of Internet access lines, Performance Evaluation, vol. 34, Dec.

[28] K. Tokuda, G. Hasegawa, and M. Murata, TCP throughput analysis with variable packet loss probability for improving fairness among long/short-lived TCP connections, in Proceedings of the 16th International Workshop on Communication Quality & Reliability (CQR 2002), May.

[29] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, Modeling TCP throughput: A simple model and its empirical validation, in Proceedings of ACM SIGCOMM '98, Aug.
[30] N. Cardwell, S. Savage, and T. Anderson, Modeling TCP latency, in Proceedings of IEEE INFOCOM 2000, Mar.

[31] T. Berners-Lee, R. Fielding, and H. Frystyk, Hypertext Transfer Protocol - HTTP/1.0, Request for Comments (RFC) 1945, May.

[32] J. Chuang, S. Kafka, and K. Norlen, Efficiency and performance of web cache reporting strategies, in Proceedings of the IEEE International Workshop on Data Semantics in Web Information Systems, Dec.

[33] S. Jin and A. Bestavros, Popularity-aware GreedyDual-Size Web proxy caching algorithms, in Proceedings of IEEE ICDCS 2000, Apr.

[34] S. Floyd and V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Transactions on Networking, vol. 1, Aug.

[35] G. Hasegawa, T. Matsuo, M. Murata, and H. Miyahara, Comparisons of packet scheduling algorithms for fair service among connections on the Internet, in Proceedings of IEEE INFOCOM 2000, Mar.

[36] M. Allman, A Web server's view of the transport layer, ACM Computer Communication Review, vol. 30, Oct.

[37] W. Stevens, TCP slow start, congestion avoidance, fast retransmit, and fast recovery algorithms, Request for Comments (RFC) 2001, Jan.

[38] The VINT Project, UCB/LBNL/VINT network simulator - ns (version 2), available at

[39] G. R. Wright and W. R. Stevens, TCP/IP Illustrated, Volume 2: The Implementation. Reading, Massachusetts: Addison-Wesley.
[40] D. Mosberger and T. Jin, httperf: A tool for measuring Web server performance, Performance Evaluation Review, vol. 26, Dec.