Evaluating Cooperative Web Caching Protocols for Emerging Network Technologies 1
Christoph Lindemann and Oliver P. Waldhorst, University of Dortmund, Department of Computer Science, August-Schmidt-Str., Dortmund, Germany

Abstract. While bandwidth for previous IP backbone networks deployed by Internet Service Providers has typically been limited to 34 Mbps, current and future IP networks provide bandwidth ranging from 155 Mbps to 2.4 Gbps. Thus, it is important to investigate the impact of emerging network technologies on the performance of cooperative Web caching protocols. In this paper, we present a comprehensive performance study of four cooperative Web caching protocols: the Internet cache protocol (ICP), Cache Digests, the cache array routing protocol (CARP), and the Web cache coordination protocol (WCCP). The performance of these protocols is evaluated using trace-driven simulation with measured Web traffic from NLANR. The goal of this performance study lies in understanding the behavior and limiting factors of the considered protocols for cooperative Web caching under measured traffic conditions. Based on this understanding, we give recommendations for Internet Service Providers, Web clients, and Application Service Providers.

Key words: client, proxy and server-side caches of Web content; data and service replication on the Web; end-to-end performance evaluation of caching and replication; trace-driven simulation.

1 Introduction

Cooperative Web caching means the sharing and coordination of cached Web documents among multiple communicating proxy caches in an IP backbone network. Cooperative caching of data has its roots in distributed file and virtual memory systems in a high-speed local area network environment. Cooperative caching has been shown to substantially reduce latencies in such distributed computing systems because network transfer time is much smaller than disk access time to serve a miss.
While bandwidth for previous IP backbone networks deployed by Internet Service Providers has typically been limited to 34 Mbps, current and future IP networks provide bandwidth ranging from 155 Mbps to 2.4 Gbps (see, e.g., CA*Net-3 [1] or Internet-2 [5]). Thus, it is important to investigate the impact of emerging network technologies on the performance of cooperative Web caching protocols. Protocols for cooperative Web caching can be categorized as message-based, directory-based, hash-based, or router-based [4]. A popular example of a message-based protocol is the Internet cache protocol, ICP [10]. Directory-based protocols include Cache Digests [9]. The most notable hash-based cooperative Web caching protocol constitutes the cache array routing protocol, CARP [8]. An example of a router-based protocol is the Web cache coordination protocol, WCCP [3]. In previous work, the performance of cooperative Web caching protocols has mostly been studied just in comparison to ICP [7], [8], [9], [11]. To the best of our knowledge, a comprehensive performance study of the protocols ICP, Cache Digests, CARP, and WCCP based on the same IP backbone network topology and workload characteristics has not been reported so far. Furthermore, the effect of rapidly increasing bandwidth availability on these cooperative Web caching protocols has not been investigated. We present a comprehensive performance study for the cooperative Web caching protocols ICP, Cache Digests, CARP, and WCCP. The performance of the considered protocols is evaluated using measured Web traffic from NLANR [6] and a discrete-event simulator of an IP backbone network. The presented curves illustrate that for Internet Service Providers (ISPs) operating IP networks with high bandwidth 1 This work was supported by the DFN-Verein with funds of the BMBF.
availability (622 Mbps and 2.4 Gbps), clearly either CARP or WCCP is the protocol of choice. From the clients' point of view, Cache Digests is most beneficial because Cache Digests achieves the lowest latency. For Application Service Providers (ASPs), WCCP is the protocol of choice because WCCP achieves the best cost/performance ratio with respect to cache utilization. Sensitivity studies show that temporal locality and low cache utilization due to low participation in cooperative Web caching have a considerable impact on the performance of individual protocols.

The paper is organized as follows. A brief description of the considered protocols is provided in Section 2. The simulation environment for evaluating the considered protocols for cooperative Web caching is introduced in Section 3. In Section 4, we present performance curves for the considered protocols derived from the trace-driven simulator. Finally, concluding remarks are given.

2 Protocols for Cooperative Web Caching

The aim of Web proxy caching lies in reducing both document retrieval latency and network traffic. Cooperative Web caching means that if a requested document is not contained in the queried Web cache, other caches are queried first before the document is downloaded from the origin Web server. Protocols for cooperative Web caching can be categorized as message-based protocols, directory-based protocols, hash-based protocols, and router-based protocols. Message-based protocols define a query/response dialog for exchanging information about cached content. An example of a message-based protocol is the Internet cache protocol, ICP. Directory-based protocols summarize information into frequently exchanged directories. Cache Digests constitutes an example of a directory-based protocol. Hash-based protocols, e.g., the cache array routing protocol, CARP, employ hash functions to distribute the URL space among cooperating Web caches. Router-based protocols intercept Web traffic on the IP layer and redirect it to a caching proxy.
An example of a router-based protocol constitutes the Web cache coordination protocol, WCCP. In this section, we recall the major functionality of these protocols for cooperative Web caching in a mesh of loosely coupled Web proxy caches. Most protocols include further functionality, e.g., for dynamic reconfiguration of the caches or fault tolerance. This functionality is beyond the scope of our study and is omitted in the description. We describe the actions performed by a cache after a miss for a client request. We call the queried Web cache the target cache; all other caches in the mesh are denoted as peer caches [4].

ICP is an application layer protocol based on the user datagram protocol/internet protocol, UDP/IP. It was originally introduced by the Harvest Web cache for coordinating hierarchical Web caching. ICP defines a set of lightweight messages used to exchange information about cached documents among caches. If a target cache suffers a miss on a client request, it queries its configured peers using a special query message. A peer cache replies with a hit message if it holds the requested document. Otherwise, the peer cache sends a miss message. The target cache waits until either a hit is received or all configured siblings have reported a miss. Since ICP is based on UDP, messages are not reliably delivered. A timeout prevents a cache from waiting infinitely for lost messages. The target cache fetches the document from the first peer cache signaling a hit. If no peer cache signals a hit, the document is fetched from the origin Web server or a configured parent cache. The document is stored locally at the target cache and delivered to the client. Note that with ICP, multiple copies of a particular document may be stored in a cache mesh. ICP can adapt to network congestion and cache utilization by locating the peer cache which sends the fastest response.
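The miss handling of the Internet cache protocol (ICP) described above can be sketched in a few lines. This is a simplified, sequential sketch (real ICP queries all peers in parallel over UDP); the peer data structure, the timings, and the function name are illustrative assumptions, not part of the protocol.

```python
# Sketch of ICP miss handling: on a local miss, the target cache queries its
# configured peers and fetches from the first peer reporting a hit, falling
# back to the origin server (or a parent cache) if all peers miss or the
# timeout expires. Sequential querying stands in for parallel UDP queries.

def icp_lookup(url, peers, timeout_ms=2000):
    """Return ('peer', name) for the first peer reporting a hit,
    or ('origin', None) if all peers miss or the timeout is reached."""
    waited = 0.0
    for peer in peers:
        reply_delay = peer.get("rtt_ms", 10.0)  # UDP query/response delay
        if waited + reply_delay > timeout_ms:
            break  # timeout guards against lost UDP messages
        waited += reply_delay
        if url in peer["cache"]:
            return ("peer", peer["name"])  # ICP hit: fetch from this peer
    return ("origin", None)  # all peers missed: fetch from origin server

peers = [
    {"name": "cache2", "rtt_ms": 8.0, "cache": {"http://a/x"}},
    {"name": "cache3", "rtt_ms": 12.0, "cache": set()},
]
print(icp_lookup("http://a/x", peers))  # ('peer', 'cache2')
print(icp_lookup("http://a/y", peers))  # ('origin', None)
```

Note that the timeout check models the disadvantage discussed next: on a miss in the whole mesh, the target cache pays for waiting on all peers before contacting the origin server.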
The disadvantage of ICP lies in introducing additional latency on a cache miss due to waiting until replies from all peer caches are received or the timeout is reached.

Cache Digests defines a protocol based on the hypertext transfer protocol, HTTP. As ICP, Cache Digests operates on the application layer of the TCP/IP protocol stack. Cache Digests was introduced by Rousskov and Wessels and is implemented in the Squid Web cache [9]. Cache Digests enables caching proxies to exchange information about cached content in compact form. A summary of the documents stored by a particular caching proxy is coded by an array of bits, a so-called Bloom filter. To add a document to a Bloom filter, a number of hash functions are calculated on the document URL. The results of the hash functions specify which bits of the bit array have to be turned on. On a lookup for a particular document, the same hash functions are calculated and the specified bits are checked. If all bits are turned on, the document is considered to be in the cache. With a certain probability, a document is erroneously reported to be in the cache. This probability depends on the size of the Bloom filter. Cooperating caching proxies build digests as summaries of locally cached content. Each target cache requests the digests of all peer caches on a regular basis. If a target cache misses on a client request, it searches for the requested document in the digests of its peer caches. If the requested document is reported to be cached by any digest, the document is requested from the corresponding peer cache. Otherwise, the document is fetched from the origin Web server. In both cases, the target
cache stores a copy of the requested document and the document is delivered to the client. Note that as in ICP, multiple copies of a particular document may be stored in a cache mesh. An advantage of Cache Digests lies in only generating network traffic on the digest exchange and not on each document request. The disadvantage of Cache Digests lies in causing false cache hits. False hits can occur in two ways: First, a digest reports that a particular document is cached, but the document has already been evicted since the generation of the digest. Second, a Bloom filter may report a hit although the document is not in the cache. The number of false hits influences the performance of a cache mesh by causing unnecessary network traffic [9].

CARP also constitutes a cooperative Web caching protocol on the application layer of the TCP/IP protocol stack. CARP is based on HTTP and was introduced by Valloppillil and Ross [8]. As Cache Digests, CARP is based on hash functions. The version of CARP we consider in Section 4 implements hashing inside the caches. Thus, the considered version of CARP does not require changing the client software. Note that another version of CARP is implemented such that hashing is performed in the client software [8]. Each cache in the cache mesh stores a unique hash function, which maps the URL space to a hash space. Each element of the hash space is associated with a particular caching proxy. If a target cache receives a client request, it calculates the hash key of the document URL. The result is combined with the hash keys of the names of all caches in the mesh. The request is forwarded to the cache which achieves the smallest value of the combined hash key. We refer to this cache as the selected cache. The selected cache can be any member of the cache mesh, including the target cache. On a hit, the selected cache delivers the document to the target cache if both are different. On a miss, the selected cache retrieves the document from the origin Web server, stores a copy, and delivers it to the target cache.
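The cache selection of the cache array routing protocol (CARP) described above can be sketched as follows. Following the text, the cache with the smallest combined hash key is selected (picking the largest value instead would work equally well, as long as all caches agree); MD5 truncated to 32 bits is used here as a stand-in for CARP's arithmetic hash, and all names are illustrative.

```python
import hashlib

# Sketch of CARP-style cache selection: combine the hash key of the URL with
# the hash key of each cache name and forward the request to the cache with
# the smallest combined value. Every cache computes the same selection, so
# each document lives at exactly one cache in the mesh.

def combined_key(url, cache_name):
    digest = hashlib.md5((cache_name + url).encode()).digest()
    return int.from_bytes(digest[:4], "big")  # 32-bit stand-in hash

def select_cache(url, caches):
    return min(caches, key=lambda name: combined_key(url, name))

caches = ["cache1", "cache2", "cache3", "cache4", "cache5"]
# The same URL always maps to the same selected cache:
print(select_cache("http://example.org/index.html", caches))
```

Because the selection depends only on the URL and the cache names, adding or removing a cache remaps only the fraction of the URL space owned by that cache, which is the main motivation for this combined-hash scheme.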
The target cache delivers the document to the client. Note that the target cache does not store a copy of the document. Each document is stored only at the selected cache. This unifies several physically distributed caching proxies into one logical cache. The advantage of CARP lies in not wasting cache memory for replicated documents. As a disadvantage, CARP employs expensive HTTP communication among the caching proxies on each request.

WCCP is a router-based protocol on the network layer of the TCP/IP protocol stack. Recently, the specification of WCCP v2.0 has been made publicly available in an Internet draft [3]. WCCP takes the responsibility for the distribution of requests away from the caches. It allows a Web cache to join a service group. A service group consists of one or more caches and one or more routers. The routers of a service group intercept IP packets of HTTP requests sent from a client to the origin Web server. The packets are redirected to a particular caching proxy in the service group. As in CARP, a hash key chooses the caching proxy. Opposed to CARP, in general the router does not know the complete URL of a requested document. Therefore, it uses the IP address of the destination of intercepted packets as input for the hash function, i.e., the IP address of the origin Web server holding the requested document. The result of the hash function yields an index into a redirection hash table. This redirection hash table provides the IP address of the Web cache to which the request is forwarded. Due to hashing based on IP addresses, all objects of a particular site are serviced by the same caching proxy. This constitutes the main difference to the version of CARP we consider. The selected cache services the request from cache storage or the origin Web server. The requested document is delivered directly to the client by the cache. Note that as in CARP, at most a single copy of a document is stored in a local cache mesh.
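The redirection performed by the Web cache coordination protocol (WCCP) described above can be sketched as follows. The concrete shift/XOR hash, the table size of 256 buckets, and the round-robin bucket assignment are illustrative assumptions for this sketch; the actual WCCP hash and assignment method are defined in [3].

```python
# Sketch of WCCP-style redirection: the router hashes the destination IP
# address of an intercepted packet into a redirection hash table that maps
# hash buckets to the caching proxies of the service group. Hashing on the
# server IP means all objects of one site go to the same proxy.

BUCKETS = 256

def ip_hash(ip):
    h = 0
    for octet in map(int, ip.split(".")):
        h = ((h << 3) ^ octet) & 0xFF  # cheap shift/XOR hash, router-friendly
    return h

def build_table(caches):
    # assign hash buckets round-robin to the caches of the service group
    return {b: caches[b % len(caches)] for b in range(BUCKETS)}

table = build_table(["cache1", "cache2", "cache3"])

def redirect(dst_ip):
    return table[ip_hash(dst_ip)]

# All requests to the same origin server IP reach the same caching proxy:
print(redirect("192.0.2.10") == redirect("192.0.2.10"))  # True
```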
As a further advantage, WCCP does not increase the load on caching proxies by distributing documents via the routers. As a disadvantage, WCCP increases the load on routers due to the calculation of hash functions and packet redirection. As mentioned above, we use implementations that do not require changes in user agents. That is, the client has to be configured to issue document requests to a statically assigned caching proxy or the origin Web server.

3 Simulation Environment

3.1 Network Model

In our experiments, we investigate the performance of cooperative caching for caching proxies distributed across the backbone network of a national Internet service provider (ISP). We assume that caches are placed at five access routers, which connect one or more regional networks to the backbone. Clients of a national cache consist of user agents, institutional caches, and regional caches, which are directly connected to the associated access router via institutional and regional networks. Our simulator comprises several building blocks, which can be utilized to assemble arbitrary configurations for backbone networks. These building blocks are CSIM simulation components modeling virtual links, access routers, clients, caches, and Web servers. Figure 1 illustrates the network configuration used for the performance studies presented in Section 4. All parameters of the network model are shown in Table 1.
Figure 1. Model of network configuration with virtual links and aggregated clients

Virtual links. The simulator represents connectivity of basic components by virtual links. As a simplifying assumption, we do not represent each physical link in the local national backbone network. A virtual link connects each pair of access routers. Each virtual link is associated with a round trip time RTT, i.e., the time needed by an IP packet to be transmitted from the source router of the virtual link to the destination and back. Furthermore, a packet loss probability P_loss is assigned to each link. This is the probability that a packet is lost while being transmitted on a virtual link. Virtual links are used to transmit two types of messages. The first type are ICP messages, which use UDP/IP as the underlying transport protocol. We assume an ICP message to get lost with probability P_loss and to get delayed with a mean of RTT/2 otherwise. The second type of transmission is an HTTP request/response dialog. HTTP uses the transmission control protocol/internet protocol, TCP/IP, as the underlying transport protocol. To capture the mean delay introduced by TCP/IP, we employ the analytical model introduced in [2]. This model takes RTT and P_loss as well as some parameters of the TCP/IP protocol stack together with the size of the transmitted message as input for deriving the mean delay for connection setup and data transmission. To model the request part of HTTP, we calculate the mean time for connection setup as well as the transmission time for the request in TCP's slow start mode. For the response, we assume the mean delay for document transmission to consist of the slow start phase and a following steady state transmission phase if the document is large enough. We make the assumption that RTT and P_loss are not significantly influenced by traffic originating from inter-cache communication, i.e., they are constant during the simulation run. This assumption holds for inter-cache traffic in most national backbone networks because of low cache utilization. To evaluate cooperative Web caching for emerging network technologies, we need to configure virtual links for different bandwidths available in the local backbone network. In our experiments, we examined network bandwidths of 34 Mbps, 155 Mbps (STM-1 connection), 622 Mbps (STM-4 connection), and 2.4 Gbps (STM-16 connection). We estimate round trip times for links offering different bandwidths. For 155 Mbps and 622 Mbps bandwidth, we assume average round trip times of 15 msec. and 10 msec., respectively. These delays are observed in current Gigabit networks. To estimate round trip times for 34 Mbps and 2.4 Gbps connections, we employ linear regression. From the measured delays, we observe that increasing network bandwidth by a factor of four approximately decreases the average round trip time by 5 msec. Thus, we assume average round trip times of 20 msec. and 5 msec. for 34 Mbps and 2.4 Gbps connections, respectively.

Table 1. Parameters of the simulator
Parameter  Description                                               Value
RTT        Average round trip time in local backbone network         varying
P_loss     Packet loss probability in local backbone network         0.01 %
T_HTTP     Time for processing of an HTTP connection                 3.0 msec.
T_Cache    Time for cache lookups/updates (0.5*T_HTTP)               1.5 msec.
T_ICP      Time for processing an ICP query/response (0.1*T_HTTP)    0.3 msec.
T_CARP     Time for calculation of destination cache (0.2*T_HTTP)    0.6 msec.
T_WCCP     Time for calculation of target IP address and processing
           encapsulated IP packets (0.1*T_HTTP)                      0.3 msec.

Local Web Caches. Web proxy caches are connected to the access routers connecting regional networks to the local national backbone.
The simulator implements the basic functionality of a caching proxy similar to that described in [12]. Each cache implements a cache manager using the LRU replacement scheme and a CPU server servicing requests in a FIFO queue. The cache manager can be configured to
provide the functionality of ICP, Cache Digests, or CARP as described in Section 2. The core functionality of WCCP is implemented in the routers and, therefore, is not part of the cache manager. The simulator does not represent modifications of Web documents. Thus, the cache manager does not expire documents as real Web caches do. Therefore, the simulator slightly overestimates the amount of traffic saved by cooperative Web caching. The service time of the CPU server depends on the operation performed. For processing an HTTP request, time T_HTTP is required. The time for cache lookups and updates is denoted by T_Cache. Following [12], we assume that T_Cache = 0.5*T_HTTP. A single ICP query or response is processed in time T_ICP. It holds T_ICP < T_HTTP, since ICP messages are short and simple UDP transmissions [10]. Therefore, we assume that T_ICP = 0.1*T_HTTP. For CARP, the calculation of hash functions takes time T_CARP. The hash function defined by CARP uses several arithmetic operations such as 32-bit multiplication [8]. Therefore, we assume that hashing is more expensive than processing ICP messages, i.e., T_CARP = 0.2*T_HTTP. We assume that the calculation of an entry of a cache digest based on hash functions also takes the delay T_CARP. Requesting and sending digests is performed by HTTP and takes the delay T_HTTP. The delay T_HTTP is determined experimentally as follows. Following [12], we perform simulation runs for various values of T_HTTP and monitor the utilization of the simulated caches for each value of T_HTTP. Since the correct cache utilization has also been measured during trace recording, the values for cache utilization corresponding to different values of T_HTTP are used to calibrate the delay T_HTTP. This calibration yields T_HTTP = 3.0 msec.

Access routers. We refer to a router connecting one or more regional networks to the national backbone network as an access router. Access routers receive transmissions from a client, Web cache, or link component and pass them to another Web cache, client, or link.
Because all network transmission delays are included in the round trip time associated with virtual links, an access router does not further delay a transmission. The transmissions redirected by WCCP constitute an exception to this. In WCCP, an access router has to calculate the new target cache of a client request using a hash function and encapsulate the IP packet for redirection. T_WCCP denotes the combined time needed for these operations. Opposed to CARP, the hash function defined by WCCP is based on simple bit shifting and XOR operations, which can be easily calculated by the router [3]. Redirection is done by rewriting an IP packet with an additional header, which is also a fast operation. Therefore, it holds T_WCCP < T_CARP, and we assume T_WCCP = 0.1*T_HTTP.

Clients. Several user agents, institutional, and regional caches can be connected to an access router via local and regional networks. The comparative simulation study presented in Section 4 focuses on distributed caching in a national backbone network. Thus, we refer to all these sources of requests as clients. We assume that all clients are configured to peer with the closest national cache, i.e., the cache located at the access router connecting their regional network to the national backbone network. We aggregate all clients connected to one access router into one clients component. The clients component is assumed to be connected to the access router via a client link. Client links are used to transmit HTTP responses and introduce the same delay to a transmission as a virtual link does. The clients component issues requests to the caching proxy located at the same router. The request stream consists of the aggregated request streams of all clients located in the lower level networks connected to the central router. The request interarrival times are given by the timestamps of the requests recorded in the trace. A detailed description of how client request streams are obtained from the trace files is given in Section 3.2.
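The cache manager described in the Local Web Caches paragraph above (LRU replacement plus a CPU server with per-operation costs) can be sketched as follows, using the calibrated values T_HTTP = 3.0 msec. and T_Cache = 0.5*T_HTTP. The class name, the byte-based capacity, and the busy-time tally are illustrative assumptions, not the simulator's actual interface.

```python
from collections import OrderedDict

# Sketch of the simulated cache manager: LRU replacement over documents,
# with each request charged T_HTTP for HTTP processing plus T_Cache for the
# cache lookup/update, accumulated as CPU busy time.

T_HTTP = 3.0            # msec. per HTTP request (calibrated value)
T_CACHE = 0.5 * T_HTTP  # msec. per cache lookup/update

class LruCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.docs = OrderedDict()   # url -> size, kept in LRU order
        self.busy_ms = 0.0          # accumulated CPU service time

    def request(self, url, size):
        self.busy_ms += T_HTTP + T_CACHE  # process request + cache lookup
        if url in self.docs:
            self.docs.move_to_end(url)    # hit: refresh LRU position
            return True
        while self.used + size > self.capacity and self.docs:
            _, evicted = self.docs.popitem(last=False)  # evict LRU document
            self.used -= evicted
        self.docs[url] = size
        self.used += size
        return False

cache = LruCache(capacity_bytes=100)
print(cache.request("a", 60))  # False (miss, stored)
print(cache.request("b", 60))  # False (miss, "a" evicted)
print(cache.request("a", 60))  # False (miss again: "a" was evicted)
```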
Origin Web Servers in the Remote Backbone Network. In the simulator, all origin Web servers are assumed to be located in a remote national backbone network. This remote network is connected via a low bandwidth connection to the local backbone network, in which distributed Web caching using the considered protocols is explicitly represented. This low bandwidth connection to the remote backbone clearly dominates the delay for receiving a document from its origin Web server in the remote network. To reach the origin Web server, additional delays are introduced by passing the national, regional, and institutional networks beneath the remote backbone network. However, these delays are negligible compared to the delay between two backbone networks. Finally, the utilization of the origin Web server constitutes another important factor for the overall document retrieval latency. The sum of these latencies is denoted as the download time of a Web document. Estimates for these mean delays are derived from the considered traces as described in Section 3.2.
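The TCP transfer-delay estimate used for the virtual links above can be sketched in simplified form. This is not the detailed model of [2]: it ignores packet loss and delayed acknowledgments and only captures connection setup plus slow-start rounds and serialization delay; all parameter values are illustrative.

```python
import math

# Simplified sketch of HTTP-over-TCP transfer delay: one RTT for connection
# setup, slow-start rounds that double the congestion window until all
# segments are sent, plus the serialization delay on the link.

def transfer_delay_ms(size_bytes, rtt_ms, bandwidth_mbps, mss=1460, init_cwnd=1):
    rounds = 1.0  # one RTT for TCP connection setup (SYN/SYN-ACK)
    segments = math.ceil(size_bytes / mss)
    cwnd, sent = init_cwnd, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2          # slow start doubles the window each round
        rounds += 1
    serialization = size_bytes * 8 / (bandwidth_mbps * 1e6) * 1e3  # msec.
    return rounds * rtt_ms + serialization

# e.g. a 10 KB document over a 622 Mbps virtual link with 10 msec. RTT:
print(round(transfer_delay_ms(10_000, 10.0, 622), 2))
```

The sketch already exhibits the behavior discussed in Section 4: for small documents the delay is dominated by RTT-bound slow-start rounds, so raising backbone bandwidth alone shortens only the small serialization term.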
3.2 Characteristics of the Considered Workload

To derive meaningful performance results for distributed Web caching in a national backbone network, we need to feed our simulator with representative workloads collected at top-level caches. The performance studies presented in Section 4 are based on traces from log files of ten top-level proxy caches of the National Laboratory for Applied Network Research (NLANR) cache hierarchy [6]. At the time of trace collection, 584 individual clients peered with the caches. We evaluated the trace files for a period of one week. Due to space limitations, we present only the results for a single day of the particular week. For all other days recorded in the trace files, similar performance results are achieved. To make the trace files usable for our performance study, some preprocessing has to be performed. First, from the requests recorded in the trace file, we consider only GET requests to cacheable documents. We exclude uncacheable documents by commonly known heuristics, e.g., by looking for the strings "cgi" or "?" in the requested URL. From the remaining requests, we consider responses with HTTP status codes 200 (OK), 203 (Non-Authoritative Information), 206 (Partial Content), 300 (Multiple Choices), 301 (Moved Permanently), 302 (Found), and 304 (Not Modified) as cacheable. To determine the correct size of data transmissions with status code 304, we scanned the trace in a two-pass scheme. Furthermore, we exclude all log file entries indicating communication between the caches, because the simulator performs all inter-cache communication itself. Trace characteristics after preprocessing are shown in Table 2. Recall that the modeled network configuration contains five cooperating Web caches. Following [11], the overall request stream recorded in a measured trace is divided into fractional request streams issued by individual clients (i.e., 584 request streams for NLANR).
Subsequently, these fractional request streams are agglomerated into five subtraces, one for each aggregated clients component of Figure 1, such that the total number of requests issued by clients is approximately equal for each subtrace. Table 3 states the properties of the subtraces for NLANR.

Table 2. Properties of traces measured in DFN and NLANR
Trace                       NLANR
Date                        December 6, 2000
Duration (hours)            24
Number of Web documents     1,935,226
Overall document size (GB)  —
Number of requests          3,361,744
Requested data (GB)         —
Average requests/sec        —
Number of clients           584

To estimate the mean delays for retrieving documents from the origin Web server, we consider requests recorded with a TCP_MISS/200 status written by the Web proxy cache Squid. The transmission of a document from a cache to the client can be widely overlapped with the delay for transmitting the document from the origin Web server to the cache. Therefore, the time needed by a top-level cache to retrieve a document from the origin Web server and deliver it to the client is a close approximation of the download delay specific for the document. If a document appears in several log file entries with status code TCP_MISS/200, we calculate the download delay as the mean value of the recorded time intervals. By scanning log files for an entire week, we were able to determine download delays for about 98% of the requested documents. The important factor for the performance of cooperative Web caching in the backbone of a national ISP is the ratio between the document transmission time in the local backbone network and the document download time from the origin Web server. Table 4 provides numerical results for the average download times derived from the NLANR traces as well as mean document transmission times for different network bandwidths. These mean document transmission times are derived by the performance modeling for TCP latency of [2].
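The trace preprocessing and client partitioning described above can be sketched as follows. The status-code set and URL heuristics follow the text; the greedy balancing strategy and all function names are illustrative assumptions, since the paper follows [11] without spelling out the exact partitioning algorithm.

```python
# Sketch of trace preprocessing: keep only GET requests for cacheable
# documents (no "cgi" or "?" in the URL, cacheable status code), then split
# the per-client request streams into five subtraces with approximately
# equal request counts, one per aggregated clients component.

CACHEABLE_STATUS = {200, 203, 206, 300, 301, 302, 304}

def is_cacheable(method, url, status):
    if method != "GET" or status not in CACHEABLE_STATUS:
        return False
    return "cgi" not in url and "?" not in url

def partition_clients(requests_per_client, n_subtraces=5):
    """Greedily assign each client (largest request count first) to the
    currently smallest subtrace so request totals stay roughly equal."""
    bins = [[] for _ in range(n_subtraces)]
    totals = [0] * n_subtraces
    for client, count in sorted(requests_per_client.items(),
                                key=lambda kv: -kv[1]):
        i = totals.index(min(totals))
        bins[i].append(client)
        totals[i] += count
    return bins, totals

print(is_cacheable("GET", "http://a/doc.html", 200))   # True
print(is_cacheable("GET", "http://a/cgi-bin/x", 200))  # False
print(is_cacheable("POST", "http://a/doc.html", 200))  # False
bins, totals = partition_clients({f"c{i}": 10 + i for i in range(20)}, 5)
print(len(bins))  # 5
```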
From Table 4, we conclude that for a bandwidth of 622 Mbps in the local backbone network, the ratio between the document transmission time and the document download time is about 1/15 for NLANR.

4 Comparative Performance Study of Considered Protocols

4.1 Bandwidth Consumption

In a first experiment, we study bandwidth consumption in the cache mesh of the local backbone network for the considered cooperative Web caching protocols. Figure 2 plots the amount of saved traffic, i.e., the volume of requests served from inside the cache mesh, as a function of individual cache size for the NLANR trace. The results show that without cooperative Web caching an amount of 26 GB is saved, i.e., 39% of the overall requested data.
Table 3. Properties of DFN and NLANR subtraces for modeling aggregated clients
Trace                        NLANR
Average number of requests   672,349
Average requested data (GB)  —
Average requests/sec         —

Figure 3 plots the inter-cache traffic as a function of cache size for the NLANR trace. Note that without cooperative Web caching, no inter-cache communication occurs. For ICP and CARP, we observe that the amount of inter-cache communication is about 80% of the total amount of requested data. Cache Digests introduces overhead traffic for every cache miss in a target cache [9] and therefore generates medium inter-cache traffic. WCCP causes the smallest amount of inter-cache communication because WCCP does not introduce communication on every request as CARP or on every miss as ICP. We conclude from Figures 2 and 3 that hash-based schemes such as CARP and WCCP are the protocols of choice for ISPs if sufficient bandwidth is available inside the local backbone.

4.2 Document Retrieval Latency

We provide a comparison of the latencies introduced by the protocols for increasing cache sizes. The experiments are performed for bandwidths of 155 Mbps and 2.4 Gbps in the local backbone. Performance curves for the experiments are shown in Figures 4 and 5. We observe that ICP shows the worst performance of all protocols, while Cache Digests yields the lowest latency. We conclude that from the client's point of view, Cache Digests is the protocol of choice. Comparing the results for bandwidths of 155 Mbps and 2.4 Gbps, we observe that the impact of bandwidth availability in the local backbone network on document retrieval latency is rather low. In order to further illustrate this effect, in Figure 6 we keep the relative cache size fixed to 15% and investigate document retrieval latency for network bandwidths of 34 Mbps, 155 Mbps, 622 Mbps, and 2.4 Gbps.

Table 4. Document transmission and document download times derived from trace data
Bandwidth  Document transmission time in local backbone network  Document download time (NLANR)
2.4 Gbps   — msec.                                               — msec.
622 Mbps   — msec.                                               — msec.
155 Mbps   263,28 msec.                                          — msec.
34 Mbps    — msec.                                               — msec.
We observe that document retrieval latency scales linearly when bandwidth availability in the backbone network scales by a factor of four. From Figure 6, we conclude that the availability of additional bandwidth in the local backbone network has little impact on document retrieval latency. Currently, document retrieval latency is dominated by the document download time observed in the traces. To show the impact of the ratio between document download time and document transmission time, Figure 7 presents a sensitivity study. From this study, we conclude that reducing download delays is a more effective approach for reducing client latency than cooperative Web caching.

4.3 Cache Utilization

Figure 8 plots the average cache utilization as a function of cache size for the different protocols. Without cooperative Web caching, low cache utilization is achieved because besides a constant number of client requests, no further messages are generated. Opposed to this, ICP yields the highest utilization due to its overhead on a cache miss. WCCP offers the lowest load to the caches due to balancing requests in the routers. From Figure 8, we also observe that the NLANR trace offers a moderate load to the caches. To investigate protocol performance for increasing request rates, we perform a sensitivity study shown in Figure 9. The relative cache size is kept fixed to 15%, the network bandwidth is fixed to 622 Mbps, and the request interarrival times in the traces are varied. We find that the relative order in cache utilization between the protocols does not change with increasing workload. ICP achieves the highest cache utilization because of its overhead on each miss. WCCP achieves even better scalability than no cooperative Web caching because of its load balancing among the caches. To ensure scalability, WCCP is clearly the protocol of choice.

Conclusion

We presented a comprehensive performance study of the cooperative Web caching protocols ICP, Cache Digests, CARP, and WCCP.
We investigated the performance of the individual protocols from the viewpoints of ISPs, clients, and Internet enterprises such as ASPs. Consequently, as performance measures we considered bandwidth consumption, user latency, and cache utilization. Recall that ISPs are most interested in the amount of saved traffic and in protocol efficiency. The curves presented in Figures 2 and 3 indicate that ISPs clearly benefit from cooperative Web caching. For currently operating backbone networks having
Figure 2. Bandwidth consumption of different protocols; saved traffic; 622 Mbps bandwidth
Figure 3. Bandwidth consumption of different protocols; inter-cache traffic; 622 Mbps bandwidth
Figure 4. Document latency of different protocols; 155 Mbps bandwidth
Figure 5. Document latency of different protocols; 2.4 Gbps bandwidth
Figure 6. Document latency for different bandwidth availability (34 Mbps to 2.4 Gbps)
Figure 7. Sensitivity study: latency vs. transmission/download ratio; 622 Mbps bandwidth
Figure 8. Cache utilization versus cache size; 622 Mbps bandwidth
Figure 9. Sensitivity study: cache utilization vs. request rate; 622 Mbps bandwidth
a bandwidth of 155 Mbps, Cache Digests is the protocol of choice because of its high protocol efficiency. For future backbone networks having a bandwidth of 2.4 Gbps, the hash-based protocols are superior to Cache Digests: they yield more saved traffic, while their lower protocol efficiency no longer matters. Recall that ASPs are most interested in low cache utilization because this implies good cost/performance trade-offs. As illustrated in Figures 8 and 9, ASPs benefit from cooperative Web caching in the case of future backbone networks providing bandwidths of 622 Mbps or 2.4 Gbps. In this case, WCCP yields the lowest cache utilization of all protocols, lower even than no cooperative Web caching. Recall that clients are most interested in low document retrieval latency. We observe only small differences between the document retrieval latencies achieved by the individual protocols. Moreover, these latency curves lie in the neighborhood of the latency curve for no cooperative Web caching. Thus, clients gain little benefit from cooperative Web caching, regardless of which protocol is employed. As illustrated in a sensitivity study, clients will benefit most from a reduction of document download time relative to transmission time. Therefore, providing higher bandwidth on links to remote backbone networks or deploying a content delivery network are considerably more effective approaches to reducing document retrieval latency than cooperative Web caching. Furthermore, we found that low cache utilization due to low participation in cooperative Web caching has a considerable impact on the performance of the individual protocols.

References

[1] CA*net 3, Canada's Advanced Internet Development Organization.
[2] N. Cardwell, S. Savage, and T. Anderson, Modeling TCP Latency, Proc. 20th Conf. on Computer Communications (IEEE INFOCOM), Tel Aviv, Israel, 2000.
[3] M. Cieslak, D. Foster, G. Tiwana, and R. Wilson, Web Cache Coordination Protocol v2.0, IETF Internet draft.
[4] I. Cooper, I. Melve, and G.
Tomlinson, Internet Web Replication and Caching Taxonomy, IETF Internet draft, 2000.
[5] Internet2, University Corporation for Advanced Internet Development.
[6] NLANR, National Laboratory for Applied Network Research, nlanr.net/
[7] P. Rodriguez, C. Spanner, and E.W. Biersack, Analysis of Web Caching Architectures: Hierarchical and Distributed Caching, IEEE/ACM Trans. on Networking, 2001.
[8] K.W. Ross, Hash Routing for Collections of Shared Web Caches, IEEE Network, 11, 1997.
[9] A. Rousskov and D. Wessels, Cache Digests, Computer Networks and ISDN Systems, 30, 1998.
[10] D. Wessels and K.C. Claffy, ICP and the Squid Web Cache, IEEE Journal on Selected Areas in Communications, 16, 1998.
[11] A. Wolman, G. Voelker, N. Sharma, N. Cardwell, A. Karlin, and H. Levy, On the Scale and Performance of Cooperative Web Proxy Caching, Proc. 17th ACM Symp. on Operating Systems Principles (SOSP), Kiawah Island, SC, 1999.
[12] K.-L. Wu and P.S. Yu, Latency-sensitive Hashing for Collaborative Web Caching, Proc. 9th World Wide Web Conference, Amsterdam, The Netherlands, 2000.