Caching Dynamic Content with Automatic Fragmentation

Ikram Chabbouh and Mesaac Makpangou
INRIA Rocquencourt, Team Regal, France
ikram.chabbouh@inria.fr, mesaac.makpangou@inria.fr


Abstract. In this paper we propose a fragment-based caching system that aims at improving the performance of Web-based applications. The system fragments dynamic pages automatically. Our approach consists in statically analyzing the programs that generate the dynamic pages rather than their output. This approach has the considerable advantage of minimizing the overhead due to fragmentation. Furthermore, we propose a mechanism that increases the reuse rate of the stored fragments, so that, among other benefits, the site response time can be improved. We validate our approach by using TPC-W as a benchmark.

1. Introduction

As Web-based applications become increasingly popular, maintaining an acceptable level of performance for these applications becomes critical to businesses. In particular, site response time and availability are key aspects of online satisfaction. Typically, the Web pages that support online applications are computed dynamically. This means that the delays experienced by users are directly affected by server performance and not simply by download times. Moreover, more requests are made to the servers, and the magnitude of user demand often outstrips server capacity. As a result, users may be denied access to the server, or access may become unacceptably slow. Caching is currently the primary mechanism used to reduce server load as well as users' observed latency and bandwidth consumption, but caching dynamic pages requires specific techniques since, by their very definition, dynamic pages are not supposed to be cacheable. One approach to caching dynamic pages is fragment-based caching, which consists in treating a page as a container holding distinct objects (called fragments) with heterogeneous characteristics. Recent work on dynamic content caching has proven the advantages of fragment-based schemes ([2], [5]). It goes without saying that the efficiency of a fragment-based caching system is conditioned by the relevance of the fragments it computes.

In this paper, we propose a fragment-based caching system that aims at maximizing the reuse of stored fragments in order to lighten the application server's burden and minimize the generation delay of the pages. Our system has two distinct functionalities. The first is to fragment the pages of a site automatically, and the second is a proxy-cache functionality that takes advantage of the defined fragments to answer clients' requests. The emerging studies on the automatic detection of fragments tend to run the programs that generate the dynamic pages several times with the same parameters, in order to infer properties that are used to fragment the pages, whereas our approach consists in statically analyzing these programs rather than their output. This approach is more rigorous, insofar as no approximation or assumption is made to determine the fragments. Moreover, our approach has the considerable advantage of operating once and off-line on the parent of the generated pages rather than on each instance of the execution, so that no redundant processing is done and no extra traffic is generated on the server. Finally, unlike the other approaches, where the whole page has to be regenerated whenever a part of it changes, the criteria retained to select the fragments make it possible to request them separately.
In order to evaluate and prove the effectiveness of the proposed caching system, we conducted a set of experiments using the TPC-W Benchmark, which is the most popular benchmark for e-commerce applications.

The remainder of the paper is organized as follows: Section 2 delves deeper into the fragmentation issue and covers related work. Section 3 explains the underlying principles of our system. Section 4 presents experimental results obtained with the TPC-W Benchmark. Finally, in Section 5 we give our concluding remarks and raise some open questions related to our future work.

2. Background

A dynamic Web page written in a scripting language (see Figure 1) typically consists of a number of code blocks, each of which performs some work (such as retrieving or formatting content) and produces an HTML fragment as output [4]. A write-to-out statement, which follows each code block, places the resulting HTML fragment in a buffer.

Figure 1. Dynamic scripting process
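To make the structure of Figure 1 concrete, the following minimal PHP page is our own illustrative sketch (not taken from the paper): each code block computes some content, and the echo statements play the role of the write-to-out instructions that append the resulting HTML fragments to the output buffer. The navigation component, the ad component and the 'category' parameter are invented for the example.

<?php
// Illustrative sketch only: a dynamic page as a sequence of code blocks,
// each followed by write-to-out statements (echo) that emit an HTML fragment.

echo "<html><body>";                                   // static fragment

// Code block 1: navigation component
$sections = array("Home", "Books", "Music");
echo "<ul>";
foreach ($sections as $s) {
    echo "<li><a href=\"/" . strtolower($s) . ".php\">" . $s . "</a></li>";
}
echo "</ul>";                                          // write-to-out for fragment 1

// Code block 2: ad component, whose output depends on a request parameter
$category = isset($_GET['category']) ? $_GET['category'] : 'default';
echo "<div class=\"ad\">Ad for category: " . htmlspecialchars($category) . "</div>";

echo "</body></html>";                                 // static fragment
?>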

Because of personalization and data-freshness aspects, dynamic pages are unlikely to be fully reused by the cache, so fragmentation turns out to be the best way to ensure the reuse of cached entries. The idea behind fragmentation is to isolate parts of the dynamic page that exhibit potential benefits and thus are cost-effective as cache units [8]. A fragment is then a part of the generated HTML which does not necessarily correspond to a logical entity. Caching at the fragment level aims to achieve several benefits, such as the reduction of server load, bandwidth consumption and cache storage space. In this paper we will focus on server CPU time and bandwidth consumption, as they both directly affect the latency observed by clients. Server CPU time and bandwidth consumption are reduced when the response can be retrieved from the cache instead of asking the original server for it again. In the particular context of dynamic pages, CPU time is saved if, instead of generating the whole page, the server only has to generate the missing and stale fragments of documents already cached. The same is true for bandwidth consumption, as this parameter is reduced if only the missing and stale fragments of a cached document are sent rather than the entire document. Therefore, to increase efficiency, the server should be aware of the fragment entity and should be able to serve fragments separately, which is seldom the case for existing sites.

2.1. Related work

Existing fragment-based caching solutions rely on different hypotheses as regards the initial structure of the site. Several studies assume that the pages of the site are already fragmented, which generally implies either that the site is constructed with specific tools that make it possible to create and handle fragments ([10], [9]), or that the administrator does the fragmentation manually ([11], [3]). The first assumption is too restrictive, as existing sites seldom handle fragments originally, while the second is costly, error-prone and simply not scalable. Though there have been considerable efforts to exploit the potential of fragment-based schemes, there has been little research on detecting fragments automatically on existing Web sites. To the best of our knowledge, only two research studies have gone more deeply into the automation of the fragmentation ([7] and [6]). Both studies rely on the generation of a modified HTML tree 1 but differ in the selection criteria for the fragments.

The first study [7] identifies two criteria: i) a fragment is deemed relevant if it is shared among M already existing fragments (where M > 1), and ii) if it has different lifetime characteristics from those of its encompassing candidate fragment. To detect and flag candidate fragments in pages, the study proceeds in three steps [8]. First, a data structure (called an Augmented Fragment Tree) representing the dynamic Web pages is constructed. Second, the system applies the fragment detection algorithm to the augmented fragment trees to detect candidate fragments. The algorithm for detecting shared fragments works on a collection of different dynamic pages generated from the same Web site, whereas the algorithm that detects lifetime characteristics works on different versions of each Web page, obtained by repeatedly submitting a single query to the given Web site. In the third step, the system collects statistics about the fragments (such as size, access rates, etc.), which are meant to help the administrator decide whether to activate the fragmentation or not.

The second study [6] considers the relative size of HTML portions (with regard to the size of the whole page) as the prominent factor in the characterization of fragments. The fragmentation algorithm comprises two phases: training and updating. In the training phase, each Web page is analyzed for a period of time. The training algorithm fetches the latest instance of the page from its corresponding URL, parses it and constructs an HTML tree. The tree is then analyzed in order to produce an index tree which contains particular information on every node in the HTML tree. Finally, the training algorithm analyzes the index tree and calculates which areas of the page will be extracted as Web fragments. The update phase begins after the training phase has been completed. The update algorithm proceeds in the same way as the training algorithm except that, afterwards, the former checks whether there are differences between the index tree structures (of the same page) computed during the two phases. If there is any difference, the algorithm updates the latest instance of the Web page by calculating the new fragments and storing them.

As the two studies proceed in more or less the same way, they share more or less the same disadvantages. First and foremost, both methods request a certain number of versions of the page to be fragmented from the original Web server. This results in at least two drawbacks. The first is the delay required by both systems in order to fragment a Web page, and the second is the large amount of traffic generated on the application server side, which, instead of relieving the servers, puts extra strain on them.
Finally, for both approaches, it is difficult to know when a fragment becomes stale, and even when this is known, it is still difficult to obtain the new version of the fragment, as the application is not aware of the fragments that were calculated.

3. Description of the solution

The objective of our work is to propose a fragmentation that increases the reuse rate of the cached content, in order to lower the number of requests that hit the server and to decrease the generation time of the dynamic pages. We also aim to develop the overall fragment-based caching system relying on the defined fragments, in order to ascertain the benefits of the proposed fragmentation. In Section 3.1 we summarize the fragment selection criteria, then in Section 3.2 we give an overview of the solution and detail the architecture of the system.

1 An HTML tree roughly corresponds to a structure in which the tags present in the page are internal nodes and the visible text strings are leaves.

3.1. Candidate fragments

The fragmentation we propose separates the dynamic content from the static content in dynamic Web pages so that, at least, the static content can be fully reused. This choice is motivated by the observation, made on a large number of dynamic sites, that the redundancy rate of fragments between pages that do not execute the same program is very small. In other words, two pages generated by the same program (even with different parameters) are likely to share many more fragments with each other than with a page generated by a different program. Figure 2 compares the fragment 2 reuse rate between ten different pages randomly accessed on the popular BBC site and ten other distinct pages generated by the same program on the same site.

Figure 2. Distribution of fragments

We can notice, for instance, that for random URLs more than 80% of the fragments belong to fewer than two pages, while for the different instances 3 of the same page, about 70% of the fragments are shared between the ten pages. Thus the fragment reuse rate for the randomly accessed pages is very low, while it is far higher for different instances of the same page.

2 The fragments were calculated by comparing different versions of the same page in order to locate the HTML portions that are likely to correspond to the output of different scripts.
3 We say that two dynamic pages are instances of the same page if they execute the same program, even with different parameters.

This is easily understandable, since pages that are generated by the same program share at least all their static parts; then, depending on the scripts' inputs and the variability of the handled data, their dynamic parts can be partially or totally shared. Therefore, it is interesting to focus on the reuse of fragments between the different instances of a page whenever this is possible. The fragmentation we propose also selects fragments that can be separately fetched from the server because, as explained in Section 2, great benefits can be achieved when the server is aware of the fragment entity and when fragments can be served separately. In order to be separately fetchable, the fragments are likely to correspond to the execution of independent programs; therefore, the identification of fragments simply boils down to the detection of the independent scripts in the code.

3.2. Architecture of the system

As mentioned previously, our caching system has two functionalities: the first is to fragment the dynamic Web pages on the server side, and the second is to deploy the logic of caching and handling fragments outside the application server. The first functionality is performed by a module located on the application server side, while the second is performed by a reverse proxy cache module positioned in front of the Web server. Figure 3 depicts the different entities interacting with the caching system. The fragmentation module takes the dynamic pages of the Web site as input, analyzes them, extracts useful information on the scripts, determines the fragments, then augments the pages with fragmenting instructions. The execution of the augmented dynamic pages by the Web server then results in the generation of fragmented pages containing meta-data about the embedded fragments. The most interesting element in the calculated metadata is what we call the fragment filter. A fragment filter is a piece of information, associated with a script, which is used to produce a unique identifier per different output generated by the script.

Particular attention should be paid to the importance of the filters in our system. In order to increase the stored fragments' reuse rate and to lighten the application server's load, the cache should only ask for the missing fragments of a served page whenever this is possible. In the best-case scenario, the cache would know in advance the fragments it is supposed to ask for when it receives a request. But normally, the attribution of an identifier to a fragment is performed after the generation of the page containing the fragment itself, and thus it is difficult to know in advance which fragments a page contains unless the page has already been generated. As will be explained later, the filters provide a generic characterization of the scripts (i.e., not specific to a particular execution), so that it becomes possible for a cache to calculate the identifiers of the fragments which should be present in a page that has not already been requested, provided that the program generating the page has already been executed with different parameters. The following subsections specify the functioning of the system's components.

The fragmentation module

The fragmentation module statically parses the code of the programs generating dynamic pages, as this code contains the exact set of variables that are actually used, the operations made on them, and the set of database accesses, and there is still a clear separation between the static and the dynamic content (unlike in the generated dynamic pages, in which the code has already been executed and the HTML generated).
When parsing the code, the fragmentation module constructs a semantic tree describing the attributes and the dependencies of the different scripts of the page. It goes without saying that, to do so, the module must handle the scripting language in which the program has been written. This is done by parsers that are intended to abstract the content away from the grammar of the language. The current version of our system only includes a PHP parser. The semantic tree produced is then given as input to a program that analyzes it and extracts different sets of meaningful information for the subsequent caching modules. This information is stored in separate files called configuration files.
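The paper does not give the format of these configuration files; purely as an illustration, an entry for one fragment might record the kind of information discussed below (offsets, local variables, and the environment variables used for the filter). All field names and values in this sketch are our own assumptions:

<?php
// Hypothetical configuration entry for one fragment; the actual file format
// used by the fragmentation module is not specified in the paper.
$fragment_entry = array(
    'page'         => 'SearchResult.php',
    'fragment'     => 'SearchResult_1.php',
    'begin_offset' => 1245,                                  // where the script's code starts in the page
    'end_offset'   => 1812,                                  // where it ends
    'local_vars'   => array('$row', '$result'),              // used to check script independence
    'filter'       => array('SEARCH_TYPE', 'SEARCH_STRING'), // request parameters affecting the output
);
?>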

In this paper, we will only concentrate on the information that is relevant to the fragmentation and the construction of the filters. It should be noted that our automatic fragmentation inserts tagging instructions into the scripts in order to generate the markup that will delimit the fragments and specify the metadata characterizing them. Thus, we first need to know where to insert the tagging instructions. In practice, this requires determining the begin and end offsets of each fragment. Another important piece of information needed to fragment the pages and calculate their filters is the set of variables used by each script. In this context we distinguish between two types of variables: the script's local variables and the environment variables. The first kind of variable is used to determine the page's independent scripts, as two independent scripts must not share the same local variables. The second kind of variable is used to compute the fragment filter. As the filter is meant to characterize the output of a script in a unique way, we propose finding out the parameters of the request that actually affect the generated result. It is important to recall that the output of a script also depends on the updates of the database (if any displayed information is retrieved from a database), but in this paper we will only focus on the first kind of dependency, as the second concerns another requirement of caching dynamic pages (i.e., invalidation of the stored content, which is beyond the scope of our study). Therefore, the relevant parameters are: parameters sent in the GET request, parameters sent in the POST request, cookies, HTTP elements, and CGI elements.

Hence, based on the configuration file created, the fragmentation module selects the fragments and determines their attributes (identifier, filter and subfragments). The final step of the fragmentation then simply consists in augmenting the analyzed programs in order to generate fragmented pages and to enable the application to serve fragments separately. The modification consists of the following actions:

1. strip the body of the fragment from the page, surround it with the appropriate tagging instructions and store it as an independent program in the same directory as the page;
2. replace the occurrence of the fragment in the page by an include instruction referring to it.

Henceforth, a dynamic page is reduced to a template containing the references of the underlying fragments. Let us take the example of Figure 4 to illustrate the fragmentation process. Figure 4 depicts a dynamic page generated by calling a PHP program with two arguments. Our system considers the whole static part of the page as a single fragment, and considers the output of the independent scripts as separate fragments, thereby fragmenting the page as shown in Figure 5. Now let us take the example of Figure 6 to illustrate the principle of filters. As we can see in this figure, the output of fragment Home-1.php depends on the variable C_ID, while the output of the fragment Home-2.php depends on the variable I_ID (both variables are contained in the query string). The filter associated with the fragment Home-1.php consists of a structure containing the set of corresponding labels of the environment variables affecting the script. The fragment key is then constructed by mapping the filter labels to their corresponding values and combining them with the fragment ID (see Section 3.3 for details).
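As a sketch of this mechanism (ours, not the authors' code), a cache could derive a fragment key by mapping each filter label to its current value in the request and combining the result with the fragment ID; the function name and key format below are illustrative assumptions:

<?php
// Sketch: build a fragment key from the fragment ID, its filter labels and
// the parameters of the current request. The key format is an assumption.
function fragment_key($fragment_id, array $filter_labels, array $request_params) {
    $pairs = array();
    foreach ($filter_labels as $label) {                      // e.g. array('C_ID') for Home-1.php
        $value = isset($request_params[$label]) ? $request_params[$label] : '';
        $pairs[] = $label . '=' . $value;                     // map each label to its current value
    }
    return $fragment_id . '?' . implode('&', $pairs);         // combine with the fragment ID
}

// Two requests that differ only in C_ID yield different keys for Home-1.php,
// so they are cached as distinct instances of that fragment.
echo fragment_key('Home-1.php', array('C_ID'), array('C_ID' => '210', 'I_ID' => '1')); // Home-1.php?C_ID=210
echo "\n";
echo fragment_key('Home-1.php', array('C_ID'), array('C_ID' => '300', 'I_ID' => '1')); // Home-1.php?C_ID=300
?>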

Figure 3. Global view of the interactions

Figure 4. Example of a PHP dynamic page

Figure 5. Fragmentation of a dynamic page
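Reconstructed from Figure 5, the augmented page and one of its fragment programs could look roughly as follows. This is a sketch of the mechanism described above (tagging instructions plus include instructions), not the authors' exact generated code, and the filter value shown is an assumption:

<?php
// Augmented SearchResult.php (sketch): the page becomes a template that emits
// the fragment markup and includes each fragment as an independent program.
echo("<Frag name=\"SearchResult.php\" subfrags=\"SearchResult_1.php ...\">");
include("SearchResult_1.php");
include("SearchResult_2.php");
include("SearchResult_3.php");
include("SearchResult_4.php");
echo("</Frag>");
?>

<?php
// SearchResult_1.php (sketch): the stripped fragment body surrounded by its
// tagging instructions; the filter records the request parameter(s) that
// affect its output, assumed here to be SEARCH_TYPE from the query string.
echo("<Frag name=\"SearchResult_1.php\" filter=\"SEARCH_TYPE="
     . (isset($_GET['SEARCH_TYPE']) ? $_GET['SEARCH_TYPE'] : '') . "\">");
// ... body of the original code block (pgm1) ...
echo("</Frag>");
?>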

3.3. Fragment-aware reverse proxy cache

As its name suggests, the fragment-aware reverse proxy cache manipulates fragments as its base entity. To explain the functioning of the reverse proxy cache, we will first illustrate it with a concrete scenario, and then give the general algorithm describing its logic. The fragment-aware proxy cache maintains a map indexed by the accessed URLs and containing the fragments' attributes. Let us assume that the reverse proxy receives the following request as the first request to the home page:

GET /Home.php?C_ID=210&I_ID=1 HTTP/1.1

It initially checks whether it has an entry in the map corresponding to the URL, but as the URL is not already stored, the proxy requests it from the original server. The server sends an augmented response which contains markup delimiting the fragments as well as their metadata (see Figure 6). Upon receiving the response, the proxy cache parses it, extracts the embedded name, filter and subfragments, and stores this information in the map. The entries of the map also have a pointer to a chained list whose nodes describe the different stored instances of the same URL. Each node of the structure contains the actual body (HTML) and the key of the instance (see Figure 7). The key of a fragment is a structure containing the current values of the variables stored in the filter. Now, when the proxy receives the following request:

GET /Home.php?C_ID=300&I_ID=1 HTTP/1.1

it locates an entry in the map corresponding to the URL and, as the template (also called the root fragment) is static, it is reused as it is. Next, the proxy checks the subfragments of the root; for Home.php there are two subfragments, Home-1.php and Home-2.php. The two subfragments also have their own entries in the map, and thus, in order to know whether the required instances are already stored, the proxy checks the filters and finds out that the fragments depend respectively on C_ID and I_ID. Based on the current request parameters and the stored filters, the proxy calculates the keys of the fragments that are to be sent in the response. In this case it finds that there is no stored instance of Home-1.php with the value 300 of C_ID, whereas there is already an instance of Home-2.php with the value 1 of I_ID, so it only asks the original server for the first fragment. It then reconstructs the page and sends the response.

Figure 6. Example of dependency

Figure 7.
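As a sketch of the data structure described above and depicted in Figure 7 (one possible in-memory layout, not necessarily the authors' implementation), the proxy's map could be represented as follows after the two requests of the scenario have been processed:

<?php
// Sketch of the fragment-aware proxy's map: each URL maps to its subfragments,
// its filter, and the list of stored instances (key plus cached HTML body).
$map = array(
    'Home.php' => array(
        'subfrags'         => array('Home-1.php', 'Home-2.php'),
        'filter'           => array(),                                 // the template is static
        'stored_instances' => array(
            array('key_val' => array(), 'body' => '...template HTML...'),
        ),
    ),
    'Home-1.php' => array(
        'subfrags'         => array(),
        'filter'           => array('C_ID'),
        'stored_instances' => array(
            array('key_val' => array('C_ID' => '210'), 'body' => '...HTML...'),
            array('key_val' => array('C_ID' => '300'), 'body' => '...HTML...'),
        ),
    ),
    'Home-2.php' => array(
        'subfrags'         => array(),
        'filter'           => array('I_ID'),
        'stored_instances' => array(
            array('key_val' => array('I_ID' => '1'), 'body' => '...HTML...'),
        ),
    ),
);
?>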

The following simplified algorithm describes the rationale of the process that handles clients' requests in the reverse proxy cache:

handle_request(request) {
    if (!stored_template) {
        request_server(request);
        analyze_response(response);
        store_response(analyze_output);
        send_response(formatted_response);
    } else {  // the template is stored
        fetch_template(url);
        lookup_subfragments(template);
        while (subfragments) {
            if (!stored_URL_fragment) {
                request_server(fragment_name, query_string);
                analyze_response(response);
                store_fragment(analyze_output);
            } else {
                extract_filter(fragment);
                calculate_fragment_key(filter, query_string);
                if (!stored_instance) {
                    request_server(fragment_name, query_string);
                    analyze_response(response);
                    store_fragment(analyze_output);
                }
            }
        }
        reconstruct_page(fragments_body);
        send_response(constructed_response);
    }
}

4. Performance evaluation

In order to validate our approach and prove the benefits of the system, we decided to focus on e-commerce as a particular Web-based application. It therefore seemed natural to turn to TPC-W [1] as a benchmark. TPC-W specifies an e-commerce workload that simulates the activities of a retail store website. Emulated users can browse and order products from the site. Users are emulated via several Remote Browser Emulators (RBEs).

The RBEs can be configured to generate different interaction mixes: the Browsing mix (95% browsing and 5% ordering), the Shopping mix (80% browsing and 20% ordering), and the Ordering mix (50% browsing and 50% ordering). We ran our fragmentation module on the pages of the site; it should be noted that not all the fragments were cacheable. In particular, the fragments modifying the back-end database were not cached. The fragmented pages contained three fragments on average, given that the pages were relatively small.

As the system aims to lighten the server's load and the generation delay of the pages, we first measured the fragment reuse rate for the different interaction mixes. As one would expect, the greater the percentage of browsing interactions, the greater the reuse rate. For the Browsing mix, the average calculated over ten simulations, run with 500 clients and 1000 items in the database, was about 60%, which means that almost two thirds of the fragments needed per simulation were served from the cache instead of hitting the original server. For the two other mixes the reuse rate was lower (the mean percentages were respectively 48% and 39% under the same test conditions). Reusing fragments from the cache also significantly reduces the amount of traffic that flows between the server and the cache. In particular, in Browsing mix sessions the system was able to save up to 60% of the bandwidth consumption.

The benefits briefly discussed above have a direct impact on the generation delay of the pages. It is important to stress that the measurements presented in this paper were made on the server and on the cache, hence these are the minimum gains that can be achieved, as the propagation delay over the network is not taken into account 4. Figure 8 represents the time required by the proxy to answer 100 randomly generated requests to the home page. We can notice that after the first request, the reverse proxy cache response time decreases greatly and remains low. It is worth noting that the cache response time for the first request is not higher than the server response time for the same request. Moreover, the server response time does not decrease over time unless internal caching is used.

Figure 8. Home page requests - server vs. cache

Figure 9. Search results - server vs. cache

Figure 9 represents the respective response times of the cache and the server when answering the same search requests made by the clients. While the server response time increases linearly with the number of requests, the cache response time increases much more slowly; the difference is even more noticeable for heavy scripts.

4 As the proxy is usually assumed to be nearer to the clients, the propagation delay over the network should be lower between the cache and the clients.

The more time the scripts take to execute, the higher the savings. Other characteristics, such as the percentage of dynamic content and the number of scripts contained in a page, may also influence performance. To test the performance of our system under different configurations, we developed a configurable generator of PHP pages. This generator takes values of the above characteristics as input and generates pages accordingly. To give an idea of the performance and limitations of the system, we present the best-case and worst-case scenarios obtained for the benchmark considered. In our tests, we kept the average number of scripts in a page constant (i.e., 10 scripts per page) and varied other parameters, such as the percentage of heavy/light scripts and the redundancy rate of fragments. We call a script heavy if it makes heavy requests to the database, whereas a light script only executes a few instructions that do not access the database.

Figure 10 represents the response time observed when all the fragments of the pages are heavy. We notice that in this case, independently of the fragment reuse rate, it is always worth fragmenting the pages and asking for the fragments separately.

Figure 10. Response time for heavy scripts

Figure 11. Response time for light scripts

Figure 11 represents the response time observed when all the fragments of the page execute a single printing instruction. Here, we can notice that for the fragments considered, the fragmentation only becomes worthwhile beyond a certain threshold of fragment reuse rate. This stems from the fact that the cost of the function calling the script is no longer made up for by the execution time of the script in question. In fact, the execution time of the include instruction becomes greater than the execution time of the script itself, and thus, when the reuse rate is low and most of the fragments have to be generated, the cost of generating the page may even double. This shows that fragments should have a minimum size in order to be cost-effective as cache units.

5. Discussion and perspectives

In this paper we have proposed a fragment-based caching system for dynamically generated pages. Our approach consists in statically analyzing the code of the programs generating the dynamic pages. Such a static analysis avoids redundant processing and lowers the overhead of fragmentation, inasmuch as the entire analysis is made once and off-line on the programs themselves. Special care was taken to increase the fragment reuse rate. Thanks to the calculated filters, our system enables the cache to know in advance the identifiers of the fragments required to construct a page (provided the template of the page is already stored). This results in optimizing the requests, lowering the generation delays, and reducing the load on the original server insofar as, henceforth, only the missing fragments are requested.

One might consider the modification of the site repository as a drawback; nonetheless, this is a minor intrusion, as the application logic and processing remain unchanged and only the organization of the pages changes. In future versions of the system, we aim at deploying the fragment-handling logic over a hierarchy of proxies, and we are now working on the specification of the collaboration protocol between the proxies. Furthermore, the current version of the system fully automates the fragmentation. We are now considering the possibility of allowing the administrator to modify the automatic selection of fragments if necessary, as human intervention is likely to improve performance since it leads to a better understanding of the application's particular needs. Finally, we intend to study more closely the effect of fragment characteristics (such as size and execution time) on the performance of the system.

References

[1] Transaction Processing Performance Council. TPC Benchmark W (TPC-W).
[2] Challenger, J., Iyengar, A., Witting, K., Ferstat, C., and Reed, P. A publishing system for efficiently creating dynamic Web content. Proceedings of IEEE INFOCOM 2000 (May 2000).
[3] Challenger, J., Iyengar, A., and Dantzig, P. A scalable system for consistently caching dynamic Web data. Proceedings of IEEE INFOCOM '99, New York (1999).
[4] Datta, A., Dutta, K., Thomas, H., Ramamritham, K., and VanderMeer, D. Dynamic content acceleration: A caching solution to enable scalable dynamic Web page generation. Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles (SIGOPS) (May 2001).
[5] Datta, A., Dutta, K., Thomas, H., VanderMeer, D., Suresha, and Ramamritham, K. Proxy-based acceleration of dynamically generated content on the World Wide Web: An approach and implementation. ACM SIGMOD 2002 (June 2002).
[6] Misedakis, I., Kapoulas, V., and Bouras, C. Web fragmentation and content manipulation for constructing personalized portals. APWeb 2004, LNCS 3007 (2004).
[7] Ramaswamy, L., Iyengar, A., Liu, L., and Douglis, F. Techniques for efficient fragment detection in Web pages. Proceedings of the 12th International Conference on Information and Knowledge Management (CIKM 2003) (November 2003).
[8] Ramaswamy, L., Iyengar, A., Liu, L., and Douglis, F. Automatic detection of fragments in dynamically generated Web pages. Proceedings of WWW 2004, New York, USA (May 2004).
[9] Yuan, C., Chen, Y., and Zhang, Z. Evaluation of edge caching/offloading for dynamic content delivery. Proceedings of the 12th International Conference on World Wide Web (WWW 2003) (2003).
[10] Yuan, C., Hua, Z., and Zhang, Z. Proxy+: Simple proxy augmentation for dynamic content processing. Tech. rep., Microsoft Research Asia.
[11] Zhu, H., and Yang, T. Class-based cache management for dynamic Web content. Tech. rep., University of California, Santa Barbara, 2001.


More information

Introducing the BIG-IP and SharePoint Portal Server 2003 configuration

Introducing the BIG-IP and SharePoint Portal Server 2003 configuration Deployment Guide Deploying Microsoft SharePoint Portal Server 2003 and the F5 BIG-IP System Introducing the BIG-IP and SharePoint Portal Server 2003 configuration F5 and Microsoft have collaborated on

More information

Monitoring Large Flows in Network

Monitoring Large Flows in Network Monitoring Large Flows in Network Jing Li, Chengchen Hu, Bin Liu Department of Computer Science and Technology, Tsinghua University Beijing, P. R. China, 100084 { l-j02, hucc03 }@mails.tsinghua.edu.cn,

More information

Bitrix Site Manager 4.1. User Guide

Bitrix Site Manager 4.1. User Guide Bitrix Site Manager 4.1 User Guide 2 Contents REGISTRATION AND AUTHORISATION...3 SITE SECTIONS...5 Creating a section...6 Changing the section properties...8 SITE PAGES...9 Creating a page...10 Editing

More information

SharePoint Server 2010 Capacity Management: Software Boundaries and Limits

SharePoint Server 2010 Capacity Management: Software Boundaries and Limits SharePoint Server 2010 Capacity Management: Software Boundaries and s This document is provided as-is. Information and views expressed in this document, including URL and other Internet Web site references,

More information

How In-Memory Data Grids Can Analyze Fast-Changing Data in Real Time

How In-Memory Data Grids Can Analyze Fast-Changing Data in Real Time SCALEOUT SOFTWARE How In-Memory Data Grids Can Analyze Fast-Changing Data in Real Time by Dr. William Bain and Dr. Mikhail Sobolev, ScaleOut Software, Inc. 2012 ScaleOut Software, Inc. 12/27/2012 T wenty-first

More information

Cache Configuration Reference

Cache Configuration Reference Sitecore CMS 6.2 Cache Configuration Reference Rev: 2009-11-20 Sitecore CMS 6.2 Cache Configuration Reference Tips and Techniques for Administrators and Developers Table of Contents Chapter 1 Introduction...

More information

Managing Web Application Authentication Problems

Managing Web Application Authentication Problems WavecrestTechBrief Managing Web Application Authentication Problems www.wavecrest.net Introduction General. This paper is written for you Wavecrest Computing s proxy product customers and prospects. It

More information

SOFT 437. Software Performance Analysis. Ch 5:Web Applications and Other Distributed Systems

SOFT 437. Software Performance Analysis. Ch 5:Web Applications and Other Distributed Systems SOFT 437 Software Performance Analysis Ch 5:Web Applications and Other Distributed Systems Outline Overview of Web applications, distributed object technologies, and the important considerations for SPE

More information

<Insert Picture Here> Oracle Web Cache 11g Overview

<Insert Picture Here> Oracle Web Cache 11g Overview Oracle Web Cache 11g Overview Oracle Web Cache Oracle Web Cache is a secure reverse proxy cache and a compression engine deployed between Browser and HTTP server Browser and Content

More information

Web. Services. Web Technologies. Today. Web. Technologies. Internet WWW. Protocols TCP/IP HTTP. Apache. Next Time. Lecture #3 2008 3 Apache.

Web. Services. Web Technologies. Today. Web. Technologies. Internet WWW. Protocols TCP/IP HTTP. Apache. Next Time. Lecture #3 2008 3 Apache. JSP, and JSP, and JSP, and 1 2 Lecture #3 2008 3 JSP, and JSP, and Markup & presentation (HTML, XHTML, CSS etc) Data storage & access (JDBC, XML etc) Network & application protocols (, etc) Programming

More information

Configuring Load Balancing

Configuring Load Balancing When you use Cisco VXC Manager to manage thin client devices in a very large enterprise environment, a single Cisco VXC Manager Management Server cannot scale up to manage the large number of devices.

More information

Web Application Development

Web Application Development Web Application Development Introduction Because of wide spread use of internet, web based applications are becoming vital part of IT infrastructure of large organizations. For example web based employee

More information

Performance Workload Design

Performance Workload Design Performance Workload Design The goal of this paper is to show the basic principles involved in designing a workload for performance and scalability testing. We will understand how to achieve these principles

More information

Analysis of Caching and Replication Strategies for Web Applications

Analysis of Caching and Replication Strategies for Web Applications Analysis of Caching and Replication Strategies for Web Applications Swaminathan Sivasubramanian 1 Guillaume Pierre 1 Maarten van Steen 1 Gustavo Alonso 2 Abstract Replication and caching mechanisms are

More information

THE WINDOWS AZURE PROGRAMMING MODEL

THE WINDOWS AZURE PROGRAMMING MODEL THE WINDOWS AZURE PROGRAMMING MODEL DAVID CHAPPELL OCTOBER 2010 SPONSORED BY MICROSOFT CORPORATION CONTENTS Why Create a New Programming Model?... 3 The Three Rules of the Windows Azure Programming Model...

More information

AN EFFICIENT LOAD BALANCING ALGORITHM FOR A DISTRIBUTED COMPUTER SYSTEM. Dr. T.Ravichandran, B.E (ECE), M.E(CSE), Ph.D., MISTE.,

AN EFFICIENT LOAD BALANCING ALGORITHM FOR A DISTRIBUTED COMPUTER SYSTEM. Dr. T.Ravichandran, B.E (ECE), M.E(CSE), Ph.D., MISTE., AN EFFICIENT LOAD BALANCING ALGORITHM FOR A DISTRIBUTED COMPUTER SYSTEM K.Kungumaraj, M.Sc., B.L.I.S., M.Phil., Research Scholar, Principal, Karpagam University, Hindusthan Institute of Technology, Coimbatore

More information

LabVIEW Internet Toolkit User Guide

LabVIEW Internet Toolkit User Guide LabVIEW Internet Toolkit User Guide Version 6.0 Contents The LabVIEW Internet Toolkit provides you with the ability to incorporate Internet capabilities into VIs. You can use LabVIEW to work with XML documents,

More information

Module 12: Microsoft Windows 2000 Clustering. Contents Overview 1 Clustering Business Scenarios 2 Testing Tools 4 Lab Scenario 6 Review 8

Module 12: Microsoft Windows 2000 Clustering. Contents Overview 1 Clustering Business Scenarios 2 Testing Tools 4 Lab Scenario 6 Review 8 Module 12: Microsoft Windows 2000 Clustering Contents Overview 1 Clustering Business Scenarios 2 Testing Tools 4 Lab Scenario 6 Review 8 Information in this document is subject to change without notice.

More information

A Fragment-Based Approach for Efficiently Creating Dynamic Web Content

A Fragment-Based Approach for Efficiently Creating Dynamic Web Content A Fragment-Based Approach for Efficiently Creating Dynamic Web Content JIM CHALLENGER, PAUL DANTZIG, ARUN IYENGAR, and KAREN WITTING IBM Research This article presents a publishing system for efficiently

More information

This chapter describes how to use the Junos Pulse Secure Access Service in a SAML single sign-on deployment. It includes the following sections:

This chapter describes how to use the Junos Pulse Secure Access Service in a SAML single sign-on deployment. It includes the following sections: CHAPTER 1 SAML Single Sign-On This chapter describes how to use the Junos Pulse Secure Access Service in a SAML single sign-on deployment. It includes the following sections: Junos Pulse Secure Access

More information

Chapter 1 - Web Server Management and Cluster Topology

Chapter 1 - Web Server Management and Cluster Topology Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management

More information

Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework

Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework Azure Scalability Prescriptive Architecture using the Enzo Multitenant Framework Many corporations and Independent Software Vendors considering cloud computing adoption face a similar challenge: how should

More information

Cisco Application Networking for BEA WebLogic

Cisco Application Networking for BEA WebLogic Cisco Application Networking for BEA WebLogic Faster Downloads and Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address

More information

Cisco Application Networking for IBM WebSphere

Cisco Application Networking for IBM WebSphere Cisco Application Networking for IBM WebSphere Faster Downloads and Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address

More information