
Automated Performance and Scalability Testing of Rich Internet Applications

Niclas Snellman

Master of Science Thesis
Supervisor: Ivan Porres
Software Engineering Laboratory
Department of Information Technologies
Faculty of Technology
Åbo Akademi University


ABSTRACT

In this thesis we identify different problems and challenges in cost-effective performance and scalability testing of Rich Internet Applications (RIAs). We then describe the ASTORIA (Automatic Scalability Testing of RIAs) framework as a solution to the identified problems and challenges. One of the distinguishing characteristics of our proposed approach is that it does not require server-side monitoring of performance metrics or modification of any server configurations. The validity of our approach is demonstrated first by building a prototype implementation of the ASTORIA framework and then by using it for conducting experiments involving performance testing of RIAs.


ACKNOWLEDGEMENTS

I would like to express my deepest gratitude to Professor Ivan Porres for his guidance, encouragement, and patience, without which this thesis would not exist. My sincere appreciation goes to the brilliant people in the Software Engineering Laboratory at Åbo Akademi University for being a constant source of inspiration. Last but not least I would like to thank my colleagues and friends: Adnan Ashraf for the collaboration and many constructive discussions, and Johan Selänniemi, Thomas Forss and Fredric Hällis for their constant support and encouragement.

Niclas Snellman
Åbo, August 25, 2011


CONTENTS

Abstract
Acknowledgements
Contents

1 Introduction
  1.1 Purpose of this thesis
  1.2 Thesis structure

2 Problem statement
  2.1 Specific challenges
  2.2 Load generation

3 ASTORIA Framework
  3.1 Overview
  3.2 ASTORIA Master
  3.3 ASTORIA Load Generator (ALG)
  3.4 ASTORIA Local Controller (ALC)
  3.5 Virtual User Simulator (VUS)

4 Collecting and combining data
  4.1 Response time and throughput aggregation
  4.2 Aggregating averages and standard deviations

5 Prototype implementation
  5.1 Development process
  5.2 Components and overview
  5.3 AstoriaMaster
  5.4 AstoriaSlave
  5.5 Custom Amazon EC2 machine images

6 Using the prototype
  6.1 Preparation
  6.2 Execution

7 Evaluation of the prototype implementation
  7.1 Experiment
  7.2 Results
  7.3 Interpretation and analysis of the results

8 Conclusions
  8.1 Evaluation of the ASTORIA framework
  8.2 Related work
  8.3 Further research

Swedish summary
Bibliography

CHAPTER ONE
INTRODUCTION

Rich Internet Applications (RIAs) [4] are web applications that provide a desktop-application-like user experience by using asynchronous techniques for communication and dynamic content rendering, such as Ajax (Asynchronous JavaScript and XML), Adobe Flash or Microsoft Silverlight. The widespread use of RIAs has resulted in higher expectation levels concerning the usability, interactivity and performance of web applications. However, the technologies used in RIAs have also introduced new challenges to performance and scalability testing of web applications.

Performance testing is, according to Liu [14], defined as a measure of how fast an application can perform certain tasks, whereas scalability measures performance trends over a period of time with increasing amounts of load. If an application performs well at a nominal load, but fails to maintain its performance before reaching the intended maximum tolerable load, the application is not considered scalable. Good performance at nominal load does not guarantee application scalability. An application should ideally maintain a flat performance curve until it reaches its intended maximum tolerable load.

Cloud computing introduces new opportunities and dimensions for ensuring and extending the performance and scalability of RIAs. In a compute cloud, a RIA is hosted on one or more application server instances that run inside dynamically provisioned Virtual Machines (VMs).

The growing size and complexity of RIAs, and of the underlying infrastructure used for hosting these applications, has created the need to devise effective ways of doing automatic performance and scalability testing.

Conventional functional and non-functional testing of web applications generally involves generating synthetic HTTP traffic to the application under test. A commonly used approach [16, 24, 25] is to capture HTTP requests and later replay them with the help of automated testing tools. Fig. 1.1 presents an abstract view of how this works.

Figure 1.1: Record and playback HTTP requests

However, due to the complexity introduced by the asynchronous technologies used for building RIAs, conventional load generation methods will not work. Instead we need to consider capturing and replaying user actions on a higher level, i.e. capture user interactions with a web browser as shown in Fig. 1.2. Consequently, we are forced to generate load by actually automating ordinary web browsers. In other words, we need to simulate web application users. While web browser automation in itself is not particularly difficult, automating large numbers of web browsers simultaneously in an effective manner is a non-trivial task. Moreover, the high system resource requirements of web browsers introduce a problem of resource availability. Cloud computing provides a theoretically unlimited number of VMs which we can use for generating test load and simulating large quantities of web application users. However, we still need to use the cloud resources efficiently in order to minimize the costs of performing a test.

Figure 1.2: Record and playback user actions

1.1 Purpose of this thesis

In this thesis, we describe different problems and challenges in cost-effective performance and scalability testing of RIAs. We then propose the ASTORIA (Automatic Scalability Testing of RIAs) framework as a solution to the identified problems and challenges. One of the distinguishing characteristics of our proposed approach is that it does not require server-side monitoring of performance metrics or modification of any server configurations. The validity of our proposed approach is first demonstrated by building a working prototype implementation of the ASTORIA framework and then by using it for conducting experiments involving performance testing of RIAs.

1.2 Thesis structure

The contents of the thesis are organized as follows. We begin in the second chapter by describing the challenges that are associated with performance testing of RIAs. In the third chapter the ASTORIA framework is described in detail. The fourth chapter illustrates different issues that arise from collecting and combining accurate data from different sources, and our proposed solutions to the identified problems. The fifth and sixth chapters are dedicated to describing a prototype implementation of the ASTORIA framework. In the seventh chapter we conduct a performance test using the prototype, interpret the results of the test, and evaluate the prototype. In the last chapter we present our final conclusions, discuss some previous works that are related to our research, and give suggestions for further research.

CHAPTER TWO
PROBLEM STATEMENT

The most commonly used metrics for measuring performance and scalability of RIAs are response time and throughput. Response time measures the time required for the RIA to complete one single action, while throughput measures how many user actions are completed per unit of time. By measuring and recording response time and throughput, we can answer the following questions concerning the performance and scalability of RIAs:

- What is the average response time R at a certain load L (number of simultaneous users)?
- What is the average throughput T (actions per second) at a certain load L?

The objective of this thesis is to discuss how we can answer these questions empirically by carrying out a load test using cloud computing resources. The two main concerns of our approach are how to automate and simplify, from a developer's point of view, the performance test process, and how to reduce the costs of a performance test.

In order to automate and simplify the performance test we propose a testing approach that does not require the installation of any additional software on the application server(s). We also require that the provisioning of the testing hardware is handled automatically. Furthermore, we propose a method for automatically deciding when a performance test can be terminated. What this means in practice is that no user interaction is required after a test has been started.

Cost

Cloud providers usually have usage-based charging schemes, with different tariffs for different services. Besides the most obvious fee, i.e. the provisioning fees of the VMs, it is not uncommon for providers to charge for storage services (not VM local storage), bandwidth usage, load balancing services, etc. For cloud-based load generation the most important costs to consider are the VM provisioning fees. Slightly simplified, the total cost for conducting a performance test using load generated in the cloud can be calculated using the formula below. Here n is the desired number of simulated users and n_vm is the number of users that can be simulated on each VM. The test time is t and c_vm denotes the cost of renting one VM per time unit.

    cost = \max(1, n / n_{vm}) \cdot c_{vm} \cdot t    (2.1)

In order to minimize the cost, we should clearly try to minimize the cost per virtual machine c_vm and the length of the test t, and maximize the number of simulated users per VM n_vm. Minimizing the cost per virtual machine can be achieved by selecting the cheapest cloud provider for the required level of service. This would require an evaluation of different cloud providers, which is out of the scope of this thesis. In Section 2.1.5 we discuss how to stop the test automatically once the results are statistically significant, which can minimize the testing times and reduce cost. In order to maximize the number of simulated users per VM we use various techniques, such as using headless browsers and introducing a resource control loop in each VM, as described in Section 3.4. Finally, in order to reduce bandwidth costs we aggregate the test results in the load generation servers, as described in Section 4.1.
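To make Equation 2.1 concrete, the following is a minimal Java sketch of the cost estimate. The class and method names are our own, and rounding the VM count up to a whole number is an assumption we add on top of the formula, since only whole VMs can be rented.

    // Minimal sketch of Equation 2.1 (hypothetical names). The VM count is
    // rounded up, since only whole VMs can be rented.
    final class CostSketch {
        static double estimateTestCost(int n, int nVm, double cVm, double t) {
            int vms = Math.max(1, (int) Math.ceil((double) n / nVm)); // max(1, n / n_vm)
            return vms * cVm * t;                                     // cost = VMs * c_vm * t
        }

        public static void main(String[] args) {
            // Example: 1000 users, 50 users per VM, 0.10 per VM-hour, 2 hours
            // -> max(1, 20) * 0.10 * 2 = 4.00
            System.out.println(estimateTestCost(1000, 50, 0.10, 2.0));
        }
    }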

2.1 Specific challenges

We now describe some specific problems and challenges in performance and scalability testing of RIAs.

2.1.1 What do we measure?

Response times can be measured at several different levels, as shown in Fig. 2.1. Server-side monitoring methods, which only record the time taken to process requests in the server, are shown as level 3. Traditional non-functional testing methods usually record response time as the time it takes for the system under test to respond to the HTTP requests that are sent by the testing tool. This is shown as level 2 in Fig. 2.1. When testing RIAs, however, response time is a measure of how fast actions are executed in the web browser (level 1). Thus, response time, in RIA testing, includes the time required for the client to execute any JavaScript that is triggered by the action and the time the server needs to respond to any HTTP requests sent by the JavaScript.

Figure 2.1: Different levels for measuring response time

Ideally we would like to exclude the time that any HTTP requests spend in the network and any overhead generated by the web browser, since this would provide us with the most precise results. Properly excluding network time would require us to employ server-side monitoring of requests. This approach would require a certain degree of tampering with the servers, which is something we would like to avoid altogether.

However, the network overhead can most likely be ignored if the tests are performed in the same cloud as the RIA servers. Eliminating the overhead generated by the browser might be achieved by adding a proxy that forwards requests from the browser to the server and records the time for each request. Such methods are, however, outside the scope of this thesis.

2.1.2 Uniqueness of HTTP requests

When using conventional methods for generating load on a web site, the load generation is generally done by sending large amounts of previously captured HTTP requests to the system under test and measuring the response times. This is possible since simple web sites often do not keep track of user sessions, so requests can be reproduced directly. However, for RIAs to work properly the server has to keep track of the state of every individual user's session. This can be accomplished by, for instance, storing a session token, which identifies the session, on the server as well as in a cookie on the client's computer. Thus, by passing the session token as a parameter with each request, the client can identify itself. A consequence of this is a security issue commonly known as Cross-Site Request Forgery (CSRF), a type of exploit where an unauthorized user of a web application identifies itself by using the session token of a user trusted by the web application. A simple capture-replay approach will therefore not work when generating load for RIAs, since the recorded requests contain the token of the session that was active at the time when the requests were recorded.

2.1.3 Load characterization

Since user interface interaction events are more complex than individual HTTP requests, we need to consider that they may be of different classes. If we, for example, consider an on-line vendor, the performance requirements for a user action that requests information about a product may be different from the performance requirements of a user action that actually buys the product and processes the payment. Thus, each class of user actions can have a different average response time, and the values of the response times would be distributed around the average.

Thus, all classes of user actions should be identified and defined beforehand, and the data aggregation and calculations have to be done separately for each action class. Since response time is a random variable, we can expect the response times for actions of a certain action class to be dispersed around an average value. However, if the action classes are poorly defined, this will not be the case.

2.1.4 How many VMs and how many virtual users on each VM?

When utilizing a compute cloud for the purpose of generating load we need to know how many VMs to allocate. Determining the number of required VMs is entirely dependent on the amount of load we want to generate and how much load each VM is capable of generating. If we know that each VM is capable of simulating k users and we need to simulate a total of N users, then we simply divide N by k, rounding up, to get the required number of VMs; for example, with N = 1000 and k = 150 we need 7 VMs. However, the number of virtual users each VM is capable of simulating is very difficult to predict. We therefore need a control loop whose goal is to create N virtual users. Furthermore, the control loop should allocate VMs in such a way that the target number of virtual users can be reached as fast as possible in order to minimize cost. Similarly, we need a control loop on each VM which controls the number of simulated users on that VM, while monitoring the resource constraints of the VM, such as CPU capacity and network bandwidth, in order to prevent the VM from being overloaded. Hence, we need two control loops in total: one for deciding how many VMs are needed to reach a target of N users, and a secondary loop for deciding how many simultaneous virtual users can be run on each VM. The challenge is not only to do this, but to do it as automatically and as efficiently as possible.

2.1.5 When to stop testing?

In order to achieve unsupervised testing of RIAs, there needs to be an automatic way of determining when the acquired results are sufficient to stop the test. Additionally, we need to evaluate when the most appropriate time to terminate a test is with respect to the VM allocation intervals. For example, in the Amazon Elastic Compute Cloud (EC2) [17], VMs are rented in hourly intervals. If the results of a running test indicate that there is enough test data to stop testing, we need to decide whether it is better to stop at once or to continue until we reach the end of the renting hour.

The approach we suggest for deciding when a performance test has run for a sufficiently long time is based on the standard deviation of the response times. We calculate the standard deviation from the response times for each specific action or action class and, subsequently, abort the test once the standard deviations reach a certain predetermined lower limit. For a scalable RIA, low standard deviations indicate that the load balancer has been able to successfully scale the RIA back-end servers to a level where the load can be handled comfortably and the test can be terminated safely.

2.2 Load generation

For various reasons, the simple capture-replay approach is not applicable to RIAs. For instance, generating load representing realistic user-initiated traffic for web applications that rely on Ajax is challenging due to the randomness of the HTTP requests sent by the JavaScript in the users' web browsers. Another obstacle is the various methods that have been developed for preventing CSRF attacks: recorded requests containing identification tokens from earlier sessions will inevitably be blocked. Thus, in order to use conventional methods for generating traffic for RIAs we would be required to disable CSRF prevention methods on the server. Ideally we would like to avoid having to do any modifications to the system under test.

One solution to these problems would be to rely on ordinary web browsers for load generation.

In order to do so it is necessary to record high-level user interactions with the web browser instead of low-level HTTP requests. Such recordings can then be used to automate the behavior of a web browser in order to simulate a real user. However, using regular web browsers for generating traffic introduces several technical challenges for test automation and orchestration.

While web browser automation in itself is not a major problem (there are, in fact, many tools for automating various web browsers already in existence [23, 28, 10]), using ordinary web browsers such as Mozilla Firefox, Google Chrome or Opera for generating traffic is not an ideal option. Due to their excessive resource requirements, regular web browsers are not suitable for effective load generation. For example, running one instance of Mozilla Firefox may consume up to a hundred megabytes of memory (and possibly even more). This leaves us with a comparatively low number of web browser instances that we are able to run simultaneously per machine. Furthermore, we are limited to simulating only one user per web browser instance. Consequently, generating traffic with web browsers becomes a costly affair as the number of machines needed to generate a sufficient load increases.

2.2.1 Headless web browser

Another approach is to use a headless web browser, which is a web browser without a graphical user interface (GUI). The lack of a GUI enables web browsing with a smaller memory and CPU footprint. Using a headless web browser for performance testing could enable us to simulate a higher number of users per machine, thus reducing costs. Headless web browsers are usually controlled programmatically by using an application programming interface (API) provided by their developers [7, 9]. Consequently, in order to automate a headless web browser we are forced to develop a custom driver application. While this may be more complicated than automating an ordinary web browser using already available tools, it offers us a wider range of options. For instance, output data can be customized to suit the needs of the types of tests we are conducting.

We also need to be able to control the headless web browsers in such a way as to simulate typical user behavior. One approach is to make the driver take some form of test scripts describing different usage scenarios as input. Ideally, the driver would also allow variables in the scripts, which would enable us to easily generate traffic with the characteristics of a diverse user pool. This would eliminate the need to create multiple test scripts in order to simulate different user scenarios.

2.2.2 Headless X Window System

A similar approach may be accomplished by exploiting the possibility of running the X Window System for Linux in a headless fashion [31]. This would allow us to use ordinary web browsers without being forced to use a GUI. The apparent advantages of this approach are ease of automation, smaller memory and CPU footprints, and the reliability and robustness of standard web browsers. There are, however, disadvantages. These include, for example, an overhead that is generated by the X Window System in spite of the lack of a GUI, and a more complex initialization process for the VMs used for testing.

CHAPTER THREE
ASTORIA FRAMEWORK

In this chapter we present the ASTORIA framework as a novel solution for conducting performance testing of RIAs in the cloud.

3.1 Overview

ASTORIA (Automatic Scalability Testing of RIAs) is a framework for automatic performance and scalability testing of RIAs. It consists of the following main components: ASTORIA Master, ASTORIA Load Generator (ALG), ASTORIA Local Controller (ALC), and Virtual User Simulator (VUS). An overview of the ASTORIA framework is shown in Fig. 3.1.

Figure 3.1: Overview of the ASTORIA framework

3.2 ASTORIA Master

ASTORIA Master orchestrates VMs in an Infrastructure as a Service (IaaS) compute cloud such as Amazon EC2. There are two main challenges for ASTORIA Master:

- How to automatically decide an optimal number of ALGs based on the desired number of virtual users to simulate?
- How to automatically decide when to stop testing?

3.2.1 Deciding the optimal number of ALGs

We do not know beforehand how many users we will be able to simulate per VM, due to the varying system resource requirements of the VUS. This is analogous to the varying system resource utilization required by ordinary web browsers when using different web applications. Depending on which IaaS we decide to use, there may also be different types of VMs available, with different amounts of memory and CPU cores, which will have a significant influence on the number of virtual users per VM.

In order to deal with the first challenge, ASTORIA Master runs a control loop. Initially, the loop is started by instantiating an estimated number of ALGs, after which it iterates until the target number of virtual users N is reached. In each iteration, ASTORIA Master decides how many more ALGs need to be instantiated in proportion to the existing and required number of virtual users. In other words, ASTORIA Master acts as a proportional controller [8] that tries to reach the target number of virtual users as fast as possible. It uses the following formula for calculating the proportional number of ALGs:

    N_{ALG} = \frac{n \left( N - \sum_{i=1}^{n} e_i \right)}{\sum_{i=1}^{n} e_i},    (3.1)

where N_ALG is the optimal proportional number of ALGs to instantiate in an iteration, N is the target number of virtual users, n is the number of existing ALGs, and e_i denotes the number of existing virtual users on the i-th ALG. For example, if N is 1000, and after instantiating two ALGs in the first iteration of the control loop e_1 is 101 and e_2 is 99, then in the second iteration ASTORIA Master launches 8 ALGs in order to reach the target load as fast as possible. For simplicity, measurement delays and ALG instantiation time are not considered.

3.2.2 When to stop testing?

ASTORIA Master uses an approach called sequential sampling [3] to tackle the second challenge. The idea of sequential sampling is to continuously take samples until a desired condition is met. For performance and scalability testing, the desired condition is specified in terms of performance (response time and throughput) requirements for an action class and the intended load level. For example, the desired condition for sequential sampling for a certain action class could be an average response time of 500 milliseconds and an average throughput of 2000 requests per second at an intended load level of 1000 virtual users. ASTORIA Master collects samples of performance data from all ALGs and continuously aggregates them in order to calculate the average and standard deviation for response time and throughput. As soon as the results cross the desired threshold, they are considered good enough to stop the test and the VMs are terminated.
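To make the controller and the stopping rule concrete, here is a minimal Java sketch of both; the class and method names are our own illustrative assumptions, not part of the actual ASTORIA Master implementation.

    import java.util.List;

    // Minimal sketch (hypothetical names) of ASTORIA Master's proportional
    // controller (Equation 3.1) and its sequential-sampling stop check.
    public final class MasterLoopSketch {

        // Equation 3.1: how many additional ALGs to launch in this iteration.
        static int additionalAlgs(int targetUsers, List<Integer> usersPerAlg) {
            int n = usersPerAlg.size();
            int existing = usersPerAlg.stream().mapToInt(Integer::intValue).sum();
            if (existing == 0 || existing >= targetUsers) return 0;
            return Math.round((float) n * (targetUsers - existing) / existing);
        }

        // Sequential sampling: the test may stop once the aggregated response
        // time average and standard deviation satisfy the desired condition.
        static boolean stopConditionMet(double mean, double stdDev,
                                        double desiredMean, double desiredStdDev) {
            return mean <= desiredMean && stdDev <= desiredStdDev;
        }

        public static void main(String[] args) {
            // Example from the text: N = 1000, e1 = 101, e2 = 99 -> 8 ALGs.
            System.out.println(additionalAlgs(1000, List.of(101, 99)));
        }
    }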

3.3 ASTORIA Load Generator (ALG)

In the ASTORIA framework, ALGs are realized as VMs in a compute cloud, whose purpose is to generate as much load as possible on the RIA under test. This is done by having an ALC on each VM control a number of VUSs. Each VUS simulates a number of users.

3.4 ASTORIA Local Controller (ALC)

On each ALG, an ALC controls m_i VUSs. The main responsibility of an ALC is to determine m_i and to ensure that each VUS creates as much load as possible without breaking the performance constraints of the VM. Similarly to ASTORIA Master, the ALC works as a proportional controller that, in each iteration, calculates how many virtual users should be created or removed in proportion to the current and intended levels of CPU utilization and memory consumption. For this, the ALC has two regulators: one for CPU utilization and one for memory consumption. The proportional number of virtual users N_USERS in each iteration depends on these regulators and the following formulas:

    N_C = \frac{n \left( C - \sum_{i=1}^{n} c_i \right)}{\sum_{i=1}^{n} c_i}    (3.2)

and

    N_M = \frac{n \left( M - \sum_{i=1}^{n} m_i \right)}{\sum_{i=1}^{n} m_i},    (3.3)

where N_C is the optimal proportional number of users to instantiate in the next iteration based on CPU consumption, C is the intended CPU consumption, n is the number of existing users, and c_i denotes the CPU consumption of the i-th user. Similarly, N_M is the optimal proportional number of users to instantiate in the next iteration based on memory consumption, M is the intended memory consumption, and m_i denotes the memory consumption of the i-th user. N_USERS, i.e. the number of virtual users to simulate in the next iteration, is then simply the minimum of N_C and N_M. For example, if n is 2, C is 90%, c_1 is 6%, and c_2 is 4%, then the CPU consumption error is 80% and thus N_C is 16. Similarly, if M is 80%, m_1 is 5%, and m_2 is 5%, then the memory consumption error is 70% and thus N_M is 14. In this case, N_USERS would therefore be 14.
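The two regulators can be illustrated with the following minimal Java sketch, which reproduces the worked example above; all names are hypothetical, not taken from the prototype.

    // Minimal sketch (hypothetical names) of the ALC regulators in
    // Equations 3.2 and 3.3: scale virtual users by CPU and memory headroom.
    final class AlcSketch {

        // Proportional term shared by both regulators:
        // n * (target - sum of per-user consumption) / (sum of per-user consumption)
        static int proportional(double target, double[] perUser) {
            double sum = 0;
            for (double v : perUser) sum += v;
            if (sum == 0) return 0;
            return (int) Math.round(perUser.length * (target - sum) / sum);
        }

        public static void main(String[] args) {
            int nC = proportional(90.0, new double[] { 6.0, 4.0 }); // Eq. 3.2 -> 16
            int nM = proportional(80.0, new double[] { 5.0, 5.0 }); // Eq. 3.3 -> 14
            int nUsers = Math.min(nC, nM);                          // N_USERS -> 14
            System.out.println(nC + " " + nM + " " + nUsers);
        }
    }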

3.5 Virtual User Simulator (VUS)

Virtual users are simulated by the VUSs. The exact number of virtual users to simulate is controlled by the ALC. Depending on the implementation, one instance of a VUS can simulate either one user or, if threading is supported, multiple users simultaneously. Simulating multiple users per VUS instance may be beneficial if the VUS itself generates a lot of overhead. In such cases we have the option of simulating either a fixed or a variable number of users. The disadvantage of simulating a fixed number of users per VUS is that it may prevent the ALC from scaling the total number of simulated users per VM with high precision. For example, with 10 simulated users per VUS the ALC cannot simulate a total of exactly 95 users.

CHAPTER FOUR
COLLECTING AND COMBINING DATA

Due to bandwidth limitations and the cost of data transfer, the performance data for each action class should be compiled into a smaller set of data locally on each ALG before being transferred to ASTORIA Master. By compressing the produced data we also decrease the risk of overloading ASTORIA Master. In this chapter we describe how such data can be compressed without losing vital information.

4.1 Response time and throughput aggregation

Each VUS logs the response time of every performed action locally on the ALG it is running on. At certain time intervals, the ALC aggregates the response time data from all VUSs on its ALG and calculates the arithmetic mean and standard deviation of the response times separately for each action class. Additionally, throughput, i.e. actions per unit of time, is calculated separately for each action class. Thus, three values are associated with each action class: average response time, response time standard deviation and throughput. A list containing one set of these values for each action class is then sent to ASTORIA Master.
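As an illustration of this local aggregation step, here is a minimal Java sketch; the class and method names, and the array-based report format, are our own assumptions.

    // Minimal sketch (hypothetical names) of the per-action-class aggregation
    // performed locally by the ALC: mean and sample standard deviation of the
    // response times, plus throughput over the reporting interval.
    final class AlcAggregationSketch {
        static double[] aggregateActionClass(double[] responseTimesMs, double intervalSeconds) {
            int n = responseTimesMs.length;
            double sum = 0;
            for (double t : responseTimesMs) sum += t;
            double mean = sum / n;
            double squares = 0;
            for (double t : responseTimesMs) squares += (t - mean) * (t - mean);
            double stdDev = Math.sqrt(squares / (n - 1)); // sample standard deviation
            double throughput = n / intervalSeconds;      // actions per second
            return new double[] { mean, stdDev, throughput };
        }
    }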

Once a list from each ALG has been received, ASTORIA Master calculates the overall average and standard deviation of response times and throughput for each action class. This final set of data can then be used to determine the overall performance and scalability of the RIA under test. By doing this we have sacrificed the data produced by each individual VUS and replaced it with averages. In the grand scheme of things, this reduction in the resolution of the test data is acceptable.

4.2 Aggregating averages and standard deviations

The following statistical methods are used for combining the averages and standard deviations of several groups of samples (response times). For aggregating averages, we simply calculate the weighted average of the averages. For aggregating standard deviations, we calculate the combined standard deviation by using the method described by Langley in [12]. It uses the following formula:

    S = \sqrt{\frac{B - \frac{A^2}{N}}{N - 1}}.    (4.1)

Here N is the total number of combined samples from all groups, that is, N = n_1 + n_2 + n_3, etc. A represents the total sum of observed measurements, that is, A = A_1 + A_2 + A_3, etc., where A_i is calculated as m_i n_i, m_i being the average of the samples in the i-th class and n_i the sample size. B denotes the sum of B_1, B_2, B_3, etc., where B_i is calculated as:

    B_i = s_i^2 (n_i - 1) + \frac{A_i^2}{n_i},    (4.2)

where s_i is the standard deviation of the i-th class.
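Equations 4.1 and 4.2 translate directly into code. The following minimal Java sketch (with hypothetical names) combines per-group sample sizes, means and standard deviations into one overall mean and standard deviation.

    // Combine per-ALG statistics with Langley's method (Equations 4.1 and 4.2).
    // groups[i] = { n_i, m_i, s_i }: sample size, mean, standard deviation.
    final class CombineStatisticsSketch {
        static double[] combine(double[][] groups) {
            double nTotal = 0, a = 0, b = 0;
            for (double[] g : groups) {
                double n = g[0], m = g[1], s = g[2];
                double ai = m * n;                  // A_i = m_i * n_i
                nTotal += n;                        // N = sum of n_i
                a += ai;                            // A = sum of A_i
                b += s * s * (n - 1) + ai * ai / n; // B_i, Equation 4.2
            }
            double mean = a / nTotal;               // weighted average of averages
            double stdDev = Math.sqrt((b - a * a / nTotal) / (nTotal - 1)); // Equation 4.1
            return new double[] { mean, stdDev };
        }
    }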

CHAPTER FIVE
PROTOTYPE IMPLEMENTATION

In this chapter we describe a prototype implementation of the ASTORIA framework, developed in the Software Engineering Laboratory of the Department of Information Technologies at Åbo Akademi. The prototype was developed in collaboration with Vaadin Ltd. [27], whose web development framework [27] was used for developing the experimental RIAs that were used for testing the prototype.

5.1 Development process

The prototype was developed in several iterations during the last six months of 2010. We began by researching different options for automating web browsers in order to simulate user behavior. By the end of the summer, an experimental VUS implementation had been developed using an open source headless web browser named HTMLUnit [7]. The application, which later got the name AstoriaSlave, was completely rewritten in later stages of the development process due to significant updates in some of the third-party components used, which improved the robustness of the application.

In the last weeks of the summer of 2010 a tool for orchestrating multiple VMs running AstoriaSlave was implemented using the Python programming language.

The motivation for using Python was that it allowed us to quickly produce a proof of concept, which could be shown at the Cloud Software Program [20] quarterly meeting held in Oulu, Finland on the 27th of September. At the request of Vaadin Ltd., the application was later re-implemented using the Java programming language. The orchestration application was conveniently named AstoriaMaster.

5.2 Components and overview

The prototype, which was implemented on top of the Amazon EC2 compute cloud [17], consists of two main components: AstoriaMaster, which is an implementation of the ASTORIA Master framework component, and AstoriaSlave, which concretizes the features of the VUS framework component. An overview of the prototype implementation can be seen in Fig. 5.1.

Additionally, two supporting components were used: LogCompiler and ProcessScaleBalancer. LogCompiler is used for aggregating output data locally on the ALGs before it is transferred to AstoriaMaster. In other words, LogCompiler handles the tasks described in Chapter 4. ProcessScaleBalancer is a tool for local scaling of processes based on system resource constraints such as CPU utilization, memory consumption or bandwidth. It is, in other words, an implementation of an ALC. Furthermore, ProcessScaleBalancer provides AstoriaMaster with information concerning the status of the virtual machine it is running on. These data can later be used for intelligent scaling of the number of VMs. At the time of writing, ProcessScaleBalancer is still being developed by Thomas Forss, and it is the main subject of research in his master's thesis.

The prototype makes use of usage scenarios called test scripts, recorded with Selenium IDE [23], which is described in greater detail in Section 6.1.2. All components were implemented using the Java programming language.

Figure 5.1: Overview of the prototype implementation of the ASTORIA framework

5.3 AstoriaMaster

Here we describe the most important features of AstoriaMaster and discuss some of the methods used when implementing said features. An overview of the program can be seen in Fig. 5.2.

5.3.1 Basic layout

AstoriaMaster was not developed as a stand-alone application, but as a Java package which can be utilized within other Java applications. The main class of AstoriaMaster is EC2BackedPerformanceTest, which provides the user interface needed to initialize and start a performance test using the Amazon EC2 compute cloud. A factory [6] for creating instances of EC2BackedPerformanceTest was implemented for the sake of simplicity.

Listing 5.1 shows the createScalabilityTest method provided by the factory for creating instances of EC2BackedPerformanceTest. The Selenium test script to be used in the test can be specified either as an instance of the class TestScript or as a text string. If a text string is used (as shown in the example), a few additional parameters describing the test need to be provided as well.

Figure 5.2: Overview of AstoriaMaster

The additional required parameters are rampUpTime, virtualUsers and sleepTime. When a TestScript instance is used, these parameters are specified in the TestScript itself. The user's Amazon Web Services (AWS) credentials are required for the application to be able to launch EC2 instances (an instance is, in this context, equal to a VM). Thus, the AWS Access Key, AWS Secret Key and EC2 end-point URL must be given as arguments to the method. Additionally, a base URL, which is the URL of the RIA that is to be tested, is required, as well as the interval (in seconds) at which test reports will be aggregated. The prototype also allows the user to specify an initial number of virtual users per instance.

Listing 5.1: Using the factory method

    EC2BackedPerformanceTest test = EC2BackedPerformanceTestFactory.createScalabilityTest(
            testScriptString,
            "http://www.reddit.com",              // base URL
            30,                                   // update interval in seconds
            "AMAZONAWSACCESSKEY",
            "AMAZONAWSSECUREKEY",
            "ec2.eu-west-1.amazonaws.com",        // EC2 end-point URL
            "/tmp/astoria/",                      // workspace directory
            rampUpTime, virtualUsers, sleepTime,  // additional test script parameters
            virtualUsersPerInstance);

EC2BackedPerformanceTest

EC2BackedPerformanceTest extends AbstractPerformanceTest and implements the operations specified therein by utilizing the infrastructure provided by the Amazon EC2 compute cloud. The main objectives of the class are to launch and manage EC2 instances, upload and start virtual users on the instances, gather test results from the instances and finally terminate the instances. After an instance of EC2BackedPerformanceTest has been created with the factory, the user can start the test by calling the run method. This results in the execution of the following actions in the given order:

1. Make sure the workspace directory is set up properly (more on this in Section 6.1.1).
2. Make an initial estimation of the number of required EC2 instances N.
3. Launch N EC2 instances.
4. Upload the tools from the workspace directory to each instance.
5. Start monitoring the instances (more on this in Section 5.3.5).
6. Start the virtual users on the instances.
7. Collect test reports from the instances at regular intervals.

5.3.2 Amazon EC2 API

In order to launch, manage and terminate EC2 instances we used the application programming interface available in the AWS Software Development Kit (SDK) for Java [1] provided by Amazon. While the API gives the programmer access to virtually every feature of the AWS infrastructure services, performing simple tasks such as launching an instance is a bit more cumbersome than we would have preferred. A class called EC2InstanceHandler was therefore implemented, which features methods for simpler handling of EC2 instances. See Listing 5.2 for an example of how the AWS API can be used. Other notable features are methods for handling security groups, as well as a method for generating authentication keys, which are used by the RemoteMachineController described in Section 5.3.4.

Listing 5.2: Launching EC2 instances using the AWS SDK

    public List<Instance> launchEC2Instances(int N) {
        // Launch EC2 instances
        RunInstancesRequest runRequest = new RunInstancesRequest();
        runRequest.setImageId(ec2ImageId);
        runRequest.setInstanceType(ec2InstanceType);
        runRequest.setKeyName(ec2KeyPairName);
        runRequest.setMaxCount(N);
        runRequest.setMinCount(N);
        RunInstancesResult result = ec2Client.runInstances(runRequest);
        return result.getReservation().getInstances();
    }

5.3.3 Test results handling

In order to keep track of the test data that are collected from the ALGs, we implemented a data container named TestResultsContainer, which helps us store and compile a collection of reports. A report is a grouping of test data for one specific action class generated in one specific instance, and is represented by a class called InstanceGroupReport. In other words, one instance of InstanceGroupReport contains the combined test results for one action class produced by all virtual users running on a particular VM. Therefore, each time reports are downloaded from an ALG we get one instance of InstanceGroupReport per action class. An instance of InstanceGroupReport contains the following test data: timestamp, actionCount, errorCount, actionTimeMean, actionTimeStandardDeviation, actionTimeMin, actionTimeMax and groupName.

When an instance of InstanceGroupReport is added to a TestResultsContainer it is initially put in a list named latest. By calling a method called getIntermediateReport, a user of the prototype can choose to get an intermediate report while the test is running.

In such an event the reports stored in latest are combined and returned to the user. The reports are then moved from latest to a list called history, which stores all reports generated by the test. At the end of the test the user can call the getFinalResults method, which returns the contents of history.

5.3.4 Communication

The first version of AstoriaMaster used the OpenSSH client [18] and the scp tool provided with Linux for transferring data to and from the instances, as well as for controlling the behavior of the instances. When converting from Python to Java we decided to try to abolish any platform requirements. In order to implement the same features as were provided by the previously used tools, we used JSch [11], a Java library implementing the SSH protocol. Since using JSch is rather complicated, we decided to implement a class that handles the functionality we needed in a simpler way. The class, RemoteMachineController, provides the following methods: executeRemoteCommand, transferFile and writeToRemoteFile.

In order to connect to EC2 instances using the SSH protocol, one needs to use a public/private key-pair for authentication. This is a requirement of AWS. Such key-pairs can be generated in a number of ways, for example with the AWS web interface or by using the Java AWS SDK. When using any of these methods it is necessary to specify a name for the key-pair. This name needs to be provided as a parameter when launching EC2 instances in order for the user to be able to connect to them. Because we wanted to make the prototype as easy to use as possible, we eliminated the need for users to generate their own key-pairs. Instead, a key-pair is generated using EC2InstanceHandler at startup. The private key is subsequently used by RemoteMachineController for authentication.

5.3.5 Background processes

In order to continuously keep track of the VMs and their status, different monitors were implemented. These are executed in separate threads after the test has been started and have the following responsibilities:

- EC2InstanceMonitor: monitors the status of the VMs
- InstanceReportFetcher: collects reports from the ALGs
- InstanceHealthMonitor: checks the status of the system resources on each VM, i.e. the health of the instances

The following sections briefly describe the purpose and behavior of each monitor.

EC2InstanceMonitor

In order to keep track of the state of each running EC2 instance we need to regularly check their status using the AWS API. EC2 instances can be in any of the following four states, which make up the instance life cycle: pending, running, shutting-down and terminated. By calling the getReservations method provided by the AWS API we get a list of all active instances and their states. The contents of the list are then used to update the InstanceObjects, which are abstractions of the ALGs. The EC2InstanceMonitor continuously updates the instances' states at regular intervals. In our experience, EC2 instances are not particularly prone to failure. Therefore, no time was spent implementing methods for recovering lost instances. Instead, we simply remove any failed instances from the global list of instances and rely on the scaling methods to deal with the decrease in virtual users.
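As a sketch of the kind of status poll EC2InstanceMonitor performs, the following uses the describeInstances call of the AWS SDK for Java (version 1); the surrounding class is our own illustrative scaffolding, not the prototype's actual code.

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.model.DescribeInstancesResult;
    import com.amazonaws.services.ec2.model.Instance;
    import com.amazonaws.services.ec2.model.Reservation;

    // Hypothetical sketch of a single status poll over all active instances.
    final class InstanceStatePollSketch {
        private final AmazonEC2 ec2Client; // assumed to be an initialized client

        InstanceStatePollSketch(AmazonEC2 ec2Client) {
            this.ec2Client = ec2Client;
        }

        void pollOnce() {
            DescribeInstancesResult result = ec2Client.describeInstances();
            for (Reservation reservation : result.getReservations()) {
                for (Instance instance : reservation.getInstances()) {
                    // One of: pending, running, shutting-down, terminated
                    String state = instance.getState().getName();
                    System.out.println(instance.getInstanceId() + " -> " + state);
                }
            }
        }
    }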

InstanceReportFetcher

The purpose of InstanceReportFetcher is to remotely start the aggregation of VUS log data into reports on each VM, and subsequently fetch the reports using the methods provided by RemoteMachineController. Reports are compiled by LogCompiler on the VMs and are saved as JSON [26] formatted text files. After the contents of the report files have been transferred to AstoriaMaster, the JSON is parsed into instances of InstanceGroupReport. The interval at which these actions are performed can be specified by the user.

InstanceHealthMonitor

As a consequence of incorporating ProcessScaleBalancer, we needed to implement a method similar to that of InstanceReportFetcher which, instead of downloading reports from the instances, gathers health reports at regular intervals. Therefore, a third monitor was created. By using the data available in the health reports we are able to scale the number of VMs according to how many virtual users the VMs are able to simulate simultaneously. The VM scaling algorithm described in Chapter 3 was implemented.

5.4 AstoriaSlave

AstoriaSlave is a driver application for headless web browsers, in other words an implementation of a VUS. In this section we describe a few notable points about the implementation of AstoriaSlave, as well as how it can be used.

5.4.1 Basic functionality

In order to simulate a user, AstoriaSlave creates an instance of the class SimulatedUser and calls the run method, which initializes an HTMLUnit client (see Section 5.4.4) and starts performing a number of actions. The actions are parsed from a provided Selenium test script and translated into a list of instances of the class Action. Actions are executed in HTMLUnit using Selenium RC (see Section 5.4.5), which immensely simplifies the handling of the HTMLUnit client.

Figure 5.3: Overview of AstoriaSlave

One instance of AstoriaSlave has the ability to simulate multiple users simultaneously. This is possible since SimulatedUser extends Thread. Thus, by starting concurrent threads, each with its own HTMLUnit client, we can run multiple concurrent virtual users. Each simulated user writes output data to an individual log file in order to avoid race conditions. (A sketch of this threading model is shown after Listing 5.3 below.)

5.4.2 Usage

AstoriaSlave is meant to be run as a console application and does not offer a GUI. Instead, when using AstoriaSlave, the user is required to set a few parameters using command line arguments. The following parameters are compulsory: testscript, baseurl, users and logdir. The testscript parameter decides which test script will be used. The second parameter tells AstoriaSlave which domain is to be tested. The final two parameters specify the number of users to be simulated and where the output data of the tests are saved.

AstoriaSlave was compiled and packaged as a Java archive (JAR file) which can be executed using a Java Virtual Machine [13]. An example of how AstoriaSlave can be used is shown in Listing 5.3.

Listing 5.3: Example of using AstoriaSlave

    java -jar AstoriaSlave.jar --testscript=reddit_test --baseurl=http://www.reddit.com --users=10 --logdir=/tmp/reddit_test/
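The following hypothetical Java sketch illustrates the threading model described above; the real SimulatedUser class differs in its details.

    // Hypothetical sketch: each simulated user is a thread with its own
    // headless browser client and its own log file.
    final class SimulatedUserSketch extends Thread {
        private final String logFile;

        SimulatedUserSketch(String logFile) {
            this.logFile = logFile;
        }

        @Override
        public void run() {
            // 1. initialize a private HTMLUnit client (via Selenium RC)
            // 2. execute the actions parsed from the Selenium test script
            // 3. append each action's response time to logFile (one file per
            //    user, so no synchronization between threads is needed)
        }

        static void startUsers(int users) {
            for (int i = 0; i < users; i++) {
                new SimulatedUserSketch("/tmp/reddit_test/user-" + i + ".log").start();
            }
        }
    }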

5.4.3 Advanced usage

In order to introduce a bit of flexibility into the application, a few additional optional parameters can be set at startup. A list of the optional parameters can be seen below.

- debug: enables more thorough output data for debugging purposes.
- sleep: waiting time between actions (in practice, this parameter should always be used).
- rampup: ramp-up time for the starting of simulated users.
- runtime: the time (in seconds) for how long the tests will be performed repeatedly.
- repeat: specifies the number of times the test will be repeated for each simulated user. This overrides runtime.
- usertimeout: the time (in seconds) to wait for a stalled simulated user to finish executing an action.

Additionally, functionality for inserting random integers into a test script at runtime was implemented. Random integers can be placed in the test scripts by inserting a specific text pattern where they are wanted.

5.4.4 HTMLUnit

The headless web browser chosen for the prototype is HTMLUnit, which is released as open source under the Apache 2 license. At the time of writing, HTMLUnit is under slow but steady development in the form of an open source project. It is written in Java, which was a requirement for our prototype. HTMLUnit relies on Rhino [5] by Mozilla for JavaScript execution.

5.4.5 Selenium RC

In the first versions of AstoriaSlave, custom methods were implemented for running Selenium actions directly in HTMLUnit. This was, however, not only time consuming but also resulted in unstable and unpredictable behavior. With the release of the beta version of Selenium 2 came the possibility of using HTMLUnit as the target web browser in Selenium RC, which allowed us to execute actions in HTMLUnit in a much simpler way. This motivated the complete rewrite of AstoriaSlave that was required for incorporating Selenium 2.

5.5 Custom Amazon EC2 machine images

Apart from the machine images made available by Amazon, users of the EC2 compute cloud have the possibility of modifying existing machine images and storing them in the cloud. This is a valuable feature which allows users to create custom machine images tailored to their own specific needs. In our case, we could for example create an image for the ALGs where AstoriaSlave and the other tools needed by the prototype are preloaded. While this would certainly be a good option for a final product, we decided not to make use of it in our prototype. The reason is that we would be forced to update the machine image every time a new version of any of the components on the image was released. Since updating machine images is rather time consuming, we instead used a technique where all required tools are uploaded automatically after the start of each VM. However, we did not find an existing machine image with a Java Virtual Machine (JVM) installed, which prompted us to create a custom machine image with a JVM installed beforehand. Installation of a JVM could be done remotely using RemoteMachineController (such functionality was implemented and used initially), but we decided against it due to the resulting increase in VM startup time.

CHAPTER SIX
USING THE PROTOTYPE

In this chapter we describe how the prototype implementation can be used in practice to conduct a performance test. This includes some preparation before the test can be started, as well as executing the test using the Java classes described in the previous chapter.

6.1 Preparation

The minor preparations that are required before a test is executed include creating a workspace directory and producing test scripts representing the desired behavior of the users that will be simulated.

6.1.1 Workspace directory

Before using the prototype the user has to set up a workspace directory containing the tools and configuration files that will be transferred to and used on the VMs. Additionally, this is the target directory for the test reports. The workspace directory should contain the following files:

- AstoriaSlave.jar
- AstoriaLogCompiler.jar
- ProcessLoadBalancer.jar
- scale.xml
- groups.xml (optional)

The XML formatted [22] file scale.xml contains various parameters used by ProcessLoadBalancer. In order to separate the actions into action classes, as described in Section 2.1.3, users can choose to include the optional groups.xml. Each action class is identified by a group name. Action characterization is accomplished by adding regex tags containing a regular expression matching the actions (or targets) that will be included in the class. Listing 6.1 shows an example of an action classification file.

Listing 6.1: Example of XML formatted file for specifying action groups

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <groups>
      <group name="initialization">
        <regex>open</regex>
      </group>
      <group name="order">
        <regex>mouseclick</regex>
        <regex>order</regex>
      </group>
    </groups>
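To illustrate how such a groups.xml file drives action classification, here is a hypothetical Java sketch; the real LogCompiler implementation may differ.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    // Hypothetical sketch of the classification Listing 6.1 describes: an
    // action is assigned to the first group whose regular expression matches
    // the action (or its target).
    final class ActionClassifierSketch {
        private final Map<String, Pattern> groups = new LinkedHashMap<>();

        ActionClassifierSketch() {
            // Patterns taken from the Listing 6.1 example.
            groups.put("initialization", Pattern.compile("open"));
            groups.put("order", Pattern.compile("mouseclick|order"));
        }

        String classify(String action) {
            for (Map.Entry<String, Pattern> g : groups.entrySet()) {
                if (g.getValue().matcher(action).find()) return g.getKey();
            }
            return "unclassified";
        }
    }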

6.1.2 Test scripts

Test scripts can be created using Selenium IDE. Selenium IDE is an extension for Mozilla Firefox which enables the recording of high-level user interactions with the browser. Test scripts produced by Selenium IDE are stored as text formatted in Hypertext Markup Language (HTML), containing the performed actions as well as some additional information about the test. Actions are described using three variables: action type, target and value. The value variable is needed for actions requiring additional information, for instance the coordinates of the mouse pointer when a click action is performed, or the string that is typed in a text insertion action.

6.2 Execution

In order to run Astoria, the user has to import AstoriaMaster.jar into a Java project and use the EC2BackedPerformanceTestFactory class to create an instance of EC2BackedPerformanceTest, as described in Section 5.3.1. By calling the run method of EC2BackedPerformanceTest, Astoria starts performing the testing in the background. The results from the test are returned as a JSON formatted string, either while the test is running, by calling getIntermediateResults, or when the test has ended, by calling getFinalResults.
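Putting the pieces together, the execution flow described above might look as follows; the factory arguments are omitted (see Listing 5.1), and the exact method signatures are assumptions on our part.

    // Hypothetical sketch of the execution flow; see Listing 5.1 for the
    // factory arguments.
    EC2BackedPerformanceTest test =
            EC2BackedPerformanceTestFactory.createScalabilityTest(/* see Listing 5.1 */);
    test.run();                                          // testing runs in the background
    String intermediate = test.getIntermediateResults(); // JSON, while the test is running
    String finalResults = test.getFinalResults();        // JSON, after the test has ended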

CHAPTER SEVEN
EVALUATION OF THE PROTOTYPE IMPLEMENTATION

In this chapter we describe how a performance test was carried out using the prototype. Additionally, we provide a brief interpretation of the test results.

7.1 Experiment

The application under test in our experiment is an experimental RIA developed by Vaadin Ltd. The RIA is a mock-up of an on-line vendor of movie theater tickets, called QuickTickets, implemented using the Vaadin web development framework [27]. QuickTickets was set up by the Vaadin staff on two VMs of unknown size in the Amazon EC2 cloud. No CSRF prevention methods were disabled before conducting the tests.

One usage scenario was used to generate load. The test script describing the scenario was recorded while a user used QuickTickets to search for tickets, select tickets and make an order. This resulted in a test script with more than 100 actions. At various places in the test script random variables were inserted. This allowed us to use the same test script to simulate users buying tickets for different movie theaters and different shows.

The test was carried out with an early version of our prototype implementation, in which the number of ALGs and the number of users simulated per ALG were static. The RIA was tested with 1000 concurrent users. In order to keep the number of VUSs low on each ALG, we decided to utilize the maximum number of VMs available with a basic Amazon AWS account, which is 20. A ramp-up time was used in order to introduce the generated load to the RIA slowly. The reason for keeping the number of VUSs low was previously experienced memory leaks in HTMLUnit, which we feared could impede the VUSs' ability to properly simulate users. The testing was performed over the course of a few hours, and reports were fetched from the ALGs at an interval of five minutes. Throughput was not used as a metric for this test and therefore no such data was collected.

7.2 Results

The prototype showed promising results for the adequacy of the ASTORIA framework for generating load for non-functional testing. ALGs were launched and controlled by ASTORIA Master automatically, without any user interaction after the test was started. The output data from the VUSs were properly aggregated, first separately on each ALG and then collectively on ASTORIA Master. Results from the test can be seen in Fig. 7.1. Since the system under test was able to comfortably handle the load generated by the ALGs, the graph only shows the first few time steps. No surprises were encountered in the later stages of the test.

7.3 Interpretation and analysis of the results

Generally, the response times in the performed test were slightly higher than expected. Some extremely high samples could also be seen. We suspected that these were a result of multiple instances of HTMLUnit running simultaneously on the same VM. This was later confirmed when we compared the average response times for an action performed by an increasing number of concurrent HTMLUnit instances.

Figure 7.1: Box plot of response times (in ms) for action #48 over time steps 1-5 (five-minute steps), with outliers removed

A graph showing the results of this test can be seen in Fig. 7.2. There is a clear increase in response time when multiple instances of HTMLUnit are running at the same time. At the time of writing, the reasons for the added overhead are unclear. There has been speculation in various forums related to HTMLUnit about certain performance issues stemming from ineffective handling of the JavaScript execution methods in Rhino. While this is unfortunate for the prototype, it does not invalidate the ASTORIA framework as such. We do, however, need to evaluate whether HTMLUnit is mature enough for these purposes and possibly look at other alternatives. Such additional research is not within the scope of this thesis.

Figure 7.2: Results from HTMLUnit profiling (response time in ms against the number of concurrent threads)

CHAPTER EIGHT
CONCLUSIONS

This chapter concludes the thesis by evaluating the presented framework, as well as briefly presenting some commercial solutions with similar features. Additionally, we discuss a couple of aspects of the project which could prove to be interesting for further research.

8.1 Evaluation of the ASTORIA framework

In this thesis, we described different problems and challenges that are associated with generating load for cost-effective performance and scalability testing of RIAs. We then presented the ASTORIA framework as a novel solution for overcoming these challenges and problems. The effectiveness of our framework was demonstrated by presenting experimental results from an early prototype implementation. We showed that headless web browsers can be automated across several VMs in a compute cloud in order to simulate users, thus generating load. By simulating users in headless web browsers we were able to conduct a test without disabling any CSRF prevention methods in the application servers. Additionally, we were able to log response times in the headless web browsers, thus eliminating the need for server-side monitoring.

to log response times in the headless web browsers, thus eliminating the need for server-side monitoring. By using test scripts recorded with Selenium IDE, we were able to generate load with the characteristics of real user traffic. A diversified user pool was simulated by introducing random variables into the recorded usage scenarios.

In this thesis we demonstrated how the proposed ASTORIA framework can be used for performance testing of RIAs. In addition to performance testing, the framework can also be used to perform other types of non-functional testing of RIAs, such as scalability testing, endurance testing, stress testing, and spike testing [19].

8.2 Related work

While there are several commercial solutions available for performance testing of web applications [21, 15], most are of questionable capability when it comes to testing RIAs properly. Two notable solutions that, to the best of our knowledge, do work with RIAs are BrowserMob [2] and Xceptance Load Test (XLT) [30]. Both are able not only to simulate low-level HTTP traffic but also to drive real browsers, either standard web browsers or their headless counterparts. BrowserMob, like our prototype, makes use of Selenium IDE for recording test scripts. Load generation is achieved by replaying the recorded usage scenarios in real browsers on VMs in the cloud. XLT, on the other hand, does not use Selenium IDE for recording test scripts, but a very similar in-house application, also implemented as an extension for Mozilla Firefox. Like our prototype, XLT uses HTMLUnit for executing the recorded test scripts and simulating users. XLT has a static testing back-end which is extended into the cloud when necessary. BrowserMob and Xceptance have similar pricing models, where the user has access to the respective service's testing back-end for a monthly fee. Both services offer a few different packages with varying limits on the maximum number of simulated users. BrowserMob, for example, has three different packages with maximums of 100, 500 and 5000 users. Additionally, both solutions offer free, although very limited, packages for minimal tests.
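Returning to the scenario randomization mentioned in Section 8.1, the following sketch shows one way random variables could be substituted into a recorded usage scenario so that each simulated user behaves slightly differently. The placeholder syntax and the step representation are hypothetical illustrations, not the prototype's actual script format.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Random;

    /** Sketch: diversifying a recorded usage scenario with random variables. */
    public class ScenarioRandomizer {

        private static final Random RNG = new Random();

        /** Replaces the hypothetical placeholder "${theater}" in a recorded
         *  step value with a random pick, spreading users over the theaters. */
        static String randomize(String recordedValue, List<String> choices) {
            if ("${theater}".equals(recordedValue)) {
                return choices.get(RNG.nextInt(choices.size()));
            }
            return recordedValue;  // non-parameterized steps replay verbatim
        }

        public static void main(String[] args) {
            List<String> theaters = Arrays.asList("Downtown", "Harbor", "Mall");
            // A recorded step: select a theater from a drop-down list.
            String value = randomize("${theater}", theaters);
            System.out.println("select id=theaterList value=" + value);
        }
    }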

8.3 Further research

Here follows a brief description of a couple of areas that may prove interesting for further research related to the work done in this thesis.

8.3.1 Headless Web Browser

The lack of a proper headless web browser became very obvious while developing the prototype solution. HTMLUnit is, at the time of writing, not mature enough for larger tests due to the issues that arise from running many concurrent instances on the same machine. It is, however, a well-documented project under active development and should therefore be taken into account whenever there is a need for a headless web browser. Another promising project is PhantomJS [9], a web browser implementation built on the WebKit [29] browser engine. PhantomJS is a fairly new project and is not purely headless yet, meaning that it requires an X Window System in order to run even though it does not actually render any graphics. PhantomJS is controlled by writing JavaScript code which is called every time a page is loaded. Scripted control would therefore require either an interpreter of, for example, Selenium test scripts, or a compiler that converts test scripts of some form into JavaScript code. The best possibility for the moment would probably be to use the approach proposed earlier in this thesis of running a standard web browser, for example Mozilla Firefox, in a headless X Window System. This would, however, require us to implement a method for capturing the action execution times.

8.3.2 Preemptive test profiling

The time required to initialize a test using the methods described earlier could be reduced by profiling the test scripts before executing the test. This preemptive profiling could give an indication of the resources required by the test script. The acquired data could consequently be used to make a

decent estimation of the required number of VMs.
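A minimal sketch of how such an estimate might be computed, assuming the profiling pass yields an approximate memory footprint per simulated user; the method, its parameters and all figures are illustrative, not measurements from the prototype:

    /** Sketch: estimating the number of VMs needed for a target load
     *  from preemptive profiling data. */
    public class VmEstimator {

        /** Users per VM from the profiled footprint, then ceiling division
         *  to cover the target number of simulated users. */
        static int requiredVms(int targetUsers, long memPerUserBytes,
                               long usableMemPerVmBytes) {
            long usersPerVm = Math.max(1, usableMemPerVmBytes / memPerUserBytes);
            return (int) ((targetUsers + usersPerVm - 1) / usersPerVm);
        }

        public static void main(String[] args) {
            // Illustrative: ~25 MB per HTMLUnit user, ~1400 MB usable per VM.
            int vms = requiredVms(1000, 25L << 20, 1400L << 20);
            System.out.println("Estimated VMs: " + vms);  // prints 18
        }
    }

Real profiling data would of course also have to account for CPU load, which, as the HTMLUnit experiments in Chapter 7 suggest, may dominate memory as the limiting resource.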

SWEDISH SUMMARY

Automated performance and scalability testing of Rich Internet Applications

Introduction

Rich Internet Applications is a term for advanced web applications with properties reminiscent of traditional desktop programs. Typical of Rich Internet Applications is a comparatively interactive user interface, created using technologies that offer asynchronous communication and dynamic content rendering, such as Ajax (Asynchronous JavaScript and XML), Adobe Flash or Microsoft Silverlight. The popularity of Rich Internet Applications has not only raised the demands users place on web applications but also introduced new challenges for developers who want to performance or scalability test their products. Liu [14] defines performance as a measure of how quickly an application can perform certain tasks, while scalability denotes performance trends over a given time interval. A web application is said to have high performance if it can respond quickly to requests from a user's web browser. The requirement on a web application with good scalability is thus that it manages to maintain high performance over a longer period while the web application is subjected to varying load.

Rich Internet Applications built in compute clouds are well positioned to offer good scalability, since computing power is rented from the cloud to the extent it is needed. Because the cloud can in theory offer an infinite amount of computing power, efficient and preferably highly automated methods for performance and scalability testing are needed. Conventional performance testing is usually carried out by using special tools to create synthetic load on a web application with the help of pre-recorded HTTP requests [16, 24, 25]. The recording is typically made while a user performs a usage scenario typical of the web application in question. Such methods cannot be applied directly to Rich Internet Applications because of the advanced technologies they use. Instead, we must create synthetic load from usage scenarios recorded at a higher level, that is, usage scenarios built up from a user's interactions with a web browser, so-called actions. Actions are, for example, mouse clicks or the insertion of text into a text field. We must thus generate load by simulating real users with real web browsers. Automating a web browser to simulate a user of a web application is not in itself particularly challenging. What is a challenge, on the other hand, is automating a large number of web browsers, and doing so as efficiently as possible. In the same way that the compute cloud can be used for scalable web applications, it could also be used as a basis for automating large numbers of web browsers and thereby generating load for performance and scalability testing. The question, however, is how this can be done as efficiently and inexpensively as possible. This thesis describes the research done at the Department of Information Technologies at Åbo Akademi University on automated performance and scalability testing of Rich Internet Applications. We present the ASTORIA framework as a solution to some of the problems and describe a prototype implementation of the framework.

Problems and challenges

Cost

When using compute clouds, one is usually billed according to the extent to which the cloud's various resources are used, for example on the basis of the number of rented virtual machines, the bandwidth used or the storage space used. To minimize cost, the number of rented virtual machines, which in most cases accounts for the majority of the costs, should be kept to a minimum. The number of rented virtual machines required for performance or scalability testing of Rich Internet Applications depends on how many users one wants to simulate and how many users can be simulated per virtual machine.

What is measured

In performance testing, one usually measures the response time, that is, how quickly a system can handle a certain task [14]. For conventional performance testing of web applications, this means measuring the time it takes for a web application to produce a response to an HTTP request. In performance testing of Rich Internet Applications, on the other hand, we cannot measure the time of the individual HTTP request. The response time of a Rich Internet Application is instead the time it takes for the application to handle a single action performed in a user's web browser.

Action classes

It is important to note that the actions performed in a web browser when a web application is used can have very different response times. As an example, we can compare an action that selects an item in an online shop with an action that results in a purchase and the handling of credit card details. The response times of these two actions are very likely quite different. They should therefore be divided into different action classes, to prevent the average over a large number of actions from becoming misleading.

The number of virtual machines

How many users can be simulated per virtual machine is difficult to predict, because of large variations in the automated web browsers' resource usage for different web applications. To be able to utilize the virtual machines fully, a control system is therefore needed on each individual virtual machine that, by monitoring the system's resources, determines the number of simulated users on that specific machine. A further control system should additionally monitor the total number of simulated users and increase or decrease the number of virtual machines so that the desired number of simulated users can be reached, preferably as quickly as possible.

Generating synthetic load

To create synthetic load on a rich web application, one must simulate real users by automating web browsers. Using ordinary web browsers, such as Mozilla Firefox, Google Chrome or Opera, for such purposes is not an efficient alternative, since such programs are often rather resource-intensive. Instead, one can use so-called headless web browsers, where the word headless refers to the browser's lack of a graphical user interface. Because headless web browsers do not render any graphics, their resource usage can, at least in theory, be kept considerably lower than that of normal web browsers. To control web browsers that lack a graphical user interface, one is usually forced to implement some form of control program.

The ASTORIA framework

ASTORIA (Automated Scalability Testing of Rich Internet Applications) is a framework that describes a system for performance and scalability testing of Rich Internet Applications. In the framework, virtual machines in a compute cloud are used to generate synthetic load against the Rich Internet Application one wants to test. So that the application's performance can be evaluated, data is collected from

each simulated user, which is later merged into a smaller piece of data that is easy to handle.

Components

The framework consists of the following components: the ASTORIA Master (hereafter referred to as the master), the ASTORIA Load Generator (ALG), the ASTORIA Local Controller (ALC) and the Virtual User Simulator (VUS). An ALG has, as the name suggests, the task of generating load. To accomplish this, each ALG runs a number of VUSs. The number of simulated users is controlled by an ALC present on each ALG. The master's task is to start, control and finally shut down a number of ALGs. An overview of the components and their relationships is shown in Fig. 8.1.

[Figure 8.1: Overview of the ASTORIA framework]

Collection and aggregation of test data

The measurement data collected is the response time of every action performed by every simulated user. These data are gathered from all ALGs by the master. Since the number of simulated users can be very large, all measurement data produced on each individual ALG is merged into a smaller piece of data before the transfer. This merging reduces the bandwidth required by the framework.

Prototype

To make an evaluation of ASTORIA possible, a prototype implementation of the framework was created. The prototype is made up of different components, each of which corresponds to one or more of the components in the framework. The components created for the prototype are AstoriaMaster, AstoriaSlave, ProcessScaleBalancer and LogCompiler. The Java programming language was used to realize all components. The cloud provider used in the prototype is Amazon's Elastic Compute Cloud (EC2). AstoriaMaster is an implementation of the master in the ASTORIA framework which, in accordance with the specification, acts as conductor for a number of virtual machines in EC2. Each virtual machine controlled by AstoriaMaster is thus in practice an ALG. An SSH client (JSch [11]) gives AstoriaMaster the ability to remotely execute commands on the virtual machines and in that way control their behavior. AstoriaSlave acts as the virtual user simulator and uses the headless web browser HTMLUnit [7] to simulate pre-recorded usage scenarios. A usage scenario is a set of actions that can be generated automatically with the help of a Mozilla Firefox extension called Selenium IDE [23]. Selenium IDE records all actions performed in the browser while a typical usage scenario is carried out in the Rich Internet Application under test. ProcessScaleBalancer is used to balance the number of AstoriaSlave instances running simultaneously on each ALG. The purpose is to maximize the number of simulated users without exceeding the virtual machine's resources. The data generated by each simulated user is processed locally on each individual ALG by LogCompiler before being transferred to AstoriaMaster. Response times are merged into averages and standard deviations, separately for each action class. Once in AstoriaMaster, all data from every ALG is merged into one set of data describing the whole test.
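The aggregation just described can be illustrated with a small sketch of how per-ALG summaries (sample count, average, standard deviation) might be merged into one overall summary without shipping raw samples. This is one standard way of combining such statistics, using population variance; the exact formulas used in the prototype may differ in detail, and the figures in main are purely illustrative.

    /** Sketch: merging per-ALG response-time summaries into one. */
    public class StatsMerger {

        /** Immutable summary of one log: n samples, average, standard deviation. */
        static final class Summary {
            final long n; final double mean; final double sd;
            Summary(long n, double mean, double sd) {
                this.n = n; this.mean = mean; this.sd = sd;
            }
        }

        /** Combines two summaries via sums and sums of squares. */
        static Summary merge(Summary a, Summary b) {
            long n = a.n + b.n;
            double mean = (a.n * a.mean + b.n * b.mean) / n;
            // Recover each group's sum of squared samples from its variance:
            // sumSq = n * (variance + mean^2), using population variance.
            double sumSqA = a.n * (a.sd * a.sd + a.mean * a.mean);
            double sumSqB = b.n * (b.sd * b.sd + b.mean * b.mean);
            double variance = (sumSqA + sumSqB) / n - mean * mean;
            return new Summary(n, mean, Math.sqrt(variance));
        }

        public static void main(String[] args) {
            Summary alg1 = new Summary(500, 120.0, 15.0);  // illustrative figures
            Summary alg2 = new Summary(500, 140.0, 20.0);
            Summary total = merge(alg1, alg2);
            System.out.printf("n=%d mean=%.1f sd=%.1f%n",
                    total.n, total.mean, total.sd);
        }
    }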

Evaluation of the prototype

To test the prototype implementation of the framework, a performance test of a Rich Internet Application produced by Vaadin Ltd. [27] was carried out. The application under test was a model of a web-based ticket sales service for a number of fictional movie theaters. To perform the test, 20 virtual machines were used, with which a total of 1000 users were simulated. A usage scenario in which the user first chose a movie theater, then a show and finally ordered the tickets was recorded with Selenium IDE. The test was carried out over a few hours, while test data from the virtual machines was compressed and compiled once every half hour, as described earlier. The application under test managed to handle the load without problems, which meant that the test offered no surprises once the desired load had been reached. After the test had finished, the accumulated data was analyzed, whereupon somewhat higher response times than expected were noted. The reason for the high response times was later determined to be a shortcoming in the headless web browser used. This suggests that HTMLUnit is not mature enough for large-scale use of this kind.

Summary

This thesis presented problems and challenges that one is likely to encounter in performance or scalability testing of Rich Internet Applications. As a solution to these problems, the ASTORIA framework was introduced, together with a prototype implementation of the framework. We demonstrated how large numbers of users can be simulated by automating web browsers in compute clouds. To minimize the costs that arise when using compute clouds, we used headless web browsers which, given their reduced resource usage compared to ordinary web browsers, reduce the number of virtual machines needed. Evaluation of the performance of tested web applications can be done with the help of the test data produced by the framework.

BIBLIOGRAPHY

[1] Amazon Web Services LLC. Amazon Web Services SDK for Java.
[2] BrowserMob, LLC. BrowserMob.
[3] M.H. DeGroot. Optimal Statistical Decisions. Wiley Classics Library. John Wiley & Sons.
[4] J. Farrell and G.S. Nezlek. Rich Internet Applications: The Next Stage of Application Development. In Information Technology Interfaces, ITI International Conference on, June.
[5] D. Flanagan. JavaScript: The Definitive Guide: Activate Your Web Pages. Definitive Guide Series. O'Reilly Media.
[6] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional.
[7] Gargoyle Software Inc. HTMLUnit.
[8] J.L. Hellerstein, Y. Diao, S. Parekh, and D.M. Tilbury. Feedback Control of Computing Systems. John Wiley & Sons.
[9] A. Hidayat. PhantomJS.
[10] iOpus Inc. iMacros.

[11] JCraft, Inc. Java Secure Channel.
[12] R. Langley. Practical Statistics Simply Explained. Dover Books Explaining Science Series. Dover Publications.
[13] T. Lindholm and F. Yellin. Java Virtual Machine Specification. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2nd edition.
[14] H.H. Liu. Software Performance and Scalability: A Quantitative Approach. John Wiley & Sons, May.
[15] Load Impact. Load Impact.
[16] D.A. Menascé. Load Testing of Web Sites. IEEE Internet Computing, 6:70–74, July.
[17] J. Murty. Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB. O'Reilly Media.
[18] OpenBSD. OpenSSH.
[19] J. Palomäki. Web Application Performance Testing. Master's thesis, University of Turku, Turku, Finland.
[20] Cloud Software Program. Cloud Software Program.
[21] RadView Software Ltd. WebLoad.
[22] E.T. Ray. Learning XML. Learning Series. O'Reilly.
[23] Selenium.
[24] J. Shaw. Web Application Performance Testing: a Case Study of an On-line Learning Application. BT Technology Journal, 18:79–86.
[25] J. Shaw, C. Baisden, and W. Pryke. Performance Testing: A Case Study of a Combined Web/Telephony System. BT Technology Journal, 20:76–86.
[26] S. Stefanov. JavaScript Patterns. O'Reilly Series. O'Reilly Media.

[27] Vaadin Ltd. Vaadin.
[28] Watir.
[29] WebKit.
[30] Xceptance Software Technologies, Inc. Xceptance Load Test.
[31] Xvfb.
