JFlooder - Application performance testing with QoS assurance

Tomasz Duszka 1, Andrzej Gorecki 1, Jakub Janczak 1, Adam Nowaczyk 1 and Dominik Radziszowski 1

Institute of Computer Science, AGH UST, al. Mickiewicza 30, 30-059 Krakow, Poland

Abstract

Contemporary distributed systems, such as Grids, must guarantee high efficiency, scalability and failover. Attaining these features requires complex tuning of numerous, highly coupled configuration parameters at the system, middleware and application levels. The popular tools available for performance testing lack automation in results comparison, dynamic changes of load characteristics, and support for QoS requirements. These imperfections led the authors to create a new tool - JFlooder. This paper covers the basics of JFlooder's features and its possible usage scenarios.

Contemporary distributed systems, such as Grids, must guarantee high efficiency, scalability and failover. Attaining these features requires complex tuning of numerous, highly coupled configuration parameters at the system, middleware and application levels. Because even a small, seemingly insignificant change of one parameter may affect system efficiency and even stability, the tuning of a distributed system must be validated by a series of load tests [1]. Planning, implementing, executing, presenting and comparing the results of these tests are not easy tasks; this is the main reason why, in many installations, such tests are skipped or reduced to simple stress testing. Well-performed load tests are built upon QoS (Quality of Service) requirements and can help in detecting system behavior under different loads - only the results of such tests can be meaningfully compared, and only they can answer the question of whether system tuning was correct and, more importantly, how the system will perform in a real production environment [4].
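A QoS requirement of the kind mentioned above - e.g. "95% of responses must complete within 500 ms" - reduces to a simple check over measured response times. The following sketch is purely illustrative; the class and method names are ours and are not part of JFlooder or any other tool:

```java
import java.util.Arrays;

// Illustrative only: checks whether a set of measured response times meets a
// percentile-based QoS requirement, e.g. "the 95th percentile is below 500 ms".
// The nearest-rank method is used to pick the percentile value.
public class QosCheck {

    static boolean meetsQos(long[] responseTimesMs, double percentile, long limitMs) {
        long[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);
        // Nearest-rank index of the requested percentile.
        int idx = (int) Math.ceil(percentile / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)] < limitMs;
    }

    public static void main(String[] args) {
        long[] sample = {120, 180, 240, 310, 450, 470, 480, 490, 510, 900};
        // 80% of the sampled responses finished below 500 ms...
        System.out.println(meetsQos(sample, 80.0, 500)); // true
        // ...but the 90th percentile does not meet the limit.
        System.out.println(meetsQos(sample, 90.0, 500)); // false
    }
}
```

Comparing two test runs then amounts to evaluating the same requirement against both metric sets, which is exactly the kind of comparison that is hard to automate with simple stress testing.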
The popular tools available for generating system load and acquiring results, such as Grinder [5] or JMeter [2], help a little, but suffer from a lack of automation in results comparison, dynamic changes of load characteristics, and support for QoS requirements [4, 3]. The imperfections of the available application performance testing tools led the authors to create a new tool - JFlooder - which is described in this paper. The paper is structured as follows: the first section presents the general architecture of JFlooder, the second addresses the concept of testing plugins, and the next two sections cover metrics and the optimal operating point. The paper concludes with future development plans.

1 Architecture

JFlooder is a program that performs load tests of SOA-conformant applications and middleware running on distributed infrastructures (clusters and grids). It
consists of agents running on worker nodes and a central graphical management console - Fig. 1.

Fig. 1: JFlooder general architecture

The console is responsible for coordinating the agents during test execution, and for gathering and presenting results. The agents are responsible for executing test scenarios and generating the desired system load. They can work in two basic data-gathering modes: in off-line mode, results are gathered after the test has finished, which makes them more accurate; in on-line mode, results are gathered and processed while the test is running, so the user can watch the test results during its execution. JFlooder supports the definition of load characteristics. Besides the built-in load characteristics - plain and peak load - the user can define custom load functions. This powerful mechanism allows the simulation of almost any system load characteristic.

2 Plugins

The plugin architecture is one of the most outstanding features of JFlooder; it allows very flexible definitions of both the access technology of the system under test and the test scenarios. The built-in plugins allow easy definition of multi-technology tests, performing load tests of: RMI-based services - with the supplied RMI plugin, Web Services - with the supplied WS plugin,
SMTP and HTTP servers - with the supplied HTTP plugin, and custom services - with a plugin implemented on top of the supplied plugin API. Supported test scenarios can vary from simple to very complex. Simple scenarios are limited to executing a selected set of methods on the tested system. The set of parameters a plugin requires for its execution (e.g. the destination URL used to test an HTTP server) is defined through a unified configuration GUI.

Fig. 2: Sample plugin configuration screen

More complicated test scenarios can be coded in a specific plugin and configured via a GUI provided by that plugin. The current JFlooder distribution provides two plugins of that type: the JRuby plugin and the Jython plugin, which support programming test scenarios in JRuby and Jython respectively. Both plugins expose a GUI with syntax highlighting, compilation, and an execution output console.

3 Metrics

The introduction of the plugin concept brought variance into test results (metrics). Each plugin produces results conforming to a plugin-specific set of metrics. A metric set contains a list of metrics, i.e. the names and descriptions of the values that can be measured
by a plugin during test execution (e.g. the number of awaiting connections, the current free memory size, etc.). JFlooder can handle different metrics in a very flexible way. The only limitation is the plugin's ability to measure something; that something, depending on its type, can then be presented by the standard JFlooder visualization module - Fig. 3. The standard, built-in plugins handle the basic metrics: system response time, system load and operation correctness. Custom plugins may perform any activity and measure any custom metrics.

Fig. 3: JFlooder - test execution results screen

JFlooder lets one do more than just return and display values from a plugin: it allows the values to be processed and visualized in a convenient way. Test execution results can be filtered according to the filters defined during test definition. These filters can include typical aggregation functions such as standard deviation, average, minimum and maximum. After filtering, results can be presented as a single-value field (e.g. to learn the minimum free memory size after the test) or plotted in the time domain as charts. Moreover, after test completion it is possible to save all results in CSV format for further processing by external systems; charts can be saved as images.

4 Optimal operating point

A unique ability of the program is the detection of the optimal operating point of a system under test. The optimal operating point is the highest
system load under which the system can still guarantee the given QoS parameters. As an example, consider the following situation. Imagine you have created a perfect system and you are going to deliver it to your client. Because the client spent a lot of money on this system, he is very concerned about it. He asks one simple question: "How many transactions can my system serve with a response time of less than 0.5 s?". You may quickly respond: "A huge number." But really, how many? 100? 1000? All you need to do is perform a number of tests for various request counts and load characteristics and check the results; that sounds like a lot of work... What needs to be found is the optimal operating point, and with JFlooder only a simple configuration needs to be set up. JFlooder starts with a low system load and performs tests, collecting metrics (with special interest in the QoS parameters); next, it automatically increases the load by starting additional agents and tests again. This process is repeated until the QoS parameters are violated - at that moment the optimal operating point has been detected. This mechanism is very helpful for determining the characteristics of a system in a new or freshly tuned environment. It answers, without much effort, the question of what the maximum load is under which the QoS parameters are still kept.

Fig. 4: Optimal operating point

5 Future

JFlooder has also been successfully used for testing the adaptation mechanisms of a component system. The most recent work covers SOA-related topics, such as testing workflow-based applications. JFlooder is being continuously improved and will be made available as an open source project in the near future.
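The stepwise search described in Section 4 can be sketched as follows. This is a minimal illustration of the idea under assumed names, not JFlooder's actual API; `runTestAtLoad` stands in for a complete test run at a given load level:

```java
import java.util.function.LongUnaryOperator;

// Sketch of the optimal-operating-point search from Section 4: raise the load
// step by step and stop just before the QoS constraint is first violated.
// All names are illustrative; JFlooder's real API is not described in the paper.
public class OperatingPointSearch {

    /**
     * @param runTestAtLoad maps a load level (e.g. requests/s) to the measured
     *                      QoS metric, e.g. 95th-percentile response time in ms
     * @param qosLimitMs    maximum acceptable value of that metric
     * @param startLoad     initial (low) load level
     * @param step          load increment per iteration (one more agent's worth)
     * @param maxLoad       safety bound on the tested load
     * @return highest tested load that still satisfied the QoS limit,
     *         or -1 if even the initial load violated it
     */
    static long findOptimalPoint(LongUnaryOperator runTestAtLoad, long qosLimitMs,
                                 long startLoad, long step, long maxLoad) {
        long lastGood = -1;
        for (long load = startLoad; load <= maxLoad; load += step) {
            long measured = runTestAtLoad.applyAsLong(load);
            if (measured > qosLimitMs) {
                break; // QoS violated: the previous load level was the optimum
            }
            lastGood = load;
        }
        return lastGood;
    }

    public static void main(String[] args) {
        // Toy latency model standing in for a real test run: response time
        // grows quadratically with the offered load.
        LongUnaryOperator model = load -> load * load / 100;
        System.out.println(findOptimalPoint(model, 500, 50, 50, 1000)); // 200
    }
}
```

In JFlooder the "one more step" corresponds to starting additional agents, and the measured metric comes from the plugin's metric set rather than a model; the search logic itself is the same.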
References

1. Peter Zadrozny, Philip Aston, Ted Osborne: J2EE Performance Testing with BEA WebLogic Server, Expert 70 (2005).
2. The Apache Jakarta Project - JMeter: user's manual, 2006.
3. Krzysztof Zielinski, Marcin Jarzab, Damian Wieczorek, Kazimierz Balos: JIMS Extensions for Resource Monitoring and Management of Solaris 10, pp. 1039-1046, International Conference on Computational Science, 2006.
4. Dominik Radziszowski: Scalable component system for collecting and storing data originating from monitoring of distributed systems, PhD thesis, AGH University of Science and Technology, 2007.
5. Philip Aston: The Grinder: Load Testing for Everyone, Dev2Dev, 2002.