Framework for Measuring Performance Parameters of SLA in SOA

Alawi Abdullah Al-Sagaf
Faculty of Computer Science & Information Systems, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
alawi0002@yahoo.com

Dayang Norhayati Abang Jawawi
Faculty of Computer Science & Information Systems, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
dayang@utm.my

Abstract - The current trend in modeling and designing service-oriented systems follows a new paradigm called SOA. In SOA, the execution of a service instance is not fully under the control of the client or service requestor but under a third party, the provider. SLAs in the SOA context are still new, but they have recently become extremely important because of the growing demand for service-oriented systems. Traditionally, SLA elements are described in natural language; nowadays, however, SLAs can be made machine-readable. WSLA is an XML-based language used to create machine-readable SLAs for services implemented with web services technology, where service interfaces are defined using WSDL. WSLA alone is not adequate to characterize performance, so we have developed an extension of the WSLA model that captures performance issues in detail and helps measure the performance agreed upon in the SLA. Considering that the WSLA model is a Platform Independent Model (PIM) and the SEI 6-Element model is a Platform Specific Model (PSM), we use QVT to map between them. The evaluation and measurement of performance can then be carried out automatically.

Keywords - SLA; performance; framework; measurement

I. INTRODUCTION

In Internet computing, both internal and external customers presume that Internet-speed business services are always accessible and offer disruption-free performance. Any problem that degrades the performance of an Internet-speed business therefore leads to losses for the real-world business, so it is extremely important to detect, diagnose and correct performance problems before a service is deployed.

Current business requirements draw a complex map for IT infrastructure, which is generally very large and expanding quickly. Hence, a modern enterprise needs a mechanism to monitor the quality of the services provided by its IT infrastructure. Such a mechanism is commonly called a dashboard; it allows users to monitor, detect problems in and correct the infrastructure [1] in order to increase the value of their business processes. Technologies and mechanisms that monitor performance constraints are necessary to achieve business goals, and these monitoring mechanisms must themselves be automated to keep pace with the size of complex systems. Following the model-driven initiative, the OMG separates the conceptual Platform Independent Model (PIM) from the Platform Specific Model (PSM).

The current trend in modeling and designing service-oriented systems follows a new paradigm called Service-Oriented Architecture (SOA) [2][3]. In this approach the functionality of the system is assigned to loosely coupled services, so that integration between heterogeneous systems becomes possible and reuse increases the agility to adapt to changing business requirements. A Service Level Agreement (SLA) is an obligation between a service provider and a service consumer in which the services and their level of quality are specified. SLAs have been used in IT organizations and departments for many years.
The definition of SLAs in the SOA context is still new, but it has recently become extremely important because service-oriented systems are starting to cross organizational boundaries and many third-party service providers are beginning to emerge [4]. It is therefore necessary to measure and ensure the quality of service on both the service-provider and the service-consumer side.

SOA software systems generally have a different lifecycle from traditional software, which consists of analysis, design, implementation and testing. One main difference is that the execution of a service instance in SOA is not fully under the control of the client or service requestor but under a third party, the provider. This complicates the testing activity carried out to ensure software quality, unlike in traditional software. A second difference is that the SLA, as a new artifact in this paradigm, requires several engineering sub-tasks such as specification languages (e.g., WSLA [5]), measurement and evaluation.

From an engineering point of view, assessing the performance of a service-oriented system and checking its compliance with the parameters specified in the SLA is a very significant task in SOA, because performance is the critical factor affecting the overall Quality of Service (QoS) expected by the end-users of the system. By using a model-based approach, the complexity of the large engineering activities involved can be minimized. Software testing and test cases are common practice in this engineering task [6][7]. One of the common software-testing methods under the model-based initiative is Model-Based Testing (MBT).
MBT is a form of black-box testing that uses behavioral and structural models, such as UML diagrams, to generate test cases automatically. MBT is well suited to the SOA environment because test-case generators are able to cover almost all model-related features, such as the states in a UML state machine and boundary values for data coverage [8].

Workload assumptions are an important element in measuring and evaluating QoS under SOA [9], because the service levels guaranteed in an SLA cannot be open-ended. The difficulty of testing an online SOA system, caused by the differences mentioned above [10], has led to the use of stubs and workload generators: a stub is an agent that replaces a service which is not accessible, and a workload generator is a program that simulates multi-user usage scenarios, as sketched below.
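Purely as an illustration of these two ingredients, the following Python sketch combines a service stub (standing in for a service that is not yet accessible) with a simple workload generator in which concurrent client threads issue requests and record per-request latency. The function names (service_stub, run_workload) and the simulated 5-15 ms processing time are hypothetical and are not taken from this paper.

```python
import time
import random
import statistics
from concurrent.futures import ThreadPoolExecutor

def service_stub(request_id: int) -> str:
    """Stub: replaces a service that is not accessible during testing."""
    time.sleep(random.uniform(0.005, 0.015))  # simulated processing time
    return f"response-{request_id}"

def timed_call(request_id: int) -> float:
    """Invoke the stub once and return the observed latency in seconds."""
    start = time.perf_counter()
    service_stub(request_id)
    return time.perf_counter() - start

def run_workload(clients: int, requests_per_client: int) -> list[float]:
    """Workload generator: simulate concurrent clients and collect latencies."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(timed_call, i)
                   for i in range(clients * requests_per_client)]
        return [f.result() for f in futures]

if __name__ == "__main__":
    observed = run_workload(clients=20, requests_per_client=10)
    print(f"requests: {len(observed)}, "
          f"mean latency: {statistics.mean(observed) * 1000:.1f} ms")
```

In the proposed framework the load profile and the stub behaviour would be derived from the models supplied by the user rather than hard-coded, but the measurement principle is the same.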
II. STATEMENT OF THE PROBLEM

To address the research problem, the fundamental question throughout this research is: how can the measurement of performance parameters in an SLA for SOA be made possible? This study is concerned with assessing performance against the non-functional requirements gathered from end users, at provisioning time and at the business-process testing level. In the related work, numerous contributions focus on assessment at design time and on the SOA infrastructure [9][11], while only a few evaluate performance at provisioning time at the business-process level. This question is therefore best addressed by presenting a framework that helps SOA engineers evaluate performance at provisioning time. The main issue of this study is divided into several research questions, as follows.

A. What is the scope of SLA performance measurement in the context of SOA?

The breadth of Internet service-based systems, in terms of perspectives, environments and engineering stages, makes it necessary to first identify the domain of the problem. It is important to distinguish between the client view, the provider view, the service-integrator view and the end-user view, because this helps the researcher choose a suitable type of performance metric. Throughput, for example, can be measured in requests per second, or as an average data rate that also reflects latency [12]. In addition, performance can be measured for infrastructure components (BPEL engines, parsers, etc.) [9] as well as from the developer perspective at design time. The SLA performance measurement addressed in this study is taken from the end-user perspective and at provisioning time. Although the measurement approach is fixed by using MBT, it still has different levels and scopes, for example the unit-testing or integration level, and the question of which set of MBT parameters is appropriate in SOA.

B. What is the appropriate framework to measure SLA performance in the SOA context?

There are several reasons why testing SOA applications is challenging. In most cases services are outside the control of the organization using them (service developers are independent of service providers and service consumers), so potential mismatches and misunderstandings between parties can occur. Additionally, SOA applications are highly dynamic, with frequently changing services and service consumers, changing load on services, the SOA infrastructure and the underlying network. Consequently, it is normally impossible to capture all possible configurations and loads during the testing process. What can be done, following common practice and the literature, is to identify and maintain a set of configurations that are considered important and use it as the basis for defining the test environments [4]. We are aware that the measurement or testing environment influences the results, so it must be as similar as possible to the deployment environment. This means the simulation environment should include complete and realistic elements such as a workload generator, which is also required for simulating services that are not yet developed. It is an important element for simulating the concurrent clients executing services at provisioning time, and it allows metrics such as latency and response time to be measured more realistically because the measurement is performed under a realistic working load.

The SLA specification of performance still lacks a rich language oriented to the SOA environment; in fact, most current SLA specifications are influenced by past SLA practices in telecommunication systems. It is therefore essential to customize and refine them in the context of SOA. Here we suggest the 6 elements of quality achievement [13] as a rich language for expressing performance in SLAs. This language can be considered a domain language that properly guides the stimulated instrumentation system, unlike other languages (e.g., WSLA).

C. What are the elements to be considered in measuring SLA performance?

Only a small number of studies are related to measuring SLA performance at provisioning time, so experience and practice from the state of the art in the relevant areas of SOA and MBT will be considered. The measurement of SLA performance will be considered in three dimensions: a) SLA languages, b) transformation mechanisms, and c) generation of measurement programs. The goal of the first dimension is to provide an expressive language able to encode quality-attribute specifications and to describe different levels of quality provided by a single service. The goal of the second dimension is to relate the terms of the SLA to the capabilities of the available instrumentation. Finally, the goal of the last dimension is to develop concrete functions that calculate the SLA performance terms while taking the workload into account.
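As a purely illustrative sketch of the third dimension, the throughput and latency metrics mentioned in question A could be derived from raw request records roughly as follows. The record format and the function performance_terms are hypothetical and not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class RequestRecord:
    start: float      # request start time (seconds)
    end: float        # response received (seconds)
    bytes_out: int    # payload size returned by the service

def performance_terms(records: list[RequestRecord]) -> dict:
    """Derive the metrics discussed in question A from raw measurements.
    Assumes the records span a non-zero time interval."""
    duration = max(r.end for r in records) - min(r.start for r in records)
    latencies = [r.end - r.start for r in records]
    return {
        "throughput_req_per_s": len(records) / duration,
        "avg_data_rate_bytes_per_s": sum(r.bytes_out for r in records) / duration,
        "avg_latency_s": sum(latencies) / len(latencies),
        "max_latency_s": max(latencies),
    }

records = [RequestRecord(0.00, 0.12, 2048),
           RequestRecord(0.05, 0.31, 4096),
           RequestRecord(0.40, 0.55, 1024)]
print(performance_terms(records))
```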
III. MOTIVATION

Service-oriented systems have recently grown and started to cross organizational boundaries. One large example is Amazon Web Services (http://www.amazon.com), which offers services for businesses, consumers, Amazon associates, sellers and other developers. Service-based systems are currently engineered according to SOA principles, in which software functionality is outsourced to one or more providers, so the program as a whole is not under the control of the clients as traditionally experienced. For example, the UTM e-mail system is outsourced to Microsoft Outlook Live, a third party that hosts the Exchange servers.

Performance of services is among the most important concerns in an organization because of the dynamic nature of service-based systems. Service providers frequently change services for many reasons, such as optimization and improvement, so there is demand for automated systems and frameworks that manage the whole SLA life cycle. Performance assessment of a service component consists, at a minimum, of executing a series of tests, each with a specific workload for the component, and of collecting and aggregating performance metrics that characterize the system. In distributed and service-oriented systems the components can be deployed on different machines over different networks and may need to be stimulated by different remote clients; performed manually, or with a limited amount of automation, this task becomes problematic and error-prone [9].

IV. BACKGROUND OF THE PROBLEM

This study is a step forward in automating the measurement of SLA performance parameters in SOA systems and in building a machine-readable SLA in the SOA domain. The SLA is an essential artifact in realistic SOA systems, especially multi-organizational ones, and it is important to monitor it from both the end-user and the service-consumer perspective. Firstly, the SLA languages used for specifying performance metrics and parameters are described. Next, the background of SLA management, evaluation and life cycle is presented. Finally, the general approaches to monitoring SLAs are outlined.

A. SLA languages

In the history of IT computing, English or another natural language has been used to describe the SLA elements of an agreement on service levels. One example of an SLA document is the Amazon S3 Service Level Agreement [14]. This document includes a section declaring the company's commitment to providing the Simple Storage Service (S3) with a monthly uptime percentage of at least 99.9%. The SLA includes a service credit: users who experience unavailability of the service can claim monetary credit as compensation. Several standards have been proposed in the literature [12], such as IBM's WSLA framework and the WS-Agreement specification. These XML-based languages can be used to create machine-readable SLAs for services implemented with web services technology, whose service interfaces are defined using WSDL. WSLA is extensible and can be adapted to other technical or service-based technologies. A machine-readable SLA is better than plain text for the reasons identified by the SEI in [12]: if the SLA is machine-readable, its measurement can be automated easily, as the sketch below illustrates.
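The following fragment is only a simplified illustration of that point; it is neither WSLA nor the extension proposed in this paper. A service-level objective similar to the S3 uptime commitment is held in a small machine-readable structure, so that compliance can be checked programmatically from measured values. The names (ServiceLevelObjective, complies) and the example figures are hypothetical.

```python
from dataclasses import dataclass
import operator

# Comparison operators an objective may use, keyed by a WSLA-like type name.
OPERATORS = {"GreaterEqual": operator.ge, "Less": operator.lt}

@dataclass
class ServiceLevelObjective:
    service: str        # service the objective applies to
    metric: str         # e.g. "MonthlyUptimePercentage"
    comparator: str     # key into OPERATORS
    threshold: float    # guaranteed value

    def complies(self, measured: float) -> bool:
        """Check a measured value against the guaranteed condition."""
        return OPERATORS[self.comparator](measured, self.threshold)

# Objective modelled on the S3 commitment cited above (99.9% monthly uptime).
uptime_slo = ServiceLevelObjective("S3", "MonthlyUptimePercentage",
                                   "GreaterEqual", 99.9)

print(uptime_slo.complies(99.95))  # True  -> within the SLA
print(uptime_slo.complies(99.2))   # False -> violation, compensation applies
```

In practice such an objective would be obtained from a WSLA document rather than written by hand; the point is only that a machine-readable form makes the check programmable.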
B. SLA management and evaluation

As mentioned, the handling of the SLA is one of the main differences between traditional systems and SOA systems. It is an engineering task consisting of a number of sub-activities, such as instrumentation and measurement, which assign values to SLA parameters [15]. The important part for this study is the evaluation subsystem: it takes the SLA metrics and parameters as input and checks their values against the guaranteed conditions of the SLA. In case of a violation, certain actions should be triggered, and this process is expected to rely on tools so that it can be performed automatically.

C. SLA life cycle

An example SLA life cycle is defined in [16] with four phases: a) service and SLA template development, b) negotiation, c) preparation, and d) execution. The focus of this study is on the execution step, which is concerned with assessing or evaluating the SLA and the QoS provided to an individual customer or a group of customers. In general, the review of QoS, customer satisfaction and improvement is rarely done. As stated by the SEI [4], this is an active research area in which the SLA life cycle is automated to enable dynamic provisioning of services between organizations; in that setting all of the steps are carried out at runtime. Moreover, the assessment of QoS is attracting the attention of researchers and large-scale organizations [4][17].

V. THE PRINCIPLES OF THE PROPOSED FRAMEWORK

The basic idea is taken from MDA principles for SLA monitoring. A standard PSM for monitors is developed using the 6-element framework, which supports the QoS instrumentation technique. Some gaps were found between the source and target metamodels, so an extension is suggested for both. Standardizing the vocabulary of monitors helps to achieve reuse, which is necessary to overcome the difficulties of monitoring the requirements. Solving the problem in a platform-independent way also reduces these difficulties, because business people have to deal with the monitoring issue [18]. WSLA itself is not at a suitable abstraction level from the business perspective; what is needed is to raise the abstraction level of WSLA for SLAs while keeping flexibility in the implementation technologies. Different systems have different monitoring requirements: some, for example, use the transaction rate as a metric while others count the number of failed messages. In addition, the implementation of metric collection needs to be rendered onto different platforms according to SOA principles. Furthermore, based on [18], the configuration of the instrumentation components itself covers different scenarios. Although that approach supports instrumenting live applications, in this study a prototype based on database-system concepts is used to simulate instrumentation and monitoring. The technique remains flexible, however, and SOA technologies could be used as a target platform for the monitors.

Figure 1. Interaction of the framework elements to measure SLA performance parameters

As seen in Figure 1, there are four actors or components: a) the user, b) the MBT tool, c) QVT and d) the execution toolset. The figure shows the sequence of, and dependencies between, the framework elements; these components interact to fulfill the functionality required to measure SLA performance parameters. The user supplies the service-based system models that are necessary to verify the SLA parameters. The MBT tool takes the behavioral model of the service system and generates the necessary test cases. QVT then takes the WSLA metamodel and the 6-element metamodel, as the standard design for monitors, and produces the specific monitor according to the WSLA instance. Finally, the execution toolset produces the instrumentation results. These four components are the key elements of the proposed automation framework for measuring SLAs in this study; each of them is treated as a separate stage or phase and is explained in detail in the following subsections.

A. User

The user in this context can be the service provider and/or the service consumer, as well as the tester team. Together they are the source of the models the framework needs in order to start working.

Developing the PIM metamodel

A metamodel is simply the model of a model. The PIM is a business-level model through which stakeholders, for example the service provider and the service consumer, can communicate more easily. This is also a reason for using UML and UML-based languages, which provide visual notations familiar to users in a given domain, and it is a basic requirement for applications developed under MDA. MDA encourages establishing a common understanding or standard at metamodel-development time so as to increase reusability; the metamodel can then be a source of reuse for similar applications in the domain. IBM has published a PIM metamodel for WSLA [19], but only for explanatory purposes; here it is used as a starting point for producing a reusable PIM metamodel.

Figure 2. PIM metamodel of the WSLA language

In this study, this standard reusable PIM will be used to develop similar monitors in the domain of the study.
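Figure 2 itself is not reproduced here. Purely as an illustration of what a PIM-level WSLA model contains, the sketch below encodes a heavily simplified subset of common WSLA concepts (parties, service definitions, SLA parameters backed by metrics). The class names follow the usual WSLA terminology, but this is not the metamodel of Figure 2 nor the extension proposed in this paper, and the "OrderProcessing" service is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str                 # e.g. "ProbedResponseTime"
    unit: str                 # e.g. "milliseconds"
    source: str               # which party measures it (simplified)

@dataclass
class SLAParameter:
    name: str                 # e.g. "AverageResponseTime"
    metric: Metric            # the metric that supplies its value

@dataclass
class ServiceDefinition:
    service_name: str         # WSDL service the SLA refers to
    parameters: list[SLAParameter] = field(default_factory=list)

@dataclass
class SLA:
    provider: str
    consumer: str
    services: list[ServiceDefinition] = field(default_factory=list)

# A platform-independent instance for a hypothetical "OrderProcessing" service.
rt = SLAParameter("AverageResponseTime",
                  Metric("ProbedResponseTime", "milliseconds", "provider"))
sla = SLA("ProviderCo", "ConsumerCo",
          [ServiceDefinition("OrderProcessing", [rt])])
```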
Developing the PSM metamodel

The purpose of the PSM is to provide a standard design vocabulary that targets a particular platform; the PSM is later refined into calls and then into code-level APIs that can be executed. Developing a standard PSM has many advantages: in this context it helps the designer avoid repeated design effort and makes it possible to handle different monitors and instrumentation mechanisms through a single model. From a number of observations and investigations it was found that there are many different terminologies and instrumentation strategies. However, the instrumentation usually does not address the semantics of quality-attribute measurement; most previous work focuses only on the technical part, that is, on how to configure the measurement components and, more generally, on how to manage them [18]. SLA monitors have not been standardized either: SLA specifications such as WSLA are not extended for this purpose, so there is no common vocabulary; models are not used as a first option when designing monitors and instrumentation or in SLA management in general; and in many cases the monitor is engineered from scratch, with little capability for reuse [20][21].

The SEI proposed a general framework with six elements as a tool for software architects to measure and achieve quality attributes from the software-architecture perspective. Some common problems with non-functional requirements (NFRs) have been solved through this framework. In this study the quality-attribute scenario concept is therefore adopted to resolve the problem of overlap between NFRs and to provide an operational framework. It consists of six elements that represent the requirements for a given quality attribute, as illustrated in Figure 3.

Figure 3. Quality attribute parts

The idea proposed in this study is to use this knowledge as a reference for building the PSM for monitors and for the instrumentation process. A metamodel is therefore established for these six elements to build the PSM in MDA. The main restriction of this design is that all of the elements shown in Figure 4 must be implementable.

Figure 4. PSM metamodel for the 6-Element language
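As an informal illustration of what a monitor at the 6-element level might look like, and of the kind of PIM-to-PSM mapping that the QVT step below performs, the sketch fills the six quality-attribute parts (source of stimulus, stimulus, artifact, environment, response, response measure) for a performance scenario and derives such a record from a WSLA-style parameter. The field names, the to_monitor function and the example values are hypothetical; the real mapping is defined by QVT rules over the metamodels of Figures 2 and 4.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMonitor:
    """One monitor record structured by the six quality-attribute parts."""
    source: str            # who generates the stimulus
    stimulus: str          # what arrives at the system
    artifact: str          # the part of the system being stimulated
    environment: str       # conditions under which it happens
    response: str          # what the system should do
    response_measure: str  # how the response is judged

def to_monitor(sla_parameter: dict) -> PerformanceMonitor:
    """Map a simplified WSLA-style SLA parameter onto the 6-element PSM."""
    return PerformanceMonitor(
        source="concurrent end users (workload generator)",
        stimulus=f"requests to {sla_parameter['service']} at provisioning time",
        artifact=sla_parameter["service"],
        environment="normal operation under the agreed workload",
        response="requests are processed and responses returned",
        response_measure=(f"{sla_parameter['name']} "
                          f"{sla_parameter['comparator']} "
                          f"{sla_parameter['threshold']} {sla_parameter['unit']}"),
    )

monitor = to_monitor({"service": "OrderProcessing", "name": "AverageResponseTime",
                      "comparator": "<", "threshold": 2000, "unit": "ms"})
print(monitor.response_measure)   # AverageResponseTime < 2000 ms
```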
B. MBT tool

The goal of using MBT is to generate the test cases that will be used for measurement. The selection of the test suite is based on the model provided by the tester team. Generating test cases requires a number of inputs: the behavioural model of the system (an activity diagram in the case study) and the selection of appropriate coverage criteria.

C. QVT

QVT is the OMG standard mapping language operating under MDA principles. The mapping inputs in this case are the PIM metamodel, the PIM instance, the PSM metamodel and the QVT mapping rules; the result of the mapping from source to target consists of instances of the target metamodel. The key idea behind this approach is a repository whose schema is generated automatically to correspond to the metamodel. The information gained from the model mappings consists of design decisions that can be turned into code for a target platform, represented by the high-level PSM API.

D. Execution toolset

Under MDA principles the user has to connect the PSM to a platform so that the high-level APIs can be turned into low-level APIs and executed. Here the database system plays this role: a schema is generated from the target metamodel and the class diagram of the service system. To put the framework into practice, an execution model is proposed in which a schema is generated automatically for the PSM to represent the platform for the test cases. SQL operations such as INSERT and UPDATE are used to model the execution of the test cases, and during this period statistics are written to a log file. As a final stage, a report is generated by matching the log file against the expected SLA performance values.
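A minimal sketch of that database-backed execution model is given below, using SQLite as a stand-in platform. The schema, table name and SLA threshold are hypothetical and would in practice be generated from the PSM and the service class model; the sketch only shows the sequence described above: insert test-case executions, log their statistics, and match them against the expected SLA values to produce a report.

```python
import sqlite3

EXPECTED_MAX_LATENCY_MS = 2000          # expected SLA value (hypothetical)

con = sqlite3.connect(":memory:")       # stand-in for the generated repository
con.execute("""CREATE TABLE test_execution (
                   test_case   TEXT,
                   service     TEXT,
                   latency_ms  REAL)""")

# Executing test cases is modelled with INSERT/UPDATE operations.
runs = [("tc1", "OrderProcessing", 850.0),
        ("tc2", "OrderProcessing", 1740.0),
        ("tc3", "OrderProcessing", 2310.0)]
con.executemany("INSERT INTO test_execution VALUES (?, ?, ?)", runs)

# "Log file": per-test statistics collected during execution.
log = con.execute("SELECT test_case, latency_ms FROM test_execution").fetchall()

# Final report: match the log against the expected SLA performance values.
violations = [(tc, ms) for tc, ms in log if ms > EXPECTED_MAX_LATENCY_MS]
print(f"{len(log)} test cases executed, {len(violations)} SLA violation(s)")
for tc, ms in violations:
    print(f"  {tc}: latency {ms} ms exceeds {EXPECTED_MAX_LATENCY_MS} ms")
```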
VI. RESEARCH METHODOLOGY

Figure 5. Research process framework

The research process is divided into several phases, as shown in Figure 5; each phase is explained as follows.

A. Literature review (Phase 1)

In this study, an understanding of SOA and its dimensions is important because of the field's novelty, for example the SLA elements and the languages used to specify conditions. Studies on MBT, as a new trend that poses many challenges when applied to SOA systems, were also examined. These studies are important for identifying the elements commonly used for measuring performance in SLAs. Studies on workload generators and the different approaches to constructing them are equally important, because they affect the accuracy of the simulation environment and how closely it matches the real system's environment.

B. The case study (Phase 2)

The main purpose of using a case study is to understand the performance metrics and their challenges. In this study, the entire developed concept will be implemented in the case study in order to evaluate all of the proposed ideas.

C. Formal model creation and refinement (Phase 3)

To produce the final framework, the result is refined by evaluating the model against the case study, which makes it easier to generalize.

D. Tools investigation (Phase 4)

The proposed framework has to be constructed, so it is necessary to investigate the available toolsets and enable their reuse in this study under the chosen framework principles. From an engineering perspective, reuse reduces the investment cost.

VII. CONTRIBUTIONS

The contributions of this study can be summarized as follows:
- A model-driven architecture for monitor design and the instrumentation process to measure SLA performance parameters in SOA systems.
- Model-based testing for generating test cases as a performance artifact.
- A standard platform-specific model for monitors using the 6-element framework (a new PSM metamodel).
- A repository derived from the service class model to support test-case execution.

ACKNOWLEDGMENT

This research is fully funded by the Research University Grant (RUG) from Universiti Teknologi Malaysia (UTM) and the Ministry of Higher Education (MOHE) under Cost Center No. Q.J130000.7128.00H60. Our profound appreciation also goes to the EReTSEL lab members for their continuous support in the preparation of this paper.

REFERENCES

[1] SWEBOK 2004, downloaded from http://www.swebok.org.
[2] T. Erl, SOA: Principles of Service Design, Prentice Hall, Upper Saddle River, NJ, USA, 2007, ISBN 9780132344821.
[3] N. M. Josuttis, SOA in Practice, 2007.
[4] P. Bianco and G. A. Lewis, Service Level Agreements in Service-Oriented Architecture Environments, Software Engineering Institute, 2008.
[5] B. Diana and B. Boyan, Design of Service Level Agreements for Software Services, in Proc. International Conference on Computer Systems and Technologies, pp. 13-1 to 13-6, 2009.
[6] M. Utting, Practical Model-Based Testing: A Tools Approach, 2007.
[7] B. Hasling, H. Goetz, and K. Beetz, Model-Based Testing of System Requirements Using UML Use Case Models, Siemens, 2008.
[8] S. Wieczorek, A. Stefanescu, and J. Großmann, Enabling Model-Based Testing for SOA Integration Testing, in Proc. Model-Based Testing in Practice (MOTIP'08), satellite workshop of ECMDA, pp. 77-82, 2008.
[9] D. Bianculli, W. Binder, and M. Drago, Automated Performance Assessment for Service-Oriented Middleware, USI-INF Technical Report, 2009.
[10] A. Bertolino, G. De Angelis, L. Frantzen, and A. Polini, Model-Based Generation of Testbeds for Web Services, 2008.
[11] A. Stefanescu, S. Wieczorek, and A. Kirshin, A Model-Based Testing Approach for Service Choreographies, 2009.
[12] E. Morris, W. Anderson, S. Balasubramaniam, D. Carney, J. Morley, P. Place, and S. Simanta, Testing in Service-Oriented Environments, Software Engineering Institute, http://www.sei.cmu.edu, 2010.
[13] L. Bass et al., Software Architecture in Practice, Second Edition, 1998.
[14] Amazon Web Services, Amazon S3 Service Level Agreement, http://aws.amazon.com/s3-sla/, 2008.
[15] A. Keller and H. Ludwig, The WSLA Framework: Specifying and Monitoring Service Level Agreements for Web Services, IBM Research Division, T.J. Watson Research Center, 2003.
[16] TeleManagement (TM) Forum, SLA Handbook Solution Suite V2.0, www.tmforum.org/DocumentLibrary/SLAHandbookSolution/2035/Home.html, 2008.
[17] OMG, Handling Non-Functional Properties in a Service Oriented Architecture: Request for Information, 2009.
[18] E. Simon and M. Graham, Toward Reusable SLA Monitoring Capabilities, School of Computing Science, Newcastle University, UK, 2011.
[19] H. Ludwig, A. Keller, A. Dan, R. P. King, and R. Franck, Web Service Level Agreement (WSLA) Language Specification, Version 1.0, IBM, 2003.
[20] B. Zain, F. Mohd, and B. Hassan, Agent-Based Monitoring Framework for SOA Applications Quality, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Malaysia, IEEE, 2010.
[21] D. Ameller and X. Franch, Service Level Agreement Monitor (SALMon), Universitat Politècnica de Catalunya, Spain, 2008.