Metrics for the Internet Age: Quality of Experience and Quality of Business




Metrics for the Internet Age: Quality of Experience and Quality of Business

Aad van Moorsel
Software Technology Laboratory
HP Laboratories Palo Alto
HPL-2001-179
July 19th, 2001*
E-mail: aad@hpl.hp.com

Keywords: Quality of Service, Quality of Experience, Quality of Business, performability

Abstract: The primary topic of this report is metrics. More specifically, we discuss quantitative metrics for evaluating Internet services: what should we quantify, monitor and analyze in order to characterize, evaluate and manage services offered over the Internet? We argue that the focus must be on quality-of-experience and, what we call, quality-of-business metrics. These QoE and QoBiz metrics quantify the user experience and the business return, respectively, and are increasingly important and tractable in the Internet age. We introduce a QoBiz evaluation framework, responding to the emergence of three types of services that impact quantitative evaluation: business-to-consumer services, business-to-business services, and service utility through service providers. The resulting framework opens avenues for systematic modeling and analysis methodology for evaluating Internet services, not unlike the advances made over the past two decades in the area of performability evaluation.

* Internal Accession Date Only. Approved for External Publication. To be published in the Fifth Performability Workshop, September 16, 2001, Erlangen, Germany.

Copyright Hewlett-Packard Company 2001

Metrics for the Internet Age: Quality of Experience and Quality of Business

Aad van Moorsel, E-Services Software Research Department
Hewlett-Packard Laboratories, Palo Alto, California, USA
aad@hpl.hp.com

Contribution to the fifth performability workshop, in honor of John F. Meyer, inventor of performability.

Keywords: Quality of Service, Quality of Experience, Quality of Business, QoS, QoE, QoBiz, management, e-services, web services, performability, metrics, analysis

1. Quantitative Evaluation in the Internet Age: What is Different?

Who cares if your web shopping site has an availability of three nines? 99.9% availability corresponds to a little over eight hours of total down time in a year, but most customers do not care, nor do line-of-business managers. The customer only cares whether the site is up when she wants to shop. Moreover, if the site is down for a few seconds during a shopping session, she will only care if it makes any difference for the execution of her ongoing tasks.
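The three-nines figure is simple arithmetic; a minimal sketch (the helper function is ours, for illustration only):

```python
def annual_downtime_hours(availability: float) -> float:
    """Expected total down time per year implied by a steady-state availability."""
    hours_per_year = 365 * 24  # 8760 hours in a (non-leap) year
    return (1.0 - availability) * hours_per_year

# "Three nines" availability:
print(annual_downtime_hours(0.999))  # a little over eight hours
```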
The business manager only cares whether she loses business because of down time, and at what cost that loss could have been prevented. From this example it follows that system metrics such as availability are not sufficient to evaluate an Internet service; user experience metrics and business metrics are necessary as well. Surely, this is not a new realization: at any point in computing system history, there have been implicit considerations of user and business requirements when evaluating systems and services. For instance, decisions about upgrading computer equipment routinely consider user needs and costs. However, the

problem has always been that the relationship between system quality, user experience, and business cost is hard to make concrete. QoS, QoE and QoBiz analysis has therefore never been done in a systematic and integrated fashion. In the Internet age this must change. The relation between QoS, QoE and QoBiz has become very apparent, and easier to discern. In the web shopping scenario, the user experience directly influences how much shopping the customer does, and how much money the business makes as a consequence. Results are available that relate response times to user behavior [1] and business loss [12]. In the Internet age, not only are QoBiz metrics increasingly important, they are also more directly measurable and therefore more tractable.

Acronyms:
QoS: Quality of Service
QoE: Quality of Experience
QoBiz: Quality of Business
B2C: Business to Consumer
B2B: Business to Business
xSP: any Service Provider
ISP: Internet Service Provider
ASP: Application Service Provider
MSP: Management Service Provider
CSP: Communications Service Provider
HSP: Hosting Service Provider
SLA: Service Level Agreement

The crucial difference in the Internet age is that people, processes and institutions play roles that are closely integrated with the system (that is, the Internet) itself. They form ecosystems in which business is conducted through technological means (business-to-consumer as well as business-to-business). In addition, technology services are being provided and consumed in similar fashion as electricity and water (the service utility model). In particular, we identify the following trends as drivers for QoBiz management:

- Integration of the user in Internet ecosystems (B2C)
- Integration of enterprise business processes in web-service ecosystems (B2B)
- Emergence of the service utility model (service providers/xSPs)

In what follows we discuss the metrics that are suitable to evaluate the ecosystems that arise from these three trends.
Intuitively, B2C naturally leads to QoE analysis, and B2B requires QoBiz analysis, as we discuss in Section 3. In Section 4, however, we introduce a more abstract notion of QoE and QoBiz, which allows us to construct a general framework in which every participant in an Internet ecosystem deals with QoS, QoE and QoBiz concerns at its own level of abstraction. That is useful for structured evaluation of service providers, which may have various participants interacting in a joint ecosystem (ISP, ASP, MSP, CSP, HSP, ...). Eventually, we hope that the proposed framework is a useful stepping stone towards generic methodology and software support for QoBiz evaluation of Internet services. Before discussing Internet metrics in detail, however, we consider in Section 2 how metrics like QoE and QoBiz relate to the performability metric for fault-tolerant systems.

2. A Performability Perspective

The introduction of the performability metric in 1978 was motivated by the growing acceptance of fault-tolerant systems [6]. One characteristic that makes those systems

different from the preceding generation is that they are able to operate in degraded mode. This implies that it is no longer sufficient to know whether the system is up or down; instead, one needs to know how much up it is (at what mode of degradation), and what the level of performance is in this mode. In other words, one needs to evaluate not only what the system is, but also what it does, thus combining reliability with performance evaluation [4]. The introduction of performability has its roots in the need to evaluate a new class of systems in a more systematic and meaningful manner than previously done. It is no surprise that a disruptive technology like the Internet requires something similar.

Performability provides a bridge between mathematical modeling and domain-specific system engineering. That is a very important aspect: there is a system engineering need to systematically classify and identify appropriate domain-specific metrics, since it leads to more powerful quantitative modeling approaches for the considered class of systems. As an example, in earlier work, we identified so-called system metrics and task metrics, and we showed that the latter, though arguably more relevant, are systematically harder to analyze [8]. This has led to the insight that conversion laws (like Little's law for queuing systems) are critical, and ultimately made us propose the action model paradigm for task metrics [9]. The most convincing example demonstrating what can be achieved by tying mathematical modeling to system engineering is the performability evaluation approach based on reward models and (variations of) stochastic Petri nets [7]. This approach provides systematic modeling and software tool support for quantitative evaluation of fault-tolerant systems. It has taken quantitative modeling of fault-tolerant systems beyond solving case studies.
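The reward-model idea can be illustrated with a small numerical sketch. The three-state degradable system below, its transition rates, and its reward rates are all invented for illustration (not taken from [6] or [7]): each state of a Markov chain earns a reward rate, and steady-state performability is the reward-weighted sum of the state probabilities.

```python
import numpy as np

# Toy three-state degradable system: full service, degraded, down.
# All transition rates (per hour) and reward rates are invented.
Q = np.array([
    [-0.02,  0.015,  0.005],   # full -> degraded, full -> down
    [ 0.50, -0.51,   0.01 ],   # degraded -> full, degraded -> down
    [ 1.00,  0.00,  -1.00 ],   # down -> full (repair)
])
reward = np.array([1.0, 0.5, 0.0])   # performance level earned in each state

# Steady-state distribution: solve pi @ Q = 0 subject to pi summing to one.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Steady-state performability: the expected reward rate (between 0 and 1 here).
performability = float(pi @ reward)
print(performability)
```

Note how the single number combines availability (how often the system is in each mode) with performance (what each mode is worth), which is exactly the "what the system is and what it does" combination described above.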
This paper would serve its purpose well if it instigated systematic approaches to QoE and QoBiz analysis for Internet services, comparable to those existing for performability evaluation of fault-tolerant systems.

3. QoS, QoE and QoBiz

To create an intuition for the various quality metrics, let us consider for a moment a grocery store, and identify what quality of service (QoS), quality of experience (QoE) and quality of business (QoBiz) mean in that context.

Grocery store. QoS metrics of a grocery store are metrics that say something about the store itself, without taking the perspective of customer or business: is the store open or not, and what is the average number of customers that pass through a counter every hour? These QoS metrics provide basic information about the grocery store; however, they do not tell the customer or the owner anything important. For that we need QoE and QoBiz metrics, respectively. QoE metrics relate to the customer experience: is the grocery store open when I (the customer) want to visit? Other QoE metrics are the actual time I spend at the register, and whether I consider the service slow or fast. Note that the mentioned QoE metrics have counterparts in the QoS metrics above, but differ in that they take the customer's angle (which may, indeed, be subjective in nature). QoBiz metrics, finally, relate very closely to dollars: will my business make more money if I add counters? Will my business make more money if I open my store between 10 pm and 5 am? Is the layout of my store optimal in guiding customers to the most profitable items?
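The "will more counters make more money?" question can even be posed quantitatively. A toy sketch: model the counters as an M/M/c queue and assume, purely for illustration, that customers who face long waits abandon their purchase. Every number and the abandonment curve below are invented.

```python
from math import factorial

def erlang_c(c: int, load: float) -> float:
    """Probability that an arriving customer must wait (M/M/c queue, Erlang C)."""
    rho = load / c
    summed = sum(load**k / factorial(k) for k in range(c))
    tail = load**c / (factorial(c) * (1 - rho))
    return tail / (summed + tail)

def hourly_profit(counters: int, arrivals=40.0, service_rate=15.0,
                  margin=3.0, counter_cost=18.0) -> float:
    """Toy QoBiz model: revenue from customers who stay, minus staffing cost.
    The abandonment fraction is an assumption, not an empirical result."""
    load = arrivals / service_rate                 # offered load in Erlangs
    if load >= counters:                           # unstable: queue grows without bound
        return -counter_cost * counters
    p_wait = erlang_c(counters, load)
    mean_wait = p_wait / (counters * service_rate - arrivals)  # M/M/c mean wait, hours
    abandoned = min(1.0, 6.0 * mean_wait)          # assumed: grows with the mean wait
    revenue = arrivals * (1.0 - abandoned) * margin
    return revenue - counter_cost * counters
```

With these invented numbers, four counters beat both three and five: too few counters drive customers away (a QoE effect hurting QoBiz), while too many simply add cost.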

The above scenario can easily be transposed to Internet services. QoS metrics are metrics such as availability and performance; QoE metrics are response times and availability of a service as experienced by a customer [13][14]; finally, the QoBiz metric is expressed in terms of money, such as the average amount of money received per executed transaction. Figure 1 shows some of the typical metrics. Note that QoS metrics are further divided in two: system metrics and task metrics [8]. QoS task metrics are very close to QoE metrics, with two exceptions: (1) QoE metrics may have a subjective element to them, while QoS task metrics do not; and (2) QoE metrics may be influenced by any system between the service provider and the customer, while task metrics relate to the quality of the considered system and service only. An example of the latter is the QoE of a web site, which heavily depends on the ISP's network conditions, the speed of the modem, etc. [3][5]. In that case the QoS task metric for the web service would only consider the web site, while the QoE metric must take into account all the other elements as well.

Figure 1 Examples of QoS (system and task), QoE and QoBiz metrics

4. QoBiz Evaluation Framework

In Section 1, we identified various roles played by people, processes and institutions participating in Internet business. Together, conglomerates of these players create an ecosystem, consisting of customers, suppliers, partners, etc.
Each player in such an ecosystem has QoS, QoE and QoBiz concerns, and the quality metrics of different parties depend on each other [10][11]. By abstracting from the intuitive notions of customer and business used in Section 3, we now construct a framework for systematic quantitative evaluation of Internet services.

Figure 2 QoS, QoE and QoBiz for a single participant in an Internet ecosystem

Figure 2 depicts the quality concerns of any player or participant in an Internet ecosystem. The player has connections with service providers and with customers (bottom and top of Figure 2, respectively). A service is received from the service provider, and a service is offered to the customer. This service flow is depicted on the left-hand side of the figure. Each player has its own metrics of interest: an Internet service provider is interested in bandwidth, an application service provider is interested in the number of customers it can support, and an end customer is interested in perceived response time. Using the participant's own metrics of interest, it receives a certain QoE level from the service provider (the QoE box in Figure 2). In addition, it exposes a service to the customer, possibly using some additional resources of its own; these resources together with the received service have a certain QoS (the QoS box). This service is then exposed to the customers.

The right-hand side of Figure 2 gives the financial flow, which directly relates to quality-of-business metrics. QoBiz is influenced by cost and revenue considerations. To achieve a certain QoS level, resources must be paid for (the "cost of system QoS" arc). In addition, each service provider must be paid for the QoE it delivers (the arc down to the service provider). If the service provider delivers high QoE, the participant's willingness to pay for the service increases (as depicted by the dashed arc between the QoE and QoBiz boxes). These are all cost items for the considered participant.
In addition to cost, there is revenue from the customers, which is depicted by the arc from the customer to the QoBiz box. Evaluation of QoBiz must take all these cost and revenue aspects into account.
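The cost and revenue flows in Figure 2 amount to a simple balance for each participant. A sketch with invented figures (say, for an ASP that pays an HSP and an ISP for delivered QoE):

```python
def participant_qobiz(revenue_from_customers: float,
                      cost_of_system_qos: float,
                      payments_to_providers: list[float]) -> float:
    """Net QoBiz of one Figure-2 participant: customer revenue minus the cost
    of its own QoS resources and the fees paid to each service provider."""
    return revenue_from_customers - cost_of_system_qos - sum(payments_to_providers)

# Illustrative monthly figures (all invented):
print(participant_qobiz(120_000.0, 35_000.0, [20_000.0, 8_000.0]))  # 57000.0
```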

We now relate the building block for the QoBiz evaluation framework in Figure 2 to the three important trends for QoBiz evaluation identified in Section 1, in the following order: xSPs, B2C and B2B.

Figure 3 xSPs: Service Level Agreements between service providers

Service Providers (xSPs). In Figure 3, a chain of service providers all face QoS, QoE and QoBiz concerns. SLAs specify the delivered service levels. An ISP delivers bandwidth to an HSP, who delivers CPU and storage to an ASP, who delivers applications to a customer. For each service receiver, the SLAs are most useful if they are expressed in terms of its QoE metrics, while for the providers visibility may not go beyond the offered QoS. As we noted in Section 3, the offered QoS does not necessarily translate directly into the perceived QoE, since other network and communication players may play a role in how the service is delivered. With respect to quality of business, the SLAs also play an important role. Often, SLAs specify costs and penalties depending on the delivered quality of service. Hence, QoBiz evaluation has a lot to do with cost-benefit analysis and prediction for SLAs. In addition, the service utility model has a direct impact on QoBiz evaluation as well. In the service utility model, resources are not bought by the customer but paid for when used. This opens up new charging models for providers and requires new strategy and planning for resource reservation by customers. Altogether, it is clear that interesting opportunities exist to create systematic analysis methodology and tools for QoBiz evaluation of (chains of) xSPs.
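A cost-and-penalty SLA clause of the kind just mentioned can be sketched in a few lines; the clause shape and every number below are hypothetical:

```python
def monthly_charge(base_fee: float, agreed_availability: float,
                   delivered_availability: float,
                   penalty_per_tenth_percent: float) -> float:
    """Provider's fee, reduced by a penalty for every 0.1% of availability
    delivered below the agreed SLA level (hypothetical clause shape)."""
    shortfall = max(0.0, agreed_availability - delivered_availability)
    return base_fee - (shortfall / 0.001) * penalty_per_tenth_percent

# Agreed 99.9%, delivered 99.7%: two tenths of a percent short.
charge = monthly_charge(10_000.0, 0.999, 0.997, 500.0)   # close to 9000
```

QoBiz evaluation for an xSP then becomes predicting such charges (and the penalties it may owe its own customers) under the QoS it expects to deliver.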

Figure 4 Business to Consumer scenario: QoBiz influenced by executed transactions

Business to Consumer (B2C). We argued in earlier sections that QoBiz becomes more important and more tractable for Internet applications. Figure 4 and Figure 5 show why this is the case. Figure 4 depicts a B2C scenario, such as for amazon.com. Figure 5 depicts a B2B scenario, corresponding for instance to a travel agent web service that conducts business with a hotel booking service. In both cases, the important aspect is that transactions executed between players directly correspond to the business revenue created. In the B2C case, transactions that correspond to orders create business revenue for the service provider. Moreover, QoS and QoE directly influence the QoBiz results, albeit in intricate ways. Nevertheless, the relationship between QoS/QoE and QoBiz is much more direct than for old-style computing systems that support administrative activities. It is also more dynamic and natural than for SLA agreements. Figure 4 shows that the end customer makes decisions about purchasing based on the perceived QoE (and many other elements). The QoS concern of the end customer is a non-issue, since there is no service delivery she must perform, and we have therefore removed the QoS box from the end user. QoBiz between customer and web service now includes the revenue generated at the web service from the interactions with the customer (see the label in Figure 4).
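The QoE-to-QoBiz link in the B2C case can be caricatured in a few lines. The conversion curve below is invented for illustration; results such as [1] and [12] provide empirical relations between response time and shopper behavior.

```python
def conversion_rate(response_time_s: float) -> float:
    """Assumed conversion curve: fewer shoppers buy as pages get slower."""
    return 0.05 * max(0.0, 1.0 - 0.1 * response_time_s)

def daily_revenue(sessions: int, response_time_s: float, avg_order: float) -> float:
    """QoBiz sketch: sessions that convert, times average order value."""
    return sessions * conversion_rate(response_time_s) * avg_order

fast = daily_revenue(50_000, 2.0, 80.0)   # snappy site
slow = daily_revenue(50_000, 8.0, 80.0)   # sluggish site
print(fast, slow)
```

Even this caricature shows the point of the section: a QoE quantity (response time) feeds directly into a dollar-denominated QoBiz quantity.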

Figure 5 Business to Business partner scenario: QoBiz influenced by executed transactions

Business to Business (B2B). Figure 5 depicts two partners executing B2B transactions. In this case, the term "partners" describes the mode of interaction better than the term "service providers". Their mutual willingness to do business with each other depends on the perceived QoE: better QoE increases the willingness to make deals. QoBiz, then, depends on how much business is executed between the partners.

Identifying QoS, QoE and QoBiz relationships for Internet ecosystems is only a first step in establishing quantitative evaluation methodology for Internet services. In [2], Jim Grant argues for QoE/QoBiz management throughout the system life cycle, an aspect we have not considered in this manuscript. However, the real challenge is in identifying mathematical modeling techniques, efficient solution algorithms, and software tool support that systematically deal with QoBiz evaluation.

5. Conclusion

This paper introduces the concept of QoBiz (quality of business) metrics and evaluation. It builds up an evaluation framework for Internet services, relating QoS (quality of service), QoE (quality of experience) and QoBiz. The need to consider QoBiz metrics when evaluating Internet services has become increasingly apparent because of the emergence of three types of service models: business-to-consumer services, business-to-business services, and the service utility model using service providers. In all three cases, business and system considerations must be considered in unison. The developed evaluation framework is a stepping stone towards systematic support for QoBiz evaluation of Internet services. In the future, we hope to see generic modeling methodology, solution algorithms and software tools to support such evaluation.

References

[1] N. Bhatti, A. Bouch and A. Kuchinsky, "Integrating User-Perceived Quality into Web Server Design," HP Labs Technical Report HPL-2000-3, January 2000.
[2] Jim Grant, "The Emerging Role of IT: Managing the Customer Experience through Systems Management," The IT Journal, Business Implications of Information Technology, Joe Podolsky (Editor), Hewlett-Packard Company, pp. 52-61, OpenView 2001 issue, second quarter 2001.
[3] Thomas Gschwind, Kave Eshghi, Pankaj K. Garg and Klaus Wurster, "Web Transaction Monitoring," HP Labs Technical Report HPL-2001-62, March 2001.
[4] Boudewijn R. Haverkort, Raymond Marie, Gerardo Rubino and Kishor Trivedi (Editors), Performability Modeling: Techniques and Tools, John Wiley, Chichester, UK, 2001.
[5] Chris Loosley, Richard L. Gimarc and Amy C. Spellmann, "E-Commerce Response Time: A Reference Model," in Proceedings of CMG 2000, 2000.
[6] John F. Meyer, "On Evaluating the Performability of Degradable Computing Systems," in Proceedings of the 8th International Symposium on Fault-Tolerant Computing, Toulouse, France, pp. 44-49, June 1978.
[7] John F. Meyer and William H. Sanders, "Specification and Construction of Performability Models," in Performability Modeling: Techniques and Tools, B. R. Haverkort, R. Marie, G. Rubino, K. Trivedi (Editors), Wiley, UK, Chapter 9, pp. 179-222, 2001.
[8] Aad P. A. van Moorsel, Performability Evaluation Concepts and Techniques, PhD thesis, Universiteit Twente, Netherlands, 1993.
[9] Aad P. A. van Moorsel, "Action Models: A Reliability Formalism for Fault-Tolerant Distributed Computing Systems," in Proceedings of the Third Annual IEEE International Computer Performance and Dependability Symposium, Durham, North Carolina, pp. 119-128, September 1998.
[10] Akhil Sahai, Jinsong Ouyang, Vijay Machiraju and Klaus Wurster, "BizQoS: Specifying and Guaranteeing Quality-of-Service for Web Services through Real Time Measurement and Adaptive Control," HP Labs Technical Report HPL-2001-134, May 2001.
[11] Katinka Wolter and Aad van Moorsel, "The Relationship between Quality of Service and Business Metrics: Monitoring, Notification and Optimization," HP Labs Technical Report HPL-2001-96, April 2001 (also in OpenView Forum 2001, New Orleans, Louisiana, USA, June 11-14, 2001).
[12] Zona Research, "The Need for Speed II," Zona Market Bulletin, No. 5, April 2001, Zona Research, USA.
[13] http://www.qualityofexperience.org/: learning center for QoE, sponsored by industry parties.
[14] http://www.webqoe.org/: WebQoE work group formed by industry parties.