Quantitative Analysis of Desirability in User Experience
Sisira Adikari, School of Information Systems and Accounting, University of Canberra, Canberra, ACT, Australia
Craig McDonald, School of Information Systems and Accounting, University of Canberra, Canberra, ACT, Australia
John Campbell, School of Information Systems and Accounting, University of Canberra, Canberra, ACT, Australia

Abstract

The multi-dimensional nature of user experience warrants rigorous assessment of the interactive experience in systems. User experience assessments are based on product evaluations and subsequent analysis of the collected data using quantitative and qualitative techniques, and their quality depends on the effectiveness of the techniques deployed. This paper presents the results of a quantitative analysis of the desirability aspects of user experience in a comparative product evaluation study. Data collection was conducted using the 118-item Microsoft Product Reaction Cards (PRC) tool, followed by data analysis based on the Surface Measure of Overall Performance (SMOP) approach. The results of this study suggest that incorporating SMOP into PRC data analysis derives conclusive evidence of desirability in user experience. The significance of the paper is that it presents a novel analysis method, combining product reaction cards with the surface measure of overall performance approach, for effective quantitative analysis in academic research and industrial practice.

Keywords: User Experience, Product Reaction Cards, Surface Measure of Overall Performance

1 Introduction

There are many definitions of user experience in the literature. Hassenzahl and Tractinsky (2006) consider user experience as ranging from traditional usability to beauty, hedonic, affective or experiential aspects of technology use.
According to Kuniavsky (2010, p. 10), user experience is the totality of user perceptions associated with their interactions with an artefact (product, system, or service) in terms of effectiveness (how good is the result?), efficiency (how fast is it?), emotional satisfaction (how good does it feel?), and the quality of the relationship with the entity that created the artefact. The ISO standard (2009) defines user experience as a person's perceptions and responses that result from the use or anticipated use of a product, system or service, emphasising two main aspects: use and anticipated use. The ISO standard also points out that user experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviours and accomplishments that occur before, during and after use, and that usability criteria can be used to assess aspects of user experience (ISO, 2009). These two definitions, by Kuniavsky (2010) and the ISO standard, show the complex and multifaceted view of user experience. According to these definitions, user experience assessment needs to be an evaluation of the user's total interactive experience of a product, system or service. As highlighted by Adikari et al. (2011), the interactive experience is the combined result of use (i.e. actual interaction experience), anticipated use (i.e. pre-interaction experience such as needs and expectations), and after use (post-interaction experience), and these three components are equally important for consideration in user experience assessments. Based on an analysis of the ISO standard, Bevan (2009) considers that if user experience includes all behaviour, it presumably includes users' effectiveness and efficiency, which seems consistent with the methods nominated by many people in industry who appear to have subsumed
usability within user experience. As defined in the ISO standard, usability is concerned with the effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments. Bevan's view indicates that user experience is not distinct from usability but an extension of it. Preece et al. (2002, p. 19) have explained this broader view of usability within user experience in terms of user experience goals and usability goals, emphasising that user experience is at a level beyond that of usability. According to them, user experience occurs as a result of achieving usability goals during an interaction (see Figure 1). Moreover, they point out that user experience goals are more concerned with how users experience an interactive system from their own perspective than with assessing how useful or productive a system is from the product's perspective. Their definition of user experience goals is: satisfying, enjoyable, fun, entertaining, helpful, motivating, aesthetically pleasing, supportive of creativity, rewarding and emotionally fulfilling.

Figure 1: Usability and UX goals (Preece et al., 2002, p. 19)

As shown in Figure 1, their model of usability and user experience consists of six usability goals: efficient to use, effective to use, safe to use, having good utility, easy to learn, and easy to remember. Importantly, the model does not consider satisfaction a usability goal at the operational level; instead, satisfaction appears as a user experience goal. Another important difference in this model is that safety has been included as a primary usability goal. Later, the same authors (Rogers et al., 2011) presented an updated model of user experience goals in two dimensions: desirable (positive) ones and undesirable (negative) ones.
The desirable user experience goals are: satisfying, enjoyable, engaging, pleasurable, exciting, entertaining, helpful, motivating, challenging, enhancing sociability, supporting creativity, cognitively stimulating, fun, provocative, surprising, rewarding, and emotionally fulfilling. The undesirable user experience goals are: boring, frustrating, making one feel guilty, annoying, childish, unpleasant, patronizing, making one feel stupid, cutesy, and gimmicky. Rogers et al. (2011) also pointed out that interaction design should not only set usability goals for product design but also set user experience goals to assess whether the product is enjoyable, satisfying and motivating. They described many of these goals as subjective qualities concerned with how a system feels to a user. They also highlighted that not all usability and user experience goals will be relevant to the design and evaluation of an interactive product (or system), as some combinations will not be compatible. Accordingly, effective user experience assessments should include evaluation criteria that consider usability goals as well as user experience goals, which are mostly subjective qualities experienced by the user during an interaction. Some user experience evaluation criteria are interrelated with others. For example, enjoyable (joy) is most likely to be dependent on fun, and vice versa. These factors highlight the emotional impact on the user during an interaction and point to the desirability of an artefact from the user's point of view. There are many user
experience evaluation methods used in academic and industrial contexts under the categories of lab studies, field studies, surveys, and expert evaluations (Roto et al., 2009). These methods explore different attributes of user experience, including some elements of desirability. Desirability in user experience, and its importance, has been acknowledged (Hartson and Pyla, 2012, p. 32). Recent studies have reported that Product Reaction Cards (PRC) are more effective at expanding the understanding of user experience and desirability (Barnum and Palmer, 2010A, 2010B). This paper presents the results of a quantitative analysis of user experience in a comparative product evaluation study. Two software products were developed by two distinct agile software development teams based on the same suite of user stories (requirements) presented by a product owner. One development team followed an integrated agile and user experience design approach and developed the product designated the EAP product, while the other team developed the product designated the CAP product based on a traditional agile software development approach. The integrated approach was intended to provide only the user experience related information needed to support analysis and design during agile software development iterations.

2 Product Reaction Cards for Data Collection

Product reaction cards (Benedek & Miner, 2002) aim to elicit the aspects of perceived desirability from a user based on their experience of a product. The desirability of user experience is the least tangible aspect of a good user experience, and is concerned with aspects such as imagined pleasure, the look and feel of the product, and happiness in using the product in context (Goodman et al., 2012, p. 23). Product Reaction Cards (PRC) based assessment has been recognised as one of the preferred methods for assessing the perceived desirability of visual designs (Albert & Tullis, 2013, p. 144; Barnum & Palmer, 2010B).
PRC consists of a pack of 118 cards, 60% with positive and 40% with negative or neutral adjectives, from which participants choose the words that best reflect their feelings toward their interactive experience with a product. According to Barnum and Palmer (2010B, p. 257), PRC unlocks information regarding the user's sense of the quality of information in a more revealing way than any other tool or technique they had tried. Moreover, they highlighted specific strengths and values of using PRC with complex information systems: users are able to express their feelings about their interactive experience and show what they like and dislike. Accordingly, PRC is an instrument that can be used to gain useful insights into perceptions of a product effectively, and PRC-based assessments encourage users to provide a balanced view of both positive and negative feedback. User feedback can be analysed to determine the extent to which the tested product meets user requirements and expectations, and to identify design issues and deficiencies affecting the product.

3 Analysis of the Product Reaction Cards Assessment

In this research study, the product reaction cards evaluation aimed to gauge the overall acceptability of the product tested by each user participant. Determining the optimal sample size for a user experience research study is important. Based on a large number of studies, Faulkner (2003) found that a sample size of 15 users will discover a minimum of 90% of usability issues and a mean of % of usability issues. For user testing experiments, Alroobaea and Mayhew (2014) reported that an optimum sample size of 16±4 users provides good validity. A total of 32 test users participated in this study. They were grouped into two independent groups of 16 users, and these two groups were used for the individual product evaluations.
Upon completion of the individual product evaluations of the CAP and EAP products, users were asked to refer to the product reaction cards checklist and choose all the words that best described their interactive experience with the product. Users were then asked to refine their selection, narrow it down to their top five product reaction cards words, and rank them in order of importance (5 being the most important and 1 being the least important). There were 16 product reaction cards evaluations for each of the two products, totalling 32 across both the CAP and EAP products. Table 1 and Table 2 show the participant choices in five categories, ranked from one to five, for the CAP and EAP products respectively. These PRC word choices can be considered ordered categorical data.
Table 1. Ranked PRC words for the CAP product

Table 2. Ranked PRC words for the EAP product

Each PRC word choice was assigned a number from one to five (Ranking 1 = 1 to Ranking 5 = 5). The total score for each PRC word was then calculated by adding the assigned values for each occurrence of that word across all five categories. Table 3 shows the total score of each attribute for both the CAP and EAP products.
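The rank-to-score conversion just described can be sketched in Python; the participant selections below are hypothetical stand-ins, since Tables 1 and 2 are not reproduced here:

```python
from collections import defaultdict

# Hypothetical ranked selections for two participants (5 = most important).
participant_rankings = [
    {"useful": 5, "usable": 4, "easy to use": 3, "clear": 2, "effective": 1},
    {"useful": 5, "ambiguous": 4, "usable": 3, "confusing": 2, "vague": 1},
]

def total_scores(rankings):
    """Sum the assigned rank values for every occurrence of each PRC word."""
    totals = defaultdict(int)
    for selection in rankings:
        for word, rank in selection.items():
            totals[word] += rank
    return dict(totals)

print(total_scores(participant_rankings))
# "useful" was ranked 5 by both participants, so its total score is 10
```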
Table 3. Total score of each PRC attribute for the CAP and EAP products

To compare the two products in terms of PRC attributes, the attributes common to both products were selected. These common attributes and their corresponding total scores for both products (product 1 and product 2) are shown in Table 4. PRC attributes that are not common to both products were ignored.
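Selecting the attributes common to both products (the step behind Table 4) is a set intersection over the two total-score tables; a minimal sketch with invented scores:

```python
# Invented total-score tables for illustration; not the study's actual data.
cap_totals = {"useful": 20, "usable": 15, "ambiguous": 25, "fresh": 4}
eap_totals = {"useful": 37, "usable": 33, "ambiguous": 3, "novel": 6}

# Keep only the PRC attributes that occur in both products' tables;
# attributes unique to one product are ignored, as described in the text.
common = {attr: (cap_totals[attr], eap_totals[attr])
          for attr in sorted(cap_totals.keys() & eap_totals.keys())}
print(common)
# → {'ambiguous': (25, 3), 'usable': (15, 33), 'useful': (20, 37)}
```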
Table 4. Total scores of the PRC attributes common to both the CAP and EAP products

The common attributes in Table 4 include both positive and negative items. To compare the products in terms of positive as well as negative PRC attributes, a further separation was made to distinguish the positive from the negative PRC attributes among the common attributes in Table 4. Table 5 shows the total scores for the common positive and negative PRC attributes of product 1 and product 2 separately.

Table 5. Common positive and negative PRC attributes for both products

4 Surface Measure of Overall Performance

The radar chart based Surface Measure of Overall Performance (SMOP) has been recognised as a technique for obtaining the overall performance of a system (Mosley & Mayer, 1998; Schmid et al., 1999; Schütz et al., 1998). SMOP is given by the surface area of the radar chart formed by joining the performance indicators represented on each dimension (radial line) of the chart. Schütz et al. (1998, p. 39) have highlighted the main goals of using a radar-based SMOP approach: visualisation of interrelated performance measures through standardised scales; presentation of an effective and revealing description of selected performance dimensions in just one synthetic indicator (using the surface of the radar chart to illustrate the performance of the system); and analysis of the change in overall performance between two points in time.
The shape of the radar chart, as well as the overall performance measures, can be used for comparisons of systems.

The mathematical formula for calculating SMOP for four axes is:

SMOP = ((P1*P2) + (P2*P3) + (P3*P4) + (P4*P1)) * sin(90°)/2

The formula for more or fewer than four axes is:

SMOP = ((P1*P2) + (P2*P3) + (P3*P4) + ... + (Pn*P1)) * sin(360°/n)/2

where P is a data point on a performance indicator and n is the total number of data points. In SMOP, the maximum value that can be assigned to any radial line is 1. With reference to Table 5, useful is the positive PRC attribute with the highest total score (37). For the SMOP calculations, this highest value was weighted as a calculated score of 1, and all other total scores were converted to fractions of 37 to obtain their calculated scores. Following a similar approach, ambiguous had the highest total score (25) among the negative PRC attributes, and all negative total scores were expressed as fractions of 25 to derive their calculated scores. The calculated attribute scores used in the SMOP calculations for product 1 (CAP) and product 2 (EAP) are shown in Table 6.

Table 6. Calculated CAP and EAP attribute values for SMOP radar charts

As shown in Figures 2 and 3, two radar charts were developed from the data in Table 6, with the performance measure of the corresponding PRC attribute represented on each radial axis. Figure 2 shows the radar chart for positive PRC attributes.

Figure 2: Radar chart for positive PRC attributes
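The fraction-of-maximum weighting described above can be sketched as follows. The total score of 37 for useful comes from the text; the other totals here are hypothetical examples chosen for illustration:

```python
def normalise(totals):
    """Scale total scores so the highest-scoring attribute becomes 1.0,
    the maximum value allowed on a radial line in SMOP."""
    top = max(totals.values())
    return {attr: round(score / top, 2) for attr, score in totals.items()}

# "useful" = 37 is taken from the text; the other totals are hypothetical.
positive_totals = {"useful": 37, "usable": 33, "easy to use": 26, "clear": 10}
print(normalise(positive_totals))
# → {'useful': 1.0, 'usable': 0.89, 'easy to use': 0.7, 'clear': 0.27}
```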
The positive PRC attributes represented in Figure 2 are: approachable, business-like, clear, easy to use, effective, meaningful, organised, satisfying, usable and useful. The two overlaid radar charts in Figure 2 depict the relative performance of the CAP product and the EAP product in terms of positive PRC attributes. The three highest-performing positive PRC attributes are: Useful (performance score = 1), Usable (performance score = 0.89), and Easy to use (performance score = 0.7). Figure 2 illustrates the comparative positive performance of the CAP and EAP products. As evident in Figure 2, the EAP product demonstrates a higher level of performance, with a larger surface area, while the CAP product performs much worse, with a significantly smaller radar chart surface area. Figure 3 shows the radar chart for negative PRC attributes.

Figure 3: Radar chart for negative PRC attributes

The negative PRC attributes represented in Figure 3 are: Ambiguous, Confusing, Frustrating, Inadequate, Ordinary, Simple, Unattractive, Unrefined, and Vague. The two overlaid radar charts in Figure 3 depict the relative performance of the CAP and EAP products in terms of negative PRC attributes. The three highest-scoring negative PRC attributes are: Ambiguous (performance score = 1), Confusing (performance score = 0.92), and Simple and Unattractive (performance score = 0.44 each). Unlike Figure 2, Figure 3 illustrates the comparative negative performance of the CAP and EAP products. Hence, the radar chart with the larger surface area corresponds to the poorer-performing product, and the radar chart with the smaller surface area to the better-performing one. As evident in Figure 3, the CAP product demonstrates a higher level of poor performance, with a larger surface area, while the EAP product performs much better, with a significantly smaller radar chart surface area.
4.1 SMOP calculation: positive PRC attributes, CAP product

SMOP for the positive PRC attributes of the CAP product is calculated by applying the formula:

SMOP (CAP) = ((P1*P2) + (P2*P3) + ... + (Pn*P1)) * sin(360°/n)/2
= ((0.1*0.05) + (0.05*0.08) + (0.08*0.18) + (0.18*0.13) + (0.13*0.08) + (0.08*0.02) + (0.02*0.05) + (0.05*0.27) + (0.27*0.16) + (0.16*0.1)) * sin(360/10) * 0.5
≈ 0.039

4.2 SMOP calculation: positive PRC attributes, EAP product

SMOP for the positive PRC attributes of the EAP product is calculated by applying the formula:

SMOP (EAP) = ((P1*P2) + (P2*P3) + ... + (Pn*P1)) * sin(360°/n)/2
= ((0.27*0.1) + (0.1*0.27) + (0.27*0.7) + (0.7*0.24) + (0.24*0.13) + (0.13*0.02) + (0.02*0.13) + (0.13*0.89) + (0.89*1.0) + (1.0*0.27)) * sin(360/10) * 0.5
≈ 0.506

4.3 SMOP calculation: negative PRC attributes, CAP product

SMOP for the negative PRC attributes of the CAP product is calculated by applying the formula:

SMOP (CAP) = ((P1*P2) + (P2*P3) + ... + (Pn*P1)) * sin(360°/n)/2
= ((1.0*0.92) + (0.92*0.32) + (0.32*0.28) + (0.28*0.28) + (0.28*0.44) + (0.44*0.44) + (0.44*0.04) + (0.04*0.02) + (0.02*1.0)) * sin(360/9) * 0.5
≈ 0.558

4.4 SMOP calculation: negative PRC attributes, EAP product

SMOP for the negative PRC attributes of the EAP product is calculated by applying the formula:

SMOP (EAP) = ((P1*P2) + (P2*P3) + ... + (Pn*P1)) * sin(360°/n)/2
= ((0.02*0.16) + (0.16*0.12) + (0.12*0.24) + (0.24*0.24) + (0.24*0.20) + (0.20*0.20) + (0.20*0.08) + (0.08*0.04) + (0.04*0.02)) * sin(360/9) * 0.5
≈ 0.070

4.5 Summary of SMOP calculations

Table 7 shows the summary of the SMOP calculation results.

Table 7. Summary of results of SMOP calculations

The results of the PRC analysis represented by the two radar charts (see Figures 4 and 5, which reproduce the radar charts of Figures 2 and 3 for comparison) show that the EAP product is significantly more effective than the CAP product.
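The SMOP formula is straightforward to implement. The sketch below recomputes the positive-attribute surfaces from the normalised scores listed in the calculations above (reading the final wrap-around term of the CAP series as 0.16*0.1, i.e. Pn*P1):

```python
import math

def smop(points):
    """Surface area of the radar polygon: sum of products of adjacent
    data points (wrapping around) times sin(360°/n)/2."""
    n = len(points)
    paired = sum(points[i] * points[(i + 1) % n] for i in range(n))
    return paired * math.sin(math.radians(360 / n)) / 2

# Normalised positive PRC scores as listed in the SMOP calculations.
cap_positive = [0.1, 0.05, 0.08, 0.18, 0.13, 0.08, 0.02, 0.05, 0.27, 0.16]
eap_positive = [0.27, 0.1, 0.27, 0.7, 0.24, 0.13, 0.02, 0.13, 0.89, 1.0]

print(f"SMOP (CAP) = {smop(cap_positive):.3f}")  # 0.039
print(f"SMOP (EAP) = {smop(eap_positive):.3f}")  # 0.506

# The larger positive surface identifies the more desirable product.
assert smop(eap_positive) > smop(cap_positive)
```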
Figure 4: Analysis of PRC positive attributes

Figure 5: Analysis of PRC negative attributes

The overall conclusion that can be drawn from the results of the product reaction cards evaluation is that the EAP product scores higher on positive PRC attributes and lower on negative PRC attributes.

5 Conclusions

This paper presents a quantitative data analysis of a comparative desirability assessment of two software products developed using two conceptually different design approaches. Data collection was conducted using the 118-item Microsoft Product Reaction Cards (PRC) tool, followed by data analysis based on the Surface Measure of Overall Performance (SMOP) approach. The main contribution of this research comes from the integrated method that combined product reaction cards and the surface measure of overall performance for data collection and data analysis. The product reaction cards evaluation introduced two novel techniques for data analysis: a weighting system to scale the data for analysis, and the incorporation of the Surface Measure of Overall Performance as a data analysis technique to derive conclusive results. The results of this study suggest that incorporating SMOP into PRC data analysis is effective in deriving conclusive evidence of desirability in user experience.

6 References

Adikari, S., McDonald, C., & Campbell, J. 2011. A design science framework for designing and assessing user experience. Human-Computer Interaction: Design and Development Approaches. Berlin, Heidelberg: Springer.

Alroobaea, R., Mayhew, P. J. 2014. How many participants are really enough for usability studies? Proceedings of the Science and Information Conference (SAI), IEEE.

Albert, W., Tullis, T. 2013. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Elsevier.

Barnum, C. M., Palmer, L. A. 2010A. More than a feeling: understanding the desirability factor in user experience. CHI'10 Extended Abstracts on Human Factors in Computing Systems.
ACM.

Barnum, C. M., Palmer, L. A. 2010B. Tapping into Desirability in User Experience. Usability of Complex Information Systems: Evaluation of User Interaction.

Behringer, F., Kapplinger, B., Moraal, D., Schonfeld, G. Striking Differences in Continuing Training in Enterprises across Europe: Comprehensive Overview of Key Results of CVTS. 9.pdf. Retrieved 01 August.
Benedek, J., Miner, T. 2002. Measuring Desirability: New Methods for Evaluating Desirability in a Usability Lab Setting. Proceedings of the Usability Professionals Association.

Bevan, N. 2009. What is the difference between the purpose of usability and user experience evaluation methods? Proceedings of the UXEM'09 Workshop, INTERACT 2009.

Goodman, E., Kuniavsky, M., Moed, A. 2012. Observing the User Experience: A Practitioner's Guide to User Research. Morgan Kaufmann.

Faulkner, L. 2003. Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments and Computers, (35:3).

Hassenzahl, M., Tractinsky, N. 2006. User Experience - A Research Agenda. Behaviour & Information Technology, (25:2).

ISO. 2009. ISO FDIS : Human-centred design process for interactive systems. International Organization for Standardization.

Kuniavsky, M. 2010. Smart Things: Ubiquitous Computing User Experience Design. Morgan Kaufmann.

Mosley, H., Mayer, A. 1998. Benchmarking National Labour Market Performance: A Radar Chart Approach. WZB Discussion Paper, No. FS I. Retrieved 01 August.

Preece, J., Rogers, Y., Sharp, H. 2002. Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons.

Rogers, Y., Sharp, H., Preece, J. 2011. Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons.

Roto, V., Obrist, M., Vaananen-Vainio-Mattila, K. 2009. User experience evaluation methods in academic and industrial contexts. CHI Extended Abstracts.

Schmid, G., Schütz, H., Speckesser, S. 1999. Broadening the scope of benchmarking: radar charts and employment systems. Labour, (13:4).

Schütz, H., Speckesser, S., Schmid, G. 1998. Benchmarking Labour Market Performance and Labour Market Policies: Theoretical Foundations and Applications. WZB Discussion Paper, No. FS I 98-. Retrieved 01 August.

Copyright: 2015 Sisira Adikari, Craig McDonald and John Campbell.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 3.0 Australia License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and ACIS are credited.
Improving Government Websites and Surveys With Usability Testing and User Experience Research
Introduction Improving Government Websites and Surveys With Usability Testing and User Experience Research Jennifer Romano Bergstrom, Jonathan Strohl Fors Marsh Group 1010 N Glebe Rd., Suite 510, Arlington,
Lean UX. Best practices for integrating user insights into the app development process. Best Practices Copyright 2015 UXprobe bvba
Lean UX Best practices for integrating user insights into the app development process Best Practices Copyright 2015 UXprobe bvba Table of contents Introduction.... 3 1. Ideation phase... 4 1.1. Task Analysis...
A Framework for Improving Web 2.0 Interaction Design
291 Ciência e Natura, v. 36 Part 2 2015, p. 291 296 ISSN impressa: 0100-8307 ISSN on-line: 2179-460X A Framework for Improving Web 2.0 Interaction Design Mohammad Hajarian 1* 1 Young Researchers and Elite
INDIVIDUAL CHANGE Learning and the process of change in what ways can models of
INDIVIDUAL CHANGE Learning and the process of change in what ways can models of learning help us understand individual change? The behavioural approach to change how can we change people s behaviour? The
Key performance indicators
Key performance indicators Winning tips and common challenges Having an effective key performance indicator (KPI) selection and monitoring process is becoming increasingly critical in today s competitive
Our Guide to Customer Journey Mapping
Our Guide to Customer Journey Mapping Our Guides Our guides are here to help you understand a topic or to provide support for a particular task you might already be working on. Inside you ll find lots
Guidelines for Using the Retrospective Think Aloud Protocol with Eye Tracking
Guidelines for Using the Retrospective Think Aloud Protocol with Eye Tracking September, 2009 Short paper by Tobii Technology Not sure of how to design your eye tracking study? This document aims to provide
Intro to Human Centered Service Design Thinking
Intro to Human Centered Service Design Thinking thinkjarcollective.com Adapted from ideo.org DESIGN A BETTER COMMUTE DISCOVER Human-centered design begins with in-depth interviews and qualitative research.
ORGANISATIONAL CULTURE. Students what do you all think Organizational Culture is? Can you all define it in your own way.
Lesson:-35 ORGANISATIONAL CULTURE Students what do you all think Organizational Culture is? Can you all define it in your own way. In the 1980's, we saw an increase in the attention paid to organizational
The Emotional Economy at Work
The Emotional Economy at Work White Paper Series: Paper 400: Selecting & Working with Emotional Engagement Performance Metrics Jeremy Scrivens This paper is dedicated to my highly rational but emotionally
Best Practices. Create a Better VoC Report. Three Data Visualization Techniques to Get Your Reports Noticed
Best Practices Create a Better VoC Report Three Data Visualization Techniques to Get Your Reports Noticed Create a Better VoC Report Three Data Visualization Techniques to Get Your Report Noticed V oice
Knowledge Visualization: A Comparative Study between Project Tube Maps and Gantt Charts
Knowledge Visualization: A Comparative Study between Project Tube Maps and Gantt Charts Remo Aslak Burkhard (University of St.Gallen, [email protected]) Michael Meier (vasp datatecture GmbH, [email protected])
Basic underlying assumptions
Organisational culture Broadly speaking there are two schools of thought on organisational culture. The first, suggests that culture is tangible and exists in much the same way an organisation chart can
Building tomorrow s leaders today lessons from the National College for School Leadership
Building tomorrow s leaders today lessons from the National College for School Leadership If you re looking for innovative ways to engage your would-be leaders through gaming and video role play, the National
Designing Interactive Systems
THIRD EDITION Designing Interactive Systems A comprehensive guide to HCl, UX and interaction design David Benyon PEARSON Harlow, England London * New York Boston San Francisco Toronto Sydney * Auckland
www.thecustomerexperience.es
www.thecustomerexperience.es 2 four How to measure customer experience Carlos Molina In most organizations, CRM strategy now focuses on customer experience. Measuring customer experience has thus become
Paper Prototyping as a core tool in the design of mobile phone user experiences
Paper Prototyping as a core tool in the design of mobile phone user experiences Introduction There is much competition in the mobile telecoms industry. Mobile devices are increasingly featurerich they
Testing Metrics. Introduction
Introduction Why Measure? What to Measure? It is often said that if something cannot be measured, it cannot be managed or improved. There is immense value in measurement, but you should always make sure
Brand metrics: Gauging and linking brands with business performance
Brand metrics: Gauging and linking brands with business performance Received (in revised form): rd February, 00 TIM MUNOZ is a managing partner of Prophet (www.prophet.com), a management consulting firm
On History of Information Visualization
On History of Information Visualization Mária Kmeťová Department of Mathematics, Constantine the Philosopher University in Nitra, Tr. A. Hlinku 1, Nitra, Slovakia [email protected] Keywords: Abstract: abstract
HPF Tool. Template for the Performance and Accountability Statement Second round reviews. Amended December 2012
HPF Tool Template for the Performance and Accountability Statement Second round reviews Amended December 2012 The High Performance Framework was developed by the Public Sector Performance Commission. This
MODULE 10 CHANGE MANAGEMENT AND COMMUNICATION
MODULE 10 CHANGE MANAGEMENT AND COMMUNICATION PART OF A MODULAR TRAINING RESOURCE Commonwealth of Australia 2015. With the exception of the Commonwealth Coat of Arms and where otherwise noted all material
Taking the first step to agile digital services
Taking the first step to agile digital services Digital Delivered. Now for Tomorrow. 0207 602 6000 [email protected] @CACI_Cloud 2 1. Background & Summary The Government s Digital by Default agenda has
BA 655: Marketing Management and Strategy in Life Sciences Summer, 2012 (May 14 July 6)
BA 655: Marketing Management and Strategy in Life Sciences Summer, 2012 (May 14 July 6) Faculty Information Fred Roedl Clinical Associate Professor of Marketing Director, MBA Business Marketing Academy
A Practical Guide to Performance Calibration: A Step-by-Step Guide to Increasing the Fairness and Accuracy of Performance Appraisal
A Practical Guide to Performance Calibration: A Step-by-Step Guide to Increasing the Fairness and Accuracy of Performance Appraisal Karen N. Caruso, Ph.D. What is Performance Calibration? Performance Calibration
Testing Websites with Users
3 Testing Websites with Users 3 TESTING WEBSITES WITH USERS Better Practice Checklist Practical guides for effective use of new technologies in Government www.agimo.gov.au/checklists version 3, 2004 Introduction
REFLECTING ON EXPERIENCES OF THE TEACHER INDUCTION SCHEME
REFLECTING ON EXPERIENCES OF THE TEACHER INDUCTION SCHEME September 2005 Myra A Pearson, Depute Registrar (Education) Dr Dean Robson, Professional Officer First Published 2005 The General Teaching Council
PhD Qualifying Examination: Human-Computer Interaction
PhD Qualifying Examination: Human-Computer Interaction University of Wisconsin Madison, Department of Computer Sciences Spring 2014 Monday, February 3, 2014 General Instructions This exam has 7 numbered
The Concept of Project Success What 150 Australian project managers think D Baccarini 1, A Collins 2
The Concept of Project Success What 150 Australian project managers think D Baccarini 1, A Collins 2 1 Curtin University of Technology, Perth, Western Australia 2 Broad Construction Services, Perth, Western
Performance Management and Job Satisfaction of University Library Professionals in Karnataka: A Study
DESIDOC Journal of Library & Information Technology, Vol. 28, No. 6, November 2008, pp. 39-44 2008, DESIDOC Performance Management and Job Satisfaction of University Library Professionals in Karnataka:
A Framework for Business Sustainability
Environmental Quality Management, 17 (2), 81-88, 2007 A Framework for Business Sustainability Robert B. Pojasek, Ph.D Business sustainability seeks to create long-term shareholder value by embracing the
Online Customer Experience
Online Customer Experience What is the experience and how is it difference for frequent and infrequent purchasers, age cohorts and gender groups? Prepared by Jillian Martin and Gary Mortimer for Coles
Recognising and measuring engagement in ICT-rich learning environments
Recognising and measuring engagement in ICT-rich learning environments Chris Reading SiMERR National Centre, University of New England INTRODUCTION Engagement is being widely recognised as critical to
User research for information architecture projects
Donna Maurer Maadmob Interaction Design http://maadmob.com.au/ Unpublished article User research provides a vital input to information architecture projects. It helps us to understand what information
AER reference: 52454; D14/54321 ACCC_09/14_865
Commonwealth of Australia 2014 This work is copyright. In addition to any use permitted under the Copyright Act 1968, all material contained within this work is provided under a Creative Commons Attribution
Corporate Learning Watch
Quantifying the Organizational Benefits of Training August 2010 The Majority of CEOs Want Return on Investment Data for Training Programs, But Few Receive It Watch List A recent survey of CEOs found that
Excellence Indicators for Teaching Practices for the English Language Education Key Learning Area
Excellence s for Teaching Practices for the English Language Education Key Learning Area Foreword The Excellence s for Teaching Practices for the English Language Education Key Learning Area are compiled
Measuring ERP Projects
Measuring ERP Projects It is important to not only understand the factors influencing success, but also to have an approach for measuring and tracking an ERP project s success. ERP implementation projects
