Open Research Online
The Open University's repository of research publications and other research outputs

Using LibQUAL+® to Identify Commonalities in Customer Satisfaction: The Secret to Success?
Journal Article

How to cite: Killick, Selena; van Weerden, Anne and van Weerden, Fransje (2014). Using LibQUAL+® to Identify Commonalities in Customer Satisfaction: The Secret to Success? Performance Measurement and Metrics, 15(1/2) pp.
For guidance on citations see FAQs.
© 2014 Emerald Group Publishing Limited
Version: Accepted Manuscript
Link(s) to article on publisher's website:
Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online's data policy on reuse of materials please consult the policies page.
oro.open.ac.uk
Using LibQUAL+ to Identify Commonalities in Customer Satisfaction: The Secret to Success?

Selena Killick, Cranfield University, Shrivenham, UK
Anne van Weerden, Utrecht University, Utrecht, Netherlands
Fransje van Weerden, University of Groningen, Groningen, Netherlands

Abstract

Purpose
What is the key to library user satisfaction? Can LibQUAL+ help in the quest to deliver a quality library service? This paper presents international research into library customer satisfaction as measured by the LibQUAL+ survey methodology. Commonalities of satisfaction and dissatisfaction have been identified which influence the customers' overall view of the library. This knowledge can be used to increase customer satisfaction further by targeting these areas for service improvement.

Methodology
LibQUAL+ is a library service quality survey instrument developed by the Association of Research Libraries (ARL) in association with Texas A&M University. The survey comprises 22 questions categorised by three dimensions of service: Affect of Service, Information Control and Library as Place. LibQUAL+ uses gap theory to evaluate customer expectations as well as perceptions. For each of the 22 questions respondents are asked their minimum and desired expectations, along with their current perceived level of service, on a nine-point scale. From these ratings two score measures are calculated: service adequacy and service superiority. The adequacy score is calculated by subtracting the minimum rating from the perceived score, with a negative score indicating that minimum expectations are not being met. Similarly, the superiority score is calculated by subtracting the desired score from the perceived rating, with a positive score indicating that desired expectations are being exceeded.
This paper builds on research conducted at Utrecht University in 2012. The LibQUAL+ results from Utrecht University were analysed to explore the differences between customers who were very satisfied, and those who were very dissatisfied, with the service. This research has now been extended to an international level using the UK and Ireland's Society of College, National and University Libraries (SCONUL) LibQUAL+ consortium results and the results from Leiden University in the Netherlands. Results from each of the three dimensions of service quality were reviewed separately. The survey results from respondents who had given a high satisfaction mean score to one of the three dimensions were analysed separately to assess whether they had also given high satisfaction mean scores overall. This process was then repeated for those who had given low satisfaction mean scores. High satisfaction was defined as a mean superiority gap score larger than zero, indicating that desired expectations were exceeded, together with a mean adequacy gap score of more than one, indicating that minimum expectations were clearly exceeded. Low satisfaction was defined as a mean adequacy gap score of less than zero, indicating that minimum expectations were not being met, together with a superiority gap score of less than minus one, indicating that desired expectations were clearly not met.
Initially all responses meeting the research criteria were reviewed, except for those from library staff, who traditionally have different expectations than the other customer groups. The analysis was then replicated with the results from the different customer groups separately: undergraduates, postgraduates, staff and academic staff.

Findings
For all customer groups the findings showed a pattern in satisfaction levels with regard to the Information Control dimension of the LibQUAL+ survey, and a similar pattern in dissatisfaction levels with regard to the Affect of Service dimension. Respondents with high satisfaction mean scores (as defined above) in the Information Control dimension were found to have the highest overall average perceived scores, indicating that they are the most satisfied customers. When reviewing the surveys with low satisfaction mean scores in the Affect of Service dimension, it was discovered that these respondents had the lowest overall average perceived scores, indicating that they are the most dissatisfied customers. The findings show that both information resources and customer service affect the overall opinion of the library service for all customer groups.

Research Limitations and Implications
The implication of the research is that good information resources can have as positive an effect on customers' opinions of the library as poor service from library staff can have a detrimental one. Nevertheless, any conclusions drawn from these findings should recognise that the research is limited to measuring service quality within the confines of the LibQUAL+ survey methodology. The research has not investigated the reasons for the commonality, nor do these averages say anything about the motivation of each individual respondent for giving these scores. Therefore, with respect to ideas for improvements, these averages can only be used as indicators of what should be emphasised.
Even within both groups, the satisfied and the dissatisfied customers, there remain considerable differences in what causes the (dis)satisfaction.

Preliminary Conclusions
Statistical analyses confirm that these findings hold for every user group. Therefore, for the library manager seeking to deliver a quality library service it will be important to take both of these factors into account and deliver information not only in a professional, but also in a helpful, manner.

Originality and Value of the Proposal
Although based on previous research, the extension of the analysis from an institutional level to an international consortium level strengthens the initial research conclusions. The findings, implications and conclusions are valuable to library managers seeking to improve customer perceptions of their library service, providing evidence of the factors that influence customers' opinions.

Keywords: LibQUAL+, Academic Library Performance, Customer Satisfaction.
Category: General Review.
Using LibQUAL+ to Identify Commonalities in Customer Satisfaction: The Secret to Success?

This paper presents international research into library customer satisfaction as measured by the LibQUAL+ survey methodology. Building on research initially conducted at Utrecht University in 2012, the research has sought to identify commonalities in customer satisfaction and dissatisfaction. LibQUAL+ is a library service quality survey instrument developed by the Association of Research Libraries (ARL) in association with Texas A&M University (Association of Research Libraries, 2011). The survey consists of 22 standardised questions on library services which are used internationally, allowing libraries to benchmark their performance against one another. One of the key strengths of LibQUAL+ is its use of gap theory to evaluate customer expectations as well as perceptions. For each of the 22 questions respondents are asked their minimum and desired expectations, along with their current perceived level of service, on a nine-point scale. Alongside the minimum, desired and perceived scores two further scores are calculated to aid the evaluation of the library: service adequacy and service superiority. The service adequacy score is calculated by subtracting the minimum score from the perceived score. A positive adequacy score indicates that the customers' minimum expectations are being exceeded, whereas a negative adequacy score indicates that the library is failing to meet minimum expectations. Similarly, the service superiority score is calculated by subtracting the desired score from the perceived score. A positive superiority score indicates that the library is exceeding customers' desired expectations, whereas a negative score indicates that desired expectations are not being met. The 22 questions are split into three dimensions of library service: Affect of Service (AS), Information Control (IC) and Library as Place (LP).
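The adequacy and superiority calculations described above can be sketched in a few lines of Python; the ratings below are invented for illustration and are not real survey data.

```python
# A minimal sketch of the LibQUAL+ gap-score calculation: both scores
# are differences from the perceived rating on the nine-point scale.

def gap_scores(minimum, desired, perceived):
    """Return (adequacy, superiority) for one question's 1-9 ratings."""
    adequacy = perceived - minimum      # negative: minimum expectations not met
    superiority = perceived - desired   # positive: desired expectations exceeded
    return adequacy, superiority

# A respondent rates minimum 6, desired 8, and perceives the service at 7:
adequacy, superiority = gap_scores(minimum=6, desired=8, perceived=7)
print(adequacy, superiority)  # 1 -1: minimum exceeded, desired not yet met
```

The two gaps are deliberately kept separate rather than combined into one number, since the analysis below uses them independently to define satisfaction and dissatisfaction.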
The Affect of Service dimension contains nine questions relating to the library staff, including their knowledge to answer customer questions and the level of customer service received. The Information Control dimension consists of eight questions covering the library resources and how easy it is to access information. The Library as Place dimension consists of five questions assessing the physical environment. For each dimension the scores are averaged, providing a dimension mean score for each of the five question scores: minimum, desired, perceived, adequacy and superiority. The survey is administered in two formats: LibQUAL+ Long, which asks all 22 core questions, and LibQUAL+ Lite, which presents 8 of the core questions to each respondent. LibQUAL+ Lite deploys a random sampling methodology, so that all 22 core questions are assessed at the participating library, but each individual respondent answers only a sub-set of the core questions. The research sought to identify whether there were any common factors in the scores received from satisfied and dissatisfied customers. The analysis used all of the scores received from individual respondents in the sample group. The dimension adequacy and superiority mean scores were used to identify respondents who were satisfied or dissatisfied with one of the dimensions. The scores from these sub-sets of respondents were then analysed to see if there were any commonalities in their scores for the other two dimensions. For example, if a customer was satisfied with the Affect of Service dimension, were they satisfied or dissatisfied with the other two dimensions? This analysis was then repeated for respondents who were satisfied with the Information Control dimension, and finally for those who were satisfied with the Library as Place dimension. Correspondingly, the same analysis was conducted for customers who were dissatisfied with one of the dimensions, with each dimension reviewed separately.
Methodology

This research used a sample of 33,820 surveys, of which 28,208 were from the SCONUL consortium, 3,761 from Leiden University (LU), and 1,851 from Utrecht University (UU). The UU surveys were in the long version, the LU surveys in the lite version, and the SCONUL surveys in mixed form. Firstly, the library staff surveys (n = 371) were removed since, as stated before, library staff traditionally have different expectations than the other customer groups. Secondly, all surveys with inconsistent responses (n = 5,346) were removed. An inconsistent response is one in which a respondent has rated their minimum service level higher than their desired service level (i.e. a score inversion), which is illogical. Ordinarily LibQUAL+ automatically filters out excessively inconsistent responses as part of its data screening. As this research used the raw survey data and specifically focussed on gap scores, using them to define (dis)satisfaction, it was necessary to remove all inconsistent responses. Thirdly, incomplete data sets were removed for respondents who had selected "not applicable" for all questions within one dimension (n = 1,057). Since this research compares the scores given by respondents across the three dimensions, it is important that every dimension had at least one rating; otherwise it is not known what the respondent might have said about the other two sections. Following this data screening the final number of surveys used for this research was 27,046, which represents 80% of the total sample.

Satisfaction and dissatisfaction

Satisfaction and dissatisfaction are defined by the average adequacy and superiority gaps for the three dimensions: Information Control (IC), Affect of Service (AS) and Library as Place (LP). In searching for definitions, several criteria were tested. Owing to the large sample group, the research had the freedom to apply a strict definition of satisfied and dissatisfied customers.
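The three screening steps above can be illustrated with a short sketch. The survey data structure and field names here are assumptions made for the example, not the actual LibQUAL+ data format.

```python
# Illustrative reconstruction of the data screening: drop library staff
# surveys, drop inconsistent responses (minimum rated above desired), and
# drop surveys with a whole dimension left "not applicable" (empty).

def screen(surveys):
    kept = []
    for s in surveys:
        if s["group"] == "Library Staff":
            continue  # staff traditionally have different expectations
        ratings = [r for dim in s["dimensions"].values() for r in dim]
        if any(r["min"] > r["des"] for r in ratings):
            continue  # score inversion: minimum above desired is illogical
        if any(len(dim) == 0 for dim in s["dimensions"].values()):
            continue  # one dimension answered entirely "not applicable"
        kept.append(s)
    return kept

undergrad = {"group": "Undergraduate",
             "dimensions": {"IC": [{"min": 6, "des": 8, "per": 7}],
                            "AS": [{"min": 5, "des": 7, "per": 6}],
                            "LP": [{"min": 4, "des": 6, "per": 5}]}}
staff = dict(undergrad, group="Library Staff")
inverted = {"group": "Postgraduate",
            "dimensions": {"IC": [{"min": 9, "des": 2, "per": 5}],
                           "AS": [], "LP": []}}

print([s["group"] for s in screen([undergrad, staff, inverted])])
# ['Undergraduate']
```

Applied to the real sample, these steps reduce the 33,820 surveys to the 27,046 used in the analysis.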
Satisfaction could simply be defined as an average perceived score exceeding the average desired score: the level of service is higher than desired expectations. However, in cases where the desired and minimum scores were close together, the respondents were not deemed to be highly satisfied customers. For example, an average minimum score of 6.7, an average desired score of 6.8, and an average perceived score of 6.9 yields a positive superiority score (indicating that desired expectations are being exceeded), but the minimum expectations are only just being met. Therefore a second criterion was chosen: the perceived level of service should not only exceed the desired level, but also clearly exceed the minimum. As shown in figure 1, satisfied respondents were defined as those for whom the dimension average adequacy gap score is larger than one and the average superiority gap score is larger than zero. Using the same arguments, dissatisfied customers were defined as respondents giving a dimension average adequacy gap score smaller than zero and an average superiority gap score smaller than minus one. This definition indicates that the respondent's average perceived score was below minimum expectations and more than one point lower than the desired level.
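These criteria can be expressed as a small classifier over a respondent's mean gap scores for one dimension; this is a sketch of the logic, not the authors' actual analysis code.

```python
# The (dis)satisfaction criteria of figure 1, applied to one dimension's
# mean adequacy and superiority gap scores.

def classify(mean_adequacy, mean_superiority):
    """Label one dimension response using the criteria in figure 1."""
    if mean_adequacy > 1 and mean_superiority > 0:
        return "satisfied"      # desired exceeded AND minimum clearly exceeded
    if mean_adequacy < 0 and mean_superiority < -1:
        return "dissatisfied"   # below minimum AND well below desired
    return "neither"

print(classify(1.5, 0.3))              # satisfied
print(classify(-0.4, -1.2))            # dissatisfied
# The 6.7 / 6.8 / 6.9 example above: adequacy 0.2, superiority 0.1
print(classify(6.9 - 6.7, 6.9 - 6.8))  # neither
```

Note how the borderline example from the text lands in "neither": its superiority gap is positive, but the adequacy gap of 0.2 falls short of the strict threshold of one.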
Figure 1: Conditions for satisfaction and dissatisfaction

Findings

The above definition of satisfaction was applied to the sample group for each dimension in turn, and the overall perceived score was reviewed for each sub-set, as shown in table 1. Respondents who were satisfied with the IC dimension yielded an average perceived score of 7.61. Applied to the LP dimension, the average perceived score was 7.44, and applied to the AS dimension it was lower still. Clearly, the positive IC group was the most satisfied group overall, which is shown in figure 2. This indicates that customers who are highly satisfied with the IC dimension are, on average, satisfied with all aspects of the library service.

Figure 2: Responses from the sample group with high satisfaction with the Information Control domain.

Next, applying the conditions for dissatisfaction, the average perceived score for the negative IC group was 6.05, for the negative LP group 6.15, and for the negative AS group 5.69. As shown in figure 3, the negative AS group was thus the most dissatisfied group overall, indicating that customers who are highly dissatisfied with the AS dimension are, on average, dissatisfied with all aspects of the library service.
Figure 3: Responses from the sample group with high dissatisfaction with the Affect of Service domain.

Comparing the average perceived scores per dimension, the highest single score was for the IC dimension in the satisfied IC group, while the lowest single score was for the LP dimension in the dissatisfied LP group, even though the overall average perceived score of that group was not the lowest; the negative AS group had the lowest overall average. Respondents who were very negative about the Library as Place were not so negative about the IC and AS dimensions, raising the average scores of the negative LP group.

Table 1: Perceived mean scores for each sample sub-group (satisfied and dissatisfied with IC, LP and AS respectively) for each dimension, together with the overall average per group.

Next, the same analysis was performed for the different user groups who responded to the survey: undergraduates, postgraduates, academic staff, and staff. Surprisingly, although the values differed for each group, the outcomes were always the same. Customers who were satisfied with IC were the most satisfied overall, and those who were dissatisfied with AS were the most dissatisfied overall. The highest score was in the IC dimension for the satisfied IC group, and the lowest score was in the LP dimension for the dissatisfied LP group. This held true for all user groups surveyed.

Statistics

Using the program Statistica 7.0, the results for the average perceived scores were checked for significance, to see if, for instance, the most positive group, the positive IC group, is also significantly the most positive group.
To see whether the most positive average perceived score, the 7.61 score of the IC dimension, is significantly higher than the second highest score, the 7.44 score of the LP dimension, Student's t-test was used, which determines whether two sets of data differ significantly from each other by estimating the statistical probability (p) that the sets overlap. Assuming normal distributions, the resulting p-value falls well below the significance threshold, confirming the statistical difference. For the lowest average perceived scores, the test was performed for the 5.69 score of the AS dimension and the 6.05 score of the IC dimension, again yielding a p-value well below the threshold.
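The paper used Statistica 7.0 for these tests; as a hedged illustration of the same kind of check, a two-sample (Welch) t statistic can be computed by hand on invented perceived scores. The data below are not the survey data, and the group labels are only illustrative.

```python
from statistics import mean, stdev
from math import sqrt

# Two small invented samples of perceived scores on the 1-9 scale
group_a = [7.8, 7.5, 7.9, 7.6, 7.4, 7.7]   # e.g. one sub-group's scores
group_b = [7.3, 7.5, 7.2, 7.6, 7.4, 7.1]   # e.g. another sub-group's scores

# Welch's t statistic for two independent samples:
# t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
na, nb = len(group_a), len(group_b)
va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
t = (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

print(round(t, 2))  # 2.78, above the two-tailed 5% critical value (about 2.23 at 10 df)
```

With the very large sub-groups in the actual study, even small differences in means produce large t values, which is why the reported differences come out clearly significant.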
Clearly, mainly owing to the large number of surveys, the results of these significance tests were in order. A second set of tests was performed to see whether the most positive and negative average perceived scores per dimension differed significantly; the results can be seen in figure 4. In these graphs the means and 95% confidence intervals are shown; red lines represent the IC dimension, green lines the LP dimension, and blue lines the AS dimension. The codes are defined as follows.
Code -1: the first set of values on the left in each graph represents the excluded surveys. These values are the same for the three graphs, since they were excluded before the selections were made.
Code 1: the second set of values is, in the left graph, from the group selected for negative scores for the IC dimension, representing their average perceived scores for the selected IC dimension together with their scores for the other two dimensions, and likewise in the middle and right graphs for the LP and the AS dimensions, respectively.
Code 0: the third set of values is from the groups of respondents who are, according to the criteria, neither satisfied nor dissatisfied.
Code 2: the fourth set of values is generated similarly to Code 1, but now for positive scores.

Figure 4: Graphs of the selections for positive and negative average perceived scores for the three dimensions (means and 95% confidence intervals; IC = red, LP = green, AS = blue)

In the three graphs of figure 4 each coloured line, consisting of three segments, represents the full set of surveys; in total six different selections were made from the whole set of surveys: three positive selections and three negative ones. For each of the six selections the average perceived scores are plotted in one colour: red for the IC selections, green for the LP selections, and blue for the AS selections.
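The means and 95% confidence intervals plotted in figure 4 can be sketched as follows for a single group's perceived scores. The data are invented, and the normal-approximation multiplier 1.96 is used for simplicity rather than the exact t multiplier appropriate for small samples.

```python
from statistics import mean, stdev
from math import sqrt

# Invented perceived scores for one sub-group (1-9 scale)
scores = [7.2, 7.8, 7.5, 7.9, 7.4, 7.6, 7.3, 7.7]

m = mean(scores)
# Half-width of the 95% confidence interval for the mean:
# 1.96 * (sample standard deviation) / sqrt(n)
half_width = 1.96 * stdev(scores) / sqrt(len(scores))

print(f"{m:.2f} +/- {half_width:.2f}")  # 7.55 +/- 0.17
```

When two such intervals do not overlap, the corresponding means differ significantly at roughly the 5% level, which is the visual reading applied to the segments in figure 4.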
This figure shows the groups formed by selecting the responses for a specific dimension, with the other lines representing the answers those respondents gave for the two other dimensions. In the left graph the positive and negative responses for the IC dimension were selected, in the middle graph the responses for the LP dimension, and in the right graph the responses for the AS dimension. In the left graph, the blue AS group and the green LP group are plotted alongside the red IC group; likewise in the middle and right graphs the other two dimensions are plotted alongside the green LP group and the blue AS group, respectively. Therefore, in the left graph the second set coincides with the dissatisfied IC data in table 1 and the fourth set coincides with the satisfied IC data in table 1; in the middle graph the second set coincides with the dissatisfied LP data of table 1, and so on. In these graphs it can be seen that in the negative selections the LP dimension always has the lowest scores, although not entirely significantly so in the right graph. These lowest scores can indeed be recognised in table 1, where for the negative selections the LP dimension has the lowest score each time.
In the same way the scores for the various roles, and the overall average perceived scores for the six selections (represented in the first column of table 1), were checked, and each time the main conclusions held: the positive IC groups consist of the overall most satisfied customers, and the negative AS groups consist of the overall most dissatisfied customers.

Conclusions

Librarians have long been comfortable with the concept that the more information available at a library, the better the library must be. Indeed, the earliest forms of library assessment centred on the size of a library's collection. Performance measurement has moved on since those days; however, this research has shown that the human element is a vital component of delivering a good quality library service. It was found that for each role (undergraduates, postgraduates, staff and academic staff) the same conclusions hold: if respondents are satisfied with the level of service in the IC dimension, they are satisfied with the library overall, and if they are dissatisfied with the level of service in the AS dimension, they are dissatisfied with the library overall. Since these findings hold for all customer groups, the main objective of the library should be to target service improvements in the areas measured by the Information Control dimension, as a positive level of service in these areas increases satisfaction with the entire library. Alongside this, the research has identified that the interaction with and support from library staff also play a significant role in customers' perception of the library service. Should customers receive a poor level of service from the library staff, it is likely to affect their view of the entire library service. In order to meet customer needs it is therefore vital to improve both the Information Control and Affect of Service elements of the library service.
Strikingly, although the scores for the Library as Place can be very low, these are not reflected in the overall positive or negative scores; the LP dimension has the least impact on overall satisfaction or dissatisfaction.
Author Details:

Surname: Killick. Forenames: Selena. Title: Mrs. Employing organisation: Cranfield University. Position: Library Quality Officer.
Surname: van Weerden. Forenames: Anne. Title: Ms. Employing organisation: Utrecht University. Position: Information Specialist.
Surname: van Weerden. Forenames: Fransje. Title: Dr. Employing organisation: University of Groningen. Position: AI Student.

Autobiographical note:

Selena Killick is the Library Quality Officer at Cranfield University Libraries, with responsibility for the analysis of customer feedback and library performance data. She regularly undertakes research to inform service improvements. This includes the development of models to guide planning and evaluation of library resources and services, the implementation of the LibQUAL+ survey within the University, and the management of the Cranfield University SCONUL Statistics submission. Selena is also retained by the Association of Research Libraries (ARL) to support European consortia and wider international participation in the LibQUAL+ survey methodology, and regularly analyses survey data, presents, publishes and advises libraries internationally on library assessment. Recent research has included the evaluation of electronic resources through usage metrics, a qualitative assessment of customer needs and expectations, and the development of a value and impact framework for Cranfield Libraries.

Anne van Weerden is information specialist for Mathematics and Geosciences at the Utrecht University Library, and is also a physics student. She is a member of the "Loket gebruikersonderzoeken" (help desk for user surveys), where she is responsible for performing user surveys: collecting, processing and analysing the data. She writes the reports and contributes to the internal and external communication.

Fransje van Weerden has extensive experience with statistics, having completed a master's degree in biology. She has worked for 15 years as an informatician and medical statistician, and is now a master's student in Artificial Intelligence.
Quality metrics in academic libraries: Striving for excellence Leoné Tiemensma Midrand Graduate Institute South Africa email@example.com Outline of paper 1. Quality management and Quality Assurance in academic
IBM SPSS Statistics How to get more value from your survey data Discover four advanced analysis techniques that make survey research more effective Contents: 1 Introduction 2 Descriptive survey research
Commissioning and value for money: guidance and tools link This tool is intended to help to support the development of more effective commissioning practice taking account of Value for Money VfM requirements.
1 Report of the Faculty Senate s 2013 Faculty Job Satisfaction Survey: Quantitative Analysis Prepared by Michael Smilowitz, Ph.D. September 18, 2013 2 Key Points of the 2013 Faculty Satisfaction Survey
1 IR Report Non-returner Survey: Why Students Leave Laura Ariovich and W. Allen Richman Prince George s Community College 310 Largo Rd Largo, MD 20774 Abstract Prince George s Community College has regularly
Volume 22 Issue 2 Article 5 1997 The importance of assessment procedures to student learning outcomes in religious education Phillip Cox Edith Cowan University John Godfrey Edith Cowan University Recommended
PARTNERSHIP WORKING: KEY ISSUES AROUND EVALUATION Office of the Chief Researcher Scottish Executive July 2002 abc This report is available on the Scottish Executive Social Research website only www.scotland.gov.uk/socialresearch.
An analysis of collaborative journal article authorship at New Zealand universities Trends in inter-institutional res ah collaboration by New Zealand universities Tertiary education occasional paper 2013/01
IMPACT: International Journal of Research in Business Management (IMPACT: IJRBM) ISSN(E): 2321-886X; ISSN(P): 2347-4572 Vol. 2, Issue 3, Mar 2014, 1-8 Impact Journals A COMPARATIVE STUDY OF WORKFORCE DIVERSITY
PRELIMINARY ITEM STATISTICS USING POINT-BISERIAL CORRELATION AND P-VALUES BY SEEMA VARMA, PH.D. EDUCATIONAL DATA SYSTEMS, INC. 15850 CONCORD CIRCLE, SUITE A MORGAN HILL CA 95037 WWW.EDDATA.COM Overview
UNDERSTANDING LINEAR ALGEBRAIC EQUATIONS VIA SUPER CALCULATOR REPRESENTATIONS Ye Yoon Hong, Mike Thomas & Oh-Nam Kwon The University of Auckland Ewha University For many students, understanding of linear
IMPLEMENTATION NOTE Subject: Category: Capital No: A-1 Date: January 2006 I. Introduction The term rating system comprises all of the methods, processes, controls, data collection and IT systems that support
Satellite Broadband: A Global Comparison A report for nbn 28 th April 2016 For further information, please contact: Craig Skinner, Principal Analyst, Consumer Contents Executive summary... 3 Report approach
USING INTERNET-BASED SIMULATIONS IN HOSPITALITY EDUCATION: BRIDGING THE GAP Alecia Douglas University of Delaware Delaware, USA Brian Miller. University of Delaware Delaware, USA and Francis Kwansa. University
AUSTRALIAN TELEVISION COMMERCIAL (TVC) PRODUCTION COMPANIES SURVEY BY THE AUSTRALIAN FILM COMMISSION (AFC) JANUARY 2001 INTRODUCTION Prior to 1992, virtually all commercials shown on Australian television
IGEM GUIDANCE NOTES TECHNICAL REPORT GUIDANCE www.igem.org.uk INTRODUCTION The Technical Report Option is a process that allows applicants to demonstrate their academic knowledge to either Bachelor level
Final Report Carried out by Sponsored by Contents 1 Introduction 2 Our approach 1 3 Executive summary 3 4 Involvement in corporate citizenship 5 5 Involvement in volunteering 11 6 Impact of volunteering
VLE Pilot Project Case Study Time Management E-Learning Development Team Dr Eleanor Loughlin, Graduate Training Unit October 2006 Contents Contents...2 Overview...3 Background...4 Description of approach...4
CIPS Australasia The Seven Habits of Highly Effective CPOs A new study of practitioners identifies a set of behaviours that distinguishes the best. It offers a realistic and compelling insight to how CPOs
Quality. Expertise. Passion. Why you really need a SIAM Tooling Strategy To make multisourcing arrangements effective, customers must get suppliers to work together, both from the commercial and operational
A UK Usability Professionals Association guide - Key questions to ask your usability testing supplier How do you know if your usability supplier really knows what it s doing? More and more service providers
MICROSOFT APPRENTICESHIP PROGRAMME RESEARCH March 2015 EDELMAN BERLAND FOR MICROSOFT TABLE OF CONTENTS ABSTRACT... 3 EXECUTIVE SUMMARY... 4 BACKGROUND & METHODOLOGY... 6 Background to the Microsoft Apprenticeship
Tasmanian State Service Workplace Satisfaction Survey Results Prepared by Survey Matters On Behalf of the Community and Public Sector Union (CPSU) and Health & Community Services Union (HACSU) March 2013
290 Chapter 6 The Normal Distribution Figure 6 5 Areas Under a Normal Distribution Curve 34.13% 34.13% 2.28% 13.59% 13.59% 2.28% 3 2 1 + 1 + 2 + 3 About 68% About 95% About 99.7% 6 3 The Distribution Since
Component of Statistics Canada Catalogue no. 11-522-X Statistics Canada s International Symposium Series: Proceedings Article Symposium 2008: Data Collection: Challenges, Achievements and New Directions
This full text version, available on TeesRep, is the post-print (final version prior to publication) of: Redman, T. and Story, A. (2001) 'Do training agreements reduce failure rates in university-provided
1 National Psychology Week 2006 Health Behaviour Change survey conducted by The Australian Psychological Society The current climate in Australian health is one of increasing concern about the sedentary
ASSESSING PERCEPTIONS Using Education for the Future Questionnaires By Victoria L. Bernhardt Where did the questionnaires come from, what do they tell us, and are they valid and reliable? In 1991, while
Management Information Initial Teacher Training Performance Profiles: academic year 2012 to 2013 Date 23 October 2014 Coverage England Theme Initial Teacher Training Issued by Department for Education,
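The adequacy and superiority gap scores described above can be sketched in code. This is a minimal illustration only; the `Rating` class and the function names are assumptions for this example, not part of any official LibQUAL+ tooling. Ratings are assumed to lie on the survey's nine-point scale.

```python
# Minimal sketch of the LibQUAL+ gap scores, assuming ratings on a 1-9 scale.
# The Rating class and function names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rating:
    minimum: float    # minimum acceptable service level (1-9)
    desired: float    # desired service level (1-9)
    perceived: float  # currently perceived service level (1-9)

def adequacy(r: Rating) -> float:
    """Perceived minus minimum; negative means minimum expectations unmet."""
    return r.perceived - r.minimum

def superiority(r: Rating) -> float:
    """Perceived minus desired; positive means desired expectations exceeded."""
    return r.perceived - r.desired

# Example: perceived service falls between minimum and desired expectations.
r = Rating(minimum=6.0, desired=8.0, perceived=7.0)
print(adequacy(r))     # 1.0  -> minimum expectations met
print(superiority(r))  # -1.0 -> desired expectations not yet exceeded
```

In this example the positive adequacy score shows minimum expectations are being met, while the negative superiority score shows the service has not yet reached the desired level, mirroring the interpretation given in the methodology above.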