FYI LEADERSHIP
360 Multi-Source Multi-Rater Feedback: A General Overview
Revised: March 2014

Summary: This FYI contains information on the background and history of 360 multi-source feedback, things to consider when introducing it in the organization, and steps that can be taken to increase its effectiveness. For additional resources, please see the Additional Resources section at the end of this document.

Important Notice: The information provided herein is general in nature and designed to serve as a guide to understanding. These materials are not to be construed as the rendering of legal or management advice. If the reader has a specific need or problem, the services of a competent professional should be sought to address the particular situation. Copyright 2014 Mountain States Employers Council, Inc. All rights reserved. Quotation from or reproduction of any part of this report, in any form, without prior permission in writing from the Employers Council, is prohibited.

360 Multi-Source Multi-Rater Feedback: A General Overview

I. What is 360 Degree Multi-Source Feedback?

360 degree feedback is a process in which an individual (the "participant") rates himself or herself on certain competencies and is also rated by others at work (the "raters") on interpersonal and technical/functional performance. It is best to select a diverse cross-section of raters. Through their responses to the assessment questions, each rater can provide a different perspective on the participant's skills, attributes, and core strengths that lead to success, as well as on behaviors and characteristics that might impede it. At its core, a 360 feedback instrument and process are designed to create clarity about, and alignment between, how participants see themselves (their intentions) and how others experience them (their impact).

II. Brief History

360 degree multi-source feedback has evolved significantly since the concept was developed in the early 1900s, when psychologists began to investigate how best to hire and train employees, and subsequently how to evaluate job performance. Dr. Will Scott introduced the method of rating work performance prior to World War I, inventing what is termed the "man-to-man comparison scale," a scale used in rating the efficiency of Army officers. After World War II, advances in psychological research led to the creation of various rating sources and assessment techniques based on behaviors that led to either effective or ineffective job performance. One significant study of the predictive validity of peer ratings used Marine officers. The study found that peer evaluations were more valid predictors of officer candidate success than objective tests, and more valid predictors of future performance than supervisor ratings alone.
The University of Michigan Survey Research Center conducted groundbreaking studies in the 1940s that used survey results to better understand the impact of changing patterns in the supervisor/employee relationship. A two-year study involving supervisor and employee dialogue about how to enhance workplace performance found significant and positive changes in employee attitudes and perceptions compared to a control group. A 1959 study examined the development and use of a feedback tool for supervisors called "Rate Your Supervisor." This assessment gave supervisors a personal report showing how they rated both individually and as a group of leaders. The tool was geared toward self-development and consisted of 63 scale-ranking questions and five open-ended questions designed to elicit information about personal traits, work group performance outcomes, and interpersonal qualities.

In the 1960s and 1970s, additional research presented positive outcomes about, and created more momentum for, what was termed "multi-trait, multi-rater" measurement. The belief was that ratings from multiple sources would offer clear and tangible insights about the meaning of the responses as a whole. This was also the period when workplaces moved toward greater employee involvement, and participative management became more the norm, as compared to top-down or hierarchical management.

III. How is 360 Degree Feedback Used?

360 degree multi-source feedback has historically been used for development purposes, but it has also been used by some organizations as a performance appraisal tool. This article focuses on its use for development (personal and professional), since the majority of the literature and data indicates this is the most effective and successful application of a 360 degree feedback process.
A description of both follows, accompanied by a summary of the research that supports the use of 360 feedback for development rather than appraisal.
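
Whichever use an organization chooses, the raw material is the same: numeric ratings of the same person from several raters. As a rough, hypothetical illustration (the scores are invented, on a 1-5 scale), the interrater agreement discussed in the next section can be gauged with a simple spread statistic:

```python
from statistics import mean, stdev

def interrater_spread(ratings):
    """Mean and standard deviation of one item's ratings across raters.

    A large standard deviation (low interrater agreement) makes it hard
    to collapse the ratings into a single appraisal score; in a
    development context, the same disagreement is treated as information.
    """
    return mean(ratings), stdev(ratings)

# Six hypothetical raters score one behavior item on a 1-5 scale.
avg, spread = interrater_spread([4, 2, 5, 2, 4, 1])
# avg is 3.0; a spread of about 1.55 signals little consensus on this item.
```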

A. PERFORMANCE APPRAISAL

Research indicates that 360 feedback used as part of performance appraisal has mixed success at best. Some authors have argued against its use for administrative/appraisal purposes because it negatively affects the quality of ratings. In appraisal processes based on 360 feedback, the organization owns the data received and can use it in whatever manner the company determines. Consequently, the participant's focus tends to be on getting good ratings, since pay and promotion are attached to appraisal-based feedback. As such, the responses and results may not be indicative of true strengths and weaknesses.

360 feedback for appraisal purposes should, in theory, have a high degree of objectivity and reliability. The research, however, demonstrates that both are negatively impacted when results are tied to performance appraisal. Several reasons for this are:

- The boss administering the feedback has a stake in the outcome. As a result, the participant might be less comfortable reporting honestly about his or her perceived areas for development in the self-ratings.
- Interrater agreement (similar feedback among raters) is typically low, which impacts reliability. Low interrater agreement is problematic because the appraisal information must be consolidated into a more global judgment, and this is unlikely to happen given the need for diverse raters who are likely to have diverse experiences of the participant's behaviors and contributions. (This is not at issue when using 360 feedback for development.)
- The participant is apt to disregard negative feedback. When pay is attached to the feedback results, the focus becomes more about participants' fears and concerns about job status and compensation, and less about possible behavioral changes. A participant may try to explain away perceived negative feedback to maintain job security rather than be open to it and learn from it.
- Hesitance to make unpopular decisions. Participants in a leadership role may anticipate or be anxious about the upcoming feedback. As a result, they may avoid making important decisions for the company for fear the decisions could be perceived negatively by potential raters.
- Raters may be less truthful. If raters are aware that their feedback affects the participant's pay or position, they may not respond honestly, possibly giving mostly high ratings or mostly low ratings depending on their relationship with the participant and their desired outcome.

B. GROWTH AND DEVELOPMENT

The first characteristic that distinguishes development-based 360 feedback from appraisal-based 360 feedback is that in development-oriented processes, the data is the property of the person being rated. Participants typically determine who sees the data and have influence over how it might be used. This alone can increase the likelihood that the participant will self-report more honestly and take ownership for making behavioral shifts or changes to be more effective as a result of the feedback. Second, when feedback is used for development purposes, concerns about reliability and interrater agreement are less problematic, and rating differences may, in fact, be desirable. Receiving a mixture of perspectives from various raters can support the participant's development by allowing multiple interpretations of his or her success in the different roles he or she serves. In development-based feedback, inconsistencies in ratings are acceptable and regarded as informational.

IV. What Makes for a Good Instrument?

Role-specific Performance Dimensions - The feedback received is based on a specific set of performance dimensions (a.k.a. competencies) designed to match the level of leadership and responsibilities of the participant. The performance dimensions should be

comprehensive, encompassing technical skills, results expectations, personal mastery, and interpersonal/people skills. Dimensions should also match the level of leadership or contribution of the participant. There will be a specific set of behavior-based questions for each dimension or competency.

Subjectivity - The questions linked to the performance dimensions/competencies are behavior-based, which offers the participant clearer, observable data about the impact of his or her actions (be they conscious or unconscious) on the raters. This provides the participant and his or her coach (assuming one is used) clarity about specific areas of strength and areas for development. It also allows the participant to make connections among different dimensions and to see patterns in how his or her behaviors impact others and/or may be perceived. Since rater impact and rater perceptions are subjective, behavior-based questions give the participant some direction for aligning intended actions with their impact on others.

Objectivity - As noted above, objectivity of rater agreement is more critical in appraisal-based feedback. To obtain more objective responses in general, a few open-ended questions are included at the end of multi-rater feedback to offer detail and specificity that might not have come out in the earlier ratings. These questions typically fall under the scope of: (a) greatest strengths, (b) key areas to develop, and (c) other comments in general.

Anonymity - Anonymity is critical to receiving honest feedback from raters (and thus increasing the validity of the feedback). This is sometimes compromised in appraisal-based feedback. Anonymity is needed to create the safest environment possible for raters to offer their feedback without fear of recourse or retaliation, and without the expectation that favorable feedback might create a more favorable relationship. The 360 feedback should be structured so that there are enough raters in each category to allow for anonymity. The only rater who is not anonymous is the participant's direct supervisor.

Validity and Reliability - Following are a few, but not all, of the aspects that can impact validity:

- Rater selection. Is it diverse enough to capture the participant's impact on others in various situations?
- Type of instrument used. Development 360 feedback or performance appraisal 360 feedback?
- Supervisor/participant relationship. How visible is the participant's performance to his or her supervisor?
- Performance dimensions selected. Do they accurately represent the scope of work and expectations of the participant?

V. Top Five Pitfalls of 360 Degree Feedback

There are several actions to avoid when planning and implementing a 360 feedback process. Avoiding these top five can increase the likelihood of a successful experience and outcome for the participant.

- Poor communication and lack of clarity about the purpose of the 360 feedback for the participant(s) and prospective raters. Participants and prospective raters must be made aware of the intent of the process, the expectation of anonymity, and what might be done with the results.
- Receiving invalid data due to raters' (mis)perceptions about the intent and application of the results. This is most typically a pitfall in appraisal-based feedback.
- Repeating the same process over and over, too soon, and/or without support to better understand recurring themes and patterns.

- Assuming that participants will automatically develop new skills and patterns of behavior once they receive feedback. One way many organizations avoid this pitfall is to provide an executive coach to support the participant through the process of insight, awareness, and new action.
- Designing a 360 feedback process without expert consultation/coaching on how to determine competency dimensions and choose appropriate behavior-based questions. Choosing too many, too few, too verbose, or too generic questions can hinder the process and outcome. Additionally, participants are significantly more likely to understand, accept, and learn from their feedback results when professional coaching is part of the process.

VI. Why Does 360 Degree Feedback Work?

Participants who benefit the most from 360 feedback are typically well oriented to the process and invited to participate in order to enhance their growth and development. Entering the process with a positive outlook creates a more open mindset toward whatever the feedback offers. Through the 360 feedback for development process, participants better understand what the organization and its employees are looking for; this comes through the competencies and the specific behaviors associated with them. The process helps minimize the assumption-making and guesswork that often accompany the miscommunication, or lack of communication, that can be inherent in organizational relationships. 360 feedback, particularly when executive coaching is offered as well, provides the participant a safe atmosphere in which to learn about himself or herself and to look from different vantage points at the behaviors he or she displays both consciously and unconsciously. When administered and coached with kindness and clarity, participants can significantly increase their self-awareness and develop specific action plans to both leverage their strengths and reduce the blind spots that have impeded their success.

VII. The 360 Degree Feedback Process

The following steps serve as a guideline for effectively introducing and implementing 360 degree feedback.

STEP 1: ASSESSING ORGANIZATIONAL CLIMATE AND READINESS

Before implementing 360 feedback, it is helpful to consider the climate of the organization and its readiness for 360 feedback. The effectiveness of the process can be greatly influenced by employees' past experience with 360 assessment, perceptions and misperceptions, the timing of the initiative, and the level of trust in each other, the organization, and the process. Questions to consider include:

- Has the organization done 360 assessments in the past? What were the outcomes? How was it received by staff? What came about as a result of the process?
- What perceptions or misperceptions might staff have of 360 feedback? What concerns might people have?
- What considerations are there regarding timing? Given the number of participants and potential raters involved, how much time will the process involve collectively?
- How open to feedback are participants likely to be? How honest in giving feedback are respondents likely to be?

- To what degree is this initiative aligned with strategic objectives?

STEP 2: ORIENTING PARTICIPANTS TO THE 360 FEEDBACK PROCESS

For the 360 feedback process to be successful, participants must understand the intent of the initiative and be oriented to the process. This can be done one-on-one in an initial coaching session, or in a group session if there are multiple participants. It is helpful to have a key organizational stakeholder share why the organization is undertaking 360 feedback and specifically how it will be used, i.e., for developmental rather than appraisal purposes. To increase the likelihood that participants will be open to the feedback process, it is suggested that an outside or neutral in-house coach facilitate the process. The designated coach should orient employees to the concept of 360 feedback, i.e., that it acts like a four-way mirror and provides multi-perspective feedback. In a one-on-one coaching session, there is an opportunity for the coach to learn more about the participant, including his or her intentions for the process, goals and aspirations, challenges, concerns, and other contextual information. The coach should briefly discuss the competencies and skills measured by the 360 instrument and provide an overview of the 360 process, including selecting raters, completing the self-assessment, distributing surveys to raters, analyzing results, and creating a development plan.

STEP 3: SELECTING AND INVITING RATERS

While 360 participants typically have control over choosing their raters, it is helpful to provide them with guidelines for choosing raters appropriately. If the 360 instrument is used as a developmental tool rather than an appraisal tool, there is a greater incentive for participants to avoid choosing raters based on who will give them the best ratings. Participants should be encouraged to select a cross-section of raters, i.e., people with whom they have the best to worst working relationships.
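
These selection guidelines can be made concrete. The sketch below is hypothetical: the category names are illustrative, and the three-rater anonymity floor is an assumption for the example, not a rule stated in this article. It checks a proposed rater list against the 12-18 sample size discussed here and a minimum group size for every anonymous rater category:

```python
from collections import Counter

TARGET_SIZE = range(12, 19)   # the 12-18 rater guideline from the text
MIN_ANON_GROUP = 3            # assumed floor so no anonymous rater stands out

def check_rater_list(raters):
    """raters: list of (name, category) pairs, e.g. ("Pat", "peer").
    Returns a list of human-readable warnings; an empty list means it passes."""
    warnings = []
    if len(raters) not in TARGET_SIZE:
        warnings.append(f"{len(raters)} raters is outside the 12-18 guideline")
    for category, n in Counter(cat for _, cat in raters).items():
        # The direct supervisor is the only rater who is not anonymous.
        if category != "boss" and n < MIN_ANON_GROUP:
            warnings.append(f"only {n} rater(s) in '{category}': anonymity at risk")
    return warnings
```

A list with one boss, five peers, and two direct reports would draw two warnings (too small overall, and too few direct reports to stay anonymous), prompting the participant to broaden the sample before surveys go out.
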
In addition, they should seek a large sample, typically 12-18 people who see the participant in action. Ideally, there should be a variety of rater groups within the sample, i.e., boss(es), peers, direct reports, and others. Participants should be encouraged to personally invite their raters to take part in the process, share their intent for doing the 360 assessment, provide brief administrative details, and convey that they value the rater's perspective and would like candid feedback.

STEP 4: ORIENTING RATERS TO THE 360 FEEDBACK PROCESS

Because raters are typically unfamiliar with the 360 process, it can be beneficial to conduct a brief orientation for them as well; this increases the likelihood that they will provide useful feedback. As in the participant orientation, it is helpful to discuss the intent of the initiative, the nature of 360 feedback, anonymity, the 360 instrument, and the steps to completing the survey. It can also be helpful to discuss rater errors to avoid, i.e., halo, horn, compatibility, recency, and central tendency.

STEP 5: INTERPRETING THE 360 ASSESSMENT RESULTS

Because receiving 360 assessment results can be overwhelming and confusing, participants should work with a coach in a one-on-one setting, or with a facilitator in a group setting, to interpret their results. The coach or facilitator typically begins the session by providing an overview of the session and addressing questions or concerns. It can be helpful to discuss a range of potential reactions and how to manage defensiveness. In a workshop setting, the facilitator may walk through a sample report. Once the participant is oriented to the structure of the report, the coach works with the participant to interpret the data by discussing contextual issues, the importance of specific competencies to his or her job, areas of alignment and misalignment, feedback that is surprising or affirming, and themes regarding strengths and developmental opportunities.
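
The alignment and misalignment discussion lends itself to a simple worked example. In this hypothetical sketch (competency names and scores are invented), a participant's self-rating is compared with the average score from all other raters for each competency; a large positive gap suggests a potential blind spot, while a large negative gap suggests a possible hidden strength:

```python
def alignment_gaps(self_ratings, rater_scores):
    """self_ratings: {competency: self score};
    rater_scores: {competency: list of scores from all other raters}.
    Returns {competency: self score minus rater average}, rounded."""
    return {
        comp: round(score - sum(rater_scores[comp]) / len(rater_scores[comp]), 2)
        for comp, score in self_ratings.items()
    }

# Hypothetical report data on a 1-5 scale.
gaps = alignment_gaps(
    {"communication": 4.5, "delegation": 3.0},
    {"communication": [3.0, 3.5, 3.1], "delegation": [4.0, 3.8, 4.2]},
)
# communication: +1.3 (possible blind spot); delegation: -1.0 (hidden strength)
```

In a real debrief, a coach would look at these gaps per rater group (boss, peers, direct reports) rather than in aggregate, since different groups often experience the participant differently.
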
The coach should be facilitative rather than directive in his or her approach, and should ensure that the discussion addresses strengths as well as development areas. The discussion typically moves from big-picture themes to detailed behaviors and back to themes, so that the participant leaves the discussion with a deep understanding of his or her key talents to leverage and key development areas to enhance. Participants typically need

additional time to process the data. It can be helpful to provide a planning guide in which the participant reflects on specific questions and records insights.

STEP 6: DEVELOPMENT PLANNING

To avoid "dead-end" feedback, where no discernible change occurs, participants must construct a development plan. A concrete, written plan provides focus for growth and a platform from which to measure future success. Typically, the coach and participant start by identifying development priorities, including two or three key behavioral objectives and criteria for success. For each objective, the coach and participant explore developmental activities for achieving it, which might include behaviors to practice, behaviors to observe in oneself, behaviors to observe in others, training, reading, involvement in projects or tasks, and being mentored. This discussion should incorporate strengths the participant can capitalize on to reach his or her objectives. Together, the participant and coach should also identify people who can support the participant's development, as well as ways the participant can reflect on what happens as he or she experiments with new behavior.

STEP 7: DEBRIEFING WITH RATERS

Following up with raters also keeps the feedback alive. If approached with intention and openness, the participant can create a meaningful dialogue with his or her raters. This is an opportunity to share insights from the feedback and discuss future steps toward development. It is particularly helpful to follow up with boss(es) and direct reports. With direct reports, peers, and others, debriefing can occur one-on-one or in groups. The participant should begin by developing an agenda for the meeting and sharing his or her intent for following up. The participant should thank the rater(s) for taking part in the assessment process and remind them that the results were anonymous for everyone except the boss(es).
The participant can share the major findings, including themes from both strengths and areas for development, and discuss actions going forward. As appropriate, the participant can invite suggestions and request support.

VIII. Conclusion

When considering a 360 assessment initiative, it is very helpful to consider the intent for introducing it in the organization, how it will be used, the elements that make an instrument effective, and the steps that can be taken to increase its effectiveness.

IX. References

Creelman Research (2007). Making Multi-Rater Feedback Work in Professional Service Firms. Creelman Research and Halogen Software white paper.

Gray, et al. (1997-2000). 360 Feedback: Best Practice Guidelines. Chartered Institute of Personnel and Development; SHL; British Psychological Society; The Department of Trade and Industry; University of Surrey Roehampton.

Smither, London, and Reilly (2005). Does Performance Improve Following Multisource Feedback? A Theoretical Model, Meta-Analysis, and Review of Empirical Findings. Personnel Psychology.

Morgeson, Mumford, and Campion (2005). Coming Full Circle: Using Research and Practice to Address 27 Questions About 360-Degree Feedback Programs. Society of Consulting Psychology.

Van Hooft, Van der Flier, and Minne (2006). Construct Validity of Multi-Source Performance Ratings: An Examination of the Relationship of Self, Supervisor, and Peer Ratings with Cognitive and Personality Measures. International Journal of Selection and Assessment.

Wingrove (2008). Do 360 Mirrors Alone Really Produce Sustainable Make-Overs? What Actually Makes 360 Degree Feedback Work. Pilat white paper.

X. Additional Resources on this Subject

A. SEMINARS
- Coaching: Partnering for Performance
- Skillscope: A Multi-Rater Assessment Tool for Supervisors and Managers

B. REFERENCE MATERIAL
- FYI Coaching: A General Overview

C. MSEC SERVICES
- Conflict Resolution Training Services
- Workplace Coaching Services