Info-line: The How-To Reference Tool for Training & Performance Professionals
Issue 9508 | Management Development | Published by ASTD

How to Build and Use a 360-Degree Feedback System
Issue 9508: How To Build and Use a 360-Degree Feedback System
Author: Warren Shaver, Jr.
ASTD, 1640 King Street, Alexandria, VA 22314
Series: Management Development

Contents
What Is 360-Degree Feedback?
The Instrument
Who Can Do It?
Buy or Build?
Setting Up the System
  Step 1. Design and Plan the Process
  Step 2. Design and Develop the Tool
  Step 3. Administer the Instrument
  Step 4. Process and Report Feedback
  Step 5. Plan Responses to the Feedback
Pitfalls
Benefits
References & Resources
Job Aid

Editorial staff for 9508: Editor, Barbara Darraugh; ASTD Internal Consultants, Inta Berzins and Michele Brock. Revised 1998: Editor, Cat Sharpe; Contributing Editor, Ann Bruen; Designer, Steve Blackwood. Reprinted 2000.

Come visit Info-line on the ASTD Web site: www.astd.org

Info-line is a series of how-to reference tools; each issue is a concisely written, practical guidebook that provides in-depth coverage of a single topic vital to training and HRD job performance. Info-line is available by subscription and single-copy purchase.

Printed in the United States of America. Copyright 1995, 2000 ASTD. All rights reserved. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or information storage and retrieval systems) without the express written permission of the publisher. The material appearing in the Job Aid is not covered by the copyright and may be reproduced and used at will.
What Is 360-Degree Feedback?

Basically, a 360-degree feedback evaluation is a questionnaire that asks people who work with a manager (superiors, direct reports, peers, and internal and external customers) how well that manager performs in any number of behavioral areas. These raters should know the manager, or ratee, and they should have opinions that the organization respects. Sometimes the manager will also want to rate himself or herself as a sort of personal benchmark.

The rationale behind such a broad and well-rounded evaluation is simple. Upper management does not always see aspects of a manager's performance that others in the organization see. And a manager failing in those performance areas is probably also hurting the bottom line. For example, a manager who browbeats workers into making production deadlines may stay on schedule, but the fast-paced environment and disgruntled employees may make quality suffer. Or a manager who is the workers' friend may avoid personal conflict, but his or her lack of critical feedback may leave workers feeling adrift.

Possible reasons to use 360-degree feedback evaluation include the following:

Helping managers with their personal and professional development. It is easy to miss our own faults, but understanding those faults can improve our performance and our careers.

Providing input for performance appraisals. This is controversial, but human resource development (HRD) professionals are looking for ways to make the link between 360-degree evaluations and performance appraisals stick.

Helping in an organization's succession planning. The detailed reports generated in a 360-degree evaluation make it easier for an organization to match a manager's skills with a particular job or function.

Helping facilitate organizational change. Multirater feedback systems can make sure that managers align themselves with the organization's strategies and values.

Why Use It?

"I could do my job better if only my boss would (or wouldn't)..." Who couldn't fill in that blank on almost any given day? But how many of us actually tell our bosses what should go in that blank? Multirater (or 360-degree) feedback gives us a chance to do just that. We can evaluate our managers, subordinates, and peers anonymously, honestly, and thoroughly. And we can learn from being rated ourselves by those very same people. People work together better. The bottom line improves.

It sounds simple enough, but this is a subtle system that can easily be misused, if not abused. Personality conflicts can take over the evaluation if the process is not carefully designed. And everyone must trust the system for it to work effectively. In addition, the technique can be very scary for ratees. Some of the collected information can be personal or even embarrassing. (It's hard to remember that the criticism is supposed to be constructive when half a dozen people have said you are "too disorganized.") Although the use of multirater systems has been increasing for years, we are still learning what their potential problems are, how to ensure their accuracy, and what their long-term effects on raters, ratees, and their organizations will be.

This issue of Info-line describes the ins and outs of 360-degree evaluations, tells you why they are so popular, and walks you step by step through an implementation process.

The Instrument

The instrument is a questionnaire of statements, questions, or behaviors that users rate along an assigned scale (for example, "very satisfied" to "very dissatisfied").
These items are usually grouped together under category headings, and they usually describe very specific actions managers do or should do. Most instruments also include space for open-ended responses. (See the example of such a feedback instrument below.)
Sample Instrument

Rate each of the statements below along the scale. Be sure to mark an answer for every statement.

Response scale: Very Satisfied | Satisfied | Somewhat Satisfied | Somewhat Dissatisfied | Dissatisfied | Very Dissatisfied | Not Applicable

Customer Oriented
How satisfied am I that my manager:
- understands the product or service well
- anticipates customer needs
- meets customer deadlines
- responds to complaints or problems quickly
- answers phone calls and correspondence promptly

Communicates Well
How satisfied am I that my manager:
- seeks feedback
- listens well
- expresses himself or herself well verbally
- expresses himself or herself well in writing
- uses constructive criticism

Respects Individuals
How satisfied am I that my manager:
- helps with my professional development
- keeps promises
- is open to different opinions
- is fair
- supports a balance between work life and home life
Collected feedback can range from "He stands too close and talks too loud" to "She doesn't understand how we work." Rarely is the feedback easy for managers to take. Some managers may even want to discount the feedback as being only someone's opinion. But whether the feedback is objective or subjective, it is still what somebody thinks. That alone should make a good manager want to address the problem.

Who Can Do It?

Any organization can conduct 360-degree evaluations. Smaller organizations, however, will have trouble getting enough responses for accurate reporting and will probably have trouble keeping the responses anonymous.

Another consideration is the organization's culture. An open, participatory culture used to continuous improvement and change will be more successful with a 360-degree evaluation. Hierarchical organizations characterized by internal competition and inflexibility will be less successful and less likely to stick with such an evaluation method. In either case, don't expect the system to work perfectly right away. It may take several years to build faith in the process and create a sense of trust among workers.

Finally, if your organization is reengineering or otherwise experiencing a change in its culture, wait awhile before moving to 360-degree evaluation. Give people a chance to adjust before throwing another new idea their way.

Buy or Build?

So, you like the idea. Now what do you do? Your organization can get its own multirater system in the following ways:

Buy
You can purchase a package off the shelf. Several hundred are currently on the market, and each has its own particular pluses and minuses.

Choosing an Off-the-Shelf Evaluation Tool

If your organization is trying to choose an off-the-shelf 360-degree feedback instrument, ask yourself the following questions before buying:

1. What is your organization's vision or mission? Does the model stress those ideas?
2. What management model does the instrument follow? Is it similar to your organization's model?
3. What part or parts of the manager's performance (for example, basic business knowledge, interpersonal skills, or product knowledge) do you want to measure?
4. Are the items clearly written? Will all raters understand the instrument and its instructions?
5. Is the length of the instrument appropriate? Will it require time to be set aside for raters to complete it?
6. Are the items appropriate to the ratee's management level? To the organization itself?
7. Does the instrument's design help ensure confidentiality? Is the instrument valid? That is, do you have evidence that the instrument can help managers improve their management skills?
8. Is the instrument reliable? Researchers Ellen van Velsor and Stephen J. Wall recommend that the instrument's technical report list, at a minimum: test/retest reliability of .4 or better; internal consistency between .65 and .85; and inter-rater agreement of .4 or better.
9. Does the feedback report help the manager interpret the data? Will he or she need additional assistance understanding it?
10. Does the instrument's feedback report lend itself to action planning?
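The reliability figures in item 8 are standard psychometric statistics, so a prospective buyer can spot-check them on pilot data before committing to an instrument. Here is a minimal sketch, in Python, of how the internal-consistency (Cronbach's alpha) and test/retest checks might be run; the pilot data and function names are purely illustrative, and the thresholds printed at the end are the van Velsor and Wall figures quoted above.

```python
# A hypothetical spot-check of the reliability thresholds in item 8.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency; items is a raters-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def test_retest(first: np.ndarray, second: np.ndarray) -> float:
    """Stability over time: correlation of the same raters' total scores."""
    return float(np.corrcoef(first, second)[0, 1])

# Hypothetical pilot data: 8 raters answering 5 items on a 1-6 scale, twice.
rng = np.random.default_rng(0)
pilot = rng.integers(1, 7, size=(8, 5)).astype(float)
retest = pilot + rng.normal(0, 0.5, size=pilot.shape)

alpha = cronbach_alpha(pilot)
stability = test_retest(pilot.sum(axis=1), retest.sum(axis=1))

# Thresholds recommended by van Velsor and Wall (see item 8 above).
print(f"internal consistency {alpha:.2f} (want .65 to .85)")
print(f"test/retest {stability:.2f} (want .4 or better)")
```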
Hire
You can hire a consultant to develop a customized multirater process. This may be relatively expensive, but the hands-on treatment ensures a good organization-evaluation fit. And some consultants will help with the ratee's action planning at the later stages of the process.

Build
You can construct a system internally. This can be a big step for some organizations. Think hard before jumping in. For more information about what is involved in building your own instrument, see the next section.

Combine
You can use a combination of any or all of these methods. This approach provides the most flexibility in terms of instrument design, time involved, and cost.

No matter where it comes from, a good 360-degree feedback system should be:

Reliable and consistent. If a manager doesn't change, the same raters' responses shouldn't change over time. Also, raters at the same level in the organization with the same level of contact with the ratee should have similar responses. Finally, scales within the instrument should be the same.

Valid. The instrument should actually measure what it claims to measure. Only actual reports of performance improvement in ratees who used the instrument provide true validity testing.

Easy to use. Every rater should be able to understand and answer every item.

An agent of positive change. The instrument should measure management behaviors that your organization values and desires. Unneeded feedback can confuse the issues by describing embarrassing or painful behaviors that do not affect job performance.

Setting Up the System

Let's assume your organization wants to develop its own feedback system. It's not easy. In fact, many consultants recommend not doing it yourself, simply because good 360-degree feedback systems are so new and so subtle, and because the outcome of a poor system can be so disastrous. Even if you decide to hire a consultant to build the evaluation system, however, it helps to know how to put one together. This knowledge allows you to spot potential problems at each of the five steps:

1. Design and plan the process.
2. Design and develop the tool.
3. Administer the instrument.
4. Process and report feedback.
5. Plan responses to the feedback.

Step 1. Design and Plan the Process

If your organization has decided to use 360-degree evaluations, then we can assume it has also decided who the ratees will be. Next, choose the raters. This is not as easy as it may appear. Although the technical definition of 360-degree feedback says the evaluation should include input from all levels (above, below, beside, and outside), such massive, full-circle systems create a number of problems when implemented:

Too Many Forms
Conducting a multirater feedback system is complicated enough without extra, unnecessary paperwork clogging the works.

Too Much Time
The more raters, the more staff-hours spent on the evaluation. Remember, some feedback forms can take an hour or more to complete.

Not Enough Knowledge
Some of the people in a full-circle evaluation may have limited contact with the ratee. Getting feedback from these people wastes everyone's time.
Of these three problems, the last is the most important. Feedback from people lacking specific knowledge of a manager's performance or work style can dilute the evaluation with hearsay and incomplete or inaccurate comments. Such feedback also wastes the organization's resources. Worst of all, poor feedback misleads the ratee and defeats the purpose of the evaluation.

For example, suppose your organization uses an open-call feedback system in which everyone in the organization is invited to rate the manager. Workers with strong personal feelings for that ratee may volunteer feedback, even though they've never worked for that manager. Their biases, either negative or positive, may cause the manager to adjust his or her behavior needlessly or even wrongly.

Using external customers is another tricky proposition. Some consultants say that a 360-degree evaluation isn't complete unless external customers are involved. But how can customers who may have never worked directly with a manager provide any useful feedback? One way around these problems is to label this feedback "external customer" or "no direct contact." Use such feedback as an addendum to the actual 360-degree evaluation; don't link it with a performance appraisal or future performance evaluations. Simply allow the manager to see this input to fill in the blanks in the evaluation.

Other things to watch out for while planning the process are as follows:

Fairness. Is everyone playing by the same rules throughout the organization? Do all parties feel the process is fair? Even perceived unfairness brings in poor results. It also gives the ratee an excuse to ignore the evaluation's final analysis.

Timing. Are the raters going to be present to fill out the forms, or are they on vacation or in training? Are there performance appraisals or other evaluation events that need to be linked to the 360-degree feedback?

Confidentiality. Does everyone perceive that the process maintains strict confidentiality? A study by David Antonioni found that feedback systems in which the raters are known to the ratee produce inflated ratings. This ultimately means poor performance improvement.

Step 2. Design and Develop the Tool

The best feedback systems aim at the future by showing the present. Remember this as you start to develop or choose your feedback instrument. A good way to focus on the future is to develop the instrument from the organization's vision. For example, if the organization focuses on teamwork, then the feedback instrument should also focus on teamwork. If the organization values some other behaviors, then the instrument should emphasize those behaviors. The feedback instrument may touch on a variety of areas or behaviors; all of them, however, should ultimately support the organization's vision.

If your organization plans on linking the feedback questionnaire to the manager's performance appraisal, consider using the appraisal's categories or headings in the questionnaire. This will focus your efforts and make the link between the two easier to manage.

You can use a facilitated group process to develop the actual questions. (See Info-lines No. 9406, "How To Facilitate," and No. 9407, "Group Process Tools," for more information on group process.) Most of the same people involved in the evaluation, ratees and raters alike, know what is and is not important in the position of manager. One easy way to build the list of behaviors is to create broad categories linked to the organization's vision, such as "customer first" or "teamwork."
Then break those headings into specific behaviors. For example, under "customer first" you can place behaviors such as "predicts customer needs" and "responds quickly to customer demands."
Once the behaviors are defined, you need to create a response scale. David W. Bracken says that satisfaction scales (such as "very satisfied" to "very dissatisfied") produce more helpful results than frequency scales (such as "always" to "never"). Bracken also recommends using six response choices. This is enough to measure subtle improvement over time and to prevent raters from taking the easy, middle-of-the-scale response. Also, provide a separate "don't know" or "not applicable" answer. This will ensure the feedback accurately reflects the feelings of the raters.

Include a section for open-ended questions. Through these, managers can see why raters rated them the way they did and get clues on how to fix those problems. Bracken says the best way to use open-ended questions is to have the raters complete a sentence: "My manager should stop doing..."

Other issues to consider at this stage include the following:

Focusing on Behavior. Does the evaluation ask about behaviors or personality traits? Behaviors are specific and can be changed; personalities are often vague and probably cannot be changed easily.

Buy-In. Do raters and ratees feel that the listed behaviors are the most important ones?

Looking Toward the Future. Do the listed behaviors include not just what managers do now, but also what they should be (or will be) doing in the future?

Length. Is the questionnaire too long? Workers probably won't complete one that takes more than 15 minutes to finish. And incomplete and uncompleted instruments mean compromised results.

(A minimal sketch of this instrument design in code appears at the end of step 3, below.)

Step 3. Administer the Instrument

Now you have to decide who will answer the questionnaires and how to get all potential raters into the process. Pencil and paper is the simplest and most popular way to deliver a feedback instrument. Its downside is the difficulty of dealing with mounds of paper and the time it takes to enter all that data electronically. Electronic data collection instruments (such as online, fax, electronic meeting support, or telephone) solve those problems, but they can create new ones if the raters either cannot use or are uncomfortable with these technologies. No matter which methods you use, make sure the raters feel that their answers are given in confidence. (See Info-line No. 9507, "Basics of Electronic Meeting Support," for more information on this topic.)

To ensure that all chosen raters finish the questionnaires, set aside a few minutes of the workday for people to do their feedback forms. Even better, provide a time and place for delivering the instrument, and have an administrator on hand to answer questions and push for completed forms. At the very least, include a policy and intent statement with the form, so all raters know what is being done, why it is being done, and how to do it. Finally, if the raters are to mail in completed forms, provide postage and a return envelope.

Other things to watch out for include the following:

Accurate Coding. Does the rater know who he or she is rating? Preprinting the ratee's name or code on all forms helps ensure that a simple handwriting mistake can't misdirect data to the wrong ratee.

Incomplete Participation. Did you forget to include any appropriate raters from off-site, other shifts, or other departments?

Too Many Forms. Are you going to drown in a sea of returned questionnaires? Limit the number any particular manager will get, but don't just set a random number as the limit. Managers in larger departments should get more raters.
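As promised above, here is a minimal, hypothetical sketch in Python of how an instrument built along these lines might be represented: vision-linked category headings broken into specific behaviors, Bracken's six response choices plus a separate "not applicable" answer, and a sentence-completion prompt. None of these names come from any published tool; they simply pin down the design choices from step 2.

```python
# A hypothetical in-code representation of the step 2 instrument design.

# Six response choices (enough to show subtle change over time and avoid
# a lazy midpoint), plus a separate "not applicable" scored as None.
SCALE = {
    "very satisfied": 6,
    "satisfied": 5,
    "somewhat satisfied": 4,
    "somewhat dissatisfied": 3,
    "dissatisfied": 2,
    "very dissatisfied": 1,
    "not applicable": None,
}

# Broad categories tied to the organization's vision, each broken into
# specific, changeable behaviors (not personality traits).
CATEGORIES = {
    "customer first": [
        "predicts customer needs",
        "responds quickly to customer demands",
    ],
    "teamwork": [
        "shares credit with the team",
        "involves the team in decisions",
    ],
}

# Open-ended sentence-completion prompt, per Bracken's recommendation.
OPEN_ENDED = "My manager should stop doing..."
```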
Step 4. Process and Report Feedback

Once all the forms are turned in, you have to enter, compile, and report the data. This can be a daunting task, especially if all managers are rated at the same time. (This, by the way, is usually how it is done.) As stated before, one way to speed this process is to deliver the questionnaires electronically. Another way is to hire consultants to compile the data for you, even if your organization has developed its own instrument. Outside consultants also improve confidence that the process is fair, confidential, and legitimate.

As you collect responses, you have to decide how many are enough to generate a report. David W. Bracken recommends five as the ideal number, meaning you do not report any question with fewer than five responses. Of course, managers in smaller departments may not get that many. You can choose a smaller threshold, but be sure to inform all parties in those situations that confidentiality may be more difficult to maintain.

The report itself will usually include a list of the questions or behaviors and their scores. Another way to deliver the data is to report only category scores, rather than question scores. For example, report a score for the heading "customer first," not for "responds to customer needs quickly" and so forth. This method is less confusing for the ratee, but it also loses some of the subtlety and specificity of a full report. In any case, be sure that the questions or behaviors in each category are actually related to one another. Bracken recommends using a statistical (or factor) analysis at the beginning of the process to ensure questions belong under a particular heading. (See Info-line No. 9101, "Statistics for HRD Practice," for more information. A sketch of these reporting rules in code follows the glossary below.)

If people will be conducting a performance appraisal using data from the evaluations, be sure to instruct them explicitly on how to use that data. Many consultants are uneasy linking opinions to salaries. Clear guidelines can help guide all involved parties through this tricky area. (See the section titled "Performance Appraisals" for more information on this topic.)

360-Degree Glossary

Coaches: help ratees turn action plans into action. They are personal trainers for managers trying to change behaviors in a positive way.

Face validity: an informal measure of whether the instrument makes sense, is relatively easy to use, and at least seems to serve the organization's requirements.

Internal consistency: a statistical rating of how well a feedback instrument's items measure a particular category of behavior and whether that category has enough items in it.

Inter-rater agreement: a statistical measure of how ambiguous or open to interpretation the items in a feedback instrument are.

Multirater feedback: comes from a number of sources, rather than solely from an employee's manager. These sources can include peers, subordinates, managers, internal customers, or external customers.

Reliability: tells how well the feedback instrument can measure ratees' behaviors and whether the instrument is accurate and consistent.

Test/retest reliability: a statistical measure of a feedback instrument's stability over time. Look for this information in an instrument's technical report.

360-degree feedback: comes from a number of sources and includes feedback from all of these: peers, subordinates, managers, internal customers, and external customers. (The inclusion of external customers is controversial; they are often left out of the process.)
Upward feedback: comes only from subordinates (and sometimes internal and external customers).

Validity: tells whether the feedback instrument measures what it claims to measure and whether those items are worth measuring in the first place.
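Returning to step 4: the reporting rules described above (exclude "not applicable" answers, suppress any question with fewer than five responses, and optionally roll question scores up into category scores) can be expressed in a few lines. This sketch reuses the SCALE and CATEGORIES structures from the earlier example; the function name and signature are hypothetical, not part of any published system.

```python
# A hypothetical sketch of the step 4 reporting rules.
from statistics import mean

MIN_RESPONSES = 5  # Bracken's recommended reporting threshold

def build_report(responses, categories, scale):
    """responses maps each item to a list of raw answers, e.g.
    {"predicts customer needs": ["satisfied", "not applicable", ...]}.
    Returns per-item means plus per-category roll-ups."""
    item_scores = {}
    for item, answers in responses.items():
        # Drop "not applicable" answers so they don't distort the mean.
        scores = [scale[a] for a in answers if scale[a] is not None]
        if len(scores) >= MIN_RESPONSES:   # suppress thinly rated items
            item_scores[item] = mean(scores)
    category_scores = {
        heading: mean(item_scores[i] for i in items if i in item_scores)
        for heading, items in categories.items()
        if any(i in item_scores for i in items)
    }
    return item_scores, category_scores

# Example call, using SCALE and CATEGORIES from the earlier sketch:
#   items, rollups = build_report(collected_answers, CATEGORIES, SCALE)
```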
Keep an eye out for the following problems:

Inaccurate Transcription. Do your data entry methods ensure that the right answers go with the right ratees?

Editing. Are the open-ended questions being edited? (This may save space and time, but it can also delete crucial information and give the impression that responses are being censored.)

Slow Processing. Are questionnaires processed fast enough so you can include latecomers in the final report? This is especially important if you think some managers may not get enough responses.

Step 5. Plan Responses to the Feedback

After you have compiled all the data in a report, the rated manager must turn the questionnaire results into an action plan for change. This process is not automatic. The best-designed and best-written reports may lead the ratee to a conclusion, but actually deciding what must be done is the rated manager's task.

To formulate an action plan, the manager needs detailed data. The data must be specific, clear, and related to the rated manager. The data should also reflect behaviors or issues that the manager can control or act on. The manager should also have the appropriate data interpretation skills. Seeing the data is only a start. The manager has to turn the report into an action plan, and the organization has a responsibility to help him or her do that. David W. Bracken suggests three ways to do this:

1. Facilitators. One-on-one discussions are best for obvious reasons. Facilitators can tell managers how to read the evaluation report, tailoring the discussion to individual managers. They can help managers write their action plans. And they can help monitor improvements over time. On the downside, using facilitators can be more expensive, time-consuming, and (in cases where many managers are being rated at once) unwieldy.

2. Workshops. Group discussions can provide many of the same benefits as individual facilitators. The main problems with workshops include dealing with a number of very individualized problems at once and the possibility of sharing personal or embarrassing information with other managers.

3. Workbooks. Printed materials are cheaper than facilitators, can be used at will, and act as a permanent record of what the organization expects of all ratees. Workbooks can also be rigid, unclear, or incomplete, however. For exactly these reasons, most organizations use workbooks with some sort of facilitated discussion.

Because there may be a number of criticisms in the report, the manager should choose the one or two most important items on which he or she scored the lowest and focus the action plan on those items. Once the action plan is complete, the organization should point the manager to the appropriate training and development resources. A particularly helpful (and expensive) technique is personal coaching. These professionals hold a manager's hand and guide him or her through the entire plan.

The manager should then hold a meeting with as many of the raters as possible. He or she should share the results of the feedback report with these people, without justifying or making excuses for any criticized behavior. Have the manager describe the action plan, give timelines if possible, and explain why the chosen behaviors were marked for change. It is important for everyone involved in such a meeting to be nondefensive, nonconfrontational, open, and polite.

Finally, issues to watch out for in this last step include the following:

Poor Data. Does the manager have all of the data? Is it understandable? Does it measure behaviors the manager has control over?

Unfairness. Are all rated managers given the same access to help? If facilitators are used, don't forget about other shifts or off-site ratees.

Drifting. Is there a method to ensure ratees develop an action plan? This is particularly important if the only instruction managers get comes from workbooks.
Pitfalls

Multirater feedback systems do raise some concerns. Here are a few things to think about as you consider using one.

Too New
We don't know enough about the process to say how well it works; for example, how reliable is the raters' information, and does improvement in low-rated behaviors actually affect the bottom line? It is important to remember that 360-degree evaluations deal with real people's lives and personalities. You aren't simply measuring how many widgets their department can produce or how far under budget they are. You are rating these managers' core competencies, who they are.

Also keep in mind that multirater systems, like quality initiatives, will work only if everyone buys into the idea. If rated managers don't act, or if the organization doesn't follow through and provide HRD resources for these people, then they run the risk of hurting workers' morale and trust. In such situations, it would have been better for everyone if the multirater system had never been started.

Inexperienced Raters
Many participants in a 360-degree evaluation may be unskilled in evaluation or observation techniques. They may have to rely on the instrument's instructions, which could very well be incomplete, inaccurate, or nonexistent. Another problem is that multirater feedback is very memory dependent. The instrument's questions should cover the entire year, but a rater may either forget events that have happened or allow recent events to color perceptions of past events. For example, an employee who thought a manager was tough but fair may change that response if he or she was recently counseled for chronic tardiness.

External Customers
Should you include external customers in your 360-degree evaluation? Technically, a 360-degree feedback system has to include external customers' responses. But their participation raises several questions. Generally speaking, the customers' feedback rests on whether the organization has met their needs. An effective manager will have played some part in that customer service. But determining where or how the manager's influence comes into play isn't easy, especially if these customers had no direct contact with the manager. Also keep in mind that questions used by employee raters in the evaluation may mean nothing to an external customer: "My manager provides opportunities for me to develop on the job" is a useless criterion for a customer. To have a fair, accurate, and easy-to-analyze evaluation, every rater must use the same questions.

Look at your own organization. Do your customers have some direct contact with managers? Are the areas you are measuring customer oriented (for example, "knows nuances of the product") or more employee oriented ("helps me in my professional development")? The answers to these questions will tell you whether to include customer feedback in your evaluation, to put it in an addendum, or to ignore it.

Performance Appraisals
Although organizations usually use 360-degree and multirater feedback systems to give managers a picture of their management styles, more and more are looking for ways to tie the feedback to performance appraisals. This may seem sound on the surface, but linking people's perceptions to a manager's paycheck can raise concerns. What if personality conflicts tempt the employee into unfairly rating the manager? What if the manager, fearing a cut in pay, coerces his or her employees into delivering a better-than-deserved rating? Or what if that same manager acts as if he or she were in a popularity contest, perhaps ignoring organizational concerns in an effort to please subordinates? How can external customers fairly rate someone they may never have had any contact with?
Questions such as these may be slowing the rush to merge 360-degree evaluations with performance-appraisal systems, but they are not stopping it. One thing is certain: More research needs to be done in this area.

Companies Ask Around for Feedback

The use of full-circle feedback and other kinds of multirater systems is surging; more and more companies are asking for input on employees' performance from many different sources. Several factors contribute to the trend's increasing popularity in organizations:

Multirater systems complement other popular organization initiatives, such as empowerment, employee participation, organizational flattening, and teamwork.

Most traditional performance-appraisal systems emphasize the role of an employee's manager in performance management. Such systems are increasingly insufficient as feedback and development tools, because downsizing has increased managerial spans of control.

Other traditional means for creating organizational change through employee input, such as employee-attitude surveys, continue to fall short of expectations. Frequently, this is due to inadequate follow-through and a lack of accountability for subsequent action.

The creation of a new multirater instrument gives management the opportunity to operationalize a vision for success, through descriptions of behaviors that support organizational values.

Multirater systems feel more reliable than single-rater feedback systems. Multiple raters can provide a variety of perspectives; people generally assume that those viewpoints will add up to an accurate assessment of an employee's performance.

Adapted from Training & Development, June 1993. Copyright 1993, ASTD. All rights reserved.

Benefits

The manager and organization get the following benefits from a 360-degree evaluation:

News
A multirater feedback system can uncover expectations, strengths, or weaknesses the manager may never have thought of.

Global Perspective
Multirater systems provide varied information from different, usually untapped sources.

Standards
The feedback can become a performance benchmark in a manager's performance appraisal.

Accountability
The person being rated is responsible for improving his or her skills and behaviors.

Efficiency
A 360-degree feedback system is relatively inexpensive, simple to implement, and quick to complete.

Perhaps a better way to view 360-degree feedback is as a source of information that can make managers manage better. Researchers Joy Fisher Hazucha, Sarah A. Hezlett, and Robert J. Schneider determined that highly rated managers advanced further (in terms of pay) than lower-rated managers. If 360-degree feedback can improve management skills, then the technique can lead to career advancement.
References & Resources

Articles

Antonioni, David. "The Effects of Feedback Accountability on Upward Appraisal Ratings." Personnel Psychology, Summer 1994, pp. 349-356.
Antonioni, David. "Designing an Effective 360-Degree Appraisal Feedback Process." Organizational Dynamics, Autumn 1996, pp. 24-38.
Baumgartner, Jerry. "Give It to Me Straight." Training & Development, June 1994, pp. 48-51.
Bernardin, H. John, et al. "Attitudes of First-Line Supervisors Toward Subordinate Appraisals." Human Resource Management, Summer and Fall 1993, pp. 315-324.
Bracken, David W. "Straight Talk About Multirater Feedback." Training & Development, September 1994, pp. 44-51.
Budman, Matthew, and Berkeley Rice. "The Rating Game." Across the Board, February 1994, pp. 34-38.
Hazucha, Joy F., et al. "The Impact of 360-Degree Feedback on Management Skills Development." Human Resource Management, Summer and Fall 1993, pp. 325-351.
Kaplan, Robert E. "360-Degree Feedback PLUS: Boosting the Power of Co-Worker Ratings for Executives." Human Resource Management, Summer and Fall 1993, pp. 299-314.
LaMountain, Dianne. "Things Are Looking Up." Small Business Reports, June 1992, pp. 11-14.
Lee, Chris. "Performance Appraisal: Can We Manage Away the Curse?" Training, May 1996, pp. 44-59.
Lepsinger, Richard, and Anntoinette D. Lucia. "360-Degree Feedback and Performance Appraisal." Training, September 1997, pp. 62-70.
London, Manuel, and Richard W. Beatty. "360-Degree Feedback as a Competitive Advantage." Human Resource Management, Summer and Fall 1993, pp. 353-372.
Ludeman, Kate. "Upward Feedback Helps Managers Walk the Talk." HR Magazine, May 1993, pp. 85-93.
Ludeman, Kate. "To Fill the Feedback Void." Training & Development, August 1995, pp. 38-41.
McGarvey, Robert, and Scott Smith. "When Workers Rate the Boss." Training, March 1993, pp. 31-34.
Milliman, John F. "Companies Evaluate Employees From All Perspectives." Personnel Journal, November 1994, pp. 99-103.
Milliman, John F. "Customer Service Drives 360-Degree Goal Setting." Personnel Journal, June 1995, pp. 136-142.
Moravec, Milan, et al. "A 21st Century Communication Tool." HR Magazine, July 1993, pp. 77-81.
Moses, Joel, et al. "Other People's Expectations." Human Resource Management, Summer and Fall 1993, pp. 283-297.
Nowack, Kenneth M. "360-Degree Feedback: The Whole Story." Training & Development, January 1993, pp. 69-72.
O'Reilly, Brian. "360-Degree Feedback Can Change Your Life." Fortune, October 17, 1994, pp. 93-100.
Romano, Catherine. "Fear of Feedback." Management Review, December 1993, pp. 38-41.
Smith, Lee. "The Executive's New Coach." Fortune, December 27, 1993, pp. 126-134.
Tornow, Walter W. "Editor's Note: Introduction to Special Issue on 360-Degree Feedback." Human Resource Management, Summer and Fall 1993, pp. 211-219.
Tornow, Walter W. "Perceptions or Reality: Is Multi-Perspective Measurement a Means or an End?" Human Resource Management, Summer and Fall 1993, pp. 221-229.
van Velsor, Ellen, et al. "An Examination of the Relationships Among Self-Perception Accuracy, Self-Awareness, Gender, and Leader Effectiveness." Human Resource Management, Summer and Fall 1993, pp. 229-263.
van Velsor, Ellen, and Stephen J. Wall. "How To Choose a Feedback Instrument." Training, March 1992, pp. 47-52.
Wilson, Jane L. "360 Appraisals." Training & Development, June 1997, pp. 44-45.
Yammarino, Francis J., and Leanne E. Atwater. "Understanding Self-Perception Accuracy: Implications for Human Resource Management." Human Resource Management, Summer and Fall 1993, pp. 231-247.

Books

Langdon, Danny G. The New Language of Work. Amherst, Massachusetts: HRD Press, 1995.
Slater, Robert. Get Better or Get Beaten! Burr Ridge, Illinois: Irwin Professional Publishing, 1994.
Job Aid

Is Your Organization Ready for 360?

Before you jump into a 360-degree feedback process, ask yourself the following 10 questions. A "yes" answer on any one is a red flag. It doesn't mean you should abandon the idea, but carefully consider that issue as you develop your multirater system.

1. Is the organizational culture hierarchical rather than participatory?
2. Does the organizational culture force managers to compete with one another?
3. Are there continuous, long-standing personality conflicts in the organization?
4. Is there a lack of trust between managers and subordinates? Between managers? Between managers and upper management?
5. Do managers who will be rated have fewer than three subordinates each?
6. Do managers have little or no direct contact with external customers?
7. Is the organization in the midst of, or has it recently completed, reengineering or other large organizational change processes?
8. Do some people (at any level of the organization) not buy into the 360-degree feedback process?
9. Does the organization have hard-to-reach employees, such as those on other shifts, those at distant sites, or those who do not speak the organization's dominant language?
10. Have any other large-scale organizational development events, such as total quality management, been started at your organization, then abandoned later?

The material appearing on this page is not covered by copyright and may be reproduced at will.