The Kryptonite of Evidence-Based I-O Psychology




Industrial and Organizational Psychology, 4 (2011), 40-44. Copyright 2011 Society for Industrial and Organizational Psychology. 1754-9426/11

GEORGE C. BANKS AND MICHAEL A. MCDANIEL
Virginia Commonwealth University

The focal article (Briner & Rousseau, 2011) addresses at length the merits of systematic reviews for advancing evidence-based practice. In general, we agree with the authors, but we identify one major omission from the focal article that is critical for advancing evidence-based industrial-organizational (I-O) psychology and management: publication bias. Publication bias exists when the primary study results available to a reviewer differ systematically from all primary study results (McDaniel, Rothstein, & Whetzel, 2006; Rothstein, Sutton, & Borenstein, 2005). Publication bias typically causes systematic reviews to overestimate the magnitude of effects; thus, an ineffective management practice may be declared effective. Although systematic reviews (including qualitative studies and quantitative studies, such as meta-analyses) may demonstrate Superman-like qualities in their ability to advance cumulative knowledge and evidence-based practice, publication bias can have a devastating effect on the accuracy of those reviews, much as kryptonite has a debilitating effect on Superman. The focal article asks, "What is needed to make I-O psychology an evidence-based discipline?" Evidence-based practice is well established in other fields, particularly in medicine, which has long investigated how publication bias influences systematic reviews and limits evidence-based practice.

Correspondence concerning this article should be addressed to George C. Banks. E-mail: banksgc@vcu.edu. Address: Department of Management, Virginia Commonwealth University, 301 West Main Street, P.O. Box 844000, Richmond, VA 23284.
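The inflation mechanism is easy to demonstrate. The sketch below is a minimal simulation, not drawn from any real literature: every number (the true effect of .10, the common standard error, the 20% survival rate for nonsignificant results) is an illustrative assumption. Studies estimating a small true effect are censored so that nonsignificant results rarely survive, and the mean of the surviving studies overestimates the truth.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.10  # assumed small population effect
SE = 0.25           # assumed standard error, identical across studies
N_STUDIES = 2000

# Each study's observed effect is the true effect plus sampling error.
all_effects = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Illustrative censoring rule: significant results (|z| > 1.96) are always
# "published"; nonsignificant results survive only 20% of the time.
published = [d for d in all_effects
             if abs(d / SE) > 1.96 or random.random() < 0.20]

print(f"Mean of all studies:       {statistics.mean(all_effects):.3f}")
print(f"Mean of published studies: {statistics.mean(published):.3f}")
```

Under these assumptions the published mean is noticeably larger than the true effect, which is precisely how a review restricted to the available literature can declare an ineffective practice effective.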
We believe that the Society for Industrial and Organizational Psychology (SIOP), scientists, and practitioners, through collaboration, hold the key to overcoming publication bias. The following commentary concisely explains recommendations for preventing and evaluating publication bias to promote improved systematic reviews and evidence-based practice.

Causes of Publication Bias

Contrary to popular belief, the editorial review process is not the only cause of publication bias; there are multiple causes (Dickersin, 2005; Halpern & Berlin, 2005). Table 1 displays the most common ones, which fall into two primary categories: the first derives from publication issues, the second from access issues. It is not clear how prevalent publication bias is in I-O psychology. Studies have evaluated the presence of publication bias in test vendor data (McDaniel et al., 2006) as well as in racial differences in job performance (McDaniel, McKay, & Rothstein, 2006) and personality results (Tate & McDaniel, 2008); findings in these examples indicate the existence of publication bias.

Table 1. Sources of Publication Bias That Impede Evidence-Based Practice

A. Publication issues

The study is not submitted for one or more of the following reasons.

Decision of the author:
1. The study has a small sample size and a small magnitude effect
2. The results are statistically insignificant
3. The results are contrary to theory
4. The results are contrary to the trend of past research
5. The results are contrary to the position of the author (e.g., an advocate of personality testing in personnel selection has results indicating near-zero validity for a personality measure)
6. The author believes that the study has a limited chance of acceptance
7. The author never gets around to submitting the paper

Decision of the organization that owns the data:
1. The results present a liability to the organization (e.g., an employment test shows adverse impact)
2. The results are harmful to an organization's profits (e.g., a consultant has evidence that its product or services do not work)

The study is not accepted by a conference or journal:
1. The results are not sufficiently interesting to warrant publication
2. The study has a small sample size and a small magnitude effect
3. The results are statistically insignificant
4. The results are contrary to theory
5. The results are contrary to the trends of past research
6. The results are contrary to the position of the editor or reviewer (e.g., the editor/reviewer believes A to be true and the paper argues that A is not true)

The study is submitted and accepted:
1. The editor requests that some results be removed from the paper to save space
2. The editor requests that some results be removed from the paper because they are socially uncomfortable (e.g., the editor requests that sex-difference results be removed)
3. The editor requests that some results be removed because they are contrary to the editor's position (e.g., the editor believes A to be true and the paper argues that A is not true)
4. The author removes some results that are socially uncomfortable prior to submission or during a revision process
5. The author removes some results that are contrary to the author's position prior to submission or during a revision process
6. An author ignores requests for results not reported in the article

B. Access issues (identifying grey literature; Hopewell, Clarke, & Mallett, 2005)
1. A conference paper is presented but never distributed to researchers who request it
2. Studies are not identified, located, or retrieved in systematic searches
3. Researchers conducting a systematic review elect not to or are unable to translate studies published in foreign languages
4. An insufficient number of studies has been conducted and is available, resulting in a summary effect size that may be due to chance or genuine heterogeneity
5. Time-lag bias and Proteus effect: results that are statistically significant or interesting are published more quickly (Trikalinos & Ioannidis, 2005)

Another study explored the possibility of publication bias regarding the validity of customer service tests (Whetzel, 2006) and found little evidence of it. The best conclusion that can be drawn at this point is that the degree to which publication bias affects our scientific results and our evidence-based practice is unknown; most systematic reviews in I-O psychology do not address it. What is clear is that evidence-based practice cannot develop as illustrated in the focal article unless SIOP, scientists, and practitioners collaborate on systematic reviews that account for and minimize publication bias.

Steps I-O Psychology Can Take to Limit Publication Bias

We present five recommendations for SIOP, scientists, and practitioners that can limit the influence of publication bias on systematic reviews and, therefore, improve evidence-based practice.

Recommendation #1: Create research registries.

The creation of research registries to prevent and limit the influence of publication bias is not a new concept (Berlin & Ghersi, 2005). A research registry is an electronic database in which researchers register the studies that they plan to conduct, are in the process of conducting, or have already conducted. Research registries already exist in several fields that conduct evidence-based practice, such as education (e.g., the What Works Clearinghouse, established in 2002 by the U.S. Department of Education), social work (e.g., the Campbell Collaboration), and medical research (where there are many; some have argued too many). Research registries can aid systematic reviews by helping the reviewer locate studies that would otherwise likely be missing. Registries could also limit the frequency with which researchers replicate studies of near-zero population effects.
Such replications may be attempted when the previous studies showing near-zero effects were not published or were otherwise unavailable. Furthermore, registries can help facilitate collaboration and communication between researchers and practitioners. For example, one technique, prospective meta-analysis, specifies a priori that studies not yet underway or completed will be included in a future review. Scientists and practitioners can decide to collaborate and conduct a series of studies to be included in a prospective meta-analysis. This technique encourages standardization in research design as researchers collaborate on the variables included and the types of measures used. Registries can help diminish the influence of publication bias even if they are incomplete; a complete registry, although desirable, is not necessary to begin to prevent and mitigate publication bias (Berlin & Ghersi, 2005). We suggest that SIOP (and the Academy of Management) could lead the way by creating a registry of all papers submitted to their annual conferences.

Recommendation #2: Improve reporting practices of systematic searches.

From our perspective, a primary deficiency of I-O psychology meta-analyses lies in their literature reviews and the documentation of those reviews. Literature reviews need to be explicit, transparent, and replicable. Systematic reviews should fully report efforts to identify published and unpublished studies from multiple sources. Scientists should seek unpublished studies for inclusion in both qualitative and quantitative systematic reviews; this practice is common in traditional meta-analyses published in I-O psychology. However, scientists must consider all the causes of publication bias displayed in Table 1. The focal article noted that academics and practitioners are not mingling with each other in journals.
Collaboration on systematic reviews to obtain published and unpublished studies (e.g., technical reports and internal data) can lead to less publication bias and superior systematic reviews.
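A well-established diagnostic for detecting such selection is Egger's regression test, which regresses each study's standardized effect on its precision; when small, imprecise studies appear in the literature only if their effects are large, the intercept drifts away from zero. The sketch below is an illustration on simulated data only; the selection rule and every constant are assumptions chosen for demonstration, not estimates from any real literature.

```python
import random
import statistics

def egger_intercept(effects, ses):
    """Ordinary least squares of z = effect/se on precision = 1/se.
    An intercept far from zero suggests funnel-plot asymmetry."""
    z = [d / s for d, s in zip(effects, ses)]
    prec = [1 / s for s in ses]
    mz, mp = statistics.mean(z), statistics.mean(prec)
    slope = (sum((p - mp) * (y - mz) for p, y in zip(prec, z))
             / sum((p - mp) ** 2 for p in prec))
    return mz - slope * mp

random.seed(1)
# Hypothetical literature: true effect .10, standard errors vary by study.
ses = [random.uniform(0.05, 0.4) for _ in range(500)]
studies = [(random.gauss(0.10, s), s) for s in ses]

unbiased = studies
# Illustrative selection: imprecise studies (se >= 0.1) survive only
# when their standardized effect exceeds 1 -- a biased literature.
biased = [(d, s) for d, s in studies if s < 0.1 or d / s > 1.0]

for label, data in [("unbiased", unbiased), ("biased", biased)]:
    eff, errs = zip(*data)
    print(f"{label}: Egger intercept = {egger_intercept(eff, errs):.2f}")
```

In the unbiased set the intercept sits near zero; in the censored set it is strongly positive. Reporting such diagnostics routinely, as urged under Recommendation #3, is what lets a field learn how much bias its literature carries.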

Recommendation #3: Evaluate the presence and influence of publication bias.

Every systematic review should describe the researchers' attempts to evaluate the presence and influence of publication bias. Well-established techniques exist for doing so in both qualitative and quantitative systematic reviews. Given the limited scope of this commentary, we do not address specific methodological techniques (for a complete review of those methods, see Borenstein, Hedges, Higgins, & Rothstein, 2009; Rothstein et al., 2005). More method development is needed, particularly for evaluating potential publication bias in the presence of moderators or other sources of variance, such as differences across studies in measurement error and range restriction/enhancement. Nonetheless, all systematic reviews should fully report the results of their evaluation of publication bias; otherwise, we will never learn the extent to which publication bias is or is not a problem for our field.

Recommendation #4: Practice customer-centric science.

A recent paper by Aguinis et al. (2009) reviews how primary studies may be conducted and reported to address the science-practice gap. One recommendation is that researchers conduct a focus group after completing a study in order to identify the practical significance of the study's results. We propose extending this suggestion to systematic reviews. It is possible for publication bias to influence the results of a systematic review without changing the ultimate conclusion about practical significance. For example, publication bias may be found in test vendor data, such that the data overestimate the validity of a test that predicts counterproductive work behaviors.
In other words, a test with a population validity of .10 is offered as having a validity of .40. In some organizations, the practical significance of such a finding may diminish to the point that the organization would no longer use the test. However, in an organization where counterproductive work behavior can have huge financial (e.g., a bank) or social (e.g., a hospital) consequences, a validity of .10 may still have practical significance. Therefore, the customer-centric approach advocated by Aguinis et al. may be applied to evaluating the influence of publication bias on systematic reviews and, therefore, on evidence-based practice.

Recommendation #5: Release unpublished studies.

Practitioners should cooperate with the attempts of scientists and other practitioners to conduct systematic reviews. Organizations can greatly aid both science and practice by releasing internal studies for systematic reviews, which would help to limit publication bias. If a technical report merits inclusion in a systematic review to be published, the researchers can agree to protect the identity of the organization that supplies it, and journal editors need to accept data cited in ways that maintain the confidentiality of the data source. Organizations should reward and recognize employees who provide research for systematic reviews.

References

Aguinis, H., Werner, S., Abbott, J. L., Angert, C., Park, J. H., & Kohlhausen, D. (2009). Customer-centric science: Reporting significant research results with rigor, relevance, and practical impact in mind. Organizational Research Methods, 13, 515-539.

Berlin, J. A., & Ghersi, D. (2005). Preventing publication bias: Registries and prospective meta-analysis. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments. Chichester, UK: Wiley.

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009).
Introduction to meta-analysis. West Sussex, UK: Wiley.

Briner, R. B., & Rousseau, D. M. (2011). Evidence-based I-O psychology: Not there yet. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 3-22.

Dickersin, K. (2005). Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments. Chichester, UK: Wiley.

Halpern, S. D., & Berlin, J. A. (2005). Beyond conventional publication bias: Other determinants of

data suppression. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 303-317). Chichester, UK: Wiley.

Hopewell, S., Clarke, M., & Mallett, S. (2005). Grey literature and systematic reviews. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment, and adjustments (pp. 48-72). West Sussex, UK: Wiley.

McDaniel, M. A., McKay, P., & Rothstein, H. R. (2006, May). Publication bias and racial effects on job performance: The elephant in the room. Paper presented at the 21st Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.

McDaniel, M. A., Rothstein, H. R., & Whetzel, D. L. (2006). Publication bias: A case study of four test vendors. Personnel Psychology, 59, 927-953.

Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005). Publication bias in meta-analysis: Prevention, assessment and adjustments. Chichester, UK: Wiley.

Tate, B. W., & McDaniel, M. A. (2008, August). Race differences in personality: An evaluation of moderators and publication bias. Paper presented at the Annual Meeting of the Academy of Management, Anaheim, CA.

Trikalinos, T. A., & Ioannidis, J. P. A. (2005). Assessing the evolution of effect sizes over time. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 241-259). West Sussex, UK: Wiley.

Whetzel, D. L. (2006, May). Publication bias of studies on the validity of customer service measures: A review of Frei and McDaniel's (1998) meta-analytic results. Paper presented at the 21st Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.