Changing Practice: Appraising Systematic Reviews. Evidence Based Practice Information Sheets for Health Professionals.




Supplement 1, 2000

Changing Practice
Evidence Based Practice Information Sheets for Health Professionals

Appraising Systematic Reviews

The series Changing Practice has been designed to support health professionals wishing to implement evidence based practice, and to complement the Joanna Briggs Institute series Best Practice.

What Are Systematic Reviews?

The volume of health care literature has increased dramatically over recent decades, and as a result health professionals are no longer able to keep up with all publications in their area of practice. In addition to this large volume, clinicians often have to deal with contradictory research results. This has made it difficult to ensure that clinical practice is based on reliable research, and has created the need for accurate accounts of past research. Systematic reviews have emerged to fulfil this need by providing comprehensive and unbiased summaries of the research on a single topic. These reviews bring together large numbers of individual studies in a single document, and as part of the review process this research is subjected to critical appraisal. Even when the research evidence is limited or nonexistent, these reviews summarise the current best evidence on the topic. They not only provide the best evidence for clinical decision making; they can also help determine future research needs.

This Sheet Covers the Following Concepts:
- What Are Systematic Reviews?
- Quality of Systematic Reviews
- Appraising a Systematic Review
- Components of Critical Appraisal Tools
- Review Question
- Searching for the Research
- Inclusion Criteria
- Critical Appraisal of Studies
- Data Synthesis
- Similarity of Studies
- Reporting and Recommendations
- Critical Appraisal of a Systematic Review

Quality of Systematic Reviews

Levels of Evidence
All studies were categorised according to the strength of the evidence, based on the following classification system [1]:
Level I - Evidence obtained from a systematic review of all relevant randomised controlled trials.
Level II - Evidence obtained from at least one properly designed randomised controlled trial.
Level III.1 - Evidence obtained from well designed controlled trials without randomisation.
Level III.2 - Evidence obtained from well designed cohort or case-control analytic studies, preferably from more than one centre or research group.
Level III.3 - Evidence obtained from multiple time series, with or without the intervention, or from dramatic results in uncontrolled experiments.
Level IV - Opinion of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.

As a systematic review is a scientific exercise, and will influence health care decisions, it should have the same rigour that is expected of all research. The quality of a review, and so its worth, depends on the extent to which scientific review methods were used to minimise the risk of error and bias. It is the use of these explicit and rigorous methods that distinguishes systematic reviews from traditional literature reviews. However, because the quality of published reviews varies considerably, it is necessary to appraise their quality, as is done for any research study, before the results are implemented in clinical practice. Much has been written on how best to appraise systematic reviews, and while there is some variation in how this is achieved, most authors agree on the key components of the critical appraisal. Table 1 summarises the most common questions asked when appraising these reviews.

supplement 1, page 1, 2000
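For reviewers cataloguing many studies, the classification above can be captured as a simple lookup table. The sketch below is illustrative only: the study-design labels are shorthand for this sheet's categories, and the function name is hypothetical, not part of any standard tool.

```python
# NHMRC-style levels of evidence as listed in this sheet, keyed by
# study design. The design labels are illustrative shorthand.
EVIDENCE_LEVELS = {
    "systematic review of RCTs": "I",
    "randomised controlled trial": "II",
    "controlled trial without randomisation": "III.1",
    "cohort or case-control study": "III.2",
    "multiple time series / dramatic uncontrolled results": "III.3",
    "expert opinion or descriptive study": "IV",
}

def evidence_level(design: str) -> str:
    """Return the level of evidence for a study design, or 'unclassified'."""
    return EVIDENCE_LEVELS.get(design, "unclassified")

print(evidence_level("randomised controlled trial"))  # II
```

In practice, assigning a level still requires judgement about whether a study was "properly designed"; a lookup like this only records the outcome of that judgement.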

Appraising a Systematic Review

Systematic reviews should be conducted with the same rigour as any research endeavour. Like primary research, these reviews follow a predetermined plan, which should be clearly documented. This documentation of the methods used means systematic reviews can be replicated by other reviewers. It also allows the review methods themselves to be subject to appraisal. The key components of the review process are:
- formulation of a review question(s)
- conducting a comprehensive search of the literature
- assessing studies for inclusion in the review
- critically appraising studies
- synthesising the findings from individual studies
- reporting results and recommendations

Table 1: Components of Critical Appraisal Tools
Based on published systematic review checklists, the following are the most common questions addressed during the critical appraisal of a review [2-7].

Question
- Is the specific purpose of the review stated?
- Is the review question clearly and explicitly stated?

Literature Search
- Were comprehensive search methods used to locate studies?
- Was a thorough search done of appropriate databases, and were other potentially important sources explored?

Study Selection
- How were studies selected?
- Are the inclusion criteria reported?

Critical Appraisal
- Was the validity of included studies assessed?
- Was the validity of studies assessed appropriately?
- Are the validity criteria reported?

Similarity of Groups and Treatments
- Are treatments similar enough to combine?
- Were reasons for any differences between individual studies explored?

Data Synthesis
- Were findings from individual studies combined appropriately?
- Are the methods used to combine studies reported?

Methods Documented
- Are review methods clearly reported?

Summary of Findings
- Is a summary of findings provided?
- Are specific directives for new research proposed?
- Were the conclusions supported by the reported data?

Review Question

Systematic reviews should address clearly defined questions. A focused question provides the review with direction, in that specific answers to the question are sought from the available literature. Usually the review question addresses the population of interest and its condition, the intervention, a comparison or control, and finally the outcome measure that will be used to determine effectiveness. These components of the question determine what type of studies will be required to provide the appropriate answers. For example, the review may only be interested in studies that included elderly people, or people with oral mucositis as a result of chemotherapy treatment. However, for some topics, such as cutting-edge developments, interventions with only limited research, or conceptually broad issues, the review may aim to summarise all available evidence. These reviews may involve more than one intervention, may involve all types of people, and may use a range of outcome measures to evaluate effectiveness. Such broader reviews commonly aim to summarise all current knowledge on the topic, and so provide important information on future research needs. Regardless of the scope of the review, all reviews should have a clearly documented focus, and this is best achieved through a review question. From this perspective, the first component of a critical appraisal of a systematic review involves asking:
- Is the review question clearly and explicitly stated?

Searching for the Research

A crucial part of any systematic review is the location of research addressing the topic of interest. The term systematic is used to emphasise the systematic approach to literature searching. The primary objective of the search is to locate as much of the completed research on the topic as possible, and to help this process a search strategy is developed.
Like all steps in the review process, the search strategy should be documented in sufficient detail to allow others to critique its quality. Systematic review searches usually include electronic databases such as MEDLINE, CINAHL, PsycLIT, Embase, the Cochrane Library and Current Contents. In addition, other more specialised databases may be searched depending on the review topic. The bibliographies and reference lists of all retrieved articles are searched to increase the likelihood of identifying all relevant studies. Key people and professional organisations may also be contacted to identify missed papers and unpublished or in-progress research. Questions to be asked when appraising the search strategy of a systematic review include:
- Were comprehensive search methods used to locate studies?
- Was a thorough search done of appropriate databases, and were other potentially important sources explored?

Inclusion Criteria

The inclusion criteria operationalise the review question, putting it into a practical format. These criteria are used to decide which studies should be included in the review and which should not. As they are developed and documented prior to the commencement of the review, they help reduce the risk of bias being introduced by the investigator during the selection process. Selection of studies for review is therefore based on the population, intervention, outcomes and research method, rather than on the results of the studies. As with primary research, there are boundaries to a review, and while the question defines the area of interest, it is the inclusion criteria that explicitly document the focus, nature and limits of the review. These criteria describe the population of interest, the intervention, a comparison or control, and the outcome measure. In addition, the inclusion criteria state the research design that the review aims to summarise, for example randomised controlled trials (RCTs).
These criteria are used to determine whether the population, intervention and outcome measures of a study are consistent with the focus of the systematic review. The main question to ask when appraising the inclusion criteria of a systematic review is:
- How were studies selected?
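The selection step described above, matching each study against predefined population, intervention, outcome and design criteria, can be sketched as a simple filter. All names and the example criteria below are hypothetical and chosen for illustration; they are not taken from any real review.

```python
from dataclasses import dataclass

@dataclass
class Study:
    """Minimal description of a candidate study (illustrative fields only)."""
    population: str
    intervention: str
    outcome: str
    design: str

# Inclusion criteria documented before the review begins, so that
# selection is based on the review question, not on study results.
CRITERIA = {
    "population": "adults receiving chemotherapy",
    "intervention": "oral cryotherapy",
    "outcome": "incidence of oral mucositis",
    "design": "randomised controlled trial",
}

def meets_inclusion_criteria(study: Study) -> bool:
    """A study is included only if every criterion matches."""
    return (study.population == CRITERIA["population"]
            and study.intervention == CRITERIA["intervention"]
            and study.outcome == CRITERIA["outcome"]
            and study.design == CRITERIA["design"])

candidate = Study("adults receiving chemotherapy", "oral cryotherapy",
                  "incidence of oral mucositis", "randomised controlled trial")
print(meets_inclusion_criteria(candidate))  # True
```

Exact string matching is a deliberate simplification: in a real review, deciding whether a study's population or intervention matches the criteria is a judgement call, usually made independently by two reviewers.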

Critical Appraisal of Studies

An important part of the systematic review process is the assessment of the validity of all identified studies. The rationale for this assessment is that by excluding lesser quality studies, the risk of error and bias in the findings of the review is lessened. Assessing the validity of research means determining whether the methods used during the study can be trusted to provide a genuine, accurate account of the intervention being evaluated. The appraisal does not address writing style or presentation; rather, it focuses on the details of the research design. For RCTs, critical appraisal focuses on the four general types of systematic error: selection bias, performance bias, attrition bias and detection bias.

Selection bias: relates to the method used to randomise subjects, and to the blinding or concealment of the allocation sequence until treatment is allocated. The use of randomisation and allocation concealment means this part of the study is free from any influence of the people involved in the process.

Performance bias: refers to differences in the care provided to study groups other than the intervention being evaluated, and is prevented by blinding, or concealment of the group to which participants are allocated, throughout the study period.

Attrition bias: refers to differences between treatment groups in the number of subjects who drop out or are lost from the study, as these losses have the potential to bias the results.

Detection bias: refers to differences in the assessment of outcomes, and is prevented by blinding assessors to the groups to which participants have been allocated.

The appraisal of the systematic review itself is very similar to the critical appraisal undertaken during the conduct of the review to assess the methodological quality of studies.
Questions to be asked when appraising how the reviewers evaluated the methodological quality of studies include:
- Was the validity of studies assessed appropriately?

Data Synthesis

The objective of a systematic review is to summarise the results from different studies to obtain an overall evaluation of the effectiveness of an intervention or treatment. Depending on the type of data and the quality of the studies, this is achieved by meta-analysis. Meta-analysis is the statistical analysis of the results from two or more individual studies. It provides a framework for a systematic review, in that similar measures from comparable studies are listed systematically and, when possible, the measures of the effect of an intervention are combined. Synthesis of results from different studies is achieved by converting individual results to a common scale or measure, then applying standard statistical analysis procedures. Meta-analysis is useful when many studies address the same issue, as it provides the means by which to combine their results. It is also useful when individual studies are too small, and so lack the power to detect treatment effects, as combining studies increases the sample size and therefore the power. Questions to be asked when appraising how the results of individual studies were synthesised during the systematic review include:
- How were the studies combined?
- Were findings combined appropriately?
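The combining step described above can be illustrated with one common approach, fixed-effect inverse-variance weighting: each study's effect estimate is weighted by the inverse of its variance, so larger, more precise studies contribute more to the pooled result. The effect estimates and standard errors below are invented for illustration, not data from any real review.

```python
import math

# Illustrative effect estimates (e.g. mean differences) and their
# standard errors from three hypothetical trials -- not real data.
effects = [0.30, 0.45, 0.25]
std_errors = [0.10, 0.15, 0.12]

# Fixed-effect (inverse-variance) pooling: weight = 1 / variance.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
```

Note that this fixed-effect model assumes all studies estimate the same underlying effect; when studies differ substantially, as the next section discusses, pooling in this way may not be appropriate at all.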

Similarity of Studies

Meta-analysis is not used when studies differ in their population, intervention, or how outcomes were measured. When the treatments evaluated in the individual studies are different, combining their results to obtain an average treatment effect will be meaningless. Similarly, there is little point in combining studies if they measured different outcomes or used different populations. Finally, when the findings of individual studies differ significantly, they should not be combined in a meta-analysis, because combining widely differing results to produce an average effect would fail to represent the great variation in the outcomes. Questions to be asked when assessing whether the individual studies included in the systematic review were similar enough to justify combining their results in a meta-analysis include:
- Were the populations of the different studies similar?
- Was the same intervention evaluated by the individual studies?
- Were the same outcomes used to determine the effectiveness of the intervention being evaluated?
- Were reasons for differences between studies explored?

Reporting and Recommendations

As systematic reviews are viewed as scientific endeavours, the methods used during the review should be reported in sufficient detail to allow replication of the review and critical appraisal of the processes employed. The normal conventions used in reporting the findings of any research activity also apply to systematic reviews. Areas that should be included in this description are the review question, the inclusion criteria, the search strategy, the methods used to synthesise individual studies, and information about the identified studies. Questions to be asked when appraising the reporting of the systematic review method include:
- Are review methods clearly documented?
- Is the review question clearly and explicitly stated?
- Was the search strategy reported?
- Were the inclusion criteria reported?
- Were the criteria for appraising studies reported?
- Were the methods used to combine studies reported?

The conclusions, recommendations and implications for research and clinical practice should be based on the findings of the review. Questions to be asked when appraising the findings and recommendations of a systematic review include:
- Is a summary of findings provided?
- Are specific directives for new research proposed?
- Were the recommendations supported by the reported data?

Table 2: Critical Appraisal of a Systematic Review

Review Question
- Is the review question clearly and explicitly stated?

Search Strategy
- Were comprehensive search methods used to locate studies?
- Was a thorough search done of appropriate databases, and were other potentially important sources explored?

Inclusion Criteria
- How were studies selected?

Critical Appraisal
- Was the validity of studies assessed appropriately?

Data Synthesis
- How were the studies combined?
- Were findings combined appropriately?

Similarity of Studies
- Were the populations of the different studies similar?
- Was the same intervention evaluated by the individual studies?
- Were the same outcomes used to determine the effectiveness of the intervention being evaluated?
- Were reasons for differences between studies explored?

Reporting of Findings
- Are review methods clearly documented?
- Is the review question clearly and explicitly stated?
- Was the search strategy reported?
- Were the inclusion criteria reported?
- Were the criteria for appraising studies reported?
- Were the methods used to combine studies reported?

Conclusions & Recommendations
- Is a summary of findings provided?
- Are specific directives for new research proposed?
- Were the recommendations supported by the reported data?

Acknowledgments

This publication was developed by Mr David Evans, The Joanna Briggs Institute for Evidence Based Nursing and Midwifery, Adelaide, South Australia, and Professor Anne Chang, The Hong Kong Centre for Evidence Based Nursing and Midwifery (a Collaborating Centre of The Joanna Briggs Institute). The Joanna Briggs Institute for Evidence Based Nursing and Midwifery, Margaret Graham Building, Royal Adelaide Hospital, North Terrace, South Australia, 5000.
http://www.joannabriggs.edu.au ph: (08) 8303 4880, fax: (08) 8303 4881

The series Best Practice is disseminated collaboratively by The Joanna Briggs Institute for Evidence Based Nursing and Midwifery.

The procedures described in Best Practice must only be used by people who have appropriate expertise in the field to which the procedure relates. The applicability of any information must be established before relying on it. While care has been taken to ensure that this edition of Best Practice summarises available research and expert consensus, any loss, damage, cost, expense or liability suffered or incurred as a result of reliance on these procedures (whether arising in contract, negligence or otherwise) is, to the extent permitted by law, excluded.

This sheet should be cited as: JBIEBNM, 2000, Appraising Systematic Reviews, Changing Practice Sup. 1, [Online, accessed date] URL: http://www.joannabriggs.edu.au/cp1.pdf

References
1. NHMRC. A guide to the development, implementation and evaluation of clinical practice guidelines. Canberra: NHMRC, 1999.
2. Greenhalgh T. How to read a paper: Papers that summarise other papers (systematic reviews and meta-analyses). British Medical Journal 1997;315:672-5.
3. Jadad AR, Moher D, Klassen TP. Guides for reading and interpreting systematic reviews: II. How did the authors find the studies and assess their quality? Archives of Pediatric & Adolescent Medicine 1998;152(8):812-7.
4. Oxman AD, Guyatt GH. The science of reviewing research. Annals of the New York Academy of Sciences 1993;703:125-34.
5. Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. Journal of Clinical Epidemiology 1991;44(11):1271-8.
6. Mulrow CD. The medical review article: state of the science. Annals of Internal Medicine 1987;106:485-8.
7. Light RJ, Pillemer DB. Summing Up: The Science of Reviewing Research. Cambridge: Harvard University Press, 1984.